Patent 11943578
DETAILED DESCRIPTION OF THE INVENTION

In the following, a detailed description of an example embodiment of the invention is given. It is, however, understood that the principles of the invention could be embodied in other ways.

With reference to FIG. 1, there is shown a plot of loudspeaker driver diaphragm excursion as a function of frequency for a loudspeaker driver mounted in a closed box 2 and a ported box 1, respectively, necessary for the given loudspeaker to generate a sound pressure level of 88 dB SPL at a distance of 1 metre from the loudspeaker box.

With reference to FIG. 2, there is shown a plot of the obtainable frequency response at a sound pressure level of 94 dB SPL for a loudspeaker driver mounted in a closed box 5 and a ported box 4, respectively, with a driver diaphragm excursion limit of 6 mm.

With reference to FIG. 3, there is shown a plot of the obtainable frequency response at a sound pressure level of 104 dB SPL for a loudspeaker driver mounted in a closed box 9 and a ported box 10, respectively, with a driver diaphragm excursion limit of 6 mm.

With reference to FIG. 4, there is shown a plot of the obtainable frequency response at a sound pressure level of 114 dB SPL for a loudspeaker driver mounted in a closed box 13 and a ported box 14, respectively, with a driver diaphragm excursion limit of 6 mm.

With reference to FIG. 5, there is shown the acoustic response of the loudspeaker driver mounted in the closed box with 16 and without 15 equalization, and the corresponding equalizer frequency response 17.

With reference to FIG. 6, there is shown the acoustic response of the loudspeaker driver mounted in the ported box with 18 and without 19 equalization, and the corresponding equalizer frequency response 20.

With reference to FIG. 7, there is shown a schematic block diagram illustrating the signal processing required in order to take account of the state of the enclosure 28, i.e. whether the port 30, 34 is closed or open, as schematically illustrated by the closing means 32. In order to accommodate the acoustic system, the acoustic change of the system should be accompanied by a change in the signal processing feeding the amplifier of the driver. The signal processing change comprises different equalizations and protection limiter settings. The signal processing comprises first and second equalizers 22, 26 that receive an input signal 21 and which are configured to provide low frequency equalization. These equalizers 22 and 26 are linear filters which equalize the low frequency response to obtain the desired low frequency roll-off. The desired low frequency roll-off is different depending on whether the enclosure is closed or ported. If the port is open (a ported enclosure), the switch 25 is in position P as shown in FIG. 7, whereas if the port is closed (a closed enclosure), the switch 25 is in position C. The port velocity limiter 23 is only present in the signal processing path in the case where the enclosure is ported, and limits the air velocity in the port in order to keep port noise at a minimum. The displacement limiters 24, 27 limit the excursion of the loudspeaker diaphragm 29 to avoid damage to the diaphragm, its suspension and the loudspeaker driver, and to avoid jarring sounds from the loudspeaker. In an embodiment, the limiters 23, 24, 27 are implemented by level adjustments, which are controlled by the input level at 21. Thereby the limiters 23, 24, 27 are designed such that the level of the signal provided to the loudspeaker driver is proportional to the level of the input signal at 21 until a threshold value is reached. Above this threshold value, the level of the signal provided to the loudspeaker driver is maintained substantially constant even if the level of the input signal increases, for instance by the provision of suitable AGC or compressor means.
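The threshold behaviour just described amounts to a level limiter with a proportional region and a constant region. A minimal sketch of such a gain law in Python follows; it is illustrative only, not the patent's implementation, and the threshold values and function names are assumptions:

```python
import numpy as np

def limiter_gain(input_level: float, threshold: float) -> float:
    """Level adjustment: output tracks the input level up to the
    threshold, above which the output level is held substantially constant."""
    if input_level <= threshold:
        return 1.0                      # proportional region
    return threshold / input_level      # limiting region

def process(samples: np.ndarray, ported: bool) -> np.ndarray:
    """Toy signal path for switch 25: the ported branch (position P) also
    applies a port velocity limit; the closed branch (position C) applies
    only the displacement limit. Threshold values are made up."""
    level = float(np.max(np.abs(samples)))   # crude input-level estimate
    threshold = 0.3                          # displacement limiters 24, 27
    if ported:
        threshold = min(threshold, 0.25)     # port velocity limiter 23
    return samples * limiter_gain(level, threshold)

signal = 0.5 * np.sin(2 * np.pi * 40 * np.linspace(0, 1, 48000, endpoint=False))
out = process(signal, ported=True)
assert float(np.max(np.abs(out))) <= 0.25 + 1e-9
```

The equalizers 22, 26 are omitted from the sketch; its only point is the proportional-then-constant level behaviour and the branch selection by enclosure state.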
The following figures show various embodiments of the channel entity, i.e. the sound channel leading from the interior space of the loudspeaker enclosure via the port opening to the surroundings, and the opening/closing mechanism provided in the channel. Throughout, sound entrance from the interior space of the enclosure to the channel entity is indicated by an arrow designated "In" and sound exit from the port opening is indicated by an arrow designated "Out".

With reference to FIGS. 8(a) and (b), there is shown a schematic representation of an embodiment of an opening/closing mechanism 35 for application in an embodiment of the present invention. The port region of the channel leading from the interior of the enclosure to the surroundings is designated by 36 and the entrance to the channel from the interior of the enclosure is designated by 39. In the channel 37 there is provided an opening/closing mechanism formed as a cylindrical body 40 mounted for rotation about the longitudinal axis C of the cylindrical body 40. Through the cylindrical body 40 there extends a channel portion 45 bounded by wall portions 43 and 44 that, in the shown embodiment, provides a continuation of the interior wall portions 37″ and 37′, respectively, of the channel 36. The curvatures of the interior surface of the body portion 41 and the interior surface of the body portion 42, respectively, correspond to the outer circumferential surface of the cylindrical body 40, whereby the cylindrical body 40 can rotate (as indicated by arrow R) within these body portions of the channel 37. When the cylindrical body 40 is rotated as indicated by arrow R, it is brought to the state shown in FIG. 8(b), in which it tightly closes the channel 37.

With reference to FIG. 9, there is shown a schematic representation of a practical implementation of a port channel unit comprising the opening/closing mechanism illustrated in FIGS. 8(a) and (b). FIG. 9 shows the port region 36 of the channel and the entrance region 39 connecting the channel with the interior space of the enclosure. The cylindrical body 40 is rotated by means of an actuator or motor 46 via a transmission 47.

With reference to FIG. 10, there is shown a schematic representation of a dual channel embodiment of the invention comprising two separate channel portions 49, 50 with sound inlets 51 and 52, respectively, configured to be in acoustic communication with the interior space of the loudspeaker enclosure. The two channel portions 49, 50 together provide the port tuning and are both in acoustic communication with a common port region 48 (alternatively designated by reference numeral 36 in FIG. 9), wherein the opening/closing mechanism illustrated in FIGS. 8(a), 8(b) and 9 is inserted between the channels 49, 50 and the common port region 48 (36).

With reference to FIGS. 11(a) and (b), there are shown images of the port channel entity shown in FIG. 9, comprising the sound inlet portion 39, the port region 55 (alternatively designated by reference numerals 36 and 48 in FIGS. 8 and 9, respectively), and the cylindrical body 40 of the opening/closing mechanism illustrated in FIGS. 8(a) and 8(b), mounted in a loudspeaker enclosure 53 with an internal space with which the sound inlet portion 39 is in acoustic communication.
The opening of the port region 55 (36, 48) is provided in an extension 54 to the loudspeaker enclosure 53 in which the opening 56 for the loudspeaker driver is provided.

With reference to FIGS. 12(a) through (e), there are shown schematic representations of alternative implementations of opening/closing mechanisms for application in embodiments of the present invention.

FIG. 12(a) illustrates a first alternative opening/closing mechanism provided in a sound channel with a sound inlet 39 and a sound outlet (port region) 36. The opening/closing mechanism comprises a rotatable plate member 57, the length of which is chosen such that it blocks sound passage through the channel in the closed state, as indicated by reference numeral 58′, and opens the sound channel in the open state, as indicated by 58″. The rotatable plate member 57 is coupled to a controllable actuator (not shown in the figure).

FIG. 12(b) illustrates a second alternative opening/closing mechanism comprising a plate member 59 connected to a wall portion of the channel by a hinge member 61 such that the plate member can rotate about the hinge member 61 between an open state indicated by 60′ and a closed state indicated by 60″.

FIGS. 12(c1) and 12(c2) illustrate two different configurations of a third alternative opening/closing mechanism designed to be provided in a dual channel embodiment of the invention. With reference to FIG. 12(c1), there is shown a schematic representation of the channel and port seen from above (as opposed to the embodiments shown in FIGS. 12(a) and 12(b), in which the channel and port are seen from the side). The port region (corresponding to 36 in FIG. 12(a)) is designated by 64, and two branches 62, 63 of the channel lead from the enclosure to the port 64 via the opening/closing mechanism 65, 66, 67, 68. Two blocking members 65 and 66, respectively, are mounted for rotation about an axle, such that they can be brought from the closed position (65, 66) to the open position as indicated by 68 and 67, respectively, in which position the two members 65 and 66 extend in opposite directions as shown in the figure. With reference to FIG. 12(c2), there is shown the same arrangement in its second configuration, in which the two blocking members 65 and 66 can be brought from the closed position (65, 66) to the open position as indicated by 67 and 68, respectively, in which position the two members 65 and 66 extend parallel to each other as shown in the figure.

FIG. 12(d) illustrates a fourth alternative opening/closing mechanism in which a plate member 69 is mounted for introduction into the channel portion in a direction substantially perpendicular to the sound channel. The plate member 69 is operated by a controllable actuator 70.

FIG. 12(e) illustrates a fifth alternative opening/closing mechanism inserted as an integral part of the sound channel 71 between the sound inlet 73 and the sound outlet (port region) 72.
The opening/closing member comprises a flexible tubular member 74 forming a tight seal with the respective channel portions and being dimensioned such that a closing mechanism 75 can bring the flexible tubular member from a state in which its diameter is substantially equal to the diameter of the sound channel at the portion thereof in which the flexible tubular member 74 is provided, to a state in which the flexible tubular member closes the passage through the channel, as indicated by 74′ in the figure.

In all of the described embodiments of opening/closing mechanisms, as well as in any other opening/closing mechanism that may be used in the present invention, it is important that a tight blockage of the sound channel is provided in the closed state. The respective opening/closing mechanisms may therefore be provided with suitable means, such as rubber strips, to ensure that a sufficiently tight seal is indeed achieved in the closed state.

Although the invention has been explained in relation to the embodiments described above, it is to be understood that many other possible modifications and variations can be made without departing from the scope of the present invention.
Patent 11943579
DETAILED DESCRIPTION

Herein described are systems and methods for a coaxial loudspeaker configured to reproduce sounds in the low, mid-, and high frequency audio ranges using three diaphragms, three voice coils, and two magnets. A perspective view of a first embodiment of the coaxial loudspeaker is shown in FIG. 1. An expanded view of elements of the first embodiment of the coaxial loudspeaker is shown in FIG. 5. Perspective views of a second embodiment of the coaxial loudspeaker are shown in FIGS. 6-8. A first diaphragm and a second diaphragm may be configured to reproduce sounds in the high frequency audio range and a third diaphragm may be configured to reproduce sounds in the low-mid frequency audio range. For example, the first diaphragm and the second diaphragm may be annular diaphragms and the third diaphragm may be a cone diaphragm. Each of the three diaphragms is coupled to a respective voice coil, which may be positioned along a central axis (e.g., a central linear axis) of the coaxial loudspeaker.

A cross-sectional view of the first embodiment of the coaxial loudspeaker is shown in FIG. 2, including positioning of the two magnets, three diaphragms, and three voice coils. A first voice coil may be positioned in proximity to a first magnet and coupled to the first diaphragm. A second voice coil may be positioned in proximity to a second magnet and coupled to the second diaphragm. A third voice coil may also be positioned in proximity to the second magnet and be coupled to the third diaphragm. The first magnet may generate a first magnetic field and the second magnet may generate a second magnetic field, where each of the first magnetic field and the second magnetic field may be permanent magnetic fields, in some embodiments.

FIG. 9 illustrates a method for the coaxial loudspeaker of FIGS. 1-8. Electric signals may be provided to connectors of the loudspeaker, which may energize each of the first voice coil, the second voice coil, and the third voice coil, creating an induced magnetic field at each of the three voice coils. For example, when energized, the first voice coil may have a first induced magnetic field, the second voice coil may have a second induced magnetic field, and the third voice coil may have a third induced magnetic field. Each of the three voice coils may be positioned in such a way, relative to a respective magnet of the two magnets, that the induced magnetic field interacts with the permanent magnetic field. FIGS. 3A and 3B show positioning of the three voice coils with respect to the permanent magnetic fields for the first embodiment of the coaxial loudspeaker. For example, the first induced magnetic field may interact with the first permanent magnetic field, the second induced magnetic field may interact with the second permanent magnetic field, and the third induced magnetic field may interact with the second permanent magnetic field. FIG. 4 shows a graph illustrating magnetic flux density for different arc lengths in voice coil gaps (e.g., regions where an induced magnetic field interacts with a respective permanent magnetic field), including a first voice coil gap of the first voice coil, a second voice coil gap of the second voice coil, and a third voice coil gap of the third voice coil. Interaction of the permanent magnetic field with the induced magnetic field may cause motion (e.g., oscillation) of the respective voice coil along the central axis and, in turn, oscillation of the coupled diaphragm along the central axis.
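The transduction just described is the standard Lorentz-force mechanism: the axial force on a voice coil is F = B·l·i, with B the gap flux density, l the length of wire inside the field, and i the signal current. As general background, not a formula stated in the patent, and with purely illustrative numbers:

```python
def voice_coil_force(flux_density_T: float, wire_length_m: float, current_A: float) -> float:
    """Axial Lorentz force F = B * l * i on a voice coil, in newtons."""
    return flux_density_T * wire_length_m * current_A

# Illustrative values only: ~1.7 T gap flux density (cf. the graph of FIG. 4),
# 3 m of winding inside the gap, 0.5 A of signal current.
print(voice_coil_force(1.7, 3.0, 0.5))  # 2.55 N along the central axis
```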
Oscillation of the coupled diaphragm may convert the electrical signal into acoustic signals, which may be interpreted as audible sound by a listener. As described above, a configuration of a diaphragm may dictate a reproducible frequency range. For example, oscillation of the first diaphragm and the second diaphragm may produce acoustic signals in the high range and oscillation of the third diaphragm may produce acoustic signals in the low-mid range. In this way, two magnets, three diaphragms, and three voice coils may be used to reproduce an acoustic signal range including signals in the low, mid-, and high ranges. The herein described coaxial loudspeaker may therefore have a less complex configuration and be less costly to produce, compared to coaxial loudspeakers which employ three magnets to reproduce the acoustic signals in the low, mid-, and high ranges.

FIGS. 1-3B and 5 are drawn approximately to scale; however, other relative component dimensions may be used, in other embodiments. An axis system 150 is provided in FIGS. 1-3B and 6-8 for reference. The y-axis may be a vertical axis (e.g., parallel to a gravitational axis), the x-axis may be a lateral axis (e.g., horizontal axis), and the z-axis may be a longitudinal axis, in one example. However, the axes may have other orientations, in other examples.

FIG. 1 illustrates a 90-degree cutaway view of a coaxial loudspeaker 100 including a dual compression driver and a direct-radiating cone diaphragm configured to reproduce a high frequency spectrum and a low-mid frequency spectrum, respectively. Acoustic signals in the high frequency spectrum may be herein referred to as being in the high frequency range and/or as being a high frequency sound. Acoustic signals in the low-mid frequency spectrum may be herein referred to as being in the low-mid frequency range and/or as being a low-mid frequency sound. Various elements of the coaxial loudspeaker 100 may be disposed generally about a central axis 102 (e.g., a central linear axis). For descriptive purposes, some components are described as being "front" components while other components are described as being "rear" components. Relative to rear components, front components are generally closer to an output side 152 of the coaxial loudspeaker 100 at which sound waves emanate. It will be understood, however, that the terms "front" and "rear" in this context are not intended to limit the coaxial loudspeaker 100 to any particular orientation in space.

The coaxial loudspeaker 100 is herein described in terms of a low-mid section 120, which may reproduce the low-mid frequency spectrum, and a high frequency section 122, which may reproduce the high frequency spectrum. A dual compression driver 124 may be configured with a first driver assembly and a second driver assembly for reproducing sounds in the high frequency spectrum. For example, the dual compression driver 124 may be provided by combining two single compression drivers into a single unit that includes two magnets, two diaphragms, and two voice coils with a single exit for sound output (e.g., a phasing plug 108). The phasing plug 108 may be formed of a front phasing plug and a rear phasing plug, as further described herein. The dual compression driver 124 may span the low-mid section 120 and the high frequency section 122, where the first driver assembly is positioned in the high frequency section 122 and the second driver assembly is positioned at an interface between the low-mid section 120 and the high frequency section 122.
The high frequency section 122 includes the first driver assembly, which may include a first magnet 144, a first diaphragm 140, and a first voice coil 142 coupled to the first diaphragm 140. The first diaphragm 140 may be an annular diaphragm configured to reproduce sound in the high frequency range. The first magnet 144 may be positioned between a rear top plate 136 and a rear back plate 138, where the first magnet 144 and the rear top plate 136 are each configured as annular rings. The rear back plate 138 may have a pole piece 138a extending along the central axis 102, as further described with respect to FIG. 2. The phasing plug 108 may extend through a center of the first magnet 144 and the rear top plate 136 from the pole piece 138a of the rear back plate 138.

The second driver assembly may span the low-mid section 120 and the high frequency section 122. A second magnet 134 of the second driver assembly may be part of both the low-mid section 120 and the high frequency section 122 and may be positioned between a front top plate 126 (positioned in the low-mid section 120) and a front back plate 128 (positioned in the high frequency section 122). Each of the front top plate 126, the front back plate 128, and the second magnet 134 may be configured as annular rings. The phasing plug 108 may further extend through a center of the second magnet 134, the front top plate 126, and the front back plate 128. The second driver assembly may further include a second diaphragm 130 coupled to a second voice coil 132, both of which are positioned in the high frequency section 122. The second diaphragm 130 may be an annular diaphragm configured to produce sound waves in the high frequency spectrum. In some embodiments, the first diaphragm 140 and the second diaphragm 130 may be equivalent. In other embodiments, the first diaphragm 140 and the second diaphragm 130 may be differently configured to reproduce sound in the high frequency spectrum. Further details of elements of the first driver assembly and the second driver assembly are described with respect to FIG. 2.

As shown in FIG. 1 and further elaborated on with respect to FIGS. 2-3B, the low-mid section 120 may be based on a direct-radiating cone diaphragm transducer, herein referred to as a third driver assembly. The third driver assembly may include a third diaphragm 110 and a third voice coil 112 coupled to the third diaphragm 110. In the embodiment of FIG. 1, the third diaphragm 110 may be coupled to the third voice coil 112 via a voice coil former 118. For example, the third voice coil 112 may be coupled to the voice coil former 118 such that, when a magnetic field is induced on the third voice coil 112, the third voice coil 112 and the voice coil former 118 oscillate axially along the central axis 102 and consequently oscillate the third diaphragm 110. The voice coil former 118 may extend axially (e.g., along the central axis 102) into the second driver assembly (e.g., in the low-mid section 120) and be positioned in a central space of the front top plate 126. The third diaphragm 110 may be a direct-radiating cone diaphragm configured to reproduce sound in the low-mid frequency range. A frame 104 may support the third diaphragm 110 and the third diaphragm 110 may be held in position by a spider 113 and a surround 106. Further details regarding configurations of the first driver assembly, the second driver assembly, and the third driver assembly are described with respect to FIGS. 2-4. The first driver assembly, the second driver assembly, and the third driver assembly may be axially aligned along the central axis 102.
The dual compression driver 124 may be coupled to the third driver assembly by any suitable means. For example, a front top plate 126 of the low-mid section 120 may be coupled to the frame 104 of the third driver assembly via a bolt-on connection, via a screw-on connection, and so on. When coupled to the dual compression driver 124, the third diaphragm 110 may functionally perform as an axisymmetric horn for the dual compression driver 124. In this way, the coaxial loudspeaker described with respect to FIG. 1 and further described with respect to FIGS. 2-4 may reproduce both the high frequency spectrum and the low-mid frequency spectrum using two magnets (e.g., the first magnet 144 and the second magnet 134), three diaphragms (e.g., the first diaphragm 140, the second diaphragm 130, and the third diaphragm 110), and three voice coils (e.g., the first voice coil 142, the second voice coil 132, and the third voice coil 112). This configuration may reduce a complexity, a footprint, and a cost of the coaxial loudspeaker 100 compared to a conventional coaxial loudspeaker which may use three magnets to reproduce both the high frequency spectrum and the low-mid frequency spectrum, and/or compared to non-coaxial loudspeakers which may use a compression driver to reproduce the high frequency spectrum and a direct-radiating cone diaphragm transducer to reproduce the low-mid frequency spectrum, where the compression driver and the direct-radiating cone diaphragm transducer may not be in axial alignment.

Briefly turning to FIG. 5, an expanded view 500 of elements of the coaxial loudspeaker 100 of FIG. 1 is shown. Elements shown in FIG. 1 as well as in other figures described herein are similarly numbered in FIG. 5. As briefly described with respect to FIG. 1, the phasing plug 108 may include a front phasing plug 502 and a rear phasing plug 504. The rear phasing plug 504 may include a central hub which extends through an annular center ring of the front phasing plug 502. As described with respect to FIGS. 2-3B, the second voice coil 132 and the third voice coil 112 may annularly surround the annular center ring of the front phasing plug 502. Additional elements of the coaxial loudspeaker 100 are described with respect to FIGS. 2-3B.

FIG. 2 shows a cross-sectional view 200 of the coaxial loudspeaker 100 of FIG. 1. As described with respect to FIG. 1, the coaxial loudspeaker 100 includes the low-mid section 120 and the high frequency section 122. Elements of FIG. 1 which are included in FIG. 2 are equivalently numbered and may not be reintroduced for brevity. Each of the first magnet 144 and the second magnet 134 may be permanent magnets which generate a permanent magnetic field. For example, the first magnet 144 may generate a first permanent magnetic field and the second magnet 134 may generate a second permanent magnetic field. Each of the first permanent magnetic field and the second permanent magnetic field may be radially oriented. Permanent magnetic fields generated by the first magnet 144 and the second magnet 134 are further described with respect to FIG. 3B.

The coaxial loudspeaker 100 may receive an input of electrical signals at connections, such as contacts 222 of the high frequency section 122 of the dual compression driver 124, as shown in FIG. 2. In some embodiments, the coaxial loudspeaker 100 may include at least two contacts 222. Each contact 222 may be coupled to at least one voice coil of the first voice coil 142, the second voice coil 132, and the third voice coil 112, such that each voice coil may receive an electrical signal.
Turning to FIG. 3A, a detailed view 300 of the dual compression driver 124 of the coaxial loudspeaker 100 is shown. Elements of FIGS. 1-2 which are shown in FIG. 3A are equivalently numbered and may not be reintroduced. The detailed view 300 shows positioning of each of the first voice coil 142, the second voice coil 132, and the third voice coil 112 in a respective voice coil gap. When a voice coil of the first voice coil 142, the second voice coil 132, and the third voice coil 112 is provided with an electrical signal, a magnetic field may be induced at the respective voice coil. Interaction of an induced magnetic field with an adjacent permanent magnetic field of a respective magnet may cause motion (e.g., oscillation) of the voice coil in the respective voice coil gap along the central axis 102, thus oscillating a coupled diaphragm and producing sound waves (e.g., acoustic signals). The detailed view 300 shows positioning of the first voice coil 142 in a first voice coil gap 340, the second voice coil 132 in a second voice coil gap 330, and the third voice coil 112 in a third voice coil gap 310.

The first voice coil gap 340 may be formed as a space between the rear top plate 136 and the pole piece 138a of the rear back plate 138, with the first voice coil 142 positioned therein. The first voice coil gap 340 is therefore positioned above the first magnet 144 (e.g., between the first magnet 144 and the output side 152 of the coaxial loudspeaker 100). The first voice coil 142 may be formed as windings of a conductive material, such as copper wire, around the pole piece 138a. Electrical signal may be provided to the first voice coil 142 via a coupling of at least one contact 222. When the first voice coil 142 receives an electrical signal, a first electromagnetic field (EMF) may be induced at the first voice coil 142. The first voice coil gap 340, and therefore the first voice coil 142, may be positioned in proximity to the first magnet 144, such that a first permanent magnetic field of the first magnet 144 interacts with the first induced magnetic field (e.g., the first EMF) of the first voice coil 142.

Briefly turning to FIG. 3B, a detailed cross-sectional view 350 of the dual compression driver 124 is shown. The first permanent magnetic field may be radially oriented and is represented in part by a first plurality of arrows 312. As shown by a first arrow 312a, the first permanent magnetic field passes through the first voice coil gap 340 and therefore interacts with the first induced magnetic field of the first voice coil 142 positioned therein. The first magnet 144 may be configured such that the first permanent magnetic field may extend into the first voice coil gap 340 and may not extend into the second voice coil gap 330 and/or the third voice coil gap 310. For example, the first magnet 144 may have a first thickness 344 (e.g., along the y-axis, with respect to the axis system 150), which may allow the first permanent magnetic field to extend axially and radially into the first voice coil gap 340 and not extend a further axial distance into at least one of the second voice coil gap 330 and the third voice coil gap 310. Additionally or alternatively, the first magnet 144 may be formed of a material which may provide the first permanent magnetic field which extends axially and radially into the first voice coil gap 340 and may not extend a further axial distance (e.g., towards the front of the dual compression driver 124) into the second voice coil gap 330 and/or the third voice coil gap 310.
Due to interaction of the first permanent magnetic field of the first magnet 144 and the first induced magnetic field of the first voice coil 142, the first voice coil 142 may oscillate axially along the pole piece 138a (e.g., along the central axis 102) within the first voice coil gap 340. In other words, the pole piece 138a may be stationary and the first voice coil 142 may oscillate axially along an exterior of the pole piece 138a (e.g., axial motion may be induced by interaction of the first permanent magnetic field and the first induced magnetic field). As described above, the first voice coil 142 may be coupled to the first diaphragm 140, which is configured to reproduce sounds in the high frequency range. Oscillation of the first voice coil 142 may therefore result in oscillation of the first diaphragm 140.

Returning to FIG. 3A, the dual compression driver 124 further comprises the second driver assembly, which spans the low-mid section 120 and the high frequency section 122. The second voice coil gap 330 may be formed as a space between the front back plate 128 and the phasing plug 108 (e.g., the front phasing plug 502), with the second voice coil 132 positioned therein. The second voice coil 132 may be formed as windings of a conductive material, such as copper wire, and may annularly surround a voice coil former which annularly surrounds the front phasing plug 502. Electrical signal may be provided to the second voice coil 132 via a coupling of at least one contact 222. When the second voice coil 132 receives an electrical signal, a second electromagnetic field may be induced at the second voice coil 132. The second voice coil gap 330, and therefore the second voice coil 132, may be positioned in proximity to the second magnet 134, such that a second permanent magnetic field of the second magnet 134 interacts with the second induced magnetic field (e.g., the second EMF) of the second voice coil 132.

The third voice coil 112 and the voice coil former 118 of the third driver assembly may extend into a central region of the front top plate 126, as described above. The third voice coil gap 310 may be formed as a space between the front top plate 126 and the front phasing plug 502, with the third voice coil 112 and the voice coil former 118 positioned therein. The third voice coil 112 may be formed as windings of a conductive material, such as copper wire, and may annularly surround the voice coil former 118, which annularly surrounds the phasing plug 108. Electrical signal may be provided to the third voice coil 112 via a coupling of at least one contact 222. When the third voice coil 112 receives an electrical signal, a third electromagnetic field may be induced at the third voice coil 112. The third voice coil gap 310, and therefore the third voice coil 112, may be positioned in proximity to the second magnet 134, such that the second permanent magnetic field of the second magnet 134 interacts with the third induced magnetic field (e.g., the third EMF) of the third voice coil 112. As described herein, the third voice coil gap 310 is positioned above a gap between an interior of the second magnet 134 (e.g., a central space of the annularly shaped second magnet 134) and the phasing plug 108 (e.g., between the second magnet 134 and the output side 152 of the coaxial loudspeaker 100), where the phasing plug 108 is positioned in the central space of the annularly shaped second magnet 134.
The second voice coil gap 330 is positioned below the second magnet 134 (e.g., where the second magnet 134 is positioned axially between the second voice coil gap 330 and the output side 152 of the coaxial loudspeaker 100) and radially closer to the central axis 102 than the third voice coil gap 310. As described with respect to FIG. 3B, both the third voice coil gap 310 and the second voice coil gap 330 are positioned such that the second permanent magnetic field may interact with respective magnetic fields induced at each of the second voice coil 132 and the third voice coil 112 when electrical signals are provided thereto.

Returning to FIG. 3B, the detailed cross-sectional view 350 of the dual compression driver 124 is shown. The second permanent magnetic field may be radially oriented and is represented in part by a second plurality of arrows 314. As shown by a second arrow 314a, the second permanent magnetic field passes through the second voice coil gap 330 and therefore interacts with the second induced magnetic field of the second voice coil 132 positioned in the second voice coil gap 330. As shown by a third arrow 314b, the second permanent magnetic field further passes through the third voice coil gap 310 and interacts with the third induced magnetic field of the third voice coil 112 positioned in the third voice coil gap 310. The second magnet 134 may be configured such that the second permanent magnetic field may extend into the second voice coil gap 330 and the third voice coil gap 310 and may not extend into the first voice coil gap 340. For example, the second magnet 134 may have a second thickness 334, which may allow the second permanent magnetic field to extend axially and radially into the second voice coil gap 330 and the third voice coil gap 310, and not extend a further axial distance (e.g., towards the rear of the dual compression driver 124) into the first voice coil gap 340. Additionally or alternatively, the second magnet 134 may be formed of a material which may provide the second permanent magnetic field which extends axially and radially into the second voice coil gap 330 and the third voice coil gap 310, and may not extend a further distance axially into the first voice coil gap 340.

As described above, the first permanent magnetic field may be smaller, or cover a smaller area, than the second permanent magnetic field. For example, the second magnet 134 may be stronger than the first magnet 144. The first magnet 144 and the second magnet 134 may be formed of different materials, such that the second magnet 134 may produce a stronger magnetic field than the first magnet 144. Additionally or alternatively, the first magnet 144 and the second magnet 134 may be formed of the same material. In the embodiment shown in FIG. 3B, the first permanent magnetic field may have a first direction and the second permanent magnetic field may have a second direction. The first direction and the second direction may be opposite each other. For example, the first permanent magnetic field may be polarized in a counterclockwise direction, and the second permanent magnetic field may be polarized in a clockwise direction. In other embodiments, the first permanent magnetic field may be polarized in the clockwise direction and the second permanent magnetic field may be polarized in the counterclockwise direction. In further embodiments, the first permanent magnetic field and the second permanent magnetic field may be polarized in the same direction, either clockwise or counterclockwise.
Positioning the second magnet 134 in proximity to the second voice coil gap 330 and the third voice coil gap 310, such that the second permanent magnetic field extends into both the second voice coil gap 330 and the third voice coil gap 310 to interact with the second induced magnetic field and the third induced magnetic field, respectively, allows a single magnet (e.g., the second magnet 134) to be used for reproduction of sound in the high frequency and low-mid frequency audio ranges. The first magnet 144 is placed in proximity to the first voice coil gap 340 such that the first permanent magnetic field extends into the first voice coil gap 340 and not into the second voice coil gap 330 or the third voice coil gap 310. The first permanent magnetic field thus interacts with the first induced magnetic field to reproduce sound in the high frequency range. The acoustic signals generated by the first diaphragm 140 and the second diaphragm 130 propagate throughout the interior of the dual compression driver 124 and exit the coaxial loudspeaker 100 guided by the phasing plug 108 and the third diaphragm 110. In this way, low-mid and high frequency sound may propagate from the coaxial loudspeaker 100 in a same, first direction along the central axis 102.

Using the dual compression driver 124, and therefore the first driver assembly and the second driver assembly, to reproduce sound in the high frequency range may provide increased power handling, lower thermal compression, a smoother frequency response, and decreased non-linear distortion and sub-harmonics, compared to a conventional single compression driver. Compared to other coaxial loudspeaker embodiments which use three magnets (e.g., two magnets of a dual compression driver and a third magnet of a direct-radiating cone diaphragm-based transducer) to reproduce sound in the high frequency and low-mid frequency audio ranges, the coaxial loudspeaker 100 described herein may have a less complex configuration and be less costly, as the coaxial loudspeaker 100 uses two magnets (e.g., fewer components) to produce sounds in the low, mid-, and high frequency ranges.

Turning now to FIG. 4, a graph 400 is shown which compares the magnetic flux density norm in each of the first voice coil gap 340, the second voice coil gap 330, and the third voice coil gap 310. Arc length in millimeters (mm) is shown along the abscissa and magnetic flux density norm in tesla (T) is shown along the ordinate. As described above, the first permanent magnetic field generated by the first magnet 144 interacts with the first induced magnetic field of the first voice coil 142 in the first voice coil gap 340, the second permanent magnetic field generated by the second magnet 134 interacts with the second induced magnetic field of the second voice coil 132 in the second voice coil gap 330, and the second permanent magnetic field further interacts with the third induced magnetic field of the third voice coil 112 in the third voice coil gap 310. Each of the first permanent magnetic field and the second permanent magnetic field may have different strengths, which may be represented in FIG. 3B as an axial and radial range of the permanent magnetic fields (e.g., of the first plurality of arrows 312 and the second plurality of arrows 314). Interaction of the first permanent magnetic field and the second permanent magnetic field with respective induced magnetic fields may be observed as magnetic flux density, as described herein.
A first plot 440 illustrates the magnetic flux density of the first voice coil gap 340, a second plot 430 illustrates the magnetic flux density of the second voice coil gap 330, and a third plot 410 illustrates the magnetic flux density of the third voice coil gap 310. The magnetic flux densities of the first voice coil gap 340 (e.g., the first plot 440) and the second voice coil gap 330 (e.g., the second plot 430) are similar. At an arc length of less than 5 mm, the magnetic flux density of the first voice coil gap 340 (e.g., the first plot 440) increases from approximately 0.1 T to approximately 1.6 T. At an arc length of approximately 6 mm, the magnetic flux density of the first voice coil gap 340 peaks at greater than 1.7 T, and as the arc length continues to increase beyond 6 mm, the magnetic flux density of the first voice coil gap 340 decreases to approximately 0.15 T at an arc length of 11 mm. At an arc length of less than 5 mm, the magnetic flux density of the second voice coil gap 330 (e.g., the second plot 430) increases from approximately 0.4 T at an arc length of 1 mm to approximately 1.7 T at an arc length of 5 mm. As the arc length continues to increase, the magnetic flux density of the second voice coil gap 330 decreases from approximately 1.7 T to approximately 0.175 T at an arc length of 11 mm.

The third voice coil gap 310 (e.g., the third plot 410) may have a different magnetic flux density compared to the first voice coil gap 340 and the second voice coil gap 330, as the structure of the third voice coil gap 310 is based on a direct-radiating cone diaphragm transducer for a woofer-midrange (e.g., the third diaphragm 110), as described above. At an arc length of less than 3 mm, the magnetic flux density increases from approximately 0.2 T to 1.1 T. The magnetic flux density may be approximately equal to 1.1 T for an arc length of approximately 3 mm to approximately 9 mm. As the arc length continues to increase, the magnetic flux density may decrease to approximately 0.5 T at an arc length of 12 mm.

Magnetic flux density of a voice coil gap may be determined based on a permanent magnetic field strength and an induced magnetic field strength. Each of the first voice coil 142, the second voice coil 132, and the third voice coil 112 may be configured (e.g., material, number of windings) such that interaction of a respective induced magnetic field and a respective permanent magnetic field results in approximately equivalent magnetic flux density for the first voice coil gap 340 and the second voice coil gap 330, while the magnetic flux density of the third voice coil gap 310 is less than that of the first voice coil gap 340 and the second voice coil gap 330. Different amounts of magnetic flux may be used to reproduce different frequency sound waves. For example, the magnetic flux density of the first voice coil gap 340 and the second voice coil gap 330 may be approximately equal, as both the first voice coil gap 340 and the second voice coil gap 330 are used to reproduce sound in the high frequency range. The magnetic flux density of the third voice coil gap 310 may be less than that of the first voice coil gap 340 and the second voice coil gap 330, as the third voice coil gap 310 is used to reproduce sound in the low-mid frequency range.
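To make the shapes of the three curves easier to compare, the values given above can be tabulated and interpolated. A minimal sketch; the sample points below are rough readings restated from the text, not exact data from the patent:

```python
import numpy as np

# Approximate (arc length [mm], flux density [T]) readings from graph 400.
gap_340 = np.array([[1, 0.1], [5, 1.6], [6, 1.75], [11, 0.15]])   # first voice coil gap
gap_330 = np.array([[1, 0.4], [5, 1.7], [11, 0.175]])             # second voice coil gap
gap_310 = np.array([[1, 0.2], [3, 1.1], [9, 1.1], [12, 0.5]])     # third voice coil gap

def flux_at(gap: np.ndarray, arc_mm: float) -> float:
    """Linearly interpolate the flux density at a given arc length."""
    return float(np.interp(arc_mm, gap[:, 0], gap[:, 1]))

# The two high-frequency gaps peak similarly; the low-mid gap plateaus lower.
print(flux_at(gap_340, 6.0), flux_at(gap_330, 6.0), flux_at(gap_310, 6.0))
```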
Turning now to FIGS. 6-8, perspective views of a second embodiment 602 of the coaxial loudspeaker 100 are shown. The second embodiment 602 may be an embodiment of the coaxial loudspeaker 100 of FIGS. 1-3B and 5, and may be equivalently configured. Elements of the second embodiment 602 which are equivalent to elements of the coaxial loudspeaker 100 and are shown in FIGS. 6-8 are similarly numbered. For example, FIG. 6 shows a first perspective view 600 of the second embodiment 602 resting on an output side 152. The second embodiment 602 further includes a dual compression driver 124, which spans a low-mid section 120 and a high frequency section 122. FIG. 7 shows a second perspective view 700 of the second embodiment 602, where an outlet of the dual compression driver 124 (e.g., a phasing plug 108) may be visualized. High frequency sounds generated by the dual compression driver may propagate from the outlet of the dual compression driver 124 and be guided in part by the third diaphragm 110, which is coupled to a frame 104 by a surround 106. FIG. 8 shows a top-down view 800 of the second embodiment 602. The frame 104 of the second embodiment 602 may include a plurality of coupling points 802, 804, 806, and 808, which may be used to couple the second embodiment 602 to a speaker housing, audio system, and so on.

FIG. 9 illustrates a method 900 for a loudspeaker, such as the coaxial loudspeaker 100 and/or the second embodiment 602 of the coaxial loudspeaker 100. The method 900 comprises applying an electrical signal to a contact and translating the electrical signal into acoustic signals (e.g., which may be interpreted as sound) using each of a first voice coil, a second voice coil, a third voice coil, a first diaphragm, a second diaphragm, a third diaphragm, a first magnet, and a second magnet. The method 900 shall be described with respect to FIGS. 1-8, and may be applied to other embodiments without departing from the scope of the disclosure.

At 902, the method 900 includes applying an electrical signal to a contact. The electrical signal may be sourced from an amplifier, which may be coupled to the contact, such as at least one of the contacts 222, via a wire or other sufficient coupling through which the electrical signal may flow.

At 904, the method 900 includes flowing the electrical signal to a voice coil. The contact may be coupled to at least one of a first voice coil, a second voice coil, and a third voice coil, such as the first voice coil 142, the second voice coil 132, and the third voice coil 112, respectively. The contact may be coupled to at least one of the first voice coil, the second voice coil, and the third voice coil via a wire or other sufficient coupling through which the electrical signal may flow. When the electrical signal is applied to a voice coil, a magnetic field may be generated at the voice coil (e.g., an induced magnetic field).

At 906, the method 900 includes translating the electrical signal into an acoustic signal using a first magnet and a second magnet. Each of the first magnet (e.g., the first magnet 144) and the second magnet (e.g., the second magnet 134) may be permanent magnets which have a first permanent magnetic field and a second permanent magnetic field, respectively. The first permanent magnetic field may interact with a first induced magnetic field of the first voice coil in a first voice coil gap. Interaction of the first permanent magnetic field and the first induced magnetic field may induce axial motion of the first voice coil. The first voice coil may be coupled to a first diaphragm which is configured to reproduce sounds in the high frequency spectrum. Axial motion (e.g., oscillation) of the first voice coil may induce oscillation of the first diaphragm, generating sounds in the high frequency spectrum based on the electrical signals.
The second permanent magnetic field may interact with a second induced magnetic field of the second voice coil in a second voice coil gap and with a third induced magnetic field of the third voice coil in a third voice coil gap. Interaction of the second permanent magnetic field and the second induced magnetic field may induce axial motion of the second voice coil. The second voice coil may be coupled to a second diaphragm which is configured to reproduce sounds in the high frequency spectrum. Axial motion (e.g., oscillation) of the second voice coil may induce oscillation of the second diaphragm, generating sounds in the high frequency spectrum based on the electrical signals. Interaction of the second permanent magnetic field and the third induced magnetic field may induce axial motion of the third voice coil. The third voice coil may be coupled to a third diaphragm which is configured to reproduce sounds in the low-mid frequency spectrum. Axial motion (e.g., oscillation) of the third voice coil may induce oscillation of the third diaphragm, generating sounds in the low-mid frequency spectrum based on the electrical signals.

At 908, the method 900 includes outputting acoustic signals, which may be interpreted as sound, in a low-mid frequency range and a high frequency range. Acoustic signals in the low-mid frequency range may be generated by the third diaphragm based on interaction of the third induced magnetic field and the second permanent magnetic field. Acoustic signals in the high frequency range may be generated by the first diaphragm based on interaction of the first induced magnetic field and the first permanent magnetic field, as well as generated by the second diaphragm based on interaction of the second induced magnetic field and the second permanent magnetic field. Acoustic signals in the low-mid frequency range may be directly output by the third diaphragm, which may be configured as a cone diaphragm. Acoustic signals in the high frequency range may propagate through a short horn structure formed by a phasing plug and may be further directed out of the loudspeaker by the cone diaphragm. In this way, sounds in the low-mid and high range frequency spectrum may be generated using two magnets, three voice coils, and three diaphragms, where two of the three diaphragms are configured to generate high frequency sound and one of the three diaphragms is configured to generate low-mid frequency sound.
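Method 900 is a short linear sequence, and the two-magnet/three-coil mapping it relies on can be stated compactly. A minimal organizational sketch; the function and dictionary names are assumptions for illustration, not from the patent:

```python
def method_900() -> dict[str, list[str]]:
    """Sketch of method 900 (steps 902-908); names are illustrative."""
    # 902: apply an electrical signal to a contact (e.g., from an amplifier).
    # 904: flow the signal to the three voice coils, inducing a magnetic
    #      field at each coil.
    # 906: each induced field interacts with a permanent magnetic field;
    #      magnet 2 serves both the second and third voice coil gaps, so
    #      two magnets suffice for three coils.
    magnet_for_coil = {"coil_1": "magnet_1", "coil_2": "magnet_2", "coil_3": "magnet_2"}
    range_for_coil = {"coil_1": "high", "coil_2": "high", "coil_3": "low-mid"}
    outputs: dict[str, list[str]] = {"high": [], "low-mid": []}
    for coil, magnet in magnet_for_coil.items():
        # Field interaction oscillates the coupled diaphragm, converting
        # the electrical signal into an acoustic signal in that range.
        outputs[range_for_coil[coil]].append(f"{coil} driven by {magnet}")
    # 908: output acoustic signals in the low-mid and high frequency ranges.
    return outputs

print(method_900())
# {'high': ['coil_1 driven by magnet_1', 'coil_2 driven by magnet_2'],
#  'low-mid': ['coil_3 driven by magnet_2']}
```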
The disclosure also provides support for a coaxial loudspeaker configured with a dual compression driver and a cone diaphragm. The dual compression driver may include a first driver assembly and a second driver assembly, each of which include a magnet, a voice coil, a voice coil gap, and a diaphragm. The dual compression driver may be coupled to a third driver assembly which includes the cone diaphragm, a voice coil, and a voice coil gap. The third driver assembly and the dual compression driver may be coupled in such a way that the second driver assembly is positioned between the first driver assembly and the third driver assembly. Within the first driver assembly, a first magnet may generate a first magnetic field in a first voice coil gap to energize a first voice coil, which may oscillate a first diaphragm to generate sound waves in the high frequency spectrum. Within the second driver assembly, a second magnet may generate a second magnetic field in a second voice coil gap to energize a second voice coil, which may oscillate a second diaphragm to generate sound waves in the high frequency spectrum (e.g., axial motion may be induced by interaction of the second permanent magnetic field and the second induced magnetic field). The second magnetic field may extend into a third voice coil gap of the third driver assembly and may thus energize a third voice coil to oscillate the cone diaphragm to generate sound waves in a low-mid frequency spectrum (e.g., axial motion may be induced by interaction of the second permanent magnetic field and the third induced magnetic field). In this way, two magnets (e.g., of the first driver assembly and the second driver assembly) and the cone diaphragm may be used to generate sound waves in the high frequency spectrum and in the low-mid frequency spectrum.

In a further embodiment, a dual compression driver includes a first magnet assembly including an annular first air gap, a first voice coil assembly axially movable in the first air gap, a first diaphragm coupled to the first voice coil assembly, a second magnet assembly including an annular second air gap, a second voice coil assembly axially movable in the second air gap, and a second diaphragm coupled to the second voice coil assembly. The dual compression driver is coupled to a third driver assembly including an annular third air gap, a third voice coil assembly axially movable in the third air gap, and a third diaphragm attached to the third voice coil assembly. The first voice coil assembly may be axially movable by a first magnetic field generated by the first magnet assembly in the annular first air gap. The second voice coil assembly and the third voice coil assembly may be axially movable by a second magnetic field generated in the annular second air gap and the annular third air gap, respectively.

The disclosure also provides support for a loudspeaker, comprising a compression driver having a first diaphragm assembly including a first magnet and a first diaphragm coupled to a first voice coil, a second diaphragm assembly including a second magnet and a second diaphragm coupled to a second voice coil, and a third diaphragm assembly including a third diaphragm coupled to a third voice coil. In a first example of the system, the first magnet drives the first diaphragm via the first voice coil and the second magnet drives the second diaphragm via the second voice coil and further drives the third diaphragm via the third voice coil. In a second example of the system, optionally including the first example, the first diaphragm, the second diaphragm, and the third diaphragm are positioned along a central linear axis, such that sound emitted by each of the first diaphragm, the second diaphragm, and the third diaphragm is emitted in a first direction and the loudspeaker is a coaxial loudspeaker. In a third example of the system, optionally including one or both of the first and second examples, the loudspeaker is a two-way coaxial loudspeaker. In a fourth example of the system, optionally including one or more or each of the first through third examples, the third diaphragm is configured to reproduce sound in a low-mid frequency range. In a fifth example of the system, optionally including one or more or each of the first through fourth examples, the third diaphragm is a cone diaphragm.
In a sixth example of the system, optionally including one or more or each of the first through fifth examples, the second diaphragm and the first diaphragm of the compression driver are configured to reproduce sound in a high frequency range. In a seventh example of the system, optionally including one or more or each of the first through sixth examples, an electrical signal is applied to the loudspeaker via at least one contact, and the electrical signal induces motion of the first voice coil, the second voice coil, and the third voice coil along a central linear axis. In an eighth example of the system, optionally including one or more or each of the first through seventh examples, motion of the first voice coil generates a first electromagnetic field (EMF), motion of the second voice coil generates a second EMF, and motion of the third voice coil generates a third EMF. In a ninth example of the system, optionally including one or more or each of the first through eighth examples, the first magnet generates a first magnetic field and the second magnet generates a second magnetic field. In a tenth example of the system, optionally including one or more or each of the first through ninth examples, interaction of the first EMF and the first magnetic field oscillates the first diaphragm, interaction of the second EMF and the second magnetic field oscillates the second diaphragm, and interaction of the third EMF and the second magnetic field oscillates the third diaphragm.

The disclosure also provides support for a loudspeaker, comprising: a first voice coil gap, a second voice coil gap, and a third voice coil gap, wherein a first magnet creates a first magnetic field in the first voice coil gap and a second magnet creates a second magnetic field in the second voice coil gap and the third voice coil gap. In a first example of the system, the first magnet is included in a first diaphragm assembly which further includes a first voice coil and a first diaphragm, the second magnet is included in a second diaphragm assembly which further includes a second voice coil and a second diaphragm, and the loudspeaker further comprises a third diaphragm assembly which includes a third voice coil and a third diaphragm. In a second example of the system, optionally including the first example, the first voice coil gap is formed between a rear top plate and a pole piece of a rear back plate, the second voice coil gap is formed between a front back plate and a phasing plug, and the third voice coil gap is formed between a front top plate and the phasing plug. In a third example of the system, optionally including one or both of the first and second examples, the first voice coil is positioned in the first voice coil gap, the second voice coil is positioned in the second voice coil gap, and the third voice coil is positioned in the third voice coil gap. In a fourth example of the system, optionally including one or more or each of the first through third examples, the first magnetic field drives oscillation of the first diaphragm and the second magnetic field drives oscillation of the second diaphragm and the third diaphragm.
The disclosure also provides support for a method for a loudspeaker, comprising applying an electrical signal to a contact, wherein the contact is coupled to a first voice coil, a second voice coil, and a third voice coil, translating the electrical signal into an acoustic signal using a first magnet and a second magnet, and outputting the acoustic signal in a low-mid frequency range and a high frequency range. In a first example of the method, the electrical signal generates a first electromagnetic field (EMF) at the first voice coil, a second EMF at the second voice coil, and a third EMF at the third voice coil. In a second example of the method, optionally including the first example, the first magnet generates a first magnetic field which interacts with the first EMF to oscillate a first diaphragm coupled to the first voice coil. In a third example of the method, optionally including one or both of the first and second examples, the second magnet generates a second magnetic field which interacts with the second EMF to oscillate a second diaphragm coupled to the second voice coil, and the second magnetic field further interacts with the third EMF to oscillate a third diaphragm coupled to the third voice coil.

The description of embodiments has been presented for purposes of illustration and description. Suitable modifications and variations to the embodiments may be performed in light of the above description or may be acquired from practicing the methods. The described systems are exemplary in nature, and may include additional elements and/or omit elements. The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various systems and configurations, and other features, functions, and/or properties disclosed. As used in this application, an element or step recited in the singular and preceded with the word "a" or "an" should be understood as not excluding plural of said elements or steps, unless such exclusion is stated. Furthermore, references to "one embodiment" or "one example" of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. The terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements or a particular positional order on their objects. The following claims particularly point out subject matter from the above disclosure that is regarded as novel and non-obvious.
11943580
DETAILED DESCRIPTION The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the apparatus and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). Depending on the particular application, design constraints, or other reasons, these elements may be implemented using electronic hardware, computer programs, or any combination thereof. The speaker device or loudspeaker can be of any kind, e.g. a dynamic loudspeaker (using a permanent magnet and a (voice) coil connected to a diaphragm or cone; the coil (and hence the diaphragm) moves axially in the field from the permanent magnet when an electric current of varying polarity (AC) is applied to the coil). Other loudspeaker types, e.g. based on piezoelectric or electrostatic principles, etc., can be used. The speaker device may be a speakerphone with or without video and/or collaboration bars. In an aspect the speaker device may be a video conference device. In an aspect the speaker device may be a Bluetooth speaker. The chamber surrounding the loudspeaker unit can be open or closed. Various types of acoustic couplings (drivers and acoustic resonators and transmission paths) of the loudspeaker unit and a surrounding chamber can be used, e.g. band pass, bass reflex, horn, etc. FIGS.1A-F illustrate a speaker device1according to different aspects of the disclosure in a cross-sectional side view of the speaker device1. The speaker device1has a speaker enclosure structure2, the speaker enclosure structure2being a substantially hollow structure enclosing an internal space. The speaker enclosure structure2may have a cuboid shape, cylindrical shape, spherical shape or the like but is not limited to those. The speaker enclosure structure2may have different topologies, such as closed box, vented or isobaric but is not limited to these. A speaker4, which is an electronic component capable of emitting sound, is provided at the speaker enclosure2. The speaker4may be a piezoelectric speaker, a speaker having a voice coil, a digital speaker or any other commonly known type of speaker. The speaker4may be attached to a wall of the speaker enclosure structure2or may be arranged within the internal space of the speaker enclosure structure2and connected to the speaker enclosure structure2. The speaker4may be connected to the speaker enclosure structure2by means of additional mounting devices, if required. The speaker4and the speaker enclosure2are arranged such that sound waves can be generated by collaboration of the speaker4and the speaker enclosure2. Although not shown, additional electronic components required for operation of the speaker4or used for other functions of the speaker device1may be provided in or at the speaker enclosure structure2. Further, the speaker device1has a speaker device housing3. The speaker device housing3is a general-type housing as commonly used for electronic devices. The speaker device housing3may be formed of plastic, metal or the like. The speaker device housing3is, for example, a housing formed by injection moulding.
However, the speaker device housing3may also be a housing formed by other forming methods, such as milling. InFIGS.1A, C, E, the speaker device housing3is shown to be formed as a single part and inFIGS.1B, D, F, the speaker device housing3is shown to be formed by two parts, a first part3aand a second part3b, where the first part3ais arranged on top of the second part3b. The speaker device housing3accommodates microphones8and additional electrical components of the speaker device1, such as battery9, cables10, and PCB boards11. These additional components may be used for operation of the speaker4. They may also be used for other functions of the speaker device1, such as operation of the microphones8, establishing a remote connection to an external device or the like. The speaker device housing3may also accommodate components other than the components explicitly enumerated above. In some aspects, the first part3aof the speaker device housing3may accommodate the microphones8. Furthermore, the second part3bof the speaker device housing3may accommodate the additional electrical components of the speaker device1, such as battery9, cables10, and PCB boards11. As shown inFIGS.1A-F, the speaker device1has a support6, provided at the speaker device housing3and adapted to support the speaker device housing3on an extraneous structure. The support6may be formed integrally with the speaker device housing3or may be a separate member attached to the speaker device housing3. The support6may be formed by at least one damping element (first damping element)6, which is flexible or at least substantially elastic and configured to dampen mechanical vibrations. That is, vibrations being applied to the speaker device housing3are dampened or cancelled by the damping element6. For example, the first damping element6is formed of rubber, metal or a rubber-metal compound. For example, the first damping element6is a rubber foot, a rubber pad, a rubber buffer or the like, attached to the speaker device housing3. Further, the damping elements may have high viscosity to enhance the absorption of energy. AlthoughFIG.1shows two first damping elements6supporting the speaker device housing3, the speaker device housing3may be supported by one single first damping element6or by a different number of first damping elements6. Although the first damping element6is shown inFIGS.1A-F as having a cubic shape, the first damping element6may be formed in an annular shape, a rib-shape, a plate-shape or the like. As shown inFIGS.1C-F, the speaker enclosure structure2has a support7assigned to it, which is capable of individually and at least partly supporting the speaker enclosure structure2extraneously. In other words, the support7is adapted to support at least part of the weight of the speaker enclosure structure2on an extraneous structure independently of the speaker device housing3. Although not shown inFIGS.1A-F, the speaker enclosure structure2may be supported from below the speaker enclosure structure2by means of the support7being placed on a flat surface without interposition or contribution of the speaker device housing3. However, the support7may also support the speaker enclosure structure2by being attached to a wall or a room ceiling or the like. As shown inFIGS.1E-F, the speaker enclosure structure2may be supported from below the speaker enclosure structure2by means of the support7being placed on an internal surface of the speaker device housing3.
Although the speaker device housing is shown inFIGS.1A-Fto be configured as described above, the speaker device housing may have a different configuration. The additional electrical components may each be accommodated in either part of the housing, or all of them may be accommodated in the first part3aof the speaker device housing3or all of them may be accommodated in the second part3bof the speaker device housing3. In other words, the additional electrical components of the speaker device1are accommodated in the speaker device housing3according to the requirements regarding installation space, regarding the electrical connections between the components or the like. Although the speaker device housing3is shown inFIGS.1A-Fto be formed by a single part3or by two parts3a,3barranged on top of each other, other arrangements of the speaker enclosure structure2and the speaker device housing3are possible. For example, the speaker device housing3may be formed by only one part arranged at a side of the speaker enclosure structure2. Further, the speaker device housing3may be formed of more than two parts. Further, the speaker device housing3may be configured to at least partly surround the speaker enclosure structure2. Furthermore, the speaker device1may include more than one speaker enclosure structure2. Furthermore, the support7supporting the speaker enclosure structure2may be formed integrally with the speaker enclosure structure2or may be a separate member attached to the speaker enclosure structure2. The support7may be formed by at least one damping element (second damping element)7, which is flexible or at least substantially elastic and configured to dampen mechanical vibrations. That is, vibrations being applied to the speaker enclosure structure2are dampened or cancelled by the second damping element7. In particular, mechanical vibrations generated by the operation of the speaker4and applied to the speaker enclosure structure2are dampened or cancelled by the second damping element7. For example, the second damping element7is formed of rubber, metal or a rubber-metal compound. For example, the second damping element7is a rubber foot, a rubber pad, a rubber buffer or the like, attached to the speaker enclosure structure2. Further, the damping elements may have high viscosity to enhance the absorption of energy. AlthoughFIGS.1C-Fshow two second damping elements7supporting the speaker enclosure structure2, the speaker enclosure structure2may be supported by one single second damping element7or by a different number of second damping elements7. Although the second damping element7is shown inFIGS.1C-Fto have a cubic shape, the second damping element7may be formed in an annular shape, a rib-shape, a plate-shape or the like. As shown inFIGS.1A-F, the speaker enclosure structure2and the speaker device housing3are mechanically coupled. Thereby, at least part of the weight of the speaker enclosure structure2or the speaker device housing3is supported via the mechanical coupling5. However, since the speaker enclosure structure2and the speaker device housing3may be supported by the support7and the support6, respectively, the mechanical coupling5may be configured such that only a small part of the weight of the speaker device housing3or the speaker enclosure structure2is supported via the mechanical coupling5. In other words, the speaker enclosure structure2and the speaker device housing3are coupled by a soft suspension or the like.
In addition, the mechanical coupling5between the speaker enclosure structure2and the speaker device housing3may be formed by at least one coupling element5, which has a vibration damping structure configured to inhibit mechanical vibrations being transmitted through the coupling element5. Thereby, transmission of mechanical vibrations applied to the speaker device housing3to the speaker enclosure structure2can be inhibited or suppressed and transmission of mechanical vibrations applied to the speaker enclosure structure2to the speaker device housing3can be inhibited or suppressed. In particular, transmission of mechanical vibrations generated by the operation of the speaker4and applied to the speaker enclosure structure2to the speaker device housing3can be inhibited or suppressed. For example, the coupling element5is formed of rubber, metal, a rubber-metal compound or plastic. The coupling element5may also be formed to have a foam structure. For example, the coupling element is formed of polystyrene foam or polyurethane foam. Furthermore, the coupling element5may be formed integrally with the speaker enclosure structure2, the speaker device housing3or both of them. Although some ofFIGS.1A-Fshow a single coupling element5, the speaker enclosure structure2and the speaker device housing3may be mechanically coupled by a different number of coupling elements5, e.g. as shown inFIG.1A. Furthermore, the supports6and7and/or the coupling element5also allow compensating for, e.g., an uneven surface where the speaker device is placed, e.g., a table surface, and/or tolerance in the production of the speaker enclosure structure2and speaker device housing3. FIG.2illustrates the speaker device1shown inFIGS.1A-F in a plan view of the speaker device. In this embodiment, the speaker device1includes a speaker device housing3,3awhich surrounds a speaker enclosure structure2. Similar to the above embodiments, microphones8can be accommodated in the speaker device housing3,3a. Although not shown inFIG.2, other electrical components may be accommodated in the speaker device housing3,3a. A speaker4is provided at the speaker enclosure2. The speaker4may be attached to a wall of the speaker enclosure structure2or may be arranged within the internal space of the speaker enclosure structure2and connected to the speaker enclosure structure2. The speaker4may be connected to the speaker enclosure structure2by means of additional mounting devices, if required. The speaker4and the speaker enclosure2are arranged such that sound waves can be generated by collaboration of the speaker4and the speaker enclosure2. Although not shown, additional electronic components required for operation of the speaker4or used for other functions of the speaker device1may be provided in or at the speaker enclosure structure2. As shown inFIG.2, the speaker enclosure structure2and the speaker device housing3,3aare mechanically coupled by a coupling element5. In this embodiment, the coupling element5is formed so as to surround the speaker enclosure structure2. In other words, the coupling element5in this embodiment is formed as an enclosing suspension. For example, the coupling element5is formed as a thin, plate-shaped structure having a cut-out corresponding to the outline of the speaker enclosure2.
Although the coupling element5is shown inFIG.2as being formed in a rectangular ring around the speaker enclosure2, the coupling element may have an annular shape or the like, depending on the outline of the speaker enclosure2, which is not limited to a rectangle. Although only one coupling element is shown inFIG.2, the speaker enclosure structure2and the speaker device housing may be mechanically coupled by a plurality of coupling elements5formed as an enclosing suspension, which are stacked in the view direction ofFIG.2. Although the coupling element5is shown inFIG.2as being formed such as to completely surround the speaker enclosure structure without interruptions, the coupling element5may be formed with interruptions. In other words, a plurality of coupling elements5, which at least partially surround the speaker enclosure structure2, may be formed in order to mechanically couple the speaker enclosure structure2and the speaker device housing3. These coupling elements5may be arranged at different sides with respect to the speaker enclosure structure2. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well (i.e. to have the meaning “at least one”), unless expressly stated otherwise. It will be further understood that the terms “includes”, “comprises”, “including” and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element but intervening elements may also be present, unless expressly stated otherwise. Furthermore, “connected” or “coupled” as used herein may include wirelessly connected or coupled. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise. It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” or “an aspect” or features included as “may” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. The claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. Accordingly, the scope should be judged in terms of the claims that follow.
11943581
DETAILED DESCRIPTION FIG.1is a block diagram illustrating an example vehicle configured to perform various aspects of the transparent audio mode techniques described in this disclosure. Vehicle100is assumed in the description below to be an automobile. However, the techniques described in this disclosure may apply to any type of vehicle capable of conveying occupant(s) in a cabin, such as a bus, a recreational vehicle (RV), a semi-trailer truck, a tractor or other type of farm equipment, a train car, a plane, a personal transport vehicle, and the like. In the example ofFIG.1, the vehicle100includes processing circuitry112, audio circuitry114, and a memory device116. In some examples, processing circuitry112and audio circuitry114may be formed as an integrated circuit (IC). For example, the IC may be considered a processing chip within a chip package, and may be a system-on-chip (SoC). Examples of processing circuitry112and audio circuitry114include, but are not limited to, one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), fixed function circuitry, programmable processing circuitry, any combination of fixed function and programmable processing circuitry, or other equivalent integrated circuitry or discrete logic circuitry. Processing circuitry112may be the central processing unit (CPU) of the vehicle100. In some examples, audio circuitry114may be specialized hardware that includes integrated and/or discrete logic circuitry that provides audio circuitry114with parallel processing capabilities. Processing circuitry112may execute various types of applications, such as various occupant experience related applications including climate control interfacing applications, entertainment and/or infotainment applications, cellular phone interfaces (e.g., as implemented using Bluetooth® links), navigating applications, vehicle functionality interfacing applications, web or directory browsers, or other applications that enhance the occupant experience within the confines of the vehicle100. The memory device116may store instructions for execution of the one or more applications. Memory device116may include, be, or be part of the total memory for vehicle100. Memory device116may comprise one or more computer-readable storage media. Examples of memory device116include, but are not limited to, a random access memory (RAM), an electrically erasable programmable read-only memory (EEPROM), flash memory, or other medium that can be used to carry or store desired program code in the form of instructions and/or data structures and that can be accessed by a computer or one or more processors (e.g., processing circuitry112and/or audio circuitry114). In some aspects, memory device116may include instructions that cause processing circuitry112and/or audio circuitry114to perform the functions ascribed in this disclosure to processing circuitry112and/or audio circuitry114. Accordingly, memory device116may be a computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors (e.g., processing circuitry112and/or audio circuitry114) to perform various functions. Memory device116is a non-transitory storage medium. The term “non-transitory” indicates that the storage medium is not embodied in a carrier wave or a propagated signal.
However, the term “non-transitory” should not be interpreted to mean that memory device116is non-movable or that its contents are static. As one example, memory device116may be removed from vehicle100, and moved to another device. As another example, memory, substantially similar to memory device116, may be inserted into one or more receiving ports of vehicle100. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in RAM). As further shown in the example ofFIG.1, the vehicle100may include an interface device122, camera(s)124, multiple microphones128, and one or more loudspeakers126. In some examples, interface device122may include one or more microphones that are configured to capture audio data of spoken commands provided by occupants of vehicle100. In some examples, interface device122may include an interactive input/output display device, such as a touchscreen or other presence sensitive display. For instance, display devices that can form a portion of interface device122may represent any type of passive screen on which images can be projected, or an active screen capable of projecting images (such as a light emitting diode (LED) display, an organic LED (OLED) display, liquid crystal display (LCD), or any other type of active display), with input-receiving capabilities built in. Although shown as a single device inFIG.1for ease of illustration, interface device122may include multiple user-facing devices that are configured to receive input and/or provide output. In various examples, interface device122may include displays in wired or wireless communication with vehicle100, such as a heads-up display (HUD), a head-mounted display, an augmented reality computing device (such as “smart glasses”), a virtual reality computing device or display, a laptop computer or netbook, a mobile phone (including a so-called “smartphone”), a tablet computer, a gaming system, or another type of computing device capable of acting as an extension of or in place of a display integrated into vehicle100. Interface device122may represent any type of physical or virtual interface with which a user may interface to control various functionalities of vehicle100. Interface device122may include physical buttons, knobs, sliders or other physical control implements. Interface device122may also include a virtual interface whereby an occupant of vehicle100interacts with virtual buttons, knobs, sliders or other virtual interface elements via, as one example, a touch-sensitive screen. Occupant(s) may interface with interface device122to control one or more of a climate within vehicle100, audio playback by vehicle100, video playback by vehicle100, transmissions (such as cell phone calls) through vehicle100, or any other operation capable of being performed by vehicle100. The interface device122may also represent interfaces extended from vehicle100when acting as an extension of or in place of a display integrated into vehicle100. That is, interface device122may include virtual interfaces presented via the above noted HUD, augmented reality computing device, virtual reality computing device or display, tablet computer, or any other of the different types of extended displays listed above. Vehicle100may include a steering wheel for controlling a direction of travel of vehicle100, one or more pedals for controlling a rate of travel of vehicle100, one or more hand brakes, etc.
In some examples, the steering wheel and pedals may be included in a particular in-cabin vehicle zone of vehicle100, such as in the driver zone or pilot zone. For purposes of illustration, processing circuitry112, audio circuitry114and interface device122may form or otherwise support operation of a so-called head unit (which may also be referred to as a vehicle head unit). As such, reference to a head unit may refer to a computing device integrated within vehicle100that includes processing circuitry112, audio circuitry114, and interface device122. Processing circuitry112may execute an operating system (OS) having a kernel (which is an OS layer that facilitates interactions with underlying hardware of the head unit and other connected hardware components, and executes in protected OS space) that supports execution of applications in an application space provided by the OS. Camera(s)124of vehicle100may represent one or more image and/or video capture devices configured to capture image data (where a sequence of image data may form video data). Vehicle100may include a single camera capable of capturing 360 degrees of image/video data, or multiple cameras configured to capture a portion of the surroundings of vehicle100(where each portion may be stitched together to form 360 degrees of image/video data). In some examples, cameras124may only capture discrete portions of (and not all portions necessary to form) 360 degrees of image/video data. In other examples, cameras124may enable capture of three-dimensional image/video data representative of an entire visual scene surrounding vehicle100. Cameras124may be disposed in a single location on a body of vehicle100(e.g., a roof of vehicle100) or multiple locations around the body of and externally directed from vehicle100to capture image/video data representative of an external visual scene in which vehicle100operates. Cameras124may assist in various levels of autonomous driving, safety systems (e.g., lane assist, dynamic cruise control, etc.), vehicle operation (e.g., backup cameras for assisting in backing up vehicle100), and the like. Microphones128of vehicle100may represent a microphone array representative of a number of different microphones128placed external to vehicle100in order to capture a sound scene of an environment within which vehicle100is operating. Microphones128may each represent a transducer that converts sound waves into electrical signals (which may be referred to as audio signals, and when processed into digital signals, audio data). One or more of microphones128may represent reference microphones and/or error microphones for performing audio signal processing (e.g., wind noise cancellation, active noise cancellation, etc.). Loudspeakers126represent components of the vehicle100that reproduce a soundfield based on audio signals provided directly or indirectly by processing circuitry112and/or audio circuitry114. For instance, loudspeakers126may generate pressure waves based on one or more electrical signals received from processing circuitry112and/or audio circuitry114. Loudspeakers126may include various types of speaker hardware, including full-range driver-based loudspeakers, individual loudspeakers that include multiple range-specific dynamic drivers, or loudspeakers that include a single dynamic driver such as a tweeter or a woofer. Audio circuitry114may be configured to perform audio processing with respect to audio signals/audio data captured via microphones128in order to drive loudspeakers126.
Audio circuitry114may also receive audio signals/audio data from processing circuitry112that audio circuitry114may process in order to drive loudspeakers126. The term “drive” as used herein may refer to a process of providing audio signals to loudspeakers126, each of which includes a driver by which to convert the audio signals into pressure waves (which is another way of referring to sound waves). The term “drive” refers to providing such audio signals to the driver of loudspeakers126in order to reproduce a soundfield (which is another way of referring to a sound scene) represented by the audio signals. Many vehicles, such as vehicle100, are equipped with entertainment or infotainment systems, which reproduce a soundfield, based on audio data (or in other words, audio signals), via loudspeakers, such as loudspeakers126. While the reproduction of the soundfield by the infotainment system may increase immersion for occupants of the vehicle, such reproduction of the soundfield may diminish the ability of the operator of the vehicle (e.g., a driver of an automobile) to identify possible issues in an environment in which the operator is operating the vehicle. That is, in addition to road noise resulting from operating the vehicle at speed, the operator of the vehicle may have even further reduced awareness of the environment in which the vehicle is being operated. Such diminished awareness may result in potential safety hazards (e.g., as the operator may not hear sirens, bicycles, pedestrians, etc. due to road noise and the addition of the soundfield reproduced by the infotainment system via the loudspeakers). In accordance with various aspects of the techniques described in this disclosure, vehicle100may include microphones128externally disposed around a body of vehicle100, where such microphones128capture audio signals (or, in other words, audio data) representative of a sound scene external to vehicle100. Processing circuitry112may receive such audio data from microphones128and provide the audio data to audio circuitry114. Audio circuitry114may, responsive to receiving the audio data, invoke a transparency module115. Transparency module115(“TM115”) may represent a module that supports a transparent audio mode for vehicle100, enabling various audio objects in the externally captured sound scene to be reproduced internally within vehicle100. Transparency module115may perform various types of audio signal processing in order to accurately reproduce the audio object internally within vehicle100. For example, transparency module115may perform beamforming with respect to the audio data to obtain object audio data representative of an audio object in the sound scene external to vehicle100. Beamforming may refer to a number of audio signal processing algorithms by which to perform spatial filtering of audio data, usually involving combining audio signals from each of microphones128to extract (e.g., by constructive combining) the object audio data and reject (e.g., by destructive combining) interfering audio signals from each of microphones128according to spatial locations of microphones128. Transparency module115may perform one or more pre-processing audio algorithms to remove (e.g., filter out) noise, such as ambient noise due to wind, weather, animals, etc. In some instances, transparency module115may perform such beamforming concurrently in a number of different spatial directions to extract object audio data for multiple different audio objects of interest in the sound scene.
In this respect, transparency module115may perform beamforming in multiple different directions with respect to the audio data to obtain two or more object audio data representative of two or more audio objects in the sound scene external to the vehicle. Transparency module115may then interface with loudspeakers126to reproduce, based on the object audio data, the audio object. When existing audio data is being reproduced, such as audio data from processing circuitry112in support of entertainment/infotainment audio content being reproduced for consumption by occupants of vehicle100, transparency module115may mix the reproduced audio object with such other audio content. In some instances, transparency module115may also invoke cameras124to provide video data representative of the visual scene external to vehicle100. Cameras124and/or processing circuitry112may perform object detection with respect to the video data to identify a location of the audio object in the visual scene external to vehicle100. Processing circuitry112may utilize machine learning in order to train an object detection model to perform object detection. In some instances, the object detection model is trained off-line (e.g., at a manufacturer or other component provider) and installed within vehicle100(e.g., stored to memory device116). Some object detection models may involve distance transform-based matching involving neural networks or other forms of artificial intelligence. In any event, processing circuitry112may implement such object detection to identify a location of potential audio objects in the sound scene. For example, processing circuitry112may perform object detection to identify a location and/or direction of a pedestrian relative to vehicle100. Processing circuitry112may obtain a programmed location of each one of cameras124and a programmed width of field of each of cameras124to identify in which direction and/or at which location each potential audio object resides relative to vehicle100. Processing circuitry112may provide the identified location/direction to audio circuitry114, which may pass such location/direction to transparency module115. Transparency module115may then perform, based on the location/direction of the audio object, beamforming with respect to the audio data to obtain the object audio data representative of the audio object in the sound scene external to vehicle100. Moreover, because beamforming (and possibly visual object detection) requires a programmed definition of the locations of microphones128, transparency module115may determine a direction at which the audio object resides within the three-dimensional (3D) sound scene in which vehicle100operates. Transparency module115may mix the reproduced audio object in such a manner that the audio object appears to audibly arrive from the direction in which the audio object resides in the 3D sound scene. Transparency module115may spread the audio object across two or more speaker feeds (which may also be referred to as speaker channels) in order to place the audio object in locations at which loudspeakers126are not located (e.g., using vector-based amplitude panning (VBAP) or other audio signal post-processing).
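To illustrate the kind of panning the preceding paragraph references, the following is a minimal two-loudspeaker, two-dimensional VBAP sketch in Python; the gain law, speaker angles, and function names are illustrative assumptions, not details taken from the disclosure.

```python
# Minimal 2-D vector-based amplitude panning (VBAP) sketch. Illustrative
# only: a real system would handle speaker triplets/arrays and 3-D panning.
import numpy as np

def vbap_2d_gains(source_angle, speaker_angles):
    """Gains for a loudspeaker pair so a source appears at source_angle.

    Solves p = g1*l1 + g2*l2, where p, l1, l2 are unit vectors toward the
    source and the two loudspeakers (angles in radians), then normalizes
    for roughly constant power across the pan.
    """
    p = np.array([np.cos(source_angle), np.sin(source_angle)])
    L = np.column_stack([[np.cos(a), np.sin(a)] for a in speaker_angles])
    g = np.linalg.solve(L, p)      # assumes the two speakers are not colinear
    g = np.clip(g, 0.0, None)      # negative gain => source outside the pair
    norm = np.linalg.norm(g)
    return g / norm if norm > 0 else g

# Example: pan an object 10 degrees left of center between front-left
# (+30 degrees) and front-right (-30 degrees) loudspeakers.
gains = vbap_2d_gains(np.deg2rad(10.0), [np.deg2rad(30.0), np.deg2rad(-30.0)])
```

The power normalization keeps perceived loudness roughly constant as the virtual source moves between the pair.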
Transparency module115may effectively generate virtual speakers at the location at which the audio object resides in the sound scene relative to vehicle100and reproduce the audio object as speaker feeds to one or more loudspeakers126(potentially mixing additional audio content into the audio object speaker feeds) that drive one or more loudspeakers126to reproduce the audio object. In this way, various aspects of the techniques may increase awareness by an operator of vehicle100through external audio object identification and reproduction internally within vehicle100. For example, vehicle100may identify as audio objects a pedestrian, bicycle, cross vehicular traffic, sirens, horns, etc. and reproduce such audio objects internally within vehicle100to bring operator attention to potential safety hazards that may impact operation of vehicle100. Reducing and potentially avoiding safety hazards may allow the vehicle to operate more safely in difficult situations (e.g., where various objects are occluded from view but, given the diffraction properties of sound waves, may still be identified). As such, various aspects of the techniques may improve operation of vehicle100itself. FIGS.2A and2Bare diagrams illustrating a vehicle configured to implement a transparency mode in accordance with various aspects of the techniques described in this disclosure. As shown in the example ofFIG.2A, a vehicle200includes audio circuitry214that may represent an example of audio circuitry114described above with respect to the example ofFIG.1. As such, audio circuitry214may be configured to implement transparency module115. Vehicle200may represent an example of vehicle100, where vehicle200includes four cameras224A-224D (“cameras224”), five loudspeakers226A-226E (“loudspeakers226”), and four microphones228A-228D (“microphones228”). Cameras224may represent examples of camera(s)124. Loudspeakers226may represent examples of loudspeakers126, while microphones228may represent examples of microphones128. While described as having four cameras224, five loudspeakers226, and four microphones228, vehicle200may include more or fewer of each of cameras224, loudspeakers226, and microphones228. In the example ofFIG.2A, camera224A is disposed at a front of vehicle200, while cameras224B and224C are disposed at the driver and passenger sides of vehicle200. Camera224D is disposed at a rear of vehicle200. Loudspeakers226are disposed about a cabin of vehicle200in a common (5.1) configuration having a center channel, right and left channels, and back right and back left channels (where the subwoofer is not shown for ease of illustration). Microphones228are disposed at each corner of vehicle200. While shown in a particular location and/or arrangement, it should be understood that locations of cameras224, loudspeakers226and microphones228can reside anywhere external to vehicle200or internal to vehicle200. For example, cameras224are shown as externally located on the body of vehicle200, but such cameras224may be internal to vehicle200but facing outward to capture the external visual scene in which vehicle200operates. Microphones228may, as another example, be located externally on the body of vehicle200but in different locations and/or arrangements. Loudspeakers226, on the other hand, reside internal to vehicle200for purposes of reproducing sound scenes for occupants of vehicle200, but may be arranged in different configurations to accommodate different intended use cases.
In any event, audio circuitry214may interface with microphones228to capture audio data representative of an external sound scene. In the example ofFIG.2A, a pedestrian230A resides in the vicinity (e.g., within some threshold distance, such as 100, 200, 300, etc. feet) of vehicle200and forms part of an external sound scene in which vehicle200operates. Cameras224may capture the video data representative of the visual scene, where processing circuitry112may identify pedestrian230A as a potential audio object and thereby determine a location/direction of pedestrian230A relative to vehicle200. Processing circuitry112may pass this location/direction of pedestrian230A to transparency module115via audio circuitry214. Transparency module115may perform beamforming, based on the location/direction, to capture pedestrian230A as an audio object in the sound scene represented by the audio data captured by microphones228. Beamforming is denoted in the example ofFIG.2Aas lobes240A, where the main lobe is directed towards pedestrian230A based on the location/direction identified via visual object detection. Lobes240A also include secondary lobes on both sides of the main lobe that provide some diffuseness. In other words, microphones228may fix a particular angle from which to capture audio data representative of the sound scene. Because the number of microphones228is finite (i.e., four in this example), the main lobe may have a non-impulse width, meaning that for a particular angle $\theta'$ there may be a slight ambiguity over a cone of $\delta\theta$ (so that a potential ground truth value is somewhere between $\theta' - \delta\theta$ and $\theta' + \delta\theta$). Transparency module115may next perform such beamforming using a weighted delay and sum (WDAS) algorithm defined in accordance with the following equation:

$$y(k) = \sum_{n=0}^{N-1} w_n \, x_n(k - \tau_n)$$

where the variable $N$ denotes the number of microphones, the variable $w_n$ denotes amplitude weights that emphasize certain microphones228over others, the variable $x_n$ denotes the audio data provided by each of microphones228, and the variable $\tau_n$ denotes independent delays for each microphone channel (which is another way to refer to audio data) captured by microphones228to amplify the summed microphone response at a target direction. The variable $k$ denotes a current time. In some instances, the weights ($w_n$) and delays ($\tau_n$) are defined through offline calibration, e.g., at a factory or manufacturer. Although described with respect to a WDAS algorithm, transparency module115may apply any other type of beamforming algorithm. Examples of other types of beamforming algorithms include a constant beamwidth broadband beamforming algorithm, a minimum variance distortionless response beamforming algorithm, a broadband constrained minimum variance beamforming algorithm, a statistical eigen beamforming algorithm, a beamspace beamforming algorithm, a near field adaptive beamforming algorithm, a Frost beamforming algorithm, a near field acoustic beamforming algorithm, and a degenerate unmixing estimation technique (DUET) beamforming algorithm. In any event, transparency module115may process the audio data captured by microphones228disposed at different locations on the body of vehicle200, effectively filtering and amplifying (in the case of the WDAS algorithm) the microphone signals to form directed lobes240A that target pedestrian230A and extract object audio data representative of the audio object (i.e., pedestrian230A in this example).
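For concreteness, a direct discrete-time rendering of the WDAS equation above might look like the following Python sketch, assuming integer sample delays and zero-padding at frame boundaries; in practice the weights and delays would come from the offline calibration mentioned above, and all names here are illustrative.

```python
# Discrete-time weighted delay-and-sum (WDAS) sketch of
# y(k) = sum_{n=0}^{N-1} w_n * x_n(k - tau_n), assuming integer delays.
import numpy as np

def wdas_beamform(x, w, tau):
    """x: (N, K) microphone samples; w: (N,) weights; tau: (N,) int delays."""
    N, K = x.shape
    y = np.zeros(K)
    for n in range(N):
        shifted = np.zeros(K)
        if tau[n] < K:
            # x_n(k - tau_n): shift channel n right by tau[n] samples,
            # zero-padding the samples with no available history.
            shifted[tau[n]:] = x[n, :K - tau[n]]
        y += w[n] * shifted
    return y

# Example with four microphone channels of one frame, echoing the
# four-microphone arrangement of FIG. 2A.
rng = np.random.default_rng(0)
frame = rng.standard_normal((4, 1024))
weights = np.array([1.0, 0.8, 0.8, 0.6])   # calibration-derived in practice
delays = np.array([0, 3, 5, 7])            # steer toward a target direction
beam = wdas_beamform(frame, weights, delays)
```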
Transparency module115may, in this way, perform beamforming with respect to the multi-channel audio data provided by microphones228to extract the object audio data. Transparency module115may next assign a location to the object audio data and render the object audio data to one or more loudspeaker feeds used to drive one or more corresponding loudspeakers226. As noted above, transparency module115may perform vector-based amplitude panning or other audio signal processing algorithms for generating virtual speakers such that the audio object is reproduced, based on the object audio data, at the location from which the audio object resides in the sound scene relative to vehicle200. In this example, transparency module115may assign a forward center location to pedestrian230A and generate a center channel speaker feed that drives front center speaker226A (and possibly one or more additional speakers of speakers226, which may also be referred to as a speaker array) to reproduce the audio object (again, pedestrian230A in this example). Transparency module115may mix the rendered audio data (e.g., a speaker feed) with an existing front center channel speaker feed (that may include audio content from the infotainment system, which may also be referred to as the head unit). In this respect, transparency module115may select a subset of the one or more loudspeakers226that are capable of reproducing the audio object in a direction in which the audio object is relative to vehicle200(where subset is used to mean one or more but not all, and is not intended to denote the classical mathematical definition of subset that can include zero or all items of the entire set). Moreover, transparency module115may reproduce, by interfacing with the subset of the one or more speakers226(which is another way to refer to loudspeakers226) and based on the object audio data, the audio object. Referring next to the example ofFIG.2B, audio circuitry214of vehicle200may perform beamforming (as denoted by lobes240B) in a different direction responsive to identifying a new audio object representative of a pedestrian230B. Audio circuitry214may invoke transparency module115to perform such beamforming in the manner described above and render, based on identification of the location/direction (e.g., by way of cameras224and/or microphones228) and extracted object audio data, speaker feeds for driving a back left loudspeaker226D. Again, transparency module115may mix existing back left speaker feeds (for other audio content) with the speaker feed rendered from the extracted object audio data (a rough sketch of this selection-and-mixing step is given below). While shown as only performing beamforming in one direction in both of the examples ofFIGS.2A and2B, transparency module115may perform beamforming in multiple different directions with respect to the audio data captured by microphones228to obtain two or more object audio data representative of two or more audio objects (e.g., pedestrians230A and230B) in the sound scene external to vehicle200. Such beamforming in multiple directions may occur concurrently (and potentially simultaneously) as the microphone channels are captured a single time (or, in other words, only once) and transparency module115may perform beamforming on board vehicle200and in near-real-time or real-time (e.g., with minimal processing delay).
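The selection-and-mixing step referenced above might be sketched as follows; the nearest-speaker selection rule, angle convention, and equal-split gain are assumptions for illustration rather than the disclosure's method.

```python
# Sketch of selecting the loudspeakers nearest an object's direction and
# mixing the rendered object feed into existing (e.g., infotainment) feeds.
import numpy as np

def mix_object_into_feeds(existing_feeds, object_audio, object_angle,
                          speaker_angles, num_active=2, object_gain=1.0):
    """existing_feeds: (S, K); object_audio: (K,); angles in radians."""
    # Wrapped angular distance between the object and each loudspeaker.
    diff = np.angle(np.exp(1j * (np.asarray(speaker_angles) - object_angle)))
    subset = np.argsort(np.abs(diff))[:num_active]  # nearest loudspeakers
    out = existing_feeds.copy()
    for s in subset:
        # Equal split here; a real renderer would use panning gains (VBAP).
        out[s] += object_gain * object_audio / num_active
    return out
```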
Such real-time or near-real-time beamforming may allow transparency module115to reproduce audio objects in the sound scene at the current time (except for possible minimal processing delay) to enable a clear pass-through audio experience. FIG.3is a diagram illustrating a potential safety hazard detected via application of a transparent mode by a vehicle in accordance with various aspects of the techniques described in this disclosure. In the example ofFIG.3, a system300is shown that includes vehicles310A-310C. Vehicle310A may represent an example of vehicle100shown in the example ofFIG.1and/or vehicle200shown in the examples ofFIGS.2A and2B, which may be configured to perform various aspects of the transparency mode techniques described in this disclosure. System300depicts an intersection in which vehicle310C is traveling from left to right at some speed (e.g., 25 miles per hour (MPH), 30 MPH, 45 MPH, 55 MPH, etc.). By virtue of traveling at such a speed, vehicle310C may produce noise, e.g., road noise, wind noise, engine noise (for internal combustion engines), simulated engine noise (for electric vehicles), etc. As such, vehicle310C may represent another vehicle in a sound scene in which vehicle310A operates. Vehicle310A may capture via microphones (e.g., microphones228) audio data representative of the sound scene, performing beamforming in the manner described above to extract object audio data representative of vehicle310C. Such beamforming is illustrated by lobes340. As further shown in the example ofFIG.3, vehicle310B may at least partially occlude vehicle310C from the view of vehicle310A, presenting a significant safety hazard (e.g., a potential accident) should vehicle310A pull into the intersection in front of vehicle310C. Further, even should vehicle310A incorporate safety equipment, such as cameras, light detection and ranging (LIDAR), and/or radio detection and ranging (RADAR), such safety equipment may be unable to detect vehicle310C due to vehicle310B occluding it. However, given that sound (from vehicle310C) is different from LIDAR/RADAR because sound has diffractive and diffuse properties over space (e.g., sound can be heard behind walls and occlusions), transparency module115of vehicle310A may detect and extract object audio data for vehicle310C in the sound scene (using beamforming) and thereby reproduce the audio object (and/or an alert from the same direction) via internal loudspeakers to allow the operator of vehicle310A to be aware of fast approaching vehicle310C. As such, transparency module115may improve safety while operating vehicle310A as the operator of vehicle310A may take appropriate action (e.g., brake to avoid entering the intersection) to prevent an accident with vehicle310C. Furthermore, in some instances, interface device122(described above with respect to the example ofFIG.1) may provide for vehicle to vehicle (V2V) communication and/or vehicle to everything (V2X) communication to transmit object audio data to nearby cars or other computing devices (such as smartphones) that do not natively support such a transparency mode. Likewise, other vehicles, such as vehicle310B, may capture audio data and extract, via beamforming, object audio data that can be sent via V2V or V2X communication to a different vehicle, such as vehicle310A.
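As a rough illustration of what such shared object audio data might carry (the samples plus the location or direction the disclosure says the data may specify), consider the following hypothetical container; it is not a defined V2V/V2X payload format.

```python
# Hypothetical container for object audio data shared over V2V/V2X; not a
# defined payload format from the disclosure.
from dataclasses import dataclass
from typing import Sequence

@dataclass
class ObjectAudioMessage:
    source_vehicle_id: str      # vehicle that extracted the audio object
    direction_deg: float        # object direction relative to the sender
    distance_m: float           # estimated distance to the object, if known
    sample_rate_hz: int
    samples: Sequence[float]    # extracted object audio data

# A receiver would re-localize the object relative to itself (using the
# sender's position from the link) before rendering it in-cabin.
```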
Vehicle310B may provide such object audio data to vehicle310A regarding vehicle310C as vehicle310B may provide object audio data having a better signal to noise ratio (SNR) given that there is no occlusion between vehicle310B and vehicle310C. In addition, vehicles located further away from vehicle310A may provide object audio data via V2V or V2X communication to facilitate better awareness of upcoming audio objects in distant sound scenes (e.g., sirens, accidents, traffic, etc.). As such, vehicle310A may obtain, from a different vehicle, such as vehicle310B, object audio data representative of an audio object in a sound scene external to vehicle310B. The object audio data may specify a location or direction in which the audio object resides in the sound scene relative to vehicle310B. Vehicle310A may pass the audio object data to audio circuitry114, which may reproduce, based on the audio object data, the audio object, and mix the reproduced audio object based on the location specified in the audio object data to accurately reproduce the location of the audio object in the sound scene relative to vehicle310A. FIG.4is a flowchart illustrating example operation of the vehicle shown in the example ofFIG.1in performing various aspects of the transparency mode techniques described in this disclosure. As described above, vehicle100may include microphones128externally disposed around a body of vehicle100, where such microphones128capture audio signals (or, in other words, audio data) representative of a sound scene external to vehicle100(400). Processing circuitry112may receive such audio data from microphones128and provide the audio data to audio circuitry114. Audio circuitry114may, responsive to receiving the audio data, invoke a transparency module115. In some instances, transparency module115may optionally (as denoted by the dashed lines around box402) also invoke cameras124to provide video data representative of the visual scene external to vehicle100. Cameras124and/or processing circuitry112may perform object detection with respect to the video data to identify a location of the audio object in the visual scene external to vehicle100(404). Processing circuitry112may provide the identified location/direction to audio circuitry114, which may pass such location/direction to transparency module115. Transparency module115may then perform beamforming (possibly based on the location/direction of the audio object) with respect to the audio data to obtain the object audio data representative of the audio object in the sound scene external to vehicle100(406). In some instances, transparency module115may perform such beamforming concurrently in a number of different spatial directions to extract object audio data for multiple different audio objects of interest in the sound scene. In this respect, transparency module115may perform beamforming in multiple different directions with respect to the audio data to obtain two or more object audio data representative of two or more audio objects in the sound scene external to the vehicle. Transparency module115may then interface with loudspeakers126to reproduce, based on the object audio data, the audio object (408). When existing audio data is being reproduced, such as audio data from processing circuitry112in support of entertainment/infotainment audio content being reproduced for consumption by occupants of vehicle100, transparency module115may mix the reproduced audio object with such other audio content.
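Tying the flowchart steps together, the following Python sketch mirrors the capture (400), detect (404), beamform (406), and reproduce (408) loop at a high level; all function names and array shapes are placeholders assumed for illustration, not the disclosure's API.

```python
# High-level sketch mirroring FIG. 4: capture (400), detect (404),
# beamform (406), reproduce (408). Names and shapes are placeholders.
import numpy as np

def transparency_mode_step(mic_frames, video_frame, detect, beamform, render,
                           existing_feeds):
    """Process one frame of the transparent audio mode.

    mic_frames:     (num_mics, K) external microphone samples (400).
    video_frame:    camera data for optional object detection (402/404).
    detect:         callable -> iterable of object directions (404).
    beamform:       callable (mic_frames, direction) -> (K,) object audio (406).
    render:         callable (object_audio, direction) -> (S, K) feeds (408).
    existing_feeds: (S, K) infotainment audio the objects are mixed into.
    """
    out = existing_feeds.copy()
    for direction in detect(video_frame):
        object_audio = beamform(mic_frames, direction)
        out += render(object_audio, direction)  # mix object into cabin feeds
    return out
```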
In this way, various aspects of the techniques may enable the following examples. Example 1. A method comprising: capturing, by one or more microphones, audio data representative of a sound scene external to a vehicle; performing, by one or more processors, beamforming with respect to the audio data to obtain object audio data representative of an audio object in the sound scene external to the vehicle; and reproducing, by one or more speakers included within the vehicle and based on the object audio data, the audio object in the sound scene external to the vehicle. Example 2. The method of example 1, wherein performing beamforming comprises performing beamforming in multiple different directions with respect to the audio data to obtain two or more object audio data representative of two or more audio objects in the sound scene external to the vehicle. Example 3. The method of any combination of examples 1 and 2, wherein the vehicle is a first vehicle, and wherein the object audio data is representative of a second vehicle at least partially occluded from view by a driver of the first vehicle. Example 4. The method of any combination of examples 1-3, wherein the object audio data is representative of one or more of a pedestrian, a bicyclist, and another vehicle. Example 5. The method of any combination of examples 1-4, further comprising: capturing, by a camera, video data representative of a visual scene external to the vehicle, performing object detection with respect to the video data to identify a location of the audio object in the visual scene external to the vehicle, wherein performing beamforming comprises performing, based on the location of the audio object, beamforming with respect to the audio data to obtain the object audio data representative of the audio object in the sound scene external to the vehicle. Example 6. The method of any combination of examples 1-5, wherein the one or more microphones comprise a first microphone and a second microphone, each of the first and second microphones located in different positions on a body of the vehicle, wherein capturing the audio data comprises: capturing, by the first microphone, first audio data representative of the sound scene external to the vehicle; and capturing, by the second microphone, second audio data representative of the sound scene external to the vehicle, and wherein performing beamforming comprises performing a weighted sum and delay algorithm with respect to the first audio data and the second audio data to obtain the object audio data representative of the audio object in the sound scene external to the vehicle. Example 7. The method of any combination of examples 1-6, wherein performing beamforming comprises performing beamforming with respect to the audio data to obtain the object audio data representative of only the audio object in the sound scene and exclude any other object audio data representative of different audio objects in the sound scene at different locations, and wherein reproducing the audio object comprises reproducing, by the one or more speakers included within the vehicle and based on the object audio data, only the audio object in the sound scene. Example 8. 
The method of any combination of examples 1-7, wherein reproducing the audio object comprises: selecting a subset of the one or more speakers that are capable of reproducing the audio object in a direction in which the audio object is relative to the vehicle; and reproducing, by the subset of the one or more speakers and based on the object audio data, the audio object. Example 9. The method of any combination of examples 1-8, wherein a number of the one or more microphones is different than a number of the one or more speakers. Example 10. The method of any combination of examples 1-9, wherein the vehicle comprises a first vehicle, wherein the object audio data comprises first object audio data representative of a first audio object in a first sound scene external to the first vehicle, and wherein the method further comprises: obtaining, from a second vehicle, second object audio data representative of a second audio object in a second sound scene external to the second vehicle; and reproducing, by the one or more speakers included within the first vehicle and based on the second object audio data, the second audio object in the second sound scene external to the second vehicle. Example 11. A device comprising: one or more microphones configured to capture audio data representative of a sound scene external to a vehicle; and one or more processors configured to: perform beamforming with respect to the audio data to obtain object audio data representative of an audio object in the sound scene external to the vehicle; and reproduce, by interfacing with one or more speakers included within the vehicle and based on the object audio data, the audio object in the sound scene external to the vehicle. Example 12. The device of example 11, wherein the one or more processors are, when configured to perform beamforming, configured to perform beamforming in multiple different directions with respect to the audio data to obtain two or more object audio data representative of two or more audio objects in the sound scene external to the vehicle. Example 13. The device of any combination of examples 11 and 12, wherein the vehicle is a first vehicle, and wherein the object audio data is representative of a second vehicle at least partially occluded from view by a driver of the first vehicle. Example 14. The device of any combination of examples 11-13, wherein the object audio data is representative of one or more of a pedestrian, a bicyclist, and another vehicle. Example 15. The device of any combination of examples 11-14, further comprising a camera configured to capture video data representative of a visual scene external to the vehicle, wherein the one or more processors are further configured to perform object detection with respect to the video data to identify a location of the audio object in the visual scene external to the vehicle, wherein the one or more processors are, when configured to perform beamforming, configured to perform, based on the location of the audio object, beamforming with respect to the audio data to obtain the object audio data representative of the audio object in the sound scene external to the vehicle. Example 16.
The device of any combination of examples 11-15, wherein the one or more microphones comprise a first microphone and a second microphone, each of the first and second microphones located in different positions on a body of the vehicle, wherein the first microphone is, when configured to capture the audio data, configured to capture first audio data representative of the sound scene external to the vehicle; and wherein the second microphone is, when configured to capture the audio data, configured to capture second audio data representative of the sound scene external to the vehicle, and wherein the one or more processors are, when configured to perform beamforming, configured to perform a weighted sum and delay algorithm with respect to the first audio data and the second audio data to obtain the object audio data representative of the audio object in the sound scene external to the vehicle. Example 17. The device of any combination of examples 11-16, wherein the one or more processors are, when configured to perform beamforming, configured to perform beamforming with respect to the audio data to obtain the object audio data representative of only the audio object in the sound scene and exclude any other object audio data representative of different audio objects in the sound scene at different locations, and wherein the one or more processors are, when configured to reproduce the audio object, configured to reproduce, by interfacing with the one or more speakers included within the vehicle and based on the object audio data, only the audio object in the sound scene. Example 18. The device of any combination of examples 11-17, wherein the one or more processors are, when configured to reproduce the audio object, configured to: select a subset of the one or more speakers that are capable of reproducing the audio object in a direction in which the audio object is relative to the vehicle; and reproduce, by the subset of the one or more speakers and based on the object audio data, the audio object. Example 19. The device of any combination of examples 11-18, wherein the vehicle comprises a first vehicle, wherein the object audio data comprises first object audio data representative of a first audio object in a first sound scene external to the first vehicle, and wherein the one or more processors are further configured to: obtain, from a second vehicle, second object audio data representative of a second audio object in a second sound scene external to the second vehicle; and reproduce, by interfacing with the one or more speakers included within the first vehicle and based on the second object audio data, the second audio object in the second sound scene external to the second vehicle. Example 20. A non-transitory computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors to: obtain audio data representative of a sound scene external to a vehicle; perform beamforming with respect to the audio data to obtain object audio data representative of an audio object in the sound scene external to the vehicle; and reproduce, by interfacing with one or more speakers included within the vehicle and based on the object audio data, the audio object in the sound scene external to the vehicle. In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof.
If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium. By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements. The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set).
Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware. Various embodiments of the invention have been described. These and other embodiments are within the scope of the following claims.
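The weighted sum and delay operation recited in Examples 6 and 16 corresponds to the classic delay-and-sum beamformer. The following Python fragment is a minimal sketch of that general technique, not the claimed implementation: the far-field plane-wave geometry, linear array layout, uniform weights, and the sample rate, spacing, and steering angle in the usage note are all assumptions introduced here for illustration.

import numpy as np

def delay_and_sum(mic_signals, mic_positions_m, steer_angle_rad,
                  fs=48000, c=343.0, weights=None):
    # mic_signals: (num_mics, num_samples) array of simultaneous captures.
    # mic_positions_m: element positions along a line array, in metres.
    # steer_angle_rad: look direction, measured from broadside.
    num_mics, num_samples = mic_signals.shape
    if weights is None:
        weights = np.full(num_mics, 1.0 / num_mics)  # uniform weighting
    t = np.arange(num_samples) / fs
    out = np.zeros(num_samples)
    for m in range(num_mics):
        # Far-field plane-wave delay for this element in the look direction.
        tau = mic_positions_m[m] * np.sin(steer_angle_rad) / c
        # Fractional-sample delay via linear interpolation, then weighted sum.
        out += weights[m] * np.interp(t - tau, t, mic_signals[m])
    return out

For example, with two microphones 0.5 m apart on the vehicle body and a siren arriving 30 degrees off broadside, delay_and_sum(signals, np.array([0.0, 0.5]), np.radians(30.0)) coherently sums the siren component while sounds from other directions add incoherently and are attenuated.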
48,105
11943582
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS OF THE INVENTION In this description, the directional prepositions of up, upwardly, down, downwardly, front, back, top, upper, bottom, lower, left, right and other such terms refer to the device as it is oriented and appears in the drawings and are used for convenience only. They are not intended to be limiting or to imply that the device has to be used or positioned in any particular orientation. Now referring to drawings inFIGS.1-13, wherein similar components are identified by like reference numerals, there is seen inFIG.1an overhead or plan view of the device10. As shown, the device10includes a housing12configured to hold a sliding sound deflector18having a distal edge21, between the stowed and deployed positions noted herein. An attachment surface11of the housing12is adapted for engagement to a rear side of a smartphone16. By engagement to a rear side of a smartphone16is meant herein an attachment to a rear surface15of a smartphone16or to the rear surface of a phone case engaged therewith. This rear surface15is located on the opposite side of the smartphone16from the video display29(as shown inFIG.3). Currently, a preferred mode of such an attachment to the rear side of the smartphone16employs an adhesive14, such as a peel and stick type of adhesive surface, which is adhered to either the rear surface15of the smartphone16, or the phone case engaged to the smartphone16. Within a cavity20of the housing12is slidably engaged the sound deflector18which is translatable from a retracted or stowed position, such as inFIG.4, to a deployed position such as is shown inFIG.2. In all modes of the device10herein, while in the deployed position, a projecting portion25of the deflector18extends between a bottom edge17of the smartphone16, and a distal edge21of the deflector18. The deflector18has a body22which has a substantially planar first surface26. This first surface26will function even without the parabolic or curved recess24noted below, which recess is configured to help focus sound on a microphone28as noted herein. In a particularly preferred mode of the device10herein, a curved area of the surface26, or a curved or parabolic recess24, is formed into or by curving the first surface26, at or adjacent to a distal edge21of the body22of the deflector. By adjacent to the distal edge21is meant that the curved area starts to curve at the distal edge21and back toward the cavity20, or is formed into the surface26as a curved area having an edge within ¼ inch to 1 inch of the distal edge21. The device10as shown inFIGS.2-3is in the deployed position with the deflector18in the sliding engagement with the housing12which is adhered to or otherwise engaged to the rear surface15of a smartphone16. In this deployed position, the reflective curved area30shown as the parabolic recess24is positioned such that it extends past both sides of a bottom edge (FIG.6) of the smartphone16to which it is operatively engaged. In this deployed position the curved area such as the parabolic surface24on the sound deflector18will serve to help reflect or redirect the sound waves of the voice of the user of the smartphone16toward a microphone28. Such microphones28are conventionally positioned on or adjacent the bottom edge17of the smartphone16and may be visible only as a small opening communicating to the microphone. Thus, by microphone28as used herein is meant a microphone positioned within the phone itself, which is in communication with an opening in the phone or phone case.
As noted, such microphones28are conventionally positioned on the bottom edge17or immediately adjacent thereto on the rear surface15. By adjacent is meant within one inch or less of the bottom edge17. As shown, in the preferred mode of the device10, the sound deflector18has a reflective curved area30shown as a parabolic recess24, formed by or into the first surface26of the body22of the sound deflector18. Of course, the reflective curved area30may also be formed by a formation of a curve of the surface26itself, by curving a portion of the body22of the deflector18such as inFIGS.11-13. As such, as used herein, by reflective curved area30is meant either a curved area formed into the surface26of the body22of the deflector18, or a curving of the body22itself, to form an area of the surface26into a reflective curved area30. Either type of reflective curved area30will be positioned with the deflector18in the deployed position, to reflect the sound waves generated by the voice of a user of the smartphone16, back toward the microphone28to increase the dB level of the sound waves of the voice of the user which communicate to the microphone28. This reflective curved area30thus forms a reflector and lens which upon contact with such sound waves will reflect them and focus them on the microphone28area of the smartphone16. The reflective curved area30shown as a curved round or elongated recess or curved parabolic recess24, so situated during use of the device10in the deployed position on the smartphone16, thus serves to gather the sound waves of the voice of the user talking, and act as a reflective lens to focus them in amplified fashion, upon the microphone28area of the smartphone16. This reflection and focusing action significantly enhances the ability to use the smartphone16. This is especially true in a noisy environment or when the user angles the bottom edge17away from their face, which will normally cause their voice sound waves not to be communicated to the microphone28, or to be very poorly communicated thereto. As noted, the device10has a stowed configuration which is shown for example inFIGS.4-5. As can be seen, the sound deflector18is translated back within the cavity20formed within the housing12and the distal edge21of the deflector18is positioned behind the bottom edge17of the smartphone16, or barely projecting therefrom. As shown inFIG.6, the device10may be formed with differently shaped reflective curved areas30, such as parabolic recesses24which are round as inFIG.1, or more oval in circumference as shown inFIG.6. Additionally shown inFIG.6are registration notches32which can interface with a deflecting pin located within the cavity20, to hold the sound deflector18within the cavity20. If included, the pin can also hold the length of protrusion of the sound deflector18from the cavity20. This engagement of pin to notches32will removably maintain the deflector18in a deployed position to focus the voice of the user on the microphone28. The plurality of notches32sequentially located along one or both edges of the sound deflector18body22, will thus allow the user to pull it from the housing12and adjust it for the best focusing and voice transmission by the smartphone. InFIG.7is shown a mode of the device10having a housing12which is U-shaped such as inFIG.11, and has an open area13in between parallel side rails19which are adapted to slidably engage with the opposing sides of the body22of the sound deflector18.
The adhesive14or other means for engagement to a phone or phone case can be positioned on the side rails19, instead of the central area of the housing12as shown inFIG.12, or as inFIG.1on the surface of the housing12which will contact against the smartphone16or phone case. This configuration allows for a deeper depth of the curved reflective area30such as a parabolic recess24which extends from a rear surface of the body22of the deflector18, to project through the open area13of the housing12. FIG.8depicts the device10as inFIG.7, showing the projecting side of the parabolic recess24formed into the body22of the deflector18, projecting from the housing12through the open area13. The housing12, as noted, is engaged to the rear of a smartphone or case, using adhesive14or other means to hold the housing12operatively engaged to the rear of the smartphone16. FIG.9shows that the reflective curved area30such as formed by a parabolic recess24may be formed in an elongated configuration and is not limited to just the circular and oval and other configurations noted above. The housing12can have a width such as substantially 6 centimeters and a length such as substantially 8.3 centimeters, which sizes it to adhere to the rear of a smartphone. As shown, the sound deflector18would have a width adapted for a sliding engagement into and out of the cavity20of the housing12, and the width of the parabolic recess24along the long axis, such as substantially 3 centimeters, would be such that it focuses the voice to a microphone. InFIG.10is depicted the device10, configured with a wider configuration than that ofFIGS.1-9, showing the housing12with the deflector18slidably engaged therein. An elongated reflective curved area30of the first surface26is formed as an elongated parabolic recess24which has an axis which runs parallel to the longer axis of the housing12. FIG.11depicts an exploded view of another particularly preferred mode of the device10which is adapted for positioning in an adhesive or other engagement, upon a rear surface of a smartphone or smartphone case engaged therewith. As shown, the housing12is substantially U-shaped and has a pair of side rails19on opposing sides of an open area13. A first recess33is formed into a first of the side rails19and a second recess34is formed into a second of the side rails19. The attachment surface11of the housing12, which is the surface which mates to the rear of the smartphone16or its case, has adhesive14positioned thereon (FIG.12) which may be employed to engage the device10to the smartphone16of choice. Additionally shown inFIG.11is the deflector18, which has a body22configured to slidably engage with the housing12. This sliding engagement is formed between a first shoulder36engaged within the first recess33and the second shoulder38engaged within the second recess34. A contact of the shoulders36and38with a recess endwall40formed at the open end of the housing12forms a stop or limiter to the sliding of the deflector18toward the deployed position, thereby preventing detachment once the housing12is engaged to the smartphone16or case. The device herein preferably has position locators which will hold the position of the deflector18either in the stowed position collapsed into the housing12, or in the deployed position ofFIG.12, wherein it has been slid to a position locating the curved endwall31a distance from the bottom edge17of a smartphone16, with a projecting portion25of the deflector18between the bottom edge17of the smartphone16and the endwall31.
By position locator herein is meant a first position locator provided by a removable engagement between the housing12and the deflector18which will maintain the deflector18in the retracted or stowed position until pulled therefrom by force, and a second position locator in the form of a second removable engagement between the housing12and the deflector18which will hold the deflector18in the deployed position, until forced to slide toward the retracted or stowed position by the user pushing thereon. A current preferred configuration of such position locators is shown inFIG.11which depicts cavities42which may be formed into the surface of one or both of the first recess33and second recess34. These cavities42are shaped complementary to one or more projections44extending from a surface of one or both shoulders36and38. Because the body of the deflector18is formed of flexible material such as a polymeric material, the shoulders36and38will deflect slightly to disengage each projection44from a mating cavity42under the force applied when the deflector18is pulled toward the deployed position, to thereby unlock the engagement therebetween. Thus, one or more projections44engaged in a cavity42closest to the housing12on an end opposite the bottom edge17of the smartphone16provide a first such position locator and will hold the deflector18in the stowed position. One or more projections44engaged into a second cavity43formed into the housing12, closest to the bottom edge17of the smartphone16when mounted, will form a second such position locator to maintain the deflector in the deployed position, until force from the user pushing on it slides it back to the retracted or stowed position. Of course other position locators as would occur to those skilled in the art upon reviewing this specification may be employed, such as a ratcheting engagement of the shoulders36and38with the housing12such as by forming teeth on side edges of the shoulders36and38which slide upon mating teeth formed on the sides of the first and second recesses33and34only under force in one direction or the other. FIG.12shows the device10as inFIG.11assembled into the sliding engagement. As shown, the deflector18is pulled to the deployed position and held there by the second position locator noted above. This is the same configuration of the device10shown inFIG.13, wherein the dotted line rectangle is provided to depict the smartphone16from the front side thereof on which the video display29is located, and showing a view through the smartphone16of the device10engaged to the rear surface thereof as noted above. InFIG.13is shown the assembled device10ofFIG.12, in an as-used position engaged to the rear surface of a smartphone16. As shown, the deflector18body22has been translated to a deployed position, and the shoulders36and38contact the recess endwall40to provide a stop or sliding limiter to the translation of the deflector18toward the deployed position. As shown, the reflective curved area30is formed by a curved section of the body22of the deflector18, at an intersection with the endwall31which projects above the first surface26of the deflector18to a distal end21of both the deflector18and the endwall31. This reflective curved area30lies on the first surface26at the intersection with the endwall31, which endwall is depicted as projecting substantially perpendicular to the planar first surface26.
However, the upward projection of the endwall31from the planar first surface26may run at an angle between twenty and ninety-five degrees relative to the planar first surface26, with a current favored angle being between 80 and 95 degrees. The elongated curved surface30extends at the intersection of the endwall31with the first surface26for substantially the entire width of the first surface26in-between the two opposing side edges49of the deflector18. This elongated curved surface30forms an operative reflective curved area30to capture, reflect, and focus sound waves to the microphone28. The reflections of sound from this entire elongated curved surface30extending between the two side edges49have been shown to provide a significant enhancement in sound from the voice of the user reaching the microphone of the smartphone16, and to concurrently block ambient noise from reaching the microphone. Such has been found in experimentation to yield both noise canceling and an increase in volume of 10+ dB. Thus, the reflective curved area30will reflect sound waves contacting it during use, back toward the microphone28. Further, because the microphone28may not be centered on or adjacent the bottom edge17of many smartphones16, the wide curved area30ensures that sound will be enhanced and reflected to microphones28located off center. As shown inFIG.14, the reflective curved area30extending across the width of the deflector18curves in a radius R which is currently preferred to be in a range substantially between 0.35 inches and 0.5 inches. Currently a radius substantially between 0.23 inches and 0.27 inches has been shown to maximize the increase in volume and decrease in noise noted above and as such is preferred. A notch46may communicate through the endwall31from the distal edge21. The notch46will provide an alternate path for sound waves to the microphone28when the deflector18is in the stowed position. This notch46may also define a passage for communication of a charging cord for engagement into a charging port on the end of the phone. This configuration also works well when the user tends to hold the phone at different angles during a conversation, since the reflective curved area30extends the entire width of the first surface26of the deflector18extending to both side edges49. As noted,FIG.15shows an end view of the sliding deflector18ofFIGS.11-14and shows the endwall31having a height H running from the first surface26of the deflector18to the distal edge21of the deflector18. This height H can vary; currently it is preferred that the height H is between 0.4 inches and 0.7 inches, which experimentation has shown to allow the device to enhance sound and reduce noise on a wide cross section of available smartphones16. Currently particularly favored is a height H of substantially 0.47 inches. Also shown inFIG.15is the notch46which extends from the distal edge21into the endwall31. As noted, this notch46can be located to allow passage of a charge cord to be plugged into the phone, and it also provides a path for sound to reach the microphone of the smartphone16when the deflector18is in the retracted or stowed position.
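As a rough check on the dimensions recited above, the focusing behaviour of the reflective curved area30can be approximated with the standard mirror relations; these are general acoustics/optics approximations, not formulas stated in this disclosure, and they treat the recess as a shallow spherical or parabolic section:

f \approx \frac{R}{2}, \qquad y = \frac{x^{2}}{4f}

With the favored radius of substantially R = 0.25 inches, this places the focal point at roughly f = 0.125 inches in front of the reflecting surface, i.e., reflected voice energy converges a fraction of an inch from the curved area30, in the vicinity of a microphone28opening near the bottom edge17.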
While all of the fundamental characteristics and features of the sound enhancing invention for a smartphone have been shown and described herein, with reference to particular embodiments thereof, a latitude of modification, various changes and substitutions are intended in the foregoing disclosure and it will be apparent that in some instances, some features of the invention may be employed without a corresponding use of other features without departing from the scope of the invention as set forth. It should also be understood that various substitutions, modifications, and variations may be made by those skilled in the art without departing from the spirit or scope of the invention. Consequently, all such modifications and variations and substitutions are considered included within the scope of the invention as defined by the following claims.
17,980
11943583
DETAILED DESCRIPTION To provide a better understanding of the present invention to those skilled in the art, preferred embodiments and typical material or range parameters for key components will be detailed in the following description. These preferred embodiments of the present invention are illustrated in the accompanying drawings with numbered elements to elaborate on the contents and effects to be achieved. It should be noted that the drawings are simplified schematics, and the material and parameter ranges of key components are illustrative based on the present day technology, and therefore show only the components and combinations associated with the present invention, so as to provide a clearer description for the basic structure, implementing or operation method of the present invention. The components would be more complex in reality and the ranges of parameters or material used may evolve as technology progresses in the future. In addition, for ease of explanation, the components shown in the drawings may not represent their actual number, shape, and dimensions; details may be adjusted according to design requirements. In the following description and in the claims, the terms "include", "comprise" and "have" are used in an open-ended fashion, and thus should be interpreted to mean "include, but not limited to . . . ". Thus, when the terms "include", "comprise" and/or "have" are used in the description of the present invention, the corresponding features, areas, steps, operations and/or components are indicated to exist, but the existence of one or a plurality of other features, areas, steps, operations and/or components is not excluded. In the following description and in the claims, when "an A1 component is formed by/of B1", B1 exists in the formation of the A1 component or B1 is used in the formation of the A1 component, and the existence and use of one or a plurality of other features, areas, steps, operations and/or components are not excluded in the formation of the A1 component. In the following description and in the claims, the term "substantially" generally means that a small deviation may or may not exist. For instance, the terms "substantially parallel" and "substantially along" mean that an angle between two components may be less than or equal to a certain degree threshold, e.g., 10 degrees, 5 degrees, 3 degrees or 1 degree. For instance, the term "substantially aligned" means that a deviation between two components may be less than or equal to a certain difference threshold, e.g., 2 μm or 1 μm. For instance, the term "substantially the same" means that a deviation is within, e.g., 10% of a given value or range, or within 5%, 3%, 2%, 1%, or 0.5% of a given value or range. In the description and following claims, the term "horizontal direction" generally means a direction parallel to a horizontal plane, the term "horizontal plane" generally means a plane parallel to a direction X and a direction Y in the drawings (i.e., the direction X and the direction Y of the present invention may be considered as the horizontal directions), the term "vertical direction" and the term "top-view direction" generally mean a direction parallel to a direction Z and perpendicular to the horizontal direction in the drawings, and the direction X, the direction Y and the direction Z are perpendicular to each other. In the description and following claims, the term "top view" generally means a viewing result viewed along the vertical direction.
In the description and following claims, the term "side view" generally means a viewing result viewed along the horizontal direction. In the description and following claims, the term "cross-sectional view" generally means a viewing result, viewed along the horizontal direction, of a structure cut along the vertical direction. Although terms such as first, second, third, etc., may be used to describe diverse constituent elements, such constituent elements are not limited by the terms. The terms are used only to discriminate a constituent element from other constituent elements in the specification, and the terms do not relate to the sequence of manufacture if the specification does not describe it. The claims may not use the same terms, but instead may use the terms first, second, third, etc. with respect to the order in which an element is claimed. Accordingly, in the following description, a first constituent element may be a second constituent element in a claim. It should be noted that the technical features in different embodiments described in the following can be replaced, recombined, or mixed with one another to constitute another embodiment without departing from the spirit of the present invention. In the present invention, an acoustic transducer is configured to perform an acoustic transformation, wherein the acoustic transducer may be a sound producing component, a speaker, a micro speaker or other suitable device, such that the acoustic transformation of the acoustic transducer may convert signals (e.g. electric signals) into an acoustic wave. In the present invention, a frequency range of the acoustic wave produced by the acoustic transducer may be designed based on requirement(s). For instance, the acoustic transducer may produce the acoustic wave with the frequency range covering the whole human audible frequency range (e.g., from 20 Hz to 20 kHz), but not limited thereto. For instance, the acoustic transducer may produce the acoustic wave with the frequency higher than a specific frequency, such that this acoustic transducer may be a high frequency sound producing unit (tweeter), but not limited thereto. For instance, the acoustic transducer may produce the acoustic wave with the frequency lower than a specific frequency, such that this acoustic transducer may be a low frequency sound producing unit (woofer), but not limited thereto. Note that the specific frequency may be a value ranging from 800 Hz to 4 kHz (e.g., 1.44 kHz), but not limited thereto. The details of the high frequency sound producing unit may be found in U.S. application Ser. No. 17/153,849 or Ser. No. 17/720,333 filed by Applicant, and are not narrated herein for brevity. Referring toFIG.1,FIG.1is a schematic diagram of a cross-sectional view illustrating a core part of a speaker system according to an embodiment of the present invention. As shown inFIG.1, the speaker system100includes a first acoustic transducer110configured to generate a first acoustic wave W1, wherein a first frequency range of the first acoustic wave W1may be designed based on requirement(s). For example, the first acoustic transducer110may be a high frequency sound producing unit (which may function as a tweeter), such that the first frequency range of the first acoustic wave W1may be higher than a specific frequency, but not limited thereto.
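To make the tweeter/woofer split around such a specific frequency concrete, the following Python sketch shows a generic two-way crossover built from Butterworth filters. This is an illustration of the conventional technique only, not a filter disclosed in this application; the fourth-order design, the 48 kHz sample rate, and the reuse of the 1.44 kHz example value are assumptions.

import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, fs=48000, fc=1440.0, order=4):
    # Complementary low-pass (woofer) and high-pass (tweeter) branches at fc Hz.
    sos_lo = butter(order, fc, btype='lowpass', fs=fs, output='sos')
    sos_hi = butter(order, fc, btype='highpass', fs=fs, output='sos')
    return sosfilt(sos_lo, x), sosfilt(sos_hi, x)

# Example: a 200 Hz tone routes mostly to the woofer branch,
# a 5 kHz tone mostly to the tweeter branch.
fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 5000 * t)
low_band, high_band = split_bands(x, fs=fs)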
In some embodiments, the first acoustic transducer110may be a MEMS (Micro Electro Mechanical Systems) fabricated device, such as a MEMS speaker including a MEMS structure (e.g., a unit having an anchor structure and a membrane anchored on the anchor structure, wherein the membrane is actuated to generate the first acoustic wave W1). For example, the first acoustic transducer110which is the MEMS speaker may be included in a MEMS chip, such that the first acoustic transducer110may be formed by a semiconductor process, but not limited thereto. For example, the first acoustic transducer110may include silicon (e.g., single crystalline silicon or poly-crystalline silicon), silicon compound (e.g., silicon carbide, silicon oxide), germanium, germanium compound, gallium, gallium compound (e.g., gallium nitride or gallium arsenide) or a combination thereof, but not limited thereto. As shown inFIG.1, the speaker system100includes a spreading structure HS disposed on/over the first acoustic transducer110, wherein the spreading structure HS may have a sound inlet OP1and a sound outlet OP2, a sound passage SP exists between the sound inlet OP1and the sound outlet OP2, the first acoustic transducer110corresponds to the sound inlet OP1, and the first acoustic wave W1generated by the first acoustic transducer110passes through the sound passage SP (i.e., the first acoustic wave W1passes through the spreading structure HS). In some embodiments, as shown inFIG.1, a size of the sound inlet OP1may be less than a size of the sound outlet OP2. A shape of the sound inlet OP1and a shape of the sound outlet OP2may be any suitable shape and be designed based on requirement(s). In some embodiments, the shape of the sound inlet OP1and the shape of the sound outlet OP2may be different. As shown inFIG.1, the sound passage SP of the spreading structure HS has a first (sound-passage) portion SP1, the first portion SP1is between the sound inlet OP1and the sound outlet OP2, and a first passage size of the first portion SP1is less than the size of the sound inlet OP1and the size of the sound outlet OP2. Note that the term "passage size" refers to a cross-sectional size/area of a corresponding/related portion of the sound passage SP. In the present invention, since the first passage size of the first portion SP1is less than the size of the sound inlet OP1and the size of the sound outlet OP2, after the first acoustic wave W1passes through the sound passage SP of the spreading structure HS, the sound pressure level (SPL) of the first acoustic wave W1is increased (i.e., the SPL of the first acoustic wave W1at the sound outlet OP2is greater than the SPL of the first acoustic wave W1at the sound inlet OP1). Moreover, a directionality of the first acoustic wave W1may be spread via the spreading structure HS, such that the first acoustic wave W1becomes less directional when the first acoustic wave W1propagates out of the sound outlet OP2. Also, since the size of the sound inlet OP1is less than the size of the sound outlet OP2, the increasing effect of the SPL of the first acoustic wave W1and the spreading effect of the first acoustic wave W1would be enhanced. Because of the design of the sound passage SP of the spreading structure HS, a length of the sound passage SP is increased. In order to decrease the size of the spreading structure HS, the spreading structure HS may be designed to make the sound passage SP have at least one curve part (e.g., the curve parts CP1and/or CP2shown inFIG.1).
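The SPL increase attributed to the narrowed passage is consistent with the low-frequency duct approximation in which acoustic volume velocity is conserved along the passage; the relation below is a standard textbook result, not an equation given in this disclosure. For passage cross-sections S_1 and S_2 carrying particle velocities u_1 and u_2:

U = u_1 S_1 = u_2 S_2 \quad \Longrightarrow \quad u_2 = u_1 \frac{S_1}{S_2}

so a reduced cross-section such as the first portion SP1(small S_2) raises the local particle velocity, and the flared expansion toward the larger sound outlet OP2then lets the wave radiate over a wider solid angle, in line with the increased SPL and reduced directionality described above.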
Thus, the design of the sound passage SP of the spreading structure HS can be achieved while minimizing the size of the spreading structure HS. TakingFIG.1as an example, the sound passage SP has two curve parts CP1and CP2so as to be N-shaped in the cross-sectional view, but not limited thereto. In the following, an example of the speaker system100is described in detail, but the speaker system100of the present invention is not limited to the following. Further referring toFIG.2toFIG.4,FIG.2is a schematic diagram illustrating a speaker system according to an embodiment of the present invention,FIG.3is a schematic diagram illustrating a center part of a speaker system according to an embodiment of the present invention, andFIG.4is a schematic diagram illustrating a center part of a sound spreading plate of a speaker system according to an embodiment of the present invention. As shown inFIG.1toFIG.4, the spreading structure HS of the speaker system100includes a sound spreading plate120and a cover130disposed on the sound spreading plate120. InFIG.1toFIG.4, the sound spreading plate120includes a body122having the aforementioned sound inlet OP1and the aforementioned sound outlet OP2of the spreading structure HS, and the cover130is disposed on/over the body122and covers the sound inlet OP1. Note thatFIG.1is a cross-sectional view of the first acoustic transducer110, the body122of the sound spreading plate120and the cover130shown inFIG.3. In this embodiment, the core part of the speaker system100shown inFIG.1is formed of the first acoustic transducer110, the body122of the sound spreading plate120and the cover130shown inFIG.3. In the present invention, the body122may be situated at any suitable position of the sound spreading plate120. In some embodiments, as shown inFIG.2toFIG.4, the body122may be situated at the center of the sound spreading plate120, such that the first acoustic transducer110may correspond to the center of the sound spreading plate120in the direction Z, but not limited thereto. As shown inFIG.1andFIG.4, the body122has an inner channel structure122iand an outer expanding structure122aconnected to and surrounding the inner channel structure122i, wherein the inner channel structure122iis covered by the cover130. The outer expanding structure122ahas a bottom portion122aband a sidewall portion122awconnected to each other, the bottom portion122abis connected to the sidewall portion122awand the inner channel structure122i, and the sound outlet OP2of the body122(i.e., the spreading structure HS) is surrounded by the top of the sidewall portion122aw(i.e., the sidewall portion122awforms the sound outlet OP2). In some embodiments, inFIG.1andFIG.4, the outer expanding structure122amay be a bowl-shaped structure, and the shape of the sound outlet OP2may be a rectangle with some chamfers, but not limited thereto. InFIG.1andFIG.4, the outer expanding structure122amay be narrower at the bottom portion122aband wider at the top of the sidewall portion122aw, but not limited thereto.
InFIG.1andFIG.4, the inner channel structure122imay have a hollow structure (e.g., a tubular structure) and may be higher than the bottom portion122abof the outer expanding structure122ain the direction Z, and an inner channel CBI exists inside the inner channel structure122i(i.e., the inner channel CBI is covered by the cover130), wherein the inner channel CBI has a first end CBI_1and a second end CBI_2, the first end CBI_1is the sound inlet OP1of the body122(i.e., the spreading structure HS), and the second end CBI_2, which is higher than the bottom portion122abof the outer expanding structure122ain the direction Z, faces the cover130. For example, the shape of the sound inlet OP1(i.e., the first end CBI_1of the inner channel CBI) may be (substantially) a rectangle, and the shape of the second end CBI_2of the inner channel CBI may be a rectangle with chamfers, but not limited thereto. For example, inFIG.1, the top of the inner channel structure122iforming the second end CBI_2of the inner channel CBI may be between the sound inlet OP1(i.e., the first end CBI_1of the inner channel CBI) and the sound outlet OP2in the direction Z (i.e., the top of the inner channel structure122imay not be higher than the sound outlet OP2in the direction Z), but not limited thereto. For example, the inner channel structure122imay be situated at a center of the body122(in the top view), but not limited thereto. As shown inFIG.1toFIG.3, the cover130may have a top structure132and a side structure134connected to and surrounding the top structure132, wherein the top structure132may overlap the inner channel structure122iof the body122in the direction Z, and the side structure134surrounds the inner channel structure122iof the body122. For example, inFIG.1toFIG.3, the cover130may be disposed between the sound inlet OP1and the sound outlet OP2of the body122, such that the cover130may not be higher than the sound outlet OP2in the direction Z, and the cover130is surrounded by the outer expanding structure122aof the body122, but not limited thereto. As shown inFIG.1, the cover130is disposed over the inner channel structure122i, which means that, in the spreading structure HS, a space exists between the body122and the cover130, such that a portion of the sound passage SP (e.g., the curve part CP1) is formed between the body122and the cover130. InFIG.1, the sound passage SP has the first portion SP1formed between the inner channel structure122iof the body122and an inner surface of the cover130, wherein the first portion SP1has the first passage size less than the size of the sound inlet OP1and the size of the sound outlet OP2. InFIG.1, the sound passage SP further has a second (sound-passage) portion SP2and a third (sound-passage) portion SP3, and the first portion SP1is connected between the second portion SP2and the third portion SP3, wherein the second portion SP2is between the outer expanding structure122aof the body122and an outer surface of the cover130, and the third portion SP3is formed of the inner channel CBI of the inner channel structure122i(i.e., the third portion SP3is formed within the inner channel structure122i).
In some embodiments, inFIG.1, in the sound passage SP, the first passage size of the first portion SP1is less than a second passage size of the second portion SP2and a third passage size of the third portion SP3, such that the SPL of the first acoustic wave W1is increased and the first acoustic wave W1is spread to make the directionality of the first acoustic wave W1decreased after the first acoustic wave W1passes through the sound passage SP of the spreading structure HS. In some embodiments, the minimum passage size of the sound passage SP may exist in the first portion SP1. InFIG.1, in order to make the first portion SP1of the sound passage SP have the smaller first passage size, the top structure132of the cover130may be as close to the inner channel structure122ias possible, so as to enhance the increasing effect of the SPL of the first acoustic wave W1and the spreading effect of the first acoustic wave W1. Also, because of the existence of the cover130, the first portion SP1of the sound passage SP would have a suitable length, wherein the increasing effect of the SPL of the first acoustic wave W1and the spreading effect of the first acoustic wave W1are enhanced as the length of the first portion SP1of the sound passage SP is increased. Moreover, inFIG.1, since the first passage size of the first portion SP1is less than the third passage size of the third portion SP3in the sound passage SP, the first passage size of the first portion SP1is less than the size of the sound inlet OP1(i.e., the first end CBI_1of the inner channel CBI) and the size of the second end CBI_2of the inner channel CBI. In some embodiments, the size of the sound inlet OP1(i.e., the first end CBI_1of the inner channel CBI) may be greater than the size of the second end CBI_2of the inner channel CBI, such that the first acoustic wave W1would be compressed when passing through the inner channel CBI, so as to enhance the increasing effect of the SPL of the first acoustic wave W1and the spreading effect of the first acoustic wave W1. For example, inFIG.1, in the inner channel CBI, the cross-section size of the inner channel CBI may gradually decrease (narrow) from the first end CBI_1to the second end CBI_2. Furthermore, the top of the inner channel structure122iforming the second end CBI_2of the inner channel CBI may be designed to further affect the SPL of the first acoustic wave W1in a specific frequency range overlapping at least a portion of the first frequency range of the first acoustic wave W1, so as to enhance the clarity of the first acoustic wave W1in this specific frequency range. In some embodiments, since the first acoustic transducer110is the high frequency sound producing unit (tweeter) to make the first frequency range of the first acoustic wave W1higher than a specific frequency, the above specific frequency range may be higher than this specific frequency (e.g., the top of the inner channel structure122imay affect the SPL of the acoustic wave with mid and high frequencies). In some embodiments, the shape and the size of the top of the inner channel structure122i(or the shape and the size of the second end CBI_2of the inner channel CBI) are related to the above specific frequency range. For instance, the above specific frequency range may be higher as the length and/or the width of the second end CBI_2of the inner channel CBI is increased. On the other hand, as shown inFIG.1, the curve parts CP1and CP2of the sound passage SP may be caused by the cover130and the body122.
For example, inFIG.1, the curve part CP1connected between the first portion SP1and the third portion SP3may be caused by the top structure132of the cover130and the inner channel structure122iof the body122, and the curve part CP2connected between the first portion SP1and the second portion SP2may be caused by the side structure134of the cover130and the bottom portion122abof the outer expanding structure122aof the body122, but not limited thereto. The curve part CP1may be viewed as being formed by the cover130, which means that the curve part CP1is formed near/around the cover130and/or the curve part CP1is formed because of the cover130. Similarly, the curve part CP2may be viewed as being formed by the bottom portion122ab, which means that the curve part CP2is formed near/around the bottom portion122aband/or the curve part CP2is formed because of the bottom portion122ab. In addition, in the embodiment shown inFIG.1, the first acoustic wave W1propagates toward a first direction, e.g., toward a direction of −Z, through the first portion SP1, and the first acoustic wave W1propagates toward a second direction opposite to the first direction, e.g., toward a direction of +Z, through the second portion SP2and the third portion SP3. By exploiting the curve part(s), the size of the spreading structure HS may be reduced for a certain acoustic length corresponding to the sound passage SP. In other words, the inner channel CBI can be viewed as being formed within the inner channel structure122i, and the size of the inner channel CBI becomes gradually narrower from the sound inlet OP1toward the second end CBI_2of the inner channel CBI. The cover130is disposed over the inner channel structure122i, such that the curve part CP1of the sound passage SP is formed by the cover130. On the other hand, an outer channel, also known as the second (sound-passage) portion SP2, can be viewed as being formed between the cover130and the outer expanding structure122a. The sidewall portion122awhas a shape such that a size of the outer channel becomes gradually wider from a bottom of the outer expanding structure122atoward the sound outlet OP2. The curve part CP2of the sound passage SP is formed by a bottom portion122abof the outer expanding structure122a. Further referring toFIG.5,FIG.5is a schematic diagram of a cross-sectional view illustrating a center part of a speaker system according to an embodiment of the present invention, wherein the cross-sectional view of the structure shown inFIG.5may be taken along a cross-sectional line A-A′ inFIG.3. Compared withFIG.1,FIG.5further shows the surroundings of the core part of the speaker system100shown inFIG.1and another acoustic transducer. As shown inFIG.2toFIG.5, the speaker system100may further include a second acoustic transducer140configured to generate a second acoustic wave W2, wherein a second frequency range of the second acoustic wave W2may be designed based on requirement(s). For example, the second acoustic transducer140may be a low frequency sound producing unit (which may function as a woofer), such that the second frequency range of the second acoustic wave W2may be lower than a specific frequency, but not limited thereto. Note that an average value of the first frequency range is higher than an average value of the second frequency range. In other words, as shown inFIG.5, the spreading structure HS is disposed over the first acoustic transducer110and configured to guide the first acoustic wave W1to propagate through the sound passage SP formed within the spreading structure HS.
The directionality of the first acoustic wave W1is spread at the sound outlet OP2of the spreading structure HS after the first acoustic wave W1propagates through the sound passage SP in the spreading structure HS. The second acoustic transducer140may be any suitable speaker. For example, the second acoustic transducer140may be a speaker with a dynamic driver (e.g., an acoustic dynamic driver), a MEMS speaker including a MEMS structure or other suitable speaker. The second acoustic transducer140may be situated at any suitable position. In some embodiments, as shown inFIG.5, the second acoustic transducer140may correspond to the center of the sound spreading plate120in the direction Z, but not limited thereto. In some embodiments, as shown inFIG.5, the second acoustic transducer140may overlap the first acoustic transducer110in the direction Z, but not limited thereto. For example, inFIG.5, the center of the first acoustic transducer110corresponds to the center of the second acoustic transducer140in the direction Z, but not limited thereto. The second acoustic wave W2also passes through the sound spreading plate120. InFIG.2toFIG.5, the sound spreading plate120may further include a sound passing structure124corresponding to the second acoustic transducer140, wherein the sound passing structure124may have at least one hollow part124h, wherein the second acoustic wave W2passes through the hollow part124h, so as to pass through the sound spreading plate120. The sound passing structure124may be designed based on requirement(s). For example, inFIG.2toFIG.5, the sound passing structure124may be directly connected to the body122, but not limited thereto. For example, inFIG.2toFIG.5, the body122may be surrounded by the sound passing structure124and/or the hollow part(s)124h, but not limited thereto. InFIG.2toFIG.5, the sound passing structure124and the body122may be concentric (i.e., the center of the sound passing structure124may correspond to the center of the body122in the direction Z), and/or the hollow part(s)124hand the body122may be concentric (i.e., the center of the hollow part(s)124hmay correspond to the center of the body122in the direction Z), but not limited thereto. As shown inFIG.2toFIG.5, the sound passing structure124may have a plurality of hollow parts124h(e.g., six hollow parts124hin the figures), and each hollow part124hhas the same shape and the same size, but not limited thereto. InFIG.2toFIG.5, the hollow part124hmay overlap the second acoustic transducer140in the direction Z, but not limited thereto. According to the above, the first acoustic wave W1generated by the first acoustic transducer110and the second acoustic wave W2generated by the second acoustic transducer140would pass through the spreading structure HS (i.e., the first acoustic wave W1passes through the sound passage SP formed of the body122and the cover130, and the second acoustic wave W2passes through the hollow part124hof the sound passing structure124), such that two acoustic transducers may produce sound in one speaker system100. In the present invention, the aforementioned speaker system100may be disposed within any suitable sound producing device, such that the sound producing device would use the MEMS speaker (i.e., the first acoustic transducer110) to generate a loud and spreading sound. Referring toFIG.6,FIG.6is a schematic diagram of an exploded view illustrating a headphone according to an embodiment of the present invention.
As shown inFIG.6, the aforementioned speaker system100may be disposed within a headphone200which is a kind of sound producing device. The headphone200may further include any suitable component based on requirement(s). InFIG.6, the headphone200may further include a headphone cover210and a cushion220which are the outermost components, wherein the speaker system100is disposed between the headphone cover210and the cushion220. The headphone cover210and the cushion220are configured to protect the components disposed between them, and the cushion220enhances the wearing comfort of the headphone200. Furthermore, inFIG.6, the headphone cover210has a hole212, such that an electric wire may be connected between the acoustic transducer of the speaker system100and an external device through the hole212. InFIG.2andFIG.6, the sound spreading plate120of the speaker system100may have a peripheral region126surrounding the sound passing structure124and the body122, and the cushion220may be disposed on the peripheral region126. In addition, inFIG.6, the headphone200may further include a foam structure230disposed between the sound spreading plate120of the speaker system100and the cushion220, wherein the foam structure230may cover the sound passing structure124, so as to prevent a foreign object (e.g., dust) from entering the speaker system100through the hollow part124h, thereby protecting the second acoustic transducer140. InFIG.6, the headphone200may further include a ring240disposed on the foam structure230, so as to fix the foam structure230. In summary, according to the speaker system and a spreading structure of the present invention, after the acoustic wave passes through the sound passage of the spreading structure, the SPL of the acoustic wave is increased, and the acoustic wave is spread such that the directionality of the acoustic wave is decreased. Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
29,317
11943584
DETAILED DESCRIPTION OF THE DISCLOSURE The following description is of the best-contemplated mode of carrying out the disclosure. This description is made for the purpose of illustrating the general principles of the disclosure and should not be taken in a limiting sense. The scope of the disclosure is best determined by reference to the appended claims. In the following detailed description, the orientations of “on”, “above”, “under”, and “below” are used for representing the relationship between the relative positions of each element as illustrated in the drawings, and are not meant to limit the disclosure. Moreover, the formation of a first element on or above a second element in the description that follows may include embodiments in which the first and second elements are formed in direct contact, or the first and second elements have one or more additional elements formed in between them. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. Various features may be arbitrarily drawn in different scales for the sake of simplicity and clarity. Furthermore, some elements not shown or described in the embodiments have the forms known by persons skilled in the field of the disclosure. In the present disclosure, a micro-electro-mechanical system (MEMS) microphone for detecting sound waves and converting the sound waves (acoustic signal) into an electric signal is provided, in accordance with various exemplary embodiments. In particular, the MEMS microphones in the various embodiments can achieve highly reliable resistance to air pressure via the features described below. The variations of some embodiments are discussed. Throughout the various views and illustrative embodiments, like reference numbers are used to designate like elements. In accordance with some embodiments of the present disclosure,FIG.1Ashows a cross-sectional view of a micro-electro-mechanical system (MEMS) microphone M.FIG.1Bshows a top view of the MEMS microphone M. It should be noted that the MEMS microphone M depicted inFIGS.1A and1Bhas been simplified for the sake of clarity to better understand the inventive concepts of the present disclosure. In some embodiments, additional features can be added into the MEMS microphone M. In addition, some of the features described below can be replaced or eliminated in other embodiments of the MEMS microphone M. As shown inFIGS.1A and1B, the MEMS microphone M is a capacitive microphone. The MEMS microphone M includes a MEMS structure10including a substrate11, a dielectric layer12, a backplate13, a diaphragm14and an electrode layer15. The substrate11is configured to support the dielectric layer12, the backplate13, the diaphragm14and the electrode layer15on one side of the substrate11. The substrate11may have an opening portion11A which allows sound waves received by the MEMS microphone M to pass through and/or enter the MEMS structure10. The substrate11may be made of silicon or the like. The dielectric layer12is disposed between the substrate11and the diaphragm14, and between the diaphragm14and the backplate13(that is, the diaphragm14is inserted in the dielectric layer12), so as to partially isolate the substrate11, the diaphragm14and the backplate13from each other.
Moreover, the dielectric layer12is disposed around the backplate13and the diaphragm14, such that the backplate13and the diaphragm14are supported at their edges by the dielectric layer12. Furthermore, the dielectric layer12may have an opening12A corresponding to the opening11A of the substrate11. The sound waves pass through the diaphragm14via the ventilation holes14A to reach the opening12A, and then pass through the backplate13via the acoustic holes13A. The dielectric layer12may be made of silicon oxide or the like. The backplate13is a stationary element disposed on one side of the substrate11. The backplate13may have sufficient stiffness such that it does not bend or move when the sound waves pass through the backplate13. In some embodiments, the backplate13is a stiff perforated element including a number of acoustic holes13A, each acoustic hole13A passing through the backplate13, as shown inFIG.1A. The acoustic holes13A are configured to allow the sound waves to pass through. In some embodiments, the backplate13includes a conductive layer131and an insulating layer132covering the conductive layer131for protection. The insulating layer132further includes a first insulating layer1321and a second insulating layer1322, such that the conductive layer131is disposed on the dielectric layer12, the first insulating layer1321is disposed on the conductive layer131, and the second insulating layer1322is disposed on the first insulating layer1321, as shown inFIG.1A. The conductive layer131may be made of poly-silicon or the like. The insulating layer132(e.g., the first and second insulating layers1321and1322) may be made of silicon nitride or the like. In some embodiments, the first and second insulating layers1321and1322may be made of the same material or they may be made of different materials. In some embodiments, the MEMS structure10is electrically connected to a circuit (not shown) via several electrode pads of the electrode layer15that is disposed on the backplate13and electrically connected to the conductive layer131and the diaphragm14. In some embodiments, the electrode layer15includes copper, silver, gold, aluminum, or an alloy thereof. The diaphragm14is disposed on one side of the substrate11and extends across the opening portion11A of the substrate11. The diaphragm14is movable or displaceable relative to the backplate13. The diaphragm14is configured to sense the sound waves received by the MEMS microphone M. The displacement change of the diaphragm14relative to the backplate13causes a capacitance change between the diaphragm14and the backplate13. The capacitance change is then converted into an electric signal by circuitry connected with the diaphragm14and the backplate13, and the electric signal is sent out of the MEMS microphone M through the electrode layer15. On the other hand, in order to increase the sensitivity of the diaphragm14, a number of ventilation holes14A may be provided in the diaphragm14to reduce the stiffness of the diaphragm14. In some alternative embodiments, there may be more than two ventilation holes14A. With this structural feature, high sensitivity of the MEMS microphone M can be achieved. In addition, the ventilation holes14A in the diaphragm14are also configured to relieve high air pressure on the diaphragm14. In some embodiments, an air gap G is formed between the diaphragm14and the backplate13, as shown inFIG.1A. Referring toFIG.1A, the MEMS structure10further includes a protrusion133extending from the backplate13and towards the air gap G.
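The capacitive sensing principle described above (diaphragm displacement changes the gap, which changes the capacitance that the circuitry converts into an electric signal) can be illustrated with a minimal numerical sketch. The following Python snippet uses a simple parallel-plate approximation; the area and gap values are illustrative assumptions, not dimensions taken from this disclosure.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m):
    # Parallel-plate approximation of the diaphragm/backplate capacitor
    return EPS0 * area_m2 / gap_m

area = (500e-6) ** 2   # assumed 500 um x 500 um sensing area
gap_rest = 2e-6        # assumed 2 um air gap G at rest

c_rest = plate_capacitance(area, gap_rest)
# A sound wave displaces the diaphragm by dx, changing the gap and hence C;
# the readout circuitry converts this capacitance change into an electric signal.
for dx in (-0.2e-6, 0.0, 0.2e-6):
    c = plate_capacitance(area, gap_rest - dx)
    print("dx = %+.1f um -> C = %.2f fF (dC = %+.2f fF)"
          % (dx * 1e6, c * 1e15, (c - c_rest) * 1e15))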
In some embodiments, the protrusion133includes a double-layered structure. For example, the protrusion133includes an extension portion1321′ of the first insulating layer1321and an extension portion131′ of the conductive layer131covering the extension portion1321′ of the first insulating layer1321, as shown inFIG.1A. The extension portion131′ of the conductive layer131can serve as a protective layer of the extension portion1321′ of the first insulating layer1321to prevent the extension portion1321′ of the first insulating layer1321from being damaged during the etching process. In some embodiments, the distance between the center13cof the backplate13and the center14Ac of the ventilation hole14A is defined as a first distance r1. The distance between the center13cof the backplate13and the center133cof the protrusion133is defined as a second distance r2. The first distance r1is greater than the second distance r2. In some embodiments, the distance d between the protrusion133and the diaphragm14is greater than about 0.1 μm. Referring toFIG.1B, the backplate13includes a plurality of acoustic holes13A. In some embodiments, each protrusion133surrounds at least one acoustic hole13A. For example, each protrusion133surrounds one acoustic hole13A, as shown inFIG.1B, but the present disclosure is not limited thereto. InFIG.1B, the protrusion133surrounds the acoustic hole13A and forms a curve from a top view. In some embodiments, the protrusion133leaves at least one opening for process requirements. For example, the protrusion133leaves one opening133a, as shown inFIG.1B, but the present disclosure is not limited thereto. In some embodiments, the width w of the protrusion133is greater than about 0.5 μm. In accordance with some embodiments of the present disclosure,FIG.2Ashows a cross-sectional view of a micro-electro-mechanical system (MEMS) microphone M.FIG.2Bshows a top view of the MEMS microphone M. The structure, material and configuration of the MEMS structure10shown inFIGS.2A and2Bare similar to those of the MEMS structure10shown inFIGS.1A and1B, and will not be repeated here. The main difference fromFIGS.1A and1Bis the configuration of the additional protrusion structures. Referring toFIG.2A, the MEMS structure10includes a first protrusion133and a second protrusion134which extend from the backplate13and towards the air gap G. In some embodiments, the first protrusion133includes a double-layered structure. For example, the first protrusion133includes an extension portion1321′ of the first insulating layer1321and an extension portion131′ of the conductive layer131covering the extension portion1321′ of the first insulating layer1321, as shown inFIG.2A. The extension portion131′ of the conductive layer131can serve as a protective layer of the extension portion1321′ of the first insulating layer1321to prevent the extension portion1321′ of the first insulating layer1321from being damaged during the etching process. In some embodiments, the second protrusion134includes a single-layer structure. For example, the second protrusion134includes an extension portion1321′ of the first insulating layer1321. Specifically, the height “h2” of the second protrusion134is lower than the height “h1” of the first protrusion133, as shown inFIG.2A. The second protrusion134can prevent the backplate13from sticking to the diaphragm14. In addition, inFIG.2A, the air gap G is formed between the diaphragm14and each second protrusion134. 
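The geometric relations stated above (the first distance r1 greater than the second distance r2, protrusion-to-diaphragm distance d greater than about 0.1 μm, protrusion width w greater than about 0.5 μm) can be captured in a short validation helper. This is only an illustrative sketch; the numeric test values below are assumptions, not values from the disclosure.

def layout_ok(r1_um, r2_um, d_um, w_um):
    # r1: backplate center to ventilation-hole center; r2: backplate center
    # to protrusion center; d: protrusion-to-diaphragm distance; w: width.
    return r1_um > r2_um and d_um > 0.1 and w_um > 0.5

print(layout_ok(r1_um=150.0, r2_um=90.0, d_um=0.3, w_um=1.0))  # True
print(layout_ok(r1_um=80.0, r2_um=90.0, d_um=0.3, w_um=1.0))   # False (r1 must exceed r2)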
In some embodiments, the air gap G between the diaphragm14and each second protrusion134may be the same, but the present disclosure is not limited thereto. Referring toFIG.2B, the backplate13includes a plurality of acoustic holes13A. In some embodiments, each first protrusion133surrounds at least one acoustic hole13A. For example, each first protrusion133surrounds one acoustic hole13A, as shown inFIG.2B, but the present disclosure is not limited thereto. InFIG.2B, the first protrusion133surrounds the acoustic hole13A and forms a curve from a top view. In some embodiments, the first protrusion133leaves at least one opening for process requirements. For example, the first protrusion133leaves one opening133a, as shown inFIG.2B, but the present disclosure is not limited thereto. InFIG.2B, the second protrusions134are distributed on the backplate13. In accordance with some embodiments of the present disclosure,FIG.3Ashows a cross-sectional view of a micro-electro-mechanical system (MEMS) microphone M.FIG.3Bshows a top view of the MEMS microphone M. The structure, material and configuration of the MEMS structure10shown inFIGS.3A and3Bare similar to those of the MEMS structure10shown inFIGS.2A and2B, and will not be repeated here. The main difference fromFIGS.2A and2Bis the composition of the protrusion structures. Referring toFIG.3A, the MEMS structure10includes a first protrusion133and a second protrusion134which extend from the backplate13and towards the air gap G. In some embodiments, the first protrusion133includes a single-layer structure. For example, the first protrusion133includes an extension portion1321′ of the first insulating layer1321, as shown inFIG.3A. In some embodiments, the second protrusion134includes a single-layer structure. For example, the second protrusion134includes an extension portion1321′ of the first insulating layer1321. Specifically, the height “h2” of the second protrusion134is lower than the height “h1” of the first protrusion133, as shown inFIG.3A. The second protrusion134can prevent the backplate13from sticking to the diaphragm14. In addition, inFIG.3A, the air gap G is formed between the diaphragm14and each second protrusion134. In some embodiments, the air gap G between the diaphragm14and each second protrusion134may be the same, but the present disclosure is not limited thereto. In accordance with some embodiments of the present disclosure,FIG.4Ashows a cross-sectional view of a micro-electro-mechanical system (MEMS) microphone M.FIG.4Bshows a top view of the MEMS microphone M. The structure, material and configuration of the MEMS structure10shown inFIGS.4A and4Bare similar to those of the MEMS structure10shown inFIGS.2A and2B, and will not be repeated here. The main difference fromFIGS.2A and2Bis the configuration of the protrusion structures. The MEMS structure10shown inFIG.4Ais similar to that shown inFIG.2A, and will not be repeated here. Referring toFIG.4B, the backplate13includes a plurality of acoustic holes13A. In some embodiments, each first protrusion133surrounds at least one acoustic hole13A. For example, each first protrusion133surrounds two acoustic holes13A, as shown inFIG.4B, but the present disclosure is not limited thereto. When each first protrusion133surrounds more acoustic holes13A, the first protrusion133and the diaphragm14will have a larger contact area to limit the deformation of the diaphragm14. InFIG.4B, the first protrusion133surrounds the acoustic holes13A and forms a curve from a top view. 
In some embodiments, the first protrusion133leaves at least one opening for process requirements. For example, the first protrusion133leaves one opening133a, as shown inFIG.4B, but the present disclosure is not limited thereto. InFIG.4B, the second protrusions134are randomly distributed on the backplate13. In accordance with some embodiments of the present disclosure,FIG.5Ashows a cross-sectional view of a micro-electro-mechanical system (MEMS) microphone M.FIG.5Bshows a top view of the MEMS microphone M. The structure, material and configuration of the MEMS structure10shown inFIGS.5A and5Bare similar to those of the MEMS structure10shown inFIGS.2A and2B, and will not be repeated here. The main difference fromFIGS.2A and2Bis the configuration of the protrusion structures. The MEMS structure10shown inFIG.5Ais similar to that shown inFIG.2A, and will not be repeated here. Referring toFIG.5B, the backplate13includes a plurality of acoustic holes13A. In some embodiments, each first protrusion133surrounds at least one acoustic hole13A. For example, each first protrusion133surrounds one acoustic hole13A, as shown inFIG.5B, but the present disclosure is not limited thereto. InFIG.5B, the first protrusion133surrounds the acoustic hole13A and forms a curve from a top view. In some embodiments, the first protrusion133leaves at least one opening for process requirements. For example, the first protrusion133leaves two openings (133aand133b), as shown inFIG.5B, but the present disclosure is not limited thereto. InFIG.5B, the second protrusions134are distributed on the backplate13. In accordance with some embodiments of the present disclosure,FIG.6Ashows a cross-sectional view of a micro-electro-mechanical system (MEMS) microphone M.FIG.6Bshows a top view of the MEMS microphone M. The structure, material and configuration of the MEMS structure10shown inFIGS.6A and6Bare similar to those of the MEMS structure10shown inFIGS.2A and2B, and will not be repeated here. The main difference fromFIGS.2A and2Bis the configuration of the protrusion structures. The MEMS structure10shown inFIG.6Ais similar to that shown inFIG.2A, and will not be repeated here. Referring toFIG.6B, the backplate13includes a plurality of acoustic holes13A. In some embodiments, each first protrusion133surrounds at least one acoustic hole13A. For example, each first protrusion133surrounds one acoustic hole13A, as shown inFIG.6B, but the present disclosure is not limited thereto. InFIG.6B, the first protrusion133surrounds the acoustic hole13A and forms a curve from a top view. In some embodiments, the first protrusion133leaves at least one opening for process requirements. For example, the first protrusion133leaves four openings (133a,133b,133cand133d), as shown inFIG.6B, but the present disclosure is not limited thereto. InFIG.6B, the second protrusions134are randomly distributed on the backplate13. In accordance with some embodiments of the present disclosure,FIG.7Ashows a cross-sectional view of a micro-electro-mechanical system (MEMS) microphone M.FIG.7Bshows a top view of the MEMS microphone M. The structure, material and configuration of the MEMS structure10shown inFIGS.7A and7Bare similar to those of the MEMS structure10shown inFIGS.2A and2B, and will not be repeated here. The main difference fromFIGS.2A and2Bis the configuration of the protrusion structures. The MEMS structure10shown inFIG.7Ais similar to that shown inFIG.2A, and will not be repeated here. Referring toFIG.7B, the backplate13includes a plurality of acoustic holes13A. 
In some embodiments, each first protrusion133surrounds at least one acoustic hole13A. For example, each first protrusion133surrounds four acoustic holes13A, as shown inFIG.7B, but the present disclosure is not limited thereto. InFIG.7B, the first protrusion133surrounds the acoustic holes13A and forms a polyline from a top view. In some embodiments, the first protrusion133leaves at least one opening for process requirements. For example, the first protrusion133leaves one opening133a, as shown inFIG.7B, but the present disclosure is not limited thereto. InFIG.7B, the second protrusions134are randomly distributed on the backplate13. In accordance with some embodiments of the present disclosure,FIG.8Ashows a cross-sectional view of a micro-electro-mechanical system (MEMS) microphone M.FIG.8Bshows a top view of the MEMS microphone M. The structure, material and configuration of the MEMS structure10shown inFIGS.8A and8Bare similar to those of the MEMS structure10shown inFIGS.2A and2B, and will not be repeated here. The main difference fromFIGS.2A and2Bis the configuration of the protrusion structures. Referring toFIG.8A, the MEMS structure10includes a first protrusion133, a second protrusion134and a third protrusion135which extend from the backplate13and towards the air gap G. The third protrusion135is located between the second protrusions134. In some embodiments, the third protrusion135is located on the center13cof the backplate13. In some embodiments, each of the first protrusion133and the third protrusion135includes a double-layered structure. For example, each of the first protrusion133and the third protrusion135includes an extension portion1321′ of the first insulating layer1321and an extension portion131′ of the conductive layer131covering the extension portion1321′ of the first insulating layer1321, as shown inFIG.8A. The extension portion131′ of the conductive layer131can serve as a protective layer of the extension portion1321′ of the first insulating layer1321to prevent the extension portion1321′ of the first insulating layer1321from being damaged during the etching process. In some embodiments, the second protrusion134includes a single-layer structure. For example, the second protrusion134includes an extension portion1321′ of the first insulating layer1321. Specifically, the height “h2” of the second protrusion134is lower than the height “h1” of the first protrusion133and the height “h3” of the third protrusion135, and the height “h3” of the third protrusion135is similar to the height “h1” of the first protrusion133, as shown inFIG.8A. The second protrusion134can prevent the backplate13from sticking to the diaphragm14. In addition, inFIG.8A, the air gap G is formed between the diaphragm14and each second protrusion134. In some embodiments, the air gap G between the diaphragm14and each second protrusion134may be the same, but the present disclosure is not limited thereto. Referring toFIG.8B, the backplate13includes a plurality of acoustic holes13A. In some embodiments, each first protrusion133surrounds at least one acoustic hole13A. For example, each first protrusion133surrounds one acoustic hole13A, as shown inFIG.8B, but the present disclosure is not limited thereto. InFIG.8B, the first protrusion133surrounds the acoustic hole13A and forms a curve from a top view. In some embodiments, the first protrusion133leaves at least one opening for process requirements. 
For example, the first protrusion133leaves one opening133a, as shown inFIG.8B, but the present disclosure is not limited thereto. InFIG.8B, the second protrusions134are randomly distributed on the backplate13. In some embodiments, the third protrusion135surrounds at least one acoustic hole13A. For example, the third protrusion135surrounds four acoustic holes13A, as shown inFIG.8B, but the present disclosure is not limited thereto. InFIG.8B, the third protrusion135surrounds the acoustic holes13A and forms a closed ring from a top view, but the present disclosure is not limited thereto. In some embodiments, the third protrusion forms a polyline from a top view. In accordance with some embodiments of the present disclosure,FIG.9Ashows a cross-sectional view of a micro-electro-mechanical system (MEMS) microphone M.FIG.9Bshows a top view of the MEMS microphone M. The structure, material and configuration of the MEMS structure10shown inFIGS.9A and9Bare similar to those of the MEMS structure10shown inFIGS.2A and2B, and will not be repeated here. The main difference fromFIGS.2A and2Bis the configuration of other protrusion structures. Referring toFIG.9A, the MEMS structure10includes a first protrusion133and a second protrusion134which extend from the backplate13and towards the air gap G. In some embodiments, the first protrusion133includes a double-layered structure. For example, the first protrusion133includes an extension portion1321′ of the first insulating layer1321and an extension portion131′ of the conductive layer131covering the extension portion1321′ of the first insulating layer1321, as shown inFIG.9A. The extension portion131′ of the conductive layer131can serve as a protective layer of the extension portion1321′ of the first insulating layer1321to prevent the extension portion1321′ of the first insulating layer1321from being damaged during the etching process. In some embodiments, the second protrusion134includes a single-layer structure. For example, the second protrusion134includes an extension portion1321′ of the first insulating layer1321. Specifically, the height “h2” of the second protrusion134is lower than the height “h1” of the first protrusion133, as shown inFIG.9A. The second protrusion134can prevent the backplate13from sticking to the diaphragm14. InFIG.9A, the air gap G is formed between the diaphragm14and each second protrusion134. In some embodiments, the air gap G between the diaphragm14and each second protrusion134may be the same, but the present disclosure is not limited thereto. In addition, the MEMS structure10further includes a pillar16disposed on the backplate13. In some embodiments, the pillar16is located between the second protrusions134. In some embodiments, the pillar16is disposed on the center13cof the backplate13, and it is in contact with the diaphragm14. In some embodiments, the pillar16may include insulating material. For example, the pillar16may be made of silicon oxide or the like. Referring toFIG.9B, the backplate13includes a plurality of acoustic holes13A. In some embodiments, each first protrusion133surrounds at least one acoustic hole13A. For example, each first protrusion133surrounds one acoustic hole13A, as shown inFIG.9B, but the present disclosure is not limited thereto. InFIG.9B, the first protrusion133surrounds the acoustic hole13A and forms a curve from a top view. In some embodiments, the first protrusion133leaves at least one opening for process requirements. 
For example, the first protrusion133leaves one opening133a, as shown inFIG.9B, but the present disclosure is not limited thereto. InFIG.9B, the second protrusions134are randomly distributed on the backplate13. The pillar16is located on the center13cof the backplate13. In accordance with some embodiments of the present disclosure,FIG.9Cshows a top view of a micro-electro-mechanical system (MEMS) microphone M. The embodiment shown inFIG.9Cis similar to that shown inFIG.9B. The main difference fromFIG.9Bis the configuration of the first protrusion. InFIG.9C, the first protrusion133forms two arcs (133′ and133″) around the acoustic hole13A from a top view. In some embodiments, the first protrusion133leaves at least one opening for process requirements. For example, the first protrusion133leaves two openings (133aand133b), as shown inFIG.9C, but the present disclosure is not limited thereto. In accordance with some embodiments of the present disclosure,FIG.10Ashows a cross-sectional view of a micro-electro-mechanical system (MEMS) microphone M.FIG.10Bshows a top view of the MEMS microphone M. The embodiment shown inFIGS.10A and10Bis similar to that shown inFIG.1A. The main difference fromFIG.1Ais the configuration of the ventilation holes. InFIGS.10A and10B, the ventilation holes14A include a plurality of outer slots14A1and inner slots14A2formed in an annular area14aof the diaphragm14and configured in concentric circles around the center14cof the diaphragm14. The outer slots14A1and the inner slots14A2respectively have a c-shaped structure and are oriented toward opposite directions, and the outer slots14A1and the inner slots14A2are arranged in a staggered manner with respect to the center14cof the diaphragm14, as shown inFIG.10B. In the present disclosure, the protrusions from the backplate limit the deformation of the diaphragm and reduce stress, thereby enhancing the reliability of the microphone against air pressure. In the present disclosure, the MEMS microphone structure has the protrusions from the backplate. When pressure is induced by an air gun, the diaphragm of the microphone has large deformation and stress because of the large pressure difference. However, the protrusions from the backplate can limit the deformation of the diaphragm, which reduces the stress and prevents cracking. In the present disclosure, there are two advantages of the polyline (curve) protrusion structure over the single-point protrusion. First, the polyline (curve) protrusion structure has a larger area, which can decrease the stress on the diaphragm when there is a collision between the diaphragm and the protrusions. Second, the polyline (curve) protrusion structure has higher stiffness, which can prevent breakage if a collision does take place. Therefore, the polyline (curve) design of the protrusions can increase the reliability of the protrusions and the diaphragm, and at the same time improve the reliability of the microphone against air pressure. Although embodiments of the present disclosure and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the disclosure as defined by the appended claims. For example, it will be readily understood by those skilled in the art that many of the features, functions, processes, and materials described herein may be varied while remaining within the scope of the present disclosure.
Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the present disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps. In addition, each claim constitutes a separate embodiment, and the combination of various claims and embodiments is within the scope of the disclosure.
11943585
DETAILED DESCRIPTION A fundamental aspect of the present invention relates to an air-pulse generating device, and more particularly, to an air-pulse generating device comprising a modulating means and a demodulating means, where the said modulating means generates an ultrasonic air pressure wave/variation (UAW) having a frequency fUC, where the amplitude of the UAW is modulated according to an input audio signal SIN, which is an electrical (analog or digital) representation of a sound signal SS. This amplitude modulated ultrasonic air pressure wave/variation (AMUAW) is then synchronously demodulated by the said demodulating means such that spectral components embedded in the AMUAW are shifted by ±n·fUC, where n is a positive integer. As a result of this synchronous demodulation, the spectral components of the AMUAW corresponding to the sound signal SS are partially transferred to the baseband, and the audible sound signal SS is reproduced as a result. Herein, the amplitude-modulated ultrasonic air pressure wave/variation AMUAW may correspond to a carrier component with the ultrasonic carrier frequency fUCand a modulation component corresponding to the input audio signal SIN. FIG.1illustrates a schematic diagram of an air-pulse generating (APG) device100according to an embodiment of the present invention. The device100may be applied as a sound producing device which produces an acoustic sound according to an input (audio) signal SIN, but not limited thereto. The device100comprises a device layer12and a chamber definition layer11. The device layer12comprises walls124L,124R and supporting structures123R,123L supporting a thin film layer which is etched to form flaps101,103,105, and107. In an embodiment, the device layer12may be fabricated by a MEMS (Micro Electro Mechanical Systems) fabrication process, for example, using a Si substrate of 250˜500 μm in thickness, which will be etched to form123L/R and124R/L. In an embodiment, on top of this Si substrate, a thin layer, typically 3˜6 μm in thickness, made of a silicon-on-insulator (SOI) or poly-on-insulator (POI) layer, will be etched to form flaps101,103,105and107. The chamber definition layer (which may be also viewed/named as a “cap” structure)11comprises a pair of chamber sidewalls110R,110L and a chamber ceiling117. In an embodiment, the chamber definition layer (or cap structure)11may be manufactured using MEMS fabrication technology. A resonance chamber115is defined between this chamber definition layer11and the device layer12. In other words, the device100may be viewed as comprising a film structure10and the cap structure11, between which the chamber115is formed. The film structure10can be viewed as comprising a modulating portion104and a demodulating portion102. The modulating portion104, comprising the (modulating) flaps105and107, is configured to be actuated to form an ultrasonic air/acoustic wave within the chamber115, where the air/acoustic wave can be viewed as a kind of air pressure variation, varying both temporally and spatially. In an embodiment, the ultrasonic air/acoustic wave or air pressure variation may be an amplitude DSB-SC (double-sideband suppressed carrier) modulated air/acoustic wave with the ultrasonic carrier frequency fUC. The ultrasonic carrier frequency fUCmay be, for example, in the range of 160 KHz to 192 KHz, which is significantly larger than the maximum frequency of human audible sound. The terms air wave and acoustic wave will be used interchangeably below.
The demodulating portion102, comprising the (demodulating) flaps101and103, is configured to operate synchronously with the modulating portion104, shifting the spectral components of the DSB-SC modulated acoustic wave generated by the modulating portion104by ±n×fUC, where n is a positive integer, producing a plurality of air pulses toward an ambient according to the ultrasonic air wave within the chamber115, such that the baseband frequency component of the plurality of air pulses (which is produced by the demodulating portion102according to the ultrasonic air wave within the chamber115) would be or correspond/relate to the input (audio) signal SIN, where the low frequency component of the plurality of air pulses refers to the frequency component which is within the audible spectrum (e.g., below 20 or 30 KHz). Herein, baseband usually refers to the audible spectrum, but not limited thereto. In other words, in a sound producing application, the modulating portion104may be actuated to form the modulated air wave according to the input audio signal SIN, and the demodulating portion102, operating synchronously with the modulating portion104, produces the plurality of air pulses with the low frequency component thereof as (or corresponding/related to) the input audio signal SIN. For sound producing applications, where fUCis typically much higher than the highest human audible frequency, such as fUC≥96 KHz≈5×20 KHz, through the natural/environmental low pass filtering effect (caused by the physical environment such as walls, floors, ceilings, furniture, or the high propagation loss of ultrasound, etc., and the human ear system such as ear canal, eardrum, malleus, incus, stapes, etc.) on the plurality of air pulses, what the listener perceives will only be the audible sound or music represented by the input audio signal SIN. Illustratively,FIG.34conceptually/schematically demonstrates the effect of the (de)modulation operation by showing frequency spectrums of signals before and after the (de)modulation operation. InFIG.34, the modulation operation produces an amplitude modulated ultrasonic acoustic/air wave UAW with spectrum shown as W(f), according to the input audio signal SIN, which is an electrical (analog or digital) representation of a sound signal SS. The spectrum of SIN/SS is represented as S(f) inFIG.34. The synchronous demodulation operation, producing an ultrasonic pulse array UPA (comprising the plurality of pulses) with spectrum illustrated as Z(f), can be viewed as (comprising the step of) shifting the spectral components of the ultrasonic acoustic/air wave UAW by ±n×fUC(with integer n), and the spectral component of the ultrasonic air wave UAW corresponding to the sound signal SS is partially transferred to the baseband. Hence, as can be seen from Z(f), the baseband component of the ultrasonic pulse array UPA is significant, compared to that of the amplitude modulated UAW W(f). The ultrasonic pulse array UPA propagates toward the ambient. Through the inherent low pass filtering effect of the natural/physical environment and the human hearing system, a resulting spectrum Y(f) corresponding to the sound signal SS can be reproduced. Note that, different from conventional DSB-SC amplitude modulation using a sinusoidal carrier, W(f) has components at ±3×fUC, ±5×fUCand higher order harmonics of fUC(not shown inFIG.34). This is because the carrier of the modulation of the present invention is not purely sinusoidal.
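The modulation/demodulation chain sketched aroundFIG.34can be reproduced numerically. The following Python sketch (all parameters are illustrative assumptions) amplitude-modulates a test tone onto an ultrasonic carrier in DSB-SC fashion, applies a periodic gating function at the carrier rate to mimic the synchronous demodulation, and shows that the audio tone appears at baseband only after the demodulation.

import numpy as np

fs = 2_000_000          # sample rate, Hz (assumption)
f_uc = 160_000          # ultrasonic carrier frequency fUC
f_audio = 1_000         # test tone standing in for SIN
t = np.arange(0, 0.02, 1 / fs)

s_in = np.sin(2 * np.pi * f_audio * t)                 # baseband signal, spectrum S(f)
w = s_in * np.sin(2 * np.pi * f_uc * t)                # DSB-SC wave, spectrum W(f)
r = (np.sin(2 * np.pi * f_uc * t) > 0).astype(float)   # periodic gating r(t) at rate fUC
z = w * r                                              # demodulated pulse train, Z(f) = R(f) * W(f)

def tone_level(x, f):
    # magnitude of the spectral bin at frequency f
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return spec[int(round(f * len(x) / fs))]

print("audio tone in W(f):", tone_level(w, f_audio))   # ~0: no baseband before demodulation
print("audio tone in Z(f):", tone_level(z, f_audio))   # significant: baseband recovered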
Referring back toFIG.1, as an embodiment of the synchronous demodulation operation, the demodulating portion102may be actuated to form an opening112at the time and location which are corresponding/aligned to peak(s) of the modulated air wave. In other words, when the modulated air wave reaches its peak at the location of the opening112, the demodulating portion102may be actuated such that the opening112also reaches its peak. In the embodiment shown inFIG.1, the demodulating portion102forms the opening112at a center location between the sidewalls110L and110R, which have a surface-to-surface, or111L to111R, spacing of (substantially) λUCbetween them, meaning that the tips of the flaps101and103are (substantially) λUC/2 away from the sidewalls110L and110R, or away from the sidewall surfaces111L and111R, where λUCrepresents a wavelength corresponding to the ultrasonic carrier frequency fUC, i.e., λUC=C/fUCwith C being the speed of sound. In an embodiment, the demodulating portion102may be actuated to form the opening112at a valve opening rate synchronous to/with the ultrasonic carrier frequency fUC. In the present invention, the valve opening rate being synchronous to/with the ultrasonic carrier frequency fUCgenerally means that the valve opening rate is the ultrasonic carrier frequency fUCtimes a rational number, i.e., fUC×(N/M), where N and M represent integers. In an embodiment, the valve opening rate (of the opening112) may be the ultrasonic carrier frequency fUC. For example, the valve/opening112may open every operating cycle TCY, where the operating cycle TCYis a reciprocal of the ultrasonic carrier frequency fUC, i.e., TCY=1/fUC. In the present invention, the (de)modulating portion102/104is also used to denote the (de)modulating flap pair. Moreover, the demodulating portion (or flap pair)102forming the opening112may be considered as a virtual valve, which performs an open-and-close movement and forms the opening112(periodically) according to specific valve/demodulation driving signals. In an embodiment, the modulating portion104may substantially produce a mode-2 (or 2ndorder harmonic) resonance (or standing wave) within the resonance chamber115, as the pressure profile P104 and airflow profile U104 illustrated inFIG.1. In this regard, the spacing between the sidewall surfaces111L and111R substantially defines a full wavelength λUCcorresponding to the ultrasonic carrier frequency fUC, i.e., W115≈λUC=C/fUC. Furthermore, in the embodiment shown inFIG.1, a free end of the modulating flap105/107is disposed by the sidewall110L/110R. Please be aware that inter-modulation (or cross-coupling) between the modulation of generating the modulated air wave and the demodulation of forming the opening112might occur, which would degrade the resulting sound quality. In order to enhance sound quality, minimizing the inter-modulation (or cross-coupling) is desirable. To achieve that (i.e., minimize the cross coupling between the modulation and the demodulation), the modulating flaps105and107are driven to have a common mode movement and the demodulating flaps101and103are driven to have a differential-mode movement. The modulating flaps105and107having the common mode movement means that the flaps105and107are simultaneously actuated/driven to move toward the same direction. The demodulating flaps101and103having the differential-mode movement means that the flaps101and103are simultaneously actuated to move toward opposite directions.
Furthermore, in an embodiment, the flaps101and103may be actuated to move toward opposite directions with (substantially) the same displacement/magnitude. The demodulating portion102may substantially produce a mode-1 (or 1storder harmonic) resonance (or standing wave) within the resonance chamber115, as the pressure profile P102 and airflow profile U102 formed by the demodulating portion102illustrated inFIG.1. Hence, the demodulating portion102shall operate at a valve operating/driving frequency fD_V(corresponding to the valve/demodulation-driving signal) such that W115≈λD_V/2, where λD_V=C/fD_V, and the valve operating/driving frequency shall be half of the ultrasonic carrier frequency fUC, i.e., fD_V=fUC/2. The common mode movement and the differential mode movement can be driven by (de)modulation-driving signals.FIG.2illustrates waveforms of demodulation-driving signals S101, S103 and a modulation-driving signal SM. The modulation-driving signal SM is used to drive the modulating flaps105and107. The demodulation-driving signals (or valve driving signals) S101 and S103 are used to drive the demodulating flaps101and103, respectively. In an embodiment, the modulation-driving signal SM can be viewed as a pulse amplitude modulation (PAM) signal which is modulated according to the input audio signal SIN. Furthermore, different from conventional PAM signals, the polarity (with respect to a constant voltage) of the signal SM toggles within one operating cycle TCY. Generally, the modulation-driving signal SM comprises pulses with alternating polarities (with respect to the constant voltage), and the envelope/amplitude of the pulses is (substantially) the same as or proportional/corresponding to an AC (alternating current) component of the input audio signal SIN. In other words, the modulation-driving signal SM can be viewed as comprising a pulse amplitude modulation signal or comprising PAM-modulated pulses with alternating polarities with respect to the constant voltage. In the embodiment shown inFIG.2, a toggling rate of the modulation-driving signal SM is 2×fUC, which means that the polarity of the pulses within the modulation-driving signal SM alternates/toggles twice in one operating cycle TCY. The demodulation-driving signals S101 and S103 comprise two driving pulses of equal amplitude but with opposite polarities (with respect to a constant/average voltage). In other words, at a specific time, given that S101 comprises a first pulse with a first polarity (with respect to the constant/average voltage) and S103 comprises a second pulse with a second polarity (with respect to the constant/average voltage), the first polarity is opposite to the second polarity. As shown inFIG.2, a toggling rate of the demodulation-driving signal S101/S103 is fUC, which means that the polarity of the pulses within the demodulation-driving signal S101/S103 alternates/toggles once in one operating cycle TCY. Hence, the toggling rate of the modulation-driving signal (SM) is twice the toggling rate of the demodulation-driving signal S101/S103. The slopes of S101/S103 (and the associated shaded area) are a simplified drawing representing the energy recycling during the transitions between voltage levels. Note that the transition periods of the signals S101 and S103 overlap. Energy recycling may be realized by using characteristics of an LC oscillator, given that the piezoelectric actuators of the flaps101/103are mostly capacitive loads. Details of the energy recycling concept may be referred to U.S. Pat. No.
11,057,692, which is incorporated herein by reference. Note that the piezoelectric actuator serves as an embodiment, but not limited thereto. To emphasize that the flap pair102is driven differentially, the signals S101 and S103 may also be denoted as −SV and +SV, signifying that this pair of driving signals has the same waveform but differs in polarity. For illustration purposes, −SV is for S101 and +SV is for S103, as shown inFIG.2, but not limited thereto. In an embodiment, S101 may be +SV and S103 may be −SV. In another embodiment, there may be a DC bias voltage VBIASand VBIAS≠0; in such a situation the driving signals are S101=VBIAS−SV and S103=VBIAS+SV. Variations such as this shall be considered as within the scope of this disclosure. In addition,FIG.2demonstrates the difference in toggling rate between the modulation-driving signal SM and the demodulation-driving signal ±SV. The relative phase delay, meaning the timing alignment, between the modulation-driving signal SM and the demodulation-driving signal ±SV may be adjusted according to practical requirements. In an embodiment, the driving circuit for generating the signals SM and ±SV may comprise a sub-circuit, which is configured to produce a (relative) delay between the modulation-driving signal SM and the demodulation-driving signal ±SV. Details of the sub-circuit producing the delay are not limited. Known technology can be incorporated in the sub-circuit. As long as the sub-circuit can generate the delay to fulfill the timing alignment requirements (which will be detailed later), the requirements of the present invention are satisfied, which will be within the scope of the present invention. Note that the tips of the flaps101and103are at substantially the same location (the center location between the sidewalls111L and111R) and experience substantially the same air pressure at that location. In addition, the flaps101and103move differentially. Hence, the movements of the tips of the flaps101and103exhibit a common mode rejection behavior, similar to the common mode rejection known in the field of analog differential OP-amplifier circuits, which means that the displacement difference of the tips of the demodulating flaps101and103, or |d101−d103|, is barely impacted by the air pressure formed by the modulating flaps105and107. The common mode rejection, or modulator-to-demodulator isolation, can be evidenced byFIG.3.FIG.3illustrates simulated results generated from an equivalent circuit model of the device100. Curves d101and d103represent the movements/displacements of the tips of the flaps101and103, respectively. As can be observed inFIG.3, even though d101and d103fluctuate quite significantly due to the acoustic pressure generated by the modulating flaps105/107(P104), the differential movement, represented by the curve denoted by d101−d103inFIG.3, remains (substantially) consistent. That is, the width/gap of the valve opening112would be consistent even when the modulating portion104operates. In other words, modulator movement produces negligible impact on the functionality and performance of the demodulator, which is what “modulator-to-demodulator isolation” means.
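The relative timing of the driving signals inFIG.2can be emulated with simple square waves. In the Python sketch below the waveform shapes and amplitudes are assumptions and only the toggling rates follow the text: SM toggles twice per operating cycle TCY while ±SV toggles once per TCY, and S101/S103 form a differential pair. The sketch also computes the carrier wavelength λUC=C/fUCthat sets the chamber dimensioning described earlier, using an approximate room-temperature speed of sound.

import numpy as np

f_uc = 160_000                 # ultrasonic carrier frequency fUC (example value from the text)
t_cy = 1 / f_uc                # operating cycle TCY
t = np.linspace(0, 4 * t_cy, 4000, endpoint=False)

env = 0.8                      # stand-in for the slowly varying audio envelope of SM
sm = env * np.sign(np.sin(2 * np.pi * f_uc * t + 0.1))  # toggles twice per TCY (rate 2*fUC)
sv = np.sign(np.sin(np.pi * f_uc * t + 0.1))            # toggles once per TCY (rate fUC)
s101, s103 = -sv, +sv          # differential demodulation pair (S101 = -SV, S103 = +SV)

def toggles(x):
    # count polarity changes of a square-ish waveform
    return int(np.count_nonzero(np.diff(np.sign(x))))

print("SM toggles over 4 cycles:", toggles(sm))  # 8 -> twice per operating cycle
print("SV toggles over 4 cycles:", toggles(sv))  # 4 -> once per operating cycle

C = 343.0                      # approximate speed of sound in air, m/s
print("lambda_UC = C/fUC = %.2f mm" % (C / f_uc * 1e3))  # sidewall spacing W115 is about one wavelength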
On the other hand, as for demodulator-to-modulator isolation, since the flaps101/103produce a 1storder harmonic resonance or standing wave within the chamber115, as can be seen fromFIG.1, the pressure exerted by P102 on the flap105and the flap107would have substantially the same magnitude but opposite polarity, causing the movements of the flap105and the flap107to experience changes (due to P102) that are also of the same magnitude but of opposite polarity. This will produce two ultrasonic waves (one by105, the other by107) that also change with the same magnitude but opposite polarity. When these two ultrasonic waves propagate to the location above the valve opening112(indicated by the dotted area shown inFIG.1), they are merged into one pressure. Since the location of this “merge” occurs at the center of the device100, along the X-axis or X direction, with equal distance from the tips of105and107, the P102 induced changes would cancel/compensate each other and produce a net result that is largely free from the interference of the demodulator/virtual-valve operation. Illustratively,FIG.4plots a simulated frequency response of the SPL (sound pressure level), measured at 1 meter away from the device100, under the condition that SINis a 10-tone equal amplitude test signal (within 650˜22K Hz and with equal log scale spacing) and an equivalent circuit simulation model of the device100is used. In the current simulation, the ultrasonic carrier frequency is set as fUC=192 KHz and the valve operating frequency is set as fD_V=fUC/2=96 KHz. The demodulator-to-modulator isolation can be evidenced by the absence of extraneous spectral components at and around 96 KHz (pointed by the block arrow inFIG.4), indicating a high degree of isolation. As a result, the interference of the movements of these two flap-pairs (101/103versus105/107) is minimized through the common mode (on modulator) versus differential-mode (on demodulator) orthogonality/arrangement. In addition, the percentage of time the valve remains open, or duty factor, is a critical factor affecting the output of the device100. Increasing the amplitude of the driving voltages S101 and S103 can increase the amplitude of the movements of the flaps101and103, which will increase the maximum open width of the valve opening112, and raising the driving voltage also raises the duty factor of the valve opening. In other words, the duty factor of the valve opening112and the maximum open width/gap of the valve opening112can be determined by the driving voltages S101 and S103. When the opening duty factor of the valve approaches 50%, such as the example shown inFIG.5, which is generated from one of the equivalent circuit simulation models mentioned previously, the period of each valve opening, shown as the curve labeled V(opening)>0, overlaps with the same half-cycle of the amplitude modulated ultrasonic standing wave at the location atop the valve opening112(indicated by the dotted region inFIG.1). By synchronizing and timing-aligning the opening-closing of the valve opening112to the in-chamber standing wave, illustrated as the curve labeled V(p_vlv) inFIG.5, a nicely shaped output pressure pulse, illustrated as the curve labeled V(ep_vlv), is produced. InFIG.5, the curve labeled V(d2)−V(d3) represents the difference in displacement of the flaps101and103, i.e., d101−d103, and the curve labeled V(opening) represents the degree of opening of the virtual valve112.
V(opening)>0 when |V(d2)−V(d3)|>TH, where TH is a threshold defined by parameters such as the thickness of the flaps101and103, the width of the slit between the flaps101and103, the boundary layer thickness, etc. V(ep_vlv) being nicely shaped may refer to the pulses illustrated by V(ep_vlv) being highly asymmetric, unlike V(p_vlv) which is highly symmetric. The asymmetricity of the output pressure pulses would demonstrate the low frequency component (i.e., the frequency component in the audible band) of the air pulses generated by the air pulse generating device, or APG device for short, which is a desirable feature for the APG device. The higher the asymmetricity is, the stronger the baseband frequency component of the air pulses will be. A zoomed-out view ofFIG.5is illustrated inFIG.6, showing the asymmetricity of V(ep_vlv) corresponding to the envelope of the baseband sound signal of 1.68 KHz. In the present invention, the opening (112) is opened/formed or in an opened status when the difference in displacement of the flaps101and103is larger than a threshold, e.g., |V(d2)−V(d3)|>TH, and is closed or in a closed status otherwise. Furthermore, it is observed that the maximum output will occur when the duty factor of the valve opening, defined as |V(d2)−V(d3)|>TH, is equal to or slightly larger than 50%, such as in the range of 55˜60%, but not limited thereto. However, when the duty factor of the valve opening is significantly higher than 50%, such as 80˜85%, more than a half-cycle of the in-chamber ultrasonic standing wave will pass through the valve, leading portions of the standing wave with different polarities to cancel each other out, resulting in lower net SPL output from the device100. It is therefore generally desirable to keep the duty factor of the valve opening close to 50%, typically in the range between 50% and 70% (where a duty factor in the range between 45% and 70% is within the scope of the present invention). In addition to the duty factor, to ensure the modulator-to-demodulator isolation, the resonance frequency fR_Vof the demodulating flaps101/103is suggested to deviate sufficiently from the ultrasonic carrier frequency fUC, which is another design factor. It can be observed (from the equivalent circuit simulation model) that, under the constraint that the valve open duty factor equals 50%, for any given thickness of the flaps101/103, the higher the resonance-to-driving ratio (fR_V:fD_Vor fR_V/fD_V) is, the wider the valve can open. Since the output of the device100is positively related to the maximum width the valve opens, it is therefore desirable to have the resonance-to-driving ratio higher than 1. However, when fR_Vfalls within the range of fUC±max(fSOUND), the flaps101/103will start to resonate with the AM ultrasonic standing wave, converting a portion of the ultrasound energy into common mode deformation of the flaps101/103, where max(fSOUND) may represent the maximum frequency of the input audio signal SIN. Such common mode deformation of the flaps101/103will cause the volume atop the flaps101/103to change, resulting in fluctuation of the pressure inside the chamber115at the vicinity of the valve opening112, over the affected frequency range, leading to depressed SPL output. In order to avoid valve resonance induced frequency response fluctuations, it is preferable to design the flaps101/103with a resonance frequency outside of the range of (fUC±max(fSOUND))×M, where M is a safety margin for covering factors such as manufacturing tolerance, temperature, elevation, etc., but not limited thereto.
As a rule of thumb, it is generally desirable to have fR_Veither significantly lower than fUCas in fR_V≤(fUC−20 KHz)×0.9 or significantly higher than fUCas in fR_V≥(fUC+20 KHz)×1.1. Note that 20 KHz is used here because it is well accepted as the highest human audible frequency. In applications such as HD-/Hi-Res Audio, 30 KHz or even 40 KHz may be adopted as max(fSOUND), and the formula above should be modified accordingly. In addition, suppose w(t) and z(t) represent functions of time for the amplitude-modulated ultrasonic acoustic/air wave UAW and the ultrasonic pulse array UPA (comprising the plurality of pulses). Since the opening112is formed periodically at the opening rate of the ultrasonic carrier frequency fUC, a ratio function of z(t) to w(t), denoted as r(t) and expressed as r(t)=z(t)/w(t), is periodic with the opening rate of the ultrasonic carrier frequency fUC. In other words, z(t) may be viewed as a multiplication of w(t) and r(t) in the time domain, i.e., z(t)=r(t)·w(t), and the synchronous demodulation operation performed on the UAW can be viewed as the multiplication of w(t) by r(t) in the time domain. It implies that Z(f) may be viewed as a convolution of W(f) and R(f) in the frequency domain, i.e., Z(f)=R(f)*W(f), where * denotes the convolution operator, and the synchronous demodulation operation performed on the UAW can be viewed as the convolution of W(f) with R(f) in the frequency domain. Note that, when r(t) is periodic in the time domain with the rate of the frequency fUC, R(f) is discrete in the frequency domain, where the frequency/spectrum components of R(f) are equally spaced by fUC. Hence, the convolution of W(f) with R(f), or the synchronous demodulation operation, involves/comprises the step of shifting W(f) (or the spectral components of the UAW) by ±n×fUC(with integer n). Herein, r(t)/w(t)/z(t) and R(f)/W(f)/Z(f) form Fourier transform pairs. FIG.7is a schematic diagram of an APG device200according to an embodiment of the present invention. The device200is similar to the device100, and thus the same notations are used. Different from the device100, the device200further comprises an enclosing structure (enclosure)14. A chamber125is formed between the enclosing structure14and the cap structure11. Note that vents113L/R are formed within the ceiling117, located at λUC/4 from the sidewalls111L/R, respectively, on the nodes of the ultrasonic standing pressure wave P104, as indicated by lines135/137. The purpose of the vents113L/R inFIG.7is to allow the airflow generated during the demodulation operation (as indicated by the two dashed 2-way pointed-curves between112and113L/R) to be vented from the chamber115, such that the difference between the average pressure inside the chamber115and outside in the ambient is minimized. The function of the chamber125is to disrupt the spectral components carried by the airflow into the chamber125, preventing this airflow from forming an additional audible sound signal. By locating the vents113L/R on the nodes of the standing pressure wave, the spectral components surrounding fUCare prevented from exiting the chamber115, allowing the demodulation to form the UPA (ultrasonic pulse array) and produce the desired APPS (air pressure pulse speaker) effect. In the present invention, an APG device having the APPS effect generally means that the baseband frequency component (especially the frequency component in the audible band) embedded within the air pulses output by the APG device at the ultrasonic carrier frequency is not only observable but also of significant intensity.
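Two of the quantitative statements above lend themselves to a toy numerical check: the valve duty factor (the fraction of time |V(d2)−V(d3)|>TH) and the claim that baseband (audible) intensity comes from the asymmetricity of the output pulses. The Python sketch below uses idealized waveforms and assumed parameters throughout; it is illustrative only, not a model of the actual device.

import numpy as np

# Part 1: duty factor of the valve opening for an idealized sinusoidal
# differential displacement and a few assumed thresholds TH.
f_v = 96_000                                   # valve driving frequency fD_V (example)
t = np.linspace(0, 10 / f_v, 100_000, endpoint=False)
d_diff = np.sin(2 * np.pi * f_v * t)           # normalized d101 - d103

for th in (0.3, 0.5, 0.7):                     # assumed thresholds relative to drive amplitude
    duty = np.mean(np.abs(d_diff) > th)        # fraction of time the valve counts as open
    print("TH = %.1f -> duty factor = %.0f%%" % (th, 100 * duty))
# Raising the drive amplitude relative to TH raises the duty factor;
# the text recommends keeping it roughly in the 50-70% range.

# Part 2: asymmetric pulses carry baseband content, symmetric ones do not.
fs, f_uc, f_audio = 2_000_000, 160_000, 1_000
t2 = np.arange(0, 0.02, 1 / fs)
env = 1.0 + 0.5 * np.sin(2 * np.pi * f_audio * t2)   # audio envelope
carrier = np.sin(2 * np.pi * f_uc * t2)
symmetric = env * carrier                            # +/- half-cycles cancel at baseband
asymmetric = env * np.clip(carrier, 0.0, None)       # half-wave gated (asymmetric pulses)

def baseband_level(x, f):
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return spec[int(round(f * len(x) / fs))]

print("baseband tone, symmetric :", baseband_level(symmetric, f_audio))   # ~0
print("baseband tone, asymmetric:", baseband_level(asymmetric, f_audio))  # significant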
For an APG device producing the APPS effect, the spectrum of the electrical input signal SINwill be reproduced acoustically within the baseband of the audible spectrum (low frequency compared to the carrier frequency) via producing the plurality of air pulses by the APG device, which is suitable for being used in sound producing applications. The intensity of the baseband produced through the APPS effect is related to the amount of, or degree of, asymmetricity of the air pulses produced by the APG device, where asymmetricity will be discussed later. Note that the supporting structures123L and123R of the device100or200have parallel and straight walls (with respect to the X-axis), where the space/channel between123L and123R functions as a sound outlet. Simulation results using FEM (finite element method) show that, when the frequency rises above 350 KHz, lateral standing waves, along the X direction, start to be formed between the walls of123L/123R, and the output starts to self-nullify. Such a lateral-resonance induced self-nullifying phenomenon causes the energy transfer ratio over the height of the walls of123L-123R (in the Z direction) to degrade. To bypass this problem, a horn-shaped outlet is proposed. For example,FIG.8is a schematic diagram of a portion of an APG device300according to an embodiment of the present invention. Similar to the device100, the device300comprises the flaps101and103, anchored on the supporting structures123L″ and123R″, respectively, and configured to form the opening112to produce a plurality of air pulses via an outlet320toward an ambient. Different from the supporting structures123L and123R of the device100which have straight and parallel walls, the walls of the supporting structures123L″ and123R″ of the device300are oblique and have a non-right angle θ with respect to the X-axis or X direction, such that the outlet320with a horn shape is formed. The non-right angle θ may be designed according to practical requirements. In an embodiment, the non-right angle θ may be 54.7°, but not limited thereto. In the present invention, the horn-shaped outlet generally refers to an outlet with an outlet dimension or a tunnel dimension which is gradually widened from the film structure toward an ambient. FIG.9andFIG.10illustrate frequency responses of the energy transfer ratio of the devices100and300, respectively, for 8 different displacements of the flaps101and103, where Dvv=k means the displacement of the tip of each flap is k μm, which produces a differential movement of 2k μm.FIG.9andFIG.10are simulated by using FEM. By comparingFIG.9andFIG.10, the device100produces an energy transfer ratio that starts to roll off above 170 KHz, with a few jumps and dips as the frequency rises above 170 KHz; while the device300produces an energy transfer ratio that retains a rising trend roughly above 120 KHz, with a much smoother frequency response for frequencies above 170 KHz. This means the frequency response of the energy transfer ratio (above 170 KHz) of the device300is much smoother than that of the device100, which is beneficial for an APG device operating at the ultrasonic pulse rate (i.e., the ultrasonic carrier frequency fUC) and its higher order harmonics (e.g., n×fUC). Furthermore, the device300produces an energy transfer ratio roughly 5 times higher than that produced by the device100. Hence, it can be validated fromFIG.9andFIG.10that the horn-shaped outlet brings a better energy transfer ratio for the APG device. FIG.11shows an embodiment of a two-step etching/manufacturing method to etch walls at two different angles.
First, the wall of123R″/123L″ is etched with a tapered angle (as shown inFIG.11(b)), and the tapered wall is then covered by photoresist or spin-on dielectric using a spray coating method (as shown inFIG.11(c)). The photoresist or spin-on dielectric is then patterned by photolithography methods (as shown inFIG.11(d)), followed by the etching of the walls of124L and124R at a straight angle (as shown inFIG.11(e)). The fabrication method provided above is for illustration purposes only, and the scope of the invention is not limited thereto. FIG.12is a schematic diagram of an APG device400according to an embodiment of the present invention. The device400is modified from FIG. 7 of U.S. application Ser. No. 17/553,806 and similar to the device100shown inFIG.1of the present invention. Different from the device100, the device400comprises only the flap pair102(but no flap pair104). The flap pair102is configured to perform both the modulation operation (which is to form the amplitude-modulated air pressure variation with the ultrasonic carrier frequency fUC) as well as the demodulation operation (which is to form the opening112, synchronous to the amplitude-modulated ultrasonic carrier at frequency fUC, to produce air pulses according to the envelope of the said amplitude-modulated ultrasonic air pressure variation). InFIG.12, P104 and U104 represent the pressure profile and airflow profile formed by the flap pair102in response to the modulation-driving signal SM, and P102 and U102 represent the pressure profile and airflow profile formed by the flap pair102in response to the demodulation-driving signal ±SV. Herein the demodulation-driving signal is denoted by ±SV to emphasize that the flap pair102is driven differentially (which implies the demodulation-driving signals +SV and −SV have the same magnitude but opposite polarity) to perform the demodulation operation. For example, S101 and/or S103 above may be represented by −SV and/or +SV. In other words, the modulator and demodulator are co-located at/as the flap pair102. Like the device100, the film structure10of the flap pair102of the device400is actuated to have both a common mode movement to perform the modulation and a differential mode movement to perform the demodulation. In other words, the “modulation operation” and the “demodulation operation” are performed by the same flap pair102, at the same time. This colocation of the “modulation operation” together with the “demodulation operation” is achieved by new driving signal wiring schemes such as those shown inFIG.13. Given that the device400may comprise an actuator101A/103A disposed on the flap101/103and the actuator101A/103A comprises a top electrode and a bottom electrode, both of the top and bottom electrodes may receive the modulation-driving signal SM and the demodulation-driving signal ±SV. In an embodiment, one electrode of the actuator101A/103A may receive the common mode modulation-driving signal SM; while the other electrode may receive the differential mode demodulation-driving signal S101(−SV)/S103(+SV). For example, diagrams431to433shown inFIG.13illustrate details of a region430shown inFIG.12. As shown in the diagrams431and432, the bottom electrodes of the actuators101A/103A receive the common mode modulation-driving signal SM; while the top electrodes of the actuators101A/103A receive the differential mode demodulation-driving signal S101(−SV)/S103(+SV).
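The effect of such a wiring scheme can be illustrated with a small numerical sketch: whatever the wiring, the common-mode component (a+b)/2 of the two applied signals should carry SM (plus any VBIAS), while the differential-mode component (a−b)/2 should carry SV. The waveforms and values below are stand-in assumptions, not the actual driving signals.

import numpy as np

t = np.linspace(0, 1e-4, 1000)
sm = 0.8 * np.sign(np.sin(2 * np.pi * 320e3 * t + 0.001))  # stand-in modulation signal SM
sv = np.sign(np.sin(2 * np.pi * 160e3 * t + 0.001))        # stand-in demodulation signal SV
v_bias = 10.0

a = v_bias - sm - sv   # applied to one actuator (comprises -SM-SV, per the text)
b = v_bias - sm + sv   # applied to the other actuator (comprises -SM+SV)

common = (a + b) / 2   # = VBIAS - SM -> carries the modulation signal
diff = (a - b) / 2     # = -SV       -> carries the demodulation signal
print(np.allclose(common, v_bias - sm), np.allclose(diff, -sv))  # True True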
A suitable bias voltage VBIASmay be applied to either the bottom electrode (as diagram432shows) or the top electrode (as diagram433shows), where the bias voltage VBIAScan be determined according to practical requirements. In an embodiment (shown in diagram433), one electrode of the actuator101A/103A may receive both the common mode modulation-driving signal SM and the differential mode demodulation-driving signal S101(−SV)/S103(+SV); while the other electrode is properly biased. In the embodiment shown in diagram433, the bottom electrodes receive the common mode modulation-driving signal SM and the differential mode demodulation-driving signal S101(−SV)/S103(+SV); while the top electrodes are biased. The driving signal wiring schemes shown inFIG.13achieve the goal that (without considering VBIAS) an applied signal of one actuator (e.g.,101A) is or comprises −SM−SV while an applied signal of the other actuator (e.g.,103A) is or comprises −SM+SV. Note that, the driving signal wiring schemes may be modified or altered according to practical situations/requirements. As long as a common-mode signal component between the two applied signals applied on the flap pair102comprises the modulation-driving signal SM (plus VBIAS) and a differential-mode signal component between the two applied signals applied on the flap pair102comprises the demodulation-driving signal SV, the requirements of the present invention are satisfied, and such a scheme is within the scope of the present invention. Herein (or generally), a common-mode signal component between two arbitrary signals a and b may be expressed as (a+b)/2; while a differential-mode signal component between two arbitrary signals a and b may be expressed as (a−b)/2. Further note that, in order to minimize the cross coupling between the modulation operation (as a result of the driving signal SM) and the demodulation operation (as a result of the driving signal ±SV), in an embodiment, the flaps101and103are made into a mirrored/symmetric pair in their mechanical construct, dimensions and electrical characteristics. For instance, the cantilever length of the flap101should equal that of the flap103; the membrane structure of the flap101should be the same as that of the flap103; the location of the virtual valve112should be centered between, or equally spaced from, the two supporting walls110of the flap101and the flap103; the actuator pattern deposited on the flap101should mirror that of the flap103; and the metal wiring to the actuators deposited atop the flaps101and103should be symmetrical. Herein, a few items are named for the mirrored/symmetric pair (i.e., for the flaps101and103being mirrored/symmetric), but not limited thereto. FIG.14illustrates a set of frequency response measurement results of a physical embodiment of the device400in an IEC711 occluded ear emulator, where the driving scheme shown in diagram431is used to drive the device400, the modulation-driving signal SM on the bottom electrodes is 6 Vrms, the peak-to-peak voltage (Vpp) of the demodulation-driving signal ±SV on the top electrodes is swept from 5 Vpp to 30 Vpp, and a GRAS RA0401 ear simulator is used for measuring the acoustic results. The operating frequency (i.e., the ultrasonic carrier frequency fUC) of the device400is 160 KHz, and the device dimension is designed accordingly (e.g., W115≈λUC=C/fUC≈2.10 mm for C=336 m/s). As can be seen fromFIG.14, the device400is able to produce sound of high SPL in the low frequency band (at least 99 dB for frequencies less than 100 Hz). Furthermore,FIG.15illustrates an analysis of the measurement results of the device400shown inFIG.14.
InFIG.15, the SPL at 100 Hz (bold dashed line) and 19 Hz (bold solid line) ofFIG.14is plotted versus Vvtop (Vpp), where Vvtop (Vpp) is the peak-to-peak voltage of the demodulation-driving signal applied on the top electrodes, as shown in the connection diagram431. It can be seen fromFIG.14andFIG.15that the SPL increases as Vvtop increases. In addition, simulation results of an equivalent lumped-circuit model of the device100also concurred that the SPL increases as the amplitude of the (valve-driving or) demodulation-driving signal increases. Therefore, a volume of a sound produced by the air-pulse generating device of the present invention may be controlled via an amplitude of the demodulation-driving signal. Based on the results fromFIG.14andFIG.15, it can be concluded that the concept of modulator-demodulator co-location is validated, meaning that the modulation (forming the amplitude-modulated ultrasonic air pressure variation) and the demodulation (forming the opening synchronously to produce asymmetric air pulses) performed by the device400successfully produce the APPS effect. Hence, it may be possible to shrink the chamber width (e.g., W115 of the device100). For example,FIG.16is a schematic diagram of an APG device500according to an embodiment of the present invention. The device500is similar to the device400, where the flap pair102is also driven via one of the driving schemes shown inFIG.13, but not limited thereto. Compared to the device400, the chamber width W115′ of the device500is reduced by half. In an embodiment, the chamber width W115′ of the device500may be λUC/2. Furthermore, a standing wave within the chamber, such as115ofFIG.12or115′ ofFIG.16, may not be required, which means the chamber width (W115) does not have to be (related to) λUCor λUC/2, and there is no need to form/maintain/reflect a planar wave between the sidewalls111R/111R′ and111L/111L′. It is thus free/flexible to change the shape of the chamber to optimize other factors, e.g., reducing the chamber length to enhance the sound producing efficiency, which can be evaluated by the SPL per area (mm²) of the device. FIG.17is a schematic diagram of an APG device600according to an embodiment of the present invention. The device600may comprise subassemblies610and640. In an embodiment, the subassemblies610and640may be fabricated via known MEMS processes, and be bonded together through the layer620using bonding or adhesive material such as dry film or other suitable die attach materials/methods. The subassembly610by itself may be viewed as an APG device (which will be detailed later inFIG.26and related paragraphs), which comprises the flap pair102or the film structure10. The subassembly640may be viewed as a cap structure. Similar to the device500, the device600comprises the flap pair102with the flaps101and103driven via one of the driving schemes shown inFIG.13, but not limited thereto, and the flap pair102of the device600is actuated to form the amplitude-modulated ultrasonic air pressure variation with the ultrasonic carrier frequency fUC, to form the opening112at a rate synchronous with the ultrasonic carrier frequency fUC, and to produce a plurality of air pulses via an outlet toward the ambient according to the ultrasonic air pressure variation. Different from the device500, a conduit630is formed within the device600. The conduit630connects the air volume above the virtual valve112(the slit between the flaps101and103) outward to the ambient. The conduit630comprises a chamber631, a passageway632and an outlet633(or zones631-633).
The chamber631is formed between the film structure10and the cap structure (subassembly)640. The passageway632and the outlet633are formed within the cap structure (subassembly)640. The chamber631may be viewed as a semi-occluded compression chamber, where the air within the compression chamber631may be compressed or rarefied in response to the common-mode modulation-driving signal SM, and the ultrasonic air pressure variation/wave may be generated and directly fed into the passageway632via an orifice613. The passageway632serves as a waveguide, where the shape and dimensions thereof should be optimized to allow the pressure variation/pulse generated in the zone/chamber631to propagate outward efficiently. The outlet633is configured to minimize reflections/deflections and maximize the acoustic energy coupling to the ambient. To achieve that, a tunnel dimension (e.g., a width in the X direction) of the outlet633is gradually widened toward the ambient, and the outlet633may have a horn shape. In an embodiment, a length/distance L630of the conduit630between the opening112(equivalently, the flap pair102or the film structure10) and a surface650may be (substantially) a quarter wavelength λUC/4 corresponding to fUC(with, for example, ±10% tolerance). For example, L630may be 450 μm for the case of fUC=192 KHz, but is not limited thereto. Note that, (referring back toFIG.16) it is observed that an air pressure wave (as a kind of air pressure variation) propagates within the chamber115′ of the device500(or the chamber115within the device100) along the X direction, and a distance between the virtual valve (opening)112and the sidewall surfaces111L′/111R′ is λUC/4. InFIG.17, the device600may be viewed as folding/rotating the air wave propagation path by 90° to align with the Z direction, such that the air wave or air pressure pulse is emitted via the Z direction toward the ambient directly. FIG.18illustrates a snapshot of an FEM simulated pressure profile of a device similar to the device600, according to an embodiment of the present invention. InFIG.18, auxiliary arrows are presented to indicate the polarity/sign of the pressure values. The difference between the device600and the device shown inFIG.18is that a chamfer635is added on the subassembly640at the interface between the chamber631and the passageway632to minimize disturbance to the airflow. InFIG.18, the pressure within the zone631is about +500 Pa, and the pressure within the zone632close to633is about −500 Pa. The brightest zone represents a pressure nodal plane. Note that, the nodal plane within the zone632indicates proper forming of the wave propagation, and the space/distance between the nodal plane in the zone632and the nodal plane outside the device is about 1.2×λ/2 (herein λ=346 (m/s)/192 (KHz)≈1.8 mm), which is close to (and slightly larger than) λ/2. This implies that non-interrupted pressure wave propagation at the speed of sound exists. In other words, the pressure pulses or air waves generated by the film structure of the device600radiate toward the ambient, as shown inFIG.18. FIG.19illustrates IEC711 occluded ear coupler SPL measurement results versus frequency of a physically implemented device600, where the results corresponding to the demodulation-driving signal ±SV with 20 Vpp and 15 Vpp are plotted. Also, the parameters of the devices400and600for producing the maximum SPL are compared in TABLE I.
TABLE I

            Device 400             Device 600
SV          30 Vpp                 20 Vpp
SM          6 Vrms (16 Vpp)        5 Vpp
SPL         142.39 dB at 19 Hz     143.52 dB at 19 Hz
            131.44 dB at 100 Hz    133.44 dB at 100 Hz
Die Size    50 mm²                 30 mm²

As can be seen fromFIG.14,FIG.19and TABLE I, the device600can achieve slightly higher SPL than the device400with lower input amplitudes while reducing the die size by 40% at the same time. It means the device600with the conduit630is far more efficient both in terms of the power consumed and in terms of the silicon space/area occupied. In general, a width W631 of the chamber631is significantly less than λUC/2; for example, in the example of the device600, W631≈570 μm while λUC/2≈900 μm. For the zone631to perform chamber compression, the dimensions of the chamber631should be much smaller than λUC. In an embodiment, a height H631of the chamber631may be less than λUC/5, i.e., H631<λUC/5. Note that, the width of the chamber631(i.e., a dimension in the X direction) may become narrower from the film structure10toward the passageway632, either in a staircase fashion or a tapered fashion, where both cases are within the scope of the present invention. FIG.20is a schematic diagram of an APG device700according to an embodiment of the present invention. Similar to the device600, the device700comprises subassemblies710and740, and has a conduit730formed therewithin. The subassembly710may be fabricated by a MEMS process, and may also be viewed as an APG device. A chamber705is formed within the subassembly710. The subassembly710may itself be an APG device, which can be viewed as a combination of the squeeze mode operation disclosed in U.S. Pat. No. 11,172,310, the virtual valve disclosed in U.S. Pat. No. 11,043,197, and the driving scheme illustrated inFIG.13, where U.S. Pat. Nos. 11,172,310 and 11,043,197 are incorporated herein by reference. The conduit730comprises a chamber731, a passageway/waveguide732and a horn-shaped outlet733(or zones731-733), and connects the air volume below the virtual valve112outward to the ambient. Different from the device600, the subassembly740may be formed/fabricated via technologies such as 3D printing, precision injection molding, stamping, etc. The passageway/waveguide732comprises a first section, which is the orifice713etched on the cap of the subassembly710, and a second section, which is formed within the subassembly740, where a chamfer735may be added therebetween to minimize disturbance. The chambers705and731are overlapped. The pressure variation/wave generated by the flaps101and103would be fed into the passageway/waveguide732directly. FIG.21is a schematic diagram of an APG device800according to an embodiment of the present invention. The device800comprises subassemblies810and840. The subassembly810may have the same or a similar structure as the device500, which can be fabricated by a MEMS process and be viewed as an APG device, and comprises the flaps101and103driven by one of the schemes shown inFIG.13, where the virtual valve (opening)112is formed. The subassembly840may be formed/fabricated via technologies such as 3D printing, precision injection molding, precision stamping, etc. Note that, via the (de)modulation operation, the subassembly810produces a plurality of airflow pulses. A conduit830, connecting the air volume below the virtual valve112outward to the ambient, is formed within the subassembly840. The conduit830comprises a (compression) chamber831, a passageway/waveguide832and a horn-shaped outlet833(or zones831-833). The compression chamber831is configured to convert the plurality of airflow pulses into a plurality of air pressure pulses.
Specifically, the chamber831would produce pressure pulses ΔPn∝P0_n·ΔMn/M0_n(Eq. 1), where M0_nis the airmass inside the chamber831before the start of the pulse cycle n and ΔMnis the airmass associated with the airflow pulse of the pulse cycle n. Eq. 1 represents the conversion of airflow pulses into air pressure pulses, and the converted air pressure pulses propagate into the passageway/waveguide832. In an embodiment, the subassembly840in the zone831may have a brass mouthpiece-like cross section profile. The passageway/waveguide832may have an impedance that is close to, matched to, or within ±15% of, that of the compression chamber831, so as to maximize the propagation efficiency of the pressure pulses generated in the zone831outward to the ambient. In an embodiment, the propagation efficiency may be optimized by properly choosing the cross section area of the passageway832. In the embodiment shown inFIG.21, a tunnel dimension (e.g., a width in the X direction) of the outlet833is gradually widened toward the ambient in a piece-wise linear manner (where θ1<θ2), such that a horn shape is formed. Note that, the horn shape of the outlet may be designed according to practical requirements. The tunnel dimension of the outlet can be widened in a polynomial manner, a purely linear manner, a piece-wise linear manner, a parabolic manner, an exponential manner, a hyperbolic manner, etc., but is not limited thereto. As long as the tunnel dimension of the outlet is gradually widened toward the ambient, the requirement of the present invention is satisfied, which is within the scope of the present invention. To perform chamber compression in the zone831, the dimensions of the chamber/zone831are suggested to be much smaller than the wavelength λUCcorresponding to the operating frequency fUC. For instance, in an embodiment of fUC=160 KHz and λUC=(346/160)≈2.16 mm, a height H831may be in a range of λUC/10˜λUC/60 (e.g., H831=λUC/35≈62 μm) and a width W815may be in a range of λUC/5˜λUC/30 (e.g., W815in a range of 115 μm˜350 μm), but not limited thereto. Note that, the film structure10subdivides a volume of space into a resonance chamber805on one side and a compression chamber831on the other side, and by nature of this subdivision, the displacements due to the common-mode movement of the flaps101and103, as observed from the space of the chamber805and that of the chamber831, will have exactly the same magnitude but opposite direction/polarity. In other words, along with the common mode movement of the flaps101and103, a push-pull operation will be formed, and such a push-pull operation will increase (e.g., double) the pressure difference across the flaps101and103, and thus the airflow will be increased when the virtual valve112is opened. Specifically, for the compression chamber831with volume V1 and the resonance chamber805with volume V2, a membrane/flap movement resulting in a volume difference DV (assuming DV<<V1, V2) would cause a pressure change in V1 of ΔPV1=1−V1/(V1−DV)=−DV/(V1−DV)≈−DV/V1 and a pressure change in V2 of ΔPV2=1−V2/(V2+DV)=DV/(V2+DV)≈DV/V2. The pressure difference between the two volumes may be ΔPV2−ΔPV1=DV/(V2+DV)+DV/(V1−DV). When V1≈V2≈Va, ΔPV2−ΔPV1≈DV/(Va+DV)+DV/(Va−DV)=DV·2Va/(Va²−DV²)≈2·DV/Va≈2·ΔPV2, which means that the push-pull operation can double the pressure difference between the two subspaces separated by the flaps101and103.
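The approximation chain above is easy to verify numerically; the following is a minimal sketch with illustrative (assumed) volumes:

```python
# Numeric check of the push-pull derivation above, with illustrative values.
V1 = V2 = Va = 1.0e-9      # equal chamber volumes (m^3), assumed for the check
DV = 1.0e-12               # volume displaced by the flap movement, DV << Va

dP_V1 = -DV / (V1 - DV)    # exact form of 1 - V1/(V1 - DV), as defined above
dP_V2 = DV / (V2 + DV)     # exact form of 1 - V2/(V2 + DV), as defined above

difference = dP_V2 - dP_V1
single_sided = DV / Va     # pressure change one chamber would see alone

print(f"difference / single-sided = {difference / single_sided:.6f}")
# -> ~2.0: the push-pull operation roughly doubles the pressure difference
```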
FIG.22is a schematic diagram of an APG device900according to an embodiment of the present invention. The device900comprises subassemblies910and940. The subassembly910may be fabricated by a MEMS process and may be viewed as an APG device. The subassembly940may be fabricated by 3D printing. Similar to the device700or the subassembly710, the subassembly910may also be viewed as a combination of the squeeze mode operation disclosed in U.S. Pat. No. 11,172,310, the virtual valve disclosed in U.S. Pat. No. 11,043,197, and the driving scheme illustrated inFIG.13. In the device900, the squeeze mode operating chamber905and the compression chamber931are separated; while in the device700, the squeeze mode operating chamber and the compression chamber are merged as the chamber731. The effects of the subassembly810and the subassembly910are similar in terms of airflow pulse generation, but their operation principles are different. The subassembly810exploits resonance; while the subassembly910exploits compression and rarefication of the squeeze mode operating chamber905caused by the membrane (flaps101,103) movement. Hence, the chamber width W905 no longer needs to fulfill any relationship with λUC, and thus, the size of the chamber905may be shrunk as much as practical/desired. FIG.23is a schematic diagram of an APG device A00 according to an embodiment of the present invention. Since resonance is not a requirement, the restriction of a rectangular cross-section of the chamber, such as the chamber905, can be removed, and there is more flexibility in geometry to optimize the pressure wave generation or the propagation of the wave out to the ambient. For example, the chamber A05 or the subassembly A40 may have a brass mouthpiece-like cross-section. Another aspect of the device A00 ofFIG.23is that of “direct pressure coupling”. Instead of first going through an orifice913as in the device900, the pressure wave generated in the compression chamber A05 of the device A00 is coupled directly to the conduit A32, and then goes out to the ambient via the outlet A33. Such direct coupling between the compression chamber and the conduit/outlet eliminates the loss incurred by the orifice913, resulting in a significant efficiency improvement over the device900. FIG.24is a schematic diagram of an APG device B00 according to an embodiment of the present invention. The device B00 is similar to the device A00. Different from the device A00, the device B00 further comprises a (cap) structure B11, and a chamber B05 is formed between the cap structure B11 and the film structure10. With the chamber A05 formed by one side of the film structure10and the chamber B05 formed by the other side of the film structure10, the push-pull operation may be performed, such that the airflow pulses may be enhanced. Note that, the air pulses produced by the subassemblies810and910may be viewed as airflow pulses, and the subassemblies840and940may be viewed as airflow-to-air-pressure converters, which have a trumpet-like cross section profile. On the other hand, the air pulses produced by the subassemblies610,710, A10 and B10 may be viewed as air pressure pulses, which create demodulated/asymmetric air pressure pulses directly and may be more efficient than the devices800and900. In addition, the subassembly with a conduit formed therewithin, or the subassembly having a conduit with a trumpet-like cross section profile, may also be applied to the APG devices disclosed in U.S. Pat. Nos. 10,425,732, 11,172,310, etc., filed by the Applicant, or other devices such as U.S. Pat. No. 8,861,752, but is not limited thereto. FIG.25demonstrates illustrations of the timing alignment of the virtual valve (VV)112opening for APG devices of the present invention.
InFIG.25, solid curves represent the flaps' common mode movement produced by the modulation-driving signal SM, and the darkness in the background represents the acoustic resistance corresponding to the virtual valve, where a darker shade means higher resistance (VV closed, resulting in the volume within the chamber being disconnected from the ambient) and a lighter shade means lower resistance (VV opened, resulting in the volume within the chamber being connected to the ambient). InFIG.25(a), the timing of the open status of the virtual valve (VV)112is aligned to the time when a maximum (a first peak) of the pressure within the chamber is achieved, which typically lies slightly before the flaps reach their most positive (a first peak) common-mode displacement; while the timing of the closed status of the virtual valve112is aligned to the time when a minimum (a second peak) of the pressure within the chamber is reached, which typically lies slightly before the flaps reach their most negative (a second peak) common-mode displacement. The timing alignment shown inFIG.25(a), where the maximum opening of the VV112is aligned to a first peak of the pressure within the chamber, is to maximize the pulse amplitude of the airflow pulses, which may be suitable for the devices100˜500(with a chamber but without a conduit formed therein). On the other hand, inFIG.25(b), inspired by the valve timing of gas/piston engines in the automobile industry, the timing of the open status of the virtual valve112is aligned to a maximum speed of the common mode movement of the membrane (flaps) moving toward a first direction; while the timing of the closed status of the virtual valve112is aligned to a maximum speed of the common mode movement of the membrane (flaps) moving toward a second direction opposite to the first direction. The first direction is a direction from the film structure toward the ambient. The timing alignment shown inFIG.25(b)is to maximize the volume of the airflow pulses, which may be suitable for the device600, or the devices700˜900, A00 and B00 (with a chamber comprising a conduit formed therein). FIG.26is a schematic diagram of an APG device C00 according to an embodiment of the present invention. The device C00 is similar to the APG devices previously introduced, which comprise the flaps101and103. The flaps101and103may also be driven by the driving scheme shown inFIG.13. Different from those devices, the device C00 comprises no cap structure. Compared to the APG devices introduced above, the device C00 has a much simpler structure, requires fewer photolithographic etching steps, does away with complicated conduit fabrication steps, and avoids the need to bond two sub-components or subassemblies together. The production cost of the device C00 is thus reduced significantly. Since there is no chamber formed under a cap structure to be compressed, the acoustic pressure generated by the device C00 arises mainly from the acceleration of the movement of the flaps101and103. By aligning the timing of the opening of the virtual valve112(in response to the demodulation-driving signal ±SV) to the timing of the acceleration of the common mode movement of the flaps101and103(in response to the modulation-driving signal SM), the device C00 would be able to produce asymmetric air (pressure) pulses. Note that, the space surrounding the flaps101and103is divided into two subspaces: one in Z>0, or the +Z subspace, and one in Z<0, or the −Z subspace. For any common mode movement of the flaps101and103, a pair of acoustic pressure waves will be produced, one in the subspace +Z, and one in the subspace −Z.
These two acoustic pressure waves will be of the same magnitude but of opposite polarities. As a result, when the virtual valve112is opened, the pressures of the two air volumes in the vicinity of the virtual valve112would neutralize each other. Therefore, when the timing of the differential mode movement reaching its peak, i.e., the timing at which the VV112reaches its maximum opening, is aligned to the timing of the acceleration of the common mode movement reaching its peak, the acoustic pressure supposed to be generated by the common mode movement shall be subdued/eliminated due to the opening of the virtual valve112, causing the auto-neutralization between the two acoustic pressures on the two opposite sides of the flaps101and103, where the two acoustic pressures have the same magnitude but opposite polarities. It means, when the virtual valve112is opened, the device C00 would produce (near) net-zero air pressure. Therefore, when the opened period of the virtual valve112overlaps a time period of one of the (two) polarities of the acceleration of the common mode flaps movement, the device C00 shall produce single-ended (SE) or SE-like air pressure waveforms/pulses, which are highly asymmetric. In the present invention, an SE(-like) waveform may refer to a waveform which is (substantially) unipolar with respect to a certain level. An SE acoustic pressure wave may refer to a waveform which is (substantially) unipolar with respect to the ambient pressure (e.g., 1 ATM). FIG.27demonstrates illustrations of the timing alignment of the virtual valve (VV) opening according to an embodiment of the present invention. The timing alignment scheme shown inFIG.27may be applied to the device C00. InFIG.27(a), the solid/dashed/dotted curve represents the displacement/velocity/acceleration of the common mode movement of the membrane (flaps101and103) in response to the modulation-driving signal SM, and, similar toFIG.25, the background darkness represents the acoustic resistance caused by the open-close action of the VV112. For illustration purposes, the waveform of the membrane/flaps movement inFIG.27(a)is assumed to be (or approximately plotted as) sinusoidal with constant amplitude, where the velocity/acceleration waveform is the 1st/2nd order derivative of the displacement waveform. As shown inFIG.27(a), the timing of the peak VV opening is aligned to the timing of a first peak acceleration of the common mode membrane/flaps movement toward a first direction. As discussed above, such timing alignment results in the auto-neutralization between the two acoustic pressure waves generated in the subspaces +Z and −Z, causing the net acoustic pressure to be suppressed, illustrated as the flattened portions of the SE air pressure waveform inFIG.27(b). Also illustrated inFIG.27(a), the timing of the VV being closed is aligned to the timing of a second peak acceleration of the common mode membrane/flaps movement toward a second direction, which is opposite to the first direction. Since the VV is closed during/around the second peak acceleration, the acoustic pressure generated by the second peak acceleration of the flaps101and103is able to radiate away from the flaps101and103, resulting in a highly asymmetric acoustic pressure wave as illustrated by the half-sine portions of the SE air pressure waveform inFIG.27(b). Note that, the opening of the virtual valve112does not determine the strength/amplitude of the acoustic pressure pulse, but determines how strong the “near net-zero pressure” (or the auto-neutralization) effect is.
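A minimal sketch of the FIG.27-style alignment, with the common-mode movement idealized as a constant-amplitude sinusoid and the virtual valve as a fully open/fully closed gate (both simplifying assumptions): zeroing the acceleration-proportional pressure while the valve is open flattens one half-cycle and leaves half-sine pulses of a single polarity, i.e., an SE-like waveform.

```python
import numpy as np

# Sketch of the FIG.27 timing alignment; waveforms and the binary gate are
# simplifying assumptions, not the actual device response.
fUC = 160e3
t = np.arange(0, 3 / fUC, 1 / (256 * fUC))
x = np.sin(2 * np.pi * fUC * t)               # common-mode displacement
accel = -(2 * np.pi * fUC) ** 2 * x           # 2nd derivative: acceleration
pressure = accel / np.max(np.abs(accel))      # radiated pressure ~ acceleration

# Virtual-valve opening aligned to one acceleration polarity: while open,
# the +Z and -Z pressures auto-neutralize (modeled here as exactly zero).
valve_open = accel > 0.0
se_pressure = np.where(valve_open, 0.0, pressure)

# The result is (substantially) unipolar: flat while the valve is open,
# half-sine pulses of a single polarity while it is closed.
print(f"max: {se_pressure.max():.2f}, min: {se_pressure.min():.2f}")
```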
When the virtual valve112opening is wide, the “net-zero pressure” effect is strong, the auto-neutralization is complete, and the asymmetry will be strong/obvious, resulting in a strong/significant baseband signal or APPS effect. Conversely, when the virtual valve112opening is narrow, the “net-zero pressure” effect is weak, the auto-neutralization is incomplete, lowering the asymmetry and resulting in a weak baseband signal or APPS effect. In an FEM simulation, the device C00 can produce 145 dB SPL at 20 Hz. From the FEM simulation, it is observed that, even though the SPL produced by the device C00 is about 12 dB lower than that produced by the device600(about 157 dB SPL at 20 Hz) under the same driving condition, the THD (total harmonic distortion) of the device C00 is 10˜20 dB lower than that of the device600. Hence, the simulation validates the efficacy of the device C00, the APG device without a cap structure or without a chamber formed therewithin. Please note that, the statement of the timing of the VV opening being aligned to the timing of the peak pressure within the chamber, or to the peak velocity/acceleration of the common mode membrane movement, implicitly implies that a tolerance of ±e % is acceptable. That is, the case of the timing of the VV opening being aligned to (1±e %) of the peak pressure within the chamber or of the peak velocity/acceleration of the common mode membrane movement is also within the scope of the present invention, where e % may be 1%, 5% or 10%, depending on practical requirements. As for the pulse asymmetricity,FIG.28illustrates full-cycle pulses (within one operating cycle TCY) with different degrees of asymmetricity. In the present invention, the degree of asymmetricity may be evaluated by a ratio of p2to p1, where p1>p2, p1represents a peak value of a first half-cycle pulse with a first polarity with respect to a level, and p2represents a peak value of a second half-cycle pulse with a second polarity with respect to the level. In the acoustic domain, the level may correspond to the ambient condition, either the ambient pressure (zero acoustic pressure) or zero acoustic airflow, where air pulses in the present invention may refer to either airflow pulses or air pressure pulses. FIG.28(a)illustrates a full-cycle pulse with r=p2/p1>80%. The full-cycle pulse shown inFIG.28(a)or with r=p2/p1≈1 has a low degree of asymmetricity.FIG.28(b)illustrates a full-cycle pulse with 40%≤r=p2/p1≤60%. The full-cycle pulse shown inFIG.28(b)or with r=p2/p1≈50% has a median degree of asymmetricity.FIG.28(c)illustrates a full-cycle pulse with r=p2/p1<30%. The full-cycle pulse shown inFIG.28(c)or with r=p2/p1→0 has a high degree of asymmetricity. As discussed above, the higher the degree of asymmetricity is, the stronger the APPS effect and the baseband spectrum components of the ultrasonic air pulses will be. In the present invention, an asymmetric air pulse refers to an air pulse with at least a median degree of asymmetricity, meaning r=p2/p1≤60%.
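The ratio r=p2/p1is straightforward to compute from a sampled pulse; the following sketch assumes one full-cycle pulse given as acoustic pressure samples relative to the ambient level, with the grading thresholds taken from the FIG.28discussion above:

```python
import numpy as np

def asymmetricity_ratio(pulse):
    """r = p2/p1 for one full-cycle pulse, sampled as acoustic pressure
    relative to the ambient level (0); p1 is the larger half-cycle peak
    magnitude and p2 the smaller one."""
    p_pos = max(pulse.max(), 0.0)     # peak of the positive half-cycle
    p_neg = max(-pulse.min(), 0.0)    # peak magnitude of the negative half-cycle
    p1, p2 = max(p_pos, p_neg), min(p_pos, p_neg)
    return p2 / p1 if p1 > 0 else 0.0

t = np.linspace(0.0, 1.0, 200, endpoint=False)          # one operating cycle TCY
symmetric = np.sin(2 * np.pi * t)                       # r ~ 1
rectified = np.clip(np.sin(2 * np.pi * t), 0.0, None)   # r -> 0

for name, p in [("symmetric", symmetric), ("rectified", rectified)]:
    r = asymmetricity_ratio(p)
    grade = "high" if r < 0.3 else ("median" if r <= 0.6 else "low")
    print(f"{name}: r = {r:.2f} -> {grade} degree of asymmetricity")
```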
Note that, the demodulation operation of the APG device of the present invention is to produce asymmetric air pulses according to the amplitude of the ultrasonic air pressure variation, which is produced via the modulation operation. In one view, the demodulation operation of the present invention is similar to the rectifier in an AM (amplitude modulation) envelope detector in radio communication systems. In radio communication systems, as known in the art, an envelope detector, a kind of radio AM (noncoherent) demodulator, comprises a rectifier and a low pass filter. The envelope detector produces the envelope corresponding to its input amplitude modulated signal. The input amplitude modulated signal of the envelope detector is usually highly symmetric, with r=p2/p1→1. One goal of the rectifier is to convert the symmetric amplitude modulated signal such that the rectified amplitude modulated signal is highly asymmetric, with r=p2/p1→0. After low pass filtering the highly asymmetric rectified AM signal, the envelope corresponding to the amplitude modulated signal is recovered. The demodulation operation of the present invention, which turns the symmetric ultrasonic air pressure variation (with r=p2/p1→1) into asymmetric air pulses (with r=p2/p1→0), is similar to the rectifier of the envelope detector as an AM demodulator, where the low pass filtering operation is left to the natural environment and the human hearing system (or a sound sensing device such as a microphone), such that the sound/music corresponding to the input audio signal SINcan be recovered, perceived by a listener or measured by sound sensing equipment. It is crucial for the demodulation operation of the APG device to create asymmetricity. In the present invention, the pulse asymmetricity relies on the proper timing of the opening, which is aligned to the membrane (flaps) movement which generates the ultrasonic air pressure variation. Different APG constructs would have different methodologies of timing alignment, as shown inFIG.25andFIG.27. In other words, a timing of forming the opening112is designated such that the plurality of air pulses produced by the APG device is asymmetric. An APG device producing asymmetric air pulses may also be applied to air pump/movement applications, which may have cooling, drying or other functionality. In addition, power consumption can be reduced by proper cell and signal route arrangement. For example,FIG.29illustrates a top view of an APG device D00 according to an embodiment of the present invention, andFIG.30illustrates a cross sectional view of the device D00 along an A-A′ line shown inFIG.29. The device D00 comprises cells D01˜D08 arranged in an array. Each cell (D0x) may be one of the APG devices (e.g.,400˜C00) stated above. InFIG.30, cap structures and subassemblies with conduits formed therein are omitted for brevity. Assume all the flaps in the device D00 are driven by the driving signal scheme431, where the top electrodes receive either the signal +SV or the signal −SV and the bottom electrodes receive SM−VBIAS. InFIG.29, the long rectangles elongating along the Y direction represent the flaps or the top electrodes of the actuators disposed on the flaps. The shading in the background may represent that the bottom electrodes of the actuators are electrically connected. In the device D00, flaps (e.g.,101) receiving the signal −SV and flaps (e.g.,103) receiving the signal +SV are spatially interleaved. For example, when the flap103of the cell D01 receives the signal +SV, the flap101of the cell D02 is suggested to receive the signal −SV. This is because, when the signals +SV, −SV toggle polarity or during transition periods of the signals +SV, −SV, there will be a capacitive load (dis)charging current flowing through the bottom electrode in the X direction, and the effective resistance of the bottom electrode, RBT,P(where P refers to parallel current flow), will be low since L/W<<1, and the power consumption of the device D00 would be low, wherein L/W represents the channel length/width from the perspective of the (dis)charging current.
On the other hand, in a case where the driving signals −SV, +SV have been wired in a pattern of {+SV, −SV}, {−SV, +SV}, {+SV, −SV}, {−SV, +SV}, {+SV, −SV}, {−SV, +SV}, {+SV, −SV}, {−SV, +SV} (not shown inFIG.29), where each {·, ·} designates the pair of differential driving signals for one cell D0x, the load (dis)charging current would be in the Y direction, and the effective resistance of the bottom electrode, RBT,S(where S refers to series current flow), would be much higher (i.e., RBT,S>>RBT,P, since L/W>>1), and the power consumption of such a scheme would be higher. In other words, by utilizing the wiring scheme shown inFIG.29(taking the cells D01 and D02 as an example), given that the flap103of the cell D01 receiving the signal +SV is spatially disposed next to the flap101of the cell D02 receiving the signal −SV and the transition periods of the signals ±SV temporally overlap, the current from the bottom electrode of one flap (e.g.,103of D01) travels to a neighboring flap (e.g.,101of D02) directly, without needing to leave the device D00 from a pad and reenter the device D00 from another pad. Hence, the effective resistance of the bottom electrode is reduced significantly, and so is the power consumption. In addition, the operating frequency may be enhanced by incorporating multiple (e.g.,2) cells. Specifically, the Air Pressure Pulse Speaker (APPS) sound producing scheme using the APG devices of the present invention is a type of discrete time sampled system. On one hand, it is generally desirable to raise the sampling rate in such a sampled system in order to achieve high fidelity. On the other hand, it is desirable to lower the operating frequency of the device in order to lower the required driving voltage and power consumption. Instead of raising the operating frequency as the sampling rate for one APG device, it would be efficient to achieve a high pulse/operating rate by interleaving (at least) two groups of sub-systems with a low pulse/operating rate, temporally and spatially. FIG.31(showing the spatial arrangement) is a top view of an APG device E00 according to an embodiment of the present invention. The device E00 comprises two cells E11 and E12 disposed next/adjacent to each other. The cell E11/E12 may be one of the APG devices of the present invention. FIG.32(showing the temporal relationship) illustrates waveforms of two sets of (de)modulation-driving signals, A and B, intended for the cells E11 and E12. The set A comprises the demodulation-driving signal ±SV and the modulation-driving signal SM; while the set B comprises the demodulation-driving signal ±SV′ and the modulation-driving signal SM′. In the embodiment shown inFIG.32, the demodulation-driving signal +SV′/−SV′ of the signal set B is a delayed version of the demodulation-driving signal +SV/−SV of the signal set A. Specifically, the signal +SV′/−SV′ of the signal set B is the signal +SV/−SV of the signal set A delayed by TCY/2, half of the operating cycle, where TCY=1/fUCand fUCrepresents the operating frequency for the cell E11/E12. The modulation-driving signal SM′ of the set B may be viewed as an inverse of, or a polarity inversion version of, the modulation-driving signal SM of the set A. The signals SM and SM′ may have a relationship of SM′=−SM or SM+SM′=C, where C is some constant or bias.
For example, when the modulation-driving signal SM of the set A has a pulse with negative polarity with respect to a voltage level (shown as a dashed line inFIG.32) within a time period T22, the modulation-driving signal SM′ of the set B would have a pulse with positive polarity with respect to the voltage level within the time period T22. By providing one of the sets A and B to the cell E11 and the other of the sets A and B to the cell E12, the device E00 may produce a pulse array with a pulse/sampling rate of 2×fUC, where fUCis the operating frequency of each cell. FIG.33is a top view of an APG device F00 according to an embodiment of the present invention. The device F00 comprises cells F11, F12, F21 and F22, arranged in a 2×2 array. Each cell in the device F00 may be one of the APG devices of the present invention. Two of the cells F11, F12, F21 and F22 may receive the signal set A, and the other two cells may receive the signal set B. In an embodiment, the cells F11, F12 receive the signal set A and the cells F21, F22 receive the signal set B. In an embodiment, the cells F11, F22 receive the signal set A and the cells F12, F21 receive the signal set B. In an embodiment, the cells F11, F21 receive the signal set A and the cells F12, F22 receive the signal set B. Similar to the device E00, the device F00 also produces a pulse array with a pulse/sampling rate of 2×fUC. Note that, a conventional speaker (e.g., a dynamic driver) using physical surface movement to generate acoustic waves faces the problem of front-/back-radiating wave cancellation. When the physical surface moves to cause airmass movement, a pair of soundwaves, i.e., a front-radiating wave and a back-radiating wave, are generated. The two soundwaves would cancel most of each other out, causing the net SPL to be much lower than when the front- or back-radiating wave is measured alone. The commonly adopted solutions for the front-/back-radiating wave canceling problem are to utilize either a back enclosure or an open baffle. Both solutions require a physical size/dimension which is comparable to the wavelength of the lowest frequency of interest, e.g., a wavelength of 1.5 meters for a frequency of 230 Hz. Compared to a conventional speaker, the APG device of the present invention occupies only tens of square millimeters (much smaller than a conventional speaker), and produces tremendous SPL, especially at low frequency. This is achieved by producing asymmetric amplitude modulated air pulses, where the modulation portion produces a symmetric amplitude modulated air pressure variation via membrane movement and the demodulation portion produces the asymmetric amplitude modulated air pulses via the virtual valve. The modulation portion and the demodulation portion are realized by flap pair(s) fabricated in the same fabrication layer, which reduces fabrication/production complexity. The modulation operation is performed via the common mode movement of the flap pair and the demodulation operation is performed via the differential mode movement of the flap pair, wherein the modulation operation (via common mode movement) and the demodulation operation (via differential mode movement) may be performed by a single flap pair. Proper timing alignment between the differential mode movement and the common mode movement enhances the asymmetricity of the output air pulses. In addition, the horn-shaped outlet or the trumpet-like conduit helps to improve the propagation efficiency. In summary, the air-pulse generating device of the present invention comprises a modulating means and a demodulating means.
The modulating means, which may be realized by applying the modulation-driving signal to the flap pair (102or104), is to produce an amplitude modulated ultrasonic acoustic/air wave with the ultrasonic carrier frequency according to a sound signal. The demodulating means, which may be realized by applying the pair of demodulation-driving signals +SV and −SV to the flap pair (102) or by driving the flap pair (102) to form the opening (112) periodically, is to perform the synchronous demodulation operation of shifting the spectral components of the ultrasonic acoustic/air wave UAW by ±n×fUC. As a result, the spectral component of the ultrasonic air wave corresponding to the sound signal is shifted to the audible baseband and the sound signal is reproduced.
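To make the modulate-then-synchronously-demodulate scheme just summarized concrete, the following is a compact discrete-time sketch (all parameter values and the ideal half-wave gating are illustrative assumptions, not the actual transfer behavior of any device above): an audio-band signal amplitude-modulates an ultrasonic carrier into a symmetric wave (r→1), a synchronous opening keeps only one polarity (r→0), and a simple low-pass stand-in for the environment and the hearing system recovers the audible baseband.

```python
import numpy as np

# Discrete-time sketch of the APPS chain; values and idealizations are assumed.
fs = 4_000_000                    # simulation sample rate (Hz)
fUC = 160_000                     # ultrasonic carrier frequency (Hz)
f_audio = 200                     # audio tone to reproduce (Hz)
t = np.arange(0, 0.02, 1 / fs)

audio = np.sin(2 * np.pi * f_audio * t)
envelope = 0.5 * (1 + 0.8 * audio)                  # modulation depth < 1
am_wave = envelope * np.sin(2 * np.pi * fUC * t)    # modulation: symmetric AM, r ~ 1

# Demodulation: an opening formed synchronously with the carrier keeps one
# polarity only (ideal half-wave rectification) -> asymmetric pulses, r -> 0.
pulses = np.where(am_wave > 0.0, am_wave, 0.0)

# Low-pass filtering (moving average over one carrier period) stands in for
# the environment / hearing system, recovering the audible baseband.
n = fs // fUC
baseband = np.convolve(pulses, np.ones(n) / n, mode="same")
baseband -= baseband.mean()                         # remove the DC offset

corr = np.corrcoef(audio[n:-n], baseband[n:-n])[0, 1]
print(f"correlation of the recovered baseband with the audio tone: {corr:.3f}")
```

Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.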
76,665
11943586
DETAILED DESCRIPTION OF THE INVENTION Parts and variables that correspond to one another are provided with the same reference signs throughout the figures. The hearing aid2illustrated inFIG.1comprises a surround microphone4, based on which sound from the surroundings of the wearer of the hearing aid2can be captured. Moreover, the hearing aid2comprises an auditory canal microphone6, which serves to capture sound in the auditory canal8of the wearer. To this end, the auditory canal microphone6is introduced into the auditory canal8of the wearer if they are wearing the hearing aid2as intended. In this case, the auditory canal8is only illustrated schematically inFIG.1as lines that are parallel to one another. According to the embodiment of the hearing aid2illustrated inFIG.1, the latter is embodied as a so-called behind-the-ear hearing aid. In this case, the surround microphone4is arranged in a housing10which is worn behind the ear, in particular between the auricle of the wearer and their head. According to an alternative not illustrated in any more detail here, the hearing aid is embodied as an in-the-ear hearing aid, in which both the surround microphone4and the auditory canal microphone6are arranged in a common housing, which is provided and set up for at least partial insertion into the auditory canal. The surround microphone4and the auditory canal microphone6are each an electroacoustic transducer. They convert the respectively captured sound in the surroundings of the wearer, also referred to as surround sound for short, and the captured sound in the auditory canal8, also referred to as auditory canal sound for short, into an electrical surround signal SU and an electrical auditory canal signal SG, respectively. The microphones4and6are connected to a control unit12, in which an own voice recognition unit14is integrated, for signal transfer. Thus, the own voice recognition unit14is a constituent part of the control unit12. Furthermore, the hearing aid2comprises a receiver16, in this case a micro-loudspeaker, which is likewise introduced into the auditory canal8. The receiver16is likewise connected to the control unit12for signal transfer, so that a receiver signal SH can be output from the control unit12to the receiver16. The receiver16converts the receiver signal SH into sound and outputs the latter into the auditory canal8of the wearer. The hearing aid2moreover comprises a motion sensor18, which is embodied as an acceleration sensor and based on which it is possible to determine the position and/or the orientation of the hearing aid. The motion sensor18is received in the housing10and rigidly joined therewith in a fixed position, the surround microphone4also being arranged in said housing. Here, the own voice recognition unit14is set up to carry out an own voice recognition. Expressed differently, the surround signal SU and/or the auditory canal signal SG are analyzed using the own voice recognition unit14in respect of the presence of the own voice of the wearer. In the process, the surround signal SU and/or the auditory canal signal SG are processed by the control unit12or by the own voice recognition unit14, the processing depending on whether the own voice of the wearer was recognized. The receiver signal SH is generated and output to the receiver16as a result of the processing. Here, recognition of the own voice by the own voice recognition is implemented based on an analysis of the signals SU and SG by means of a number of filters F, i.e., one filter or more than one filter.
In this case, the filter or filters F have an adaptive embodiment. Consequently, they are changeable or adaptable, especially within the scope of training. In summary, the own voice recognition is adaptive. The filter, or at least one of the filters F, is embodied in such a way that, if applied, a signal corresponding to the own voice of the wearer, or the part of a signal to be analyzed which corresponds to the own voice, is attenuated to the greatest possible extent. Thus, a signal SU or SG analyzed based on this filter F is subject to more attenuation the more said signal corresponds to the own voice of the wearer. According to an alternative not illustrated in any more detail here, an algorithm that analyzes the signals SU and/or SG is used in an analogous fashion for the recognition of the own voice. The own voice recognition unit14is embodied such that training of the adaptive own voice recognition is started based on the signal SG output by the auditory canal microphone6, as illustrated in more detail below based onFIG.2. It is evident fromFIG.2that the training is implemented during normal operation N. In this case, the auditory canal signal SG and/or the surround signal SU is processed, in particular continuously, by the control unit12or by the own voice recognition unit14and is output to the receiver16as the receiver signal SH for compensating a hearing deficit of the wearer. The signal SG output by the auditory canal microphone6, which corresponds to or represents the sound in the auditory canal8of the wearer, is transmitted to the control unit12, specifically to the own voice recognition unit14(step I). In a second step II, the auditory canal signal SG is analyzed in respect of the presence of the own voice using the own voice recognition unit14. To this end, the level P of the auditory canal sound is compared to a given threshold based on the auditory canal signal SG or a corresponding value of the auditory canal signal SG. The own voice of the wearer is considered identified should this threshold be exceeded. Additionally, for the purposes of analyzing the auditory canal signal SG in respect of the presence of the own voice, a spectral analysis in respect of at least one feature M characteristic of the own voice of the wearer is carried out for the auditory canal signal SG. Furthermore, a filter F1of the number of filters F of the own voice recognition is applied to the auditory canal signal SG, and the attenuation of the latter is compared to a specified threshold for the (redundant) analysis as to whether the own voice of the wearer is present. The presence of the own voice is deduced should the attenuation be greater than the threshold. According to alternatives not illustrated in any more detail, only one or two of the processes presented above, i.e., determining a level of the signal SG, the spectral analysis thereof or the application of a filter to this signal SG, are used for the analysis of the signal SG in respect of the presence of the own voice. Preferably, the remainder of the method is only carried out if the own voice of the wearer was recognized in step II. In a third step III of the method, which follows the second step in time, the auditory canal signal SG is used to determine whether the acoustic surroundings of the hearing aid2are suitable for training. To this end, the acoustic surroundings are analyzed in respect of a noise and in respect of a reverberation time tN.
To this end, a signal-to-noise ratio SNR is determined for the auditory canal signal SG and is compared to a specified threshold. In this case, the noise is determined by means of a noise estimator. According to an alternative not illustrated in any more detail, a signal-to-noise ratio SNR is also determined by means of the noise estimator for the surround signal SU. If the own voice is recognized as per step II, if the specified threshold is exceeded by the signal-to-noise ratio SNR of the auditory canal signal SG, optionally if the threshold is exceeded by the signal-to-noise ratio SNR for the surround signal SU, and in the case of a reverberation time tN that is shorter than a further given threshold, the training of the adaptive own voice recognition is started or, should training have already been started, continued. Within the scope of the training (step IV), the filter or filters of the own voice recognition are altered. A convergence value K, which is a measure of the recognition of the own voice by means of the own voice recognition, is determined in a fifth step V. Here, the absolute value of the attenuation (damping) of the surround signal SU and/or of the auditory canal signal SG when applying the filter or filters F of the own voice recognition is used as the convergence value K. Further training is admitted if the convergence value K is smaller than a further specified threshold TK; this is illustrated inFIG.2by the arrow from step V to step I. Should the convergence value K be greater than the threshold TK, the training is finished and there is no further training, i.e., the further training is omitted. In a sixth step VI, a position of the surround microphone4is determined based on the signal SB of the motion sensor18. In this case, signals SB are output from the motion sensor18to the control unit12. They are evaluated by the control unit12in respect of the relative position and orientation of the motion sensor18, and consequently in respect of the relative position and orientation of the surround microphone4and the housing10. Should a malposition be identified, i.e., a deviation of the position and/or the orientation of the surround microphone4from the position in which the own voice recognition was trained earlier in time, the above-described method is carried out again from step I, so that there is a further training of the own voice recognition. This also occurs should a previously determined convergence value K be greater than the threshold TK. This at least reduces the risk of an incorrect detection of the own voice or an incorrect determination on account of the malposition. Determining whether a malposition is present as per step VI is implemented automatically during the normal operation N in this case. This occurs recurrently here after a specified time interval, for example every 30 seconds. In summary, the training of the adaptive own voice recognition is controlled based on the signal SG output by the auditory canal microphone6. Determining whether the training is started or continued is implemented automatically in this case, i.e., without an input by the user.
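A compact sketch of the decision logic of steps II to V described above; all threshold values here are hypothetical placeholders rather than values from this description:

```python
# Sketch of the training-control logic of steps II-V (thresholds are
# hypothetical placeholders, not values taken from the description).
LEVEL_THRESHOLD = 60.0   # dB, level P of the auditory canal sound (step II)
SNR_THRESHOLD = 10.0     # dB, required signal-to-noise ratio (step III)
REVERB_THRESHOLD = 0.4   # s, maximum admissible reverberation time tN (step III)
TK = 12.0                # dB, convergence threshold on the attenuation K (step V)

def own_voice_present(level_p, feature_m_match, attenuation_f1):
    """Step II: level comparison, spectral feature M, and filter F1 attenuation."""
    return level_p > LEVEL_THRESHOLD and feature_m_match and attenuation_f1 > TK / 2

def continue_training(level_p, feature_m_match, attenuation_f1, snr_sg, t_n, k):
    """Steps II-V: train only with the own voice present, suitable acoustic
    surroundings, and a convergence value K still below the threshold TK."""
    if not own_voice_present(level_p, feature_m_match, attenuation_f1):
        return False                  # step II: no own voice recognized
    if snr_sg < SNR_THRESHOLD or t_n > REVERB_THRESHOLD:
        return False                  # step III: surroundings unsuitable
    return k < TK                     # step V: stop once converged (K > TK)

print(continue_training(level_p=65.0, feature_m_match=True, attenuation_f1=8.0,
                        snr_sg=15.0, t_n=0.3, k=9.0))   # True -> keep training
```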
The invention is not restricted to the above-described exemplary embodiment. Rather, other variants of the invention can also be derived from it by a person skilled in the art without departing from the subject matter of the invention. In particular, all individual features described in the context of the exemplary embodiment can also be combined with one another in a different way without departing from the subject matter of the invention. The following is a summary list of reference numerals and the corresponding structure used in the above description of the invention:

LIST OF REFERENCE SIGNS

2 Hearing aid
4 Surround microphone
6 Auditory canal microphone
8 Auditory canal
10 Housing
12 Control unit
14 Own voice recognition unit
16 Receiver
18 Motion sensor
I Output of the auditory canal signal to the own voice recognition unit
II Analysis in respect of the own voice
III Evaluation of the surroundings of the hearing aid
IV Training
V Determination of the convergence value
VI Determination of a malposition of the surround microphone
F, F1 Filter
K Convergence value
M Feature of the own voice
N Normal operation of the hearing aid
P Level
SNR Signal-to-noise ratio
SB Signal of the motion sensor
SG Auditory canal signal
SH Receiver signal
SU Surround signal
tN Reverberation time
11,486
11943587
DETAILED DESCRIPTION Systems and methods described herein may be used to generate one or more models for use in a hearing assistance device, for example to be used as a shell for the hearing assistance device. The one or more models may be generated such that they may be used as semi-customized hearing assistance device shells (e.g., such that most users (e.g., 90-95%) are able to use one of the one or more models without discomfort). The one or more models may be specific to a type of hearing assistance device, for example in-the-ear (ITE), in-the-canal (ITC), completely-in-the-canal (CIC), invisible-in-the-canal (IIC), or the like. A database or library of 3D hearing aid shells may include custom shells generated for users, for example based on images of users, molds, etc. The database may include corresponding information, such as fit, comfort, or other user information (e.g., age, preferences, etc.). Using the database, the systems and methods described herein may determine a set of prototype shells using a machine learning technique, with shells in the set designed to fit a large population of users. Current procedures to fit a patient with a hearing device are long and complex. For example, a test may be conducted to determine the degree of hearing loss for an individual. An ear impression may be taken. In addition to the specifications of the hearing assistance device, a physical mold may also be sent to a manufacturer for a custom hearing assistance device request. At the manufacturing office, the ear impression may be digitized and further processed into a hearing shell that will fit into the patient's ear. The required electronic parts to address the earlier diagnosed hearing loss may then be installed. The manufacturer then sends the final device to an audiologist who subsequently meets with the patient to complete the final program modifications using the physical hearing assistance device. This whole process takes an average of three weeks or longer. The techniques described herein may be used to eliminate some of these steps such that the time it takes to fit a patient with a hearing device may be an hour or less. Previous efforts to attain this kind of fit have centered around the notion of building the external region of the hearing assistance device that goes into the auditory canal for a snug fit. This lack of variability prevents the device from going deep in the canal. The techniques described herein may be used to learn a particular number of 3D shells in a set (e.g., 2, 5, 7, 10, 20, etc.) from a repository of custom-made shells such that a patient may choose one that fits them. Once the set of shells is determined, the shells may be mass produced to lower costs and provide an optimized hearing assistance device design. In an example, an individual may determine a best fit from the set of shells without professional help, for example by taking pictures of the individual's ear, and optionally selecting a shell using an online interface. FIG.1illustrates a flow diagram100for generating one or more models for hearing assistance device shells according to an example. The flow diagram100includes a database101, which may store a plurality of custom hearing aid shells (e.g., specifications for generating, printing, or manufacturing shells).
The information stored in the database101about the shells may include one or more of measurement data, patient comfort or fit data, information about whether a particular shell was returned by a patient, shell type (e.g., for use with an ITE, an ITC, a CIC, an IIC, or other hearing assistance device), or the like. The shells may be generated for saving in the database101by obtaining a silicone impression of the auditory canal of a patient. These impressions may be stored in a computer or the database101using a 3D laser scanning system. The digitized ear impression is further processed with 3D CAD software to produce a hearing aid housing. Such hearing aids include in-the-ear hearing aids, in-the-canal hearing aids, completely-in-the-canal hearing aids, and invisible-in-the-canal hearing aids. Operations conducted using components102-112are explained in more detail below.

The database101may be accessed to retrieve shell information. In an example, for training a machine learning model, shell information may be accessed from the database101and aligned with a template model at component102. Once aligned, a shell may be voxelized (e.g., broken down into voxels, or 3D boxes) to represent the shape of the shell at component104. At component106, features of the shell (represented with voxels) may be extracted. At component108, the plurality of shells (or a subset) may be clustered. Component108may be used to cluster multi-dimensional data from component106into k clusters (e.g., 4, 8, 12, etc.). Component110may be used to compute a "mean" of each cluster, for example an average or median shell representing the cluster. The "mean" shells of each cluster may be segmented from a 3D image to generate a final set of shells. Once complete, a "mean" shell may be sent to a 3D printer, stored on a central server, for example at a fabrication site for retrieval at a later time, or stored in the database101.

FIG.2illustrates alignment of a shell model202to a template201according to an example. The shell model202and the template201are shown in a first position200A before alignment of the shell model202to the template201and a second position200B after alignment of the shell model202to the template201. Alignment of a shell model to the template201may be performed for a plurality of shell models. In an example, each model obtained from laser scanning may be initially stored with an arbitrary coordinate system, since each is captured from a different viewpoint. These models may then be aligned to the template201(e.g., each obtained model is aligned to a same template). In an example, the template201may be representative of a class of hearing assistance device. For a particular class of hearing assistance device, a spatial transformation may be found to align each shell with the template201, which may be representative of that class. The registration may result in transformation of all 3D shells (e.g., of a particular style) into a same coordinate system (e.g., that of the template201) such that poses of the 3D shells may be estimated within the same coordinate system. The registration of these models described herein may be model-based, point-based, or feature-based. In an example, the registration of a template model (target) and a source model may be achieved as follows. Suppose the target and source models are composed of the points $Y = \{y_1, y_2, y_3, \ldots, y_n\}$ and $X = \{x_1, x_2, x_3, \ldots, x_n\}$ in $\mathbb{R}^3$, respectively.
The parameters of a transformation T may be generated such that, when applied to the source points, a best alignment of the target and source model is obtained. To estimate the pose or alignment, the correspondence between the source and the target points may be assumed. Let ζ denote the correspondence between the two point sets so that point i in the target model is mapped to point ζ(i) in the source model. In most practical applications, some source points have no correspondence with any target points. An example approach to handle such a situation is to assign weights to the source points so that points with no correspondence have a weight of zero and the weights for points with correspondence are set to one. Thus, the following error function in Eq. 1 may be minimized to align points:

$$E(\theta, \zeta) = \sum_{i=1}^{m} w_i\, \epsilon^2\!\left(\left|y_{\zeta(i)} - T(x_i, \theta)\right|\right) \tag{Eq. 1}$$

where $x_i$ and $y_{\zeta(i)}$ are corresponding points, $w_i$ are the weights, and $\epsilon$ is a distance function. In an example, the point-wise correspondences between the two point clouds in the target and source model are unknown. In this scenario, the alignment and correspondence between the two point sets may be solved simultaneously or alternatingly. An example approach for solving this problem is an Expectation-Maximization (EM) type algorithm. In an example, an initial guess may be used, followed by an iterative solution for the correspondence and estimation in an alternating fashion.

In an example, the transformation T is rigid and includes only translation and rotation. An example of an algorithm that performs rigid registration is the iterative closest point (ICP) algorithm. In this example, the ICP may be executed in two iterative steps, as described below. Starting with an initial guess for the parameters of T, $\theta_0$, the correspondence between the two point sets may be computed as shown in Eq. 2:

$$\zeta(i) = \arg\min_{j}\, \epsilon^2\!\left[\left|y_j - T(x_i, \theta_k)\right|\right]; \quad i = 1, 2, \ldots, m \tag{Eq. 2}$$

Next, the parameters of T may be updated as shown in Eq. 3:

$$\theta_{k+1} = \arg\min_{\theta}\, \sum_{i=1}^{m} \epsilon^2\!\left[\left|y_{\zeta(i)} - T(x_i, \theta)\right|\right] \tag{Eq. 3}$$

These two steps may be repeated until the error function falls below a specified threshold. Other examples of algorithms that may be used for registration include, but are not limited to, the Levenberg-Marquardt algorithm (LM-ICP), robust point matching, coherent point drift, modal and spectral matching, or PCA alignment.

FIG.2demonstrates the registration of an in-the-ear housing shell202with a template model201, but other example templates may be used.200A shows the initial position before registration while200B is the result after the registration process is completed.
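To make the two-step ICP iteration of Eq. 2 and Eq. 3 concrete, the following is a minimal sketch in Python. It is illustrative only and not code from this disclosure: the function name icp_align, the use of numpy/scipy nearest-neighbor search for the correspondence step, and the closed-form SVD (Kabsch) solution standing in for the generic minimization of Eq. 3 are all assumptions.

```python
# Minimal rigid ICP sketch (illustrative only; not code from this disclosure).
# Assumes numpy/scipy; `source` and `target` are (N, 3) point arrays.
import numpy as np
from scipy.spatial import cKDTree

def icp_align(source, target, iters=50, tol=1e-6):
    """Estimate a rigid transform (R, t) aligning source points to target points."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)                      # nearest neighbours give zeta(i), Eq. 2
    prev_err = np.inf
    for _ in range(iters):
        moved = source @ R.T + t                # apply current estimate T(x, theta_k)
        dists, idx = tree.query(moved)          # correspondence step (Eq. 2)
        matched = target[idx]
        # Update step (Eq. 3): closed-form rigid fit via SVD (Kabsch solution),
        # a standard way to solve the rigid case of the minimization.
        mu_s, mu_t = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step  # compose with running transform
        err = np.mean(dists ** 2)               # error of Eq. 1 with unit weights
        if abs(prev_err - err) < tol:           # stop when the error plateaus
            break
        prev_err = err
    return R, t
```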
FIG.3illustrates feature extraction of a model to generate a feature vector according to an example. The model is shown in various stages, including before feature extraction at300A, at a coarse level of voxelization at300B and302, a fine level of voxelization at300C and304, and represented as a feature vector at300D. Following the registration of each model to a canonical coordinate frame, voxelization may be performed. In a voxelization operation, each 3D model may be represented as a polygonal mesh that is approximated with a set of voxels (e.g., cubes). In an example, to voxelize a model, first the bounding cube of the 3D model is obtained. This bounding cube is then uniformly subdivided along the three coordinate axes, after which the fraction of the polygonal mesh surface inside each cube is estimated. A voxel is assigned a value of 1 if it overlaps with a surface of the mesh; otherwise it is set to zero. Thus, each object may be represented as a binary function as shown in Eq. 4:

$$u(x) = \begin{cases} 1, & x \in \Omega \\ 0, & x \notin \Omega \end{cases} \tag{Eq. 4}$$

where Ω represents the domain of each object. FIG.3shows two coarse examples at300B and302, and two fine examples at300C and304for voxelization of an example shell300A. Using a voxel grid may achieve a more robust handling of the variances of the polygonal surface (e.g., the 3D model). The information stored in each voxel may be further processed to obtain a more compact descriptor of the 3D model represented as a feature vector at300D. In an example, a 3D Discrete Fourier Transform is applied to the voxel model (e.g., a fine model300C or304) to obtain the spectral domain feature vector at300D. In addition to being invariant to translation, rotation, scaling, and reflection, it may be useful for the feature vector chosen to be insensitive to noise and robust against random topological degeneracies. Other suitable descriptors that may be employed here include 3D voxel-based spherical harmonics, 3D ray-based spherical harmonics, the PCA-spherical harmonics transform, probability density-based shape descriptors, or the 3D Hough transform descriptor.

FIG.4illustrates sets of feature vectors according to an example. For example,FIG.4includes a full set of feature vectors represented in graph402, and clustered sets of feature vectors represented in graph404. In an example, each dot on graph402or404may represent a feature vector of a 3D model. The feature vectors of 3D models (e.g., from300D) may be partitioned into k clusters. The determination of the number of clusters may be guided by the shape and scale parameters of the point distribution or the target application. When the number of inherent clusters in the dataset is not apparent, the number may be estimated. A metric used to compare results for different values of k may include the average distance between data points in a cluster and its centroid. Since increasing k ultimately reduces this metric to zero (which corresponds to k equal to the number of data points), this metric alone may not be sufficient. The selection process may further include an elbow method, an information criterion approach, a silhouette method, cross-validation, or analysis of a kernel matrix.

A k-means clustering algorithm may be used. In an example, given feature vectors $x^{(1)}, x^{(2)}, \ldots, x^{(m)} \in \mathbb{R}^n$, k centroids may be predicted, and for each training datum, a label $c^{(i)}$ may be predicted. The algorithm may include:

1. Randomly initialize cluster centroids $\mu_1, \mu_2, \ldots, \mu_k \in \mathbb{R}^n$.
2. While not converged, repeat the updates of Eq. 5:

$$\text{For every } i,\; c^{(i)} := \arg\min_{j} \left\|x^{(i)} - \mu_j\right\|^2; \qquad \text{for every } j,\; \mu_j := \frac{\sum_{i=1}^{m} 1\{c^{(i)} = j\}\, x^{(i)}}{\sum_{i=1}^{m} 1\{c^{(i)} = j\}} \tag{Eq. 5}$$

A number of alternative clustering algorithms may be employed, such as density-based clustering methods, spectral clustering, soft clustering with Gaussian mixtures, a neural network such as a generative adversarial network (GAN) for a distribution that results in multiple shells, a serial auto-encoder, or the like.
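A compact, illustrative sketch of the voxelization (Eq. 4), spectral feature extraction, and k-means (Eq. 5) steps just described is given below; the 3D transform itself is defined in Eqs. 6 and 7 that follow. The helper names, the use of surface sample points in place of mesh/cube intersection tests, and the use of coefficient magnitudes as the feature vector are simplifying assumptions, not details from this disclosure.

```python
# Sketch of voxelization (Eq. 4), 3D-DFT feature extraction, and k-means (Eq. 5).
# Simplifications: occupancy is marked from surface sample points rather than by
# mesh/cube intersection, and features use coefficient magnitudes rather than
# stacked real/imaginary parts. numpy only; all names are illustrative.
import numpy as np

def voxelize(points, M=16):
    """Binary occupancy grid u (Eq. 4): 1 where surface samples fall, else 0."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    idx = ((points - lo) / (hi - lo + 1e-9) * (M - 1)).astype(int)
    u = np.zeros((M, M, M))
    u[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return u

def dft_feature(u):
    """Spectral-domain feature vector from the 3D DFT of the voxel grid."""
    return np.abs(np.fft.fftn(u)).ravel()

def kmeans(X, k=7, iters=100, seed=0):
    """Plain k-means: alternate the assignment and centroid updates of Eq. 5."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment: label each vector with its nearest centroid
        c = np.argmin(((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1), axis=1)
        # Update: move each centroid to the mean of its assigned vectors
        new_mu = np.array([X[c == j].mean(axis=0) if np.any(c == j) else mu[j]
                           for j in range(k)])
        if np.allclose(new_mu, mu):
            break
        mu = new_mu
    return mu, c

# Usage: given aligned shell point clouds `shells` (a list of (N, 3) arrays):
#   X = np.stack([dft_feature(voxelize(s)) for s in shells])
#   centroids, labels = kmeans(X, k=7)
```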
For any positive M, the three-dimensional Discrete Fourier Transform of a 3D array $u_n$ is an invertible linear transformation $F : \mathbb{C}^{N \times N \times N} \to \mathbb{C}^{N \times N \times N}$ defined by:

$$U_k = \sum_{n=0}^{M-1} u_n\, e^{-2\pi i\, k \cdot (n/M)} \tag{Eq. 6}$$

where $M - 1 = (M_1 - 1, M_2 - 1, M_3 - 1)$, $n/M = (n_1/M_1, n_2/M_2, n_3/M_3)$, $k = (k_1, k_2, k_3)$, $n = (n_1, n_2, n_3)$, and the summation is over all 3-tuples from 0 to $M - 1$. The inverse transform may be defined as:

$$u_n = \frac{1}{M_1 M_2 M_3} \sum_{k=0}^{M-1} U_k\, e^{2\pi i\, n \cdot (k/M)} \tag{Eq. 7}$$

where $k/M = (k_1/M_1, k_2/M_2, k_3/M_3)$. From the analysis above and assuming a voxel grid of dimension $M^3$, the matrix of Fourier coefficients, F, for all N objects may be constructed. The final shape of F may be $2M^3 \times N$ because of the expansion of the complex coefficients into their real and imaginary counterparts. An example output of this process may include the clustered graph404, values corresponding to mean points (e.g., a mean or average shell model) of the clustered graph404, a feature vector corresponding to each cluster of the clustered graph404(e.g., of a centroid of a cluster), or the like. The output may be used to generate a set of shells.

FIG.5illustrates an example 3D shell model output according to an example. The mean shape within each cluster (e.g., clusters of graph404) may be estimated by computing the average, $\bar{F}$, of the coefficients component-wise, for example using a formula such as Eq. 8:

$$\bar{F} = \frac{1}{N} \sum_{j=1}^{N} F^{(j)} \tag{Eq. 8}$$

where $F^{(j)}$ is the jth column of F. InFIG.5,500Ashows the final result after calculating the inverse transform of the mean of coefficients for objects in a cluster. The final output, in an example, may be a mirror image or inverted from an original model (e.g., as shown inFIG.3at300A). As shown inFIG.5at500A, the final output may not be a binary function. To rectify this, the following minimization problem may be solved, for example based on a Modica-Mortola energy:

$$\min_{u} \left[ \int_{\Omega} \epsilon \left|\nabla u\right|^2 + \frac{1}{\epsilon}\, u^2 (1 - u)^2\, dx + \frac{\lambda}{2} \left\|u - u_o\right\|^2 \right] \tag{Eq. 9}$$

In other words, given an object function $u_o$, an optimal approximation u of $u_o$ and a decomposition $\Omega_j$ of Ω may be determined, such that inside each $\Omega_j$ the variation of u is smooth but discontinuous across element boundaries. In an example, where $\Gamma = \{x \mid u(x) = 0.5\}$ represents the shape boundary at the 0.5 level set of u, a binary representation may be obtained by setting the value of u to 1 inside Γ and 0 outside. This procedure is shown inFIG.5, where500B shows a transition state and500C shows the final state. The final state500C is not inverted, and corresponds to the original model (e.g.,300A ofFIG.3). The final state500C may be the result after binarization.
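Before turning to the resulting set of shells, the following is a minimal sketch of the mean-shell reconstruction just described (Eq. 7 and Eq. 8). As a simplification, it thresholds the averaged field directly at the 0.5 level set rather than solving the Modica-Mortola minimization of Eq. 9; the function name is illustrative, not from this disclosure.

```python
# Sketch of the mean-shell reconstruction: average the Fourier coefficients of
# a cluster component-wise (Eq. 8), invert the transform (Eq. 7), and binarize.
# Simplification: the 0.5 level set is applied by direct thresholding instead
# of solving the Modica-Mortola minimization of Eq. 9. numpy only.
import numpy as np

def mean_shell(voxel_models):
    """Reconstruct a binary mean shell from a cluster's binary voxel grids."""
    coeffs = [np.fft.fftn(u) for u in voxel_models]  # forward 3D DFT (Eq. 6)
    F_bar = np.mean(coeffs, axis=0)                  # component-wise mean (Eq. 8)
    u_mean = np.fft.ifftn(F_bar).real                # inverse transform (Eq. 7)
    return (u_mean >= 0.5).astype(np.uint8)          # 0.5 level set -> binary shape

# Usage: shell = mean_shell([voxelize(s) for s in shells_in_one_cluster])
```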
FIG.6illustrates a set600of 3D shell models according to an example. The set of models600are labeled 1-7, but may include any number of models, such as corresponding to a number of clusters in graph404ofFIG.4(e.g., 2, 3, 5, 10, 20, etc.). In an example, the set of 3D shell models600may be used as generic shells for users, such that the models in the set600cover a portion of the population within a tolerance. For example, the models in the set may cover 90-95% of the population within a fit tolerance. The tolerance may include a physical tolerance, such as height, width of canal aperture, width of concha bowl, canal aperture height or width, hardness, durability, or the like. In another example, the tolerance may include a comfort tolerance level (e.g., users do not complain about the fit, users only experience a certain amount of discomfort, no pain is present, or the like). A best fit shell from the set of models600may be used for a user.

For example, a user may test shells by insertion of physical representations of the models to test for fit. In another example, an image of the user's ear anatomy may be generated (e.g., using a smart phone), from which a model of the user's ear may be generated. The model may be aligned (e.g., as described above) and compared to each model of the set of models600, for example using a best fit algorithm, a minimum distance algorithm, or a machine learning technique. The best fit or minimum distance model of the set600may be identified for the user. The selected model from the set600may be physically generated as a shell, which may be used by the user as part of a hearing assistance device. The physical shell may be generated before the testing (e.g., a number of physical shells of each of the models in the set600may be kept on hand) and given to the user without needing to wait for manufacturing.
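As a rough illustration of the minimum-distance selection just described, the sketch below reuses the hypothetical voxelize and dft_feature helpers from the earlier sketch to embed a user's aligned ear model and pick the closest prototype. It is an assumption-laden example, not the disclosure's method.

```python
# Minimal best-fit selection sketch: embed the user's aligned ear model with
# the same (illustrative) voxelize/dft_feature helpers sketched earlier, then
# pick the prototype shell at minimum feature-space distance.
import numpy as np

def best_fit(user_points, prototype_features):
    """Return the index of the minimum-distance prototype for this user."""
    f = dft_feature(voxelize(user_points))            # same pipeline as training
    d = np.linalg.norm(prototype_features - f, axis=1)
    return int(np.argmin(d))

# Usage, with `prototypes` the point clouds of the output set of shells:
#   proto_X = np.stack([dft_feature(voxelize(p)) for p in prototypes])
#   idx = best_fit(aligned_user_scan, proto_X)
```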
FIG.7illustrates a flowchart showing a technique700for generating a set of 3D shell models of a hearing assistance device according to an example. The technique700includes an operation702to align a plurality of 3D input models to a template. The plurality of 3D input models may be generated from images based on patient anatomy. For example, the images may include two orthogonal images (e.g., images taken from vantage vectors substantially 90 degrees apart). The two orthogonal images may be generated by a mobile device, for example a phone. In an example, the images may be generated from a mold (e.g., silicone) of patient anatomy, for example by scanning the mold or taking one or more pictures of the mold. In an example, operation702may include aligning and determining correspondence between respective points in a model of the plurality of 3D input models and points in the template. The aligning and correspondence may be performed together, for example simultaneously, alternatingly, or the like. The alignment may be performed iteratively using, for example, an expectation-maximization algorithm. In an example, aligning the models to the template may include only translation or rotation (e.g., a rigid alignment, without skewing, deleting points, or otherwise modifying the outline or shape of the models).

The technique700includes an operation704to extract features of each of the aligned plurality of 3D input models to generate a plurality of feature vectors corresponding to the aligned plurality of 3D input models. Operation704may include converting the plurality of 3D input models into voxels. The feature vectors may be generated using a 3D Discrete Fourier Transform (DFT) applied to the voxels, in an example. Extracting the features may include using a low pass filter, as described above.

The technique700includes an operation706to cluster the plurality of feature vectors to generate a set of clusters. Clustering may include using one or more of: k-means clustering, density-based clustering, spectral clustering, modeling with Gaussian mixtures, or the like.

The technique700includes an operation708to estimate a mean shell shape of each of the set of clusters. Operation708may include determining a component-wise average of Fourier coefficients of a matrix comprising a linear transformation of the feature vectors of a particular cluster. Other estimation techniques may be used to determine an average (mean) or median shell of a particular cluster.

The technique700includes an operation710to output a set of 3D shell models corresponding to the set of clusters using a respective mean shell shape of each of the set of clusters. In an example, before outputting the set of 3D shell models, the technique700may include inverting the respective mean shell shapes of each of the set of clusters, for example by solving a minimization problem to generate the set of 3D shell models. The minimization problem may use a Modica-Mortola energy functional, in an example.

In an example, the set of 3D shell models may be used as generic shells for users, such that the models in the set cover a portion of the population within a tolerance. For example, the models in the set may cover 90-95% of the population within a fit tolerance. The tolerance may include a physical tolerance, such as height, width of canal aperture, width of concha bowl, canal aperture height or width, hardness, durability, or the like. In another example, the tolerance may include a comfort tolerance level (e.g., users do not complain about the fit, users only experience a certain amount of discomfort, or the like).

Physical shells may be generated from the set of 3D shell models, in an example. A user may test a fit using the physical shells. In an example, one or more images of a user may be captured (e.g., two orthogonal images, images of a mold, etc.). The one or more images of the user may be used to generate a model (e.g., a computer 3D rendering). The model may be aligned, such as to the template or to one or more of the set of 3D shell models that were output in operation710. Once aligned (and optionally once point correspondence is performed), one of the set of 3D shell models may be selected for the user. The selection may be based on a best fit of the aligned model to the models in the set; a machine learning technique may be applied to find the best model in the set when compared to the aligned model; a distance function from the aligned model to each of the models in the set may be evaluated; or the like. The selected model from the set may be physically generated as a shell, which may be used by the user as part of a hearing assistance device.

FIG.8illustrates a flowchart showing a technique800for fitting a model to a patient according to an example. The technique800includes an operation802to receive an image of anatomy of a patient, for example including at least a portion of a canal aperture of an ear of the patient. The image may be of a mold taken of the anatomy of the patient. In an example, the image includes two orthogonal images generated by a mobile device. The technique800includes an operation804to generate a patient model of a portion of the anatomy of the patient. The patient model may indicate at least one of a height or width of the canal aperture. In another example, the patient model may indicate at least one of a height or width of a concha bowl of the ear of the patient. The technique800includes an operation806to determine, using the patient model, a best fit model from a set of hearing assistance device shell models, which may be generated using a machine learning technique as described herein. In an example, the set of models may be generated by clustering a plurality of feature vectors corresponding to a plurality of 3D input models to generate a set of clusters, and estimating a mean shell shape of each of the set of clusters.
The plurality of feature vectors may be generated by aligning the plurality of 3D input models to a template, and extracting features of each of the aligned plurality of 3D input models to generate the plurality of feature vectors corresponding to the aligned plurality of 3D input models. In an example, aligning the plurality of 3D input models to the template includes determining correspondence between respective points in a model of the plurality of 3D input models and points in the template. Extracting features of each of the aligned plurality of input models may include converting the plurality of 3D input models into voxels. In an example, clustering the plurality of feature vectors includes using at least one of: k-means clustering, density-based clustering, spectral clustering, or modeling with Gaussian mixtures. In another example, the set of hearing assistance device shell models are output after inverting the respective mean shell shapes of each of the set of clusters by solving a minimization problem. The technique800includes an operation808to output an identification of the best fit model.

FIG.9illustrates generally an example of a block diagram of a machine900upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform according to an example. In alternative embodiments, the machine900may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine900may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine900may act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine900may be a hearing assistance device, a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.

Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules are tangible entities (e.g., hardware) capable of performing specified operations when operating. A module includes hardware. In an example, the hardware may be specifically configured to carry out a specific operation (e.g., hardwired). In an example, the hardware may include configurable execution units (e.g., transistors, circuits, etc.) and a computer readable medium containing instructions, where the instructions configure the execution units to carry out a specific operation when in operation. The configuring may occur under the direction of the execution units or a loading mechanism. Accordingly, the execution units are communicatively coupled to the computer readable medium when the device is operating. In this example, the execution units may be a member of more than one module.
For example, under operation, the execution units may be configured by a first set of instructions to implement a first module at one point in time and reconfigured by a second set of instructions to implement a second module.

Machine (e.g., computer system)900may include a hardware processor902(e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory904and a static memory906, some or all of which may communicate with each other via an interlink (e.g., bus)908. The machine900may further include a display unit910, an alphanumeric input device912(e.g., a keyboard), and a user interface (UI) navigation device914(e.g., a mouse). In an example, the display unit910, alphanumeric input device912and UI navigation device914may be a touch screen display. The machine900may additionally include a storage device (e.g., drive unit)916, a signal generation device918(e.g., a speaker), a network interface device920, and one or more sensors921, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine900may include an output controller928, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).

The storage device916may include a non-transitory machine readable medium922on which is stored one or more sets of data structures or instructions924(e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions924may also reside, completely or at least partially, within the main memory904, within static memory906, or within the hardware processor902during execution thereof by the machine900. In an example, one or any combination of the hardware processor902, the main memory904, the static memory906, or the storage device916may constitute machine readable media.

While the machine readable medium922is illustrated as a single medium, the term "machine readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions924. The term "machine readable medium" may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine900and that cause the machine900to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media. Specific examples of machine readable media may include: nonvolatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The instructions924may further be transmitted or received over a communications network926using a transmission medium via the network interface device920utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone Service (POTS) networks, and wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, the IEEE 802.16 family of standards known as WiMax®, the IEEE 802.15.4 family of standards, and peer-to-peer (P2P) networks), among others. In an example, the network interface device920may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network926. In an example, the network interface device920may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine900, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.

Hearing assistance devices typically include at least one enclosure or housing, a microphone, hearing assistance device electronics including processing electronics, and a speaker or "receiver." Hearing assistance devices may include a power source, such as a battery. In various embodiments, the battery may be rechargeable. In various embodiments multiple energy sources may be employed. It is understood that in various embodiments the microphone is optional. It is understood that in various embodiments the receiver is optional. It is understood that variations in communications protocols, antenna configurations, and combinations of components may be employed without departing from the scope of the present subject matter. Antenna configurations may vary and may be included within an enclosure for the electronics or be external to an enclosure for the electronics. Thus, the examples set forth herein are intended to be demonstrative and not a limiting or exhaustive depiction of variations.

It is understood that digital hearing assistance devices include a processor. In digital hearing assistance devices with a processor, programmable gains may be employed to adjust the hearing assistance device output to a wearer's particular hearing impairment. The processor may be a digital signal processor (DSP), microprocessor, microcontroller, other digital logic, or combinations thereof. The processing may be done by a single processor, or may be distributed over different devices. The processing of signals referenced in this application may be performed using the processor or over different devices. Processing may be done in the digital domain, the analog domain, or combinations thereof. Processing may be done using subband processing techniques. Processing may be done using frequency domain or time domain approaches. Some processing may involve both frequency and time domain aspects.
For brevity, in some examples drawings may omit certain blocks that perform frequency synthesis, frequency analysis, analog-to-digital conversion, digital-to-analog conversion, amplification, buffering, and certain types of filtering and processing. In various embodiments the processor is adapted to perform instructions stored in one or more memories, which may or may not be explicitly shown. Various types of memory may be used, including volatile and nonvolatile forms of memory. In various embodiments, the processor or other processing devices execute instructions to perform a number of signal processing tasks. Such embodiments may include analog components in communication with the processor to perform signal processing tasks, such as sound reception by a microphone, or playing of sound using a receiver (i.e., in applications where such transducers are used). In various embodiments, different realizations of the block diagrams, circuits, and processes set forth herein may be created by one of skill in the art without departing from the scope of the present subject matter.

Various embodiments of the present subject matter support wireless communications with a hearing assistance device. In various embodiments the wireless communications may include standard or nonstandard communications. Some examples of standard wireless communications include, but are not limited to, Bluetooth™, low energy Bluetooth, IEEE 802.11 (wireless LANs), 802.15 (WPANs), and 802.16 (WiMAX). Cellular communications may include, but are not limited to, CDMA, GSM, ZigBee, and ultra-wideband (UWB) technologies. In various embodiments, the communications are radio frequency communications. In various embodiments the communications are optical communications, such as infrared communications. In various embodiments, the communications are inductive communications. In various embodiments, the communications are ultrasound communications. Although embodiments of the present system may be demonstrated as radio communication systems, it is possible that other forms of wireless communications may be used. It is understood that past and present standards may be used. It is also contemplated that future versions of these standards and new future standards may be employed without departing from the scope of the present subject matter.

The wireless communications support a connection from other devices. Such connections include, but are not limited to, one or more mono or stereo connections or digital connections having link protocols including, but not limited to, 802.3 (Ethernet), 802.4, 802.5, USB, ATM, Fibre Channel, FireWire or 1394, InfiniBand, or a native streaming interface. In various embodiments, such connections include all past and present link protocols. It is also contemplated that future versions of these protocols and new protocols may be employed without departing from the scope of the present subject matter.

In various embodiments, the present subject matter is used in hearing assistance devices that are configured to communicate with mobile phones. In such embodiments, the hearing assistance device may be operable to perform one or more of the following: answer incoming calls, hang up on calls, and/or provide two way telephone communications. In various embodiments, the present subject matter is used in hearing assistance devices configured to communicate with packet-based devices.
In various embodiments, the present subject matter includes hearing assistance devices configured to communicate with streaming audio devices. In various embodiments, the present subject matter includes hearing assistance devices configured to communicate with Wi-Fi devices. In various embodiments, the present subject matter includes hearing assistance devices capable of being controlled by remote control devices.

It is further understood that different hearing assistance devices may embody the present subject matter without departing from the scope of the present disclosure. The devices depicted in the figures are intended to demonstrate the subject matter, but not necessarily in a limited, exhaustive, or exclusive sense. It is also understood that the present subject matter may be used with a device designed for use in the right ear or the left ear or both ears of the wearer. The present subject matter may be employed in hearing assistance devices, such as headsets, headphones, and similar hearing devices. The present subject matter is demonstrated for hearing assistance devices, including but not limited to, behind-the-ear (BTE), in-the-ear (ITE), in-the-canal (ITC), receiver-in-canal (RIC), or completely-in-the-canal (CIC) type hearing assistance devices. It is understood that behind-the-ear type hearing assistance devices may include devices that reside substantially behind the ear or over the ear. Such devices may include hearing assistance devices with receivers associated with the electronics portion of the behind-the-ear device, or hearing assistance devices of the type having receivers in the ear canal of the user, including but not limited to receiver-in-canal (RIC) or receiver-in-the-ear (RITE) designs. The present subject matter may also be used in hearing assistance devices generally, such as cochlear implant type hearing devices and deep insertion devices having a transducer, such as a receiver or microphone, whether custom fitted, standard fitted, open fitted and/or occlusive fitted. It is understood that other hearing assistance devices not expressly stated herein may be used in conjunction with the present subject matter.

Each of the following non-limiting examples may stand on its own, or may be combined in various permutations or combinations with one or more of the other examples.

Example 1 is a method comprising: receiving an image of anatomy of a patient including at least a portion of a canal aperture of an ear of the patient; generating a patient model of a portion of the anatomy of the patient, the patient model indicating at least one of a height or width of the canal aperture; using the patient model, determining a best fit model from a set of hearing assistance device shell models generated using a machine learning technique; and outputting an identification of the best fit model.

In Example 2, the subject matter of Example 1 includes, wherein the patient model further indicates at least one of a height or width of a concha bowl of the ear of the patient.

In Example 3, the subject matter of Examples 1-2 includes, wherein the image of the anatomy includes an image of a mold taken of the anatomy of the patient.

In Example 4, the subject matter of Examples 1-3 includes, wherein the image of the anatomy includes two orthogonal images generated by a mobile device.
In Example 5, the subject matter of Examples 1-4 includes, wherein the set of hearing assistance device shell models are generated by: clustering a plurality of feature vectors corresponding to a plurality of three-dimensional input models to generate a set of clusters; and estimating a mean shell shape of each of the set of clusters.

In Example 6, the subject matter of Example 5 includes, wherein the plurality of feature vectors are generated by: aligning the plurality of three-dimensional input models to a template; and extracting features of each of the aligned plurality of three-dimensional input models to generate the plurality of feature vectors corresponding to the aligned plurality of three-dimensional input models.

In Example 7, the subject matter of Example 6 includes, wherein aligning the plurality of three-dimensional input models to the template includes determining correspondence between respective points in a model of the plurality of three-dimensional input models and points in the template.

In Example 8, the subject matter of Examples 6-7 includes, wherein extracting features of each of the aligned plurality of input models includes converting the plurality of three-dimensional input models into voxels.

In Example 9, the subject matter of Examples 5-8 includes, wherein clustering the plurality of feature vectors includes using at least one of: k-means clustering, density-based clustering, spectral clustering, or modeling with Gaussian mixtures.

In Example 10, the subject matter of Examples 5-9 includes, wherein the set of hearing assistance device shell models are output after inverting the respective mean shell shapes of each of the set of clusters by solving a minimization problem.

Example 11 is a system comprising: one or more processors coupled to a memory device, the memory device containing instructions which, when executed by the one or more processors, cause the system to: receive an image of anatomy of a patient including at least a portion of a canal aperture of an ear of the patient; generate a patient model of a portion of the anatomy of the patient, the patient model indicating at least one of a height or width of the canal aperture; determine, using the patient model, a best fit model from a set of hearing assistance device shell models generated using a machine learning technique; and output an identification of the best fit model.

In Example 12, the subject matter of Example 11 includes, wherein the patient model further indicates at least one of a height or width of a concha bowl of the ear of the patient.

In Example 13, the subject matter of Examples 11-12 includes, wherein the image of the anatomy includes an image of a mold taken of a patient.

In Example 14, the subject matter of Examples 11-13 includes, wherein the image of the anatomy includes two orthogonal images generated by a mobile device.

In Example 15, the subject matter of Examples 11-14 includes, wherein the set of hearing assistance device shell models are generated by: clustering a plurality of feature vectors corresponding to a plurality of three-dimensional input models to generate a set of clusters; estimating a mean shell shape of each of the set of clusters; and outputting the set of hearing assistance device shell models corresponding to respective mean shell shapes of the set of clusters.
In Example 16, the subject matter of Example 15 includes, wherein the plurality of feature vectors are generated by: aligning the plurality of three-dimensional input models to a template; and extracting features of each of the aligned plurality of three-dimensional input models to generate the plurality of feature vectors corresponding to the aligned plurality of three-dimensional input models.

In Example 17, the subject matter of Example 16 includes, wherein the plurality of three-dimensional input models are aligned to the template by determining correspondence between respective points in a model of the plurality of three-dimensional input models and points in the template.

In Example 18, the subject matter of Examples 16-17 includes, wherein the features of each of the aligned plurality of input models are extracted by converting the plurality of three-dimensional input models into voxels.

In Example 19, the subject matter of Examples 15-18 includes, wherein the plurality of feature vectors are clustered using at least one of: k-means clustering, density-based clustering, spectral clustering, or modeling with Gaussian mixtures.

In Example 20, the subject matter of Examples 15-19 includes, wherein the set of hearing assistance device shell models are output after inverting the respective mean shell shapes of each of the set of clusters by solving a minimization problem.

Example 21 is a system comprising: one or more processors coupled to a memory device, the memory device containing instructions which, when executed by the one or more processors, cause the system to: align a plurality of three-dimensional input models to a template; extract features of each of the aligned plurality of three-dimensional input models to generate a plurality of feature vectors corresponding to the aligned plurality of three-dimensional input models; cluster the plurality of feature vectors to generate a set of clusters; estimate a mean shell shape of each of the set of clusters; and output a set of three-dimensional shell models corresponding to the set of clusters using a respective mean shell shape of each of the set of clusters.

In Example 22, the subject matter of Example 21 includes, wherein the plurality of three-dimensional input models are generated from images based on patient anatomy.

In Example 23, the subject matter of Example 22 includes, wherein the images include two orthogonal images generated by a mobile device.

In Example 24, the subject matter of Examples 22-23 includes, wherein the images are generated from silicone molds of patient anatomy.

In Example 25, the subject matter of Examples 21-24 includes, wherein to align the plurality of three-dimensional input models to the template, the instructions further cause the system to determine correspondence between respective points in a model of the plurality of three-dimensional input models and points in the template.

In Example 26, the subject matter of Examples 21-25 includes, wherein to align the plurality of three-dimensional input models to the template, the instructions further cause the system to iteratively align the plurality of three-dimensional input models to the template using an expectation-maximization algorithm.

In Example 27, the subject matter of Examples 21-26 includes, wherein to extract features of each of the aligned plurality of input models, the instructions further cause the system to convert the plurality of three-dimensional input models into voxels.
In Example 28, the subject matter of Example 27 includes, wherein the feature vectors are generated using a three-dimensional Discrete Fourier Transform applied to the voxels.

In Example 29, the subject matter of Examples 21-28 includes, wherein to cluster the plurality of feature vectors, the instructions further cause the system to use at least one of: k-means clustering, density-based clustering, spectral clustering, or modeling with Gaussian mixtures.

In Example 30, the subject matter of Examples 21-29 includes, wherein the instructions further cause the system to invert the respective mean shell shapes of each of the set of clusters by solving a minimization problem to generate the set of three-dimensional shell models.

Example 31 is a method comprising: aligning a plurality of three-dimensional input models to a template; extracting features of each of the aligned plurality of three-dimensional input models to generate a plurality of feature vectors corresponding to the aligned plurality of three-dimensional input models; clustering the plurality of feature vectors to generate a set of clusters; estimating a mean shell shape of each of the set of clusters; and outputting a set of three-dimensional shell models corresponding to the set of clusters using a respective mean shell shape of each of the set of clusters.

In Example 32, the subject matter of Example 31 includes, wherein the plurality of three-dimensional input models are generated from images based on patient anatomy.

In Example 33, the subject matter of Example 32 includes, wherein the images include two orthogonal images generated by a mobile device.

In Example 34, the subject matter of Examples 32-33 includes, wherein the images are generated from silicone molds of patient anatomy.

In Example 35, the subject matter of Examples 31-34 includes, wherein aligning the plurality of three-dimensional input models to the template includes determining correspondence between respective points in a model of the plurality of three-dimensional input models and points in the template.

In Example 36, the subject matter of Examples 31-35 includes, wherein aligning the plurality of three-dimensional input models to the template includes iteratively aligning the plurality of three-dimensional input models to the template using an expectation-maximization algorithm.

In Example 37, the subject matter of Examples 31-36 includes, wherein extracting features of each of the aligned plurality of input models includes converting the plurality of three-dimensional input models into voxels.

In Example 38, the subject matter of Example 37 includes, wherein the feature vectors are generated using a three-dimensional Discrete Fourier Transform applied to the voxels.

In Example 39, the subject matter of Examples 31-38 includes, wherein clustering the plurality of feature vectors includes using at least one of: k-means clustering, density-based clustering, spectral clustering, or modeling with Gaussian mixtures.

In Example 40, the subject matter of Examples 31-39 includes, inverting the respective mean shell shapes of each of the set of clusters by solving a minimization problem to generate the set of three-dimensional shell models.

Example 41 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement of any of Examples 1-40.

Example 42 is an apparatus comprising means to implement of any of Examples 1-40.
Example 43 is a system to implement of any of Examples 1-40.

Example 44 is a method to implement of any of Examples 1-40.

Method examples described herein may be machine or computer-implemented at least in part. Some examples may include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods may include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code may include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, in an example, the code may be tangibly stored on one or more volatile, non-transitory, or nonvolatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media may include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.
52,273
11943588
DETAILED DESCRIPTION Embodiments herein are described primarily in terms of a bone conduction device, such as an active transcutaneous bone conduction device. However, it is noted that the teachings detailed herein and/or variations thereof are also applicable to a cochlear implant and/or a middle ear implant. Accordingly, any disclosure herein of teachings utilized with an active transcutaneous bone conduction device also corresponds to a disclosure of utilizing those teachings with respect to a cochlear implant and utilizing those teachings with respect to a middle ear implant. It is further noted that the teachings detailed herein can be applicable to other types of prostheses, such as by way of example only and not by way of limitation, a retinal implant. Indeed, the teachings detailed herein can be applicable to any component that is held against the body that utilizes an RF coil and/or an inductance coil or any type of communicative coil to communicate with a component implanted in the body. That said, the teachings detailed herein will be directed by way of example only and not by way of limitation towards a component that is held against the head of a recipient for purposes of the establishment of an external component of the hearing prosthesis. In view of this,FIG.1is a perspective view of a bone conduction device100in which embodiments may be implemented. As shown, the recipient has an outer ear101, a middle ear102, and an inner ear103. Elements of outer ear101, middle ear102, and inner ear103are described below, followed by a description of bone conduction device100.

In a fully functional human hearing anatomy, outer ear101comprises an auricle105and an ear canal106. A sound wave or acoustic pressure107is collected by auricle105and channeled into and through ear canal106. Disposed across the distal end of ear canal106is a tympanic membrane104which vibrates in response to acoustic wave107. This vibration is coupled to oval window or fenestra ovalis210through three bones of middle ear102, collectively referred to as the ossicles111and comprising the malleus112, the incus113, and the stapes114. The ossicles111of middle ear102serve to filter and amplify acoustic wave107, causing oval window210to vibrate. Such vibration sets up waves of fluid motion within cochlea139. Such fluid motion, in turn, activates hair cells (not shown) that line the inside of cochlea139. Activation of the hair cells causes appropriate nerve impulses to be transferred through the spiral ganglion cells and auditory nerve116to the brain (not shown), where they are perceived as sound.

FIG.1also illustrates the positioning of bone conduction device100relative to outer ear101, middle ear102, and inner ear103of a recipient of device100. Bone conduction device100comprises an external component140and an implantable component150. As shown, bone conduction device100is positioned behind outer ear101of the recipient and comprises a sound input element126to receive sound signals. Sound input element126may comprise, for example, a microphone. In an exemplary embodiment, sound input element126may be located, for example, on or in bone conduction device100, or on a cable extending from bone conduction device100. More particularly, sound input device126(e.g., a microphone) converts received sound signals into electrical signals. These electrical signals are processed by the sound processor. The sound processor generates control signals which cause the actuator to vibrate.
In other words, the actuator converts the electrical signals into mechanical motion to impart vibrations to the recipient's skull. Alternatively, sound input element126may be subcutaneously implanted in the recipient, or positioned in the recipient's ear. Sound input element126may also be a component that receives an electronic signal indicative of sound, such as, for example, from an external audio device. For example, sound input element126may receive a sound signal in the form of an electrical signal from an MP3 player electronically connected to sound input element126.

Bone conduction device100comprises a sound processor (not shown), an actuator (also not shown), and/or various other operational components. In operation, the sound processor converts received sounds into electrical signals. These electrical signals are utilized by the sound processor to generate control signals that cause the actuator to vibrate. In other words, the actuator converts the electrical signals into mechanical vibrations for delivery to the recipient's skull. In accordance with some embodiments, a fixation system162may be used to secure implantable component150to skull136. As described below, fixation system162may be a bone screw fixed to skull136, and also attached to implantable component150.

In one arrangement ofFIG.1, bone conduction device100can be a passive transcutaneous bone conduction device. That is, no active components, such as the actuator with electric driver circuitry, are implanted beneath the recipient's skin132. In such an arrangement, the active actuator is located in external component140, and implantable component150includes a magnetic plate, as will be discussed in greater detail below. The magnetic plate of the implantable component150vibrates in response to vibration transmitted through the skin, mechanically and/or via a magnetic field, that is generated by an external magnetic plate.

In another arrangement ofFIG.1, bone conduction device100can be an active transcutaneous bone conduction device where at least one active component, such as the actuator with electric driver circuitry, is implanted beneath the recipient's skin132and is thus part of the implantable component150. As described below, in such an arrangement, external component140may comprise a sound processor and transmitter, while implantable component150may comprise a signal receiver and/or various other electronic circuits/devices.

FIG.2depicts an exemplary transcutaneous bone conduction device300that includes an external device340(corresponding to, for example, element140ofFIG.1) and an implantable component350(corresponding to, for example, element150ofFIG.1). The transcutaneous bone conduction device300ofFIG.2is a passive transcutaneous bone conduction device in that a vibrating electromagnetic actuator342is located in the external device340. Vibrating electromagnetic actuator342is located in housing344of the external component, and is coupled to plate346. Plate346may be in the form of a permanent magnet and/or in another form that generates and/or is reactive to a magnetic field, or otherwise permits the establishment of magnetic attraction between the external device340and the implantable component350sufficient to hold the external device340against the skin of the recipient. In an exemplary embodiment, the vibrating electromagnetic actuator342is a device that converts electrical signals into vibration. In operation, sound input element126converts sound into electrical signals.
Specifically, the transcutaneous bone conduction device300provides these electrical signals to vibrating electromagnetic actuator342, or to a sound processor (not shown) that processes the electrical signals, and then provides those processed signals to vibrating electromagnetic actuator342. The vibrating electromagnetic actuator342converts the electrical signals (processed or unprocessed) into vibrations. Because vibrating electromagnetic actuator342is mechanically coupled to plate346, the vibrations are transferred from the vibrating electromagnetic actuator342to plate346.

Implanted plate assembly352is part of the implantable component350, and is made of a ferromagnetic material that may be in the form of a permanent magnet, that generates and/or is reactive to a magnetic field, or otherwise permits the establishment of a magnetic attraction between the external device340and the implantable component350sufficient to hold the external device340against the skin of the recipient. Accordingly, vibrations produced by the vibrating electromagnetic actuator342of the external device340are transferred from plate346across the skin to plate355of plate assembly352. This can be accomplished as a result of mechanical conduction of the vibrations through the skin, resulting from the external device340being in direct contact with the skin and/or from the magnetic field between the two plates. These vibrations are transferred without penetrating the skin with a solid object, such as an abutment, with respect to a percutaneous bone conduction device.

As may be seen, the implanted plate assembly352is substantially rigidly attached to a bone fixture341in this embodiment. Plate screw356is used to secure plate assembly352to bone fixture341. The portions of plate screw356that interface with the bone fixture341substantially correspond to an abutment screw discussed in some additional detail below, thus permitting plate screw356to readily fit into an existing bone fixture used in a percutaneous bone conduction device. In an exemplary embodiment, plate screw356is configured so that the same tools and procedures that are used to install and/or remove an abutment screw (described below) from bone fixture341can be used to install and/or remove plate screw356from the bone fixture341(and thus the plate assembly352).

FIG.3depicts an exemplary embodiment of a transcutaneous bone conduction device400according to another embodiment that includes an external device440(corresponding to, for example, element140ofFIG.1) and an implantable component450(corresponding to, for example, element150ofFIG.1). The transcutaneous bone conduction device400ofFIG.3is an active transcutaneous bone conduction device in that the vibrating electromagnetic actuator452is located in the implantable component450. Specifically, a vibratory element in the form of vibrating electromagnetic actuator452is located in housing454of the implantable component450. In an exemplary embodiment, much like the vibrating electromagnetic actuator342described above with respect to transcutaneous bone conduction device300, the vibrating electromagnetic actuator452is a device that converts electrical signals into vibration. External component440includes a sound input element126that converts sound into electrical signals.
Specifically, the transcutaneous bone conduction device400provides these electrical signals to vibrating electromagnetic actuator452, or to a sound processor (not shown) that processes the electrical signals, and then provides those processed signals to the implantable component450through the skin of the recipient via a magnetic inductance link. In this regard, a transmitter coil442of the external component440transmits these signals to implanted receiver coil456located in housing458of the implantable component450. Components (not shown) in the housing458, such as, for example, a signal generator or an implanted sound processor, then generate electrical signals to be delivered to vibrating electromagnetic actuator452via electrical lead assembly460. The vibrating electromagnetic actuator452converts the electrical signals into vibrations. The vibrating electromagnetic actuator452is mechanically coupled to the housing454. Housing454and vibrating electromagnetic actuator452collectively form a vibratory apparatus453. The housing454is substantially rigidly attached to bone fixture341. In an exemplary embodiment, the actuator452is a piezoelectric actuator. Any type of actuator that can enable bone conduction hearing can be used in some embodiments. As can be seen inFIG.3, the housing458of the implanted receiver coil456is mounted on the surface of bone136. In an exemplary embodiment, during implantation of the housing458, all of the soft tissue above the bone136is lifted away from the bone, to form a pocket formed on the top by the soft tissue and on the bottom by the bone136, and the housing458is inserted in the pocket such that the bottom of the housing458rests on the top surface of bone136. Thus, the housing458is located non-intracutaneously above the bone136. FIG.4depicts an alternate embodiment, where the implanted receiver coil457is embedded in a silicone cover459. In this alternate embodiment, the implanted receiver coil457, or more specifically, the assembly of which the implanted receiver coil457is a part (the assembly which includes the silicone covering459covering the metallic wires making up the implanted receiver coil457, the magnet (not shown) and other ancillary components) is located away from the surface of bone136. In this regard, in the embodiment depicted inFIG.4, a layer of soft tissue, such as skin, is located between the bottom of the covering459of the implanted receiver coil457and the surface of bone136. With respect to the embodiment ofFIG.5A, the receiver coil457is located with skin above and below. FIG.5Bdepicts in conceptual terms the position of the implanted receiver coil assembly558relative to the outside surface of the skin501and the top surface536of the bone (e.g., the mastoid bone)136. More specifically, it can be seen that the bottom surface559of the implanted receiver coil assembly558is located a distance D2from the top surface536of bone136. Also, the top surface560of the implanted receiver coil assembly558is located a distance D1from the outside surface501of the skin/soft tissue above the bone136. In an exemplary embodiment, the distance D1is controlled and otherwise set so as to maximize the efficiency of the inductance link between the implanted coil and the external coil. In an exemplary embodiment, D1is 4 mm.
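The relationship between D1 and link efficiency can be illustrated with a standard on-axis mutual-inductance approximation for coaxial coils. The following sketch is purely illustrative and is not taken from the disclosure; the coil radii, turn counts, and the function itself are assumptions for demonstration only.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def mutual_inductance(r_tx, r_rx, n_tx, n_rx, d):
    # On-axis approximation for two coaxial circular coils separated by
    # distance d: M = mu0*pi*n_tx*n_rx*r_tx^2*r_rx^2 / (2*(r_tx^2 + d^2)^1.5).
    # M (and hence coupling) falls off as the separation d grows, which is
    # why minimizing the skin depth D1 improves link efficiency.
    return (MU0 * math.pi * n_tx * n_rx * r_tx**2 * r_rx**2
            / (2 * (r_tx**2 + d**2) ** 1.5))

# Illustrative numbers only: 15 mm radius coils with 10 turns each.
for d_mm in (1, 2, 4, 6):
    m = mutual_inductance(0.015, 0.015, 10, 10, d_mm / 1000)
    print(f"D1 = {d_mm} mm -> M = {m * 1e6:.2f} uH")
```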
In an exemplary embodiment, D1can be about 1, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, or about 6.0 mm or any value or range of values therebetween in 0.01 mm increments (about 3.33 mm, 4.44 mm, 2.54 mm to about 5.14 mm, etc.). D2can have comparable numbers (D2need not be the same as D1, D2can be a number corresponding to one of the aforementioned numbers/can be a range of the aforementioned ranges, etc.). In an exemplary embodiment, distance D1is effectively constant over the length of the assembly558, at least when measured without compression of the skin between the assembly558and the surface501of the skin (the skin is in a static and unloaded state). In an exemplary embodiment, the respective values of the distance D1measured at locations along the assembly558have differences less than about 0.5 mm, 0.4 mm, 0.3 mm, 0.2 mm, 0.1 mm, or less, or any value or range of values therebetween in about 0.01 mm increments. Such is also the case with respect to D2. It is noted thatFIG.5Bis not drawn to scale. It is also noted that the pocket of soft tissue in which the implanted receiver coil assembly558is located is not shown per se. In the arrangement depicted inFIG.5B, the soft tissue is conceptually depicted as forming a perfect adherence to the outer surface of the silicone covering of the implanted receiver coil assembly558. In some embodiments, at least over time, the soft tissue grows around the implanted receiver coil assembly558in a manner analogous to that depicted inFIG.5B. That said, in some alternate embodiments, there will be gaps between the outer surface of the silicone covering of the assembly558and the soft tissue. It is also noted that an exemplary embodiment ofFIG.5Bcorresponds to assembly558being placed intradermally in skin of the human recipient. It is also noted that an exemplary embodiment ofFIG.5Bcorresponds to assembly558being placed intracutaneously in the human recipient. Thus, in an exemplary embodiment, skin tissue is located above and below the assembly. In an exemplary embodiment of such embodiment, fat and muscle are located below the layer of skin beneath the assembly. Accordingly, in view ofFIGS.4and5A and5B, in an exemplary embodiment, there is an implant, comprising an assembly, such as implanted receiver coil assembly558, that includes an electrically conductive inductance circuit, such as implanted receiver coil457, supported by a support structure, such as covering459, which, as noted above, can be made of silicone, or any other appropriate material, wherein the assembly is configured to be placed in soft tissue of the human (where "in soft tissue" means that the soft tissue is located above and below the assembly), here between muscle, but in other embodiments, the assembly can be placed between skin or fat (separately or in combination, with skin on top and fat on the bottom), or between fat or muscle (again, separately or in combination). This as opposed to the embodiment depicted inFIG.3, where the assembly including the implanted receiver coil456is located completely beneath the skin, completely beneath the soft tissue/where the assembly including the implanted receiver coil456is located directly against the bone136.
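The recited D1 grid (about 1 mm to about 6 mm in 0.01 mm increments) and the uniformity constraint on D1 along the assembly can be expressed as two simple checks. This is a minimal sketch; the helper names are hypothetical and only mirror the values stated above.

```python
def d1_in_recited_range(d1_mm, lo=1.0, hi=6.0, step=0.01):
    # True if d1_mm falls on the recited 0.01 mm grid between about
    # 1 mm and about 6 mm; a tiny tolerance absorbs floating-point error.
    k = round((d1_mm - lo) / step)
    return lo <= d1_mm <= hi and abs(d1_mm - (lo + k * step)) < 1e-9

def d1_effectively_constant(d1_samples_mm, tol_mm=0.5):
    # D1 measured at several points along the assembly should differ by
    # less than the stated tolerance (e.g. 0.5, 0.4, ... 0.1 mm).
    return max(d1_samples_mm) - min(d1_samples_mm) < tol_mm

print(d1_in_recited_range(4.00))                              # True
print(d1_effectively_constant([3.9, 4.0, 4.1], tol_mm=0.5))   # True
```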
In an exemplary embodiment, the electrically conductive circuit is a coiled wire of an inductance coil configured to establish an inductance link with an external inductance coil. It is noted that while the embodiments depicted above have focused on a coiled wire establishing the implanted receiver coil457, in an alternate embodiment, conductive traces on a PCB can be utilized as an inductive receiver component. In this regard, the phrase inductance coil as used herein includes both wired coils and conductive traces and any other structure that can enable inductance communication with an external inductance coil/an external inductance field. Note also that while the embodiments detailed herein generally focus on an inductance coil embedded in a silicone body, the teachings detailed herein are applicable to other arrangements, such as inductance coils located in a titanium housing or a plastic housing, etc., where the housings are located in the recipient with respect to the surface of the bone136according to the teachings detailed herein. Still further, as can be understood from the above, in an exemplary embodiment, the assembly558is implanted in the recipient such that there is between about 2 mm and about 5 mm of skin above the assembly and at least about 1 mm of skin below the assembly. As noted above, the implanted receiver coil assembly558in general, and the receiver coil457thereof in particular, is in signal communication with one or more components located in the housing454of the vibratory apparatus453. In this regard, as noted above, a vibrating actuator452can be located in the housing454. Accordingly, the housing454can include an active component of a hearing prosthesis, the housing being remote from the implanted receiver coil assembly558. In an exemplary embodiment, the receiver coil457generates a current that is supplied to the actuator and thus powers the actuator and controls the actuator to actuate so as to generate vibrations to evoke a bone conduction hearing percept. In an exemplary embodiment, the implanted receiver coil assembly558in general, and the receiver coil457thereof in particular, is in signal communication with the actuator452via electrical lead assembly460. In an exemplary embodiment, electrical lead assembly460extends to feedthroughs of the housing454, which feedthroughs are in turn in signal communication with the actuator452. While the embodiment ofFIG.4discloses an electromagnetic actuator as the active component of the hearing prosthesis, which active component is in signal communication with the implanted receiver coil457, in an alternate embodiment, the active component can be a piezoelectric actuator. Accordingly, in an exemplary embodiment, the active component can correspond to any type of electro-mechanical actuator/transducer. Still further, while the embodiments detailed above have been directed towards an active transcutaneous bone conduction device, in an alternate embodiment, the implanted receiver coil457can be in signal communication with an actuator of a middle ear implant, which actuator can correspond to the active component of the hearing prosthesis. Also, the implanted receiver coil457can be in signal communication with a stimulator of a cochlear implant, which stimulator corresponds to the active component of the hearing prosthesis.
It is noted that the stimulator can correspond to an active component that is located in a housing, even though the electrodes to which current is provided from the stimulator are located outside the housing. In this regard, the output of the active component is output from the housing via an electrical route in a manner analogous to how the vibrations are outputted from the housing via a mechanical route. In view of the above, in an exemplary embodiment, the implant includes an electro-mechanical transducer located in a housing remote from the implanted receiver coil assembly558, wherein the implanted receiver coil assembly558is in signal communication with at least one component located in the housing via a lead extending from the assembly558to the housing. FIG.6depicts an exemplary set of incisions in skin of the recipient (the area above the surface536of bone136) applicable to an exemplary method of implanting the assembly558and the associated remote housing with the active component therein. In particular, there is a first incision610, which is made from the outer surface of the skin501down to the surface536of bone136or, in some other embodiments, down to the periosteum. In some embodiments, a separate incision into the periosteum is made subsequent to the formation of incision610, which collectively forms incision610, while in other embodiments, the periosteum is cut separately after a skin flap is pulled away from the incision610(more on this below). Incision610is typically, but need not be, normal to the tangent line of the surface536of bone136. Incision610is typically, but need not necessarily be, an incision that goes all the way, or at least substantially all the way to the surface536of bone136or the periosteum. Such can have utilitarian value with respect to implanting the housing containing the active component directly against bone136, as will be described in greater detail below. That said, in some alternate embodiments, the incision610is made in a manner that does not extend all the way to the surface536of the bone or to the periosteum. Indeed, with respect to the features of the implanted receiver coil assembly558, incision610need only extend, in at least some exemplary embodiments, to the depth of incision620, or slightly below the depth of incision620, where incision620forms the pocket for the receiver coil assembly558. With respect to the incision620, incision620is an incision that is made parallel to the surface501of the skin. In an exemplary embodiment, the parallel features of the incision620correspond to the tangent of the surface501of the skin immediately above the device utilized to cut the pocket620(e.g., the edge and/or tip of a scalpel—such will be described in greater detail below). In an exemplary embodiment, the distance that incision620extends from incision610is about 30 mm, which is about or slightly more than the outer diameter of the implanted receiver coil assembly558. In an exemplary embodiment, the distance that the incision620extends from incision610is about 35 mm.
That said, owing to the features of the teachings detailed herein vis-à-vis the placement of the receiver coil457a fixed distance from the surface501of the skin, in some embodiments, the outer diameter of the implanted receiver coil457can be smaller than that which would otherwise be the case owing to the increased efficiency achieved by placing the receiver coil457as detailed herein, and thus the outer diameter of the implanted receiver coil assembly558can be smaller than that which would otherwise be the case. Accordingly, in an exemplary embodiment, the distance that the incision620extends from incision610can be about 15 mm or 20 mm or 25 mm or 30 mm or 35 mm or 40 mm or more or any value or range of values therebetween in 1 mm increments, depending on the diameter of the coil assembly. FIG.6depicts distance D3and distance D4. Distance D3corresponds to the distance from the incision620to the local surface of the skin (the portion immediately above by way of the direction normal to the tangent line of the incision). In an exemplary embodiment, distance D3is effectively constant over the length of the incision620, at least when measured without compression of the skin between the incision620and the surface501of the skin (the skin is in a static and unloaded state). In an exemplary embodiment, the respective values of the distance D3measured at locations along the incision620have differences less than about 0.5 mm, 0.4 mm, 0.3 mm, 0.2 mm, 0.1 mm or less or any value or range of values therebetween in about 0.01 mm increments. Such can also be the case with respect to D4. In an exemplary embodiment, D3is 4 mm. In an exemplary embodiment, D3can be about 1, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, or about 6.0 mm or any value or range of values therebetween in 0.01 mm increments (about 3.33 mm, 4.44 mm, 2.54 mm to about 5.14 mm, etc.). D4can have comparable numbers (D4need not be the same as D3, D4can be a number corresponding to one of the aforementioned numbers/can be a range of the aforementioned ranges, etc.). A tool having utilitarian value with respect to creating incision620achieving the aforementioned values will be described in greater detail below. Still with reference to the figures,FIG.7depicts how incision610is widened to create widened incision610W. In this regard, because of the elastic nature of skin, the incision610can be the basis for the widened incision by simply pushing the walls (the vertical walls) of the incision away from each other. Still further, as can be seen inFIG.7, a space710is opened between the skin and the bone136in general, and the surface536of the bone in particular. This can be done utilizing a scalpel to detach the skin from the bone136, or by utilizing any other method (e.g., the skin may be pulled away from the surface536of the bone using one hand, in some embodiments). Space710is established so as to make room for the housing454. In this regard,FIG.8depicts housing454being placed into space710. It is noted that the lead associated with the implantable component is not shown for purposes of clarity. In at least some exemplary embodiments, the lead will extend through the widened incision610W to the implanted receiver coil assembly558.
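As a minimal arithmetic sketch of the incision sizing discussed earlier in this passage: the text states only that the pocket incision extent is about, or slightly more than, the outer diameter of the coil assembly, so the helper name and the margin value below are assumptions for illustration.

```python
def pocket_incision_extent_mm(coil_outer_diameter_mm, margin_mm=2.0):
    # The pocket incision extent tracks the coil assembly's outer
    # diameter, plus a small assumed margin ("about or slightly more").
    return coil_outer_diameter_mm + margin_mm

print(pocket_incision_extent_mm(28.0))  # ~30 mm, consistent with the example above
```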
It is noted that incision610can be created in an arcuate manner/semicircle manner, when viewed looking downward onto the surface of the recipient's skin.FIG.9depicts an exemplary conceptual schematic looking downward onto skin501from the outside of the recipient. As can be seen, incision610has an arcuate shape. Incision610is typically created so as to provide space for the housing454to be inserted therein, which housing454is depicted in dashed lines for purposes of conceptual visualization. Incision620is also depicted in dashed lines, where the area inside the dashed line620bounded by the incision610forms the pocket for the implanted receiver coil assembly558. FIG.10depicts a slightly more precise set of incisions, where instead of a pocket for incision620having curved surfaces all around, only a portion of the outer boundary of the pocket formed by incision620is curved. In this regard, the curved section has a radius R620which corresponds to the radius R610of incision610. In this regard, as will be detailed below, in an exemplary embodiment, the flap established by incision610is pulled backwards (thus, in some embodiments, establishing incision610W), where the skin on the outside of the curve remains in place. A scalpel having a defined length is inserted into the skin to establish incision620at a location at the apex of the curve610until a stop on the scalpel reaches the skin established on the outer curvature of incision610W. The scalpel is then moved sideways (left or right with respect to the frame of reference ofFIG.10) such that the outer surface of incision610W "pushes" the stop downward (with respect to the frame of reference ofFIG.10), and thus pulls the distance of insertion of the scalpel downwards, thereby forming the curve R620corresponding to curve610. The scalpel is moved sideways until a distance of about half the diameter of the implanted receiver coil assembly558is opened up (a distance in the horizontal direction). The process is repeated for the other direction, thus establishing pocket620. Again, some additional details of the utilization of the tool to achieve the formation of the pocket620will be described in greater detail below. The embodiments ofFIG.10, andFIG.9for that matter, are embodiments where a skin flap is not created with respect to the pocket620. This is as opposed to the skin flap created by the incision610vis-à-vis the locations of the skin inside the curve. That is, the incisions establishing the pocket are entirely intradermal and do not rise to the level of the surface of the skin501(hence the outline of the pocket is depicted in dashed lines inFIGS.9and10, as it is entirely below the surface of the skin, save for the common boundary of incision610).FIG.11A, however, depicts an alternate incision regime into the skin of the recipient where a skin flap is formed in a manner analogous to that formed by incision610for insertion of the housing454. More particularly, incision620F (for flap) is created in the surface501of the skin, as is represented by the solid lines. In an exemplary embodiment, this flap is pulled back (upwards, with respect to the frame of reference ofFIG.11A), and hinged along the dashed line portion of the incision620F, so that the implanted receiver coil assembly558can be placed onto the bed of skin overlying the bone.
In such placement, the skin flap is put back in place, and the boundary of the incision at the surface of the skin501is sutured (or any other process of closing the incision can be utilized).FIG.11Bdepicts an alternate incision regime, where the incision6100is on the side of both element558and element454. Note also that in some embodiments, the incision6100may not extend the full longitudinal length of both elements, but could extend a portion of the length, and then, under the surface of the skin, dogleg to make room for the component(s). In any event,FIG.12depicts an exemplary placement of the implanted receiver coil assembly558into a widened pocket620W (widened from that resulting from the incision, due to the placement of the receiver coil assembly558therein). In this embodiment, the incision610W is shown in its widened state as well. It is noted that after the procedure, the width of the incision610W will be reduced for closure. In view of the above, some exemplary methods according to the teachings detailed herein will now be detailed.FIG.13depicts an exemplary flowchart for an exemplary method1300. Method1300includes method action1310, which includes the action of cutting into skin of a human recipient above a temporal bone of the recipient. By way of example only and not by way of limitation, such action can correspond to the action executed to create the pocket620detailed above. Method1300further includes method action1320, which includes placing an inductance coil, such as by way of example only and not by way of limitation, the implanted receiver coil detailed above, intracutaneously above the mastoid bone through the cut into the skin. Because the method action1320includes placing the coil intracutaneously, there will be a layer of skin between the mastoid bone and the coil (there will also be a layer of silicone between the coil and the mastoid bone in embodiments that utilize silicone as the covering for the coil). Accordingly, method action1320results in a placement of the coil where the coil is separated from the outer surface of the bone (i.e., the surface facing external to the human) and separated from the outer surface of the periosteum covering the bone. In an exemplary embodiment of method1300, method action1320results in the inductance coil assembly being located such that there is between about 2 mm and about 5 mm of skin above the inductance coil assembly and at least about 1 mm of skin below the inductance coil assembly. Still further, in an exemplary embodiment of method1300, method action1320results in the inductance coil assembly being located such that there is between about 3.5 mm and about 4.5 mm of skin above the inductance coil assembly and at least about 1 mm of skin below the inductance coil assembly. It is also noted that in some alternate embodiments, other dimensions are present, such as those detailed above by way of example only and not by way of limitation. Still further consistent with the teachings above with regard to the formation of the pocket, in an exemplary embodiment, the action of cutting into the skin of the recipient executed in method action1310includes cutting a pocket into the skin, the pocket having a width and a length that extends at least generally parallel to a surface of the mastoid bone of the recipient. As also detailed above, in an exemplary embodiment, the pocket has a width and length that extends at least generally parallel to a surface of the skin.
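The placement outcome recited for method action1320 lends itself to a compact check. The following is a minimal sketch with hypothetical helper names; the tolerances are taken directly from the ranges stated above.

```python
def method_1320_placement_ok(skin_above_mm, skin_below_mm):
    # Mirrors the recited outcome of method action 1320: about 2-5 mm of
    # skin above the coil assembly and at least about 1 mm below it.
    return 2.0 <= skin_above_mm <= 5.0 and skin_below_mm >= 1.0

print(method_1320_placement_ok(4.0, 1.5))  # True
print(method_1320_placement_ok(1.0, 1.5))  # False: coil too shallow
```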
FIG.14presents another exemplary flowchart according to another exemplary method, method1400, according to an exemplary embodiment. Method1400includes method action1410, which includes executing method1300. Method1400further includes method action1420, which includes placing a housing containing an active component of a hearing prosthesis against the mastoid bone, where an electrical lead extends from the housing to the inductance coil assembly. In this regard, this is consistent with the teachings detailed above with respect to utilizing the inductance coil to operate or otherwise energize the actuator of the bone conduction device or the stimulator of a cochlear implant, etc. FIG.15presents another exemplary flowchart according to another exemplary method, method1500, according to an exemplary embodiment. Method1500includes method action1510, which includes executing method1300. Method action1520of method1500includes placing a housing containing an active component of a hearing prosthesis inside the recipient. This can be done in any given manner, such as with respect to placing the housing against the bone of the recipient or placing the housing at an intracutaneous location (some additional details of which will be described in greater detail below). Method1500further includes method action1530, which includes placing the inductance coil assembly above the housing, the results of which are seen inFIG.16. In this regard, method1500results in a layer of skin L1being located between a top of the housing and a bottom of the inductance coil assembly, and a layer of skin L2being located above the inductance coil assembly. In this exemplary embodiment, the active component located in the housing is in signal communication with the inductance coil assembly via an electrical lead extending from the housing to the inductance coil assembly. FIG.17presents another exemplary flowchart according to another exemplary method, method1700. Method1700includes method action1710, which includes executing method1300. Method1700further includes method action1720, which includes placing the inductance coil assembly in the recipient such that a layer of skin is located above a top of the inductance coil and below the bottom of the inductance coil. With reference toFIG.18, the results of action1720can be seen. Method1700further includes method action1730, which includes placing a housing containing an active component of a hearing prosthesis inside the recipient such that a layer of skin is located above a top of the housing and below a bottom of the housing. The results of action1730can be seen inFIG.18as well. It is noted that the embodiment ofFIG.18depicts the housing454being located at a different level within the skin of the recipient than the implanted receiver coil assembly558. In an exemplary embodiment, the bottoms of those components can be aligned with each other (e.g., can have the same distance from the surface of bone136and/or from the surface of the skin501), or the tops of those components can be aligned with each other, or a middle thereof can be aligned with each other. Any placement of the housing and the assembly corresponding to the features of method1700can be utilized in at least some exemplary embodiments. Method1700further includes method action1740, which includes sending a signal from the inductance coil assembly558to the active component inside the housing454after the housing and the inductance coil assembly are placed into the recipient.
In an exemplary embodiment, method action1740is executed by creating an inductance field utilizing an external coil located proximate the surface501of the skin, which inductance field is received transcutaneously by the coils of assembly558. This inductance field induces a current in the coils of the assembly558, which current is transferred via the lead to feedthroughs in the housing454, and thus from the feedthroughs to the active component located in the housing454. In an exemplary embodiment, as noted above, the active component can be a stimulator of a cochlear implant. In an exemplary embodiment, the active component can be an actuator of a bone conduction hearing prosthesis. It is noted that whileFIG.18depicts the implanted receiver coil assembly558as a separate component from the housing454, method1700can be executed utilizing an implantable component where the inductance coil assembly is not a distinct component relative to the assembly of which the stimulator is a part. In this regard, in an exemplary embodiment, the inductance coil assembly can be part of a so-called receiver-stimulator assembly of a cochlear implant, and the housing containing the active component can be a housing containing the stimulator of the cochlear implant, where a silicone body making up part of the implanted inductance coil assembly extends to envelop at least a portion of the housing that houses the stimulator of the cochlear implant. In such an exemplary embodiment, method actions1720and1730can be executed simultaneously or in an overlapping manner, etc. It is noted that at least some exemplary embodiments include repeating method1300, and, in some embodiments, some of the other methods detailed herein and/or variations thereof, repeatedly for a plurality of recipients. Accordingly, in an exemplary embodiment, there is a method that includes executing method1300, method1400, method1500, and/or method1700, and/or any of the other methods or method actions detailed herein and/or variations thereof at least X times for respectively different humans. Accordingly, in an exemplary embodiment, this can include executing method actions1310and1320at least X times. In an exemplary embodiment, X is 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75 or 80 or more. In an exemplary embodiment, this method is performed at least X times within a period less than 1 month, 2 months, 3 months, 4 months, 5 months, 6 months, 7 months, 8 months, 9 months, 10 months, 11 months, 12 months, 5 quarters (five 3 month periods), 6 quarters, 7 quarters, 8 quarters, 9 quarters, or less than 10 quarters. In this exemplary embodiment of executing the various methods detailed herein, the inductance coil assembly is placed at respective first distances from an outer skin of the recipients, the respective first distances having respective values having differences therebetween no more than 0.25 mm, 0.33 mm, 0.5 mm, 0.66 mm, 0.75 mm, 1 mm, 1.25 mm, 1.5 mm, 1.75 mm, 2 mm, 2.25 mm, or 2.5 mm for the X number of different humans subjected to the method repeated X times. For example, if the differences therebetween are no more than 1 mm, that means that all of the X number of different humans will have the inductance coil assembly located, for example, within a range of 2 to 3 mm from the top surface, within a range of 1 to 2 mm from the top surface, within a range of 3 to 4 mm from the top surface, etc.
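The cross-recipient consistency criterion just described amounts to a max-minus-min comparison over the cohort of placement depths. A minimal sketch follows; the function name and the sample data are hypothetical, and the tolerance mirrors the 1 mm example above.

```python
def placements_consistent(depths_mm, max_spread_mm=1.0):
    # Depths of the coil assembly below the outer skin, one value per
    # recipient; consistent if all pairwise differences are within the
    # stated spread, i.e. max - min <= max_spread_mm.
    return max(depths_mm) - min(depths_mm) <= max_spread_mm

# e.g. 25 recipients all landing between 3.6 mm and 4.4 mm:
cohort = [3.6 + 0.8 * i / 24 for i in range(25)]
print(placements_consistent(cohort, max_spread_mm=1.0))  # True
```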
In an exemplary embodiment, at least some or all of the method actions detailed herein are executed without utilizing skin reduction. At least some of the exemplary embodiments detailed herein can enable such because the incision made within the skin to establish the pocket in which the implanted receiver coil assembly is located can be made a defined and controlled distance from the top surface of the skin. Thus, a desired distance from the top surface of the skin to the implanted receiver coil assembly can be controlled or otherwise established by measuring the distance from the top surface to the incision that will form the pocket. This as opposed to utilizing skin reduction to remove skin that would be between the implanted receiver coil assembly and the outer surface of the skin to achieve a desired depth of the implanted receiver coil assembly from the outside surface of the skin. In this regard, in scenarios where the implanted receiver coil assembly was located directly on the bone of the recipient or directly on the periosteum covering the bone of the recipient, the thickness of skin covering the implanted receiver coil assembly might be such that less than utilitarian results would be achieved with respect to an inductance link extending between the implanted receiver coil assembly and the external inductance coil without skin reduction. Accordingly, there is utilitarian value with respect to utilizing skin reduction to thin the overlying skin over the implanted receiver coil assembly, and thus reduce the distance between the implanted receiver coil assembly and the outer surface of the skin, and thus reducing the distance between the implanted receiver coil assembly and the external component containing the external coil assembly. FIG.19depicts another exemplary flowchart for an exemplary method, method1900, according to an exemplary embodiment. Method1900is directed towards utilizing the implanted assembly implanted according to the method actions detailed above and/or utilizing the assembly having the configurations detailed above. Method1900includes method action1910, which includes generating an inductance signal. In an exemplary embodiment, method action1910is executed utilizing an external component of a hearing prosthesis that includes an external inductance coil. In an exemplary embodiment, the external component captures a sound utilizing a microphone, and, based on this captured sound, a current is applied to the external coil to generate an inductance field. Method1900further includes method action1920, which includes receiving the inductance signal via an implanted inductance coil implanted in the recipient. In an exemplary embodiment, method action1920is executed utilizing the implanted receiver coil assembly558implanted according to the teachings detailed herein and/or variations thereof. In this method1900, there is a layer of skin located between the inductance coil and the skull of the recipient in which the inductance coil is implanted, consistent with the teachings detailed herein. In an exemplary embodiment of method1900, the inductance coil, such as inductance coil457, is supported by an inductance coil support assembly, such as the silicone body in which the coil is embedded, which support assembly is located completely away from the skull bones of the recipient, such as, by way of example only and not by way of limitation, the mastoid bone of the recipient.
It is noted that the support assembly is not to be confused with any other components that might support the entire assembly. In this regard, the inductance coil support assembly is just that, an assembly that supports the inductance coil (e.g., a housing, silicone body, etc.). In an exemplary embodiment, a support for the inductance coil support assembly can be utilized, which support is separate from the inductance coil support assembly of this method. Consistent with the embodiment ofFIG.16, in an exemplary embodiment of method1900, the action of generating the inductance signal is executed utilizing an external inductance coil coaxially aligned with the implanted inductance coil. In an exemplary embodiment, this is achieved via the utilization of magnets having poles that are opposite one another. A first magnet is located in the external device and the external coil is wound thereabout (a given distance away from the magnet). In an exemplary embodiment, this first magnet is located such that the north or south pole of the magnet faces the skin of the recipient (although in other exemplary embodiments, the poles of the magnets are aligned horizontally with the skin of the recipient). A second magnet is located in the implanted receiver coil assembly and the implanted receiver coil is wound thereabout (again, a given distance away from the implanted magnet). In this exemplary embodiment, the poles of the second magnet are such that the pole facing the external magnet is the opposite of the pole of the external magnet facing the skin. Thus, the magnets align with one another, and the coils are generally coaxial with one another. In an exemplary embodiment, method1900further includes the action of activating an active component of the hearing prosthesis, the active component being located in a housing, such as housing454. In this exemplary method, the external inductance coil overlies the implanted inductance coil and the housing, and the implanted inductance coil overlies the housing. By "overlies," it is meant that when looking downward onto the skull of the recipient, an axis normal to the tangent plane of the skull at a particular location extends through the two components at issue. The components need not necessarily be completely overlapping one another as shown inFIG.16. The components can be staggered. That said, in some other embodiments, all of the components are coaxially aligned and/or the component above completely overlaps the component below. That said, in an alternate embodiment, when viewed looking from the inside of the recipient outward, where, for example, the housing454would thus become the component above the implanted receiver coil assembly558with respect toFIG.16, the component above completely overlaps the component below (this can be the case where housing454completely overlaps implanted receiver coil assembly558, or implanted receiver coil assembly558completely overlaps the external coil assembly—both could be the case in some embodiments). In view of the above, it can be understood that in an exemplary embodiment, there is an implanted receiver coil assembly, such as coil assembly558, that is configured so as to not be placed directly on bone of the recipient. It can be further seen that in an exemplary embodiment, there is a coil assembly that is separated from the electronics of the implant in general, and the active component of the implant in particular.
Because in at least some exemplary embodiments the distance between the coil and the surface of the skin of the recipient is smaller than that which would otherwise be the case if the implanted receiver coil assembly was placed onto bone, at least not without skin thinning or the like, the coil can be optimized and/or otherwise made smaller than that which would otherwise be the case, all other things being equal. Still further, the fact that the distance is less than that which would otherwise be the case owing to the fact that the implanted receiver coil assembly is placed intracutaneously, at least without skin thinning, could result in a longer battery life of any battery powering the implanted components (whether that battery is external or internal to the recipient), a higher output of the device for a given input, and/or a smaller diameter of the coil of the implanted receiver coil assembly, all other things being equal. Again, in an exemplary embodiment, because the external and the implanted coil can be closer to each other than that which would otherwise be the case without the soft tissue mounting detailed herein, efficiency of the energy transfer over the skin can be improved. This gain in energy transfer could be utilized in several ways, such as reduced size of the coils (one or both of the internal and external coils), increased output of the implanted component, or longer battery life for the external battery, etc. It is also noted that in an exemplary embodiment, there are a plurality of separate inductance coils that are part of the implantable component. By way of example only and not by way of limitation, in an exemplary embodiment, a given housing containing an active component can have two separate inductance coils in signal communication therewith. In an exemplary embodiment, one of the inductance coils is located intracutaneously within the recipient, and another one of the coils is located subcutaneously above the skull of the recipient (i.e., the second coil is not located intracutaneously within the recipient/the second coil is located non-intracutaneously). FIG.20depicts another exemplary flowchart for an exemplary method, method2000, which includes method action2010, which includes executing method1900. Method2000further includes method action2020, which includes generating an electric signal via the received inductance signal received in method action1920, and conducting the generated signal via an electrical lead to a housing remote from the inductance coil utilized in method action1920. In this exemplary embodiment, the electrical lead holds the inductance coil in position relative to the housing. An exemplary embodiment includes a semi-rigid lead that is malleable by hand or the like but is rigid enough so as to hold the implanted receiver coil assembly558in position relative to the housing to which it is attached. More particularly, in an exemplary embodiment, the lead extending from the implanted receiver coil assembly558to the housing454includes a device configured to prevent, or at least resist, movement of at least a portion of the lead assembly in a manner greater than that with respect to conventional leads. More specifically, in an exemplary embodiment, there can be a lead assembly including a device that is configured to resist movement of at least a portion of the lead assembly, and thus the implanted receiver coil assembly.
In an exemplary embodiment, the movement is resisted or otherwise prevented from occurring due to a structure co-located with the lead assembly. In an exemplary embodiment, this entails a malleable portion, co-located with the leads in the lead assembly. That said, in another exemplary embodiment, the malleable portion can be the lead wires themselves, where, for example, the lead wires are made thicker than that which would normally be the case so as to establish the aforementioned rigidity/malleability so as to maintain the position of the implanted receiver coil assembly in place at least relative to that which would be the case in the absence of such lead assembly (e.g., where a normal lead assembly was utilized). In an exemplary embodiment, there is a lead assembly that includes a malleable metal wire, embedded in the body establishing the lead assembly. In an exemplary embodiment, the wire leads of the lead assembly are embedded in silicone, which establishes the body of the lead assembly. A malleable wire can be embedded in the silicone body of the lead assembly. In an exemplary embodiment, the metal wire is made of platinum or some other "soft" metal. That said, in some embodiments, depending on the dimensions, stainless steel or the like could be used (providing that the diameter was thin enough to enable the bending having utilitarian value detailed herein). Other metals and alloys can be utilized. Any metal and/or alloy that is malleable in a given structural configuration that can enable the teachings detailed herein and/or variations thereof to be practiced can be utilized in at least some embodiments. Other types of material can be utilized as well, such as by way of example only and not by way of limitation, a plastically deformable polymer, again providing that the teachings detailed herein and/or variations thereof can be practiced. In some embodiments, this malleable wire providing the aforementioned rigidity is not utilized to conduct signals, while in other embodiments, the malleable wire is utilized to conduct signals. In an exemplary embodiment, the malleable structures utilized to achieve the aforementioned positioning of the implanted receiver coil assembly have a diameter that is an order of magnitude larger than that of a given lead wire of the lead assembly. To be clear, these embodiments are not to be confused with the mere fact that a lead assembly exists that limits the distance that the implanted receiver coil assembly might travel from the housing454owing to the fact that the lead has a finite length. That is not positioning. That is captivity. Accordingly, in an exemplary embodiment, there is an implant that is configured to provide for a defined placement of the receiver coil assembly relative to a surface of the skin. As detailed above, in this exemplary embodiment, such is achieved utilizing the enhanced lead assembly according to the teachings detailed herein and/or variations thereof. FIG.21provides for another exemplary embodiment for providing for defined placement of the assembly relative to a surface of the skin. In particular, there is an implantable receiver coil assembly2100depicted, the assembly being attached to lead2170. In an exemplary embodiment, the lead2170can include the features detailed above vis-à-vis the stiffening feature. In an exemplary embodiment, the lead2170is a non-stiffened lead.
In any event, implanted assembly2100includes holes2115that extend completely from a top of the implanted receiver coil assembly2100to the bottom of the assembly2100so as to enable skin growth therethrough (e.g., from the top to the bottom, from the bottom to the top, and/or where growth from the top and bottom meets somewhere in between). (Note some elements are not present for reasons of clarity.) As can be seen, holes2115are inboard of the coils21457and outboard of the magnet2160. In an exemplary embodiment, holes2115extend through the silicone body that holds the coils21457in place. That said, in an alternate embodiment, such as where the coils21457are traces on a PCB or the like, instead of wires held in place via a silicone body, holes2115can extend through the PCB. In an exemplary embodiment where the coils21457are located in a housing or the like, holes2115can extend from the top of the housing to the bottom of the housing. In some exemplary embodiments, the housing is such that the holes are formed by extensions of the housing walls inward towards each other so as to create a hermetic environment within the housing at the locations on the other side of the extended walls. That is, holes2115do not interfere with the purpose of the housing, that is, protecting what is in the housing from the external environment of the housing. While three holes are depicted in the embodiment ofFIG.21, in some alternate embodiments, only one hole is utilized. In some other embodiments, two holes are utilized. In some embodiments, four or more holes are utilized. It is also noted that while the embodiment depicted inFIG.21utilizes holes that are generally the same as one another and symmetrically arrayed relative to one another, in an alternate embodiment, the holes can be different from one another and are not symmetrically arrayed. Also, while the holes are depicted inboard of the coils21457, in an alternate embodiment, one or more of the holes can be located outboard of the coils. Any arrangement that can enable the teachings detailed herein can be practiced in at least some exemplary embodiments. It is also noted that in an exemplary embodiment, other features that can enhance the locational stability of the implantable receiver coil assembly2100can be utilized. By way of example only and not by way of limitation, instead of through holes that extend completely through the assembly, in an alternate embodiment, hollows or divots can be utilized. Still further, in an exemplary embodiment, spikes can be utilized. Surface features can be provided that enhance the locational positioning, such as by way of example only and not by way of limitation, a roughened surface. Still further, in an exemplary embodiment, the surface of the implantable receiver coil assembly can include some form of compound that enhances adherence to skin. In some embodiments, the surface of the implantable receiver coil assembly can be coated with a material that enhances such adherence to skin. In an exemplary embodiment, a gridlike structure can be placed on one or both sides of the implantable receiver coil assembly, which gridlike structure is configured so as to enhance skin in-growth and the like. Any arrangement that can further enhance the locational stability of the implantable receiver coil assembly2100can be utilized. It is also noted that any of the aforementioned features can be utilized in combination with any of the other aforementioned features.
In at least some exemplary embodiments, the holes2115are configured such that skin or other soft tissue (herein, any disclosure of skin also corresponds to a disclosure of other types of soft tissue, and vice versa—this does not mean that skin has been equated to any type of soft tissue, this simply means that for the purposes of linguistic economy, Applicant intends for the disclosure of skin to also correspond to the disclosure of other types of soft tissue for purposes of written description support for the latter) grows into the holes, thus providing for defined placement of the assembly relative to the surface of the skin. Accordingly, in an exemplary embodiment of method1900, method1900is executed where skin is ingrown into an assembly including the inductance coil, the skin extending from a first side of the inductance coil to a second side of the inductance coil, thereby preventing the coil from migrating within the skin of the recipient, or at least substantially limiting the ability of the coil to migrate within the skin of the recipient, at least relative to that which would be the case in the absence of the holes2115. It is further noted that the aforementioned stiffened lead can also provide utility with respect to preventing or at least substantially limiting the ability of the coil to migrate within the skin of the recipient. That said, in some alternative embodiments, the holes may not necessarily prevent or otherwise limit migration. Instead, the holes are utilized to stabilize the implanted receiver coil assembly. Accordingly, in an exemplary embodiment, the holes are configured for soft tissue of the recipient to grow therethrough so as to stabilize the implanted receiver coil assembly558. With reference back toFIG.18and the housing454that is located intracutaneously, along with reference back toFIG.16, for example, where the housing454is located subcutaneously, in an exemplary embodiment, there is a method represented by the flowchart ofFIG.22. More particularly, there is a method2200, which includes method action2210, which includes executing method1900. Method2200further includes method action2220, which includes generating an electric signal via the received inductance signal received in method1900, and conducting the generated signal via an electrical lead to a housing remote from the inductance coil (the inductance coil utilized to execute method1900). In this exemplary method, the housing includes an active component of a hearing prosthesis. In this exemplary method, this housing is one of (i) retained to the skull only via osseointegration, (ii) retained to the skull only via pressure of skin over the top of the housing or (iii) not in contact with the skull or periosteum. That is, in some embodiments, the housing that contains the active component does not include a bone screw or the like to hold the housing in place. To be clear, in an exemplary embodiment, the implant is drill-hole and/or screw-hole free, or more accurately, the implant is implanted without drilling and/or without screwing into bone. In an exemplary embodiment, at least the bottom of the housing can have a surface that is structured or otherwise coated so as to stimulate or otherwise encourage osseointegration to bone of the recipient. That said, in some alternate embodiments, the surface of the housing, or at least the bottom of the housing, is structured or otherwise coated so as to prevent or otherwise discourage osseointegration to bone of the recipient.
As noted above, some exemplary embodiments include a tool that is utilized to make the incision620that forms the pocket into which the implanted receiver coil assembly558is inserted. In this regard,FIGS.23and24depict such a tool. More particularly,FIGS.23and24depict a device, comprising a first portion2310that includes a first surface2312. In an exemplary embodiment, the portion2310is a composite of a plate component2314to which is attached a scalpel blade2316. In this regard, in an exemplary embodiment, the device depicted inFIGS.23and24is a soft-tissue gauge, such as that marketed and otherwise distributed by Cochlear Limited, to which is attached a scalpel blade2316. Additional details of such are described in greater detail below. In any event, according to the embodiment ofFIGS.23and24, the first surface is part of a scalpel blade. The device ofFIGS.23and24further includes a second portion2320including a second surface2322a fixed distance D23from the first surface. The second surface2322is parallel to the first surface and overlying the first surface when the surfaces are positioned perpendicular to the direction of gravity (e.g., downward with respect to the frame of reference ofFIG.23). The first portion and the second portion are joined together by a third portion2340, which extends in a perpendicular direction to the first and second portions. With respect to the exemplary embodiment ofFIG.23, other than the scalpel blade2316, the first portion, the second portion and the third portion are part of a monolithic component in the form of a plate that has been bent over upon itself as can be seen. The third portion includes a surface2342which corresponds to the stop noted above with respect to the exemplary method of creating the pocket utilizing the tool described above. In this regard, in an exemplary embodiment, the stop2342, during use, strikes the inside wall of the incision610on the side facing the pocket so as to prevent the device in general, and the tip of the scalpel blade2316in particular, from traveling further therein. That is, the stop2342establishes the length and/or the width of the pocket, and provides that no more cutting into the skin in the direction parallel to the surface of the skin is performed than is needed. As can be seen from the figures, the first portion, the second portion and the third portion collectively form a U-shaped component when viewed from the side (the view ofFIG.23). The device inFIGS.23and24further includes a handle2330. This is utilized by the surgeon or other healthcare professional so as to allow for ease of manipulation of the device during the incision process creating the pocket into which the implanted receiver coil assembly558is inserted. In the embodiments ofFIGS.23and24, the second surface2322is configured to abut an outside of the skin, such as surface501, of a human, where the skin is over the mastoid bone of the human. Still further, the first surface is configured to incise the pocket620in skin of the human over the mastoid bone such that the pocket has a constant distance from the outside of the skin of the human. In the exemplary embodiment depicted inFIGS.23and24, this is due to the second surface2322and the fact that the first surface2312is rigidly connected indirectly to the second surface2322via the structure of the device. The distance D23is a set distance of the manufactured tool. In an exemplary embodiment, D23can correspond to any of the dimensions D1or D3noted above.
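The geometry of the tool fixes the pocket depth through D23. Consistent with the worked example in the next paragraph (a 4.25 mm gauge with an about 0.25 mm blade yielding a 4 mm pocket), the relationship can be sketched as follows; the helper name, and this reading of the example, are assumptions for illustration.

```python
def gauge_plate_separation_mm(pocket_depth_d23_mm, blade_thickness_mm=0.25):
    # Per the worked example below: for a pocket 4 mm below the skin
    # surface (D23), a 4.25 mm gauge is used with an about 0.25 mm thick
    # blade, i.e. plate separation = desired depth + blade thickness.
    return pocket_depth_d23_mm + blade_thickness_mm

print(gauge_plate_separation_mm(4.0))  # 4.25
```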
Indeed, in an exemplary embodiment, the distance D23establishes D1or D3. To be clear, in the embodiment depicted in the figures, the first surface and the second surface are separated by a distance of D23. As noted above, the device can be a soft-tissue gauge or a modified soft-tissue gauge to which a scalpel blade has been attached. By way of example only and not by way of limitation, a recess can be formed in the upper surface of the plate that forms the portion2310so that the scalpel blade2316, or, more accurately, the back of the scalpel blade2316, can be recessed such that the top surface of the scalpel blade is flush with the top surface of the plate that forms the portion2310. That said, in an alternate embodiment, the scalpel blade can be located proud of the top surface that forms the plate. In this regard, if for example, a pocket located 4 mm below the outside surface of the skin is desired, a skin thickness gauge of 4.25 mm might be utilized, where the thickness of the scalpel blade is about 0.25 mm. Any arrangement that can enable the teachings detailed herein and/or variations thereof can be utilized in at least some exemplary embodiments. Owing to the fact that the surface2322is configured to be placed against the outside surface501of the skin, the device ofFIGS.23and24is thus configured to cut respective pockets in skin of a human parallel to the skull bone of the human at a constant depth from a surface of the skin when the second surface is positioned against the outside of the skin. In view of the above, in an exemplary embodiment, method1300includes the additional action of utilizing a tool, such as the tool ofFIGS.23and24, to abut an outside of the skin of the human to cut into the skin of the human to form an intracutaneous pocket in the skin while the tool is abutting the outside of the skin. In this exemplary embodiment, the tool maintains a uniform depth of the cut pocket relative to the outside of the skin due to the abutting of the outside of the skin. In an exemplary embodiment, the tool can have markings thereon that provide a visual indication to the surgeon or other health care professional as to where a portion of the blade is located (which is eclipsed by the skin during normal use). For example, the tool could have an indication above the tip of the blade, indicating to the surgeon the location of the blade. For example, an outline of the blade can be located on the portion2320. Indeed, a cut-out in the portion2320could be present that would correspond to the shape of the blade beneath. Alternatively, the portion2320could be made of a transparent material, with an outline of the blade (or a schematic having even more details, such as the tapered portion of the blade as well) stenciled on the transparent portion. These latter embodiments could give the surgeon a visual cue of where the blade is located when the surgeon is looking downward directly from the top. The tool could have markings indicating the actual (lateral) depth of the cut or markings corresponding to recommended incision depths, etc. Corollary to this is that in an exemplary embodiment, there is a method, comprising cutting a generally vertical incision into skin of the recipient extending towards bone of the recipient. In an exemplary embodiment, the incision corresponds to incision610detailed above. It is noted that this does not mean that the incision extends all the way to the bone. All that is required by this method action is that the incision extend towards the bone of the recipient.
It is also noted that while this embodiment references a generally vertical incision, in some other embodiments, the incision need not necessarily be vertical relative to the bone. It is also noted that this vertical incision, as detailed above, is arcuate when viewed looking downward from the outside of the skin. This method further includes the action of cutting a pocket perpendicular to the generally vertical incision utilizing the device ofFIGS.23and24. In an exemplary embodiment, this method includes inserting an inductance coil assembly according to the teachings detailed herein into the formed pocket.

In an exemplary embodiment, there is a method as described above and/or below, further comprising executing the method actions of cutting into skin of the human and placing the inductance coil above the mastoid bone through the cut at least 25 times for respectively different humans, wherein respective top surfaces of the inductance coil assemblies are placed respective first distances from an outer skin of the recipients, the respective first distances having respective values having differences therebetween of no more than 1 millimeter for the 25 different humans. In an exemplary embodiment, there is a method as described above and/or below, further comprising executing the method actions of cutting into skin of the human and placing the inductance coil above the mastoid bone through the cut at least 25 times for respectively different humans without using skin reduction, wherein the inductance coil is placed respective first distances from an outer skin of the recipients, the respective first distances having respective values having differences therebetween of no more than 1 millimeter for the 25 different humans. In an exemplary embodiment, there is a device as described above and/or below, wherein the first surface and the second surface are separated by a distance of between 3 mm and 5 mm.

It is noted that any disclosure of a device and/or system herein corresponds to a disclosure of a method of utilizing such device and/or system. It is further noted that any disclosure of a device and/or system herein corresponds to a disclosure of a method of manufacturing such device and/or system. It is further noted that any disclosure of a method action detailed herein corresponds to a disclosure of a device and/or system for executing that method action/a device and/or system having such functionality corresponding to the method action. It is also noted that any disclosure of a functionality of a device herein corresponds to a method including a method action corresponding to such functionality. Also, any disclosure of any manufacturing methods detailed herein corresponds to a disclosure of a device and/or system resulting from such manufacturing methods and/or a disclosure of a method of utilizing the resulting device and/or system. Unless otherwise specified or otherwise not enabled by the art, any one or more teachings detailed herein with respect to one embodiment can be combined with one or more teachings of any other teaching detailed herein with respect to other embodiments. While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention.
Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. In view ofFIG.3, embodiments include a method, comprising cutting into skin of a human recipient above a temporal bone of the recipient and placing an inductance coil assembly subcutaneously above the mastoid bone through the cut into the skin, wherein placing the inductance coil assembly subcutaneously above the mastoid bone through the cut into the skin is executed by placing the inductance coil assembly non-intracutaneously above the mastoid bone through the cut into the skin and/or placing the inductance coil assembly non-intracutaneously above the mastoid bone through the cut into the skin includes placing the inductance coil assembly directly against the mastoid bone at a location above an ear canal relative to a height of a human into which the inductance coil assembly is being placed, wherein the sensory implant is a cochlear implant.
11943589
DETAILED DESCRIPTION Various exemplary embodiments and details are described hereinafter, with reference to the figures when relevant. It should be noted that the figures may or may not be drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. It should also be noted that the figures are only intended to facilitate the description of the embodiments. They are not intended as an exhaustive description of the invention or as a limitation on the scope of the invention. In addition, an illustrated embodiment need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular embodiment is not necessarily limited to that embodiment and can be practiced in any other embodiments even if not so illustrated, or if not so explicitly described.

In the present disclosure, whenever referring to a proximal side of a component, a layer, an element, a device or a part of a device, the referral is to the side of the component, layer, element, device or part thereof closest to the circuit board. Further, whenever referring to a proximal surface of a component, a layer, an element, a device or a part of a device, the referral is to the surface of the component, layer, element, device or part thereof facing the circuit board. Likewise, whenever referring to the distal side of a component, a layer, an element, a device or a part of a device, the referral is to the side furthest away from the circuit board. Further, whenever referring to a distal surface of a component, a layer, an element, a device or a part of a device, the referral is to the surface of the component, layer, element, device or part thereof facing away from the circuit board. In other words, the proximal side or surface is the side or surface closest to the circuit board and the distal side is the opposite side or surface—the side or surface furthest away from the circuit board.

A method of manufacturing an electronic circuit of an audio device is disclosed. The method comprises providing a body. The body comprises a circuit board and one or more components including a first component mounted on the circuit board. The one or more components may comprise one or more electronic components including a first electronic component. The circuit board may e.g. be a printed circuit board, PCB. The circuit board may e.g. be configured to mechanically support and electrically connect the one or more components or electrical components using e.g. conductive tracks or pads. The circuit board may comprise one or more sheet layers of a conductive layer, laminate, or film, such as of copper, e.g. laminated onto and/or between sheet layers of a non-conductive substrate. The electronic circuit may be designated as a system-in-package electronic circuit.

The method may comprise mounting one or more components including the first component and optionally a second component on the circuit board. The one or more components, such as the first component and/or the second component, may be mounted e.g. by being soldered to, embedded in, or bonded (e.g. wire bonded or adhesive bonded) to the circuit board. The method may comprise mounting a plurality of components on the circuit board. The one or more components may include a power supply unit, such as a switch-mode power supply e.g. comprising a switch capacitor and/or an inductor, e.g. as the first component.
In other words, the first component may be a power supply unit such as a switch-mode power supply e.g. comprising a switch capacitor and/or an inductor. The one or more components may include a processing unit or chip, e.g. as the first or second component. In other words, the first component and/or the second component may be a processing unit or chip. The one or more components may include a receiver such as a speaker, a microphone, a filter, an antenna e.g. a magnetic radio antenna, a battery, a transceiver, and/or an interface. The one or more components may comprise a third electrical component, such as a speaker, a microphone, a filter, an antenna e.g. a magnetic radio antenna, a battery, a transceiver, and/or an interface. The second component may be electrically and/or magnetically shielded. The third component may be non-shielded.

The one or more components may generate electromagnetic fields of different magnitudes and at different frequencies, thereby creating electromagnetic interference between the components, the electromagnetic interference being more or less disturbing for other components e.g. depending on the operating frequencies of the components and the magnitude of the electromagnetic fields. The one or more components may generate electromagnetic fields, i.e. be electrically and/or magnetically noisy. The one or more components may generate electromagnetic fields such as E-fields (electrical fields) and/or H-fields (magnetic fields).

The first component has a proximal surface and a distal surface and may have a first area A_C_1, a first height H_C_1, and a first width. The first component may for example be a power supply generating a first electromagnetic field at a first frequency (or first frequency range) and of a first magnitude. The first component may have a first position on the circuit board. The first position of the first component may e.g. be varied depending on the first area, the first height, and the first width of the component. The first position of the component may be determined based on the distance to the neighbouring components, the distance to the edge of the circuit board, and/or the height of the components. For example, it may be advantageous to position a component having the largest height in the centre of the circuit board, such as to minimize the height of the electronic circuit at the edges, giving more flexibility regarding the size and dimensions of the electronic circuit. The first component may comprise a proximal surface facing towards the circuit board, and a distal surface facing away from the circuit board and optionally towards the first insulation layer (proximal surface of the first insulation layer).

In one or more exemplary methods and/or electronic circuits, the first component may be positioned such that a ground connection of the first component faces towards a ground connection of the circuit board. In other words, the first component may be positioned such that a ground connection of the first component faces towards a ground pad element, such as towards a ground pad ring. An advantage of having the first component positioned such that a ground connection of the first component faces towards a ground connection of the circuit board is to reduce the risk associated with a short circuit of the first component in the event that the ground connection of the first component is short-circuited to e.g. the first shielding layer.
In one or more exemplary methods and/or electronic circuits, the first component may be positioned such that the ground connection of the first component faces a corner and/or an edge of the circuit board where the one or more ground pad elements are positioned. By positioning the first component such that the ground connection of the first component faces a corner and/or an edge of the circuit board, the ground connection of the first component may be allowed to connect to one or more ground pad elements in case of a short-circuit via the first shielding layer. A distance between two neighbouring components, e.g. a distance between the first component and the second component, may preferably be such that the first insulation layer, the second insulation layer, and optionally even the first shielding layer may penetrate between the components.

The method comprises applying a first insulation layer, e.g. outside, such as on the distal side of, the first component and/or on the circuit board. Applying the first insulation layer may comprise applying the first insulation layer around the first component, between the first component and the second component, e.g. at one or more first areas, including a first primary area A1_1. In other words, the method comprises applying a first insulation layer on the distal side of the first component, i.e. the first component is arranged between the circuit board and the first insulation layer or at least a first area of the first insulation layer. Applying a first insulation layer outside the first component may comprise applying the first insulation layer on the distal surface of the first component. Applying a first insulation layer may comprise applying a first insulation layer outside a plurality of components, e.g. applying a portion of the first insulation layer on each component individually or applying a first insulation layer on a plurality of components such that the first insulation layer is substantially continuous on the plurality of components. Applying a first insulation layer may comprise conformal coating of the first insulation layer. Conformal coating may provide a uniform first insulation layer on the component(s), such as the first component and/or on the second component, and minimize the thickness of the first insulation layer that is needed to cover the component(s).

The method comprises applying a second insulation layer, e.g. outside, such as on the distal side of, the first component, on the circuit board, and/or on the distal side of or on the first insulation layer. In other words, the method comprises applying a second insulation layer on the distal side of the first component and/or the first insulation layer, i.e. the first component and the first insulation layer are arranged between the circuit board and the second insulation layer or at least a first area of the second insulation layer. Applying a second insulation layer outside the first component and/or the first insulation layer may comprise applying the second insulation layer on the distal surface of the first component and/or the first insulation layer. Applying a second insulation layer may comprise applying a second insulation layer outside a plurality of components, e.g. applying a portion of the second insulation layer on each component individually or applying a second insulation layer on a plurality of components such that the second insulation layer is substantially continuous on the plurality of components.
Applying a second insulation layer may comprise conformal coating of the second insulation layer. Conformal coating may provide a uniform second insulation layer on the component(s), such as the first component and/or on the second component, and minimize the thickness of the second insulation layer that is needed to cover the component(s).

The first insulation layer may contact or be substantially in contact with, e.g. adhere to, component(s), e.g. the first component and/or the second component, for example such that the proximal surface (or at least a part) of the first insulation layer adheres to the distal surface and/or the side surface of the first component and/or the second component. It may be advantageous that the first insulation layer adheres to the first component and/or the second component, such that substantially no air is trapped between the first insulation layer and at least the distal surface of the first component and/or the second component. This may further prevent moisture from penetrating and collecting between the first insulation layer and the component(s), such as the first component and/or the second component, which may lead to damage or malfunction of the component(s). The second insulation layer may contact or be substantially in contact with, e.g. adhere to, the first insulation layer, for example such that the proximal surface (or at least a part) of the second insulation layer adheres to the distal surface of the first insulation layer. In the same way, it may be advantageous that the second insulation layer adheres to the first insulation layer, such that substantially no air is trapped between the second insulation layer and the first insulation layer. It is to be understood that further insulation layers, such as a third insulation layer, a fourth insulation layer, and/or one or more adhesive layers may be applied and/or arranged between the first insulation layer and the first shielding layer.

In an exemplary method/electronic circuit, a second insulation layer of a second insulation material (optionally with a second viscosity different from, such as lower or higher than, the first viscosity) may be applied and/or arranged between the circuit board and the component(s) and/or in gaps between neighbouring components. Accordingly, the method may comprise applying a second insulation material to the circuit board, the first insulation layer and/or between components. Applying a second insulation material to the circuit board and/or between components may comprise underfilling the second insulation material. In other words, the second insulation layer may be built up on top of the first insulation layer. It is an advantage that the first insulation layer is cured before applying the second insulation layer, e.g. instead of applying only one insulation layer that may flow out on the body before being cured, since a large amount of insulation material may then have to be used to cover the component to insulate. An advantage of the present disclosure is therefore that smaller amounts of insulation material are used. The first insulation layer and the second insulation layer may be applied in a targeted manner in specific areas, instead of flooding the body to cover the component to insulate. In other words, the applying of the first insulation layer and the second insulation layer may be a dedicated application. A sequential applying of the first insulation layer and the second insulation layer may be achieved when using the disclosed technique, as the sketch below illustrates.
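A minimal sketch of this sequential apply-then-cure order follows (the Body class and function names are illustrative assumptions; the disclosure does not define any programming interface):

    # Minimal sketch of the sequential layer application described above.
    # All names (Body, apply_layer, cure) are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Body:
        components: list
        layers: list = field(default_factory=list)

    def apply_layer(body: Body, name: str) -> None:
        # Deposits the layer on the distal side of the components and
        # of any previously applied (already cured) layers.
        body.layers.append({"name": name, "cured": False})

    def cure(body: Body) -> None:
        # Curing (e.g. UV) fixes the most recent layer so it cannot
        # flow out before the next layer is applied.
        body.layers[-1]["cured"] = True

    body = Body(components=["first component", "second component"])
    for layer in ["first insulation layer", "second insulation layer",
                  "first shielding layer", "first protection layer"]:
        apply_layer(body, layer)  # targeted, dedicated application
        cure(body)                # cured before the next layer is applied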
The first insulation layer and/or second insulation layer may be said to encapsulate or cover one or more of the components, such as the first component and/or the second component, such that the component(s) may be protected from the surrounding environment. The first insulation layer and/or second insulation layer may be an electrically non-conductive layer such that no electrical or galvanic contact may be established to the first component, e.g. from the first shielding layer. Thus, the first insulation layer may be made of a first insulation material optionally comprising one or more polymers. Thus, the second insulation layer may be made of a second insulation material optionally comprising one or more polymers. The first insulation material and the second insulation material may be of the same material. The first insulation material and/or the second insulation material may be electrically non-conductive materials. The first insulation layer and/or second insulation layer may insulate the first component from the first shielding layer. In other words, the first insulation layer and/or second insulation layer may prevent galvanic contact between the first component and the first shielding layer.

In one or more exemplary methods, the first material has a first viscosity prior to curing in the range from 0.30 Pa·s to 200 Pa·s. The first insulation layer may have a first viscosity associated with the first insulation material (prior to curing) and/or a first thickness T_FIL_1associated with the first component. The first thickness T_FIL_1may preferably be the thickness of the first insulation layer as the final product, i.e. after the last processing step has been performed on the first insulation layer. The first viscosity may e.g. be in the range from 0.2 Pa·s to 300 Pa·s, in the range from 0.5 Pa·s to 175 Pa·s, in the range from 1 Pa·s to 30 Pa·s, in the range from 1 Pa·s to 20 Pa·s, or in the range from 3 Pa·s to 10 Pa·s, when measured at a temperature of 20-25° C. In one or more exemplary methods and/or electronic circuits, the first viscosity of the first insulation material may be in the range from 80 Pa·s to 120 Pa·s, such as about 100 Pa·s. The first thickness may also be understood as a first distance from the proximal surface of the first insulation layer to the distal surface of the first insulation layer, e.g. to the proximal surface of the first shielding layer. The first insulation layer may have a second thickness associated with the second component. The first thickness of the first insulation layer may be the same as or different from, such as larger than or smaller than, the second thickness of the first insulation layer.

The first thickness T_FIL_1of the first insulation layer may be defined as the maximal thickness of the first insulation layer, i.e. the point or area where the first insulation layer is the thickest in the first area A_C_1of the first component. The first insulation layer may comprise a first height H_FIL_1. The first height H_FIL_1may be defined as the distance between the surface of the circuit board facing the proximal surface of the first insulation layer and the distal surface of the first insulation layer at the maximal point or area of the first insulation layer at the first area A_C_1of the first component. The first height H_FIL_1may substantially correspond to the first thickness T_FIL_1added with the first height of the first component H_C_1.
The second thickness T_FIL_2of the first insulation layer may be defined as the maximal thickness of the first insulation layer, i.e. the point or area where the first insulation layer is the thickest in the second area A_C_2of the second component. The first insulation layer may comprise a second height H_FIL_2. The second height H_FIL_2may be defined as the distance between the surface of the circuit board facing the proximal surface of the first insulation layer and the distal surface of the first insulation layer at the maximal point or area of the first insulation layer at the second area A_C_2of the second component. The second height H_FIL_2may substantially correspond to the second thickness T_FIL_2added with the second height of the second component H_C_2.

The second insulation layer may have a second viscosity associated with the second insulation material (prior to curing) and/or a first thickness T_SIL_1associated with the first component. The first thickness T_SIL_1may preferably be the thickness of the second insulation layer as the final product, i.e. after the last processing step has been performed on the second insulation layer. The second viscosity may e.g. be in the range from 0.2 Pa·s to 300 Pa·s, in the range from 0.5 Pa·s to 175 Pa·s, in the range from 1 Pa·s to 30 Pa·s, in the range from 1 Pa·s to 20 Pa·s, or in the range from 3 Pa·s to 10 Pa·s, when measured at a temperature of 20-25° C. In one or more exemplary methods and/or electronic circuits, the second viscosity of the second insulation material may be in the range from 80 Pa·s to 120 Pa·s, such as about 100 Pa·s. The first thickness T_SIL_1may also be understood as a second distance from the distal surface of the first insulation layer to the distal surface of the second insulation layer, e.g. to the proximal surface of the first shielding layer. The second insulation layer may have a second thickness associated with the second component. The second thickness of the second insulation layer may be the same as or different from, such as larger than or smaller than, the first thickness of the second insulation layer.

The first thickness T_SIL_1of the second insulation layer may be defined as the maximal thickness of the second insulation layer, i.e. the point or area where the second insulation layer is the thickest in the first area A_C_1of the first component. The second insulation layer may comprise a first height H_SIL_1. The first height H_SIL_1may be defined as the distance between the surface of the circuit board facing the proximal surface of the first insulation layer and the distal surface of the second insulation layer at the maximal point or area of the second insulation layer at the first area A_C_1of the first component. The first height H_SIL_1may substantially correspond to the first thickness T_SIL_1and the first thickness T_FIL_1added with the first height of the first component H_C_1.

The second thickness T_SIL_2of the second insulation layer may be defined as the maximal thickness of the second insulation layer, i.e. the point or area where the second insulation layer is the thickest in the second area A_C_2of the second component. The second insulation layer may comprise a second height H_SIL_2. The second height H_SIL_2may be defined as the distance between the surface of the circuit board facing the proximal surface of the first insulation layer and the distal surface of the second insulation layer at the maximal point or area of the second insulation layer at the second area A_C_2of the second component.
The second height H_SIL_2may substantially correspond to the second thickness T_SIL_2and the second thickness T_FIL_2added with the second height of the second component H_C_2.

The first viscosity and the first thickness may e.g. be chosen based on one or more of the distance or gap between the components, the method of applying the first insulation layer, and the type of component. For example, for a smaller distance between the components, i.e. a smaller gap, the viscosity of the first insulation material may be lower than for a larger distance between the components, i.e. a larger gap. This may allow the first insulation material to penetrate gaps between the components. The gap between two neighbouring components, e.g. between the first component and the second component, may e.g. be in the range from 1 μm to 1 cm, in the range from 5 μm to 5 mm, in the range from 10 μm to 1 mm, in the range from 20 μm to 500 μm, in the range from 20 μm to 200 μm, in the range from 20 μm to 100 μm, in the range from 500 μm to 1 cm, or in the range from 1 mm to 5 mm. In one or more exemplary methods and/or electronic circuits, the gap between two neighbouring components, e.g. between the first component and the second component, may e.g. be in the range from 20 μm to 20 mm. The viscosity of the first insulation material may be proportional to the distance between the components. A lower viscosity, e.g. in the range from 1 Pa·s to 20 Pa·s, may be preferred for smaller gaps, e.g. gaps smaller than 500 μm, e.g. to promote the flowing of the first insulation material into smaller gaps. A higher viscosity may be preferred e.g. to avoid that the first insulation material flows out of the circuit board or unintentionally covers portions of the circuit board, such as ground pad elements.

The first insulation layer may comprise a plurality of portions, e.g. a first portion and a second portion, separated from each other. The first portion of the first insulation layer may cover and insulate the first component. The second portion of the first insulation layer may cover and insulate the second component.

An adhesive layer or coating may be applied before applying the first insulation layer, e.g. for promoting adhesion of the first insulation layer. An adhesive layer or coating may be applied after applying the first insulation layer and/or the second insulation layer, and before applying the first shielding layer, e.g. for promoting adhesion of the first shielding layer.

In one or more exemplary methods, the first insulation layer is made of a first insulation material comprising one or more polymers. The first insulation layer may be made of a first insulation material, for example comprising, essentially consisting of, or being of a polymer material. The first insulation layer may be of a non-conductive material, e.g. a non-electrically conductive polymer material. The first insulation material may be a material that cures by polymerization induced by a UV light source. The first insulation material may be a material that cures through solvent removal induced by heating. Examples of first insulation materials may be acrylated polyurethane (e.g.
Electrolube® UVCL, HumiSeal® UV40, HumiSeal® UV40-250, HumiSeal® UV40 Gel, and/or HumiSeal® UV40HV), acrylate, or epoxy resin (e.g. Namics® U8443, Elpeguard® SL 1367, and/or Humiseal® 1R32A-2). The first thickness T_FIL_1of the first insulation layer may be in the range from 10 μm to 500 μm, in the range from 50 μm to 400 μm, in the range from 100 μm to 300 μm, or in the range from 100 μm to 200 μm. The second thickness T_FIL_2of the first insulation layer may be in the range from 10 μm to 500 μm, in the range from 50 μm to 400 μm, in the range from 100 μm to 300 μm, or in the range from 100 μm to 200 μm. The first insulation material may e.g. comprise and/or function as an underfill material having low viscosity, i.e. lower than 15 Pa·s, such as lower than 1 Pa·s. Thereby, the first insulation layer may penetrate around and below the first component and/or the second component.

The method comprises applying one or more shielding layers including a first shielding layer. The first shielding layer may cover at least a part of the second insulation layer and/or the first insulation layer. The first shielding layer may be applied outside, such as on the distal side of, the first insulation layer, on the circuit board, and/or on the second insulation layer. In other words, the method comprises applying a first shielding layer, e.g. a first portion and/or a second portion, on the distal side of the first insulation layer, i.e. the first insulation layer (first area of the first insulation layer) is arranged between the first component and the first shielding layer (first area of the first shielding layer). Applying a first shielding layer outside the first insulation layer may comprise applying the first shielding layer on the distal surface of the first insulation layer. Thus, the proximal surface of the first insulation layer faces towards the circuit board and the distal surface of the first insulation layer may be facing towards the first shielding layer (proximal surface of the first shielding layer). The first shielding layer has a proximal surface facing towards the circuit board and optionally facing the distal surface of the first insulation layer. The first shielding layer may contact or be substantially in contact with, e.g. adhere to, the first insulation layer and/or the second insulation layer, for example such that the proximal surface (or at least a part) of the first shielding layer adheres to the distal surface of the first insulation layer and/or of the second insulation layer. In the same way, it may be advantageous that the first shielding layer adheres to the first insulation layer and/or the second insulation layer, such that substantially no air is trapped between the first shielding layer and the first insulation layer and/or the second insulation layer. It is to be understood that further insulation layers, such as a third insulation layer, a fourth insulation layer and/or one or more adhesive layers may be applied and/or arranged between the first insulation layer and/or the second insulation layer and the first shielding layer. Further, it is to be understood that further shielding layers, such as a second shielding layer and/or a third shielding layer, may be applied outside the first shielding layer, e.g. between the first shielding layer and the first protection layer. The first shielding layer may be an electrically conducting layer. Thus, the first shielding layer may be made of a first shielding material.
The first shielding material may be an electrically conductive material. The resistivity of the first shielding layer may be in the range from 1 μΩ·cm to 100 mΩ·cm. The first shielding layer may shield component(s), such as the first component and/or the second component, from electromagnetic radiation (act as a Faraday cage), optionally from other components of the electronic circuit. In other words, the shielding layer may prevent electromagnetic radiation from disturbing the components. On the other hand, the first shielding layer may shield other component(s) of the circuit board from electromagnetic radiation generated by the first component and/or the second component. The shielding provided by the first shielding layer may be in the range of 1 dB to 200 dB depending on the frequency or frequency range to shield.

The first shielding layer may be made of a first shielding material being a conductive material, e.g. an electrically conductive polymer material. The first shielding material may be a conductive polymer material, e.g. a conductive coating, e.g. based on inorganic or organic material, a conductive ink, a conductive micro-ink comprising micrometer-sized particles, or a conductive nano-ink comprising nanometer-sized particles. Examples of first shielding materials may be Genes'ink® Smart spray S-CS11101, Genes'ink® Smart'ink S-CS21303, Genes'ink® Smart'ink S-CS01520, Tatsuta® AE1244, Tatsuta® AE5000A5, Tatsuta® AE5000L, Tatsuta® AE5000ST, or Tatsuta® SF-PC5600.

The first shielding layer may have a first viscosity associated with the first shielding material and/or a first thickness associated with the first component. The first viscosity of the first shielding material may (prior to curing) e.g. be in the range from 0.001 Pa·s to 200 Pa·s, in the range from 0.01 Pa·s to 100 Pa·s, in the range from 1 Pa·s to 50 Pa·s, in the range from 1 Pa·s to 30 Pa·s, in the range from 1 Pa·s to 20 Pa·s, in the range from 3 Pa·s to 10 Pa·s, in the range from 0.001 Pa·s to 10 Pa·s, or in the range from 0.005 Pa·s to 10 Pa·s, e.g. measured at a temperature of 20-25° C. The first thickness of the first shielding layer may also be understood as a first distance from the proximal surface of the first shielding layer to the distal surface of the first shielding layer. The first shielding layer may have a second thickness associated with the second component. The first thickness of the first shielding layer may be the same as or different from, such as larger than or smaller than, the second thickness of the first shielding layer.

The first thickness T_FSL_1of the first shielding layer may be defined as the maximal thickness of the first shielding layer, i.e. the point or area where the first shielding layer is the thickest at the first area A_C_1of the first component. The first shielding layer may comprise a first height H_FSL_1. The first height H_FSL_1may be defined as the distance between the surface of the circuit board facing the proximal surface of the first shielding layer and the distal surface of the first shielding layer at the maximal point or area of the first shielding layer at the first area A_C_1. The first height H_FSL_1may substantially correspond to the first thickness T_FSL_1, added to the first thickness T_FIL_1and/or the first thickness T_SIL_1and added with the first height of the first component H_C_1. The first thickness T_FSL_1may preferably be the thickness of the first shielding layer as the final product, i.e.
after the last processing step has been performed on the first shielding layer. The second thickness T_FSL_2of the first shielding layer may be defined as the maximal thickness of the first shielding layer, i.e. the point or area where the first shielding layer is the thickest at the second area A_C_2of the second component. The first shielding layer may comprise a second height H_FSL_2. The second height H_FSL_2may be defined as the distance between the surface of the circuit board facing the proximal surface of the first shielding layer and the distal surface of the first shielding layer at the maximal point or area of the first shielding layer at the second area A_C_2. The second height H_FSL_2may substantially correspond to the second thickness T_FSL_2, added to the second thickness T_FIL_2and/or the second thickness T_SIL_2and added with the second height of the second component H_C_2. The second thickness T_FSL_2may preferably be the thickness of the first shielding layer as the final product, i.e. after the last processing step has been performed on the first shielding layer.

Properties of exemplary electronic circuits EC1-EC4are outlined in Table 1 below. The second component of EC1-EC4may be omitted.

TABLE 1. Properties of exemplary electronic circuits

            EC1          EC2          EC3          EC4
A_C_1       5-10 mm2     8-9 mm2      0.5-2 mm2    1-10 mm2
H_C_1       0.5-2 mm     NA           <1 mm        >H_C_2
A_C_2       2-7 mm2      3-4 mm2      6-8 mm2      1-10 mm2
H_C_2       0.5-2 mm     NA           <1 mm        <1 mm
T_FIL_1     10-30 μm     20-30 μm     20-30 μm     <T_FIL_2
T_FIL_2     35-60 μm     45-55 μm     30-40 μm     >30 μm
T_SIL_1     10-30 μm     20-30 μm     20-30 μm     <T_SIL_2
T_SIL_2     10-200 μm    10-200 μm    10-200 μm    >30 μm
T_FSL_1     50-150 μm    80-120 μm    30-70 μm     <120 μm
T_FSL_2     150-250 μm   180-220 μm   60-70 μm     >T_FSL_1

The first viscosity and the first thickness of the first shielding layer may e.g. be chosen based on one or more of the distance or the gap between the components, the method of applying the first shielding layer, the type of component, and properties of the first insulation layer. For example, for a smaller distance between the components, i.e. a smaller gap, the viscosity of the first shielding material may be lower than for a larger distance between the components, i.e. a larger gap. This may allow the first shielding material to penetrate between the components. The viscosity of the first shielding material may be proportional to the distance between the components.

The first thickness of the first shielding layer may depend on a frequency of the electromagnetic interference generated by the first component to be shielded. The frequency to be shielded may be determined based on the operating frequency of one or more components of the electronic circuit. For example, an antenna may operate at a frequency that matches a frequency of the generated electromagnetic interference of a component, such as the first component. In that case it may be important to shield that specific frequency such that the antenna may operate without being disturbed. Thus, depending on the frequency of the generated electromagnetic interference to be shielded, the first thickness of the first shielding layer may be varied. The shielded frequency is dependent on the thickness of the first shielding layer. For example, in order to shield an electromagnetic interference having a frequency of about 1 MHz, the first thickness, T_FSL_1, of the first shielding layer may be in the range from 1 μm to 500 μm, in the range from 10 μm to 300 μm, in the range from 20 μm to 200 μm, in the range from 30 μm to 100 μm, or in the range from 50 μm to 80 μm.
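Collecting the height definitions above into explicit relations (restated for clarity; the approximate equalities reflect the "may substantially correspond to" language of the disclosure, and assume both insulation layers are present over the first component):

\[ H_{FIL,1} \approx H_{C,1} + T_{FIL,1} \]
\[ H_{SIL,1} \approx H_{C,1} + T_{FIL,1} + T_{SIL,1} \]
\[ H_{FSL,1} \approx H_{C,1} + T_{FIL,1} + T_{SIL,1} + T_{FSL,1} \]

and analogously with subscript 2 over the second component.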
The first shielding material may be selected depending on the frequency or frequency range to be shielded. The frequency range to be shielded may e.g. be in the range from 0.1 kHz to 10 GHz or in the range from 1 MHz to 1 GHz. The first shielding material may comprise one or more metals including a first metal and/or a second metal. The first shielding material may comprise a base material, such as a base matrix of a polymer, such as an epoxy resin comprising metal particles. The one or more metals may be selected from copper, silver, gold, platinum, and nickel. The first shielding material may comprise an alloy. The first shielding material may be or comprise a conductive polymer. The first shielding material may comprise metal particles, such as micrometer-sized metal particles and/or nanometer-sized metal particles. The metal particles may be or comprise copper particles, silver particles, gold particles, zinc particles, and/or nickel particles. The first shielding material may comprise copper particles that are silver coated. The metal particles may have a concentration in the first shielding material in the range from 1 to 100 wt %, such as in the range from 5 to 30 wt %. The first shielding layer may comprise a plurality of portions, e.g. a first portion and a second portion, separated from each other. The first portion of the first shielding layer may cover and shield the first component. The second portion of the first shielding layer may cover and shield the second component.

In one or more exemplary methods, applying a first insulation layer comprises jetting the first insulation layer and curing the first insulation layer. Jetting the first insulation layer may comprise jetting first insulation material on the first component, e.g. on the distal surface of the first component. In one or more exemplary methods, jetting first insulation material on the first component may be combined with masking prior to jetting first insulation material, e.g. by arranging a masking element. Thus, in one or more exemplary methods, applying a first insulation layer outside the first component comprises applying a masking before jetting the first insulation material. Jetting first insulation material may comprise printing first insulation material on the first component and/or circuit board. Jetting the first insulation material may allow for a more automated and accurate application of the first insulation layer, e.g. by removing manual steps in the manufacturing of the electronic circuit. This may provide a higher uniformity of the layers applied, e.g. the thickness of the layers, and in turn provide more reliable layers. Further, introduction of potential human/operator-related contamination on the boards-to-be-coated can be reduced and/or prevented.

In one or more exemplary methods, jetting the first insulation layer comprises applying one or more droplets of first insulation material, the droplets having a volume in the range from 0.01 μL to 0.1 μL. In other words, the droplets may have a volume per droplet in the range from 0.01 μL/dot to 0.1 μL/dot. In one or more exemplary methods, the droplets may have a mass in the range from 0.01 mg/dot to 0.1 g/dot. In one or more exemplary methods, the first insulation material may have a density in the range from 0.5 g/mL to 20 g/mL. The volumes and densities may be provided at a temperature in a range from 20° C. to 25° C.
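For intuition on these jetting parameters, a back-of-the-envelope sketch follows (the input values are picked from the ranges stated above and in Table 1; the calculation itself is illustrative and not part of the disclosure):

    # Rough droplet-count estimate for jetting a first insulation layer.
    # Assumes layer volume ~ covered area x target thickness; the input
    # values are drawn from the stated ranges (A_C_1 and T_FIL_1 for EC2).
    area_mm2 = 8.0        # first area A_C_1, within 8-9 mm2
    thickness_um = 25.0   # first thickness T_FIL_1, within 20-30 um
    droplet_ul = 0.05     # droplet volume, within 0.01-0.1 uL

    layer_volume_ul = area_mm2 * (thickness_um / 1000.0)  # 1 mm3 == 1 uL
    droplets = layer_volume_ul / droplet_ul
    print(f"~{layer_volume_ul:.2f} uL of material, ~{droplets:.0f} droplets")
    # prints: ~0.20 uL of material, ~4 droplets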
In one or more exemplary methods, jetting the first insulation layer comprises jetting one or more first areas including a first primary area A1_1of the body and optionally a first secondary area A1_2. The first primary area A1_1may be an area around and/or on the first area A_C_1of the first component. The first secondary area A1_2may be an area around and/or on the second area A_C_2of the second component. In one or more exemplary methods, jetting the second insulation layer comprises jetting one or more second areas including a second primary area A2_1of the body and optionally a second secondary area A2_2. The second primary area A2_1may be an area around and/or on the first area A_C_1of the first component. The second secondary area A2_2may be an area around and/or on the second area A_C_2of the second component. In one or more exemplary methods, the first primary area and the second primary area are partly overlapping. The overlapping area may be identified after the electronic circuit has been manufactured, e.g. a transition may be identified by microscope at the overlapping area. For example, the molecular structure of the first insulation layer after curing may look different from the molecular structure of the second insulation layer after curing.

Curing the first insulation layer may comprise curing the first insulation material. In one or more exemplary methods, curing the first insulation layer comprises UV-curing the first insulation layer. Curing the first insulation layer may comprise, e.g. low-temperature curing, heat-curing, moisture-curing, UV-curing, infrared light curing, near infrared light curing, or photonic curing. The UV-curing may be performed at a wavelength in the range of 100 nm to 400 nm, e.g. for the first insulation materials curing by polymerization. The curing may be performed using flooding exposure, e.g. for a period in the range of 0.5 s to 30 s. The curing time may vary depending on the thickness of the insulation layer. For example, the first insulation layer may require a certain dose before being cured, such as a UV dose. For example, a thicker first insulation layer may require a higher dose and/or a longer curing time to be cured properly. The dose provided during the curing may vary depending on the wavelength used and the time of exposure. The dose provided during the curing may be in the range from 0.1 J/cm2 to 10 J/cm2. Correspondingly, the irradiance provided during curing may be in the range from 0.1 W/cm2 to 1.15 W/cm2. A preferred UV light for the curing may be UV-C light, as a faster curing may be achieved. This may be advantageous in order to avoid that the first insulation layer flows out before being cured. The curing temperature may e.g. be in the range from 60° C. to 500° C., in the range from 60° C. to 400° C., in the range from 80° C. to 300° C., or in the range from 50° C. to 200° C., e.g. for the insulation materials cured by solvent removal. The curing of the first insulation material may comprise evaporating part of the first insulation material. The composition of the first insulation material may therefore be different after the first insulation material has been cured. The first thickness T_FIL_1may also be different before and after curing, e.g. T_FIL_1is thinner after curing than before. The curing of the first insulation material may comprise a polymerization reaction induced by the UV light source. Moreover, for UV-curable materials, a secondary moisture-curing mechanism may be applied, e.g. for shadowed areas.
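The dose and irradiance figures above are linked by the exposure time through the standard relation (not specific to the disclosure) that dose equals irradiance times time:

\[ E = I \cdot t, \qquad \text{e.g. } 1\,\mathrm{W/cm^2} \times 5\,\mathrm{s} = 5\,\mathrm{J/cm^2}, \]

which falls within the stated 0.1 J/cm2 to 10 J/cm2 range for a flooding exposure of a few seconds.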
In one or more exemplary methods, applying a second insulation layer comprises jetting the second insulation layer and curing the second insulation layer, e.g. after the first insulation layer has been jetted and cured. Jetting the second insulation layer may comprise jetting second insulation material on the first component, e.g. on the distal surface of the first component. In one or more exemplary methods, jetting second insulation material on the first component may be combined with masking prior to jetting second insulation material, e.g. by arranging a masking element. Thus, in one or more exemplary methods, applying a second insulation layer outside the first component comprises applying a masking before jetting the second insulation material. Jetting second insulation material may comprise printing second insulation material on the first component and/or circuit board. Jetting the second insulation material may allow for a more automated and accurate application of the second insulation layer, e.g. by removing manual steps in the manufacturing of the electronic circuit. This may provide a higher uniformity of the layers applied, e.g. the thickness of the layers, and in turn provide more reliable layers. Further, introduction of potential human/operator-related contamination on the boards-to-be-coated can be reduced and/or prevented.

Curing the second insulation layer may comprise curing the second insulation material. Curing the second insulation layer may comprise, e.g. low-temperature curing, heat-curing, moisture-curing, UV-curing, infrared light curing, near infrared light curing, or photonic curing. The UV-curing may be performed at a wavelength in the range of 100 nm to 400 nm. The curing may be performed using flooding exposure, e.g. for a period in the range of 0.5 s to 10 s. The curing time may vary depending on the thickness of the insulation layer. For example, the second insulation layer may require a certain dose before being cured, such as a UV dose. The dose provided during the curing may vary depending on the wavelength used and the time of exposure. The dose provided during the curing may be in the range from 0.1 J/cm2 to 10 J/cm2. Correspondingly, the irradiance provided during curing may be in the range from 0.1 W/cm2 to 1.15 W/cm2. The dose provided during the curing may depend on the wavelength of the light source that is used. A preferred UV light for the curing may be UV-C light, as a faster curing may be achieved. This may be advantageous in order to avoid that the second insulation layer flows out before being cured. The curing temperature may e.g. be in the range from 60° C. to 500° C., in the range from 60° C. to 400° C., in the range from 80° C. to 300° C., or in the range from 50° C. to 200° C. The curing of the second insulation material may comprise evaporating part of the second insulation material. The composition of the second insulation material may therefore be different after the second insulation material has been cured. The first thickness T_SIL_1may also be different before and after curing, e.g. T_SIL_1is thinner after curing than before. The curing of the second insulation material may comprise a polymerization reaction induced by the UV light source. Moreover, for UV-curable materials, a secondary moisture-curing mechanism may be applied, e.g. for shadowed areas.
In one or more exemplary methods, curing the second insulation layer comprises UV-curing the second insulation layer to form a first interface between the first insulation layer and the second insulation layer. The first interface may be identified after the electronic circuit has been manufactured, e.g. a transition may be identified by microscope at the first interface. For example, the molecular structure of the first insulation layer after curing may look different from the molecular structure of the second insulation layer after curing. The applying of the first insulation layer, the applying of the second insulation layer, and the applying of the first shielding layer may all be achieved by jetting, which allows the use of the same machine for all three steps. By using the same machine, the number of fabrication steps of the electronic circuit may be reduced, whereby an easier and faster fabrication process may be achieved. Jetting first insulation material and/or second insulation material may e.g. comprise one or more of screen printing, inkjet, and aerosol printing. The jetting may e.g. be tilt jetting, e.g. to provide a more uniform layer and/or provide a better coverage of component terminals.

In one or more exemplary methods, applying the first shielding layer outside the first insulation layer comprises contacting the first shielding layer, such as the first portion and/or the second portion of the first shielding layer, to a ground connection, such as to one or more ground pad elements, e.g. of a ground pad ring. The ground connection may e.g. be a ground connection of the circuit board, a ground connection through the first component being connected to a ground connection of the circuit board, or a ground pad ring e.g. at least partly encircling the first component. The ground connection may comprise one or more ground pad elements. The ground pad ring may be a continuous ring such that the ground pad ring is whole. The ground pad ring may be formed by a number of ground pad elements arranged along a closed curve, e.g. encircling the first component and/or the second component. A ground pad ring having a continuous ring may provide greater flexibility for the grounding of the first shielding layer. The continuous ring of the ground pad ring may have a width in the range from 1 μm to 500 μm, 100 μm to 500 μm, 200 μm to 500 μm, or 1 μm to 100 μm, preferably between 5 μm and 50 μm, more preferably between 10 μm and 50 μm. In one or more exemplary methods/electronic circuits, the first shielding layer is not contacted to a ground connection but is outside the first insulation layer without being in contact with a ground connection.

In one or more exemplary methods, applying a first insulation layer outside the first component comprises moulding first insulation material on the first component, e.g. on the distal surface of the first component. Moulding first insulation material may comprise providing a mould around the first component, e.g. to delimit the area to mould, and then applying first insulation material on the first component, e.g. by injecting first insulation material into the space/cavity between the mould and the first component/circuit board. In one or more exemplary methods, applying a first insulation layer outside the first component comprises spraying first insulation material on the first component. Applying a first insulation layer outside the first component may comprise masking, e.g. by arranging a masking element, e.g.
prior to spraying first insulation material on the first component. Thus, application of first insulation material to selected areas is provided for, e.g. preventing the ground connection from being covered with first insulation material.

In one or more exemplary methods, applying a first shielding layer outside the first component comprises curing the first shielding material. Curing the first shielding material may comprise, e.g. low-temperature curing, heat-curing, moisture-curing, UV-curing, infrared light curing, near infrared light curing, or photonic curing. The curing temperature may e.g. be in the range from 60° C. to 500° C., in the range from 60° C. to 400° C., in the range from 80° C. to 300° C., in the range from 50° C. to 200° C., or in the range from 150° C. to 180° C. The curing of the first shielding material may comprise evaporating part of the first shielding material. The composition of the first shielding material may therefore be different after the first shielding material has been cured. After the curing, the metal particles of the first shielding layer may e.g. be more concentrated than before curing, providing a higher density of metal particles, whereby a higher conductivity may be achieved. The first thickness T_FSL_1may also be different before and after curing, e.g. T_FSL_1is thinner after curing than before curing. The curing of the first shielding material may comprise a polymerization reaction induced by the UV light source. Moreover, for UV-curable materials, a secondary moisture-curing mechanism may be applied, e.g. for shadowed areas.

In one or more exemplary methods, applying a first shielding layer outside the first component comprises moulding first shielding material on the first component. In one or more exemplary methods, applying a first shielding layer outside the first component comprises spraying first shielding material on the first component. Spraying first shielding material on the first component may be advantageous for low-viscosity material. In one or more exemplary methods, applying a first shielding layer outside the first component comprises jetting first shielding material on the first component. In one or more exemplary methods, applying a first shielding layer outside the first component comprises applying a masking before jetting, spraying or otherwise applying the first shielding material. Thereby, improved control of the application of first shielding material may be provided. Jetting first shielding material may e.g. comprise inkjet and/or aerosol printing. The jetting may e.g. be tilt jetting, e.g. to provide a more uniform layer. In one or more exemplary methods, applying a first shielding layer outside the first component comprises covering the first component with first shielding material.

In one or more exemplary methods, the body comprises a plurality of circuit boards and one or more components including a first component mounted on each of the circuit boards, and applying a first insulation layer comprises applying a first insulation layer to each of the circuit boards before applying the second insulation layer. A body comprising a plurality of circuit boards may also be denoted a panel, such as a PCB panel. Applying a first insulation layer may comprise jetting a first number of circuit boards, such as a first row of circuit boards on the body, for a first jetting time period. The first jetting time period may be in the range from 0.5 s to 20 s or 0.5 s to 10 s per circuit board and/or component.
The first jetting time period may depend on the first number of circuit boards to be jetted. The first jetting time period may depend on one or more of the droplet size, the panel size, the circuit board size, and the first component size and/or the second component size. Applying a first insulation layer may comprise curing the first insulation layer jetted on the first number of circuit boards, for a first curing time period in the range from 0.5 s to 20 s or 0.5 s to 10 s per insulation layer, such as the first insulation layer, per circuit board and/or per panel. The first curing time period may for example depend on the thickness of the first insulation layer and/or the first insulation material. An advantage of this is that the first insulation layer may be prevented from flowing out on the body before being cured, e.g. when too many circuit boards are processed at a time. In other words, it may be avoided that the first insulation layer flows out on the body while the remaining first insulation layer is applied on the rest of the body.

Applying a first insulation layer may comprise jetting a second number of circuit boards, such as a second row of circuit boards on the body, for a second jetting time period in the range from 0.5 s to 20 s or 0.5 s to 10 s per circuit board and/or component. The second jetting time period may depend on the second number of circuit boards to be jetted. The second jetting time period may depend on one or more of the droplet size, the panel size, the circuit board size, the first component size, and the second component size. The second jetting time period may be different from or equal to the first jetting time period. Applying a first insulation layer may comprise curing the first insulation layer on the second number of circuit boards, for a second curing time period in the range from 0.5 s to 20 s or 0.5 s to 10 s per insulation layer, such as the second insulation layer, per circuit board and/or per panel. The second curing time period may for example depend on the thickness of the second insulation layer and/or the second insulation material.

Applying a first insulation layer to each of the circuit boards before applying the second insulation layer may comprise repeating the above jetting and curing steps for all the circuit boards of the body for the first insulation layer before applying the second insulation layer (a sketch of this row-by-row flow is given below). An advantage of this is that the first insulation layer may be cured for the entire body before applying the second insulation layer. In one or more exemplary methods, applying a second insulation layer comprises applying a second insulation layer to each of the circuit boards before applying the first shielding layer.

The method may comprise applying a first protection layer outside the first shielding layer. The first protection layer may be an environment protecting layer protecting the first shielding layer, the first insulation layer, the first component, and more generally the electronic circuit (or at least parts thereof) and the audio device, e.g. from the surrounding environment such as climate, e.g. climate-related stressors (moisture, temperature, liquid water), climate-related contaminants (e.g. dust), and/or humans, e.g. human-related contaminants (human secretion products, e.g. cerumen, sebum, sweat). The first protection layer may fully cover the first insulation layer and/or the first shielding layer. The first protection layer may be made of a first protection material.
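A minimal sketch of the row-by-row panel flow referenced above (the panel layout, function names, and printed timings are illustrative assumptions, not part of the disclosure):

    # Sketch of row-by-row panel processing: each row of circuit boards
    # is jetted and then cured, and the whole panel receives the first
    # insulation layer before the second insulation layer is applied.
    panel = [["board_1", "board_2"], ["board_3", "board_4"]]  # rows

    def jet(board: str, layer: str) -> None:
        print(f"jetting {layer} on {board} (0.5-10 s per board)")

    def cure_row(layer: str) -> None:
        print(f"curing {layer} for this row (0.5-10 s)")

    for layer in ["first insulation layer", "second insulation layer"]:
        for row in panel:            # finish the whole panel for one
            for board in row:        # layer before starting the next
                jet(board, layer)
            cure_row(layer)          # cure before the layer can flow out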
The first protection material may be the same as the first insulation material. The first protection material may comprise or essentially consist of a similar or the same material as the first insulation material of the first insulation layer. This may be an advantage with regard to the adhesion between the first protection layer, the first shielding layer, and the first insulation layer. Further, use of the same material for the first insulation layer and the first protection layer simplifies the manufacture of the electronic circuit. The first protection material may alternatively be different from the first insulation material. The first protection layer may protect the first shielding layer from corroding. This may avoid, e.g., an unwanted connection between components. An unwanted connection may for example be a connection between a battery having a first voltage and a component having a second voltage different from the first voltage, whereby the battery may be drained or damaged and/or the component may be damaged. An audio device is disclosed. The audio device comprises a housing and an electronic circuit accommodated in the housing. The electronic circuit comprises a circuit board and one or more components including a first component mounted on the circuit board. The electronic circuit comprises a first shielding layer, a first insulation layer, and a second insulation layer, e.g., covering the first component. The second insulation layer is arranged between the first insulation layer and the first shielding layer. The audio device may be a hearing device such as a hearable or a hearing aid, comprising a processor configured to compensate for a hearing loss of a user. The audio device may be of the behind-the-ear (BTE) type, in-the-ear (ITE) type, in-the-canal (ITC) type, receiver-in-canal (RIC) type or receiver-in-the-ear (RITE) type. The hearing aid may be a binaural hearing aid. The first insulation layer and/or the first protection layer may insulate the electronic circuit, and in turn the audio device, from the environment that the audio device is exposed to, and protect them against it. For example, when the audio device is worn by a user, the audio device may be exposed to sweat and cerumen from the user and to weather conditions such as humidity, heat, and dust, against which insulation and protection may be desirable. An electronic circuit for an audio device is disclosed. The electronic circuit comprises a circuit board and one or more components including a first component mounted on the circuit board. The electronic circuit comprises a first shielding layer, a first insulation layer and optionally a second insulation layer, the second insulation layer being arranged between the first insulation layer and the first shielding layer. The first insulation layer may at least partly be arranged between the circuit board and the first component. In one or more exemplary electronic circuits, the first shielding layer has a thickness in the range from 50 μm to 150 μm, such as in the range from 75 μm to 125 μm. Optionally, the first shielding layer has a thickness in the range from 1 μm to 150 μm. The first shielding layer may have a thickness less than 50 μm. In one or more exemplary methods/electronic circuits/audio devices, the one or more components comprise a second component. The method may comprise applying the first insulation layer on the second component.
In one or more exemplary electronic circuits/audio devices, the one or more components comprise a second component mounted on the circuit board. The first insulation layer and/or the first shielding layer may cover the second component. In one or more exemplary electronic circuits/audio devices, the electronic circuit comprises a first protection layer outside the first shielding layer. The first protection layer may fully or at least partially cover the first shielding layer. In one or more exemplary electronic circuits/audio devices, the circuit board comprises a ground connection contacting the first shielding layer. In one or more exemplary electronic circuits/audio devices, the first shielding layer, or at least a first portion and/or a second portion of the first shielding layer, is insulated from the ground connection of the circuit board. In one or more exemplary electronic circuits, the first insulation layer may substantially cover the circuit board, combined with the first shielding layer covering the components and the first protection layer covering the first shielding layer. It is to be understood that a description of a feature in relation to method(s) is also applicable to the corresponding feature in the electronic circuit/audio device. Examples of methods and products (electronic circuit and audio device) according to the disclosure are set out in the following items:
Item 1. Method of manufacturing an electronic circuit of an audio device, the method comprising: providing a body comprising a circuit board and one or more components including a first component mounted on the circuit board; applying a first insulation layer; applying a second insulation layer; and applying one or more shielding layers including a first shielding layer covering at least a part of the second insulation layer, wherein applying the first insulation layer comprises jetting the first insulation layer and curing the first insulation layer, and applying the second insulation layer comprises jetting the second insulation layer and curing the second insulation layer.
Item 2. Method according to item 1, wherein jetting the first insulation layer comprises applying one or more droplets of first insulation material, the droplets having a volume in the range from 0.01 μL to 0.1 μL.
Item 3. Method according to any of items 1-2, wherein jetting the first insulation layer comprises jetting one or more first areas including a first primary area of the body.
Item 4. Method according to any of items 1-3, wherein jetting the second insulation layer comprises jetting one or more second areas including a second primary area of the body.
Item 5. Method according to item 4 as dependent on item 3, wherein the first primary area and the second primary area are partly overlapping.
Item 6. Method according to any of items 1-5, wherein curing the first insulation layer comprises UV-curing the first insulation layer.
Item 7. Method according to any of items 1-6, wherein curing the second insulation layer comprises UV-curing the second insulation layer to form a first interface between the first insulation layer and the second insulation layer.
Item 8. Method according to any of items 1-7, wherein the first insulation layer is made of a first insulation material comprising one or more polymers.
Item 9. Method according to item 8, wherein the first insulation material has a first viscosity prior to curing in the range from 0.30 Pa·s to 200 Pa·s.
Item 10. Method according to any of items 1-9, wherein the body comprises a plurality of circuit boards and one or more components including a first component mounted on each of the circuit boards, and wherein applying a first insulation layer comprises applying a first insulation layer to each of the circuit boards before applying the second insulation layer.
Item 11. Method according to item 10, wherein applying a second insulation layer comprises applying a second insulation layer to each of the circuit boards before applying the first shielding layer.
Item 12. Audio device comprising a housing and an electronic circuit accommodated in the housing, the electronic circuit comprising a circuit board and one or more components including a first component mounted on the circuit board, the electronic circuit comprising a first shielding layer, a first insulation layer and a second insulation layer, the second insulation layer being arranged between the first insulation layer and the first shielding layer.
Item 13. Electronic circuit for an audio device, the electronic circuit comprising a circuit board and one or more components including a first component mounted on the circuit board, the electronic circuit comprising a first shielding layer, a first insulation layer and a second insulation layer, the second insulation layer being arranged between the first insulation layer and the first shielding layer.
Item 14. Electronic circuit according to item 13, wherein the first shielding layer has a thickness in the range from 50 μm to 150 μm.
FIG.1shows a first or distal view of parts of an exemplary body comprising a circuit board. The electronic circuit6,6A,6B,6C comprises a circuit board8and one or more components including a first component10having a first area A_C_1. The first component10is mounted on the circuit board8at a first position P_C_1. The first component10may be a power supply module. The electronic circuit6optionally comprises a second component12having a second area A_C_2. The second component12is mounted on the circuit board8at a second position P_C_2. The electronic circuit6optionally comprises a third component14having a third area A_C_3. The third component14is mounted on the circuit board8at a third position P_C_3. The third component14may be an antenna. The circuit board8comprises a ground connection15, the ground connection15comprising one or more ground pad elements15A exposed on the circuit board. The ground pad elements15A are connected to a common ground of the circuit board8. In one or more exemplary methods and/or electronic circuits, the ground pad elements15A form a ground pad ring around one or more components, such as encircling one or more components on the circuit board. In other words, the ground pad elements15A optionally form a ground pad ring or ground pad ring structure encircling the first component10and/or the second component12. The ground pad elements15A optionally form a ground pad ring or ground pad ring structure encircling the third component14. FIG.2shows a first view or distal view of parts of exemplary electronic circuits. The electronic circuit6,6C comprises a first insulation layer16covering a first primary area A1_1of the body (e.g. around the first component10) and optionally a first secondary area A1_2of the body (e.g. around the second component12). As may be seen inFIG.2, the first primary area A1_1and the first secondary area A1_2overlap. FIG.3shows a first view or distal view of parts of exemplary electronic circuits.
The electronic circuit6,6C comprises a second insulation layer17covering a second primary area A2_1of the body (e.g., covering the first component10) and optionally a second secondary area A2_2of the body (e.g., covering the second component12). As may be seen inFIG.3, the second primary area A2_1and the second secondary area A2_2overlap. FIG.4shows a first view or distal view of parts of exemplary electronic circuits. The electronic circuit6,6C comprises a first insulation layer16covering a first primary area A1_1of the body (e.g., around the first component10) and optionally a first secondary area A1_2of the body (e.g., around the second component12). As may be seen inFIG.4, the first primary area A1_1and the first secondary area A1_2are separated and do not overlap. FIG.5shows a first view or distal view of parts of exemplary electronic circuits. The electronic circuit6,6C comprises a second insulation layer17covering a second primary area A2_1of the body (e.g., covering the first component10) and optionally a second secondary area A2_2of the body (e.g., covering the second component12). As may be seen inFIG.5, the second primary area A2_1and the second secondary area A2_2are separated and do not overlap. FIG.6shows a first view or distal view of parts of exemplary electronic circuits. The electronic circuit6,6C comprises a first shielding layer18outside and covering the first insulation layer16,16A,16B and the second insulation layer17,17A,17B, seeFIGS.2-5. The first shielding layer18covers and shields the first component10(first area A_C_1) and optionally the second component12(second area A_C_2). Further, the first shielding layer18is in electrical (galvanic) contact with the ground connection15via one or more ground pad elements. The first shielding layer18may have first electromagnetic properties in the first area of the first component10and may be configured to shield a first electromagnetic interference of the first component10, e.g. to shield in a first frequency range such as in a frequency range used by another component of the electronic circuit. The first shielding layer18may have second electromagnetic properties in the second area of the second component12and may be configured to shield a second electromagnetic interference of the second component12, e.g. to shield in a second frequency range such as in a frequency range used by another component of the electronic circuit. FIG.7shows a first view or distal view of parts of exemplary electronic circuits. The electronic circuit6B shown inFIG.7is similar to the electronic circuit6C shown inFIG.6, but the first portion16A of the first insulation layer, the first portion17A of the second insulation layer, and the first portion18A of the first shielding layer are separated from the second portion16B of the first insulation layer, the second portion17B of the second insulation layer, and the second portion18B of the first shielding layer. The first insulation layer (not visible inFIG.7) is separated into at least a first portion16A and a second portion16B, e.g., to provide increased design flexibility when designing the electronic circuit. Accordingly, the electronic circuit6A,6B comprises a first portion16A of first insulation layer and a second portion16B of first insulation layer. The second insulation layer (not visible inFIG.7) is separated into at least a first portion17A and a second portion17B, e.g. to provide increased design flexibility when designing the electronic circuit.
Accordingly, the electronic circuit6A,6B comprises a first portion17A of second insulation layer and a second portion17B of second insulation layer. The first portion17A of second insulation layer is outside and covering the first component10. The second portion17B of second insulation layer is outside and covering the second component12. The first shielding layer is separated into at least a first portion18A and a second portion18B, e.g., to provide increased design flexibility when designing the electronic circuit. Accordingly, the electronic circuit6B comprises a first portion18A of first shielding layer outside and covering the first portion16A of the first insulation layer and the first portion17A of the second insulation layer. The first portion18A of the first shielding layer covers and shields the first component10. Further, the electronic circuit6B optionally comprises a second portion18B of first shielding layer outside and covering the second portion16B of the first insulation layer and the second portion17B of the second insulation layer. The second portion18B of the first shielding layer covers and shields the second component12. The first portion18A of the first shielding layer and the second portion18B of the first shielding layer may have the same or different properties, such as thickness and/or shielding material. The first portion18A may have first electromagnetic properties configured to shield a first electromagnetic interference of the first component, e.g., to shield in a first frequency range such as in a frequency range used by another component of the electronic circuit. The second portion18B may have second electromagnetic properties configured to shield a second electromagnetic interference of the second component, e.g., to shield in a second frequency range such as in a frequency range used by another component of the electronic circuit. The first portion18A and/or the second portion18B may each contact one or more ground pad elements of the circuit board. In one or more exemplary electronic circuits, the first portion of the first shielding layer may be insulated from the ground connection of the circuit board. In one or more exemplary electronic circuits, the second portion of the first shielding layer may be insulated from the ground connection of the circuit board. FIG.8shows a first or distal view of parts of exemplary electronic circuits. The electronic circuit6,6A,6B,6C optionally comprises a first protection layer20outside and covering the first shielding layer18,18A,18B. FIGS.9A-B show a flow diagram of an exemplary method. The method100of manufacturing an electronic circuit of an audio device comprises providing102a body comprising a circuit board and one or more components including a first component. Optionally, the method100comprises mounting104one or more components including mounting104A the first component on the circuit board. Mounting104one or more components may comprise mounting104B a second component on the circuit board, and/or mounting104C a third component on the circuit board. The method100comprises applying106a first insulation layer; applying110a second insulation layer; and applying116one or more shielding layers including a first shielding layer covering at least part of the second insulation layer. In method100, applying106a first insulation layer comprises jetting108the first insulation layer.
In method100, applying106a first insulation layer optionally comprises applying106A a masking before jetting108the first insulation layer. In method100, applying106a first insulation layer comprises curing109the first insulation layer. In the method100, jetting108the first insulation layer comprises applying108A one or more droplets of first insulation material, the droplets having a volume in the range from 0.01 μL to 0.1 μL. In the method100, jetting108the first insulation layer comprises jetting108B one or more first areas including a first primary area of the body. In the method100, curing109the first insulation layer comprises UV-curing109A the first insulation layer. In the method100, the body comprises a plurality of circuit boards and one or more components including a first component mounted on each of the circuit boards, and applying106a first insulation layer comprises applying106B a first insulation layer to each of the circuit boards before applying110the second insulation layer. In method100, applying110a second insulation layer comprises jetting112the second insulation layer. In method100, applying110a second insulation layer optionally comprises applying110A a masking before jetting112the second insulation layer. In method100, applying110the second insulation layer comprises curing114the second insulation layer. In the method100, jetting112the second insulation layer comprises applying112A one or more droplets of second insulation material, the droplets having a volume in the range from 0.01 μL to 0.1 μL. In the method100, jetting112the second insulation layer comprises jetting112B one or more second areas including a second primary area of the body. In the method100, curing114the second insulation layer comprises UV-curing114A the second insulation layer to form a first interface between the first insulation layer and the second insulation layer. In the method100, applying110a second insulation layer comprises applying110B a second insulation layer to each of the circuit boards before applying116the first shielding layer. In the method100, applying116a first shielding layer optionally comprises applying118a first shielding layer outside, e.g. on a distal side of, the second insulation layer. Applying116a first shielding layer optionally comprises contacting118A the first shielding layer to a ground connection, e.g. as part of applying118a first shielding layer outside, e.g. on a distal side of, the first insulation layer. Applying118the first shielding layer outside the second insulation layer may comprise one or more of moulding118B first shielding material on the second insulation layer, spraying118C first shielding material on the second insulation layer, and jetting118D first shielding material on the second insulation layer, e.g. as part of optionally covering the second insulation layer with first shielding material. In method100, applying118a first shielding layer outside the second insulation layer optionally comprises applying118E a masking optionally before jetting118D and/or spraying118C the first shielding material. In method100, applying116a first shielding layer optionally comprises curing116A the first shielding layer. The method100optionally comprises applying120a first protection layer outside the first shielding layer. FIG.10shows a cross-sectional view along a cross section line A of electronic circuit6C. The first portion16A of the first insulation layer covers a first primary area A1_1of the body and surrounds the first component10. 
The first portion of the first insulation layer has a first thickness T_FIL_1(not shown) in the range from 1 μm to 500 μm. The second portion16B of the first insulation layer covers a first secondary area A1_2of the body and surrounds the second component12. The second portion of the first insulation layer has a second thickness T_FIL_2(not shown) in the range from 1 μm to 500 μm. The first portion17A of the second insulation layer covers a second primary area A2_1of the body and covers the first component10. The first portion17A of the second insulation layer has a first thickness T_SIL_1in the range from 10 μm to 500 μm. The second portion17B of the second insulation layer covers a second secondary area A2_2of the body and covers the second component12. The second portion17B of the second insulation layer has a second thickness T_SIL_2(not shown) in the range from 10 μm to 500 μm. As may be seen inFIG.10, the first primary area A1_1and the second primary area A2_1are partly overlapping. The first secondary area A1_2and the second secondary area A2_2are also partly overlapping. The first shielding layer18may comprise metallic particles and contacts ground pad element15A. The first shielding layer18covers the first portion16A and the second portion16B of the first insulation layer16, and the first portion17A and the second portion17B of the second insulation layer17, and therefore also the first component10and the second component12. The first shielding layer has a first thickness T_FSL_1in the range from 1 μm to 500 μm and a second thickness T_FSL_2(maximum thickness in the second area of the second component) in the range from 1 μm to 500 μm. The first thickness T_FSL_1is different from the second thickness T_FSL_2and is configured to shield a first electromagnetic field from the first component10. The second thickness T_FSL_2is configured to shield a second electromagnetic field from the second component12. FIG.11shows a cross-sectional view along a cross section line B of electronic circuit6B. The electronic circuit6B shown inFIG.11is similar to the electronic circuit6C shown inFIG.10, but the first portion16A of the first insulation layer, the first portion17A of the second insulation layer, and the first portion18A of the first shielding layer are separated from the second portion16B of the first insulation layer, the second portion17B of the second insulation layer, and the second portion18B of the first shielding layer. FIG.12shows a cross-sectional view along the cross section line A of electronic circuit6C. The electronic circuit6C shown inFIG.12is similar to the electronic circuit6C shown inFIG.10, but the first portion16A of the first insulation layer16has a larger first thickness T_FIL_1and the second portion16B of the first insulation layer16has a larger second thickness T_FIL_2than inFIG.10. As may be seen inFIG.12, the first insulation layer16covers at least partly the edges/corners of the first component10and the second component12. As may be seen inFIG.12, the first portion17A of the second insulation layer17covers a central part of the first component10, and the second portion17B of the second insulation layer17covers a central part of the second component12. The combination of the first insulation layer16and the second insulation layer17provides an insulation of the first component10and the second component12.
By having the first insulation layer16on the edges of the first component10and the second component12, the first insulation layer16may promote the adhesion of the second insulation layer17to the edges/corners of the first component10and/or the second component12. The first insulation layer16and the second insulation layer17may thereby build up at the edges/corners of the first component10and the second component12in order to insulate them efficiently. FIG.13shows a cross-sectional view along the cross section line A of electronic circuit6C. The electronic circuit6C shown inFIG.13is similar to the electronic circuit6C shown inFIG.10, but the first insulation layer16only comprises a first portion16A with a first thickness T_FIL_1covering at least partly the first component10and the second component12. In other words, the first insulation layer16is applied as one continuous first portion16A or layer on both the first component10and the second component12. As may be seen inFIG.13, the second insulation layer17only comprises a first portion17A with a first thickness T_SIL_1covering the first component10and the second component12. In other words, the second insulation layer17is applied as one continuous first portion17A or layer on both the first component10and the second component12. FIG.14shows an exemplary audio device2. The audio device2comprises a housing4and an electronic circuit6accommodated in the housing4. The housing4is connected to an ear part24by a tubular member22. The ear part24is configured to be positioned in an ear of a user of the audio device2. The housing4is configured to be positioned behind the ear of a user. The tubular member22is configured to connect the housing4, and thereby the electronic circuit6, to the ear part24, e.g. by being positioned above or beneath the ear of the user. The first insulation layer16, the second insulation layer17, the first shielding layer18, and/or the first protection layer20may insulate the electronic circuit6, and in turn the audio device2, from the environment that the audio device2is exposed to, and protect them against it. For example, when the audio device2is worn by a user, the audio device2may be exposed e.g. to sweat and cerumen from the user and to weather conditions such as humidity, heat, and dust, against which insulation and protection may be desirable. In other exemplary audio devices (not shown), such as an in-the-ear (ITE) type or in-the-canal (ITC) type, the housing4may be an ear part24, such that the housing4and the ear part24are in one piece positioned in the ear of the user. The ear part24may thereby be the audio device2. The use of the terms "first", "second", "third" and "fourth", "primary", "secondary", "tertiary" etc. does not imply any particular order; these terms are included to identify individual elements. Moreover, the use of the terms "first", "second", "third" and "fourth", "primary", "secondary", "tertiary" etc. does not denote any order or importance, but rather the terms "first", "second", "third" and "fourth", "primary", "secondary", "tertiary" etc. are used to distinguish one element from another. Note that the words "first", "second", "third" and "fourth", "primary", "secondary", "tertiary" etc. are used here and elsewhere for labelling purposes only and are not intended to denote any specific spatial or temporal ordering. Furthermore, the labelling of a first element does not imply the presence of a second element and vice versa.
It may be appreciated thatFIGS.1-14comprise some modules or operations which are illustrated with a solid line and some modules or operations which are illustrated with a dashed line. The modules or operations which are illustrated with a solid line are modules or operations which are comprised in the broadest example embodiment. The modules or operations which are illustrated with a dashed line are example embodiments which may be comprised in, or be a part of, the solid line example embodiments, or are further modules or operations which may be taken in addition to the modules or operations of the solid line example embodiments. It should be appreciated that these operations need not be performed in the order presented. Furthermore, it should be appreciated that not all of the operations need to be performed. The exemplary operations may be performed in any order and in any combination. It is to be noted that the word "comprising" does not necessarily exclude the presence of other elements or steps than those listed. It is to be noted that the words "a" or "an" preceding an element do not exclude the presence of a plurality of such elements. It should further be noted that any reference signs do not limit the scope of the claims, that the exemplary embodiments may be implemented at least in part by means of both hardware and software, and that several "means", "units" or "devices" may be represented by the same item of hardware. The various exemplary methods, devices, and systems described herein are described in the general context of method step processes, which may be implemented in one aspect by a computer program product, embodied in a computer-readable medium, including computer-executable instructions, such as program code, executed by computers in networked environments. A computer-readable medium may include removable and non-removable storage devices including, but not limited to, Read Only Memory (ROM), Random Access Memory (RAM), compact discs (CDs), digital versatile discs (DVDs), etc. Generally, program modules may include routines, programs, objects, components, data structures, etc. that perform specified tasks or implement specific abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of program code for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps or processes. Although features have been shown and described, it will be understood that they are not intended to limit the claimed invention, and it will be obvious to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the claimed invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The claimed invention is intended to cover all alternatives, modifications, and equivalents.
LIST OF REFERENCES
2 audio device
4 housing
6, 6A, 6B, 6C electronic circuit
8 circuit board
10 first component, power supply circuitry
12 second component
14 third component
15 ground connection
15A ground pad element
16 first insulation layer
16A first portion of first insulation layer
16B second portion of first insulation layer
17 second insulation layer
17A first portion of second insulation layer
17B second portion of second insulation layer
18 first shielding layer
18A first portion of first shielding layer
18B second portion of first shielding layer
20 first protection layer
22 tubular member
24 ear part
100 method of manufacturing an electronic circuit of an audio device
102 providing a body
104 mounting one or more components on the circuit board
104A mounting a first component on the circuit board
104B mounting a second component on the circuit board
104C mounting a third component on the circuit board
106 applying a first insulation layer
106A applying a masking
106B applying a first insulation layer to each of the circuit boards
108 jetting the first insulation layer
108A applying one or more droplets of first insulation material
108B jetting one or more first areas including a first primary area of the body
109 curing the first insulation layer
109A UV-curing the first insulation layer
110 applying a second insulation layer
110A applying a masking
110B applying a second insulation layer to each of the circuit boards
112 jetting the second insulation layer
112A applying one or more droplets of second insulation material
112B jetting one or more second areas including a second primary area of the body
114 curing the second insulation layer
114A UV-curing the second insulation layer
116 applying a first shielding layer
116A curing the first shielding layer
118 applying a first shielding layer outside the first insulation layer
118A contacting the first shielding layer to a ground connection
118B moulding first shielding material on the first insulation layer
118C spraying first shielding material on the first insulation layer
118D jetting first shielding material on the first insulation layer
118E applying a masking
120 applying a first protection layer outside the first shielding layer
A cross section line
B cross section line
A1_1 first primary area of first insulation layer
A1_2 first secondary area of first insulation layer
A2_1 second primary area of second insulation layer
A2_2 second secondary area of second insulation layer
A_C_1 first area of first component
A_C_2 second area of second component
A_C_3 third area of third component
P_C_1 first position of first component
P_C_2 second position of second component
P_C_3 third position of third component
T_FIL_1 first thickness of the first insulation layer
T_FIL_2 second thickness of the first insulation layer
T_SIL_1 first thickness of the second insulation layer
T_SIL_2 second thickness of the second insulation layer
T_FSL_1 first thickness of the first shielding layer
T_FSL_2 second thickness of the first shielding layer
89,434
11943590
DETAILED DESCRIPTION

I. Introduction

In devices having one or more microphone arrays, such as auditory prostheses (e.g., hearing aids, cochlear implants, bone conduction devices, etc.), multi-microphone noise reduction systems are used to preserve desired sounds (e.g., speech), while rejecting unwanted sounds (e.g., noise). In certain conventional noise reduction systems, a local microphone array (LMA) worn on the recipient (i.e., part of the device) is used to focus on a sound source (e.g., a speaker) that is in a predefined direction, such as directly in front of the recipient. While such a noise reduction system may be robust, it is also prone to poor performance in situations where the desired speaker is not in the predefined direction. Examples of such situations may be found in classroom environments or while a recipient is travelling in a motor vehicle. The integrated noise reduction techniques presented herein improve upon these existing noise reduction systems in several distinct ways: (i) by including the ability to focus on a target sound source (e.g., a speaker) that is not in the predefined direction and, in certain arrangements, (ii) by including external microphones (XMs) that operate together with the LMA, resulting in further noise reduction as opposed to using only the LMA. In certain embodiments presented herein, the integrated noise reduction techniques utilize two separate tuning parameters, one for controlling the sound received from the predefined direction, and the other for the sound received from an estimated direction where the target sound source may be located. In these embodiments, each of these directions can be defined using the LMA and the XMs. In order to define the predefined direction with the LMA and the XMs, a modified version of the improved method of estimation of a transfer function for the XM is used, where the input signals have to undergo a specific series of transformations. Using one or several XMs along with the LMA can provide significant speech intelligibility improvement, for instance where an XM is quite close to the desired speaker, or even where an XM merely provides a relevant noise reference. Additionally, the integrated noise reduction techniques presented herein are flexible in that they encompass a wide range of noise reduction options according to the tuning of the system. For ease of understanding, the following description is organized into several sections. In particular, section II describes a data model, which considers the general case of a local microphone array (LMA) in conjunction with one or several external microphones (XMs), and which can be reduced to a single external microphone without compromising the equations provided herein. A transformed domain, as well as a pre-whitened-transformed domain, is also introduced in order to simplify the flow of signal processing operations and realize distinct digital signal processing (DSP) block schemes. In section III, an integrated minimum variance distortionless response (MVDR) beamformer is discussed as applied to a local microphone array. In particular, section III describes an integrated MVDR beamformer which leverages the use of a priori assumptions and the use of estimated quantities. In section IV, an integrated MVDR beamformer as applied to a local microphone array together with one or more external microphones is described.
Again, an integrated MVDR beamformer for application to a local microphone array together with one or more external microphones, which leverages the use of a priori assumptions and the use of estimated quantities, is described.

II. Data Model

A. Unprocessed Signals

Consider a noise reduction system that consists of a local microphone array (LMA) of $M_a$ microphones and $M_e$ external microphones, providing a total of $M_a + M_e$ microphones. Also consider a scenario where there is only one desired/target sound source, such as a target speech source, in a noisy environment. Formulating the problem in the short-time Fourier transform (STFT) domain, the received signal at one particular frequency, $k$, and one time frame, $l$, can be represented as:

$$y(k,l) = x(k,l) + n(k,l) \quad (1)$$
$$\phantom{y(k,l)} = a(k,l)\,s(k,l) + n(k,l) \quad (2)$$

where (dropping the dependency on $k$ and $l$ for brevity) $y = [y_a^T\ y_e^T]^T$; $y_a = [y_{a,1}\ y_{a,2} \ldots y_{a,M_a}]^T$ are the local microphone signals; $y_e = [y_{e,1}\ y_{e,2} \ldots y_{e,M_e}]^T$ are the external microphone signals; $x$ is the speech component consisting of $a = [a_a^T\ a_e^T]^T$, the acoustic transfer function (ATF) from the speech source to all $M_a + M_e$ microphones, and $s$, the speech source signal. Finally, $n = [n_a^T\ n_e^T]^T$ represents the noise component, which consists of a combination of correlated and uncorrelated noises. Variables with the subscript "a" refer to the LMA signals and variables with the subscript "e" refer to the XM signals. The dependencies on $k$ and $l$ will be reintroduced herein, as needed, for mathematical derivations. In general, the speech component (target sound), $x$, can be represented in terms of a relative transfer function (RTF) vector such that:

$$x = a s = h s_1 \quad (3)$$

where $s_1 = a_{a,1}\, s$ is the speech in a reference microphone of the LMA (w.l.o.g. the first microphone is chosen as the reference microphone) and $h$ is the RTF vector defined as:

$$h = \Big[ 1\ \tfrac{a_{a,2}}{a_{a,1}}\ \ldots\ \tfrac{a_{a,M_a}}{a_{a,1}}\ \Big|\ \tfrac{a_{e,1}}{a_{a,1}}\ \ldots\ \tfrac{a_{e,M_e}}{a_{a,1}} \Big]^T = \big[ 1\ h_{a,2}\ \ldots\ h_{a,M_a}\ \big|\ h_{e,1}\ h_{e,2}\ \ldots\ h_{e,M_e} \big]^T = \big[ h_a^T\ \big|\ h_e^T \big]^T \quad (4)$$

consisting of an RTF vector corresponding to the LMA signals, $h_a$, and an RTF vector corresponding to the XM signals, $h_e$. With such a formulation, the noise reduction system will aim to produce an estimate for the speech component in the reference microphone, $s_1$. The $(M_a+M_e) \times (M_a+M_e)$ speech-plus-noise, noise-only, and speech-only spatial correlation matrices are given respectively as:

$$R_{yy} = \mathcal{E}\{y y^H\} \quad (5)$$
$$R_{nn} = \mathcal{E}\{n n^H\} \quad (6)$$
$$R_{xx} = \mathcal{E}\{x x^H\} \quad (7)$$

where $\mathcal{E}\{\cdot\}$ is the expectation operator and $(\cdot)^H$ is the Hermitian transpose. It is assumed that the speech components are uncorrelated with the noise components, and hence the speech-only correlation matrix can be found from the difference of the speech-plus-noise correlation matrix and the noise-only correlation matrix:

$$R_{xx} = R_{yy} - R_{nn} \quad (8)$$

The speech-plus-noise and noise-only correlation matrices are estimated from the received microphone signals during speech-plus-noise and noise-only periods, using a voice activity detector (VAD). The correlation matrices can also be calculated solely for the LMA signals, respectively, as $R_{y_a y_a} = \mathcal{E}\{y_a y_a^H\}$, $R_{n_a n_a} = \mathcal{E}\{n_a n_a^H\}$, and $R_{x_a x_a} = \mathcal{E}\{x_a x_a^H\}$ (which can be realized by the top-left $(M_a \times M_a)$ block of the corresponding entire correlation matrices in (5)-(7)). The estimate of the speech component in the reference microphone, $z_1$, is then obtained through the linear filtering of the microphone signals, such that:

$$z_1 = w^H y \quad (9)$$

where $w = [w_a^T\ w_e^T]^T$ is the complex-valued filter to be designed.
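As an illustration only (not part of the patent disclosure), the VAD-gated estimation of the correlation matrices and the subtraction in (8) can be sketched in Python/numpy; the recursive-averaging factor and array shapes are assumptions of this sketch:

import numpy as np

def update_correlations(Y, vad, lam=0.95):
    # Y: (M, L) complex STFT coefficients of all microphones at one bin k.
    # vad: (L,) boolean, True for speech-plus-noise frames.
    # lam: recursive averaging ("forgetting") factor, an assumed value.
    M = Y.shape[0]
    Ryy = np.eye(M, dtype=complex)  # speech-plus-noise correlation, eq. (5)
    Rnn = np.eye(M, dtype=complex)  # noise-only correlation, eq. (6)
    for l in range(Y.shape[1]):
        outer = np.outer(Y[:, l], Y[:, l].conj())
        if vad[l]:
            Ryy = lam * Ryy + (1.0 - lam) * outer
        else:
            Rnn = lam * Rnn + (1.0 - lam) * outer
    Rxx = Ryy - Rnn  # speech-only correlation by subtraction, eq. (8)
    return Ryy, Rnn, Rxx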
B. Transformed Domain

As will be described later, working with the signals in a transformed domain will result in convenient relations to be made and an overall simplification of the flow of signal processing operations. The transformation will be based on an a priori assumed RTF vector for the LMA signals, $\tilde{h}_a$ (which may or may not be equal to $h_a$). Firstly, an $M_a \times (M_a-1)$ unitary blocking matrix $B_a$ for $\tilde{h}_a$ and an $M_a \times 1$ vector $b_a$ are defined such that:

$$B_a^H \tilde{h}_a = 0; \qquad b_a = \frac{\tilde{h}_a}{\|\tilde{h}_a\|} \quad (10)$$

where $B_a^H B_a = I_{(M_a-1)}$ and, in general, $I_\vartheta$ denotes the $\vartheta \times \vartheta$ identity matrix, and $b_a$ can be interpreted as a scaled matched filter. W.l.o.g., $b_a$ will simply be referred to as a matched filter in the following derivations. Using $B_a$ and $b_a$, an $(M_a+M_e) \times (M_a+M_e)$ unitary transformation matrix, $T$, can be subsequently defined:

$$T = \begin{bmatrix} T_a & 0 \\ 0 & I_{M_e} \end{bmatrix} = \begin{bmatrix} [B_a\ b_a] & 0 \\ 0 & I_{M_e} \end{bmatrix} \quad (11)$$

where $T_a = [B_a\ b_a]$, $T_a^H T_a = I_{M_a}$, and hence indeed $T^H T = I_{(M_a+M_e)}$. Consequently, the transformed input signals, $y$, become:

$$T^H y = \begin{bmatrix} T_a^H y_a \\ y_e \end{bmatrix} = \begin{bmatrix} B_a^H y_a \\ b_a^H y_a \\ y_e \end{bmatrix} \quad (12)$$

The transformed noise signals can also be similarly defined:

$$T^H n = \begin{bmatrix} T_a^H n_a \\ n_e \end{bmatrix} = \begin{bmatrix} B_a^H n_a \\ b_a^H n_a \\ n_e \end{bmatrix} \quad (13)$$

It should be understood that this transformed domain consists of the LMA signals passed through a blocking matrix and a matched filter, as in the first stage of a generalized sidelobe canceller (GSC) (i.e., the adaptive implementation of an MVDR beamformer), along with the XM signals.

C. Pre-Whitened-Transformed Domain

A spatial pre-whitening operation can be defined from the noise-only correlation matrix in the previously described transformed domain by using the Cholesky decomposition:

$$\mathcal{E}\{(T^H n)(T^H n)^H\} = L L^H \quad (14)$$

where $L$ is an $(M_a+M_e) \times (M_a+M_e)$ lower triangular matrix. In block form, $L$ can be realized as:

$$L = \begin{bmatrix} L_a\ (M_a \times M_a) & 0\ (M_a \times M_e) \\ L_c\ (M_e \times M_a) & L_x\ (M_e \times M_e) \end{bmatrix} \quad (15)$$

where $L_a$ and $L_x$ are lower triangular matrices. It should be noted that $L_a$ corresponds to the LMA signals and results from a Cholesky decomposition of the noise correlation matrix of the LMA signals in the transformed domain, hence:

$$\mathcal{E}\{(T_a^H n_a)(T_a^H n_a)^H\} = L_a L_a^H \quad (16)$$

A signal vector in the transformed domain can consequently be pre-whitened by pre-multiplying it with $L^{-1}$. Such signal quantities will be denoted with the underbar $\underline{(\cdot)}$ notation. Hence, the signal $y$ in this so-called pre-whitened-transformed domain is given by:

$$\underline{y} = \begin{bmatrix} \underline{y}_a \\ \underline{y}_e \end{bmatrix} = L^{-1} T^H y \quad (17)$$

and similarly for $n$:

$$\underline{n} = \begin{bmatrix} \underline{n}_a \\ \underline{n}_e \end{bmatrix} = L^{-1} T^H n \quad (18)$$

The respective correlation matrices are also given by:

$$R_{\underline{y}\underline{y}} = \mathcal{E}\{\underline{y}\,\underline{y}^H\} \quad (19)$$
$$R_{\underline{n}\underline{n}} = \mathcal{E}\{\underline{n}\,\underline{n}^H\} = I_{(M_a+M_e)} \quad (20)$$
$$R_{\underline{x}\underline{x}} = R_{\underline{y}\underline{y}} - R_{\underline{n}\underline{n}} \quad (21)$$

The spatial correlation matrices can also be calculated solely for the LMA signals, respectively, as $R_{\underline{y}_a\underline{y}_a} = \mathcal{E}\{\underline{y}_a \underline{y}_a^H\}$, $R_{\underline{n}_a\underline{n}_a} = I_{M_a}$, and $R_{\underline{x}_a\underline{x}_a} = R_{\underline{y}_a\underline{y}_a} - R_{\underline{n}_a\underline{n}_a}$.

D. Summary of Symbols and Realization

FIG. 1 is a block diagram illustrating the flow of the previously described transformations on the unprocessed signals. Transformation block 102 is a processing block that represents the first transformation of section II-B, in which the LMA signals pass through a blocking matrix 104 and a matched filter 106, analogous to the first stage of a GSC. The XM signals are unaltered. The pre-whitening block 108 is a processing block that represents the pre-whitening operation of section II-C, yielding signals 109 in the pre-whitened-transformed domain. The noise reduction filters that will be developed below will then be directly applied to these pre-whitened-transformed signals (i.e., the output of pre-whitening block 108) in order to yield the desired speech estimate. A minimal code sketch of these transformation and pre-whitening operations is given below.
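The following sketch (an illustration under stated assumptions, not the patented implementation) builds the blocking matrix, the transformation matrix $T$ of (11), and the pre-whitening of (14)-(17); the use of scipy's null_space and cholesky routines is an implementation choice of this sketch:

import numpy as np
from scipy.linalg import null_space, cholesky

def build_transform(h_a_tilde, Me):
    # Blocking matrix Ba with orthonormal columns and Ba^H h̃a = 0, eq. (10).
    Ma = h_a_tilde.shape[0]
    Ba = null_space(h_a_tilde.conj().reshape(1, Ma))
    ba = h_a_tilde / np.linalg.norm(h_a_tilde)   # matched filter, eq. (10)
    Ta = np.hstack([Ba, ba.reshape(Ma, 1)])      # Ta = [Ba ba]
    # Unitary transformation matrix T, eq. (11); XM signals are unaltered.
    T = np.block([[Ta, np.zeros((Ma, Me))],
                  [np.zeros((Me, Ma)), np.eye(Me)]])
    return T

def prewhiten(T, Rnn, y):
    # Noise correlation in the transformed domain and its Cholesky factor, eq. (14).
    Rnn_t = T.conj().T @ Rnn @ T
    L = cholesky(Rnn_t, lower=True)
    # Pre-whitened-transformed signal vector, eq. (17).
    y_pw = np.linalg.solve(L, T.conj().T @ y)
    return L, y_pw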
The following is also a summary of how the symbolic notation should be interpreted throughout this document: $(\cdot)_a$ refers to quantities associated with the LMA signals, e.g., $y_a$; $(\cdot)_e$ refers to quantities associated with the XM signals, e.g., $y_e$; $\tilde{(\cdot)}$ refers to a priori assumed quantities, e.g., $\tilde{h}$; $\hat{(\cdot)}$ refers to estimated quantities, e.g., $\hat{h}$; and $\underline{(\cdot)}$ refers to quantities in the pre-whitened-transformed domain, e.g., $\underline{y}_a$.

III. MVDR Using a LMA (MVDRa)

The MVDR beamformer minimizes the total noise power (minimum variance), while preserving the received signal in a particular direction (distortionless response). This direction is specified by defining the appropriate RTF vector for the MVDR beamformer. Considering only the LMA, the MVDR problem can be formulated as follows (which will be referred to as the MVDRa):

$$\min_{w_a} w_a^H R_{n_a n_a} w_a \quad \text{s.t.} \quad w_a^H h_a = 1 \quad (22)$$

where $h_a$ is the RTF vector from (4), which in practice is unknown and hence will either be replaced by a priori assumptions or estimated from the speech-plus-noise correlation matrices. The optimal noise reduction filter is then given by:

$$w_a = \frac{R_{n_a n_a}^{-1} h_a}{h_a^H R_{n_a n_a}^{-1} h_a} \quad (23)$$

Finally, the speech estimate, $z_{a,1}$, from this MVDRa beamformer is obtained through the linear filtering of the microphone signals with the complex-valued filter $w_a$:

$$z_{a,1} = w_a^H y_a \quad (24)$$

In sections III-A and III-B, strategies for designing an MVDRa beamformer using an RTF vector based either on a priori assumptions or estimated from the speech-plus-noise correlation matrices are discussed. Section III-C illustrates an integrated beamformer that integrates the use of a priori assumptions with estimates.

A. Using an a Priori Assumed RTF Vector

The MVDRa problem can be formulated as in (22), except using an a priori assumed RTF vector, $\tilde{h}_a = [1\ \tilde{h}_{a,2} \ldots \tilde{h}_{a,M_a}]^T$, instead of $h_a$. This $\tilde{h}_a$ can be based on a priori assumptions regarding microphone characteristics, position, speaker location and room acoustics (e.g., no reverberation). Similar to (23), the optimal noise reduction filter is then given by:

$$\tilde{w}_a = \frac{R_{n_a n_a}^{-1} \tilde{h}_a}{\tilde{h}_a^H R_{n_a n_a}^{-1} \tilde{h}_a} \quad (25)$$

The speech estimate, $\tilde{z}_{a,1}$, from this MVDRa with an a priori assumed RTF vector is then:

$$\tilde{z}_{a,1} = \tilde{w}_a^H y_a \quad (26)$$

This conventional formulation of the MVDRa can also be equivalently posed in the pre-whitened-transformed domain (section II-C). As derived in Appendix A, the speech estimate in this domain is given by:

$$\tilde{z}_{a,1} = \frac{l_{M_a}}{\|\tilde{h}_a\|}\, \underline{y}_{a,M_a} \quad (27)$$

where $l_{M_a}$ is the bottom-right element of $L_a$, and $\underline{y}_{a,M_a}$ is the last component of the pre-whitened-transformed signals, $\underline{y}_a$. In other words, the speech estimate for an MVDRa filter that uses an a priori assumed RTF vector results in a simple scaling of the last component of the pre-whitened-transformed signals. With such a formulation in this domain, this beamforming algorithm can be realized in a distinct set of signal processing blocks as illustrated in FIG. 2. More specifically, FIG. 2 illustrates transformation block 102 and pre-whitening block 108, as described above with reference to FIG. 1. However, in the example of FIG. 2, in pre-whitening block 108, only the last row of $L_a^{-1}$ from (16) is used, resulting in the signal $\underline{y}_{a,M_a}$. Also shown is an a priori filter 110, which produces the scaling $l_{M_a}/\|\tilde{h}_a\|$, and processing block 112, which applies $l_{M_a}/\|\tilde{h}_a\|$ to $\underline{y}_{a,M_a}$. The application of $l_{M_a}/\|\tilde{h}_a\|$ to $\underline{y}_{a,M_a}$ produces an a priori speech estimate $\tilde{z}_{a,1}$.
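As a brief illustrative aside, the closed-form MVDR filter of (23)/(25) and the estimate of (24)/(26) reduce to a few lines of numpy; the function names, the free-field all-ones RTF vector, and the placeholder noise correlation below are assumptions of this sketch, not values from the patent:

import numpy as np

def mvdr_filter(Rnn, h):
    # w = Rnn^{-1} h / (h^H Rnn^{-1} h), eqs. (23) and (25).
    Ri_h = np.linalg.solve(Rnn, h)
    return Ri_h / (h.conj() @ Ri_h)

Ma = 3
h_a_tilde = np.ones(Ma, dtype=complex)   # assumed a priori RTF vector
Rnn_a = np.eye(Ma, dtype=complex)        # placeholder noise correlation
w_a = mvdr_filter(Rnn_a, h_a_tilde)
y_a = np.random.randn(Ma) + 1j * np.random.randn(Ma)
z_a1 = np.vdot(w_a, y_a)                 # speech estimate, w̃a^H ya, eq. (26)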
The a priori speech estimate, $\tilde{z}_{a,1}$, is an estimate of the target sound (e.g., speech) in the received sound signals, based solely on an a priori RTF vector. The RTF vector is generated using assumptions regarding, for example, the location of the source of the target sound, characteristics of the microphones (e.g., microphone calibration with regard to gains, phases, etc.), reverberant characteristics of the target sound source, etc. The a priori speech estimate, $\tilde{z}_{a,1}$, is an example of an a priori estimate of at least one target sound in the received sound signals.

B. Using an Estimated RTF Vector

The RTF vector may also be estimated without reliance on any a priori assumptions and can be used to enhance the speech regardless of the speech source location. One such method is a method of covariance whitening, or equivalently one that involves a Generalized Eigenvalue Decomposition (GEVD). In such examples, a rank-1 matrix approximation problem can be formulated to estimate the RTF vector for a given set of LMA signals such that:

$$\min_{\hat{R}_{x_a,r1}} \left\| (R_{y_a y_a} - R_{n_a n_a}) - \hat{R}_{x_a,r1} \right\|_F^2 \quad (28)$$

where $\|\cdot\|_F$ is the Frobenius norm, and $\hat{R}_{x_a,r1}$ is a rank-1 approximation to $(R_{y_a y_a} - R_{n_a n_a})$ defined as:

$$\hat{R}_{x_a,r1} = \hat{\Phi}_{x_a,r1}\, \hat{h}_a \hat{h}_a^H \quad (29)$$

where $\hat{h}_a = [1\ \hat{h}_{a,2} \ldots \hat{h}_{a,M_a}]^T$ is the estimated RTF vector. As opposed to using the raw signal correlation matrices, the estimation problem of (28) can be equivalently formulated in the pre-whitened-transformed domain. In Appendix B, it is shown that the estimated RTF vector is then:

$$\hat{h}_a = \frac{T_a L_a\, p_{\max}}{\eta_\rho} \quad (30)$$

where $p_{\max}$ is a generalized eigenvector of the matrix pencil $\{R_{\underline{y}_a \underline{y}_a}, R_{\underline{n}_a \underline{n}_a}\}$, which, as a result of the pre-whitening ($R_{\underline{n}_a \underline{n}_a} = I_{M_a}$), corresponds to the principal (first in this case) eigenvector of $R_{\underline{y}_a \underline{y}_a}$; the scaling $\eta_\rho = e_{a_1}^T T_a L_a\, p_{\max}$; and $e_{a_1} = [1\ 0 \ldots 0]^T$ is an $M_a \times 1$ vector. The resulting MVDRa using this estimated RTF vector is now given by:

$$\hat{w}_a = \frac{R_{n_a n_a}^{-1} \hat{h}_a}{\hat{h}_a^H R_{n_a n_a}^{-1} \hat{h}_a} \quad (31)$$

As was done in section III-A, this filter based on estimated quantities can also be reformulated in the pre-whitened-transformed domain. Leaving the derivations once again to Appendix B, the corresponding speech estimate using the estimated RTF vector is:

$$\hat{z}_{a,1} = \eta_\rho\, p_{\max}^H \underbrace{L_a^{-1} T_a^H y_a}_{\underline{y}_a} = \eta_\rho\, p_{\max}^H\, \underline{y}_a \quad (32)$$

where $\eta_\rho^*\, p_{\max}$ can be considered as the pre-whitened-transformed filter (where $\{\cdot\}^*$ is the complex conjugate), which can be used to directly filter the pre-whitened-transformed signals, $\underline{y}_a$. These operations can also be realized in a distinct set of signal processing blocks, as illustrated in FIG. 3. More specifically, FIG. 3 illustrates transformation block 102 and pre-whitening block 108, as described above with reference to FIG. 1, which produce pre-whitened-transformed signals. Also shown is block 114, which filters the pre-whitened-transformed signals in accordance with $\eta_\rho^*\, p_{\max}$ (i.e., 114 represents the Hermitian transpose of the pre-whitened-transformed filter). The output of the pre-whitened-transformed filter 114 is a direct speech estimate, $\hat{z}_{a,1}$ (i.e., (32), above). The direct speech estimate, $\hat{z}_{a,1}$, is an estimate of the target sound (e.g., speech) in the received sound signals, based solely on an estimated RTF vector. The estimated RTF vector is generated using real-time estimates of, for example, the location of the source of the target sound, reverberant characteristics of the target sound source, etc.
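Again purely as an illustrative sketch (function names and shapes assumed), the covariance-whitening estimate of (30) and the direct speech estimate of (32) can be computed from the principal eigenvector of the pre-whitened speech-plus-noise correlation matrix:

import numpy as np

def estimate_rtf_gevd(Ryy_pw, Ta, La):
    # Principal eigenvector of R_y̲a_y̲a (Hermitian); valid because the
    # pre-whitening makes R_n̲a_n̲a the identity, cf. eq. (30).
    eigvals, eigvecs = np.linalg.eigh(Ryy_pw)
    p_max = eigvecs[:, -1]               # eigh sorts eigenvalues ascending
    e_a1 = np.zeros(Ta.shape[0]); e_a1[0] = 1.0
    eta = e_a1 @ Ta @ La @ p_max         # scaling η_ρ so that ĥa[0] = 1
    h_hat = Ta @ La @ p_max / eta        # estimated RTF vector, eq. (30)
    return h_hat, p_max, eta

def direct_speech_estimate(p_max, eta, y_pw):
    # ẑa,1 = η_ρ p_max^H y̲a, eq. (32)
    return eta * np.vdot(p_max, y_pw)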
The direct speech estimate, $\hat{z}_{a,1}$, is an example of a direct estimate of at least one target sound in the received sound signals.

C. Integrated MVDRa Beamformer

Described above are two general MVDR approaches, one that imposes a priori assumptions for the definition of the RTF vector in the MVDR filter, and another that involves an estimation of this RTF vector. In conventional arrangements, a choice typically has to be made for one of these approaches, accepting its inevitable drawbacks. However, in accordance with the integrated noise reduction techniques presented herein, both approaches are integrated into one global filter, referred to herein as an "integrated MVDRa beamformer," that exploits the benefits of each approach. In general, the integrated MVDRa beamformer provides for integrated tunings which allow different "weights" to be applied to each of (1) an a priori assumed representation of target sound within received sound signals (e.g., an a priori estimate of at least one target sound in the received sound signals), and (2) an estimated representation of the target sound within received sound signals (e.g., a direct estimate of at least one target sound in the received sound signals). The weights applied to each of the a priori assumed representation of the target sound and the estimated representation of the target sound are selected based on "confidence measures" associated with each of the a priori assumed representation of the target sound and the estimated representation of the target sound, respectively. For instance, with the integrated MVDRa beamformer, if the speech source moves outside of the direction defined by an a priori assumed RTF vector, more weight can be given to an estimated RTF vector to account for the loss in performance that would otherwise result from using the a priori assumed RTF vector alone. On the other hand, if the estimated RTF vector becomes unreliable, less weight can be given thereto and the system can revert to using the a priori assumed RTF vector, which may have improved performance if the speech source is indeed in the direction defined by the a priori assumed RTF vector. Combination/mixing of the a priori assumed RTF vector and the estimated RTF vector is also possible. That is, the tuning parameters can achieve multiple beamformers, i.e., one that relies on a priori assumptions alone, one that relies on estimated quantities alone, or a mixture of both. One particular tuning of interest may be to place a large weight on an a priori assumed RTF vector, but to weight an estimated RTF vector only when appropriate. This represents a mechanism for reverting to the a priori assumed RTF vector when the estimated RTF vector is unreliable. In the following, the integrated MVDRa beamformer is briefly derived. Considering the case where $\tilde{h}_a$ is defined according to a priori assumptions and $\hat{h}_a$ is estimated from (86), an integrated MVDRa cost function can be given as:

$$\min_{w_a} w_a^H R_{n_a n_a} w_a + \alpha \left| w_a^H \tilde{h}_a - 1 \right|^2 + \beta \left| w_a^H \hat{h}_a - 1 \right|^2 \quad (33)$$

where $\alpha \in [0,\infty]$ and $\beta \in [0,\infty]$ are tuning parameters that control how much the respective RTF vectors (i.e., the a priori assumed RTF vector and the estimated RTF vector) are weighted. This cost function is the combination of that of an MVDRa (as in (22)) defined by $\tilde{h}_a$ and another defined by $\hat{h}_a$, except that the constraints have been softened by $\alpha$ and $\beta$.
The solution to (33) is given by:

$$w_{a,\mathrm{int}} = f_{\mathrm{pr}}(\alpha,\beta)\, \tilde{w}_a + f_{\mathrm{est}}(\alpha,\beta)\, \hat{w}_a \quad (34)$$

where $\tilde{w}_a$ and $\hat{w}_a$ are defined in (25) and (31) respectively, and:

$$f_{\mathrm{pr}}(\alpha,\beta) = \frac{\alpha k_{dd} \left[ 1 + \beta (k_{pp} - k_{dp}) \right]}{\alpha k_{dd} + \beta k_{pp} + \alpha\beta (k_{pp} k_{dd} - k_{dp} k_{pd}) + 1} \quad (35)$$

$$f_{\mathrm{est}}(\alpha,\beta) = \frac{\beta k_{pp} \left[ 1 + \alpha (k_{dd} - k_{pd}) \right]}{\alpha k_{dd} + \beta k_{pp} + \alpha\beta (k_{pp} k_{dd} - k_{dp} k_{pd}) + 1} \quad (36)$$

with the constants:

$$k_{dd} = \tilde{h}_a^H R_{n_a n_a}^{-1} \tilde{h}_a; \quad k_{pp} = \hat{h}_a^H R_{n_a n_a}^{-1} \hat{h}_a; \quad k_{dp} = \tilde{h}_a^H R_{n_a n_a}^{-1} \hat{h}_a; \quad k_{pd} = \hat{h}_a^H R_{n_a n_a}^{-1} \tilde{h}_a \quad (37)$$

This integrated MVDR beamformer reveals that the MVDRa beamformer based on a priori assumptions from (25) and that which is based on estimated quantities from (31) can be combined according to the functions $f_{\mathrm{pr}}(\alpha,\beta)$ and $f_{\mathrm{est}}(\alpha,\beta)$ respectively. As in the previous sections, this integrated beamformer can also be expressed in the pre-whitened-transformed domain as follows:

$$w_{a,\mathrm{int}} = f_{\mathrm{pr}}(\alpha,\beta)\, T_a L_a^{-H} \frac{l_{M_a}}{\|\tilde{h}_a\|}\, e_{M_a} + f_{\mathrm{est}}(\alpha,\beta)\, T_a L_a^{-H} \eta_\rho\, p_{\max} \quad (38)$$

where $e_{M_a} = [0 \ldots 0\ 1]^T$ selects the last component (cf. (27)), and with the constants equivalently, but alternatively, defined as:

$$k_{dd} = \underline{\tilde{h}}_a^H \underline{\tilde{h}}_a; \quad k_{pp} = \underline{\hat{h}}_a^H \underline{\hat{h}}_a; \quad k_{dp} = \underline{\tilde{h}}_a^H \underline{\hat{h}}_a; \quad k_{pd} = \underline{\hat{h}}_a^H \underline{\tilde{h}}_a \quad (39)$$

where $\underline{\tilde{h}}_a$ and $\underline{\hat{h}}_a$ are given in (79) and (88) respectively. The resulting speech estimate from this integrated beamformer is then given by:

$$\hat{z}_{a,\mathrm{int}} = f_{\mathrm{pr}}^*(\alpha,\beta)\, \frac{l_{M_a}}{\|\tilde{h}_a\|}\, \underline{y}_{a,M_a} + f_{\mathrm{est}}^*(\alpha,\beta)\, \eta_\rho\, p_{\max}^H\, \underline{y}_a = f_{\mathrm{pr}}^*(\alpha,\beta)\, \tilde{z}_{a,1} + f_{\mathrm{est}}^*(\alpha,\beta)\, \hat{z}_{a,1} \quad (40)$$

The benefit of this pre-whitened-transformed domain is apparent here: with the integrated beamformer of (38), the pre-whitened-transformed filters corresponding to $\tilde{w}_a$ and $\hat{w}_a$ can be used directly to filter the pre-whitened-transformed signals, and the results can then be combined with the appropriate weightings as defined by the functions $f_{\mathrm{pr}}(\alpha,\beta)$ and $f_{\mathrm{est}}(\alpha,\beta)$ to yield the respective speech estimate. These functions $f_{\mathrm{pr}}(\alpha,\beta)$ and $f_{\mathrm{est}}(\alpha,\beta)$ can be tuned so as to emphasize the result from an MVDR beamformer that uses either an a priori assumed RTF vector or an estimated RTF vector. This results in a digital signal processing scheme as depicted in FIG. 4, discussed after the code sketch below.
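For illustration only, the weighting functions (35)-(36) and the combined estimate (40) might be coded as below; treating $\alpha$ and $\beta$ as externally supplied, confidence-driven tuning parameters is an assumption of this sketch:

import numpy as np

def integrated_speech_estimate(z_tilde, z_hat, alpha, beta, kdd, kpp, kdp, kpd):
    # Common denominator of eqs. (35) and (36).
    den = alpha * kdd + beta * kpp + alpha * beta * (kpp * kdd - kdp * kpd) + 1.0
    f_pr = alpha * kdd * (1.0 + beta * (kpp - kdp)) / den    # eq. (35)
    f_est = beta * kpp * (1.0 + alpha * (kdd - kpd)) / den   # eq. (36)
    # Integrated estimate, eq. (40): conjugate weights applied to the
    # a priori estimate z̃a,1 and the direct estimate ẑa,1.
    return np.conj(f_pr) * z_tilde + np.conj(f_est) * z_hat

# Large alpha with small beta leans on the a priori RTF vector; the reverse
# leans on the estimated RTF vector; intermediate values mix both.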
This results in a digital signal processing scheme as depicted in FIG. 4. More specifically, FIG. 4 is a block diagram of an integrated MVDR_a beamformer 125 in accordance with embodiments presented herein. The integrated MVDR_a beamformer 125 comprises a plurality of processing blocks, which include transformation block 102 and pre-whitening block 108. As described above with reference to FIG. 1, transformation block 102 and pre-whitening block 108 produce signals 109 in the pre-whitened-transformed domain (pre-whitened-transformed signals). Also shown in FIG. 4 are two processing branches 113(1) and 113(2) that each operate based on all or part of the pre-whitened-transformed signals 109. The first processing branch 113(1) includes an a priori filter 110, which produces $l_{M_a}/\|\tilde{h}_a\|$, and a processing block 112 which applies $l_{M_a}/\|\tilde{h}_a\|$ to $\bar{y}_{a,M_a}$. The application of $l_{M_a}/\|\tilde{h}_a\|$ to $\bar{y}_{a,M_a}$ generates the a priori speech estimate $\tilde{z}_{a,1}$, which is generated based solely on an a priori RTF vector (i.e., an estimate of the speech in the received sound signals, based solely on a priori assumptions such as microphone characteristics, source location, and reverberant characteristics of the target sound (e.g., speech) source). In other words, application of $l_{M_a}/\|\tilde{h}_a\|$ to $\bar{y}_{a,M_a}$ generates an a priori estimate of at least one target sound in the received sound signals. The first branch 113(1) also comprises a first weighting block 116. The first weighting block 116 is configured to weight the speech estimate, $\tilde{z}_{a,1}$, in accordance with the complex conjugate of the function f_pr(α,β) (i.e., (35) and (40), above). More generally, the first weighting block 116 is configured to weight the speech estimate, $\tilde{z}_{a,1}$, in accordance with a cost function controlled by a plurality of tuning parameters (e.g., (α,β)).

The tuning parameters of the cost function (e.g., f_pr(α,β)) are set based on one or more confidence measures 118 generated for the speech estimate, $\tilde{z}_{a,1}$. The one or more confidence measures 118 represent an assessment or estimate of the accuracy/reliability of the a priori speech estimate, $\tilde{z}_{a,1}$, and hence the accuracy of the a priori RTF vector used to generate the speech estimate, $\tilde{z}_{a,1}$. The first weighting block 116 generates a weighted a priori speech estimate, shown in FIG. 4 by arrow 119.

The second branch 113(2) includes a pre-whitened-transformed filter 114, which filters the pre-whitened-transformed signals in accordance with (32). The output of the pre-whitened-transformed filter 114 is a direct speech estimate, $\hat{z}_{a,1}$, that is generated based solely on an estimated RTF vector (i.e., an estimate of the speech in the received sound signals, which takes into consideration microphone characteristics and may contain information such as the location and some reverberant characteristics of the speech source). In other words, the direct speech estimate, $\hat{z}_{a,1}$, is an example of a direct estimate of at least one target sound in the received sound signals. The second branch 113(2) also comprises a second weighting block 120. The second weighting block 120 is configured to weight the direct speech estimate, $\hat{z}_{a,1}$, in accordance with the complex conjugate of the function f_est(α,β) (i.e., (36) and (40), above). More generally, the second weighting block 120 is configured to weight the direct speech estimate, $\hat{z}_{a,1}$, in accordance with a cost function controlled by a plurality of tuning parameters (e.g., (α,β)). The tuning parameters of the cost function (e.g., f_est(α,β)) are set based on one or more confidence measures 122 generated for the speech estimate, $\hat{z}_{a,1}$. The one or more confidence measures 122 represent an assessment or estimate of the accuracy/reliability of the speech estimate, $\hat{z}_{a,1}$, and hence the accuracy of the estimated RTF vector used to generate the speech estimate, $\hat{z}_{a,1}$. The second weighting block 120 generates a weighted direct speech estimate, shown in FIG. 4 by arrow 123.

FIG. 4 also illustrates processing block 124, which integrates/combines the weighted a priori speech estimate 119 and the weighted direct speech estimate 123. The combination of the weighted a priori speech estimate 119 and the weighted direct speech estimate 123 is referred to as an integrated speech estimate, $\hat{z}_{a,\mathrm{int}}$ (i.e., (40), above). The integrated speech estimate may be used for subsequent processing in the device (e.g., auditory prosthesis).

IV. MVDR with an LMA and XM Signals (MVDR_a,e)

Section III, above, illustrates an embodiment in which the integrated beamformer operates based on local microphone array (LMA) signals. As noted above, LMA signals are generated by a local microphone array (LMA) that is part of the device that performs the integrated noise reduction techniques. In the case of auditory prostheses, such as cochlear implants, the LMA is worn on the recipient. As described further below, the integrated noise reduction techniques described herein can be extended to include external microphone (XM) signals, in addition to the LMA signals.
These XM signals are generated by one or more external microphones (XMs) that are not part of the device that performs the integrated noise reduction techniques, but that can nevertheless communicate with the device (e.g., via a wireless connection). The external microphones may be any type of microphone (e.g., microphones in a wireless microphone device, microphones in a separate computing device (e.g., phone, laptop, tablet, etc.), microphones in another auditory prosthesis, microphones in a conference phone system, microphones in a hands-free system, etc.) for which the location of the microphone(s) is unknown relative to the microphones of the LMA. In other words, as used herein, an external microphone may be any microphone that has an unknown location, which may change over time, with respect to the local microphone array. Extending the techniques herein to the use of LMA signals and XM signals, the integrated beamformer is referred to as the MVDR_a,e:

$$\min_{w}\; w^H R_{nn} w \quad \text{s.t.} \quad w^H h = 1 \tag{41}$$

where h is the RTF vector ((4), above) that includes M_a components corresponding to the LMA, h_a, and M_e components corresponding to the XMs, h_e, and R_nn is the (M_a+M_e)×(M_a+M_e) noise correlation matrix:

$$R_{nn} = \begin{bmatrix} R_{n_a n_a} & R_{n_a n_e} \\ R_{n_a n_e}^H & R_{n_e n_e} \end{bmatrix} \tag{42}$$

where the upper-left (M_a×M_a) block is the noise correlation matrix of the LMA signals, $R_{n_a n_e}$ is the (M_a×M_e) noise cross-correlation between the LMA signals and the XM signals, and $R_{n_e n_e}$ is the (M_e×M_e) noise correlation matrix of the XM signals. Similar to (23), the solution to (41) is given by:

$$w = \frac{R_{nn}^{-1} h}{h^H R_{nn}^{-1} h} \tag{43}$$

with the speech estimate, z = w^H y.
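As a concrete illustration of (42) and (43), the sketch below assembles the block noise correlation matrix and solves for the MVDR_a,e filter. It is a minimal sketch under assumed inputs: the names are hypothetical, and the covariance blocks and the stacked RTF vector are assumed to have been estimated elsewhere.

```python
import numpy as np

def mvdr_ae(R_na, R_nane, R_ne, h):
    """Sketch of (42)-(43): MVDR with stacked LMA + XM signals.

    R_na   : (Ma, Ma) LMA noise correlation matrix
    R_nane : (Ma, Me) LMA/XM noise cross-correlation
    R_ne   : (Me, Me) XM noise correlation matrix
    h      : (Ma + Me,) stacked RTF vector (LMA components first)
    """
    # Block noise correlation matrix of (42)
    R_nn = np.block([[R_na, R_nane],
                     [R_nane.conj().T, R_ne]])

    # MVDR solution of (43)
    Rinv_h = np.linalg.solve(R_nn, h)
    w = Rinv_h / (h.conj() @ Rinv_h)
    return w

# Speech estimate: z = w.conj() @ y for a stacked snapshot y (LMA then XM).
```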
Since, as noted above, the XMs have an unknown location, which may change over time, with respect to the local microphone array, generally no a priori assumptions can be made about the location of the XMs. Consequently, there are two potential approaches that can be taken in order to find h, namely: (i) only the missing component of the RTF vector corresponding to that of the XM signals is estimated, while the a priori assumed RTF vector for the LMA signals is preserved; or (ii) the entire RTF vector is estimated for the LMA signals and the XM signals. In sections IV-A and IV-B, strategies for both approaches are briefly described.

A. Using a Partial a Priori Assumed RTF Vector and Partial Estimated RTF Vector

As previously mentioned, one option for the definition of h for the MVDR_a,e is such that the a priori RTF vector for the LMA signals, $\tilde{h}_a$, is preserved and only the RTF vector for the XM signals is estimated. Such an RTF vector will therefore be defined as follows:

$$\tilde{h} = \begin{bmatrix} \tilde{h}_a^T \,|\, \hat{h}_e^T \end{bmatrix}^T \tag{44}$$

It should be noted that although $\tilde{h}$ partially contains an estimated RTF vector, this estimation is done with respect to the a priori assumptions set by $\tilde{h}_a$, and hence the notation for $\tilde{h}$ is kept to be that of an a priori RTF vector (this is further elaborated upon in section IV-E). A method to compute $\hat{h}_e$ in the case of one XM, using the cross-correlation between the external microphone and a speech reference provided by (26), using a GEVD, is outlined below. As in (28), a rank-1 matrix approximation problem can be formulated to estimate an entire RTF vector for a given set of microphone signals such that:

$$\min_{\tilde{R}_{x,r1}} \left\| \left( R_{yy} - R_{nn} \right) - \tilde{R}_{x,r1} \right\|_F^2 \tag{45}$$

where $\tilde{R}_{x,r1}$ is a rank-1 approximation to $R_{xx}$ (recall (8)). The a priori assumed RTF vector for the LMA signals can also be included in the definition of $\tilde{R}_{x,r1}$, which is hence given by:

$$\tilde{R}_{x,r1} = \hat{\Phi}_{x,r1} \begin{bmatrix} \tilde{h}_a \\ \hat{h}_e \end{bmatrix} \begin{bmatrix} \tilde{h}_a^H & \hat{h}_e^H \end{bmatrix} \tag{46}$$

As opposed to using the raw signal correlation matrices, the estimation problem of (45) can be equivalently formulated in the pre-whitened-transformed domain. In Appendix C, it is shown that the estimated RTF vector can be found from a GEVD on the matrix pencil $\{J^T \bar{R}_{yy} J, J^T \bar{R}_{nn} J\}$, where the selection matrix $J = [0_{(M_e+1)\times(M_a-1)} \,|\, I_{M_e+1}]^T$. As a result of the pre-whitening ($\bar{R}_{nn} = I_{M_a+M_e}$), this GEVD can consequently be computed from the EVD of $J^T \bar{R}_{yy} J$, which is a lower-order correlation matrix of dimensions (M_e+1)×(M_e+1) that can be constructed from the last (M_e+1) elements of the pre-whitened-transformed signals, namely that in relation to the last element of the LMA, $\bar{y}_{a,M_a}$, and those in relation to the XM signals, $\bar{y}_e$. The resulting RTF vector for the XM signals is then defined from the corresponding principal (first in this case) eigenvector, $v_{\max}$:

$$\hat{h}_e = \frac{\|\tilde{h}_a\|}{l_{M_a} v_1}\, J_e^T\, T L J\, v_{\max} \tag{47}$$

where the selection matrix $J_e = [0_{(M_e\times M_a)} \,|\, I_{M_e}]^T$. Finally, this estimate is then used to compute the corresponding MVDR_a,e filter with an a priori assumed RTF vector and a partially estimated RTF vector as:

$$\tilde{w} = \frac{R_{nn}^{-1} \tilde{h}}{\tilde{h}^H R_{nn}^{-1} \tilde{h}} \tag{48}$$

where $\tilde{h}$ as defined in (44) can be equivalently represented as:

$$\tilde{h} = \frac{\|\tilde{h}_a\|}{l_{M_a} v_1}\, T L J\, v_{\max} \tag{49}$$

As was done in section III, this filter can also be reformulated in the pre-whitened-transformed domain. Leaving the derivations once again to Appendix C, the corresponding speech estimate is then found to be:

$$\tilde{z}_1 = \frac{l_{M_a} v_1}{\|\tilde{h}_a\|}\, v_{\max}^H \begin{bmatrix} \bar{y}_{a,M_a} \\ \bar{y}_e \end{bmatrix} \tag{50}$$

where $(l_{M_a} v_1^*/\|\tilde{h}_a\|)\, v_{\max}$ can be considered as a pre-whitened-transformed filter, which can be used to directly filter the last (M_e+1) elements of the pre-whitened-transformed signals, i.e., $\bar{y}_{a,M_a}$ and $\bar{y}_e$.

More specifically, FIG. 5 is a block diagram illustrating a transformation block 502 representing the first transformation of section II-B, in which the LMA signals pass through a blocking matrix 504 and a matched filter 506, analogous to the first stage of a GSC. The XM signals are unaltered. The pre-whitening block 508 represents the pre-whitening operation. The output of the pre-whitening block 508 is signals in the pre-whitened-transformed domain, referred to as pre-whitened-transformed signals 509. Also shown in FIG. 5 is filter 530 (i.e., (50), above), which uses the pre-whitened-transformed signals 509 to generate an a priori speech estimate, $\tilde{z}_1$. As such, the a priori speech estimate, $\tilde{z}_1$, is a speech estimate using a partial a priori assumed RTF vector and a partial estimated RTF vector (i.e., using a priori assumptions for the definition of the RTF vector for the LMA signals, while estimating only the RTF vector for the XM signals). Stated differently, the a priori speech estimate, $\tilde{z}_1$, is generated from assumptions such as microphone characteristics, location and reverberant characteristics of the speech within the sound signals detected by the LMA, and based on a real-time estimate of speech within the sound signals detected by the XM, which adheres to the same assumptions used for the LMA. The a priori speech estimate $\tilde{z}_1$ is an example of an a priori estimate of at least one target sound in the received sound signals.
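A practical consequence of (45)–(50) is that the estimation only ever requires an eigen-decomposition of size (M_e+1)×(M_e+1), regardless of the LMA size. The sketch below illustrates this reduced-order computation; the names are hypothetical, and the pre-whitened-transformed snapshots, $\|\tilde{h}_a\|$ and $l_{M_a}$ are taken as given.

```python
import numpy as np

def estimate_xm_rtf(y_bar, Ma, h_tilde_a_norm, l_Ma):
    """Sketch of (47)/(50): estimate the XM part of the RTF vector from the
    last (Me + 1) pre-whitened-transformed signals via a low-order EVD.

    y_bar : (Ma + Me, N) array of pre-whitened-transformed snapshots
    Ma    : number of LMA microphones
    """
    # Keep only ybar_{a,Ma} and the XM components (selection by J).
    tail = y_bar[Ma - 1:, :]                      # shape (Me + 1, N)

    # Low-order correlation matrix J^T Rbar_yy J of (82), estimated from data.
    R_tail = tail @ tail.conj().T / tail.shape[1]

    # Principal eigenvector v_max (eigh returns ascending eigenvalues).
    _, V = np.linalg.eigh(R_tail)
    v_max = V[:, -1]
    v1 = v_max[0]

    # Pre-whitened-transformed filter of (50), applied to the same tail.
    w_bar_tail = (l_Ma * np.conj(v1) / h_tilde_a_norm) * v_max
    z_tilde = w_bar_tail.conj() @ tail            # speech estimate per snapshot
    return v_max, z_tilde
```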
B. Using an Estimated RTF Vector for the LMA and XM Signals

In the case where the RTF vector for both the LMA and XM signals is to be estimated, a variation of (45) is considered:

$$\min_{\hat{R}_{x,r1}} \left\| \left( R_{yy} - R_{nn} \right) - \hat{R}_{x,r1} \right\|_F^2 \tag{51}$$

where $\hat{R}_{x,r1}$ is a rank-1 approximation to $R_{xx}$ (without any a priori information):

$$\hat{R}_{x,r1} = \hat{\Phi}_{x,r1}\, \hat{h} \hat{h}^H = \hat{\Phi}_{x,r1} \begin{bmatrix} \hat{q}_a \\ \hat{q}_e \end{bmatrix} \begin{bmatrix} \hat{q}_a^H & \hat{q}_e^H \end{bmatrix} \tag{52}$$

with $\hat{q}_a$ the estimated RTF vector for the LMA signals and $\hat{q}_e$ the RTF vector for the XM signals. Once again, it is convenient to re-frame the problem in the pre-whitened-transformed domain. From the derivations in Appendix D, the estimated RTF vector is given by:

$$\hat{h} = \begin{bmatrix} \hat{q}_a \\ \hat{q}_e \end{bmatrix} = \frac{T L\, q_{\max}}{\eta_q} \tag{53}$$

where $q_{\max}$ is a generalized eigenvector of the matrix pencil $\{\bar{R}_{yy}, \bar{R}_{nn}\}$, which as a result of the pre-whitening ($\bar{R}_{nn} = I_{M_a+M_e}$) corresponds to the principal (first in this case) eigenvector of $\bar{R}_{yy}$, $\eta_q = e_{x1}^T T L q_{\max}$, and $e_{x1} = [1\;0 \cdots 0 \,|\, 0 \cdots 0]^T$. The estimated RTF vector can therefore be used as an alternative to h for the MVDR_a,e:

$$\hat{w} = \frac{R_{nn}^{-1} \hat{h}}{\hat{h}^H R_{nn}^{-1} \hat{h}} \tag{54}$$

As derived in Appendix D, the corresponding speech estimate in the pre-whitened-transformed domain is given by:

$$\hat{z}_1 = \eta_q\, q_{\max}^H \underbrace{L^{-1} T^H y}_{\bar{y}} = \eta_q\, q_{\max}^H \bar{y} \tag{55}$$

where $\eta_q^*\, q_{\max}$ can be considered as a pre-whitened-transformed filter, which can be used to directly filter the pre-whitened-transformed signals, $\bar{y}$.

More specifically, FIG. 6 is a block diagram illustrating a transformation block 502 representing the first transformation of section II-B, in which the LMA signals pass through a blocking matrix 504 and a matched filter 506, analogous to the first stage of a GSC. The XM signals are unaltered. The pre-whitening block 508 represents the pre-whitening operation. The output of the pre-whitening block 508 is signals in the pre-whitened-transformed domain, referred to as pre-whitened-transformed signals 509. Also shown in FIG. 6 is filter 532 (i.e., (55), above), which uses the pre-whitened-transformed signals 509 to generate a direct speech estimate, $\hat{z}_1$. As such, the direct speech estimate, $\hat{z}_1$, is a speech estimate using an estimated RTF vector including both the LMA and XM signals. Stated differently, the speech estimate, $\hat{z}_1$, is generated from a real-time estimate of the speech within the sound signals detected by both the LMA and XM, which takes into consideration microphone characteristics and may contain information such as the location and some reverberant characteristics of the target sound. The speech estimate $\hat{z}_1$ is an example of a direct estimate of at least one target sound in the received sound signals.

C. Integrated Beamformer

In the case of the integrated MVDR_a for the LMA signals in section III-C, two general approaches for designing the beamformer were considered: one that imposes a priori assumptions for the definition of the RTF vector in the MVDR filter, and another that involves an estimation of this RTF vector. For the MVDR_a,e, two analogous approaches can also be considered: one that imposes a priori assumptions for the definition of the RTF vector for the LMA signals, while estimating only the RTF vector for the XM signals, or an estimation of the entire RTF vector including both the LMA and XM signals. Although in both approaches there is an estimation, for the approach where only the RTF vector for the XM signals is estimated, the estimation is done in accordance with the a priori assumptions set by the LMA.
Therefore, just as in the integrated MVDR_a, two general approaches to designing the MVDR_a,e according to either a priori assumptions or full estimation can be considered. Consequently, an integrated MVDR_a,e beamformer can also be derived in order to integrate the two general approaches. The resulting cost function is:

$$\min_{w}\; w^H R_{nn} w + \alpha \left| w^H \tilde{h} - 1 \right|^2 + \beta \left| w^H \hat{h} - 1 \right|^2 \tag{56}$$

where $\tilde{h}$ is defined from (49) and $\hat{h}$ from (53). The solution is then:

$$w_{\mathrm{int}} = g_{pr}(\alpha,\beta)\,\tilde{w} + g_{est}(\alpha,\beta)\,\hat{w} \tag{57}$$

where $\tilde{w}$ and $\hat{w}$ are given in (48) and (54) respectively, and:

$$g_{pr}(\alpha,\beta) = \frac{\alpha k_{hh}\left[1 + \beta\left(k_{qq} - k_{hq}\right)\right]}{\alpha k_{hh} + \beta k_{qq} + \alpha\beta\left(k_{qq}k_{hh} - k_{hq}k_{qh}\right) + 1} \tag{58}$$

$$g_{est}(\alpha,\beta) = \frac{\beta k_{qq}\left[1 + \alpha\left(k_{hh} - k_{qh}\right)\right]}{\alpha k_{hh} + \beta k_{qq} + \alpha\beta\left(k_{qq}k_{hh} - k_{hq}k_{qh}\right) + 1} \tag{59}$$

with the constants:

$$k_{hh} = \tilde{h}^H R_{nn}^{-1} \tilde{h};\quad k_{qq} = \hat{h}^H R_{nn}^{-1} \hat{h};\quad k_{hq} = \tilde{h}^H R_{nn}^{-1} \hat{h};\quad k_{qh} = \hat{h}^H R_{nn}^{-1} \tilde{h} \tag{60}$$

As in section III-C, this integrated MVDR_a,e beamformer also reveals that the MVDR_a,e beamformer based on a priori assumptions from (48) and that which is based on estimated quantities from (54) can be combined according to the functions g_pr(α,β) and g_est(α,β) respectively. This integrated beamformer can also be expressed in the pre-whitened-transformed domain as follows:

$$w_{\mathrm{int}} = g_{pr}(\alpha,\beta)\, T L^{-H}\, \frac{l_{M_a} v_1^*}{\|\tilde{h}_a\|}\, J v_{\max} + g_{est}(\alpha,\beta)\, T L^{-H}\, \eta_q^*\, q_{\max} \tag{61}$$

and with the constants equivalently, but alternatively, defined as:

$$k_{hh} = \bar{\tilde{h}}^H \bar{\tilde{h}};\quad k_{qq} = \bar{\hat{h}}^H \bar{\hat{h}};\quad k_{hq} = \bar{\tilde{h}}^H \bar{\hat{h}};\quad k_{qh} = \bar{\hat{h}}^H \bar{\tilde{h}} \tag{62}$$

where $\bar{\tilde{h}}$ and $\bar{\hat{h}}$ are given in (88) from Appendix C and (96) from Appendix D respectively. The resulting speech estimate from this integrated beamformer is then given by:

$$\hat{z}_{\mathrm{int}} = g_{pr}^*(\alpha,\beta)\,\frac{l_{M_a} v_1}{\|\tilde{h}_a\|}\, v_{\max}^H \begin{bmatrix} \bar{y}_{a,M_a} \\ \bar{y}_e \end{bmatrix} + g_{est}^*(\alpha,\beta)\,\eta_q\, q_{\max}^H\, \bar{y} = g_{pr}^*(\alpha,\beta)\,\tilde{z}_1 + g_{est}^*(\alpha,\beta)\,\hat{z}_1 \tag{63}$$

The benefit of the pre-whitened-transformed domain is once again apparent. With such an integrated beamformer, the pre-whitened-transformed signals can be directly filtered accordingly, and then combined with the appropriate weightings as defined by the functions g_pr(α,β) and g_est(α,β), to yield the respective speech estimate. These functions g_pr(α,β) and g_est(α,β) can be tuned so as to emphasize the result from an MVDR beamformer that uses either an a priori assumed RTF vector or an estimated RTF vector. This results in a digital signal processing scheme as depicted in FIG. 7.

More specifically, FIG. 7 is a block diagram of an integrated MVDR_a,e beamformer 525 in accordance with embodiments presented herein. The integrated MVDR_a,e beamformer 525 comprises a plurality of processing blocks, which include transformation block 502 and pre-whitening block 508. As described above with reference to FIGS. 5 and 6, the transformation block 502 represents the first transformation of section II-B, in which the LMA signals pass through a blocking matrix 504 and a matched filter 506, while the XM signals are unaltered. The pre-whitening block 508 represents the pre-whitening operation. The output of the pre-whitening block 508 is signals in the pre-whitened-transformed domain, referred to as pre-whitened-transformed signals 509. Also shown in FIG. 7 are two processing branches 513(1) and 513(2) that each operate based on all or part of the pre-whitened-transformed signals 509.
The first processing branch 513(1) includes a filter 530 which, as described above with reference to FIG. 5, uses the pre-whitened-transformed signals 509 to generate an a priori speech estimate, $\tilde{z}_1$ (i.e., an estimate of the speech in the received sound signals, based on a priori assumptions for the definition of the RTF vector for the LMA signals, while estimating only the RTF vector for the XM signals). The speech estimate $\tilde{z}_1$ is an example of an a priori estimate of at least one target sound in the received sound signals. The first branch 513(1) also comprises a first weighting block 516. The first weighting block 516 is configured to weight the speech estimate, $\tilde{z}_1$, in accordance with the complex conjugate of the function g_pr(α,β) (i.e., (58) and (63), above). More generally, the first weighting block 516 is configured to weight the speech estimate, $\tilde{z}_1$, in accordance with a cost function controlled by a plurality of tuning parameters (e.g., (α,β)). The tuning parameters of the cost function (e.g., g_pr(α,β)) are set based on one or more confidence measures 518 generated for the speech estimate, $\tilde{z}_1$. The one or more confidence measures 518 represent an assessment or estimate of the accuracy/reliability of the speech estimate, $\tilde{z}_1$, and hence the accuracy of the partial a priori assumed RTF vector and partial estimated RTF vector used to generate the speech estimate (i.e., using a priori assumptions for the definition of the RTF vector for the LMA signals, while estimating only the RTF vector for the XM signals). The first weighting block 516 generates a weighted a priori speech estimate, shown in FIG. 7 by arrow 519.

The second branch 513(2) includes the filter 532 (i.e., (55), above), which uses the pre-whitened-transformed signals 509 to generate a direct speech estimate, $\hat{z}_1$ (i.e., a speech estimate generated using an estimated RTF vector including both the LMA and XM signals). The second branch 513(2) also comprises a second weighting block 520. The second weighting block 520 is configured to weight the direct speech estimate, $\hat{z}_1$, in accordance with the complex conjugate of the function g_est(α,β) (i.e., (59) and (63), above). More generally, the second weighting block 520 is configured to weight the direct speech estimate, $\hat{z}_1$, in accordance with a cost function controlled by a plurality of tuning parameters (e.g., (α,β)). The tuning parameters of the cost function (e.g., g_est(α,β)) are set based on one or more confidence measures 522 generated for the speech estimate, $\hat{z}_1$. The one or more confidence measures 522 represent an assessment or estimate of the accuracy/reliability of the speech estimate, $\hat{z}_1$, and hence the accuracy of the estimated RTF vector including both the LMA and XM signals. The second weighting block 520 generates a weighted direct speech estimate, shown in FIG. 7 by arrow 523.

FIG. 7 also illustrates processing block 524, which integrates/combines the weighted a priori speech estimate 519 and the weighted direct speech estimate 523. The combination of the weighted a priori speech estimate 519 and the weighted direct speech estimate 523 is referred to as an integrated speech estimate, $\hat{z}_{\mathrm{int}}$ (i.e., (63), above). The integrated speech estimate, $\hat{z}_{\mathrm{int}}$, may be used for subsequent processing in the device (e.g., auditory prosthesis).
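The branch-level weighting and mixing performed by blocks 516, 520 and 524 (and, identically in structure, by blocks 116, 120 and 124 of FIG. 4) can be sketched as follows. The mapping from confidence measures to the tuning parameters is an assumption made for illustration, as the text leaves its precise form open, and all names are hypothetical.

```python
import numpy as np

def mix_speech_estimates(z_pr, z_est, conf_pr, conf_est, g_pr_fn, g_est_fn):
    """Sketch of the two-branch weighting/mixing of FIG. 7 (eq. (63)).

    z_pr, z_est       : a priori and direct speech estimates (scalars/arrays)
    conf_pr, conf_est : confidence measures in [0, 1] for each estimate
    g_pr_fn, g_est_fn : callables implementing (58) and (59)
    """
    # Assumed mapping: higher confidence -> larger tuning parameter, so the
    # corresponding soft constraint is enforced more strongly.
    eps = 1e-6
    alpha = conf_pr / (1.0 - conf_pr + eps)
    beta = conf_est / (1.0 - conf_est + eps)

    # Conjugate weights applied to each branch, then combined (block 524).
    return (np.conj(g_pr_fn(alpha, beta)) * z_pr
            + np.conj(g_est_fn(alpha, beta)) * z_est)
```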
With this integrated beamformer for both the LMA and XMs, the decision process is now, as shown in the flowchart of FIG. 8, a two-stage process 840. More specifically, the process 840 is comprised of two main decisions, referred to as decisions 842 and 844. Referring first to decision 842, it can be determined whether or not the XM signals are reliable (i.e., decide whether or not to use the XM signals). If the XM signals are not reliable, the system uses MVDR with the LMA only (i.e., MVDR_a). If the XM signals are reliable, the system uses MVDR with the LMA and XMs (i.e., MVDR_a,e). At 844, after determining whether or not the XM signals should be used, a decision is made as to whether or not the estimated RTF vector is reliable. In other words, a decision can then be made on how much to weight the a priori assumed RTF vector and the estimated RTF vector. This decision is controlled by α and β in the same manner as for the integrated MVDR_a beamformer from section III-C. In the case where the XM is used, the a priori assumed RTF vector consists of an a priori assumed RTF vector for the LMA signals and an estimated RTF vector for the XM signals, while the estimated RTF vector is for both the LMA and XM signals.

In the second stage of the decision process, it should be noted that, in order to simplify the tuning, α and β could be made inversely proportional, and can even be tuned such that g_pr(α,β) and g_est(α,β) form a convex combination. Alternatively, if it is imposed that α→∞, then this preserves the a priori constraint and it is only β that remains to be tuned, which would be that of a contingency noise reduction strategy. In the case where both α→∞ and β→∞, this corresponds to two hard constraints imposed upon the noise minimization, and is then considered as a linearly constrained minimum variance (LCMV) beamformer. It is also noted, for the case of the MVDR_a where α→∞ and β=0, that the original MVDR_a with a priori constraints is achieved. Hence, the original beamformer has not been compromised and can be reverted to at any time with this particular tuning. The various noise reduction strategies encompassed by this integrated beamformer are summarized in FIG. 9. More specifically, FIG. 9 includes a table, referred to as Table I, which illustrates limiting cases of α and β for the various MVDR beamformers.
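A rough sketch of the two-stage decision of FIG. 8 follows. The reliability inputs and the numerical stand-in for the limiting case α→∞ are assumptions made for illustration only.

```python
def two_stage_decision(xm_reliable, rtf_confidence):
    """Sketch of the two-stage decision process 840 of FIG. 8.

    xm_reliable    : bool, outcome of decision 842 (use XM signals or not)
    rtf_confidence : float in [0, 1], reliability of the estimated RTF vector
    Returns (beamformer, alpha, beta).
    """
    # Stage 1 (decision 842): choose MVDR_a,e if the XM signals are reliable.
    beamformer = "MVDR_a,e" if xm_reliable else "MVDR_a"

    # Stage 2 (decision 844): weight a priori vs. estimated RTF vector.
    LARGE = 1e6  # numerical stand-in for alpha -> infinity
    if rtf_confidence == 0.0:
        alpha, beta = LARGE, 0.0   # revert to the original a priori MVDR
    else:
        # One possible inversely proportional tuning of alpha and beta.
        alpha, beta = 1.0 - rtf_confidence, rtf_confidence
    return beamformer, alpha, beta
```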
The integrated noise reduction techniques presented herein may be implemented in a number of devices/systems that include a local microphone array (LMA) to capture sound signals. These devices/systems include, for example, auditory prostheses (e.g., cochlear implants, acoustic hearing aids, auditory brainstem stimulators, bone conduction devices, middle ear auditory prostheses, direct acoustic stimulators, bimodal auditory prostheses, bilateral auditory prostheses, etc.), computing devices (e.g., mobile phones, tablet computers, etc.), conference phones, hands-free telephone systems, etc. FIGS. 10A, 10B, 11, and 12 are schematic block diagrams of example devices configured to implement the integrated noise reduction techniques presented herein. It is to be appreciated that these examples are illustrative and that, as noted, the integrated noise reduction techniques presented herein may be implemented in a number of different devices/systems.

Referring first to FIG. 10A, shown is a schematic diagram of an exemplary cochlear implant 1000 configured to implement aspects of the techniques presented herein, while FIG. 10B is a block diagram of the cochlear implant 1000. For ease of illustration, FIGS. 10A and 10B will be described together.

The cochlear implant 1000 comprises an external component 1002 and an internal/implantable component 1004. The external component 1002 includes a sound processing unit 1012 that is directly or indirectly attached to the body of the recipient, an external coil 1006 and, generally, a magnet (not shown in FIG. 10A) fixed relative to the external coil 1006. The sound processing unit 1012 comprises a local microphone array (LMA) 1013, comprised of microphones 1008(1) and 1008(2), configured to receive sound input signals. In this example, the sound processing unit 1012 may also include one or more auxiliary input devices 1009, such as one or more telecoils, audio ports, data ports, cable ports, etc., and a wireless transmitter/receiver (transceiver) 1011. The sound processing unit 1012 also includes, for example, at least one battery 1007, a radio-frequency (RF) transceiver 1021, and a processing block 1050.

The processing block 1050 comprises a number of elements, including an integrated noise reduction module 1025 and a sound processor 1033. The processing block 1050 may also include other elements that have, for ease of illustration, been omitted from FIG. 10B. Each of the integrated noise reduction module 1025 and the sound processor 1033 may be formed by one or more processors (e.g., one or more Digital Signal Processors (DSPs), one or more uC cores, etc.), firmware, software, etc. arranged to perform operations described herein. That is, the integrated noise reduction module 1025 and the sound processor 1033 may each be implemented as firmware elements, partially or fully implemented with digital logic gates in one or more application-specific integrated circuits (ASICs), partially or fully implemented in software, etc.

The integrated noise reduction module 1025 is configured to perform the integrated noise reduction techniques described elsewhere herein. For example, the integrated noise reduction module 1025 corresponds to the integrated MVDR_a beamformer 125 and the MVDR_a,e beamformer 525, described above. As such, in different embodiments, the integrated noise reduction module 1025 may include the processing blocks described above with reference to FIGS. 4 and 7, as well as other combinations of processing blocks configured to perform the integrated noise reduction techniques described elsewhere herein. As noted above, the integrated noise reduction techniques, and thus the integrated noise reduction module 1025, generate an integrated speech estimate from sound signals received via at least the LMA 1013. Shown in FIG. 10B is at least one optional external microphone (XM) 1017 which may also be in communication with the sound processing unit 1012. If present, the XM 1017 is configured to capture sound signals and provide XM signals to the sound processing unit 1012. These XM signals may also be used to generate the integrated speech estimate. The sound processor 1033 is configured to use the integrated speech estimate (generated from one or both of the LMA signals and the XM signals) to generate stimulation signals for delivery to the recipient.

Returning to the example embodiment of FIGS. 10A and 10B, the implantable component 1004 comprises an implant body (main module) 1014, a lead region 1016, and an intra-cochlear stimulating assembly 1018, all configured to be implanted under the skin/tissue (tissue) 1005 of the recipient. The implant body 1014 generally comprises a hermetically-sealed housing 1015 in which RF interface circuitry 1024 and a stimulator unit 1020 are disposed.
The implant body 1014 also includes an internal/implantable coil 1022 that is generally external to the housing 1015, but which is connected to the RF interface circuitry 1024 via a hermetic feedthrough (not shown in FIG. 10B). As noted, stimulating assembly 1018 is configured to be at least partially implanted in the recipient's cochlea 1037. Stimulating assembly 1018 includes a plurality of longitudinally spaced intra-cochlear electrical stimulating contacts (electrodes) 1026 that collectively form a contact or electrode array 1028 for delivery of electrical stimulation (current) to the recipient's cochlea. Stimulating assembly 1018 extends through an opening in the recipient's cochlea (e.g., cochleostomy, the round window, etc.) and has a proximal end connected to stimulator unit 1020 via lead region 1016 and a hermetic feedthrough (not shown in FIG. 10B). Lead region 1016 includes a plurality of conductors (wires) that electrically couple the electrodes 1026 to the stimulator unit 1020.

As noted, the cochlear implant 1000 includes the external coil 1006 and the implantable coil 1022. The coils 1006 and 1022 are typically wire antenna coils, each comprised of multiple turns of electrically insulated single-strand or multi-strand platinum or gold wire. Generally, a magnet is fixed relative to each of the external coil 1006 and the implantable coil 1022. The magnets fixed relative to the external coil 1006 and the implantable coil 1022 facilitate the operational alignment of the external coil with the implantable coil. This operational alignment of the coils 1006 and 1022 enables the external component 1002 to transmit data, as well as possibly power, to the implantable component 1004 via a closely-coupled wireless link formed between the external coil 1006 and the implantable coil 1022. In certain examples, the closely-coupled wireless link is a radio frequency (RF) link. However, various other types of energy transfer, such as infrared (IR), electromagnetic, capacitive and inductive transfer, may be used to transfer the power and/or data from an external component to an implantable component and, as such, FIG. 10B illustrates only one example arrangement.

As noted above, the integrated noise reduction module 1025 is configured to generate an integrated speech estimate, and the sound processor 1033 is configured to use the integrated speech estimate to generate stimulation signals for delivery to the recipient. More specifically, the sound processor 1033 (e.g., one or more processing elements implementing firmware, software, etc.) is configured to use the integrated speech estimate to generate stimulation control signals 1036 that represent electrical stimulation for delivery to the recipient. In the embodiment of FIG. 10B, the stimulation control signals 1036 are provided to the RF transceiver 1021, which transcutaneously transfers the stimulation control signals 1036 (e.g., in an encoded manner) to the implantable component 1004 via external coil 1006 and implantable coil 1022. That is, the stimulation control signals 1036 are received at the RF interface circuitry 1024 via implantable coil 1022 and provided to the stimulator unit 1020. The stimulator unit 1020 is configured to utilize the stimulation control signals 1036 to generate electrical stimulation signals (e.g., current signals) for delivery to the recipient's cochlea via one or more stimulating contacts 1026.
In this way, cochlear implant 1000 electrically stimulates the recipient's auditory nerve cells, bypassing absent or defective hair cells that normally transduce acoustic vibrations into neural activity, in a manner that causes the recipient to perceive one or more components of the input audio signals.

FIGS. 10A and 10B illustrate an arrangement in which the cochlear implant 1000 includes an external component. However, it is to be appreciated that embodiments of the present invention may be implemented in cochlear implants having alternative arrangements. For example, the techniques presented herein could also be implemented in a totally implantable or mostly implantable auditory prosthesis where components shown in sound processing unit 1012, such as processing block 1050, could instead be implanted in the recipient.

FIG. 11 is a functional block diagram of one example arrangement for a bone conduction device 1100 in accordance with embodiments presented herein. Bone conduction device 1100 is configured to be positioned at (e.g., behind) a recipient's ear. The bone conduction device 1100 comprises a microphone array 1113, an electronics module 1170, a transducer 1171, a user interface 1172, and a power source 1173. The local microphone array (LMA) 1113 comprises microphones 1108(1) and 1108(2) that are configured to convert received sound signals 1116 into LMA signals. Although not shown in FIG. 11, bone conduction device 1100 may also comprise other sound inputs, such as ports, telecoils, etc. The LMA signals are provided to electronics module 1170 for further processing.

In general, electronics module 1170 is configured to convert the LMA signals into one or more transducer drive signals 1180 that activate transducer 1171. More specifically, electronics module 1170 includes, among other elements, a processing block 1150 and transducer drive components 1176. The processing block 1150 comprises a number of elements, including an integrated noise reduction module 1125 and sound processor 1133. Each of the integrated noise reduction module 1125 and the sound processor 1133 may be formed by one or more processors (e.g., one or more Digital Signal Processors (DSPs), one or more uC cores, etc.), firmware, software, etc. arranged to perform operations described herein. That is, the integrated noise reduction module 1125 and the sound processor 1133 may each be implemented as firmware elements, partially or fully implemented with digital logic gates in one or more application-specific integrated circuits (ASICs), partially or fully implemented in software, etc.

The integrated noise reduction module 1125 is configured to perform the integrated noise reduction techniques described elsewhere herein. For example, the integrated noise reduction module 1125 corresponds to the integrated MVDR_a beamformer 125 and the MVDR_a,e beamformer 525, described above. As such, in different embodiments, the integrated noise reduction module 1125 may include the processing blocks described above with reference to FIGS. 4 and 7, as well as other combinations of processing blocks configured to perform the integrated noise reduction techniques described elsewhere herein. Although not shown in FIG. 11, at least one optional external microphone (XM) may be in communication with the bone conduction device 1100. If present, the XM is configured to capture sound signals and provide XM signals to the bone conduction device 1100 for processing by the integrated noise reduction module 1125 (i.e., the XM signals may also be used to generate the integrated speech estimate).
The sound processor 1133 is configured to process the integrated speech estimate (generated from one or both of the LMA signals and the XM signals) for use by the transducer drive components 1176. The transducer drive components 1176 generate transducer drive signal(s) 1180 which are provided to the transducer 1171. The transducer 1171 illustrates an example of a stimulation unit that receives the transducer drive signal(s) 1180 and generates vibrations for delivery to the skull of the recipient via a transcutaneous or percutaneous anchor system (not shown) that is coupled to bone conduction device 1100. Delivery of the vibration causes motion of the cochlea fluid in the recipient's contralateral functional ear, thereby activating the hair cells in the functional ear.

FIG. 11 also illustrates the power source 1173 that provides electrical power to one or more components of bone conduction device 1100. Power source 1173 may comprise, for example, one or more batteries. For ease of illustration, power source 1173 has been shown connected only to user interface 1172 and electronics module 1170. However, it should be appreciated that power source 1173 may be used to supply power to any electrically powered circuits/components of bone conduction device 1100.

User interface 1172 allows the recipient to interact with bone conduction device 1100. For example, user interface 1172 may allow the recipient to adjust the volume, alter the speech processing strategies, power on/off the device, etc. Although not shown in FIG. 11, bone conduction device 1100 may further include an external interface that may be used to connect electronics module 1170 to an external device, such as a fitting system.

FIG. 12 is a block diagram of an arrangement of a mobile computing device 1200, such as a smartphone, configured to implement the integrated noise reduction techniques presented herein. It is to be appreciated that FIG. 12 is merely illustrative. Mobile computing device 1200 comprises an antenna 1236 and a telecommunications interface 1238 that are configured for communication on a telecommunications network. The telecommunications network over which the antenna 1236 and the telecommunications interface 1238 communicate may be, for example, a Global System for Mobile Communications (GSM) network, a code division multiple access (CDMA) network, a time division multiple access (TDMA) network, or another kind of network.

The mobile computing device 1200 also includes a wireless local area network interface 1240 and a short-range wireless interface/transceiver 1242 (e.g., an infrared (IR) or Bluetooth® transceiver). Bluetooth® is a registered trademark owned by the Bluetooth® SIG. The wireless local area network interface 1240 allows the mobile computing device 1200 to connect to the Internet, while the short-range wireless transceiver 1242 enables the mobile computing device 1200 to wirelessly communicate (i.e., directly receive and transmit data to/from another device via a wireless connection), such as over a 2.4 Gigahertz (GHz) link. It is to be appreciated that any other interfaces now known or later developed including, but not limited to, Institute of Electrical and Electronics Engineers (IEEE) 802.11, IEEE 802.16 (WiMAX), fixed line, Long Term Evolution (LTE), etc., may also or alternatively form part of the mobile computing device 1200.
In the example of FIG. 12, mobile computing device 1200 also comprises an audio port 1244, a local microphone array (LMA) 1213, a speaker 1248, a display screen 1258, a subscriber identity module or subscriber identification module (SIM) card 1252, a battery 1254, a user interface 1256, one or more processors 1250, and a memory 1260. The LMA 1213 includes microphones 1208(1) and 1208(2). Stored in memory 1260 are integrated noise reduction logic 1225 and sound processing logic 1233.

The display screen 1258 is an output device, such as a liquid crystal display (LCD), for presentation of visual information to the user. The user interface 1256 may take many different forms and may include, for example, a keypad, keyboard, mouse, touchscreen, display screen, etc. Memory 1260 may comprise any one or more of read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. The one or more processors 1250 are, for example, microprocessors or microcontrollers that execute instructions for the integrated noise reduction logic 1225 and sound processing logic 1233. When executed by the one or more processors 1250, the integrated noise reduction logic 1225 is configured to perform the integrated noise reduction techniques described elsewhere herein. For example, the integrated noise reduction logic 1225 corresponds to the integrated MVDR_a beamformer 125 and the MVDR_a,e beamformer 525, described above. As such, in different embodiments, the integrated noise reduction logic 1225 may include software forming the processing blocks described above with reference to FIGS. 4 and 7, as well as other combinations of processing blocks configured to perform the integrated noise reduction techniques described elsewhere herein to generate an integrated speech estimate. When executed by the one or more processors 1250, the sound processing logic 1233 is configured to perform sound processing operations using the integrated speech estimate.

FIG. 13 is a flowchart of a method 1390 performed/executed by a device comprising at least a local microphone array (LMA), in accordance with embodiments presented herein. Method 1390 begins at 1392 where sound signals are received with at least the local microphone array of the device. The received sound signals comprise/include at least one target sound. At 1394, an a priori estimate of the at least one target sound in the received sound signals is generated, wherein the a priori estimate is based at least on a predetermined location of a source of the at least one target sound. At 1396, a direct estimate of the at least one target sound in the received sound signals is generated, wherein the direct estimate is based at least on a real-time estimate of a location of a source of the at least one target sound. At 1398, a weighted combination of the a priori estimate and the direct estimate is generated, where the weighted combination is an integrated estimate of the target sound. Subsequent sound processing operations may be performed in the device using the integrated estimate of the target sound.

In certain embodiments, the a priori estimate of the at least one target sound is generated using only an a priori relative transfer function (RTF) vector generated from the received sound signals. In certain embodiments, the direct estimate of the at least one target sound is generated using only an estimated relative transfer function (RTF) vector for the received sound signals.
In certain embodiments, the weighted combination of the a priori estimate and the direct estimate is generated by weighting the a priori estimate in accordance with a first cost function controlled by a first set of tuning parameters to generate a weighted a priori estimate, and weighting the direct estimate in accordance with a second cost function controlled by a second set of tuning parameters to generate a weighted direct estimate. The weighted direct estimate and the weighted a priori estimate are then mixed with one another. The first set of tuning parameters may be set based on one or more confidence measures associated with the a priori estimate of the at least one target sound, wherein the one or more confidence measures represent an estimate of a reliability of the a priori estimate. The second set of tuning parameters may be set based on one or more confidence measures associated with the direct estimate of the at least one target sound, wherein the one or more confidence measures represent an estimate of a reliability of the direct estimate.

As detailed above, presented herein are integrated noise reduction techniques, sometimes referred to as an integrated beamformer (e.g., an integrated MVDR_a beamformer or an integrated MVDR_a,e beamformer). In general, the integrated noise reduction techniques combine the use of an a priori (i.e., predetermined, assumed, or pre-defined) location of a target sound source with a real-time estimated location of the sound source.

It is to be appreciated that the above described embodiments are not mutually exclusive and that the various embodiments can be combined in various manners and arrangements. The invention described and claimed herein is not to be limited in scope by the specific preferred embodiments herein disclosed, since these embodiments are intended as illustrations, and not limitations, of several aspects of the invention. Any equivalent embodiments are intended to be within the scope of this invention. Indeed, various modifications of the invention in addition to those shown and described herein will become apparent to those skilled in the art from the foregoing description. Such modifications are also intended to fall within the scope of the appended claims.

APPENDIX

I. Appendix A—MVDR_a with a Priori Assumed RTF Vector

A pre-whitened-transformed version of the a priori assumed RTF vector can be considered where:

$$\bar{\tilde{h}}_a = L_a^{-1} T_a^H \tilde{h}_a = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ \frac{\|\tilde{h}_a\|}{l_{M_a}} \end{bmatrix} \tag{64}$$

where $l_{M_a}$ is the bottom-right element in $L_a$. Using the definition from (16), i.e., $R_{n_a n_a}^{-1} = (T_a L_a L_a^H T_a^H)^{-1} = T_a L_a^{-H} L_a^{-1} T_a^H$, the MVDR_a filter of (25) can then be re-written as:

$$\tilde{w}_a = T_a L_a^{-H} \bar{\tilde{w}}_a \tag{65}$$

where

$$\bar{\tilde{w}}_a = \frac{\bar{\tilde{h}}_a}{\bar{\tilde{h}}_a^H \bar{\tilde{h}}_a} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ \bar{\tilde{w}}_{a,M_a} \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ \frac{l_{M_a}}{\|\tilde{h}_a\|} \end{bmatrix} \tag{66}$$

Substitution of (65) into (26) yields the speech estimate as:

$$\tilde{z}_{a,1} = \bar{\tilde{w}}_a^H \underbrace{L_a^{-1} T_a^H y_a}_{\bar{y}_a} = \frac{l_{M_a}}{\|\tilde{h}_a\|}\, \bar{y}_{a,M_a} \tag{67}$$

II. Appendix B—MVDR_a with Estimated RTF Vector

As opposed to using the raw signal correlation matrices, the estimation problem of (28) can be equivalently formulated first in the transformed domain, since the Frobenius norm is invariant under a unitary transformation; therefore:

$$\min_{\hat{R}_{x_a,r1}} \left\| T_a^H \left( \left( R_{y_a y_a} - R_{n_a n_a} \right) - \hat{R}_{x_a,r1} \right) T_a \right\|_F^2 \tag{68}$$

Furthermore, it is argued that spatial pre-whitening should also be included in the optimisation problem. Consequently, the estimation problem can be re-framed in the pre-whitened-transformed domain as follows:

$$\min_{\hat{R}_{x_a,r1}} \left\| \left( \bar{R}_{y_a y_a} - \bar{R}_{n_a n_a} \right) - L_a^{-1} T_a^H \hat{R}_{x_a,r1} T_a L_a^{-H} \right\|_F^2 \tag{69}$$

where $\bar{R}_{y_a y_a} = L_a^{-1} T_a^H R_{y_a y_a} T_a L_a^{-H}$, and $\bar{R}_{n_a n_a} = L_a^{-1} T_a^H R_{n_a n_a} T_a L_a^{-H} = I_{M_a}$.
The solution then follows from the GEVD on the matrix pencil $\{\bar{R}_{y_a y_a}, \bar{R}_{n_a n_a}\}$, and hence reduces to an EVD of $\bar{R}_{y_a y_a}$:

$$\bar{R}_{y_a y_a} = P \Lambda P^H \tag{70}$$

where P is a unitary matrix of eigenvectors and Λ is a diagonal matrix with the associated eigenvalues in descending order. The estimated RTF vector is then defined using the principal (first in this case) eigenvector, $p_{\max}$:

$$\hat{h}_a = \frac{T_a L_a\, p_{\max}}{\eta_p} \tag{71}$$

where the scaling $\eta_p = e_{a1}^T T_a L_a p_{\max}$ and the M_a×1 vector $e_{a1} = [1\;0 \cdots 0]^T$. This estimated RTF vector can now be used as an alternative to $h_a$ for the MVDR_a defined in (25), and is given by:

$$\hat{w}_a = \frac{R_{n_a n_a}^{-1} \hat{h}_a}{\hat{h}_a^H R_{n_a n_a}^{-1} \hat{h}_a} \tag{72}$$

This filter based on estimated quantities can also be reformulated in the pre-whitened-transformed domain. Starting with the definition of the pre-whitened-transformed version of $\hat{h}_a$:

$$\bar{\hat{h}}_a = L_a^{-1} T_a^H \hat{h}_a = \frac{p_{\max}}{\eta_p} \tag{73}$$

Hence (72) becomes:

$$\hat{w}_a = T_a L_a^{-H} \bar{\hat{w}}_a \tag{74}$$

where

$$\bar{\hat{w}}_a = \frac{\bar{\hat{h}}_a}{\bar{\hat{h}}_a^H \bar{\hat{h}}_a} = \eta_p^*\, p_{\max} \tag{75}$$

Substitution of (74) into (32) yields the speech estimate as:

$$\hat{z}_{a,1} = \bar{\hat{w}}_a^H \underbrace{L_a^{-1} T_a^H y_a}_{\bar{y}_a} = \eta_p\, p_{\max}^H \bar{y}_a \tag{76}$$

III. Appendix C—MVDR_a,e with Partial a Priori Assumed RTF Vector and Partial Estimated RTF Vector

Following the procedure as in (68), the transformation is firstly applied, also including the penalty term:

$$\min_{\hat{\Phi}_{x,r1}, \hat{h}_e} \left\| T^H \left( \left( R_{yy} - R_{nn} \right) - \hat{\Phi}_{x,r1} \begin{bmatrix} \tilde{h}_a \\ \hat{h}_e \end{bmatrix} \begin{bmatrix} \tilde{h}_a^H & \hat{h}_e^H \end{bmatrix} \right) T \right\|_F^2 \tag{77}$$

after which the pre-whitening operation can also be included in the optimisation problem:

$$\min_{\hat{\Phi}_{x,r1}, \hat{h}_e} \left\| \left( \bar{R}_{yy} - \bar{R}_{nn} \right) - L^{-1} T^H \left( \hat{\Phi}_{x,r1} \begin{bmatrix} \tilde{h}_a \\ \hat{h}_e \end{bmatrix} \begin{bmatrix} \tilde{h}_a^H & \hat{h}_e^H \end{bmatrix} \right) T L^{-H} \right\|_F^2 \tag{78}$$

where $\bar{R}_{yy} = L^{-1} T^H R_{yy} T L^{-H}$ and $\bar{R}_{nn} = L^{-1} T^H R_{nn} T L^{-H} = I_{(M_a+M_e)}$. Expansion of (78) then results in:

$$\min_{\hat{\Phi}_{x,r1}, \hat{h}_e} \left\| \begin{bmatrix} \bar{K}_A & \bar{K}_B \\ \bar{K}_C & \bar{K}_{x+} \end{bmatrix} - \begin{bmatrix} 0 & 0 \\ 0 & \bar{K}_{x,r1} \end{bmatrix} \right\|_F^2 \tag{79}$$

where the block dimensions are such that $\bar{K}_A$ is an (M_a−1)×(M_a−1) matrix, $\bar{K}_B$ an (M_a−1)×(M_e+1) matrix, $\bar{K}_C$ an (M_e+1)×(M_a−1) matrix, and $\bar{K}_{x,r1}$ and $\bar{K}_{x+}$ are (M_e+1)×(M_e+1) matrices realised as:

$$\bar{K}_{x,r1} = J^T \bar{\tilde{R}}_{x,r1} J \tag{80}$$

$$\bar{K}_{x+} = J^T \bar{R}_{yy} J - \underbrace{J^T \bar{R}_{nn} J}_{I_{(M_e+1)}} \tag{81}$$

where $\bar{\tilde{R}}_{x,r1} = L^{-1} T^H \tilde{R}_{x,r1} T L^{-H}$ and $J = [0_{(M_e+1)\times(M_a-1)} \,|\, I_{(M_e+1)}]^T$ is a selection matrix. It is then evident that $\bar{K}_{x+}$ can essentially be constructed from the last (M_e+1) elements of the pre-whitened-transformed signals, namely that in relation to the last element of the LMA, $\bar{y}_{a,M_a}$, and those in relation to the XM signals, $\bar{y}_e$. Hence the first term of $\bar{K}_{x+}$ is equivalently:

$$J^T \bar{R}_{yy} J = \mathbb{E}\left\{ \begin{bmatrix} \bar{y}_{a,M_a} \\ \bar{y}_e \end{bmatrix} \begin{bmatrix} \bar{y}_{a,M_a}^H & \bar{y}_e^H \end{bmatrix} \right\} \tag{82}$$

and similarly for the second term of $\bar{K}_{x+}$. It follows that (79) then reduces to the following (M_e+1)×(M_e+1) matrix approximation problem:

$$\min_{\hat{\Phi}_{x,r1}, \hat{h}_e} \left\| \bar{K}_{x+} - \bar{K}_{x,r1} \right\|_F^2 \tag{83}$$

The solution then follows from the GEVD on the matrix pencil $\{J^T \bar{R}_{yy} J, J^T \bar{R}_{nn} J\}$ and hence reduces to an EVD of $J^T \bar{R}_{yy} J$:

$$J^T \bar{R}_{yy} J = V \Gamma V^H \tag{84}$$

where V is an (M_e+1)×(M_e+1) unitary matrix of eigenvectors and Γ is a diagonal matrix with the associated eigenvalues in descending order. The estimated RTF vector for the XM signals is then defined from the corresponding principal (first in this case) eigenvector, $v_{\max}$:

$$\hat{h}_e = \frac{\|\tilde{h}_a\|}{l_{M_a} v_1}\, J_e^T\, T L J\, v_{\max} \tag{85}$$

where the selection matrix $J_e = [0_{(M_e\times M_a)} \,|\, I_{M_e}]^T$. Finally, this estimate is then used to compute the corresponding MVDR_a,e filter with an a priori assumed RTF vector and a partially estimated RTF vector, along with the penalty term, as:

$$\tilde{w} = \frac{R_{nn}^{-1} \tilde{h}}{\tilde{h}^H R_{nn}^{-1} \tilde{h}} \tag{86}$$

where $\tilde{h}$ as defined in (44) can be equivalently represented as:

$$\tilde{h} = \frac{\|\tilde{h}_a\|}{l_{M_a} v_1}\, T L J\, v_{\max} \tag{87}$$

This filter can also be realised in the pre-whitened-transformed domain.
The pre-whitened-transformed version of $\tilde{h}$ can firstly be considered, where:

$$\bar{\tilde{h}} = L^{-1} T^H \tilde{h} = \frac{\|\tilde{h}_a\|}{l_{M_a} v_1}\, J v_{\max} = \frac{\|\tilde{h}_a\|}{l_{M_a} v_1} \begin{bmatrix} 0 \\ \vdots \\ 0 \\ v_1 \\ v_e \end{bmatrix} \tag{88}$$

Therefore, (86) can be re-written as:

$$\tilde{w} = T L^{-H} \bar{\tilde{w}} \tag{89}$$

where:

$$\bar{\tilde{w}} = \frac{\bar{\tilde{h}}}{\bar{\tilde{h}}^H \bar{\tilde{h}}} = \frac{l_{M_a} v_1^*}{\|\tilde{h}_a\|} \begin{bmatrix} 0 \\ \vdots \\ 0 \\ v_1 \\ v_e \end{bmatrix} \tag{90}$$

Therefore, the corresponding speech estimate will be:

$$\tilde{z}_1 = \bar{\tilde{w}}^H \underbrace{L^{-1} T^H y}_{\bar{y}} = \frac{l_{M_a} v_1}{\|\tilde{h}_a\|}\, v_{\max}^H \begin{bmatrix} \bar{y}_{a,M_a} \\ \bar{y}_e \end{bmatrix} \tag{91}$$

IV. Appendix D—MVDR_a,e with Estimated RTF Vector

Once again, it is convenient to re-frame the problem in the pre-whitened-transformed domain, similarly to (78):

$$\min_{\hat{R}_{x,r1}} \left\| \left( \bar{R}_{yy} - \bar{R}_{nn} \right) - L^{-1} T^H \left( \hat{\Phi}_{x,r1} \begin{bmatrix} \hat{q}_a \\ \hat{q}_e \end{bmatrix} \begin{bmatrix} \hat{q}_a^H & \hat{q}_e^H \end{bmatrix} \right) T L^{-H} \right\|_F^2 \tag{92}$$

In this case, however, the problem cannot be reduced to a lower order, as the entire RTF vector is being estimated. Hence the solution follows from an EVD on $\bar{R}_{yy}$:

$$\bar{R}_{yy} = Q \Sigma Q^H \tag{93}$$

where Q is an (M_a+M_e)×(M_a+M_e) unitary matrix of eigenvectors and Σ is a diagonal matrix with the associated eigenvalues in descending order. The estimated RTF vector is then given by the principal (first in this case) eigenvector, $q_{\max}$:

$$\hat{h} = \begin{bmatrix} \hat{q}_a \\ \hat{q}_e \end{bmatrix} = \frac{T L\, q_{\max}}{\eta_q} \tag{94}$$

where $\eta_q = e_{x1}^T T L q_{\max}$ and $e_{x1} = [1\;0 \cdots 0 \,|\, 0 \cdots 0]^T$. The estimated RTF vector can therefore be used as an alternative to $\tilde{h}$ for the MVDR_a,e:

$$\hat{w} = \frac{R_{nn}^{-1} \hat{h}}{\hat{h}^H R_{nn}^{-1} \hat{h}} \tag{95}$$

This filter based on estimated quantities can also be reformulated in the pre-whitened-transformed domain. Starting with the definition for the pre-whitened-transformed version of this estimated RTF vector:

$$\bar{\hat{h}} = L^{-1} T^H \hat{h} = \frac{q_{\max}}{\eta_q} \tag{96}$$

Hence (95) becomes:

$$\hat{w} = T L^{-H} \bar{\hat{w}} \tag{97}$$

where

$$\bar{\hat{w}} = \frac{\bar{\hat{h}}}{\bar{\hat{h}}^H \bar{\hat{h}}} = \eta_q^*\, q_{\max} \tag{98}$$

The corresponding speech estimate using the estimated RTF vector is therefore:

$$\hat{z}_1 = \bar{\hat{w}}^H \underbrace{L^{-1} T^H y}_{\bar{y}} = \eta_q\, q_{\max}^H \bar{y} \tag{99}$$
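The appendix derivations reduce to a few lines of linear algebra. The following sketch mirrors (92)–(99) for the fully estimated RTF vector; the names are hypothetical, and R_yy, R_nn and the transformation T are assumed given, with R_nn factorized by Cholesky as L L^H. The same pattern applied to the (M_e+1)×(M_e+1) tail correlation matrix yields the reduced-order estimate of Appendix C.

```python
import numpy as np

def estimate_rtf_and_filter(R_yy, R_nn, T):
    """Sketch of (92)-(99): estimated RTF vector via an EVD in the
    pre-whitened-transformed domain, plus the matching filter."""
    L = np.linalg.cholesky(R_nn)        # R_nn = L @ L^H
    Li = np.linalg.inv(L)

    # Pre-whitened-transformed speech-plus-noise correlation matrix (93)
    R_yy_bar = Li @ T.conj().T @ R_yy @ T @ Li.conj().T

    # Principal eigenvector (eigh returns ascending eigenvalues -> take last)
    _, Q = np.linalg.eigh(R_yy_bar)
    q_max = Q[:, -1]

    # Scaling eta_q = e_x1^T T L q_max, with e_x1 selecting the first entry
    eta_q = (T @ L @ q_max)[0]

    h_hat = (T @ L @ q_max) / eta_q     # estimated RTF vector (94)
    w_bar = np.conj(eta_q) * q_max      # pre-whitened-transformed filter (98)
    return h_hat, w_bar

# Speech estimate (99): z_hat = w_bar.conj() @ (Li @ T.conj().T @ y)
```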
11943591
DETAILED DESCRIPTION Hereinafter, various embodiments of the present invention are shown and described. Particular embodiments are exemplified herein and are used to describe and convey to a person skilled in the art particular structural, configurational and/or functional, operational aspects of the invention. The present invention may be altered/modified and embodied in various other forms, and thus, is not limited to any of the embodiments set forth. The present invention should be interpreted to include all alterations/modifications, substitutes, and equivalents that are within the spirit and technical scope of the present invention. Terms such as "first," "second," "third," etc. herein may be used to describe various elements and/or parts, but the elements and/or parts should not be limited by these terms. These terms are used only to distinguish one element and/or part from another. For instance, a first element may be termed a second element and vice versa, without departing from the spirit and scope of the present invention. When one element is described as being "joined" or "connected" etc. to another element, the one element may be interpreted as "joined" or "connected" to that another element directly or indirectly via a third element, unless the language clearly specifies otherwise. Likewise, such language as "between," "immediately between," "neighboring," "directly neighboring" etc. should be interpreted as such. Terminology used herein is for the purpose of describing particular exemplary embodiments only and is not intended to limit the present invention. As used herein, singular forms (e.g., "a," "an") include the plural forms as well, unless the context clearly indicates otherwise. The language "comprises," "comprising," "including," "having," etc. is intended to indicate the presence of described features, numbers, steps, operations, elements, and/or components, and should not be interpreted as precluding the presence or addition of one or more of other features, numbers, steps, operations, elements, and/or components, and/or groupings thereof. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as those commonly understood by a person with ordinary skill in the art to which this invention pertains. Terms, such as those defined in commonly used dictionaries, should be interpreted as having meaning that is consistent with their meaning in the context of the relevant art and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Hereafter, various embodiments of the present invention are described in more detail with reference to the accompanying drawings. The same reference numerals are used for the same elements in the drawings, and duplicate descriptions are omitted for the same elements or features.

FIG. 1 shows a block diagram of a system for automatic detection of music listening reactions, according to an embodiment. Referencing FIG. 1, the system for automatic detection of music listening reactions may comprise a sensor (100), mobile device (200), and information manager (unit) (300). For example, the sensor (100) may be a wearable sensor attached to or worn at a user's ears. The wearable sensor (100) may be a wireless earphone type (e.g., earbud). The wearable sensor (100) may comprise an inertial sensor and a microphone; the inertial sensor may be an accelerometer, gyroscope, or magnetometer, with 3 axes.
The wearable sensor (100) may output first sensing data (S1) of the inertial sensor and second sensing data (S2) of the microphone to the mobile device (200). The first sensing data (S1) may include an inertial signal. The second sensing data (S2) may include a sound signal. The mobile device (200) may receive the first sensing data (S1) and the second sensing data (S2) from the wearable sensor (100). The mobile device (200) may receive music information (MI) from the information manager (300). The mobile device (200) may determine a vocal reaction based on the first sensing data (S1), the second sensing data (S2), and the music information (MI). The mobile device (200) may determine a motion reaction based on the first sensing data (S1) and the music information (MI). For example, the vocal reaction may include a singing (along) event, a humming event, and a whistling event. For example, the motion reaction may include a (head) nodding event.

The information manager (300) may generate the music information (MI) by analyzing the music (MUSIC), and may output the music information (MI) to the mobile device (200). For example, the information manager (300) may be located in a server. Alternatively, the information manager (300) may be a unit included in the mobile device (200). The information manager (300) may perform, in real-time or offline, the process of analyzing the music (MUSIC) and generating the music information (MI). The music information (MI) may include the pitch (the highness or lowness of a note or tone) and the beat of a sound.

The system for automatic detection of music listening reactions may further comprise an application (400). The application (400) may receive reaction information (RI) from the mobile device (200). The reaction information (RI) may include the vocal reaction and the motion reaction. The application (400) may output a request (RQ) for the reaction information (RI) to the mobile device (200). Also, when the application (400) is a music play/playback application, music play information (PI) may be output to the mobile device (200).

For example, the application (400) may be an automatic music rating application. A conventional music player may determine a user's preference for a song by relying on the user's manual input and simple statistics such as the number of plays. The application (400) may predict and apply a music rating based on the reaction information (RI) automatically generated in the mobile device (200).

For example, the application (400) may be a reaction-based music recommendation application. Streaming service providers may value the creation of user-defined and personalized playlists to attract more consumers and make it easier for consumers to find music that suits their tastes. The application (400) may perform a music recommendation using how the user participates in the song or music being listened to, for example, to which song he/she frequently responds, which part of the song he/she responds to, what kind of reaction he/she shows, etc. That is, the application (400) may predict a user's taste or preference in detail based on the reaction information (RI), perform the music recommendation based on the reaction information (RI), and create a playlist.

For example, the application (400) may be a remote communication enhancement application for a musician. Many musicians are conducting live online concerts due to COVID-19. Fans can still enjoy the music of their favorite musicians through the online platform.
However, it is difficult to show a reaction to musicians as at an offline concert. At offline concerts, the fans often sing to the music or move their bodies with a light stick, and musicians can see their fans doing so. However, this type of interaction is very limited at online concerts, and the fans can only express their feelings by sending chat texts or emoticons. The application (400) may be used to enrich interactions between musicians and remote fans. The application (400) may detect and collect reactions of the fans watching live performances online. The application (400) may provide the collected reactions to the musician. For example, the application (400) may synthesize the vocal reaction and the motion reaction into a concert video and deliver it to the musician.

FIG. 2 shows a block diagram of the mobile device (200) of FIG. 1, according to an embodiment. FIG. 3 shows a block diagram of a filter (unit) (220) of FIG. 2, according to an embodiment. FIG. 4 shows a block diagram of a classifier (unit) (230) of FIG. 2, according to an embodiment. FIG. 5 shows a block diagram of a post-processing unit (240) of FIG. 2, according to an embodiment. And FIG. 6 shows a block diagram of the information manager (unit) of FIG. 1, according to an embodiment.

Referencing FIG. 2 through FIG. 6, the mobile device (200) may comprise the filter unit (220), classifier (unit) (230), and post-processing unit (240). The mobile device (200) may further comprise a network interface (210), data receiver or reception unit (250), API (application programming interface) (260), and scheduler (unit) (270).

The network interface (210) may be an interface unit for communication with the wearable sensor (100). The network interface (210) may receive the first sensing data (S1) and the second sensing data (S2) from the wearable sensor (100). The network interface (210) may output the first sensing data (S1) and the second sensing data (S2) to the filter unit (220).

The filter unit (220) may determine a non-reaction event from the first sensing data (S1) and the second sensing data (S2). The filter unit (220) may bypass the classifier unit (230) for, or with respect to, the non-reaction event. The filter unit (220) may provide an event other than the non-reaction event to the classifier unit (230). Such an event may be a potential vocal reaction event or a potential motion reaction event. The filter unit (220) may filter out non-reaction events early, at the start (i.e., pre-filter them), and accordingly, processing cost may be reduced. The filter unit (220) may comprise an inertial signal filter (222) and a sound signal filter (224). The inertial signal filter (222) may filter the inertial signal of the first sensing data (S1). The sound signal filter (224) may filter the sound signal of the second sensing data (S2).

The classifier unit (230) may classify the vocal reaction and the motion reaction based on the potential vocal reaction event and the potential motion reaction event received through the filter unit (220). The classifier unit (230) may comprise a vocal reaction classifier (232), a motion reaction classifier (234), and a music information cache (236). The vocal reaction classifier (232) and the motion reaction classifier (234) may classify the vocal reaction and the motion reaction by using deep learning. The vocal reaction classifier (232) may determine the vocal reaction by comparing the pitch of the music stored in the music information cache (236) with the pitch of the second sensing data (S2) transmitted from the wearable sensor (100).
For example, the vocal reaction may include the singing along event, the humming event, and the whistling event. The motion reaction classifier (234) may determine the motion reaction by comparing the beat of the music stored in the music information cache (236) with the signal of the first sensing data (S1) transmitted from the wearable sensor (100). For example, the motion reaction may include the nodding event. The post-processing unit (240) may correct or revise the vocal reaction of the vocal reaction classifier (232) and the motion reaction of the motion reaction classifier (234). For example, when the singing along event is determined to have occurred for a long time, then the non-reaction event is determined for a brief time, and then the singing along event is determined to have occurred for a long time again, the post-processing unit (240) may determine that the singing along event has continuously occurred without a pause by using a smoothing algorithm. The post-processing unit (240) may comprise a smoothing unit (242) and an analyzing unit (244). The smoothing unit (242) may generate a final vocal reaction result by smoothing the vocal reaction result of the vocal reaction classifier (232). The smoothing unit (242) may generate a final motion reaction result by smoothing the motion reaction result of the motion reaction classifier (234). The smoothing unit (242) may utilize a Hidden Markov Model (HMM) technique. Alternatively, the smoothing unit (242) may utilize a majority voting technique. The vocal reaction information and the motion reaction information may include a reaction type, a start time, an end time, etc. The analyzing unit (244) may generate additional information about the vocal reaction and the motion reaction. For example, the analyzing unit (244) may aggregate events, such as which part of a song the user sang the most. For example, the analyzing unit (244) may score the vocal reaction according to a degree of similarity by comparing the user's audio signal with the song played. The information manager (300) may comprise a music information analyzer unit (320) and a database (340). The music information analyzer unit (320) may analyze pitch information and beat information of the music. The database (340) may store the pitch information and the beat information. As disclosed, analyzing the music may be time-consuming, and thus the analysis may be performed prior to playback of the music rather than during playback. For example, the music information analyzer unit (320) may analyze the pitch information and the beat information for the user's playlist and store them in the database (340). The data reception unit (250) may receive the music information (MI) including the pitch information and the beat information from the database (340) of the information manager (300). The data reception unit (250) may output the music information (MI) to the music information cache (236) of the classifier unit (230). The API (260) may be an interface of the mobile device (200) for communicating with the application (400). The API (260) may output the reaction information (RI) to the application (400). The API (260) may receive the request (RQ) for the reaction information (RI) and the music play information (PI) from the application (400). The scheduler (270) may control overall operation of the mobile device (200) based on the request (RQ).
For example, when the request (RQ) demands only the vocal reaction, operations of the mobile device (200) related to the motion reaction may be inactivated. For example, when the request (RQ) demands only the motion reaction, operations of the mobile device (200) related to the vocal reaction may be inactivated. For example, when no request (RQ) is received, operations related to the vocal reaction and the motion reaction of the mobile device (200) may be inactivated. FIG.7shows a conceptual diagram showing a sensing pipeline of vocal reaction detection of the system for automatic detection of music listening reactions ofFIG.1, according to an embodiment.FIG.8shows a graph showing a cumulative distribution function of a sound level for a vocal reaction, according to an embodiment.FIG.9shows a graph showing a cumulative distribution function of a motion level for a vocal reaction, according to an embodiment.FIG.10shows a log-Mel spectrogram pattern used in (Step2-1) ofFIG.7, according to an embodiment.FIG.11shows a table showing a process of mapping a label of (Step2-1) ofFIG.7to a label of (Step2-2) ofFIG.7, according to an embodiment. ReferencingFIG.1toFIG.15, the system may search for or acquire music information (MI) of a song and maintain the music information (MI) in the database (340) of the information manager unit (300). When a listener starts listening to a song, the system may activate sound and motion detection at the wearable sensor (100) and send the sensing data (S1, S2) to the mobile device (200). According to an embodiment, the present system operates as follows.
(1) As a first step, a data segment which is reliably classifiable as the non-reaction event may be filtered out by analyzing the features of the sensing data (S1, S2).
(2) For an uncertain data segment other than the non-reaction event, the system may identify a reaction event using the classifier unit (230).
(3) The present system may improve the classification performance of the classifier unit (230) by using the music information (MI) retrieved for the song being played. The system may correct a classified label according to a similarity calculated between the sensing data (S1, S2) and the music information (MI).
(4) Based on the detected reaction event, the system may perform post-processing to determine, for example, the rate of the nod and the similarity between the user's singing and the melody of the song played.
In addition, the system may apply smoothing in the post-processing of the vocal reactions. A vocal reaction may not continue without a pause within a session and may appear sporadically, sometimes alternating with other reactions. For example, the listener may pause for a short time to breathe while singing along. Also, the listener may often alternate between different types of vocal reactions. For example, when the listener does not know the lyrics while singing along, he or she may momentarily hum or whistle, and when the listener again knows the lyrics, he or she may sing along to the song again. To address such problems, presently disclosed is a pipeline for efficient and reliable detection of the vocal reactions. The present system may apply an early-stage filtering operation to reduce cost ((Step1) ofFIG.7).
In the early-stage filtering operation, a data segment which may be reliably classified as the non-reaction event is identified, and the relatively burdensome classification operation is not performed on the corresponding segment. The identification logic is developed considering the following two criteria. First, since the distance between the microphone of the wearable sensor (100) and the listener's mouth is short, the listener's vocal reaction may generate a sound event of a specific volume or higher. Accordingly, no sound, or a sound below a certain volume (sound threshold), such as background noise, may be labeled as the vocal non-reaction event. FIG.8shows the cumulative distribution function of the 1-second sound level (in decibels) for the reaction and non-reaction events. As shown inFIG.8, the sound level is relatively small for non-reaction, but relatively large for the singing along, humming, and whistling events. For example, when the sound threshold is set to 30 decibels and sound data of less than 30 decibels is processed as the non-reaction event, approximately 60% of the non-reaction events may be kept from passing through to the classifier unit (230), while more than approximately 95% of the reaction events may still pass through to the classifier unit (230). Second, the listener's vocal reaction may also generate a specific level of movement at the wearable sensor (100). When the listener's mouth movement causes a vocal reaction, an impulse response may be generated in the inertial signal of the wearable sensor (100) by activation of the cheekbone muscle located between the mouth and the ear. Conversely, when no movement is detected by the wearable sensor (100), there is little possibility that the audio signal belongs to the reaction event. FIG.9shows a graph showing a cumulative distribution function of a motion level for the vocal reaction, according to an embodiment. ReferencingFIG.9, the listener's large movement may be known to be related to non-reaction. This is because, while the listener makes big movements, such as walking and jumping, the listener rarely makes a vocal reaction. Accordingly, when the listener's movement is less than a first motion threshold (i.e., there is little movement), the segment may be labeled as the vocal non-reaction event. When the listener's movement is greater than a second motion threshold (i.e., the movement is big), the segment may likewise be labeled as the vocal non-reaction event. Consequently, the filter unit (220) may determine the vocal non-reaction event by using both the first sensing data (S1), which includes the inertial signal of the wearable sensor (100), and the second sensing data (S2), which includes the sound signal. Based on the above results, a two-stage filtering component may be designed. First, a segment in which the motion level, defined as the standard deviation of the accelerometer magnitude, is outside a preset interval (e.g., between a first motion threshold of 0.006 and a second motion threshold of 0.1) may be determined as the vocal non-reaction event. Then, for an unfiltered segment, if the volume (in decibels) of the corresponding audio signal is smaller than the sound threshold, the data segment may be filtered as the vocal non-reaction event.
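As an illustration, a minimal sketch of this two-stage filter (in Python, not from the patent) might look as follows; the threshold values mirror the examples above, and the decibel reference for the sound level is an assumption, since the text does not specify one.

```python
import numpy as np

# Illustrative values from the text; real deployments would tune per device.
MOTION_LOW, MOTION_HIGH = 0.006, 0.1   # band for std of 1-s accelerometer magnitude
SOUND_THRESHOLD_DB = 30.0              # sound threshold from the example above

def motion_level(accel_xyz: np.ndarray) -> float:
    """Motion level of a 1-second segment: std of the accelerometer magnitude."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)   # accel_xyz shape: (samples, 3)
    return float(np.std(magnitude))

def sound_level_db(audio: np.ndarray, eps: float = 1e-12) -> float:
    """RMS level of a 1-second audio segment in decibels.
    The decibel reference is an assumption; the text does not specify one."""
    rms = np.sqrt(np.mean(audio.astype(np.float64) ** 2))
    return 20.0 * np.log10(rms + eps)

def is_vocal_non_reaction(accel_xyz: np.ndarray, audio: np.ndarray) -> bool:
    """Two-stage early filter: the cheaper motion check runs first, then sound.
    Segments labeled non-reaction bypass the classifier unit entirely."""
    level = motion_level(accel_xyz)
    if level < MOTION_LOW or level > MOTION_HIGH:   # too still, or too much movement
        return True
    return sound_level_db(audio) < SOUND_THRESHOLD_DB
```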
Since motion-based filtering (the inertial signal filter222) is computationally lighter than sound-based filtering (the sound signal filter224), the motion-based filtering can be done first. The filtered segments may be labeled as non-reaction and transmitted to the post-processing unit240without performing classification. For example, in the sound event classification, three types of events, namely singing/humming, whistling, and non-reaction, may be determined as target events. Here, singing along and humming are often observed alternately, and as shown inFIG.10, their spectrogram patterns may be very similar and difficult to distinguish in practice, so they can be combined into one class. A first part of the sound event classification ((Step2-1) inFIG.7) may include two tasks: feature extraction and sound classification. Feature Extraction: For example, audio data of the wearable sensor (100) may be resampled at 16 kHz and divided into segments having a length of 1 second. Then, each segment may be transformed into a spectrogram using a Short Time Fourier Transform with a periodic Hann window. The window size and window hop may be set to 25 ms and 10 ms, respectively. The log-Mel spectrogram may then be calculated by mapping the spectrogram to 64 mel bins in a range of 125 to 7,500 Hz and applying the logarithm. Finally, the feature may be framed into a matrix of 96*64, where 96 means 96 frames of 10 ms each, and 64 means the 64 mel bands of each frame. Classification and Label Mapping: The classes of the sound classification in (Step2-1) may include hum, music, chant, song, whistle, speech, bird sound, dog sound, silence, and the like. Since the classes of (Step2-1) may not exactly match the final classes of the present system, the labels may be mapped through the mapping table shown inFIG.11. As shown inFIG.11, hum, music, chant, and song among the sound classification classes of (Step2-1) may be mapped to the singing along and humming events of the final classes, and the whistle class of the sound classification of (Step2-1) may be mapped to the whistling event of the final classes. Since it is ambiguous which event of the final classes matches the speech class of the sound classification in (Step2-1), further investigation may be performed; the speech may be labeled "ambiguous." The sound classification classes in (Step2-1) other than hum, music, chant, song, whistle and speech may be mapped to the non-reaction event of the final classes. The non-reaction events may be transferred directly to the post-processing unit (240) without the classifying operation of the classifier unit (230). As a next step, the music information (MI) of the song being played may be used to resolve the ambiguous label. More specifically, the label "ambiguous" may be modified to the singing along/humming event or the non-reaction event based on a similarity between the audio signal received from the wearable sensor (100) and the song being played. Calculation of Similarity: To measure a similarity between the vocal signal and the song, a melody, which is a linear sequence of musical tones, may be considered. A key intuition of the present system is that the vocal reactions follow the order of the notes of the song being played, whereas the non-reaction voice signals do not. (Step2-2) ofFIG.7shows a detailed procedure for such a process. In order to extract the order of the notes, the pitch information may be extracted first.
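Before turning to the pitch-based procedure, the (Step2-1) feature extraction described above can be sketched as follows; this is a minimal illustration assuming the librosa library, and the exact framing used in the source may differ.

```python
import numpy as np
import librosa

SR = 16_000                # resampling rate from the text
WIN = int(0.025 * SR)      # 25 ms window -> 400 samples
HOP = int(0.010 * SR)      # 10 ms hop   -> 160 samples

def log_mel_feature(audio: np.ndarray, orig_sr: int) -> np.ndarray:
    """Return a 96x64 log-Mel feature matrix for a roughly 1-second segment."""
    y = librosa.resample(audio, orig_sr=orig_sr, target_sr=SR)
    mel = librosa.feature.melspectrogram(
        y=y, sr=SR, n_fft=WIN, hop_length=HOP, win_length=WIN,
        window="hann",     # librosa applies a periodic Hann window by default
        n_mels=64, fmin=125, fmax=7500, power=2.0)
    log_mel = np.log(mel + 1e-6)   # logarithm with a small stabilizer
    frames = log_mel.T             # shape: (n_frames, 64 mel bands)
    return frames[:96]             # keep 96 frames of 10 ms each
```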
For example, the pitch information may be frequency information at intervals of 0.1 second. The pitch information may be extracted based on a deep convolutional neural network which operates directly on a time domain waveform input. Then, the pitch information (frequency information) may be converted into note information having an octave number. Since the listener often sings along, hums, or whistles an octave higher or lower than the music being played, the note information having the octave number may be converted again to a 12-note chromatic scale without an octave number. For the audio file of the song being played, vocal extraction may be performed prior to pitch extraction in order to focus on the dominant melody line of the music. This is because the vocal reaction mainly follows the voice (singing voice) rather than the musical instruments. Accordingly, the vocal source may be separated from the song being played. Finally, the similarity between the two note sequences (one from the user's vocal signal and the other from the song being played) may be calculated to reach a final decision. In such case, the 12 notes (C, C#, D, D#, E, F, F#, G, G#, A, A#, B) may be mapped to 12 integer values (0 to 11).FIG.12toFIG.15show examples of note patterns. In the singing along event, humming event, and whistling event ofFIG.12,FIG.13andFIG.14, the note pattern of the song played and the note pattern of the user's vocal signal have a high correlation, whereas for the non-reaction event shown inFIG.15, there is no correlation between the note pattern of the song played and the note pattern of the user's vocal signal. Dynamic time warping (DTW) may be considered as the similarity measurement function, because the speeds of the two note patterns may be different. When the similarity is less than a threshold, the segment is finally labeled as non-reaction; otherwise, it is labeled as singing along/humming. In the post-processing (Step3), the classification result may be smoothed by using the Hidden Markov Model (HMM). A core idea is to train the HMM model on the classification output sequence in the training dataset and use the trained HMM model for smoothing the output. Smoothing may be performed by defining an observation sequence as a sequence of classification outputs and estimating an optimal sequence of hidden states that may be mapped to a smoothed sequence of the reaction events. To efficiently calculate a maximum probability, the Viterbi algorithm may be applied, and a 6-second window (that is, the last 6 classification outputs) may be used as the input sequence. When the application (400) prefers real-time output for an interactive service, the smoothing operation may be omitted. FIG.16shows a graph showing sensing data for a head nodding, according to an embodiment.FIG.17shows a graph showing sensing data and music information data for a motion non-reaction event, according to an embodiment.FIG.18shows a conceptual diagram showing a sensing pipeline of motion reaction detection of a system for automatic detection of music listening reactions, according to an embodiment.FIG.19shows a graph showing a cumulative distribution function of a motion level for a motion reaction, according to an embodiment. AndFIG.20shows a graph showing a method of determining a window size for a motion reaction based on music genre, according to an embodiment. ReferencingFIG.1throughFIG.20, the motion reaction has a tendency to appear sporadically while a song is playing.
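As an aside before the motion pipeline, the post-processing smoothing can be illustrated with a small sketch; for brevity it uses the majority-voting alternative mentioned earlier rather than a trained HMM with Viterbi decoding, which would replace the vote below.

```python
from collections import Counter, deque

def smooth_labels(labels, window=6):
    """Smooth a per-second label sequence by majority vote over a sliding
    window; the HMM/Viterbi smoothing described above would replace the vote."""
    recent = deque(maxlen=window)
    smoothed = []
    for label in labels:
        recent.append(label)
        majority, _ = Counter(recent).most_common(1)[0]
        smoothed.append(majority)
    return smoothed

# A brief non-reaction gap inside a long singing-along run is smoothed over.
print(smooth_labels(["sing"] * 5 + ["non"] + ["sing"] * 5))
```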
Because the motion reaction appears only sporadically, if the sensing pipeline of motion reaction detection is continuously executed, unnecessary computation cost may occur, which may interfere with user experience. Also, a motion reaction to music may be repeated at a similar cycle. For example, nodding may continue in a regular pattern for a certain period of time, generating a signal waveform having a certain level of periodicity as shown inFIG.16. On the other hand, the non-reaction may not exhibit a periodic peak pattern, as shown inFIG.17. However, there may be cases in which repetitive movements are shown even when they are not a reaction to music. For example, some listeners may habitually shake their legs, and these movements are independent of the music but may cause repetitive fluctuations in the inertial signal. Similarly, even when the listener is walking or running while listening to music, the listener's movement is independent of the music but may cause repetitive fluctuations in the inertial signal. In addition, the motion reaction may show various patterns for each person. Movements and behaviors of the motion reaction may vary greatly from person to person. For example, some people may move their heads up and down while others may move their heads left and right while listening to music. Also, the amount of movement may vary from person to person. The motion reaction may also show a different pattern according to the genre of music. For example, the nodding movement tends to be more regular and more frequent for faster tempo music. To solve such problems, the pipeline for the motion reaction detection may utilize characteristics of the observed motion reaction (e.g., continuity of the reaction, periodicity, correlation with the beat of a song, and differences according to the genre of music). As shown inFIG.18, the pipeline of the motion reaction detection may perform two main operations: Filtering (Step1) and Classification (Step2). First, a simple movement of the listener may be filtered out in order to avoid unnecessary processing in the classifying operation. A label of input data which is unlikely to correspond to a motion reaction, such as no motion or too large a motion, may initially be determined as a non-reaction. Subsequently, feature extraction may be performed to capture features of the motion reaction. Also, the pipeline may include two classification models, one of which may be selectively used according to the genre of music. The Filtering (Step1) may be performed based on thresholds in order to reduce cost. For example, the non-reaction data may be classified based on the motion level of a 1-second segment of the accelerometer signal. If there is no listener movement, the accelerometer signal is close to zero (0). Conversely, if a walking or running motion occurs without a motion reaction to music, the accelerometer signal may fluctuate very significantly. Accordingly, in the Filtering (Step1), a third motion threshold for filtering out segments with a lower motion level and a fourth motion threshold for filtering out segments with a higher motion level may be used. For example, when the motion level of the listener in the inertial signal is less than the third motion threshold, the corresponding data segment is determined as a motion non-reaction event; and when the motion level of the listener in the inertial signal is greater than the fourth motion threshold, the corresponding data segment may be determined as a motion non-reaction event.
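A minimal sketch of this (Step1) motion filtering, assuming a 70 Hz accelerometer-magnitude stream and using the FIG.19 threshold values cited just below:

```python
import numpy as np

# Thresholds per the FIG.19 discussion below; treated here as configurable.
THIRD_THRESHOLD, FOURTH_THRESHOLD = 0.008, 0.075   # in g

def filter_motion_segments(accel_mag: np.ndarray, rate_hz: int = 70):
    """Label each 1-second accelerometer-magnitude segment as 'non-reaction'
    (filtered out) or 'candidate' (forwarded to the classifier)."""
    labels = []
    for start in range(0, len(accel_mag) - rate_hz + 1, rate_hz):
        level = float(np.std(accel_mag[start:start + rate_hz]))
        if level < THIRD_THRESHOLD or level > FOURTH_THRESHOLD:
            labels.append("non-reaction")   # too still, or walking/running
        else:
            labels.append("candidate")
    return labels
```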
For effective filtering, it is important to set thresholds which filter out as many non-reactions as possible without missing many motion reactions.FIG.19shows a graph showing the cumulative distribution function of the standard deviation of the 1-second accelerometer magnitude used as the measure of the motion level. InFIG.19, the third motion threshold, which is the lower threshold, may be 0.008 g, and the fourth motion threshold, which is the higher threshold, may be 0.075 g. For data not filtered in the Filtering (Step1), the motion reaction Classification (Step2) may be performed. The Classification (Step2) may include two steps. The first step may be to look up the music information (MI) analyzed offline by the information manager unit (300). The beat of the music may be used to derive beat-related features, and the genre of the music may be used to select an appropriate classification model. To extract the beat of the music, an audio signal processing library may be used; the audio signal processing library may output a list of beat times for the music played. In the Classification (Step2), pre-processing, feature extraction, and classification may be performed. First, the accelerometer and gyroscope data sampled at 70 Hz may be segmented using a sliding window. Autocorrelation may be calculated using such data segments to capture the periodicity of the signal. Then, peaks and valleys may be detected in the raw IMU signal and the computed autocorrelation signal. Here, a peak is an upper vertex of the signal in the graphs ofFIG.16andFIG.17, and a valley is a lower vertex of the signal in the graphs ofFIG.16andFIG.17. Second, features of three groups may be calculated using the preprocessed data. The features of the three groups may include features which encode the periodicity of a motion, features which capture the magnitude of the motion, and features which are related to the beat. The features which encode the periodicity of motion are as follows.
Number of Autocorrelation Peaks: Repetitive motion may show more peaks.
Maximum Value of Autocorrelation Peaks: Higher autocorrelation peak values may indicate higher periodicity.
Time Interval between First and Last Autocorrelation Peaks: Periodic signals may have a longer time interval.
Number of Zero Crossings: Repetitive motion may show more zero crossings. Here, a zero crossing denotes the waveform passing through 0 in the graphs ofFIG.16andFIG.17.
Mean of Time Intervals between Consecutive Peaks: Periodic signals may have a smaller mean.
Standard Deviation of Time Intervals between Consecutive Peaks: Periodic signals may have a smaller standard deviation.
The features for capturing the magnitude of the movement may include the maximum/minimum/RMS/mean value of the magnitude, the mean and standard deviation of the peak values, and the difference between the maximum peak value and the minimum peak value. The beat-related features are as follows.
Ratio of Number of Peaks to Number of Beats in a Window: The ratio of the number of peaks to the number of beats in a window may tend to be constant in motion reaction as compared to non-reaction.
Ratio of Median of Time Interval between Consecutive Peaks to Median of Time Interval between Consecutive Beats: Periodic motion reactions may tend to have peak time intervals similar to those of beats.
Ratio of Standard Deviation of Time Interval between Consecutive Peaks to Mean of Time Interval between Consecutive Beats: Motion reactions may have more consistent intervals, and accordingly, the standard deviation may be smaller.
In the Classification (Step2), an appropriate window size may be determined to balance sufficiently capturing the periodicity of the signal against keeping the duration short enough not to miss a reaction. When the window size is too short, the periodicity of the signal may not be sufficiently captured, whereas if the window size is too long, a window of data may include motion reaction data and non-reaction data at the same time, thereby reducing performance. Motion reaction to a slow-tempo song tends to be slow, whereas motion reaction to a fast-tempo song tends to be fast. Accordingly, the window size may be determined according to the genre of the music or the tempo of the music. As shown inFIG.20, the F1 score generally increases as the window size increases. However, the F1 score may reach a saturation point, and the saturation point may vary depending on the tempo of the music. For a fast-tempo song, the saturation point may be 5 seconds, and for a slow-tempo song, the saturation point may be 9 seconds. Accordingly, the system may include two classification models having different window sizes according to the genre (or tempo) of the music. After the feature extraction, a classification model matching the genre of the music played may be executed. The extracted features may be provided to the selected model to obtain a final classification output. For example, in one embodiment, a random forest (RF) may be used as the classifier unit (230). Compared with SVM, Logistic Regression and LightGBM models, the RF model may show similar or better performance. The system may combine the output of the classifier unit (230) and the non-reaction filtering result to provide a final inference output. Here, the reaction information (RI) generally provided by the system is whether a motion reaction event has occurred. Also, the system may provide information on the number of motion reactions to the music played, the parts where the listener moves the most, and the songs to which the listener frequently shows a motion reaction. According to an embodiment of the present disclosure, the system may automatically detect music listening reactions (e.g., the user's singing along, humming, whistling, head nodding, etc.) using the wearable sensor (100) worn by the user on the ears. By utilizing musical structures such as pitch and beat, a listener's reaction may be accurately detected. Also, since non-reaction events are filtered at an early stage by the filter unit (220) of the mobile device (200), unnecessary processing cost may be reduced. Exemplary embodiments have been described in detail with reference to the accompanying drawings, for illustrative purposes and to solve technical problems. Although the description above contains much specificity, this should not be construed as limiting the scope of the exemplary embodiments.
The exemplary embodiments may be modified and implemented in various forms and should not be interpreted as thus limited. A person skilled in the art will understand that various modifications and alterations may be made without departing from the spirit and scope of the description and that such modifications and alterations are within the scope of the accompanying claims.
REFERENCE NUMERALS:
100: Wearable Sensor
200: Mobile Device
210: Network Interface
220: Filter Unit
222: Inertial Signal Filter
224: Sound Signal Filter
230: Classifier Unit
232: Vocal Reaction Classifier
234: Motion Reaction Classifier
236: Music Information Cache
240: Post-Processing Unit
242: Smoothing Unit
244: Analyzing Unit
250: Data Reception Unit
260: API
270: Scheduler Unit
300: Information Manager Unit
400: Application
11943592
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS The making and using of the presently preferred embodiments are discussed in detail below. It should be appreciated, however, that the present invention provides many applicable inventive concepts that can be embodied in a wide variety of specific contexts. The specific embodiments discussed are merely illustrative of specific ways to make and use the invention, and do not limit the scope of the invention. In the following detailed description, reference is made to the accompanying drawings, which form a part hereof and in which are shown by way of illustration specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present invention. For example, features illustrated or described for one embodiment can be used on or in conjunction with other embodiments to yield yet a further embodiment. It is intended that the present invention include such modifications and variations. The examples are described using specific language, which should not be construed as limiting the scope of the appended claims. The drawings are not scaled and are for illustrative purposes only. For clarity, the same or similar elements have been designated by corresponding references in the different drawings if not stated otherwise. Single-ended MEMS microphone signals can be converted to differential signals to ensure good power supply rejection (PSR) performance in the digital microphone. According to embodiments, PGA gain is increased in the "frontend" of the digital microphone (as close to the input pin as possible) in order to maximize SNR. This is accomplished using an extremely low noise PGA amplifier that is described in detail below, according to embodiments. In operation, the low noise PGA amplifier combines a capacitive-feedback source-follower-based PGA with an inverting stage which drives the "ground" of a charge pump filtering capacitance. In this way, the noise of the inverter stage becomes common mode and is cancelled out in differential processing. In embodiments, the inverter stage can be a low-power inverter stage. FIG.1shows a simplified schematic of an exemplary single-ended source-follower-based amplifier100. Amplifier100includes a P-channel transistor P1used in a source-follower configuration. The gate of transistor P1is coupled to input node104, the source of transistor P1is coupled to the output node106, and the drain of transistor P1is coupled to ground or another DC voltage source. A bias current source IBIAS is coupled to output node106for providing a bias current to transistor P1. The gate of transistor P1is coupled to diodes D1and D2for clamping the input voltage of amplifier100. A MEMS device108, which is represented by a capacitor CMEMSand an analog input voltage VIN, is coupled between node102and input node104. A feedback capacitor CFBis coupled between node102and output node106, and a charge pump capacitor CCHPis coupled between node102and ground or another DC voltage source. Capacitor CCHPis used to filter out the charge-pump biasing voltage ripple of the unfiltered VMIC biasing voltage. Capacitors CFBand CCHPdetermine the gain of amplifier100, which is given by the equation VOUT=(1+CFB/CCHP)*VIN.
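To make the gain relation concrete, a small worked example with hypothetical capacitor values (the patent does not specify CFB or CCHP):

```python
# Hypothetical component values for illustration only; the patent does not
# specify CFB or CCHP. The gain follows VOUT = (1 + CFB/CCHP) * VIN.
C_FB = 9e-12    # 9 pF feedback capacitor (assumed)
C_CHP = 1e-12   # 1 pF charge pump filtering capacitor (assumed)

gain = 1 + C_FB / C_CHP
print(f"gain = {gain:.1f} V/V")   # -> 10.0 V/V for these assumed values
```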
While amplifier100shown inFIG.1provides good performance for amplifying the analog signal of the MEMS device108, the single-ended output can be converted to a differential output having improved noise performance in a differential PGA described in further detail below, according to embodiments. The differential PGA can be used in a digital microphone product having an improved SNR. FIG.2is a schematic diagram of an improved differential PGA200having low noise performance, according to an embodiment. In PGA200, the charge pump capacitor CCHPis boosted with the inverted output of an inverting amplifier in order to improve noise performance, as will be explained in further detail below. Subsequent stages can be AC or DC coupled, can be programmable, and can be fully or pseudo-differential. While a single transistor source follower stage is shown inFIG.2, other source-follower topologies can be used for the source follower. PGA200includes a P-channel transistor PSF in a source-follower configuration having a source coupled to a first output node204for providing the Vout_p output voltage, a gate coupled to input node202, which is biased to the VASIC DC bias voltage, and a drain coupled to ground or to another DC voltage source. The source of transistor PSFis biased by the current provided by current source205. Capacitors CFBand CCHPdetermine the gain of PGA200, which is given by the equation Vout_p=(1+CFB/CCHP)*VIN. Capacitor220(CFB) can be made adjustable, in an embodiment. Capacitor220is coupled between nodes204and203, and charge pump capacitor218(CCHP) is coupled between node203and node206. A MEMS device216, shown as a MEMS capacitor CMEMSand an input voltage VIN, is coupled between nodes202and203. A first biasing resistor RB1223is coupled between node203and a source of bias voltage VB1225. The VB1bias voltage represents an unfiltered charge pump voltage VMIC, and the voltage at node203represents the filtered charge pump voltage VMIC. A second biasing resistor RB2is coupled between node202and a source of bias voltage VB2224. The VB2bias voltage represents an unfiltered charge pump voltage VASIC, and the voltage at node202represents the filtered charge pump voltage VASIC. The Vout_p output voltage is amplified by a unity gain inverting operational amplifier208having a gain determined by the ratio of resistors210and212. Since resistors210and212have the same value R1, the overall gain is negative one. Operational amplifier208has a negative input coupled to resistors210and212, and a positive input coupled to a reference voltage source VREF. The output of operational amplifier208provides the Vout_n output voltage at node206. Nodes204and206form a first differential output of the PGA comprising the voltages Vout_p and Vout_n. PGA200includes a differential output circuit comprising a differential coupling circuit, a differential biasing circuit, and a differential PGA stage236. The differential coupling circuit comprises a CACcoupling capacitor226coupled between node204and the negative input of PGA stage236, and a CACcoupling capacitor228coupled between node206and the positive input of PGA stage236. The differential biasing circuit comprises a third bias resistor RBIAS232coupled between the negative input of PGA stage236and a source of VBIASbias voltage230, and a fourth bias resistor RBIAS234coupled between the positive input of PGA stage236and the source of VBIASbias voltage230. PGA stage236provides a second buffered differential output at node238and node240.
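As a sketch of the intended behavior, the inverting stage can be modeled ideally (assuming an ideal operational amplifier, exactly equal resistors, and a hypothetical VREF, none of which are values from the patent); Vout_n then mirrors Vout_p about the reference, so the differential output doubles the signal swing relative to VREF:

```python
# Idealized model only: assumes an ideal op-amp, exactly equal resistors R1,
# and a hypothetical reference voltage; component values are not from the patent.
V_REF = 0.9   # volts (assumed)

def vout_n(vout_p: float) -> float:
    """Ideal unity-gain inverting stage: Vout_n mirrors Vout_p about VREF."""
    return 2 * V_REF - vout_p

vp = 1.0
print(vp - vout_n(vp))   # differential swing 2*(vp - V_REF) = 0.2 V here
```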
While a fully differential output circuit is shown inFIG.2, other output circuits can be provided as previously discussed, including other fully differential output circuits, pseudo-differential output circuits, and even single-ended output circuits in embodiments. FIG.3is a schematic diagram of an equivalent circuit300corresponding to the differential PGA ofFIG.2, including a unity gain noninverting amplifier316having an output coupled to VP node304, and an inverting amplifier318coupled between VP node304and VN node306. The biasing resistors, MEMS device, and differential output circuit are omitted inFIG.3for the sake of clarity. Equivalent circuit300also includes a first impedance Z1310that corresponds to the charge pump capacitor CCHPpreviously discussed, and a second impedance Z2308that corresponds to the feedback capacitor CFBpreviously discussed. First impedance Z1310is coupled between VN node306and node312, and second impedance Z2308is coupled between node312and node314. A current "i" flows through first impedance Z1310and second impedance Z2308. In equivalent circuit300, voltage source VN1302represents the input voltage and voltage source VN2320represents the noise voltage associated with inverting amplifier318. The gain of the equivalent circuit is thus given by the equation (VP−VN)=VN1*(1+Z2/Z1). From an inspection ofFIG.3it can be noted that the noise voltage VN2is subtracted out from the differential output voltage (VP−VN), as the noise voltage is a common mode voltage component, such that the overall noise component is lowered with respect to the amplifier100shown inFIG.1. FIG.4is a schematic diagram400of a charge pump402and a floating filter including resistor223and capacitor218used in the differential PGA200ofFIG.2, according to an embodiment. Charge pump402can comprise a Dickson charge pump circuit comprising a plurality of charge pump stages402A,402B, and402C. While three such stages are shown, any number of stages can be used in embodiments. The charge pump stages can comprise either diode or transistor charge pump stages. Other types of charge pumps can also be used. An unfiltered charge pump output voltage is provided at node225. A filtered charge pump voltage is provided at node203by the action of the low pass floating filter comprising biasing resistor RB1223and the charge pump capacitor CCHP218, which can also be referred to herein as a "charge pump output capacitor" since it is coupled to the output of charge pump402through biasing resistor RB1223. FIGS.5A and5Bare schematic diagrams of alternative biasing arrangements for a MEMS device used with the differential PGA200shown inFIG.2, according to embodiments. While the schematics shown inFIGS.5A and5Bgenerally correspond to the differential PGA200shown inFIG.2, many of the components of differential PGA200are omitted for the sake of clarity. A first biasing arrangement500is shown inFIG.5A, wherein a first voltage VMICis applied to a first biasing resistor RB1506at node502, and a second voltage VASICis applied to a second biasing resistor RB2510at node504. The first voltage VMICand the second voltage VASICcan be supplied by charge pumps of the type shown inFIG.4. Biasing resistor506is coupled to node508, which corresponds to node203inFIG.2, and biasing resistor510is coupled to node514, which corresponds to node202inFIG.2. A MEMS device512, represented by capacitor CMEMSand voltage source VM, is coupled between nodes508and514.
A second biasing arrangement550is shown inFIG.5B, wherein the first biasing resistor RB1is replaced by two biasing resistors RB1A516and RB1B520in series between nodes502and508. A capacitor CMIC is coupled to the common node518between biasing resistors516and520. The second biasing arrangement shown inFIG.5Bprovides additional filtering, and can also be used with respect to the second biasing resistor RB2510, in some embodiments. FIG.6is a schematic diagram of an alternative source follower stage600including a transconductance stage612, according to an embodiment, that can be used in the differential PGA200shown inFIG.2. Source follower stage600includes P-channel transistor P1having a source coupled to node604for providing an output voltage VOUT, a gate coupled to node602for receiving an input voltage VIN, and a drain coupled to node606. A current source608is also coupled to node606for providing a bias current. A transconductance stage612has a negative input coupled to node606, a positive input coupled to a reference voltage node610, and an output coupled to node604. An equivalent model614is shown inFIG.6, including a current source gm*Vgs in parallel with a resistor R0. The gain of source follower stage600is thus given by the equation VOUT/VIN=gm*R0/(gm*R0+1), which is approximately unity gain. FIG.7is a flowchart of a low noise amplification method700, according to an embodiment, including, in a converter including a converter input, a first converter output, a second converter output, and an internal node, wherein the first converter output and the second converter output comprise a differential output: sensing an input voltage at the converter input at step702; generating a noninverting first output voltage at the first converter output in response to the sensed input voltage at step704; sensing the noninverting first output voltage at the first converter output at step706; generating an inverting second output voltage at the second converter output in response to the noninverting first output voltage at step708; coupling a charge pump output capacitor between the second converter output and the internal node at step710; and coupling a feedback capacitor between the first converter output and the internal node at step712. FIG.8is a block diagram of a digital microphone product800including a low noise differential PGA as described above, according to an embodiment. Digital microphone product800includes MEMS device802and ASIC804. MEMS device802can comprise a capacitive MEMS device that generates an analog voltage in response to received sound waves. ASIC804can comprise the low noise differential PGA, an ADC, as well as other signal processing circuitry. The MEMS device802and ASIC804are in communication via bidirectional bus810. MEMS device802and ASIC804can be packaged together to form a single digital product, such as a digital microphone. In some embodiments, digital microphone product800can also include other digital and analog components806, such as additional filters, amplifiers, and other similar components. The other digital and analog components806can communicate with MEMS device802through bidirectional bus812. In some embodiments, digital microphone product800can also include a microprocessor808, which can communicate with ASIC804and the other digital and analog components806through bidirectional bus814and bidirectional bus816. For example, microprocessor808can generate clock signals and receive data from ASIC804.
In other embodiments, microprocessor808can provide the functionality of digital or software components that would otherwise be resident on ASIC804. Example embodiments of the present invention are summarized here. Other embodiments can also be understood from the entirety of the specification and the claims filed herein. Example 1. According to an embodiment, a single-ended to differential converter includes a converter input, a first converter output, a second converter output, and an internal node, wherein the first converter output and the second converter output include a differential output; a non-inverting amplifier having an input coupled to the converter input, and an output coupled to the first converter output; an inverting amplifier having an input coupled to the first converter output, and an output coupled to the second converter output; a charge pump having a charge pump output capacitor coupled between the second converter output and the internal node; and a feedback capacitor coupled between the first converter output and the internal node. Example 2. The single-ended to differential converter of Example 1, wherein the non-inverting amplifier includes a source follower. Example 3. The single-ended to differential converter of any of the above examples, wherein the source follower includes a P-channel transistor. Example 4. The single-ended to differential converter of any of the above examples, further including a current source coupled to the first converter output. Example 5. The single-ended to differential converter of any of the above examples, further including a transconductance component coupled to the source follower. Example 6. The single-ended to differential converter of any of the above examples, wherein the inverting amplifier includes an operational amplifier having a first resistor coupled to an inverting input of the operational amplifier, and a second resistor coupled between the inverting input of the operational amplifier and an output of the operational amplifier. Example 7. The single-ended to differential converter of any of the above examples, wherein the first resistor and the second resistor have equal values. Example 8. The single-ended to differential converter of any of the above examples, wherein the charge pump is coupled to the internal node through a first bias resistor. Example 9. The single-ended to differential converter of any of the above examples, wherein the feedback capacitor includes an adjustable value capacitor. Example 10. The single-ended to differential converter of any of the above examples, further including a MEMS device coupled between the internal node and the converter input. Example 11. The single-ended to differential converter of any of the above examples, wherein the MEMS device includes a capacitive MEMS device configured for converting sound waves into an analog voltage. Example 12. The single-ended to differential converter of any of the above examples, further including a second bias resistor coupled to the converter input. Example 13. The single-ended to differential converter of any of the above examples, further including a differential bias circuit coupled between the first converter output and the second converter output. Example 14. The single-ended to differential converter of any of the above examples, wherein the differential bias circuit includes a third bias resistor and a fourth bias resistor, and wherein the third bias resistor and the fourth bias resistor have equal values. Example 15. 
The single-ended to differential converter of any of the above examples, further including a differential output amplifier coupled between the first converter output and the second converter output. Example 16. The single-ended to differential converter of any of the above examples, wherein the differential output amplifier includes a programmable gain amplifier. Example 17. According to an embodiment, an integrated circuit includes a first converter pin, a first converter output, a second converter output, and a second converter pin, wherein the first converter output and the second converter output include a first differential output; a non-inverting amplifier having an input coupled to the first converter pin, and an output coupled to the first converter output; an inverting amplifier having an input coupled to the first converter output, and an output coupled to the second converter output; a charge pump having a charge pump output capacitor coupled between the second converter output and the second converter pin; and a feedback capacitor coupled between the first converter output and the second converter pin. Example 18. The integrated circuit of Example 17, further including a differential input coupled between the first converter output and the second converter output, and a second differential output responsive to the differential input coupled between a third converter pin and a fourth converter pin. Example 19. According to an embodiment, a conversion method includes in a converter including a converter input, a first converter output, a second converter output, and an internal node, wherein the first converter output and the second converter output include a differential output, sensing an input voltage at the converter input; generating a noninverting first output voltage at the first converter output in response to the sensed input voltage; sensing the noninverting first output voltage at the first converter output; generating an inverting second output voltage at the second converter output in response to the noninverting first output voltage; coupling a charge pump output capacitor between the second converter output and the internal node; and coupling a feedback capacitor between the first converter output and the internal node. Example 20. The conversion method of Example 19, further including coupling a MEMS device between the second converter output and the internal node. While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is therefore intended that the appended claims encompass any such modifications or embodiments.
11943593
DETAILED DESCRIPTION It will be readily understood that the instant components, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of at least one of a method, apparatus, non-transitory computer readable medium and system, as represented in the attached figures, is not intended to limit the scope of the application as claimed, but is merely representative of selected embodiments. The instant features, structures, or characteristics as described throughout this specification may be combined in any suitable manner in one or more embodiments. For example, the usage of the phrases "example embodiments", "some embodiments", or other similar language, throughout this specification refers to the fact that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment. Thus, appearances of the phrases "example embodiments", "in some embodiments", "in other embodiments", or other similar language, throughout this specification do not necessarily all refer to the same group of embodiments, and the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In addition, while the term "message" may have been used in the description of embodiments, the application may be applied to many types of network data, such as packet, frame, datagram, etc. The term "message" also includes packet, frame, datagram, and any equivalents thereof. Furthermore, while certain types of messages and signaling may be depicted in exemplary embodiments, they are not limited to a certain type of message, and the application is not limited to a certain type of signaling. Example embodiments provide a system that includes a controller or central computer system to manage an audio configuration including, for example, a plurality of microphones, paging devices and/or loudspeakers, and to provide audio paging in a particular environment. An audio system may include tuning procedures to tune parameters that control various levels, equalization, sound pressure level (SPL), compression, etc.; the system may include multiple microphones, loudspeakers and/or zones, and may use components normally needed by an audio system, without other types of measurement instrumentation. A paging system may be overlaid on the audio system for added functionality. FIG.1Aillustrates a controlled speaker and microphone environment according to example embodiments. Referring toFIG.1A, the illustration100demonstrates an audio-controlled environment112which may have any number of speakers114and microphones116to detect audio, play audio, replay audio, adjust audio output levels, etc., via an automated tuning procedure. The configuration100may include various different areas130-160separated by space, walls and/or floors. The controller128may be in communication with all the audio elements and may include a computer, a processor, a software application, etc., set up to receive and produce audio. In this example, a chirp response measurement technique may be used to acquire a frequency response measurement of a loudspeaker. Also, the controller128may administer paging audio to one or more areas130-160via the one or more loudspeakers ("speakers")114.
With regard to a setup process, a launch button (e.g., auto setup+auto tuning) on a user interface may provide a way to test the sound profile of the room, including the speakers and microphones and their respective operations. Network discovery can be used to find devices plugged in and included in a list of system devices, and to provide them with a baseline configuration to initiate operation. The audio system may be realized in a graphical user interface format during a device discovery process; the operator can then drag and drop data in a user interface for a more customizable experience, or reset to a factory default level. If the system does not adequately tune to a certain level, an alert can be generated, and any miswirings can be discovered as well by a test signal sent to all known devices. Audio environments normally include various components and devices such as microphones, amplifiers, loudspeakers, DSP devices, etc. After installation, the devices need to be configured to act as a system. The software may be used to configure certain functions performed by each device. The controller128or central computing device may store a configuration file which can be updated during the installation process to include a newly discovered audio profile. One approach to performing the automated tuning process may include permitting auto tune algorithms to execute on a device that also contains custom DSP processing procedures. To enable this combined feature, the code would discover the appropriate signal injection and monitoring points within the custom configuration. With the injection and monitoring points identified, a selected DSP processing layout would be tuning compatible. Some operations in the auto tune process will send test signals out of each speaker one at a time, which increases total measurement time when many speakers are present. FIG.1Billustrates a data network configuration for managing paging operations according to example embodiments. Referring toFIG.1B, the data management process may be managed by a paging controller device174, such as a computer, server or similar device. The controller174may provide a user interface that permits a user to log in or authorize their credentials, select paging modes of operation, and provide paging codes which may have a priority assignment and which are identified as being associated with a live or a pre-recorded audio signal. The code entered may be identified by the page priority process178, which may be a process associated with a database of page codes172and page audio182stored in a local memory or a network cloud of a remote network180. When the code is identified, the code may be placed in a paging queue176of pending pages, which may include zero or more other pages which are pending to be played in one or more zones. Each page in a paging queue may include a code that corresponds to the page audio and a paging priority. The code may be further identified by a type of page (e.g., live, pre-recorded voice, emergency signal, etc.). The queue may include the code, the identifier of the audio to play and a location in the queue. When the page code is at the top of the queue, the codes may be used to select the audio and the specific zones to play the paging audio. For example, a page code of '348' may indicate a routine recording 'ABC' that is part of an airport paging system to remind passengers to secure their belongings for security reasons and report suspicious behavior.
The priority of the page may be 'high' and may move the page to the next page position in the queue, behind the first place position. The page may be identified by a priority level, its recording content and its zone(s) (1-5, etc.). The security page may be an omnibus page that includes all zones, at a maximum volume designation as well (1-10). FIG.1Cillustrates a paging configuration for conducting paging operations according to example embodiments. Referring toFIG.1C, the paging station182may include a computer and display interface for a user to operate the paging system. The paging controller184may be the same or a different computer that provides electrical signals to the paging zone devices186and188, which may be the single output device for one or more respective audio devices196and198. For example, a paging station operation may include a selection for a high priority page to a specific area managed by the paging zone188. The paging controller may store the paging code and associated audio in a page queue and forward the audio to the paging zone188when the page has matured in the queue (reached a top position) and/or at a particular time. The audio devices198linked to the paging zone188will then produce the audio at the appropriate time. FIG.2illustrates a data network configuration for managing paging operations according to example embodiments. Referring toFIG.2, the environment150may be subject to paging operations and may include various different zones102-110(e.g., zones 1-5) which are defined by temporary walls, permanent structures or just general areas without walls or defining structures. The zones may include any number of speakers which are in communication with the paging network. The speakers may be wireless or wired and may receive paging audio signals at certain designated speakers depending on the configuration. Certain speakers may receive no signal, other speakers may receive a paging audio signal, such as a pre-recorded audio signal or live voice signal, and others may receive a different signal, such as white noise, background noise, music, or a similar pre-recorded voice audio signal. The objective may include forwarding a paging signal to one or more areas while limiting the audio and signals forwarded to other areas. FIG.3illustrates a user interface application with a series of options to select for paging setup according to example embodiments. Referring toFIG.3, the software application may offer a configuration interface300to establish a number of paging input channels for a particular paging mixer along with other options. One input selection may be a page code312, which identifies a priority and a type of audio to play, and may include a security feature that permits the page when the code is received. Another option is the zone314, which may include one or more zones available to play the page audio, such as zones '1', '2', '3', 'ALL ZONES', etc. Any combination of zones is possible for playing the page audio. Another option is the preamble316, which may include an introduction, such as a chime, a harsh alert sound, or nothing at all. The option for a particular action318may include a recorded message, a direct audio speech input ('live audio'), or another recording. Another option is the recording324, such as a particular file or name, and a priority322, such as 1, 2, 3, 4, or none. The priority assigned to the entered paging code will move the paging audio up or down the paging queue prior to the delivery of the audio.
In another menu option, the paging zones326may be selected along with volume, muting and speaker selection options.

FIG.4Aillustrates user interfaces for establishing a paging operation according to example embodiments. Referring toFIG.4A, the example interfaces demonstrate a page code identifier412/422that is used to identify a particular page vs. another page, and a label418, which is a title that describes the type of page, in this case an announcement for a gate at an airport or similar facility. Another feature may include a code selection option416/426used to identify and browse code identifiers, and an announcement type, such as a recording indicated by a speaker424, or a live announcement indicated by a microphone414. The ‘x’ in the examples414and424indicates a mute option on the microphone414or the speaker. In the second example ofFIG.4A, the label ‘365’ indicates a white noise message type428.

FIG.4Billustrates additional user interfaces for establishing a paging operation according to example embodiments. Referring toFIG.4B, the first menu option now indicates a live microphone434with the same identifier ‘673’432and label438‘gate lounge announcement’. The option to talk now436is a prompt for the user to begin speaking. In the second example, the identifier442is the same white noise message, and in this example, the status is now ‘playing’446and the speaker444is active and not muted. Also, a status bar indicates how much of the message is left to be played for the white noise message example448.

FIG.4Cillustrates further user interfaces for establishing a paging operation according to example embodiments. Referring toFIG.4C, the first interface example indicates that the station is locked452until the correct PIN is entered458. The status may indicate a locked symbol454until the correct information is received. The option456to select different codes and options is also provided. In the second example interface, the station is being used to record a new message462, the status is temporarily closed468and the microphone is off464until the message is created and selected466.

FIG.5Aillustrates a process for a paging selection operation. Referring toFIG.5A, the process may include receiving a PIN at a paging station interface502; the PIN may be entered in real time by a user or via an automated paging process that cycles certain PIN inputs at certain times of day. The process also includes, responsive to confirming the PIN authorization, prompting the interface to initiate a paging function504, receiving a paging function selection, such as one or more of live audio and recorded audio506, providing a zone busy feedback notification when a requested zone is awaiting a free period of communication508, and forwarding one or more of the live audio and the recorded audio after the free period of communication has initiated510.

FIG.5Billustrates an example logic diagram of an example process of operating the paging function of the audio configuration according to example embodiments. Referring toFIG.5B, the process may include receiving a page code identifier522, determining a priority of the page code identifier524, queuing the page code identifier in a paging queue526, retrieving content associated with the page code528, and forwarding the content to the one or more audio devices when the page code has reached a top of the queue530. The content and the one or more audio devices which are selected may be based on the code.
The code may indicate priority, specific zones, speakers to output the announcement, speakers to not output the announcement, importance, the type of preamble to use, the amount of time to store and/or delay the announcement/page, etc. The process may also include determining whether the page code identifier indicates live or recorded audio, and initiating a content retrieval operation when the page code identifier indicates recorded audio. Also, the process may perform determining whether the page code identifier indicates live or recorded audio, and initiating a recording operation when the page code identifier indicates live audio. Determining the priority of the page code identifier may comprise matching the page code identifier with a priority stored in a table. The queuing of the page code identifier in the paging queue may include storing the page code identifier with the content to be forwarded to the one or more audio devices. The queuing of the page code identifier in the paging queue may include storing the page identifier with the content to be forwarded to the one or more audio devices in a second queue position when the page code identifier is associated with a higher priority than all other page codes stored in the page queue. The process may also include identifying that the page code identifier comprises a same priority as one or more other page code identifiers in the page queue, determining that a station identifier associated with the page code identifier has a higher priority than station identifiers associated with the one or more page code identifiers, and storing the page identifier with the content to be forwarded to the one or more audio devices in a first queue position when the station identifier is associated with a higher priority than the station identifiers associated with the one or more page code identifiers.

The paging station may satisfy the need for a convenience paging station solution for legacy audio equipment. The paging system may be a networked appliance with a microphone and push-to-talk button for making voice announcements to pre-configured destination paging zones. The types of stations may be constructed as a 4-button or a 10-button (keypad) variant. The buttons will be used to select the destination for voice announcements and recorded messages. A push-to-talk button shall be used to initiate all announcements. An LCD (display) on the front panel of the paging station will be used to select a page code for subsequent paging operations, provide visual prompts to the user regarding page progress, report faults and display system status, provide a user interface for recording announcements to local storage and configure push-to-talk button mode. Up, down and select navigation buttons adjacent to the display shall be used to navigate the system of menus that constitute the user interface for the device. A RESTful application programming interface ‘API’, accessible via the network interface, may provide a mechanism for configuring, controlling and monitoring the operation of the paging station. This API will be used by a server device to configure the paging system when it forms part of the audio network. A priority level shall be associated with each page code representing its relative importance. The priority will be an unsigned integer ranging from 1 to 16, where 16 represents the highest priority. The hardware design shall provide for onboard digital signal processing of the microphone audio input signal.
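One possible reading of the queueing rules above (the head of the queue is the page currently being delivered, a page outranking all pending pages is stored in the second queue position behind it, and ties are broken by station priority) is sketched below in Python; the Page fields and the tie-break details are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Page:
    code: str
    priority: int          # 1..16, 16 is the highest (per the spec above)
    station_priority: int  # used only to break equal-priority ties
    audio_id: str = ""

def enqueue(queue: list, page: Page) -> None:
    """Insert a page into the paging queue by priority."""
    # Position 0 (the page currently playing) is never pre-empted, so
    # even a page outranking everything lands in the second position.
    for i in range(1 if queue else 0, len(queue)):
        other = queue[i]
        outranks = page.priority > other.priority
        wins_tie = (page.priority == other.priority and
                    page.station_priority > other.station_priority)
        if outranks or wins_tie:
            queue.insert(i, page)
            return
    queue.append(page)
```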
The firmware permits parameters associated with each digital signal processing entity/block, such as the paging controller184and the paging zones186/188, to be controlled via the API. The paging station(s) will be discoverable using the industry standard domain name service/service discovery (mDNS/DNS-SD) protocols. This will enable another network node to determine a station's IP address and the services it supports. Once the device's IP address is known, the RESTful API can be used to configure, control or monitor the device. When used as part of an audio system, discovery, configuration and monitoring of a device will be managed by the server class device acting as a proxy. The software will be used to create a layout that defines page codes and zones used by the system. The page codes will be allocated to the paging stations. In the case of the four-button configuration the page codes shall be allocated to specific buttons.

The paging station shall be capable of both live and recorded voice announcements into the local sound reinforcement system. The type of page (live or recorded) shall be determined by the page code selected on the front panel of the device. The paging station will be the source for all paging audio associated with announcements initiated from the station, whether they are live or recorded. Live announcements will use audio from a gooseneck or hand-held microphone attached to the station. Recorded announcements will be stored on the local file system of the station and be played out upon request. Audio will be delivered to audio output devices via the station's networked audio port.

The push-to-talk button (PTT) on the front panel of the paging station shall operate in one of two modes: momentary or latching. The mode of operation shall be selectable via a menu on the station's front panel display. When the momentary mode is selected the user must hold down the PTT for the duration of the page. The latching mode allows the user to press and release the PTT once to initiate the page, then press and release again to terminate it. It shall be possible to disable selection of the latching mode at the station via a configuration-time option. In this case the push-to-talk button will always operate in the momentary fashion. All pages may be preceded by a preamble chime or message. The preamble shall be enabled and the desired message file selected as part of the page code configuration (see Preambles). Five pre-loaded preambles will be provided with the station. Other preambles can be uploaded to the device via the REST API. The REST API shall provide a mechanism for triggering recorded announcement playback. It shall be possible to select the message file, destination zones and the priority of the announcement using this API.

A four-digit PIN shall be used to authenticate paging station users. It shall be entered using the page code selection buttons on the front panel of the station. The PIN shall be entered on a four-button variant of the station using a combination of the page code selection buttons (to select the digit to enter) and the Up and Down navigation keys. The 10-button interface provides a numeric keypad for entering the PIN. The convenience paging station shall implement an intuitive wait-to-talk/ready-to-talk indication so a user who is waiting to give an announcement can easily understand when it is time to begin speaking. The paging station shall provide a “zone busy” feedback notification to ensure the user is aware of zone readiness conflicts.
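As an illustration of the mDNS/DNS-SD discovery step described above, the following Python sketch uses the third-party python-zeroconf package; the service type string _pagingstation._tcp.local. and the REST path mentioned in the comment are assumptions, not names taken from the text:

```python
from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

SERVICE_TYPE = "_pagingstation._tcp.local."  # hypothetical service name

class StationListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        if info:
            addr = info.parsed_addresses()[0]
            print(f"Found paging station {name} at {addr}:{info.port}")
            # A server could now reach the station's RESTful API at,
            # e.g., http://<addr>:<port>/... (the path is hypothetical).

    def remove_service(self, zc, type_, name) -> None: ...
    def update_service(self, zc, type_, name) -> None: ...

zc = Zeroconf()
browser = ServiceBrowser(zc, SERVICE_TYPE, StationListener())
```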
This will be visible as soon as the page code is selected. When two pages with overlapping zone sets occur, the page code with the highest priority shall override the lower priority one. If the higher priority page starts before the lower priority one, then the paging station attempting to initiate the lower priority page will be disallowed, even if only one destination zone overlaps. If the lower priority page starts first, it shall be allowed to continue after the higher priority page starts, so long as it has at least one non-overlapping zone. If two pages with overlapping zones are started and they have the same priority, then the first to start will be treated like a higher priority page. The second will be disallowed. It is the selected page code that determines if a preamble chime will be played prior to the announcement. If the page code preamble enabled attribute is true, then a preamble will be used and the desired preamble message must be specified. The preamble message can be one of the five preloaded chimes or one of the user provided preamble message files uploaded via the REST API.

The operations of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a computer program executed by a processor, or in a combination of the two. A computer program may be embodied on a computer readable medium, such as a storage medium. For example, a computer program may reside in random access memory (“RAM”), flash memory, read-only memory (“ROM”), erasable programmable read-only memory (“EPROM”), electrically erasable programmable read-only memory (“EEPROM”), registers, hard disk, a removable disk, a compact disk read-only memory (“CD-ROM”), or any other form of storage medium known in the art.

FIG.6is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the application described herein. Regardless, the computing node600is capable of being implemented and/or performing any of the functionality set forth hereinabove. In computing node600there is a computer system/server602, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server602include, but are not limited to, personal computer systems, server computer systems, thin clients, rich clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. Computer system/server602may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server602may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
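Returning to the zone-overlap arbitration described at the start of this passage, one possible reading of those rules is sketched below in Python; the page shape (a priority integer and a set of zones) and the assumption that a lower-priority page yields its contested zones are illustrative:

```python
def try_start_page(new_page, active_pages):
    """Decide whether a newly initiated page may start, applying the
    overlap rules above. Pages carry .priority (int, higher wins) and
    .zones (a set of zone numbers)."""
    for active in list(active_pages):
        overlap = new_page.zones & active.zones
        if not overlap:
            continue
        if active.priority >= new_page.priority:
            # A higher-priority page, or an equal-priority page that
            # started first, blocks the newcomer entirely, even if
            # only one destination zone overlaps.
            return False
        # The newcomer outranks the active page: the active page may
        # continue only in its non-overlapping zones (assumption: the
        # contested zones are yielded to the higher-priority page).
        active.zones -= overlap
        if not active.zones:
            active_pages.remove(active)
    active_pages.append(new_page)
    return True
```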
As displayed inFIG.6, computer system/server602in cloud computing node600is displayed in the form of a general-purpose computing device. The components of computer system/server602may include, but are not limited to, one or more processors or processing units604, a system memory606, and a bus that couples various system components including system memory606to processor604. The bus represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus. Computer system/server602typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server602, and it includes both volatile and non-volatile media, removable and non-removable media. System memory606, in one embodiment, implements the flow diagrams of the other figures. The system memory606can include computer system readable media in the form of volatile memory, such as random-access memory (RAM)610and/or cache memory612. Computer system/server602may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system614can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not displayed and typically called a “hard drive”). Although not displayed, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to the bus by one or more data media interfaces. As will be further depicted and described below, memory606may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of various embodiments of the application. Program/utility616, having a set (at least one) of program modules618, may be stored in memory606by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules618generally carry out the functions and/or methodologies of various embodiments of the application as described herein. As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method, or computer program product. Accordingly, aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) 
or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present application may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Computer system/server602may also communicate with one or more external devices620such as a keyboard, a pointing device, a display622, etc.; one or more devices that enable a user to interact with computer system/server602; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server602to communicate with one or more other computing devices. Such communication can occur via I/O interfaces624. Still yet, computer system/server602can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter626. As depicted, network adapter626communicates with the other components of computer system/server602via a bus. It should be understood that although not displayed, other hardware and/or software components could be used in conjunction with computer system/server602. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc. One skilled in the art will appreciate that a “system” could be embodied as a personal computer, a server, a console, a personal digital assistant (PDA), a cell phone, a tablet computing device, a smartphone or any other suitable computing device, or combination of devices. Presenting the above-described functions as being performed by a “system” is not intended to limit the scope of the present application in any way but is intended to provide one example of many embodiments. Indeed, methods, systems and apparatuses disclosed herein may be implemented in localized and distributed forms consistent with computing technology. It should be noted that some of the system features described in this specification have been presented as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom very large-scale integration (VLSI) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, graphics processing units, or the like. A module may also be at least partially implemented in software for execution by various types of processors. An identified unit of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions that may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module. Further, modules may be stored on a computer-readable medium, which may be, for instance, a hard disk drive, flash device, random access memory (RAM), tape, or any other such medium used to store data. 
Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. It will be readily understood that the components of the application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the detailed description of the embodiments is not intended to limit the scope of the application as claimed but is merely representative of selected embodiments of the application. One having ordinary skill in the art will readily understand that the above may be practiced with steps in a different order, and/or with hardware elements in configurations that are different than those which are disclosed. Therefore, although the application has been described based upon these preferred embodiments, certain modifications, variations, and alternative constructions would be apparent to those of skill in the art. While preferred embodiments of the present application have been described, it is to be understood that the embodiments described are illustrative only and the scope of the application is to be defined solely by the appended claims when considered with a full range of equivalents and modifications (e.g., protocols, hardware devices, software platforms etc.) thereto.
11943594
The drawings are for the purpose of illustrating example embodiments, but those of ordinary skill in the art will understand that the technology disclosed herein is not limited to the arrangements and/or instrumentality shown in the drawings.

DETAILED DESCRIPTION

I. Overview

SONOS, Inc. has been a consistent innovator in the sound experience space over the past decade. For example, SONOS, Inc. created stereo pair functionality for playback devices that allows two playback devices to be bonded together to form a stereo pair as described in U.S. Pat. No. 8,788,080, issued on Jul. 22, 2014, titled “MULTI-CHANNEL PAIRING IN A MEDIA SYSTEM,” which is incorporated herein by reference in its entirety. After creating stereo pair functionality, SONOS, Inc. went on to create dynamic grouping functionality for playback devices as described in U.S. Pat. No. 9,329,831, issued on May 3, 2016, titled “PLAYBACK EXPANSION,” which is incorporated herein by reference in its entirety. In furtherance of the consistent innovation by SONOS, Inc. in the sound experience space, SONOS, Inc. has developed new techniques for intelligently distributing audio between playback devices based on information about the current operating conditions, such as information regarding a configuration of the players and/or user preferences, to further improve the sound experience in dynamic environments (e.g., households, venues, businesses, etc.) employing, for example, portable players (e.g., being moved relative to each other) and/or a combination of stationary players and portable playback devices (e.g., being moved relative to each other and/or the stationary players). Accordingly, aspects of the present disclosure relate to automatically allocating audio portions (e.g., audio channels, frequency ranges, etc.) in response to a detected trigger and based on retrieved configuration information.

For example, a portable playback device can be moved to different locations within a playback system. As the portable playback device changes position, the audio it is desired to reproduce will also change. While the portable playback device may be reconfigured manually each time it is moved to a new location, this is inconvenient, potentially requiring access to a separate control device and/or interrupting the reproduction of any media. The configuration can also be complex, involving not just grouping playback devices to play media in synchrony, but adjusting audio allocations between those devices. Audio allocation can be based on one or more of channels (such as a left channel, a right channel, etc.) and frequency ranges (such as low frequencies below a predetermined threshold, other frequencies above the predetermined threshold, etc.). Accordingly, aspects of the present disclosure relate to automatic configuration of such audio allocation for an improved user experience. For example, techniques are described herein to update the audio allocation responsive to a trigger being detected based on retrieved configuration information to inform the audio allocation. In this way, audio allocations can be updated without requiring user input to provide an improved user experience. In some embodiments, for example, a method of allocating audio data between a first playback device and a second playback device is provided.
The audio data comprises a plurality of audio portions and the method comprises: detecting a trigger associated with the first playback device; and responsive to detecting the trigger: retrieving configuration information related to the first playback device and the second playback device; and automatically updating an allocation of the audio portions for reproduction by at least one of the first playback device and the second playback device based on the configuration information. A wide variety of triggers may be used in the method. Example triggers include a detected voice input, an input from a user interface on a control device, an input from a user interface on a playback device such as a button press, or a detection of a change in position of the playback device relative to other playback devices in a playback system. The configuration information may comprise one or more state variables which include information of the devices in the playback system and/or the current configuration of those devices. The audio allocation, such as channel or frequency range for reproduction, is then updated based on the configuration information. This can provide an improved user experience in several ways. When the trigger is associated with a movement of a playback device to a new position, a variety of different actions can take place. For example, moving a playback device to a position away from a device that it was previously bonded with to reproduce one channel of a stereo pair may result in that playback device automatically changing its audio allocation to reproduce all channels. Moving a playback device to a position in proximity to another playback device may result in the audio allocation being updated so that the playback device reproduces one channel of a stereo pair or one channel of a surround sound or home theater setup. Additionally or alternatively, updating the audio allocation may involve updating a frequency allocation. For example, changing the position of a device so that it is no longer in proximity to a subwoofer may update the audio allocation to reproduce low frequencies (e.g., the low frequencies previously allocated to the subwoofer). When the trigger is associated with a voice input, the audio allocation can relate to providing the response (e.g., an audible response) from a voice assistant. For example, a response may be provided from a playback device closest to the voice input, from a primary device designated for providing voice responses, from all devices in the vicinity of the voice input and so on, updating the audio allocation as required. This can allow, for example, playback devices which do not include microphones to provide responses to voice inputs detected by other devices. Similarly, a frequency balance or equalization may be adjusted, such as adjusting the audio allocation so that a subwoofer is not used when providing a response from a voice assistant. It should be appreciated that incorporating configuration information into player grouping may provide any of a variety of benefits over conventional grouping techniques that force users to manually define all aspects of the group (e.g., which players are in the group, which frequencies the players are to reproduce, etc.). By considering retrieved configuration information and updating the audio allocation based on that configuration information, embodiments described herein provide a more seamless user experience because the configuration information can inform how an audio allocation is updated.
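As a minimal sketch of the trigger-driven method just described, the following Python fragment illustrates the retrieve-then-update flow; the trigger names, the shape of the configuration information and the set-based channel allocation are all assumptions made for the sketch:

```python
def handle_trigger(trigger, first, second, retrieve_config):
    """On a detected trigger, retrieve configuration information for
    both devices and automatically update the audio allocation."""
    config = retrieve_config(first, second)  # state variables, etc.

    if trigger == "moved_next_to_partner":
        # If the second device currently reproduces everything, split
        # a stereo pair between the two devices.
        if config[second]["channels"] == {"left", "right"}:
            config[first]["channels"] = {"left"}
            config[second]["channels"] = {"right"}
    elif trigger == "moved_away_from_partner":
        # No longer bonded: each device reproduces all channels.
        config[first]["channels"] = {"left", "right"}
        config[second]["channels"] = {"left", "right"}

    return config  # the updated allocation is then applied
```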
It is not necessary, for example, for a user to have defined beforehand how a playback device should behave in response to a particular trigger; instead, once the trigger is detected the audio allocation is updated automatically based on retrieved configuration information. The trigger may indicate that the first playback device is to be grouped with the second playback device for playback of media. The method may then comprise: further responsive to detecting the trigger, causing the first playback device and the second playback device to join together in a group of playback devices for media playback. The automatically updating the allocation of the audio portions comprises automatically updating the allocation of the audio portions for reproduction of media in synchrony by the first and second playback devices. This allows a synchrony group to be configured automatically and the audio allocations updated as required based on the configuration of the devices. For example, the automatically updating audio allocations may also update channels and/or frequency bands reproduced by one or both of the first and second devices. In one example, the automatically updating the allocation of the audio portions for reproduction of media in synchrony comprises determining that the configuration information indicates that the second playback device is configured to reproduce all the audio portions, and responsively allocating a first subset of the audio portions to the first playback device and a second subset of the audio portions to the second playback device, wherein the first subset and second subset are different. In this way the audio allocations of both the first and second devices are updated, for example to adjust one to be a left channel and the other a right channel of stereo audio, or to adjust one to reproduce low frequencies or a low frequency effects channel and the other to reproduce other channels/frequencies, in the case of adding a subwoofer to a playback device that can reproduce full-range audio (e.g., a full range of frequencies which can be perceived by a listener). In another example, the automatically updating the allocation of the audio portions for reproduction of media in synchrony comprises determining that the configuration information indicates that both the first playback device and the second playback device have a same associated identifier, and responsively allocating a first subset of the audio portions to the first playback device and a second subset of the audio portions to the second playback device, wherein the first subset and second subset are different. The identifier may be a name allocated to the device, such as “Living Room”. In both of these examples the audio allocation of the second device is updated along with the first device; the trigger causes not just the first device to join a synchrony group with an allocation of audio portions, but also the audio portions reproduced by the second device to be changed. In some examples, the method may further comprise determining a position of the first playback device relative to the second playback device; and allocating the first and second subsets of the audio portions based on the determined position. This allows the allocating of the audio portions to take into account a determined position of the playback devices, such as to allow left and right channels to be allocated to a device in the respective position. The position may be determined in various ways.
In one example, the determining a position comprises: causing the second playback device to emit a sound; receiving the sound via a microphone array comprising a plurality of microphones provided on the first playback device; and determining the position based on the relative magnitude of the received sound at two or more of the plurality of microphones in the microphone array. In this way the position can be determined without requiring any further user interaction or additional devices. The method can be used regardless of whether the second playback device also comprises a microphone array. For example, the microphone array can be directional and locate a direction of the received sound relative to the first playback device. The emitted sound could be audible or inaudible. Inaudible sound could be ultrasonic, outside the range of typical human hearing, and/or having a frequency above 20 kHz, provided that the second device can reproduce the sound and the microphone array can detect it. In another example, the determining a position comprises: determining a first proximity of a control device to the first playback device; determining a second proximity of the control device to the second playback device; and determining the position based on the first proximity, the second proximity, and a predetermined position of the control device. This may make use of a known position of a control device to determine the position. For example, a user may be directed to place a control device at a predetermined position (such as near a particular playback device). Alternatively, the position of the control device may already be known, for example a known position of a Network Microphone Device or other network-connected device, such as smart devices for security or home automation. This example can work with all playback devices; there is no requirement for at least one of the playback devices to include a microphone. When proximity is determined with reference to a control device, sounds may also be used to determine proximity. The determining the first proximity may comprise: causing the first playback device to emit a first sound and receiving the first sound via at least one microphone on a control device; and the determining the second proximity may comprise causing the second playback device to emit a second sound and receiving the second sound via the at least one microphone on the control device. The first and second sounds can be the same and spaced apart in time, or could be substantially simultaneous and have different characteristics, for example occupying different frequency bands. The proximity can be determined with reference to the loudest sound detected by the control device. For example, if the control device is known to be at a left position then the playback device emitting the loudest detected sound may be determined to be in the left position and the audio allocations updated as appropriate. This allows relative position to be determined without requiring a directional microphone array, which may not be present on the control device. When proximity is determined with reference to a control device, wireless communication may also be used to determine proximity. The determining the first proximity may be based on a wireless communication between the control device and the first playback device; and the determining the second proximity may be based on a wireless communication between the control device and the second playback device.
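Before turning to the wireless-signal variants, the magnitude-based localization described at the start of this passage might be sketched as follows; this is illustrative only (a practical implementation would likely also use phase or time-of-arrival information), and the microphone-angle representation is an assumption:

```python
import math

def estimate_bearing(magnitudes, mic_angles):
    """Estimate the direction of an emitted sound from the relative
    magnitude received at each microphone in the array.

    magnitudes: measured level (e.g., RMS) at each microphone
    mic_angles: angle (radians) each microphone faces on the device
    """
    # Weight each microphone's facing direction by its received level;
    # the resultant vector points roughly toward the sound source.
    x = sum(m * math.cos(a) for m, a in zip(magnitudes, mic_angles))
    y = sum(m * math.sin(a) for m, a in zip(magnitudes, mic_angles))
    return math.atan2(y, x)

# With one mic facing right (0 rad) and one facing left (pi rad), a
# louder right microphone yields a bearing near 0: the other playback
# device is to the right, so this device would take the left channel.
print(estimate_bearing([0.9, 0.2], [0.0, math.pi]))  # -> 0.0
```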
The wireless communication could make use of indications of wireless signal strength to determine proximity. These could be direct indications, such as a Received Signal Strength Indication (RSSI) of a wireless signal at the control device; indirect indications, such as the physical data rate of the wireless communication channel at the control device (which is generally inversely proportional to distance, all other things being equal) or the Bit Error Rate (BER) at the control device (which is generally proportional to distance, all other things being equal); or higher level protocols, such as the Bluetooth® proximity profile (PXP) as defined with reference to the Bluetooth® Generic Attribute profile (GATT). In the methods determining proximity with reference to a control device, no user interaction may be required (when the control device is located at a predetermined position already) or reduced user interaction may be required, for example relocating the control device to the predetermined position. The examples in which position can be determined can be applied to more than first and second devices, for example locating devices in a home theater or surround sound setup, such as three, four, five, six or seven playback devices, possibly also with a subwoofer for low frequencies or a low frequency effects channel (the position of the subwoofer may not be discernible to a listener, so determining the position of the subwoofer may be omitted when determining the position). In some examples, the method may comprise retrieving preference data, and the automatically updating the allocation is further based on the preference data. For example, the user may set a default pairing type which overrides other types of audio allocation. This could apply global defaults to give more control over the automatic allocating of audio portions. The user preferences may include whether automatic bonding into a stereo pairing is enabled, or whether a particular playback device should always reproduce particular audio portions, such as all the audio portions or a subset of all the audio portions. Other preferences may define how playback devices behave when the trigger results in the removal of a device from a group, for example whether one or neither of the playback devices continues to reproduce audio after the audio allocations are updated to reflect the removal of the playback device. The preference data may be stored in a playback device, in a control device, or remotely, such as in an internet-accessible server system, and may be separate from or form part of the configuration information. The automatic allocation of audio portions can be based on the configuration information in further ways in additional examples. In one example, the method comprises determining that the configuration information indicates that the second playback device is configured to reproduce a subset of all channels of audio, and responsively allocating all audio portions to the first playback device. If the second playback device is already allocated a subset of all channels of audio, it is likely that this is for a particular reason, such as the second device already being configured for bonded playback with other devices. In this case, the automatic allocating allocates all audio portions to the first device, so that these can be reproduced in addition rather than disrupting existing settings.
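Returning to the signal-strength approach above, a minimal sketch: given RSSI readings (in dBm, less negative at shorter range) measured at the control device for each playback device, the nearest device can be picked directly; the device names and values are illustrative:

```python
def nearest_by_rssi(rssi_dbm):
    """Pick the playback device nearest to the control device from a
    mapping of device -> RSSI in dBm measured at the control device.
    RSSI rises (toward 0) as distance shrinks, so max() selects it."""
    return max(rssi_dbm, key=rssi_dbm.get)

# If the control device is known to sit at the left listening position,
# the nearest device would be assigned the left channel.
readings = {"player_a": -42, "player_b": -67}  # illustrative values
print(nearest_by_rssi(readings))               # -> 'player_a'
```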
In another example, the method comprises determining that the configuration information indicates that the first playback device is operating on battery power and that a remaining battery life of the first playback device is below a predetermined threshold, and responsively allocating all audio portions to the first playback device. The threshold may be expressed as a percentage, for example less than 50% or less than 25% of battery power remaining; as a time, for example less than 2 hours, less than 1 hour or less than 30 minutes; or as an absolute value, such as less than 15 Watt-hours (Wh), less than 10 Wh, less than 5 Wh or less than 1 Wh. In this case the automatic allocating assigns all audio portions to the first playback device to provide an improved listening experience should the battery run out during reproduction. A sudden loss of some portions of audio may be less likely to be perceived by the listener at the point the battery runs out. One of the triggers for the method may be a voice input which is received by a microphone array on the first playback device; the automatically updating the allocation of audio portions for reproduction then comprises determining at least one playback device to respond to the voice input. This can allow the most appropriate device to respond to voice input. The determination of the allocation may depend on the nature of the response, for example using both first and second playback devices for music reproduction and a single one for information or a reply from a voice assistant (so that the updating the allocation then includes allocating one of the playback devices no audio portions). The allocating may also depend on what system setup is determined from the configuration information, such as updating the audio allocations so that a primary device reproduces the response to the voice input (which may be a soundbar or soundbase in a home theater or surround setup, or defined by a user in preference information). Where the trigger is a voice input, the voice input may be further received by a microphone array on the second playback device, and the automatically updating the allocation of audio portions can be further based on the voice input received by the first playback device and the voice input received by the second playback device. This may allow the device closest to the user to respond to the voice input, using the sound pressure recorded by the microphone at each device to determine which received the loudest sound and is therefore closest to the user. In this way a particular playback device or devices may be allocated audio portions for a response from a voice assistant. Relative volumes may be adjusted amongst the playback devices to account for a position of the listener (assuming the voice input was received from the listening position). In some examples, the allocating the audio portions may be for a particular time period, so that the updated allocation does not remain in place indefinitely. For example, when the allocation of the audio portions is triggered by a voice input, the updating the audio allocations may be for the duration of the response to the voice input and revert to the previous audio allocations once the response is complete. In another embodiment, a playback device comprises: a wireless interface configured to receive audio data comprising a plurality of audio portions; a speaker for reproducing at least one of the plurality of audio portions received via the wireless interface; a storage; and a processing system.
The storage comprises computer-readable instructions, such as non-transitory computer-readable instructions that, when executed by the processing system, instruct the playback device to carry out a method as described above. In another embodiment, a playback device comprises: a wireless interface configured to receive audio data comprising a plurality of audio portions; a speaker for reproducing at least one of the plurality of audio portions received via the wireless interface; a storage; and a processing system. The storage comprises non-transitory computer-readable instructions that, when executed by the processor, instruct the playback device to: responsive to a trigger associated with the playback device and indicating that the playback device is to be grouped with another playback device for playback of media: retrieve configuration information related to the playback device and the another playback device; cause the playback device and the another playback device to join together in a group for synchronous media playback; and automatically update an allocation of the audio portions for reproduction by the playback device based on the configuration information. Such a playback device can be grouped with another playback device in response to a trigger, reducing user input and simplifying setup of playback systems in which playback devices are grouped. For example, grouping can be achieved without requiring a separate control device, user input, or pre-configuration of the grouped devices. The non-transitory computer-readable instructions, when executed by the processor, may instruct the playback device to: determine either: (i) that the configuration information indicates that the another playback device is configured to reproduce all the audio portions, or (ii) that the configuration information indicates that both the playback device and the another playback device have a same associated identifier, and responsively allocate a first subset of the audio portions to the playback device and a second subset of the audio portions to the another playback device, wherein the first subset and second subset are different. This can allow automatic allocation so that the playback device forms part of a bonded setup in which different playback devices reproduce different channels of audio, such as a left and right stereo setup between two devices. The playback device may comprise a microphone array. The non-transitory computer-readable instructions, when executed by the processor, can instruct the playback device to: cause the another playback device to emit a sound; receive the sound via the microphone array; and determine a position of the playback device relative to the another playback device based on the received sound, the first subset and the second subset being based on the position. Such a device can automatically determine whether it is positioned as the left or right device in a stereo pair, for example, and update the audio allocation to reflect this. The playback device may comprise a battery. The non-transitory computer-readable instructions, when executed by the processor, can instruct the playback device to determine that the playback device is operating on battery power and that a remaining battery life of the playback device is below a predetermined threshold, and responsively allocate all audio portions to the playback device. This can provide a less disruptive experience should the battery of the playback device subsequently run out during media playback.
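A sketch of this battery rule, with the threshold expressed as a remaining-charge fraction (one of the illustrative threshold forms mentioned earlier; the device-state shape is an assumption):

```python
LOW_BATTERY = 0.25  # e.g., below 25% of battery power remaining

def allocation_for(device_state, current_allocation):
    """If the device runs on battery below the threshold, allocate all
    audio portions to it, so that no portion of the audio disappears
    abruptly if the battery runs out during playback."""
    if (device_state.get("on_battery")
            and device_state.get("battery_fraction", 1.0) < LOW_BATTERY):
        return {"channels": "all", "frequencies": "full-range"}
    return current_allocation
```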
According to another embodiment, a playback device comprises: a wireless interface configured to receive audio data comprising a plurality of audio portions; a speaker for reproducing at least one of the plurality of audio portions received via the wireless interface; a microphone array; a storage; and a processing system. The storage comprises non-transitory computer-readable instructions that, when executed by the processor, instruct the playback device to: responsive to a voice input received by the microphone array: retrieve configuration information related to the playback device and another playback device; and automatically update an allocation of the audio portions for the playback device to reproduce the response to the voice input based on the configuration information. Such a playback device can allocate audio as appropriate for the response to the voice input, such as allocating a playback device closest to a user, or playback devices suitable for the nature of the response. The non-transitory computer-readable instructions, when executed by the processor, can instruct the playback device to: determine that the configuration information indicates the playback device is configured to reproduce a first subset of the audio portions in synchrony with the another playback device, and responsively update the allocation of audio portions between the playback device and the another playback device such that the response to the voice input is reproduced by the another playback device and not the playback device. This can allow the most appropriate device to respond, which may be a device other than the one that received the voice input. While some examples described herein may refer to functions performed by given actors such as “users,” “listeners,” and/or other entities, it should be understood that this is for purposes of explanation only. The claims should not be interpreted to require action by any such example actor unless explicitly required by the language of the claims themselves. In the Figures, identical reference numbers identify generally similar, and/or identical, elements. To facilitate the discussion of any particular element, the most significant digit or digits of a reference number refers to the Figure in which that element is first introduced. For example, element110ais first introduced and discussed with reference toFIG.1A. Many of the details, dimensions, angles and other features shown in the Figures are merely illustrative of particular embodiments of the disclosed technology. Accordingly, other embodiments can have other details, dimensions, angles and features without departing from the spirit or scope of the disclosure. In addition, those of ordinary skill in the art will appreciate that further embodiments of the various disclosed technologies can be practiced without several of the details described below.

II. Suitable Operating Environment

FIG.1Ais a partial cutaway view of a media playback system100distributed in an environment101(e.g., a house). The media playback system100comprises one or more playback devices110(identified individually as playback devices110a-n), one or more network microphone devices (“NMDs”)120(identified individually as NMDs120a-c), and one or more control devices130(identified individually as control devices130aand130b). As used herein the term “playback device” can generally refer to a network device configured to receive, process, and output data of a media playback system.
For example, a playback device can be a network device that receives and processes audio content. In some embodiments, a playback device includes one or more transducers or speakers powered by one or more amplifiers. In other embodiments, however, a playback device includes one of (or neither of) the speaker and the amplifier. For instance, a playback device can comprise one or more amplifiers configured to drive one or more speakers external to the playback device via a corresponding wire or cable. Moreover, as used herein the term NMD (i.e., a “network microphone device”) can generally refer to a network device that is configured for audio detection. In some embodiments, an NMD is a stand-alone device configured primarily for audio detection. In other embodiments, an NMD is incorporated into a playback device (or vice versa). The term “control device” can generally refer to a network device configured to perform functions relevant to facilitating user access, control, and/or configuration of the media playback system100. Each of the playback devices110is configured to receive audio signals or data from one or more media sources (e.g., one or more remote servers, one or more local devices) and play back the received audio signals or data as sound. The one or more NMDs120are configured to receive spoken word commands, and the one or more control devices130are configured to receive user input. In response to the received spoken word commands and/or user input, the media playback system100can play back audio via one or more of the playback devices110. In certain embodiments, the playback devices110are configured to commence playback of media content in response to a trigger. For instance, one or more of the playback devices110can be configured to play back a morning playlist upon detection of an associated trigger condition (e.g., presence of a user in a kitchen, detection of a coffee machine operation). In some embodiments, for example, the media playback system100is configured to play back audio from a first playback device (e.g., the playback device110a) in synchrony with a second playback device (e.g., the playback device110b). Interactions between the playback devices110, NMDs120, and/or control devices130of the media playback system100configured in accordance with the various embodiments of the disclosure are described in greater detail below with respect toFIGS.1B-3F. In the illustrated embodiment ofFIG.1A, the environment101comprises a household having several rooms, spaces, and/or playback zones, including (clockwise from upper left) a master bathroom101a, a master bedroom101b, a second bedroom101c, a family room or den101d, an office101e, a living room101f, a dining room101g, a kitchen101h, and an outdoor patio101i. While certain embodiments and examples are described below in the context of a home environment, the technologies described herein may be implemented in other types of environments. In some embodiments, for example, the media playback system100can be implemented in one or more commercial settings (e.g., a restaurant, mall, airport, hotel, a retail or other store), one or more vehicles (e.g., a sports utility vehicle, bus, car, a ship, a boat, an airplane), multiple environments (e.g., a combination of home and vehicle environments), and/or another suitable environment where multi-zone audio may be desirable. The media playback system100can comprise one or more playback zones, some of which may correspond to the rooms in the environment101.
The media playback system100can be established with one or more playback zones, after which additional zones may be added or removed to form, for example, the configuration shown inFIG.1A. Each zone may be given a name according to a different room or space such as the office101e, master bathroom101a, master bedroom101b, the second bedroom101c, kitchen101h, dining room101g, living room101f, and/or the patio101i. In some aspects, a single playback zone may include multiple rooms or spaces. In certain aspects, a single room or space may include multiple playback zones. In the illustrated embodiment ofFIG.1A, the master bathroom101a, the second bedroom101c, the office101e, the living room101f, the dining room101g, the kitchen101h, and the outdoor patio101ieach include one playback device110, and the master bedroom101band the den101dinclude a plurality of playback devices110. In the master bedroom101b, the playback devices110land110mmay be configured, for example, to play back audio content in synchrony as individual ones of playback devices110, as a bonded playback zone, as a consolidated playback device, and/or any combination thereof. Similarly, in the den101d, the playback devices110h-jcan be configured, for instance, to play back audio content in synchrony as individual ones of playback devices110, as one or more bonded playback devices, and/or as one or more consolidated playback devices. Additional details regarding bonded and consolidated playback devices are described below with respect toFIGS.1B,1E and1I-1M. In some aspects, one or more of the playback zones in the environment101may each be playing different audio content. For instance, a user may be grilling on the patio101iand listening to hip hop music being played by the playback device110cwhile another user is preparing food in the kitchen101hand listening to classical music played by the playback device110b. In another example, a playback zone may play the same audio content in synchrony with another playback zone. For instance, the user may be in the office101elistening to the playback device110fplaying back the same hip hop music being played back by playback device110con the patio101i. In some aspects, the playback devices110cand110fplay back the hip hop music in synchrony such that the user perceives that the audio content is being played seamlessly (or at least substantially seamlessly) while moving between different playback zones. Additional details regarding audio playback synchronization among playback devices and/or zones can be found, for example, in U.S. Pat. No. 8,234,395 entitled “System and method for synchronizing operations among a plurality of independently clocked digital data processing devices,” which is incorporated herein by reference in its entirety. To facilitate synchronous playback, the playback device(s) described herein may, in some embodiments, be configurable to operate in (and/or switch between) different modes such as a group coordinator mode and/or a group member mode.
While operating in the group coordinator mode, the playback device may be configured to coordinate playback within the group by, for example, performing one or more of the following functions: (i) receiving audio content from an audio source, (ii) using a clock (e.g., a physical clock or a virtual clock) in the playback device to generate playback timing information for the audio content, (iii) transmitting portions of the audio content and playback timing for the portions of the audio content to at least one other playback device (e.g., at least one other playback device operating in a group member mode), and/or (iv) playing back the audio content in synchrony with the at least one other playback device using the generated playback timing information. While operating in the group member mode, the playback device may be configured to perform one or more of the following functions: (i) receiving audio content and playback timing for the audio content from the at least one other device (e.g., a playback device operating in a group coordinator mode); and/or (ii) playing the audio content in synchrony with at least the other playback device using the playback timing for the audio content.
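By way of a non-limiting illustration only, the following sketch shows one way the coordinator and member roles described above might divide the work. All names are hypothetical and not part of the disclosed system; clock synchronization across devices is assumed to have been handled elsewhere, so a single process clock stands in for the shared group clock, and the network send is reduced to a method call:

```python
import time
from dataclasses import dataclass

@dataclass
class TimedFrame:
    samples: bytes   # a chunk of audio content
    play_at: float   # group-clock time at which every member should render this chunk

class GroupCoordinator:
    """Stamps each audio frame with a future playback time and distributes
    the (frame, timing) pairs to the group members (sketch only)."""

    def __init__(self, members, latency=0.25):
        self.members = members   # objects exposing an enqueue(frame) method
        self.latency = latency   # scheduling delay so frames arrive before play_at

    def distribute(self, frame_bytes):
        frame = TimedFrame(samples=frame_bytes, play_at=time.monotonic() + self.latency)
        for member in self.members:
            member.enqueue(frame)   # a real system would send this over the network
        return frame                # the coordinator also plays its own copy at play_at

class GroupMember:
    """Queues received frames and plays each one when the clock reaches play_at."""

    def __init__(self):
        self.queue = []

    def enqueue(self, frame):
        self.queue.append(frame)

    def play_due_frames(self):
        now = time.monotonic()
        due = [f for f in self.queue if f.play_at <= now]
        self.queue = [f for f in self.queue if f.play_at > now]
        return due                  # a real device would hand these to its DAC
```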
a. Suitable Media Playback System

FIG.1Bis a schematic diagram of the media playback system100and a cloud network102. For ease of illustration, certain devices of the media playback system100and the cloud network102are omitted fromFIG.1B. One or more communication links103(referred to hereinafter as "the links103") communicatively couple the media playback system100and the cloud network102. The links103can comprise, for example, one or more wired networks, one or more wireless networks, one or more wide area networks (WAN) (e.g., the Internet), one or more local area networks (LAN) (e.g., one or more WIFI networks), one or more personal area networks (PAN) (e.g., one or more BLUETOOTH networks, Z-WAVE networks, wireless Universal Serial Bus (USB) networks, ZIGBEE networks, and/or IRDA networks), one or more telecommunication networks (e.g., one or more Global System for Mobiles (GSM) networks, Code Division Multiple Access (CDMA) networks, Long-Term Evolution (LTE) networks, 5G communication networks, and/or other suitable data transmission protocol networks), etc. The cloud network102is configured to deliver media content (e.g., audio content, video content, photographs, social media content) to the media playback system100in response to a request transmitted from the media playback system100via the links103. In some embodiments, the cloud network102is further configured to receive data (e.g., voice input data) from the media playback system100and correspondingly transmit commands and/or media content to the media playback system100. The cloud network102comprises computing devices106(identified separately as a first computing device106a, a second computing device106b, and a third computing device106c). The computing devices106can comprise individual computers or servers, such as, for example, a media streaming service server storing audio and/or other media content, a voice service server, a social media server, a media playback system control server, etc. In some embodiments, one or more of the computing devices106comprise modules of a single computer or server. In certain embodiments, one or more of the computing devices106comprise one or more modules, computers, and/or servers. Moreover, while the cloud network102is described above in the context of a single cloud network, in some embodiments the cloud network102comprises a plurality of cloud networks comprising communicatively coupled computing devices. Furthermore, while the cloud network102is shown inFIG.1Bas having three of the computing devices106, in some embodiments, the cloud network102comprises fewer (or more than) three computing devices106. The media playback system100is configured to receive media content from the cloud network102via the links103. The received media content can comprise, for example, a Uniform Resource Identifier (URI) and/or a Uniform Resource Locator (URL). For instance, in some examples, the media playback system100can stream, download, or otherwise obtain data from a URI or a URL corresponding to the received media content. A network104communicatively couples the links103and at least a portion of the devices (e.g., one or more of the playback devices110, NMDs120, and/or control devices130) of the media playback system100. The network104can include, for example, a wireless network (e.g., a WiFi network, a Bluetooth network, a Z-Wave network, a ZigBee network, and/or another suitable wireless communication protocol network) and/or a wired network (e.g., a network comprising Ethernet, Universal Serial Bus (USB), and/or another suitable wired communication protocol). As those of ordinary skill in the art will appreciate, as used herein, "WiFi" can refer to several different communication protocols including, for example, Institute of Electrical and Electronics Engineers (IEEE) 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.11ad, 802.11af, 802.11ah, 802.11ai, 802.11aj, 802.11aq, 802.11ax, 802.11ay, 802.15, etc. transmitted at 2.4 Gigahertz (GHz), 5 GHz, and/or another suitable frequency. In some embodiments, the network104comprises a dedicated communication network that the media playback system100uses to transmit messages between individual devices and/or to transmit media content to and from media content sources (e.g., one or more of the computing devices106). In certain embodiments, the network104is configured to be accessible only to devices in the media playback system100, thereby reducing interference and competition with other household devices. In other embodiments, however, the network104comprises an existing household communication network (e.g., a household WiFi network). In some embodiments, the links103and the network104comprise one or more of the same networks. In some aspects, for example, the links103and the network104comprise a telecommunication network (e.g., an LTE network, a 5G network). Moreover, in some embodiments, the media playback system100is implemented without the network104, and devices comprising the media playback system100can communicate with each other, for example, via one or more direct or indirect connections, PANs, LANs, telecommunication networks, and/or other suitable communication links. In some embodiments, audio content sources may be regularly added or removed from the media playback system100. In some embodiments, for example, the media playback system100performs an indexing of media items when one or more media content sources are updated, added to, and/or removed from the media playback system100.
The media playback system100can scan identifiable media items in some or all folders and/or directories accessible to the playback devices110, and generate or update a media content database comprising metadata (e.g., title, artist, album, track length) and other associated information (e.g., URIs, URLs) for each identifiable media item found. In some embodiments, for example, the media content database is stored on one or more of the playback devices110, network microphone devices120, and/or control devices130.
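A minimal sketch of the folder scan and media content database described above might look as follows; the function and field names are hypothetical, and a real implementation would read title, artist, album, and track length from the files' tags rather than deriving placeholders from the path:

```python
import os

AUDIO_EXTENSIONS = {".mp3", ".flac", ".wav", ".aac", ".ogg"}

def index_media(root_dirs):
    """Walk the given folders and build a minimal media content database
    keyed by file URI (illustrative only)."""
    database = {}
    for root in root_dirs:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                stem, ext = os.path.splitext(name)
                if ext.lower() not in AUDIO_EXTENSIONS:
                    continue
                path = os.path.join(dirpath, name)
                uri = "file://" + os.path.abspath(path)
                database[uri] = {
                    "title": stem,                     # placeholder: a tag parser would supply this
                    "album": os.path.basename(dirpath),
                    "uri": uri,
                }
    return database

# Example: media_db = index_media(["/mnt/nas/music"])
```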
In the illustrated embodiment ofFIG.1B, the playback devices110land110mcomprise a group107a. The playback devices110land110mcan be positioned in different rooms in a household and be grouped together in the group107aon a temporary or permanent basis based on user input received at the control device130aand/or another control device130in the media playback system100. When arranged in the group107a, the playback devices110land110mcan be configured to play back the same or similar audio content in synchrony from one or more audio content sources. In certain embodiments, for example, the group107acomprises a bonded zone in which the playback devices110land110mcomprise left audio and right audio channels, respectively, of multi-channel audio content, thereby producing or enhancing a stereo effect of the audio content. In some embodiments, the group107aincludes additional playback devices110. In other embodiments, however, the media playback system100omits the group107aand/or other grouped arrangements of the playback devices110. Additional details regarding groups and other arrangements of playback devices are described in further detail below with respect toFIGS.1-I through1M. The media playback system100includes the NMDs120aand120b, each comprising one or more microphones configured to receive voice utterances from a user. In the illustrated embodiment ofFIG.1B, the NMD120ais a standalone device and the NMD120bis integrated into the playback device110n. The NMD120a, for example, is configured to receive voice input121from a user123. In some embodiments, the NMD120atransmits data associated with the received voice input121to a voice assistant service (VAS) configured to (i) process the received voice input data and (ii) transmit a corresponding command to the media playback system100. In some aspects, for example, the computing device106ccomprises one or more modules and/or servers of a VAS (e.g., a VAS operated by one or more of SONOS®, AMAZON®, GOOGLE®, APPLE®, MICROSOFT®). The computing device106ccan receive the voice input data from the NMD120avia the network104and the links103. In response to receiving the voice input data, the computing device106cprocesses the voice input data (i.e., "Play Hey Jude by The Beatles"), and determines that the processed voice input includes a command to play a song (e.g., "Hey Jude"). The computing device106caccordingly transmits commands to the media playback system100to play back "Hey Jude" by the Beatles from a suitable media service (e.g., via one or more of the computing devices106) on one or more of the playback devices110.

b. Suitable Playback Devices

FIG.1Cis a block diagram of the playback device110acomprising an input/output111. The input/output111can include an analog I/O111a(e.g., one or more wires, cables, and/or other suitable communication links configured to carry analog signals) and/or a digital I/O111b(e.g., one or more wires, cables, or other suitable communication links configured to carry digital signals). In some embodiments, the analog I/O111ais an audio line-in input connection comprising, for example, an auto-detecting 3.5 mm audio line-in connection. In some embodiments, the digital I/O111bcomprises a Sony/Philips Digital Interface Format (S/PDIF) communication interface and/or cable and/or a Toshiba Link (TOSLINK) cable. In some embodiments, the digital I/O111bcomprises a High-Definition Multimedia Interface (HDMI) interface and/or cable. In some embodiments, the digital I/O111bincludes one or more wireless communication links comprising, for example, a radio frequency (RF), infrared, WiFi, Bluetooth, or another suitable communication protocol. In certain embodiments, the analog I/O111aand the digital I/O111bcomprise interfaces (e.g., ports, plugs, jacks) configured to receive connectors of cables transmitting analog and digital signals, respectively, without necessarily including cables. The playback device110a, for example, can receive media content (e.g., audio content comprising music and/or other sounds) from a local audio source105via the input/output111(e.g., a cable, a wire, a PAN, a Bluetooth connection, an ad hoc wired or wireless communication network, and/or another suitable communication link). The local audio source105can comprise, for example, a mobile device (e.g., a smartphone, a tablet, a laptop computer) or another suitable audio component (e.g., a television, a desktop computer, an amplifier, a phonograph, a Blu-ray player, a memory storing digital media files). In some aspects, the local audio source105includes local music libraries on a smartphone, a computer, a network-attached storage (NAS), and/or another suitable device configured to store media files. In certain embodiments, one or more of the playback devices110, NMDs120, and/or control devices130comprise the local audio source105. In other embodiments, however, the media playback system omits the local audio source105altogether. In some embodiments, the playback device110adoes not include an input/output111and receives all audio content via the network104. The playback device110afurther comprises electronics112, a user interface113(e.g., one or more buttons, knobs, dials, touch-sensitive surfaces, displays, touchscreens), and one or more transducers114(referred to hereinafter as "the transducers114"). The electronics112is configured to receive audio from an audio source (e.g., the local audio source105) via the input/output111or from one or more of the computing devices106a-cvia the network104(FIG.1B), amplify the received audio, and output the amplified audio for playback via one or more of the transducers114. In some embodiments, the playback device110aoptionally includes one or more microphones115(e.g., a single microphone, a plurality of microphones, a microphone array) (hereinafter referred to as "the microphones115"). In certain embodiments, for example, the playback device110ahaving one or more of the optional microphones115can operate as an NMD configured to receive voice input from a user and correspondingly perform one or more operations based on the received voice input.
In the illustrated embodiment ofFIG.1C, the electronics112comprise one or more processors112a(referred to hereinafter as "the processors112a"), memory112b, software components112c, a network interface112d, one or more audio processing components112g(referred to hereinafter as "the audio components112g"), one or more audio amplifiers112h(referred to hereinafter as "the amplifiers112h"), and power112i(e.g., one or more power supplies, power cables, power receptacles, batteries, induction coils, Power-over-Ethernet (POE) interfaces, and/or other suitable sources of electric power). In some embodiments, the electronics112optionally include one or more other components112j(e.g., one or more sensors, video displays, touchscreens, battery charging bases). The processors112acan comprise clock-driven computing component(s) configured to process data, and the memory112bcan comprise a computer-readable medium (e.g., a tangible, non-transitory computer-readable medium, data storage loaded with one or more of the software components112c) configured to store instructions for performing various operations and/or functions. The processors112aare configured to execute the instructions stored on the memory112bto perform one or more of the operations. The operations can include, for example, causing the playback device110ato retrieve audio data from an audio source (e.g., one or more of the computing devices106a-c(FIG.1B)) and/or another one of the playback devices110. In some embodiments, the operations further include causing the playback device110ato send audio data to another one of the playback devices110and/or another device (e.g., one of the NMDs120). Certain embodiments include operations causing the playback device110ato pair with another of the one or more playback devices110to enable a multi-channel audio environment (e.g., a stereo pair, a bonded zone). The processors112acan be further configured to perform operations causing the playback device110ato synchronize playback of audio content with another of the one or more playback devices110. As those of ordinary skill in the art will appreciate, during synchronous playback of audio content on a plurality of playback devices, a listener will preferably be unable to perceive time-delay differences between playback of the audio content by the playback device110aand the one or more other playback devices110. Additional details regarding audio playback synchronization among playback devices can be found, for example, in U.S. Pat. No. 8,234,395, which was incorporated by reference above. In some embodiments, the memory112bis further configured to store data associated with the playback device110a, such as one or more zones and/or zone groups of which the playback device110ais a member, audio sources accessible to the playback device110a, and/or a playback queue that the playback device110a(and/or another of the one or more playback devices) can be associated with. The stored data can comprise one or more state variables that are periodically updated and used to describe a state of the playback device110a. The memory112bcan also include data associated with a state of one or more of the other devices (e.g., the playback devices110, NMDs120, control devices130) of the media playback system100.
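The state variables described here, and the periodic sharing described in the passage that follows, can be illustrated with a minimal sketch; the class, the message format, and the last-writer-wins merge based on a timestamp are illustrative assumptions, not the disclosed mechanism:

```python
import json
import time

class StateRegistry:
    """Illustrative registry of state variables for every device in the system.
    Each device broadcasts a timestamped snapshot at a fixed interval; receivers
    keep only the most recent snapshot per device (last-writer-wins)."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.snapshots = {device_id: {"updated_at": 0.0, "variables": {}}}

    def set_local(self, key, value):
        local = self.snapshots[self.device_id]
        local["variables"][key] = value    # e.g. ("zone_group", "Dining+Kitchen")
        local["updated_at"] = time.time()

    def broadcast_message(self):
        # Payload a device might send to its peers every 5, 10, or 60 seconds.
        return json.dumps({"device": self.device_id, **self.snapshots[self.device_id]})

    def receive(self, message):
        # Update the stored snapshot only if the incoming one is newer.
        data = json.loads(message)
        current = self.snapshots.get(data["device"])
        if current is None or data["updated_at"] > current["updated_at"]:
            self.snapshots[data["device"]] = {
                "updated_at": data["updated_at"],
                "variables": data["variables"],
            }
```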
In some aspects, for example, the state data is shared during predetermined intervals of time (e.g., every 5 seconds, every 10 seconds, every 60 seconds) among at least a portion of the devices of the media playback system100, so that one or more of the devices have the most recent data associated with the media playback system100. The network interface112dis configured to facilitate a transmission of data between the playback device110aand one or more other devices on a data network such as, for example, the links103and/or the network104(FIG.1B). The network interface112dis configured to transmit and receive data corresponding to media content (e.g., audio content, video content, text, photographs) and other signals (e.g., non-transitory signals) comprising digital packet data including an Internet Protocol (IP)-based source address and/or an IP-based destination address. The network interface112dcan parse the digital packet data such that the electronics112properly receives and processes the data destined for the playback device110a. In the illustrated embodiment ofFIG.1C, the network interface112dcomprises one or more wireless interfaces112e(referred to hereinafter as “the wireless interface112e”). The wireless interface112e(e.g., a suitable interface comprising one or more antennae) can be configured to wirelessly communicate with one or more other devices (e.g., one or more of the other playback devices110, NMDs120, and/or control devices130) that are communicatively coupled to the network104(FIG.1B) in accordance with a suitable wireless communication protocol (e.g., WiFi, Bluetooth, LTE). In some embodiments, the network interface112doptionally includes a wired interface112f(e.g., an interface or receptacle configured to receive a network cable such as an Ethernet, a USB-A, USB-C, and/or Thunderbolt cable) configured to communicate over a wired connection with other devices in accordance with a suitable wired communication protocol. In certain embodiments, the network interface112dincludes the wired interface112fand excludes the wireless interface112e. In some embodiments, the electronics112excludes the network interface112daltogether and transmits and receives media content and/or other data via another communication path (e.g., the input/output111). The audio components112gare configured to process and/or filter data comprising media content received by the electronics112(e.g., via the input/output111and/or the network interface112d) to produce output audio signals. In some embodiments, the audio processing components112gcomprise, for example, one or more digital-to-analog converters (DAC), audio preprocessing components, audio enhancement components, digital signal processors (DSPs), and/or other suitable audio processing components, modules, circuits, etc. In certain embodiments, one or more of the audio processing components112gcan comprise one or more subcomponents of the processors112a. In some embodiments, the electronics112omits the audio processing components112g. In some aspects, for example, the processors112aexecute instructions stored on the memory112bto perform audio processing operations to produce the output audio signals. The amplifiers112hare configured to receive and amplify the audio output signals produced by the audio processing components112gand/or the processors112a. The amplifiers112hcan comprise electronic devices and/or components configured to amplify audio signals to levels sufficient for driving one or more of the transducers114. 
In some embodiments, for example, the amplifiers112hinclude one or more switching or class-D power amplifiers. In other embodiments, however, the amplifiers112hinclude one or more other types of power amplifiers (e.g., linear gain power amplifiers, class-A amplifiers, class-B amplifiers, class-AB amplifiers, class-C amplifiers, class-E amplifiers, class-F amplifiers, class-G and/or class-H amplifiers, and/or another suitable type of power amplifier). In certain embodiments, the amplifiers112hcomprise a suitable combination of two or more of the foregoing types of power amplifiers. Moreover, in some embodiments, individual ones of the amplifiers112hcorrespond to individual ones of the transducers114. In other embodiments, however, the electronics112includes a single one of the amplifiers112hconfigured to output amplified audio signals to a plurality of the transducers114. In some other embodiments, the electronics112omits the amplifiers112h. The transducers114(e.g., one or more speakers and/or speaker drivers) receive the amplified audio signals from the amplifier112hand render or output the amplified audio signals as sound (e.g., audible sound waves having a frequency between about 20 Hertz (Hz) and 20 kilohertz (kHz)). In some embodiments, the transducers114can comprise a single transducer. In other embodiments, however, the transducers114comprise a plurality of audio transducers. In some embodiments, the transducers114comprise more than one type of transducer. For example, the transducers114can include one or more low frequency transducers (e.g., subwoofers, woofers), mid-range frequency transducers (e.g., mid-range transducers, mid-woofers), and one or more high frequency transducers (e.g., one or more tweeters). As used herein, "low frequency" can generally refer to audible frequencies below about 500 Hz, "mid-range frequency" can generally refer to audible frequencies between about 500 Hz and about 2 kHz, and "high frequency" can generally refer to audible frequencies above 2 kHz. In certain embodiments, however, one or more of the transducers114comprise transducers that do not adhere to the foregoing frequency ranges. For example, one of the transducers114may comprise a mid-woofer transducer configured to output sound at frequencies between about 200 Hz and about 5 kHz.
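For illustration only, the nominal frequency ranges described above can be expressed as a small lookup table; the band edges follow the approximate figures given in the text, the mid-woofer entry mirrors the 200 Hz to 5 kHz example, and the names are hypothetical:

```python
# Nominal bands: "low" below ~500 Hz, "mid-range" ~500 Hz-2 kHz, "high" above ~2 kHz.
TRANSDUCER_BANDS = {
    "woofer":     (20.0, 500.0),
    "mid_range":  (500.0, 2000.0),
    "tweeter":    (2000.0, 20000.0),
    "mid_woofer": (200.0, 5000.0),   # example of a driver outside the nominal ranges
}

def drivers_for_frequency(freq_hz):
    """Return which transducer types would be asked to reproduce a given frequency."""
    return [name for name, (lo, hi) in TRANSDUCER_BANDS.items() if lo <= freq_hz < hi]

# drivers_for_frequency(440.0) -> ['woofer', 'mid_woofer']
```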
By way of illustration, SONOS, Inc. presently offers (or has offered) for sale certain playback devices including, for example, a "SONOS ONE," "PLAY:1," "PLAY:3," "PLAY:5," "PLAYBAR," "PLAYBASE," "CONNECT:AMP," "CONNECT," and "SUB." Other suitable playback devices may additionally or alternatively be used to implement the playback devices of example embodiments disclosed herein. Additionally, one of ordinary skill in the art will appreciate that a playback device is not limited to the examples described herein or to SONOS product offerings. In some embodiments, for example, one or more playback devices110comprises wired or wireless headphones (e.g., over-the-ear headphones, on-ear headphones, in-ear earphones). The headphone may comprise a headband coupled to one or more earcups. For example, a first earcup may be coupled to a first end of the headband and a second earcup may be coupled to a second end of the headband that is opposite the first end. Each of the one or more earcups may house any portion of the electronic components in the playback device, such as one or more transducers. Further, the one or more earcups may include a user interface for controlling operation of the headphone such as for controlling audio playback, volume level, and other functions. The user interface may include any of a variety of control elements such as buttons, knobs, dials, touch-sensitive surfaces, and/or touchscreens. An ear cushion may be coupled to each of the one or more earcups. The ear cushions may provide a soft barrier between the head of a user and the one or more earcups to improve user comfort and/or provide acoustic isolation from the ambient environment (e.g., provide passive noise reduction (PNR)). Additionally (or alternatively), the headphone may employ active noise reduction (ANR) techniques to further reduce the user's perception of outside noise during playback. In some embodiments, one or more of the playback devices110comprise a docking station and/or an interface configured to interact with a docking station for personal mobile media playback devices. In certain embodiments, a playback device may be integral to another device or component such as a television, a lighting fixture, or some other device for indoor or outdoor use. In some embodiments, a playback device omits a user interface and/or one or more transducers. For example,FIG.1Dis a block diagram of a playback device110pcomprising the input/output111and electronics112without the user interface113or transducers114.

FIG.1Eis a block diagram of a bonded playback device110qcomprising the playback device110a(FIG.1C) sonically bonded with the playback device110i(e.g., a subwoofer) (FIG.1A). In the illustrated embodiment, the playback devices110aand110iare separate ones of the playback devices110housed in separate enclosures. In some embodiments, however, the bonded playback device110qcomprises a single enclosure housing both the playback devices110aand110i. The bonded playback device110qcan be configured to process and reproduce sound differently than an unbonded playback device (e.g., the playback device110aofFIG.1C) and/or paired or bonded playback devices (e.g., the playback devices110land110mofFIG.1B). In some embodiments, for example, the playback device110ais a full-range playback device configured to render low frequency, mid-range frequency, and high frequency audio content, and the playback device110iis a subwoofer configured to render low frequency audio content. In some aspects, the playback device110a, when bonded with the playback device110i, is configured to render only the mid-range and high frequency components of a particular audio content, while the playback device110irenders the low frequency component of the particular audio content. In some embodiments, the bonded playback device110qincludes additional playback devices and/or another bonded playback device. Additional playback device embodiments are described in further detail below with respect toFIGS.2A-3D.

c. Suitable Network Microphone Devices (NMDs)

FIG.1Fis a block diagram of the NMD120a(FIGS.1A and1B). The NMD120aincludes one or more voice processing components124(hereinafter "the voice components124") and several components described with respect to the playback device110a(FIG.1C) including the processors112a, the memory112b, and the microphones115. The NMD120aoptionally comprises other components also included in the playback device110a(FIG.1C), such as the user interface113and/or the transducers114.
In some embodiments, the NMD120ais configured as a media playback device (e.g., one or more of the playback devices110), and further includes, for example, one or more of the audio components112g(FIG.1C), the transducers114, and/or other playback device components. In certain embodiments, the NMD120acomprises an Internet of Things (IoT) device such as, for example, a thermostat, alarm panel, fire and/or smoke detector, etc. In some embodiments, the NMD120acomprises the microphones115, the voice processing124, and only a portion of the components of the electronics112described above with respect toFIG.1C. In some aspects, for example, the NMD120aincludes the processor112aand the memory112b(FIG.1C), while omitting one or more other components of the electronics112. In some embodiments, the NMD120aincludes additional components (e.g., one or more sensors, cameras, thermometers, barometers, hygrometers). In some embodiments, an NMD can be integrated into a playback device.

FIG.1Gis a block diagram of a playback device110rcomprising an NMD120d. The playback device110rcan comprise many or all of the components of the playback device110aand further include the microphones115and voice processing124(FIG.1F). The playback device110roptionally includes an integrated control device130c. The control device130ccan comprise, for example, a user interface (e.g., the user interface113ofFIG.1C) configured to receive user input (e.g., touch input, voice input) without a separate control device. In other embodiments, however, the playback device110rreceives commands from another control device (e.g., the control device130aofFIG.1B). Additional NMD embodiments are described in further detail below with respect toFIGS.3A-3F. Referring again toFIG.1F, the microphones115are configured to acquire, capture, and/or receive sound from an environment (e.g., the environment101ofFIG.1A) and/or a room in which the NMD120ais positioned. The received sound can include, for example, vocal utterances, audio played back by the NMD120aand/or another playback device, background voices, ambient sounds, etc. The microphones115convert the received sound into electrical signals to produce microphone data. The voice processing124receives and analyzes the microphone data to determine whether a voice input is present in the microphone data. The voice input can comprise, for example, an activation word followed by an utterance including a user request. As those of ordinary skill in the art will appreciate, an activation word is a word or other audio cue that signifies a user voice input. For instance, in querying the AMAZON® VAS, a user might speak the activation word "Alexa." Other examples include "Ok, Google" for invoking the GOOGLE® VAS and "Hey, Siri" for invoking the APPLE® VAS. After detecting the activation word, voice processing124monitors the microphone data for an accompanying user request in the voice input. The user request may include, for example, a command to control a third-party device, such as a thermostat (e.g., NEST® thermostat), an illumination device (e.g., a PHILIPS HUE® lighting device), or a media playback device (e.g., a Sonos® playback device). For example, a user might speak the activation word "Alexa" followed by the utterance "set the thermostat to 68 degrees" to set a temperature in a home (e.g., the environment101ofFIG.1A). The user might speak the same activation word followed by the utterance "turn on the living room" to turn on illumination devices in a living room area of the home.
The user may similarly speak an activation word followed by a request to play a particular song, an album, or a playlist of music on a playback device in the home. Additional description regarding receiving and processing voice input data can be found in further detail below with respect toFIGS.3A-3F.

d. Suitable Control Devices

FIG.1His a partially schematic diagram of the control device130a(FIGS.1A and1B). As used herein, the term "control device" can be used interchangeably with "controller" or "control system." Among other features, the control device130ais configured to receive user input related to the media playback system100and, in response, cause one or more devices in the media playback system100to perform an action(s) or operation(s) corresponding to the user input. In the illustrated embodiment, the control device130acomprises a smartphone (e.g., an iPhone™, an Android phone) on which media playback system controller application software is installed. In some embodiments, the control device130acomprises, for example, a tablet (e.g., an iPad™), a computer (e.g., a laptop computer, a desktop computer), and/or another suitable device (e.g., a television, an automobile audio head unit, an IoT device). In certain embodiments, the control device130acomprises a dedicated controller for the media playback system100. In other embodiments, as described above with respect toFIG.1G, the control device130ais integrated into another device in the media playback system100(e.g., one or more of the playback devices110, NMDs120, and/or other suitable devices configured to communicate over a network). The control device130aincludes electronics132, a user interface133, one or more speakers134, and one or more microphones135. The electronics132comprise one or more processors132a(referred to hereinafter as "the processors132a"), a memory132b, software components132c, and a network interface132d. The processor132acan be configured to perform functions relevant to facilitating user access, control, and configuration of the media playback system100. The memory132bcan comprise data storage that can be loaded with one or more of the software components executable by the processors132ato perform those functions. The software components132ccan comprise applications and/or other executable software configured to facilitate control of the media playback system100. The memory132bcan be configured to store, for example, the software components132c, media playback system controller application software, and/or other data associated with the media playback system100and the user. The network interface132dis configured to facilitate network communications between the control device130aand one or more other devices in the media playback system100, and/or one or more remote devices. In some embodiments, the network interface132dis configured to operate according to one or more suitable communication industry standards (e.g., infrared, radio, wired standards including IEEE 802.3, wireless standards including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G, LTE). The network interface132dcan be configured, for example, to transmit data to and/or receive data from the playback devices110, the NMDs120, other ones of the control devices130, one of the computing devices106ofFIG.1B, devices comprising one or more other media playback systems, etc. The transmitted and/or received data can include, for example, playback device control commands, state variables, playback zone and/or zone group configurations.
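A hypothetical sketch of one such control message follows; the field names, the JSON encoding, and the device identifiers are illustrative assumptions, not the disclosed message format:

```python
import json

def make_control_command(target_device, action, **params):
    """Build a serialized control message of the kind a controller's network
    interface might send to a playback device (names are illustrative)."""
    return json.dumps({"target": target_device, "action": action, "params": params})

# Volume control for one device:
cmd = make_control_command("playback-110f", "set_volume", level=35)

# Grouping change: add a device to an existing zone group:
regroup = make_control_command("playback-110f", "join_group", group="Dining+Kitchen")
```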
For instance, based on user input received at the user interface133, the network interface132dcan transmit a playback device control command (e.g., volume control, audio playback control, audio content selection) from the control device130ato one or more of the playback devices110. The network interface132dcan also transmit and/or receive configuration changes such as, for example, adding/removing one or more playback devices110to/from a zone, adding/removing one or more zones to/from a zone group, forming a bonded or consolidated player, separating one or more playback devices from a bonded or consolidated player, among others. Additional description of zones and groups can be found below with respect toFIGS.1-I through1M. The user interface133is configured to receive user input and can facilitate control of the media playback system100. The user interface133includes media content art133a(e.g., album art, lyrics, videos), a playback status indicator133b(e.g., an elapsed and/or remaining time indicator), media content information region133c, a playback control region133d, and a zone indicator133e. The media content information region133ccan include a display of relevant information (e.g., title, artist, album, genre, release year) about media content currently playing and/or media content in a queue or playlist. The playback control region133dcan include selectable (e.g., via touch input and/or via a cursor or another suitable selector) icons to cause one or more playback devices in a selected playback zone or zone group to perform playback actions such as, for example, play or pause, fast forward, rewind, skip to next, skip to previous, enter/exit shuffle mode, enter/exit repeat mode, enter/exit cross fade mode, etc. The playback control region133dmay also include selectable icons to modify equalization settings, playback volume, and/or other suitable playback actions. In the illustrated embodiment, the user interface133comprises a display presented on a touch screen interface of a smartphone (e.g., an iPhone™, an Android phone). In some embodiments, however, user interfaces of varying formats, styles, and interactive sequences may alternatively be implemented on one or more network devices to provide comparable control access to a media playback system. The one or more speakers134(e.g., one or more transducers) can be configured to output sound to the user of the control device130a. In some embodiments, the one or more speakers comprise individual transducers configured to correspondingly output low frequencies, mid-range frequencies, and/or high frequencies. In some aspects, for example, the control device130ais configured as a playback device (e.g., one of the playback devices110). Similarly, in some embodiments the control device130ais configured as an NMD (e.g., one of the NMDs120), receiving voice commands and other sounds via the one or more microphones135. The one or more microphones135can comprise, for example, one or more condenser microphones, electret condenser microphones, dynamic microphones, and/or other suitable types of microphones or transducers. In some embodiments, two or more of the microphones135are arranged to capture location information of an audio source (e.g., voice, audible sound) and/or configured to facilitate filtering of background noise. Moreover, in certain embodiments, the control device130ais configured to operate as a playback device and an NMD.
In other embodiments, however, the control device130aomits the one or more speakers134and/or the one or more microphones135. For instance, the control device130amay comprise a device (e.g., a thermostat, an IoT device, a network device) comprising a portion of the electronics132and the user interface133(e.g., a touch screen) without any speakers or microphones.

e. Suitable Playback Device Configurations

FIGS.1-I through1M show example configurations of playback devices in zones and zone groups. Referring first toFIG.1M, in one example, a single playback device may belong to a zone. For example, the playback device110gin the second bedroom101c(FIG.1A) may belong to Zone C. In some implementations described below, multiple playback devices may be "bonded" to form a "bonded pair" which together form a single zone. For example, the playback device110l(e.g., a left playback device) can be bonded to the playback device110m(e.g., a right playback device) to form Zone B. Bonded playback devices may have different playback responsibilities (e.g., channel responsibilities). In another implementation described below, multiple playback devices may be merged to form a single zone. For example, the playback device110h(e.g., a front playback device) may be merged with the playback device110i(e.g., a subwoofer), and the playback devices110jand110k(e.g., left and right surround speakers, respectively) to form a single Zone D. In another example, the playback devices110gand110hcan be merged to form a merged group or a zone group108b. The merged playback devices110gand110hmay not be specifically assigned different playback responsibilities. That is, the merged playback devices110gand110hmay, aside from playing audio content in synchrony, each play audio content as they would if they were not merged. Each zone in the media playback system100may be provided for control as a single user interface (UI) entity. For example, Zone A may be provided as a single entity named Master Bathroom. Zone B may be provided as a single entity named Master Bedroom. Zone C may be provided as a single entity named Second Bedroom. Playback devices that are bonded may have different playback responsibilities, such as responsibilities for certain audio channels. For example, as shown inFIG.1-I, the playback devices110land110mmay be bonded so as to produce or enhance a stereo effect of audio content. In this example, the playback device110lmay be configured to play a left channel audio component, while the playback device110mmay be configured to play a right channel audio component. In some implementations, such stereo bonding may be referred to as "pairing." Additionally, bonded playback devices may have additional and/or different respective speaker drivers. As shown inFIG.1J, the playback device110hnamed Front may be bonded with the playback device110inamed SUB. The Front device110hcan be configured to render a range of mid to high frequencies and the SUB device110ican be configured to render low frequencies. When unbonded, however, the Front device110hcan be configured to render a full range of frequencies. As another example,FIG.1Kshows the Front and SUB devices110hand110ifurther bonded with Left and Right playback devices110jand110k, respectively. In some implementations, the Right and Left devices110jand110kcan be configured to form surround or "satellite" channels of a home theater system. The bonded playback devices110h,110i,110j, and110kmay form a single Zone D (FIG.1M).
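The channel responsibilities described above for the bonded arrangements can be sketched as simple lookup tables; the channel names and the string device identifiers are illustrative only:

```python
# Illustrative channel maps for the bonded arrangements described above.
STEREO_PAIR = {
    "110l": ["left"],
    "110m": ["right"],
}

HOME_THEATER = {
    "110h": ["front_left", "front_right", "center"],  # Front: mid to high frequencies
    "110i": ["lfe"],                                   # SUB: low frequencies
    "110j": ["surround_left"],
    "110k": ["surround_right"],
}

def channels_for(device_id, bond=HOME_THEATER):
    """Look up which audio channels a bonded device is responsible for;
    an unbonded device falls back to rendering the full mix."""
    return bond.get(device_id, ["full_range"])
```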
Playback devices that are merged may not have assigned playback responsibilities, and may each render the full range of audio content the respective playback device is capable of. Nevertheless, merged devices may be represented as a single UI entity (i.e., a zone, as discussed above). For instance, the playback devices110aand110nin the master bathroom have the single UI entity of Zone A. In one embodiment, the playback devices110aand110nmay each output, in synchrony, the full range of audio content that each respective playback device110aand110nis capable of. In some embodiments, an NMD is bonded or merged with another device so as to form a zone. For example, the NMD120bmay be bonded with the playback device110e, which together form Zone F, named Living Room. In other embodiments, a stand-alone network microphone device may be in a zone by itself. In other embodiments, however, a stand-alone network microphone device may not be associated with a zone. Additional details regarding associating network microphone devices and playback devices as designated or default devices may be found, for example, in previously referenced U.S. patent application Ser. No. 15/438,749. Zones of individual, bonded, and/or merged devices may be grouped to form a zone group. For example, referring toFIG.1M, Zone A may be grouped with Zone B to form a zone group108athat includes the two zones. Similarly, Zone G may be grouped with Zone H to form the zone group108b. As another example, Zone A may be grouped with one or more other Zones C-I. The Zones A-I may be grouped and ungrouped in numerous ways. For example, three, four, five, or more (e.g., all) of the Zones A-I may be grouped. When grouped, the zones of individual and/or bonded playback devices may play back audio in synchrony with one another, as described in previously referenced U.S. Pat. No. 8,234,395. Playback devices may be dynamically grouped and ungrouped to form new or different groups that synchronously play back audio content. In various implementations, a zone group in an environment may be assigned a name, which may be the default name of a zone within the group or a combination of the names of the zones within the zone group. For example, Zone Group108bcan be assigned a name such as "Dining+Kitchen", as shown inFIG.1M. In some embodiments, a zone group may be given a unique name selected by a user. Certain data may be stored in a memory of a playback device (e.g., the memory112bofFIG.1C) as one or more state variables that are periodically updated and used to describe the state of a playback zone, the playback device(s), and/or a zone group associated therewith. The memory may also include the data associated with the state of the other devices of the media system, and shared from time to time among the devices so that one or more of the devices have the most recent data associated with the system. In some embodiments, the memory may store instances of various variable types associated with the states. Variable instances may be stored with identifiers (e.g., tags) corresponding to type. For example, certain identifiers may be a first type "a1" to identify playback device(s) of a zone, a second type "b1" to identify playback device(s) that may be bonded in the zone, and a third type "c1" to identify a zone group to which the zone may belong. As a related example, identifiers associated with the second bedroom101cmay indicate that the playback device is the only playback device of the Zone C and not in a zone group. Identifiers associated with the Den may indicate that the Den is not grouped with other zones but includes bonded playback devices110h-110k. Identifiers associated with the Dining Room may indicate that the Dining Room is part of the Dining+Kitchen zone group108band that devices110band110dare grouped (FIG.1L). Identifiers associated with the Kitchen may indicate the same or similar information by virtue of the Kitchen being part of the Dining+Kitchen zone group108b. Other example zone variables and identifiers are described below.
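For illustration, the typed identifiers described above ("a1", "b1", "c1") might be encoded as follows for the Second Bedroom, Den, and Dining Room examples; the dictionary encoding is an assumption, not the disclosed storage format:

```python
# "a1" -> playback device(s) of a zone, "b1" -> bonded device(s) in the zone,
# "c1" -> the zone group the zone belongs to (None if ungrouped).
second_bedroom_state = {
    "a1": ["110g"],     # only playback device in Zone C
    "b1": [],           # no bonded devices
    "c1": None,         # not part of any zone group
}

den_state = {
    "a1": ["110h", "110i", "110j", "110k"],
    "b1": ["110h", "110i", "110j", "110k"],  # all four bonded as Zone D
    "c1": None,                              # Den is not grouped with other zones
}

dining_room_state = {
    "a1": ["110b"],
    "b1": [],
    "c1": "Dining+Kitchen",   # member of zone group 108b
}
```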
In yet another example, the media playback system100may store variables or identifiers representing other associations of zones and zone groups, such as identifiers associated with Areas, as shown inFIG.1M. An area may involve a cluster of zone groups and/or zones not within a zone group. For instance,FIG.1Mshows an Upper Area109aincluding Zones A-D, and a Lower Area109bincluding Zones E-I. In one aspect, an Area may be used to invoke a cluster of zone groups and/or zones that share one or more zones and/or zone groups of another cluster. In another aspect, this differs from a zone group, which does not share a zone with another zone group. Further examples of techniques for implementing Areas may be found, for example, in U.S. application Ser. No. 15/682,506 filed Aug. 21, 2017 and titled "Room Association Based on Name," and U.S. Pat. No. 8,483,853 filed Sep. 11, 2007, and titled "Controlling and manipulating groupings in a multi-zone media system." Each of these applications is incorporated herein by reference in its entirety. In some embodiments, the media playback system100may not implement Areas, in which case the system may not store variables associated with Areas.

III. Example Systems and Devices

FIG.2Ais a front isometric view of a playback device210configured in accordance with aspects of the disclosed technology.FIG.2Bis a front isometric view of the playback device210without a grille216e.FIG.2Cis an exploded view of the playback device210. Referring toFIGS.2A-2Ctogether, the playback device210comprises a housing216that includes an upper portion216a, a right or first side portion216b, a lower portion216c, a left or second side portion216d, the grille216e, and a rear portion216f. A plurality of fasteners216g(e.g., one or more screws, rivets, clips) attaches a frame216hto the housing216. A cavity216j(FIG.2C) in the housing216is configured to receive the frame216hand electronics212. The frame216his configured to carry a plurality of transducers214(identified individually inFIG.2Bas transducers214a-f). The electronics212(e.g., the electronics112ofFIG.1C) is configured to receive audio content from an audio source and send electrical signals corresponding to the audio content to the transducers214for playback. The transducers214are configured to receive the electrical signals from the electronics212, and further configured to convert the received electrical signals into audible sound during playback. For instance, the transducers214a-c(e.g., tweeters) can be configured to output high frequency sound (e.g., sound waves having a frequency greater than about 2 kHz). The transducers214d-f(e.g., mid-woofers, woofers, midrange speakers) can be configured to output sound at frequencies lower than the transducers214a-c(e.g., sound waves having a frequency lower than about 2 kHz). In some embodiments, the playback device210includes a number of transducers different than those illustrated inFIGS.2A-2C.
For example, as described in further detail below with respect toFIGS.3A-3C, the playback device210can include fewer than six transducers (e.g., one, two, three). In other embodiments, however, the playback device210includes more than six transducers (e.g., nine, ten). Moreover, in some embodiments, all or a portion of the transducers214are configured to operate as a phased array to desirably adjust (e.g., narrow or widen) a radiation pattern of the transducers214, thereby altering a user's perception of the sound emitted from the playback device210. In the illustrated embodiment ofFIGS.2A-2C, a filter216iis axially aligned with the transducer214b. The filter216ican be configured to desirably attenuate a predetermined range of frequencies that the transducer214boutputs to improve sound quality and a perceived sound stage output collectively by the transducers214. In some embodiments, however, the playback device210omits the filter216i. In other embodiments, the playback device210includes one or more additional filters aligned with the transducer214band/or at least another of the transducers214.

FIGS.3A and3Bare front and right isometric side views, respectively, of an NMD320configured in accordance with embodiments of the disclosed technology.FIG.3Cis an exploded view of the NMD320.FIG.3Dis an enlarged view of a portion ofFIG.3Bincluding a user interface313of the NMD320. Referring first toFIGS.3A-3C, the NMD320includes a housing316comprising an upper portion316a, a lower portion316b, and an intermediate portion316c(e.g., a grille). A plurality of ports, holes or apertures316din the upper portion316aallow sound to pass through to one or more microphones315(FIG.3C) positioned within the housing316. The one or more microphones315are configured to receive sound via the apertures316dand produce electrical signals based on the received sound. In the illustrated embodiment, a frame316e(FIG.3C) of the housing316surrounds cavities316fand316gconfigured to house, respectively, a first transducer314a(e.g., a tweeter) and a second transducer314b(e.g., a mid-woofer, a midrange speaker, a woofer). In other embodiments, however, the NMD320includes a single transducer, or more than two (e.g., five, six) transducers. In certain embodiments, the NMD320omits the transducers314aand314baltogether. Electronics312(FIG.3C) includes components configured to drive the transducers314aand314b, and further configured to analyze audio data corresponding to the electrical signals produced by the one or more microphones315. In some embodiments, for example, the electronics312comprises many or all of the components of the electronics112described above with respect toFIG.1C. In certain embodiments, the electronics312includes components described above with respect toFIG.1Fsuch as, for example, the one or more processors112a, the memory112b, the software components112c, the network interface112d, etc. In some embodiments, the electronics312includes additional suitable components (e.g., proximity or other sensors). Proximity sensors may comprise, for example, one or more sensors configured to detect movement such as accelerometers, gyroscopes, and/or inertial measurement units (IMUs). Referring toFIG.3D, the user interface313includes a plurality of control surfaces (e.g., buttons, knobs, capacitive surfaces) including a first control surface313a(e.g., a previous control), a second control surface313b(e.g., a next control), and a third control surface313c(e.g., a play and/or pause control).
A fourth control surface313dis configured to receive touch input corresponding to activation and deactivation of the one or more microphones315. A first indicator313e(e.g., one or more light emitting diodes (LEDs) or another suitable illuminator) can be configured to illuminate only when the one or more microphones315are activated. A second indicator313f(e.g., one or more LEDs) can be configured to remain solid during normal operation and to blink or otherwise change from solid to indicate a detection of voice activity. In some embodiments, the user interface313includes additional or fewer control surfaces and illuminators. In one embodiment, for example, the user interface313includes the first indicator313e, omitting the second indicator313f. Moreover, in certain embodiments, the NMD320comprises a playback device and a control device, and the user interface313comprises the user interface of the control device. Referring toFIGS.3A-3Dtogether, the NMD320is configured to receive voice commands from one or more adjacent users via the one or more microphones315. As described above with respect toFIG.1B, the one or more microphones315can acquire, capture, or record sound in a vicinity (e.g., a region within 10 m or less of the NMD320) and transmit electrical signals corresponding to the recorded sound to the electronics312. The electronics312can process the electrical signals and can analyze the resulting audio data to determine a presence of one or more voice commands (e.g., one or more activation words). In some embodiments, for example, after detection of one or more suitable voice commands, the NMD320is configured to transmit a portion of the recorded audio data to another device and/or a remote server (e.g., one or more of the computing devices106ofFIG.1B) for further analysis. The remote server can analyze the audio data, determine an appropriate action based on the voice command, and transmit a message to the NMD320to perform the appropriate action. For instance, a user may speak "Sonos, play Michael Jackson." The NMD320can, via the one or more microphones315, record the user's voice utterance, determine the presence of a voice command, and transmit the audio data having the voice command to a remote server (e.g., one or more of the remote computing devices106ofFIG.1B, one or more servers of a VAS and/or another suitable service). The remote server can analyze the audio data and determine an action corresponding to the command. The remote server can then transmit a command to the NMD320to perform the determined action (e.g., play back audio content related to Michael Jackson). The NMD320can receive the command and play back the audio content related to Michael Jackson from a media content source. As described above with respect toFIG.1B, suitable content sources can include a device or storage communicatively coupled to the NMD320via a LAN (e.g., the network104ofFIG.1B), a remote server (e.g., one or more of the remote computing devices106ofFIG.1B), etc. In certain embodiments, however, the NMD320determines and/or performs one or more actions corresponding to the one or more voice commands without intervention or involvement of an external device, computer, or server.
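A minimal sketch of the round trip described above (local activation word detection, forwarding the recording to a remote server, performing the returned action) might look as follows; all four parameters are hypothetical callables standing in for real components, and the response format is an assumption:

```python
def handle_recorded_audio(audio_bytes, detect_activation, send_to_remote, perform_action):
    """Illustrative voice-command round trip (sketch only)."""
    if not detect_activation(audio_bytes):
        return None                         # no activation word: discard the recording
    response = send_to_remote(audio_bytes)  # e.g. forward to a remote computing device
    # Assumed reply format, e.g. {"action": "play", "query": "Michael Jackson"}:
    return perform_action(response["action"], response.get("query"))

# Example wiring with trivial stand-ins:
result = handle_recorded_audio(
    b"sonos play michael jackson",
    detect_activation=lambda audio: b"sonos" in audio,
    send_to_remote=lambda audio: {"action": "play", "query": "Michael Jackson"},
    perform_action=lambda action, query: f"{action}: {query}",
)
# result == "play: Michael Jackson"
```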
FIG.3Eis a functional block diagram showing additional features of the NMD320in accordance with aspects of the disclosure. The NMD320includes components configured to facilitate voice command capture including voice activity detector component(s)312k, beam former components312l, acoustic echo cancellation (AEC) and/or self-sound suppression components312m, activation word detector components312n, and voice/speech conversion components312o(e.g., voice-to-text and text-to-voice). In the illustrated embodiment ofFIG.3E, the foregoing components312k-312oare shown as separate components. In some embodiments, however, one or more of the components312k-312oare subcomponents of the processors112a. The beamforming and self-sound suppression components312land312mare configured to detect an audio signal and determine aspects of voice input represented in the detected audio signal, such as the direction, amplitude, frequency spectrum, etc. The voice activity detector components312kare operably coupled with the beamforming and AEC components312land312mand are configured to determine a direction and/or directions from which voice activity is likely to have occurred in the detected audio signal. Potential speech directions can be identified by monitoring metrics which distinguish speech from other sounds. Such metrics can include, for example, energy within the speech band relative to background noise and entropy within the speech band, which is a measure of spectral structure. As those of ordinary skill in the art will appreciate, speech typically has a lower entropy than most common background noise. The activation word detector components312nare configured to monitor and analyze received audio to determine if any activation words (e.g., wake words) are present in the received audio. The activation word detector components312nmay analyze the received audio using an activation word detection algorithm. If the activation word detector312ndetects an activation word, the NMD320may process voice input contained in the received audio. Example activation word detection algorithms accept audio as input and provide an indication of whether an activation word is present in the audio. Many first- and third-party activation word detection algorithms are known and commercially available. For instance, operators of a voice service may make their algorithm available for use in third-party devices. Alternatively, an algorithm may be trained to detect certain activation words. In some embodiments, the activation word detector312nruns multiple activation word detection algorithms on the received audio simultaneously (or substantially simultaneously). As noted above, different voice services (e.g., AMAZON's ALEXA®, APPLE's SIRI®, or MICROSOFT's CORTANA®) can each use a different activation word for invoking their respective voice service. To support multiple services, the activation word detector312nmay run the received audio through the activation word detection algorithm for each supported voice service in parallel.
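One way to run a detection algorithm per supported voice service in parallel, as described above, is sketched below; the detectors shown are trivial placeholders, not real activation word detection algorithms:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_any_activation_word(audio_chunk, detectors):
    """Run one activation-word detector per supported voice service in
    parallel and report which services (if any) spotted their wake word.
    `detectors` maps a service name to a callable returning True/False."""
    with ThreadPoolExecutor(max_workers=len(detectors)) as pool:
        futures = {name: pool.submit(fn, audio_chunk) for name, fn in detectors.items()}
        return [name for name, fut in futures.items() if fut.result()]

# Example wiring with placeholder detectors:
detectors = {
    "ALEXA":  lambda audio: b"alexa" in audio,      # placeholder logic only
    "GOOGLE": lambda audio: b"ok google" in audio,
    "SIRI":   lambda audio: b"hey siri" in audio,
}
# detect_any_activation_word(b"...alexa...", detectors) -> ["ALEXA"]
```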
Tuning to specific voice profiles may require less computationally intensive algorithms than traditional voice activity services, which typically sample from a broad base of users and diverse requests that are not targeted to media playback systems. FIG.3Fis a schematic diagram of an example voice input328captured by the NMD320in accordance with aspects of the disclosure. The voice input328can include an activation word portion328aand a voice utterance portion328b. In some embodiments, the activation word portion328acan include a known activation word, such as “Alexa,” which is associated with AMAZON's ALEXA®. In other embodiments, however, the voice input328may not include an activation word. In some embodiments, a network microphone device may output an audible and/or visible response upon detection of the activation word portion328a. Additionally or alternatively, an NMD may output an audible and/or visible response after processing a voice input and/or a series of voice inputs. The voice utterance portion328bmay include, for example, one or more spoken commands (identified individually as a first command328cand a second command328e) and one or more spoken keywords (identified individually as a first keyword328dand a second keyword328f). In one example, the first command328ccan be a command to play music, such as a specific song, album, playlist, etc. In this example, the keywords may be one or more words identifying one or more zones in which the music is to be played, such as the Living Room and the Dining Room shown inFIG.1A. In some examples, the voice utterance portion328bcan include other information, such as detected pauses (e.g., periods of non-speech) between words spoken by a user, as shown inFIG.3F. The pauses may demarcate the locations of separate commands, keywords, or other information spoken by the user within the voice utterance portion328b. In some embodiments, the media playback system100is configured to temporarily reduce the volume of audio content that it is playing while detecting the activation word portion328a. The media playback system100may restore the volume after processing the voice input328, as shown inFIG.3F. Such a process can be referred to as ducking, examples of which are disclosed in U.S. patent application Ser. No. 15/438,749, incorporated by reference herein in its entirety. IV. Example Allocation of Audio Based on Retrieved Configuration Information In the discussion below, reference is made herein to “portable devices” including “portable playback devices” and “portable network microphone devices.” Such “portable devices” may be devices that comprise an internal power source (e.g., one or more batteries). On the other hand, “stationary devices,” such as “stationary playback devices” and “stationary network microphone devices,” may be devices that operate using an external power source, although such devices may in fact be moved around a home or other environment. Further, a “playback device associated with a room” may be a playback device which is intended to remain in substantially the same position after configuration. The reference to “room” in this context is not limited to rooms in a conventional sense: an associated playback location may instead be, for example, a patio or a deck, or a combination of two or more physical rooms. In certain examples, audio is allocated automatically in response to a trigger based on retrieved configuration information.
The configuration information in these examples comprises one or more state variables which are either stored locally on a playback device or a control device, retrieved from another playback device or control device, or retrieved from a remote server system such as a server system accessible via the internet. By using the configuration information, the current playback system configuration can be determined and audio portions can be allocated amongst speakers in a more intuitive and easier to use way, with minimal or no user interaction required. Such automatic allocation is beneficial when playback devices are moved, because the configuration of the playback system as a whole may need to be updated following the movement. This is particularly the case for playback devices which are designed to be portable and moved to different locations. Referring now toFIG.4, a method is disclosed in which audio allocations are updated automatically in response to a trigger. The method can be implemented by a playback device and/or a control device as discussed above. First, at block402, a trigger is detected. The trigger can take several forms. For example, it may be a received input, such as a voice input, an input received via a user interface, a touch screen display, or a button press using a button on a playback device or control device. Where the trigger is a button press, a long button press (in which a button is held pressed for a predetermined time such as 1, 2 or 3 seconds) may be distinguished from shorter button presses. A long button press may be determined as a trigger while shorter button presses carry out the usual function of the button, such as play/pause. The trigger may also be an event indicating that the playback device is (or is not) in proximity to another device (e.g., another playback device, a control device, etc.). Examples of such triggers indicating proximity (or lack of proximity) include the establishment or disestablishment of a wireless connection, such as establishing a Near Field Communication (NFC) or Bluetooth® connection with another playback device and/or control device, or detecting a Bluetooth® beacon (e.g., a Bluetooth® low energy (BLE) beacon) emitted by another playback device and/or control device. As described herein, the trigger may take the form of an event indicating proximity (or loss of proximity) to another device (e.g., another playback device, a control device, etc.). Any of a variety of components in the playback device may be employed to detect such an event including, for example, network interface component(s) (e.g., to detect the establishment/disestablishment of a wireless connection or a wireless beacon emitted by another device), sensor(s) configured to detect movement (e.g., accelerometers, gyroscopes, IMUs, etc.), and/or microphones (e.g., to detect acoustic waves emitted by another device). In some embodiments, the trigger event indicating proximity (or loss of proximity) may be based on the output of multiple different sensors and/or a sequence of outputs from the multiple different sensors. For example, the trigger event indicating that the playback device is proximate another device may first require that the playback device detect that it has been moved (e.g., based on the output of a sensor configured to detect movement). Then, after the movement is detected, the trigger event may require that the playback device initiate (and pass) a proximity test with the other device.
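Both directions of this two-stage trigger (movement followed by a proximity test) can be sketched as follows. This is a minimal illustration under assumed interfaces: the movement flag, the RSSI reader, and the threshold value are hypothetical, not part of the disclosure.

```python
RSSI_PROXIMITY_THRESHOLD_DBM = -60  # illustrative threshold, not from the disclosure

def evaluate_proximity_trigger(moved, read_rssi_dbm, other_device):
    """Two-stage trigger: require detected movement, then run a proximity test.

    `moved` is the output of a movement sensor (accelerometer, gyroscope,
    IMU); `read_rssi_dbm` is a hypothetical hook returning the received
    signal strength (e.g., of a BLE beacon) from `other_device`, or None.
    Returns "proximate", "not_proximate", or None when no trigger fires.
    """
    if not moved:
        return None                        # stage 1: no movement, no trigger
    rssi = read_rssi_dbm(other_device)     # stage 2: proximity test
    if rssi is not None and rssi >= RSSI_PROXIMITY_THRESHOLD_DBM:
        return "proximate"                 # test passed
    return "not_proximate"                 # test failed
```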
Conversely, the trigger event indicating that the playback device is not proximate another device may first require that the playback device detect that it has been moved (e.g., based on the output of a sensor configured to detect movement). Then, after the movement is detected, the trigger event may require that the playback device initiate (and fail) a proximity test with the other device. The proximity test may take a variety of forms. For example, the proximity test may involve transmission of a wireless signal between the playback device and the other playback device. In this example, the proximity test with the other device may be passed when the wireless signal is detected by one of the playback device and the other playback device in accordance with one or more criteria indicative of proximity (e.g., the detected wireless signal has a signal strength above a threshold). Otherwise, the proximity test with the other device may fail. In another example, the proximity test may involve transmission of an acoustic signal (e.g., an ultrasonic signal) between the playback device and the other playback device. In this example, the proximity test with the other device may be passed when the acoustic signal is detected by one of the playback device and the other playback device in accordance with one or more criteria indicative of proximity (e.g., the detected acoustic signal has a signal strength above a threshold). Otherwise, the proximity test with the other device may fail. Additional techniques for proximity detection using acoustic waves are described in U.S. Patent Publication No. 2019/0253154, published on Aug. 15, 2019, titled “Method and System for Acoustic Communication of Data” and U.S. Patent Publication No. 2019/0237091, published on Aug. 1, 2019, titled “A Method and System for Acoustic Communication of Data,” each of which is incorporated herein by reference in its entirety. Once the trigger has been detected, the method proceeds to block404, where configuration information is retrieved. By retrieving configuration information, the status of other devices in the playback system can be determined to influence how the audio portions are allocated to playback devices. The configuration information may be saved as one or more state variables which are shared amongst playback devices and control devices forming the playback system. The state variables may be stored as a single data structure or stored as multiple data structures. Various information may be obtained from the configuration information, including one or more of:
The identities of devices in the system, both at a network level, such as a MAC address or IP address, and at a higher level, such as a name assigned to the device by a user or automatically assigned when the system was first set up. For example, the configuration information may store that devices110land110mboth have the name “Master Bedroom”;
Current grouping status of devices and the allocations of audio assigned to the devices. For example, the configuration information may indicate that devices110land110mare bonded as left and right speakers of a stereo pair and further grouped with110afor synchronous playback;
Current playback status, including any media being reproduced and a position within the media;
Group coordinators for any groups of devices. A group coordinator is a device which is responsible for coordinating playback within the group of devices (e.g., group members).
It may also be responsible for sourcing and distributing media data to the devices in the group;
Playback queues associated with playback devices and/or groups; and
An orientation of one or more of the playback devices (e.g., for playback devices that support playback in multiple orientations, such as a horizontal orientation where the playback device lays horizontally on a surface and a vertical orientation where the playback device sits vertically on a surface).
Next, at block406, user preferences are retrieved (e.g., retrieved from memory or obtained from a user via an interface). These preferences may be stored as part of the configuration information or separately. The preference information can be shared amongst devices in a similar way to the configuration information and stored locally or remotely. Preference data may indicate preferences to be applied to the playback system as a whole, or preferences which are specific to a particular user. Where preferences are specific to a particular user, they may be stored on a control device associated with the user or only retrievable with credentials associated with the user. Example preferences include preferences for primary devices to provide voice assistant feedback and preferences for automatic audio allocation (e.g., when playback devices are grouped or ungrouped automatically, or how playback devices are grouped, such as whether the playback devices play back the same audio channels or a subset (e.g., form a stereo pair), etc.). In instances where a user has not specified a particular preference, a default preference may be employed. As mentioned above, the retrieval of the user preference may comprise requesting input from the user (e.g., via an interface on the playback device or a control device in communication with the playback device). In some embodiments, the playback device may cause a graphical user interface (GUI) on a control device to be modified to present one or more playback options to a user. Examples of playback options that may be presented to the user include: (1) an option to stereo pair two or more playback devices; (2) an option to unpair two stereo paired playback devices; (3) an option to group two or more playback devices; and (4) an option to ungroup two or more playback devices. In turn, the selection (and/or absence of selection) of a given playback option by the user (as detected by the control device) may be transmitted from the control device to the playback device (e.g., as user preference information). The GUI of the control device may be updated in any of a variety of ways to obtain input from a user regarding the preferences of the user.FIGS.7A and7Bshow an example of such a GUI that may be employed to obtain input from a user regarding a preference of how audio channels should be distributed between two playback devices (e.g., portable playback devices) that have been brought in proximity with each other (e.g., proximity identified as part of the trigger in block402). FIG.7Ashows an example screen700A of a GUI presented by the control device when two playback devices are grouped together for synchronous playback and reproduce the same audio channels (e.g., both players reproduce the left and right audio channels).
In particular, the screen700A includes a region702A that shows: (1) the players that are grouped together for synchronous playback (e.g., Kitchen and Portable); (2) a battery state of those players in the group that are battery powered portable players (e.g., Kitchen and Portable); and (3) metadata regarding the media currently being played back (e.g., album art, artist, audio track name, etc.). After a playback device in the synchrony group (e.g., Kitchen and/or Portable in screen700A) detects that the playback device has come in proximity to the other playback device in the group (e.g., proximity identified as part of the trigger in block402), the playback device may (e.g., as part of retrieving preference information in block406) cause the GUI shown on the control device to be updated from screen700A inFIG.7Ato screen700B inFIG.7B. For example, the transition from screen700A to700B may be caused by transmission (e.g., by the playback device) of at least one message to the control device indicating that two playback devices playing back audio in synchrony have come into proximity of each other. Relative to the screen700A, the screen700B updates the region702A to702B by adding at least one playback option shown as a selectable slider704. Upon activation of the selectable slider704(e.g., by a user), the playback devices in the group (e.g., Kitchen and Portable) may form a stereo pair (e.g., a left channel is assigned to Kitchen and a right channel is assigned to Portable, or vice versa). For example, the control device may detect activation of the slider704and transmit at least one message to the playback devices indicating that a request from the user to stereo pair the two playback devices has been detected. In this example, the playback device may use such preference information from the user in block408to update the audio allocation between the two playback devices (e.g., so as to form a stereo pair by assigning a left channel to one playback device in the group and a right channel to another playback device in the group). Once the configuration information and preference information have been retrieved, the audio allocation is updated at block408. Updating the audio allocation may comprise one or more of the following: (1) updating a distribution of audio portions (e.g., audio channels, frequency ranges, etc.) between playback devices; (2) updating one or more equalization settings of one or more playback devices; and/or (3) updating which playback device(s) are designated as a group coordinator (e.g., changing the mode of operation of one or more of the playback devices from a group coordinator mode to a group member mode or from a group member mode to a group coordinator mode). The audio allocation can be updated in various ways and can be further based on the retrieved preference information and the nature of the trigger itself. As a result, the audio allocation may be updated based on any combination of the following: (1) the trigger, (2) the configuration information, and (3) the preference information. It should be appreciated that, in some embodiments, preference information may be omitted altogether. In such embodiments, the method400may omit block406of retrieving user preferences. As mentioned above, updating the audio allocation may comprise updating one or more equalization settings of one or more of the playback devices.
Examples of equalization settings that may be updated include one or more of: (1) bass level; (2) mid-range level; (3) treble level; (4) left-right balance; and (5) front-rear balance. The equalization settings of one or more of the playback devices may be updated in any of a variety of ways. In some embodiments, a playback device may update equalization settings by playing audio and detecting the sound (e.g., reflected from objects in the environment during playback of the audio) using one or more microphones in the playback device (or another device such as a control device). The playback device (and/or a control device) may analyze the sound to gain insights regarding the acoustics of the environment and modify the equalization settings to suit the acoustics of the environment as described in U.S. Pat. No. 9,219,460, issued Dec. 22, 2015, titled “Audio Settings based on Environment,” which is hereby incorporated herein by reference in its entirety. It should be appreciated that, in some instances, the playback device may modify the equalization settings as part of updating the audio allocation after detection of only certain trigger events. For example, the playback device may perform a routine to update one or more equalization settings after detection of a first set of trigger events (e.g., detection of movement and/or proximity to another device) and not after detection of a second, different set of trigger events (e.g., detected voice input). Thus, the playback device may, in these certain instances, only modify the equalization settings after detection of a trigger indicating that the environment in which the playback device is operating has changed (e.g., the playback device has been moved within a room). Otherwise, the playback device may continue to use the same equalization settings. In other instances, the playback device may update the one or more equalization settings after detection of any trigger event. Examples of updating the audio allocation based on various different combinations of configuration information, preference information and trigger will now be set out. Other examples are also possible, and a playback system may implement some or all of these automatic allocations. Examples of Grouping Devices and Allocating Audio Based on Configuration Information Example 1: Trigger: A trigger is detected which is associated with a portable playback device. The trigger can be an input in a control application, a wireless communication connection being established between the portable playback device and a second playback device associated with a room, or a long button press on the portable playback device. Configuration information: The configuration information shows that the portable playback device is not currently reproducing media. Audio allocation: The portable playback device is updated to be grouped with the second playback device. The portable playback device is allocated all portions of audio. Where two or more devices in the playback system are reproducing media independently, the trigger may indicate which one to select for grouping; for example, the second playback device may be the one with which a wireless communication was established, or the one which was indicated in the input. Coordinator: The second playback device associated with the room can be designated as group coordinator to reduce the possibility of playback being interrupted should the portable playback device be moved or run out of power.
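A minimal sketch of the Example 1 flow follows. The dictionary-based state variables and key names are assumptions made for illustration; they are not the data structures used by any actual playback system.

```python
def handle_group_trigger_example_1(portable, room_device, config):
    """Example 1: an idle portable device is grouped with a room device.

    `config` stands in for the retrieved configuration information,
    mapping a device name to its state variables (hypothetical keys).
    """
    if config[portable]["now_playing"] is not None:
        return None      # Example 1 assumes the portable device is idle
    group = {
        "members": [portable, room_device],
        # The room device coordinates playback so that moving the portable
        # device, or its battery running out, does not interrupt playback.
        "coordinator": room_device,
    }
    config[portable]["allocation"] = "all"   # all channels and frequencies
    config[portable]["group"] = group
    config[room_device]["group"] = group
    return group
```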
Example 2: Trigger: A trigger is detected which is associated with a portable playback device. The trigger can be an input in a control application, a wireless communication connection being established between the portable playback device and a second playback device associated with a room, or a long button press on the portable playback device. Configuration information: The configuration information shows the portable playback device is currently reproducing first media and the second playback device is currently reproducing second media different from the first. Audio allocation: As a result, the portable playback device is updated to be grouped with the second playback device. The portable playback device is allocated all portions of audio of the second media, so the portable playback device begins reproducing the second media. Where two or more devices in the playback system are reproducing media independently, such as second and third media respectively, the trigger may indicate which one to select for grouping; for example, the second playback device may be the one with which a wireless communication was established, or the one which was indicated in the input. Coordinator: The second playback device associated with the room can be designated as group coordinator to reduce the possibility of playback being interrupted should the portable playback device be moved or run out of power. Example 3: Trigger: A trigger is detected which is associated with a first portable playback device. The trigger can be an input in a control application, a wireless communication connection being established between the first portable playback device and a second portable playback device, a long button press on the first portable playback device, or establishing NFC communication with the second portable playback device. Configuration information: The configuration information shows the first portable playback device is not currently reproducing media but the second portable playback device is reproducing media. Audio allocation: As a result, the first portable playback device is updated to be grouped with the second portable playback device. The first portable playback device is allocated all portions of audio and begins reproducing the media in synchrony with the second portable playback device. The second portable playback device may be indicated in the trigger, for example the one with which a wireless communication was established, or which was indicated in the input. Coordinator: The second portable playback device can be designated as group coordinator to reduce the possibility of playback being interrupted during a transfer of responsibility to the first device. In some circumstances the first playback device may be designated the coordinator, for example (i) when the first portable playback device is charging and the second is not, (ii) when both devices are on battery and the second portable device's remaining battery is lower than the first playback device's remaining battery, (iii) when both devices are on battery and the second portable device's battery is below a first threshold and the first portable device's battery is above a second threshold higher than the first threshold, or (iv) when both devices are on battery and the first portable device's remaining battery is a predetermined amount higher than the second portable device's remaining battery.
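The coordinator choice in Example 3 can be expressed directly in code. The sketch below transcribes conditions (i)-(iv); the attribute names and the threshold values are illustrative assumptions, not values from the disclosure.

```python
def prefer_first_as_coordinator(first, second,
                                first_threshold=0.25,   # illustrative values
                                second_threshold=0.60,
                                margin=0.20):
    """Return True when the first portable device, rather than the default
    second device, should be designated group coordinator.

    `first` and `second` are assumed to expose `charging` (bool) and
    `battery` (remaining fraction, 0.0-1.0) attributes.
    """
    if first.charging and not second.charging:                        # (i)
        return True
    both_on_battery = not first.charging and not second.charging
    if both_on_battery and second.battery < first.battery:            # (ii)
        return True
    if both_on_battery and (second.battery < first_threshold
                            and first.battery > second_threshold):    # (iii)
        return True
    if both_on_battery and first.battery - second.battery > margin:   # (iv)
        return True
    return False
```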
When the coordinator is to be changed, the coordinator may be changed at the next media change, such as between songs, to reduce perceptible interruption. Example 4: Trigger: A trigger is detected which is associated with a first portable playback device. The trigger can be an input in a control application, a wireless communication connection being established between the first portable playback device and a second portable playback device, a long button press on the portable playback device, or establishing NFC communication with the second portable playback device. Configuration information: The configuration information shows that the first portable playback device is currently reproducing first media and the second portable playback device is reproducing second, different media. Audio allocation: As a result, the first portable playback device is updated to be grouped with the second portable playback device. The configuration information is used to determine which of the first and second portable playback devices began playing most recently (for example by examining a variable storing a local time at which playback was started at each device). Whichever playback device started earlier has its audio allocation updated to reproduce all portions of audio of the media reproduced by the other device. In other words, the device which started playing most recently continues reproduction and the other device joins it. Coordinator: Whichever device is not updated is made the coordinator. In some examples the other device may be made coordinator, such as in the same circumstances as explained for Example 3 above, when the other device has a more reliable power source or greater power reserves. Example 5: Trigger: A trigger is detected which is associated with a first portable playback device. The trigger can be an input in a control application, a wireless communication connection being established between the first portable playback device and a second portable playback device, a long button press on the portable playback device, or establishing NFC communication with the second portable playback device. Configuration information: The configuration information shows the first portable playback device is currently reproducing live media, such as radio, and that the second portable playback device is not reproducing media. Audio allocation: As a result, the first portable playback device is updated to be grouped with the second portable playback device. The second portable playback device is allocated all portions of audio and begins reproducing the media in synchrony with the first portable playback device. The second portable playback device may be indicated in the trigger, for example the one with which a wireless communication was established, or which was indicated in the input. Coordinator: The first portable playback device can be designated as group coordinator to reduce the possibility of playback being interrupted during a transfer of responsibility to the second device. In some circumstances the second playback device may be designated the coordinator, such as discussed above for Example 3, when the second portable playback device has a more reliable power source or greater power reserves than the first playback device. Example 6: Trigger: A trigger is detected which is associated with a first portable playback device.
The trigger can be an input in a control application, a wireless communication connection being established between the first portable playback device and a second playback device associated with a room, or a long button press on the first portable playback device. Configuration information: The configuration information shows that the first portable playback device is currently reproducing live media, such as radio, and the second playback device associated with a room is not reproducing media. Audio allocation: As a result, the first portable playback device is updated to be grouped with the second playback device. The second playback device is allocated all portions of audio and begins reproducing the media in synchrony with the first portable playback device. The second playback device may be indicated in the trigger, for example the one with which a wireless communication was established, or which was indicated in the input. Coordinator: The first portable playback device can be designated as group coordinator to reduce the possibility of playback being interrupted during a transfer of responsibility to the second playback device. In some circumstances the second playback device may be designated the coordinator, such as when the first portable playback device is operating on battery power and has a remaining battery life below a threshold, such as 15%. Example 7: Trigger: A trigger is detected for a playback device associated with a room. The trigger can be an input in a control application, a wireless communication connection being established between the playback device and a portable playback device, or a long button press on the playback device. Configuration information: The configuration information shows that the playback device associated with a room is not currently reproducing media and the portable playback device is reproducing media. Audio allocation: As a result, the playback device is updated to be grouped with the portable playback device and allocated all portions of audio. Coordinator: The playback device associated with the room can be designated as group coordinator to reduce the possibility of playback being interrupted should the portable playback device be moved or run out of power. Example 8: Trigger: A trigger is detected for a playback device associated with a room. The trigger can be an input in a control application, a wireless communication connection being established between the playback device and a portable playback device, or a long button press on the playback device. Configuration information: The configuration information shows that the playback device associated with a room is currently reproducing first media and the portable playback device is reproducing second, different media. Audio allocation: As a result, the playback device is updated to be grouped with the portable playback device, and the portable playback device is updated to play the second media in synchrony with the playback device and to be allocated all portions of the audio. Coordinator: The playback device associated with the room is designated as group coordinator to reduce the possibility of playback being interrupted should the portable playback device be moved or run out of power. Example 9: Trigger: A trigger is detected for a playback device associated with a room.
The trigger can be an input in a control application, a wireless communication connection being established between the playback device and a portable playback device, or a long button press on the playback device. Configuration information: The configuration information shows that the playback device associated with a room is currently reproducing live media, such as radio, and the portable playback device is not reproducing media. Audio allocation: As a result, the playback device is updated to be grouped with the portable playback device, and the portable playback device is updated to play the media in synchrony with the playback device and to be allocated all portions of the audio. Coordinator: The playback device associated with the room is designated as group coordinator to reduce the possibility of playback being interrupted should the portable playback device be moved or run out of power. In all of the examples 1 to 9 above, whichever playback device was updated was allocated all portions of the audio (for example all channels and frequencies). For example, a playback device may be updated to be allocated all portions of audio when the configuration information indicates at least one of:
the device is a portable playback device which is operating on battery power, optionally with below a predetermined threshold of battery life remaining (this reduces the perception of interruption should the portable playback device run out of battery; in other cases a playback device may be updated to be allocated a subset of less than all portions of audio, such as a particular channel); and
the configuration information indicates that one of the devices is already part of a bonded group and/or is already allocated a subset of audio portions which indicates that it is part of a bonded group.
In further examples, the audio allocation may be updated to a subset of less than all of the audio portions based on the configuration information, and possibly also the preference information and the nature of the trigger. As discussed above, playback devices may be bonded to reproduce particular subsets of audio, such as a particular channel (left, right, and additional channels for surround or home theater such as rear left and rear right) or a particular frequency range (frequencies below a cut-off frequency, such as 100 Hz, for a subwoofer). Configuring playback devices in this way can be time consuming and involve multiple steps for a user. According to embodiments, one or more playback devices are automatically allocated respective subsets of audio to simplify this configuration. One possible scenario is the automatic bonding of two playback devices based on a trigger and configuration information to form a stereo pair, with one device allocated a left channel and the other device allocated a right channel. Another scenario is the automatic bonding of three devices to form a home theater setup. A first device, such as a soundbar or soundbase, is allocated front audio channels, and second and third devices are allocated rear left and rear right channels respectively. Automatic bonding where subsets of audio portions are allocated to different devices can be carried out, for example, when the configuration information indicates at least one of:
All the devices have the same identifier, such as the same room name.
This may facilitate bonding when a portable playback device is returned to a room it was in before it was moved; and
One of the devices is already allocated all of the portions of audio, indicating that it is not already bonded with another device.
When playback devices are allocated subsets of audio based on channels, it is necessary to determine which playback device should be allocated which channel, for example, which playback device is positioned on a right side and which is positioned on a left side. It is desirable for this also to be carried out automatically or with a minimum of user input, so that configuration is quicker and less prone to human error in assigning channels to playback devices.FIG.5shows a method by which a playback device including a microphone array can determine its physical location within a playback area relative to other playback devices. Allocation of audio channels can then be based on that determination. First, at block502a second playback device is caused to emit a sound, for example a command or instruction causes the second playback device to emit a sound or tone. The sound may be audible or inaudible, for example it can be ultrasonic, provided that the microphone array can detect it. The emitted sound is received by the first playback device at block504, where it is recorded by the microphone array. The direction of the audio is then determined; for example, as discussed above, the beamforming and self-sound suppression components312land312mof an NMD can detect the direction of a received sound. The sound signal may be chosen so that it is unlikely to be identified as voice input. Additional example techniques to identify the direction of the audio using a microphone array include: (1) identifying the microphone from a plurality of microphones in the microphone array that received the sound first (e.g., on the basis that the microphone that detected the sound first is likely the closest microphone to the sound source); and/or (2) identifying the microphone from the plurality of microphones that detected the sound emitted by the second playback device with the highest pressure level, such as a highest peak pressure level and/or a highest average pressure level during detection of the sound (e.g., on the basis that the microphone that detected the highest pressure level is likely the closest microphone to the sound source). At block506, the direction of the received sound is processed to determine the relative position of the first and second playback devices. For example, in a stereo configuration, if the sound is determined as coming from the left side relative to the front of the playback device then the playback device is likely positioned on the right side relative to the listening position. Similarly, if the sound is determined as coming from the right side relative to the front of the playback device then the playback device is likely positioned on the left side relative to the listening position. In a surround sound or home theater configuration a front device, such as a soundbar or soundbase, may emit the sound generally from the center. If the sound is determined as coming generally from a right side relative to a front of the device then the playback device is located at the rear left position relative to the listening position. Similarly, if the sound is determined as coming generally from a left side relative to a front of the device then the playback device is located at the rear right position relative to a listening position.
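A short sketch of this block506mapping from detected sound direction to relative position follows. The angle convention, function name, and configuration labels are assumptions made for illustration.

```python
def relative_position(direction_of_arrival_deg, configuration="stereo"):
    """Map the beamformed direction of a received calibration sound
    (relative to the front of the listening device) to that device's
    likely position; negative angles mean the sound came from the left.
    """
    sound_from_left = direction_of_arrival_deg < 0
    if configuration == "stereo":
        # Sound from the left means the emitting speaker is to the left,
        # so this device is likely the right speaker, and vice versa.
        return "right" if sound_from_left else "left"
    if configuration == "home_theater":
        # The front device (soundbar/soundbase) emits from the center:
        # sound from the right places this device at rear left, and
        # sound from the left places it at rear right.
        return "rear_right" if sound_from_left else "rear_left"
    raise ValueError(f"unknown configuration: {configuration!r}")
```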
At block508, audio portions allocated to the playback devices are updated based on the determined relative position. More specifically, particular audio channels such as left, right, left rear and right rear are allocated to the playback devices based on their determined relative positions. While the method has been described from the point of view of the first device being one of the playback devices for which the position is to be determined, a similar method can be used with another playback device, a control device or any network connected device having a microphone and a predetermined position.FIG.6shows a method in which the relative position of the playback devices can be determined using a control device. At block602the control device is positioned in a predetermined position. For example, a user may be directed by an indication on the display of the control device to “Position this control device near the left speaker”. Once in position, at blocks604and606the distance of the control device from the first playback device and the second playback device, respectively, is determined. This may be done by causing each playback device to emit sound (e.g., at the same volume) and measuring the intensity of sound received by a microphone of the control device. For example, the first playback device may emit sound at a given volume for a first period of time while the second playback device is silent (e.g., not playing sound) and, after the first period of time, the second playback device may emit sound at the same volume for a second period of time while the first playback device is silent. In another example, the first playback device and the second playback device may emit sound simultaneously at different frequencies, such as different frequency tones, such that the control device can distinguish between sound from the first playback device and sound from the second playback device. Whichever playback device's sound was recorded with the highest intensity by the microphone is closest to the control device. If the control device was near a left playback device then the device with the highest intensity received sound is the left device. In this way, a position can be determined without requiring a directional microphone array to determine a direction, which may not be present on a control device. Other methods of determining the distance can be used which do not use sound. For example, a wireless communication signal may be used to determine a distance. In a similar way to the sound example discussed above, the intensity of a wireless signal from the playback device will be greater the closer the playback device is to the control device. This can be measured directly using RSSI, or more indirectly by reading the physical communication rate of the channel (which is proportional to signal strength) or the bit error rate (which is inversely proportional to signal strength). Other methods such as the Bluetooth proximity profile (PXP) may also be used. Whichever way the distance is determined, blocks604and606may be carried out simultaneously or sequentially. If carried out simultaneously, different sounds or wireless communication signals may be used to allow the playback devices to be distinguished from each other. At block608the allocation of audio portions to the playback devices is automatically updated based on the determined distances. The method ofFIG.6can be used by other devices separate from the devices to be positioned as well as control devices.
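The nearest-device decision at the heart of theFIG.6method reduces to comparing received levels, as in the brief sketch below; the dictionary interface and the example values are assumptions made for illustration.

```python
def nearest_playback_device(received_levels):
    """Given the level measured at the control device while each playback
    device emitted a test sound at the same volume (or, analogously, the
    RSSI of each device's wireless signal), return the closest device."""
    return max(received_levels, key=received_levels.get)

# Usage: the user placed the control device near the left speaker, so the
# device recorded with the highest intensity is taken to be the left one.
levels = {"Kitchen": 0.82, "Portable": 0.31}    # illustrative measurements
left_device = nearest_playback_device(levels)   # -> "Kitchen"
```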
Such other devices include a further playback device or an Internet of Things (IoT) device that includes a microphone and has a predetermined position. If the device cannot easily be moved, it can remain at its present location (assuming that it is near enough to the playback area to determine the position of the playback devices to be located). When the device is not moved, its location may already be known or may be received as an input. As discussed above, various techniques are described to automatically identify relative positions of playback devices (e.g., in bonded zones such as stereo pair and home theater configurations) so as to intelligently assign audio portions to the playback devices. It should be appreciated that the playback device(s) and/or the control device may refuse the automatic assignment of audio portions in cases where the relative positions of the playback devices were identified with a low degree of confidence. For example, the playback device(s) and/or the control device may generate a confidence value for the identified relative positions of the playback devices indicative of the confidence in the accuracy of the identified relative positions. In this example, the playback device(s) and/or the control device may compare the confidence value with a threshold and refuse the automatic assignment of audio portions when the confidence value does not exceed the threshold (e.g., the confidence is low). Additionally, the playback device(s) and/or the control device may prompt the user to intervene (e.g., via one or more audible and/or visual instructions) by, for example, requesting the user to manually indicate which playback device is at a particular relative position (e.g., which speaker is the left speaker in a stereo pair, which speaker is the right speaker in a stereo pair, which speaker is the left rear satellite in a home theater setup, which speaker is a right rear satellite in a home theater setup, etc.). Alternatively, the playback device(s) and/or the control device may, for example, simply assign all of the audio portions to all of the playback devices in instances where the confidence in the determined relative position does not exceed the threshold, so as not to require user input. In this example, the playback device(s) and/or the control device may notify the user (e.g., via an audible and/or visual message) that the automatic assignment of audio portions based on a determined relative position was refused and that the playback devices are simply each reproducing all of the audio portions. Updating Audio Allocation Automatically when Playback Devices are Removed from a Group The examples discussed above all dealt with joining playback devices together and responsively updating the audio allocation. Further examples will now be described in which playback devices are removed and the audio allocation is updated automatically. Example 10. Configuration information: The configuration information shows that a playback device associated with a room and a portable playback device are reproducing media in synchrony. Trigger: A long press is received at the portable playback device or an input is received to remove the portable playback device from a control device. Updated allocation: The portable playback device is ungrouped and all audio allocation is removed. The playback device associated with the room continues to reproduce the media.
If the playback device associated with the room was previously reproducing a subset of less than all audio portions, the audio allocation can be updated to include all audio portions. Example 11. Configuration information: The configuration information shows that a playback device associated with a room and a portable playback device are reproducing media in synchrony. Trigger: A long press is received at the playback device associated with a room or an input is received from a control device to remove the playback device associated with a room. Updated allocation: The playback device associated with a room is ungrouped and all audio allocation is removed. The portable playback device continues to reproduce the media. If the portable playback device was previously reproducing a subset of less than all audio portions, the audio allocation can be updated to include all audio portions. Example 12. Configuration information: The configuration information shows that a playback device associated with a room and a portable playback device are reproducing media in synchrony. Trigger: Wireless communication indicates that the devices are no longer in proximity, for example a Bluetooth connection between them is lost or indicates a separation distance above a predetermined threshold. Updated allocation: The portable playback device is ungrouped and all audio allocation is removed. The playback device associated with the room continues to reproduce the media. If the playback device associated with the room was previously reproducing a subset of less than all audio portions, the audio allocation can be updated to include all audio portions. Example 13. Configuration information: The configuration information shows that a first portable playback device and a second portable playback device are reproducing media in synchrony, with both reproducing all audio portions. Trigger: Wireless communication indicates that the devices are no longer in proximity, for example a Bluetooth connection between them is lost or indicates a distance greater than a predetermined threshold. Updated allocation: Whichever portable playback device was the group coordinator continues reproducing media while the other portable playback device stops playing music and is updated to be allocated no audio portions. In the alternative, both portable playback devices could stop reproducing media and be allocated no audio portions. Which of these alternatives happens could be determined from the preference information. Example 14. Configuration information: The configuration information shows that a first portable playback device and a second portable playback device are reproducing media in synchrony as a bonded group, with each reproducing different audio portions. Trigger: Wireless communication indicates that the devices are no longer in proximity, for example a Bluetooth connection between them is lost or indicates a distance greater than a predetermined threshold. Updated allocation: Whichever portable playback device was the group coordinator continues reproducing media and its allocation is updated to all audio portions, while the other portable playback device stops playing music and is updated to be allocated no audio portions. In the alternative, both portable playback devices could stop reproducing media and be allocated no audio portions.
In yet another alternative, both portable playback devices start reproducing all of the audio portions (e.g., the pair of portable playback devices transition from being in a bonded group such as a stereo pair to each reproducing all audio portions in synchrony). Which of these alternatives happens could be determined from the preference information. In examples 13 and 14, the range at which the portable playback devices ungroup may be different from the range at which the portable playback devices group. For example, the portable playback devices may need to be within approximately 3 m (10 feet) for Bluetooth proximity to provide a trigger to group the devices, while the devices may need to be separated by at least about 7.6 m (25 feet) for Bluetooth proximity to provide an ungrouping trigger. Updating Audio Allocation in Response to Voice Input In further examples, the audio allocation may be updated to respond to voice input. In one example a portable playback device including a microphone, such as that described above with reference toFIG.3, may detect the voice command and push the command to the cloud. The voice command may be processed in the cloud (or locally in instances where the playback device has a local natural language understanding (NLU) engine) and the voice input further forms a trigger to update the audio allocations to other playback devices when providing a response to the voice input. Conventionally, responses to a given voice command are always provided by the one network microphone device that is determined to be closest to the user when the voice command was uttered. Such a rigid system, however, provides an unintuitive user experience in households with multiple network microphone devices. For example, a user may issue a voice command while sitting on the couch and surrounded by a home theater system comprising three network microphone devices (e.g., in the form of a soundbar, a left rear satellite, and a right rear satellite). In this example, a conventional system may determine that the left rear satellite is the closest to the user and issue the voice response from the left rear satellite. Such a response to the voice command from the left rear satellite is unexpected to the user, at least because most of the audible speech during media content playback comes from the soundbar instead of the rear satellites. Accordingly, the techniques described above to intelligently allocate audio portions based on configuration information and/or preference information may be readily applied to network microphone devices to improve the user experience. In some embodiments, a portable network microphone device may be grouped with one or more stationary network microphone devices (e.g., as indicated in the configuration information). In these embodiments, the audio portions associated with the response to the voice input detected by one or more network microphones within the group may be preferentially provided by the stationary playback devices instead of the portable playback device unless particular conditions are met. Such preferential allocation to the stationary playback devices in the group may make the voice response easier for the user to hear given the larger dimensions and/or power budget of the stationary playback devices.
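A minimal sketch of this preferential allocation is given below. The attribute names, the use of detected sound pressure level as the fallback condition, and the threshold value are assumptions made for illustration; the following paragraph describes example conditions in the disclosure's own terms.

```python
def choose_voice_response_device(group_members, detected_spl_db,
                                 far_threshold_db=45.0):   # illustrative value
    """Prefer a stationary device in the group for the voice response,
    falling back to a portable device when the voice command was detected
    by the stationary devices at a low sound pressure level (suggesting
    the user is far away from them).

    Each member is assumed to expose a boolean `portable` attribute.
    """
    stationary = [d for d in group_members if not d.portable]
    portable = [d for d in group_members if d.portable]
    if stationary and detected_spl_db >= far_threshold_db:
        # Larger dimensions/power budget make the response easier to hear.
        return stationary[0]
    return portable[0] if portable else stationary[0]
```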
Example conditions where the audio portions associated with the response may instead be allocated to the portable network microphone device include conditions where the user is far away from the stationary network microphone devices (e.g., the sound pressure level of the voice command detected by the stationary network microphone devices is below a threshold). In another example, a portable playback device is configured as a left rear playback device in a surround or home theater setup and this is reflected in the configuration information. This device may detect a voice command and push the command to the cloud. In this example, the audio allocations are adjusted so that the soundbar at the front of the home theater system reproduces the response as the primary device in the home theater configuration. In other examples, in addition to updating the audio allocation, microphones on portable playback devices may be deactivated altogether when the configuration information indicates that they are being grouped with other devices which also include a microphone. This may be indicated by reference to a model number which is known to include a microphone, or by a specific variable or property which indicates whether a device includes a microphone. For example, the Beam and One commercially available from Sonos, Inc. include a microphone array, and this could be determined with reference to the model name or a model number corresponding to the name. Controlling Internet of Things Devices In some examples, the trigger may indicate that an Internet of Things (IoT) device, such as a smart lightbulb, power switch or thermostat, is in proximity to a portable playback device. For example, the trigger may be an input from a control application, wireless proximity detection (such as using the Bluetooth proximity profile), or a long button press. Responsively, the portable playback device associates itself with the IoT device so that voice inputs which do not specify a location of an IoT device are applied to the IoT device automatically. In one example, a portable playback device could be brought into a room with a smart bulb and bond with the smart bulb. As a result, a voice command “turn off the lights” received by the portable playback device is associated with the smart bulb. The portable playback device triggers the bonded smart bulb to turn off (instead of another smart bulb in another room). The methods described above can be carried out by playback devices, control devices or even by remote devices, such as a remote server system on the internet. The device which runs the process may be the device which determines the trigger (such as receiving a long button press or NFC activation) or another device, such as a cloud server processing a received voice input. Embodiments also include computer programs comprising computer program code that, when executed by a processing system, causes the processing system to implement the method. A non-transitory computer readable medium may have computer program code embodied thereon that, when executed by a processing system, causes the processing system to implement the method. V. Conclusion The above discussions relating to playback devices, controller devices, playback zone configurations, and media content sources provide only some examples of operating environments within which the functions and methods described above may be implemented.
Other operating environments and configurations of media playback systems, playback devices, and network devices not explicitly described herein may also be applicable and suitable for implementation of the functions and methods. Responsive to a trigger, audio allocations to one or more playback devices can be automatically updated based on configuration information. This can simplify system configuration and allow easier setup of a playback system as playback devices are moved and/or added. The description above discloses, among other things, various example systems, methods, apparatus, and articles of manufacture including, among other components, firmware and/or software executed on hardware. It is understood that such examples are merely illustrative and should not be considered as limiting. For example, it is contemplated that any or all of the firmware, hardware, and/or software aspects or components can be embodied exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, the examples provided are not the only ways to implement such systems, methods, apparatus, and/or articles of manufacture. Additionally, references herein to “embodiment” mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one example embodiment of an invention. The appearances of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. As such, the embodiments described herein, explicitly and implicitly understood by one skilled in the art, can be combined with other embodiments. The specification is presented largely in terms of illustrative environments, systems, procedures, steps, logic blocks, processing, and other symbolic representations that directly or indirectly resemble the operations of data processing devices coupled to networks. These process descriptions and representations are typically used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Numerous specific details are set forth to provide a thorough understanding of the present disclosure. However, it is understood by those skilled in the art that certain embodiments of the present disclosure can be practiced without certain, specific details. In other instances, well known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the embodiments. Accordingly, the scope of the present disclosure is defined by the appended claims rather than the foregoing description of embodiments. When any of the appended claims are read to cover a purely software and/or firmware implementation, at least one of the elements in at least one example is hereby expressly defined to include a tangible, non-transitory medium such as a memory, DVD, CD, Blu-ray, and so on, storing the software and/or firmware.
Example Features
(Feature 1) A playback device comprising: a wireless interface configured to receive audio data comprising a plurality of audio portions; a speaker for reproducing at least one of the plurality of audio portions received via the wireless interface; a storage; and a processing system; wherein the storage comprises non-transitory computer-readable instructions that, when executed by the processing system, instruct the playback device to: responsive to a trigger associated with the playback device and indicating that the playback device is to be grouped with another playback device for playback of media: retrieve configuration information related to the playback device and the another playback device; cause the playback device and the another playback device to join together in a group for synchronous media playback; and automatically update an allocation of the audio portions for reproduction by the playback device based on the configuration information.
(Feature 2) The playback device of feature 1, wherein the non-transitory computer-readable instructions, when executed by the processing system, instruct the playback device to: determine either: that the configuration information indicates that the another playback device is configured to reproduce all the audio portions, or that the configuration information indicates that both the playback device and the another playback device have a same associated identifier, and responsively allocate a first subset of the audio portions to the playback device and a second subset of the audio portions to the another playback device, wherein the first subset and second subset are different.
(Feature 3) The playback device of feature 2, further comprising: a microphone array; and wherein the non-transitory computer-readable instructions, when executed by the processing system, instruct the playback device to: cause the another playback device to emit a sound; receive the sound via the microphone array; and determine a position of the playback device relative to the another playback device based on the received sound, wherein the first subset and the second subset are based on the position.
(Feature 4) The playback device of feature 1, further comprising: a battery; wherein the non-transitory computer-readable instructions, when executed by the processing system, instruct the playback device to: determine that the playback device is operating on battery power and that a remaining battery life of the playback device is below a predetermined threshold, and responsively allocate all audio portions to the playback device.
(Feature 5) A playback device comprising: a wireless interface configured to receive audio data comprising a plurality of audio portions; a speaker for reproducing at least one of the plurality of audio portions received via the wireless interface; a microphone array; a storage; and a processing system; wherein the storage comprises non-transitory computer-readable instructions that, when executed by the processing system, instruct the playback device to: responsive to a voice input received by the microphone array: retrieve configuration information related to the playback device and another playback device; and automatically update an allocation of the audio portions for the playback device to reproduce the response to the voice input based on the configuration information.
(Feature 6) The playback device of feature 5, wherein the non-transitory computer-readable instructions, when executed by the processing system, further instruct the playback device to: determine that the configuration information indicates the playback device is configured to reproduce a first subset of the audio portions in synchrony with the another playback device and responsively update the allocation of audio portions between the playback device and the another playback device such that the response to the voice input is reproduced by the another playback device and not the playback device.
(Feature 7) A method of allocating audio data between a first playback device and a second playback device, wherein the audio data comprises a plurality of audio portions, the method comprising: detecting a trigger associated with the first playback device; responsive to detecting the trigger: retrieving configuration information related to the first playback device and the second playback device; and automatically updating an allocation of the audio portions for reproduction by at least one of the first playback device and the second playback device based on the configuration information.
(Feature 8) The method of feature 7, wherein the trigger indicates that the first playback device is to be grouped with the second playback device for playback of media, the method further comprising: further responsive to detecting the trigger, causing the first playback device and the second playback device to join together in a group of playback devices for media playback; and wherein the automatically updating the allocation of the audio portions comprises automatically updating the allocation of the audio portions for reproduction of media in synchrony by the first and second playback devices.
(Feature 9) The method of feature 8, wherein the automatically updating the allocation of the audio portions for reproduction of media in synchrony comprises: determining that the configuration information indicates that the second playback device is configured to reproduce all the audio portions, and responsively allocating a first subset of the audio portions to the first playback device and a second subset of the audio portions to the second playback device, wherein the first subset and second subset are different.
(Feature 10) The method of feature 8, wherein the automatically updating the allocation of the audio portions for reproduction of media in synchrony comprises: determining that the configuration information indicates that both the first playback device and the second playback device have a same associated identifier, and responsively allocating a first subset of the audio portions to the first playback device and a second subset of the audio portions to the second playback device, wherein the first subset and second subset are different.
(Feature 11) The method of feature 9, further comprising: determining a position of the first playback device relative to the second playback device; and allocating the first and second subsets of the audio portions based on the determined position.
(Feature 12) The method of feature 11, wherein the determining a position comprises: causing the second playback device to emit a sound; receiving the sound via a microphone array comprising a plurality of microphones provided on the first playback device; and determining the position based on the relative magnitude of the received sound at two or more of the plurality of microphones in the microphone array.
(Feature 13) The method of feature 11, wherein the determining a position comprises: determining a first proximity of a control device to the first playback device; determining a second proximity of the control device to the second playback device; and determining the position based on the first proximity, the second proximity, and a predetermined position of the control device.
(Feature 14) The method of feature 13, wherein: the determining the first proximity comprises causing the first playback device to emit a first sound and receiving the first sound via at least one microphone on a control device; and the determining the second proximity comprises causing the second playback device to emit a second sound and receiving the second sound via the at least one microphone on the control device.
(Feature 15) The method of feature 13, wherein the determining the first proximity is based on a wireless communication between the control device and the first playback device; and the determining the second proximity is based on a wireless communication between the control device and the second playback device.
(Feature 16) The method of feature 8, further comprising: retrieving preference data, wherein the automatically updating the allocation of audio portions is further based on the preference data.
(Feature 17) The method of feature 8, further comprising: determining that the configuration information indicates that the second playback device is configured to reproduce a subset of all channels of audio, and responsively allocating all audio portions to the first playback device.
(Feature 18) The method of feature 8, further comprising: determining that the configuration information indicates that the first playback device is operating on battery power and that a remaining battery life of the first playback device is below a predetermined threshold, and responsively allocating all audio portions to the first playback device.
(Feature 19) The method of feature 7, wherein the trigger is a voice input received by a microphone array on the first playback device, and the automatically updating the allocation of audio portions for reproduction comprises determining at least one playback device to respond to the voice input.
(Feature 20) The method of feature 19, wherein the voice input is further received by a microphone array on the second playback device, and the automatically updating the allocation of audio portions is further based on the voice input received by the first playback device and the voice input received by the second playback device.
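Feature 12 above derives the relative position of two grouped devices from the relative magnitude of an emitted sound at two or more microphones of an array. The following is a minimal sketch of that comparison; the array geometry, helper names, and the louder-side-faces-the-emitter simplification are illustrative assumptions, not details taken from this disclosure.

```python
import numpy as np

def side_of_emitter(mic_left: np.ndarray, mic_right: np.ndarray) -> str:
    """Compare RMS magnitude at two microphones of an array (feature 12's
    'relative magnitude'); the louder microphone is assumed to face the
    emitting device. Real systems would also use time-of-arrival and
    more microphones."""
    rms_l = float(np.sqrt(np.mean(mic_left ** 2)))
    rms_r = float(np.sqrt(np.mean(mic_right ** 2)))
    return "left" if rms_l > rms_r else "right"

# Usage: if the other device sounds louder on the left microphone, it likely
# sits to the left, so this device would take the right stereo channel.
rng = np.random.default_rng(0)
chirp = rng.standard_normal(4800)               # simulated received chirp
print(side_of_emitter(0.8 * chirp, 0.2 * chirp))  # -> "left"
```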
(Feature 21) A playback device comprising: a communication interface configured to facilitate communication via one or more data networks; at least one audio amplifier configured to drive at least one speaker; at least one processor; at least one non-transitory computer-readable medium comprising program instructions that are executable by the at least one processor such that the playback device is configured to: reproduce one or more first audio channels of audio content in synchrony with reproduction of one or more second audio channels of the audio content by another playback device; detect a trigger event indicating that the playback device is in proximity of the other playback device, wherein the trigger event comprises detection of a change in position of the playback device relative to the other playback device; after detection of the trigger event, retrieve configuration information related to the playback device and the other playback device; retrieve preference information indicating a preference of at least one user; based on the configuration information and the preference information, cause an allocation of audio channels between the playback device and the other playback device to be updated; and reproduce one or more third audio channels of the audio content based on the updated allocation of the audio channels.
(Feature 22) The playback device of feature 21, wherein the audio content comprises a left channel and a right channel, wherein the one or more first audio channels comprises the left audio channel and the right audio channel, and wherein the one or more second audio channels comprises the left audio channel and the right audio channel.
(Feature 23) The playback device of feature 22, wherein the one or more third audio channels comprises one of: the left audio channel and the right audio channel.
(Feature 24) The playback device of feature 23, wherein the program instructions that are executable by the at least one processor such that the playback device is configured to cause the allocation of audio channels between the playback device and the other playback device to be updated comprises program instructions that are executable by the at least one processor such that the playback device is configured to: determine a position of the playback device relative to the other playback device; and based on the determined position of the playback device relative to the other playback device, assign the playback device one of the left audio channel and the right audio channel for reproduction.
(Feature 25) The playback device of feature 24, wherein the playback device comprises a plurality of microphones and wherein the program instructions that are executable by the at least one processor such that the playback device is configured to determine a position of the playback device relative to the other playback device comprises program instructions that are executable by the at least one processor such that the playback device is configured to: cause the other playback device to emit an acoustic signal; detect the acoustic signal using the plurality of microphones; and based on the detected acoustic signal, determine a position of the playback device relative to the other playback device.
(Feature 26) The playback device of any of features 21-25, wherein the program instructions that are executable by the at least one processor such that the playback device is configured to retrieve the preference information comprises program instructions that are executable by the at least one processor such that the playback device is configured to: cause a computing device to present one or more playback options; and receive, from the computing device, an indication of at least one selection from the one or more playback options.
(Feature 27) The playback device of any of features 21-26, further comprising at least one sensor configured to sense movement of the playback device.
(Feature 28) The playback device of feature 27, wherein the program instructions that are executable by the at least one processor such that the playback device is configured to detect the trigger event comprises program instructions that are executable by the at least one processor such that the playback device is configured to: detect movement of the playback device by the at least one sensor.
(Feature 29) The playback device of feature 28, wherein the program instructions that are executable by the at least one processor such that the playback device is configured to detect the trigger event comprises program instructions that are executable by the at least one processor such that the playback device is configured to: after detection of movement of the playback device, cause the other playback device to emit a wireless signal; detect, using the communication interface, the wireless signal; and based on the detected wireless signal, determine whether the playback device is in proximity to the other playback device.
(Feature 30) The playback device of feature 28, wherein the playback device comprises at least one microphone and wherein the program instructions that are executable by the at least one processor such that the playback device is configured to detect the trigger event comprises program instructions that are executable by the at least one processor such that the playback device is configured to: after detection of movement of the playback device, cause the other playback device to emit an acoustic signal; detect, using the at least one microphone, the acoustic signal; and based on the detected acoustic signal, determine whether the playback device is in proximity to the other playback device.
(Feature 31) The playback device of feature 30, wherein the acoustic signal comprises an ultrasonic signal.
(Feature 32) A method performed by a playback device, the method comprising: reproducing one or more first audio channels of audio content in synchrony with reproduction of one or more second audio channels of the audio content by another playback device; detecting a trigger event indicating that the playback device is in proximity of the other playback device, wherein the trigger event comprises detection of a change in position of the playback device relative to the other playback device; after detecting the trigger event, retrieving configuration information related to the playback device and the other playback device; retrieving preference information indicating a preference of at least one user; based on the configuration information and the preference information, causing an allocation of audio channels between the playback device and the other playback device to be updated; and reproducing one or more third audio channels of the audio content based on the updated allocation of the audio channels.
(Feature 33) The method of feature 32, wherein the audio content comprises a left channel and a right channel, wherein the one or more first audio channels comprises the left audio channel and the right audio channel, and wherein the one or more second audio channels comprises the left audio channel and the right audio channel.
(Feature 34) The method of feature 33, wherein the one or more third audio channels comprises one of: the left audio channel and the right audio channel.
(Feature 35) The method of feature 34, wherein causing the allocation of audio channels between the playback device and the other playback device to be updated comprises: determining a position of the playback device relative to the other playback device; and based on the determined position of the playback device relative to the other playback device, assigning the playback device one of the left audio channel and the right audio channel for reproduction.
(Feature 36) The method of feature 35, wherein determining the position of the playback device relative to the other playback device comprises: causing the other playback device to emit an acoustic signal; detecting the acoustic signal using a microphone array of the playback device; and based on the detected acoustic signal, determining the position of the playback device relative to the other playback device.
(Feature 37) The method of any of features 32-36, wherein retrieving the preference information comprises: causing a computing device to present one or more playback options; and receiving, from the computing device, an indication of at least one selection from the one or more playback options.
(Feature 38) The method of any of features 32-36, wherein detecting the trigger event comprises: detecting movement of the playback device by at least one sensor.
(Feature 39) The method of feature 38, wherein detecting the trigger event comprises: after detection of movement of the playback device, causing the other playback device to emit a wireless signal; detecting the wireless signal emitted by the other playback device; and based on the detected wireless signal, determining whether the playback device is in proximity to the other playback device.
(Feature 40) The method of feature 38, wherein detecting the trigger event comprises: after detection of movement of the playback device, causing the other playback device to emit an acoustic signal; detecting, using at least one microphone of the playback device, the acoustic signal; and based on the detected acoustic signal, determining whether the playback device is in proximity to the other playback device.
(Feature 41) One or more non-transitory computer-readable media comprising program instructions that are executable by at least one processor such that a playback device is configured to: reproduce one or more first audio channels of audio content in synchrony with reproduction of one or more second audio channels of the audio content by another playback device; detect a trigger event indicating that the playback device is in proximity of the other playback device, wherein the trigger event comprises detection of a change in position of the playback device relative to the other playback device; after detection of the trigger event, retrieve configuration information related to the playback device and the other playback device; retrieve preference information indicating a preference of at least one user; based on the configuration information and the preference information, cause an allocation of audio channels between the playback device and the other playback device to be updated; and reproduce one or more third audio channels of the audio content based on the updated allocation of the audio channels.
(Feature 42) A playback device comprising: a communication interface configured to facilitate communication via one or more data networks; at least one audio amplifier configured to drive at least one speaker; at least one processor; at least one non-transitory computer-readable medium comprising program instructions that are executable by the at least one processor such that the playback device is configured to: reproduce one or more first audio channels of audio content in synchrony with reproduction of one or more second audio channels of the audio content by another playback device; detect a trigger event indicating that the playback device is no longer in proximity of the other playback device, wherein the trigger event comprises detection of a change in position of the playback device relative to the other playback device; after detection of the trigger event, retrieve configuration information related to the playback device and the other playback device; based on the retrieved configuration information, cause an allocation of the audio content between the playback device and the other playback device to be updated; and reproduce one or more third audio channels of the audio content based on the updated allocation of the audio channels.
(Feature 43) The playback device of feature 42, wherein the program instructions that are executable by the at least one processor such that the playback device is configured to reproduce the one or more third audio channels comprises program instructions that are executable by the at least one processor such that the playback device is configured to: reproduce the one or more third audio channels in synchrony with reproduction of one or more fourth audio channels of the audio content by the other playback device.
(Feature 44) The playback device of any of features 42-43, wherein the audio content comprises a left channel and a right channel, wherein the configuration information indicates that the playback device and the other playback device operate as a stereo pair where the playback device is allocated one of the left channel and the right channel for reproduction.
(Feature 45) The playback device of feature 44, wherein the program instructions that are executable by the at least one processor such that the playback device is configured to cause the allocation of the audio content to be updated comprises program instructions that are executable by the at least one processor such that the playback device is configured to: cause each of the playback device and the other playback device to be allocated both the left channel and the right channel for playback.
(Feature 46) The playback device of any of features 42-45, wherein the audio content comprises a plurality of channels, wherein the one or more first channels comprises a first subset of the plurality of channels, wherein the one or more second channels comprises a second subset of the plurality of channels that is non-overlapping with the first subset of the plurality of channels, and wherein the one or more third channels comprises at least one channel from the first subset and at least one channel from the second subset.
(Feature 47) The playback device of any of features 42-46, further comprising at least one sensor configured to detect movement of the playback device and wherein the program instructions that are executable by the at least one processor such that the playback device is configured to detect the trigger event comprises program instructions that are executable by the at least one processor such that the playback device is configured to: detect movement of the playback device by the at least one sensor.
(Feature 48) The playback device of feature 47, wherein the program instructions that are executable by the at least one processor such that the playback device is configured to detect the trigger event comprises program instructions that are executable by the at least one processor such that the playback device is configured to: after detection of movement by the at least one sensor, cause the other playback device to emit a wireless signal; detect the wireless signal using the communication interface; and based on the detected wireless signal, determine that the playback device is no longer in proximity of the other playback device.
(Feature 49) The playback device of feature 47, further comprising at least one microphone and wherein the program instructions that are executable by the at least one processor such that the playback device is configured to detect the trigger event comprises program instructions that are executable by the at least one processor such that the playback device is configured to: after detection of movement by the at least one sensor, cause the other playback device to emit an acoustic signal; detect the acoustic signal using the at least one microphone; and based on the detected acoustic signal, determine that the playback device is no longer in proximity of the other playback device.
(Feature 50) The playback device of feature 49, wherein the acoustic signal comprises an ultrasonic signal.
(Feature 51) The playback device of any of features 42-50, wherein the program instructions that are executable by the at least one processor such that the playback device is configured to cause the allocation of the audio content to be updated comprises program instructions that are executable by the at least one processor such that the playback device is configured to: cause at least one of the playback device and the other playback device to update at least one equalization setting.
(Feature 52) The playback device of any of features 42-51, wherein the configuration information indicates that one of the playback device and the other playback device is designated as a group coordinator for synchronous playback and wherein the program instructions that are executable by the at least one processor such that the playback device is configured to cause the allocation of the audio content to be updated comprises program instructions that are executable by the at least one processor such that the playback device is configured to: cause the designation of the one of the playback device and the other playback device as group coordinator to be updated.
(Feature 53) A method performed by a playback device, the method comprising: reproducing one or more first audio channels of audio content in synchrony with reproduction of one or more second audio channels of the audio content by another playback device; detecting a trigger event indicating that the playback device is no longer in proximity of the other playback device, wherein the trigger event comprises detection of a change in position of the playback device relative to the other playback device; after detecting the trigger event, retrieving configuration information related to the playback device and the other playback device; based on the retrieved configuration information, causing an allocation of the audio content between the playback device and the other playback device to be updated; and reproducing one or more third audio channels of the audio content based on the updated allocation of the audio channels.
(Feature 54) The method of feature 53, wherein reproducing the one or more third audio channels comprises: reproducing the one or more third audio channels in synchrony with reproduction of one or more fourth audio channels of the audio content by the other playback device.
(Feature 55) The method of any of features 53-54, wherein the audio content comprises a left channel and a right channel, wherein the configuration information indicates that the playback device and the other playback device are configured as a stereo pair where the playback device is allocated one of the left channel and the right channel for reproduction, and wherein reproducing the one or more first audio channels comprises reproducing the one of the left channel and the right channel.
(Feature 56) The method of feature 55, wherein causing the allocation of the audio content to be updated comprises: causing each of the playback device and the other playback device to be allocated both the left channel and the right channel for playback.
(Feature 57) The method of any of features 53-56, wherein the audio content comprises a plurality of channels, wherein reproducing the one or more first channels comprises: reproducing a first subset of the plurality of channels in synchrony with reproduction of a second subset of the plurality of channels that is non-overlapping with the first subset of the plurality of channels by the other playback device.
(Feature 58) The method of feature 57, wherein reproducing the one or more third channels comprises reproducing at least one channel from the first subset and at least one channel from the second subset.
(Feature 59) The method of any of features 53-58, wherein detecting the trigger event comprises: detecting movement of the playback device by at least one sensor; after detection of movement by the at least one sensor, causing the other playback device to emit a wireless signal; detecting the wireless signal using a communication interface of the playback device; and based on the detected wireless signal, determining that the playback device is no longer in proximity of the other playback device.
(Feature 60) The method of any of features 53-59, wherein detecting the trigger event comprises: detecting movement of the playback device by at least one sensor; after detection of movement by the at least one sensor, causing the other playback device to emit an acoustic signal; detecting the acoustic signal using at least one microphone; and based on the detected acoustic signal, determining that the playback device is no longer in proximity of the other playback device.
(Feature 61) One or more non-transitory computer-readable media comprising program instructions that are executable by at least one processor such that a playback device is configured to: reproduce one or more first audio channels of audio content in synchrony with reproduction of one or more second audio channels of the audio content by another playback device; detect a trigger event indicating that the playback device is no longer in proximity of the other playback device, wherein the trigger event comprises detection of a change in position of the playback device relative to the other playback device; after detection of the trigger event, retrieve configuration information related to the playback device and the other playback device; based on the retrieved configuration information, cause an allocation of the audio content between the playback device and the other playback device to be updated; and reproduce one or more third audio channels of the audio content based on the updated allocation of the audio channels.
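Features 28-31 and 47-50 above combine a movement sensor with an emitted (possibly ultrasonic) acoustic signal to decide whether two devices remain in proximity. The sketch below shows one way that logic could fit together; the threshold value, the Device stand-in class, and the stubbed chirp measurement are illustrative assumptions only, not details from this disclosure.

```python
from dataclasses import dataclass
import random

@dataclass
class Device:
    """Hypothetical stand-in for a playback device's sensor and microphone."""
    name: str
    moved: bool = False

    def accelerometer_moved(self) -> bool:
        return self.moved

    def request_chirp(self, frequency_hz: int) -> None:
        print(f"{self.name}: emitting {frequency_hz} Hz chirp")

    def chirp_level_db(self) -> float:
        return random.uniform(-80.0, -30.0)  # stub measurement

PROXIMITY_THRESHOLD_DB = -50.0  # assumed detection threshold

def still_in_proximity(dev: Device, other: Device) -> bool:
    """A movement sensor gates an acoustic proximity probe, mirroring
    features 28-31 and 47-50."""
    if not dev.accelerometer_moved():
        return True  # no movement: keep the current configuration
    other.request_chirp(frequency_hz=20_000)  # e.g., ultrasonic (feature 50)
    return dev.chirp_level_db() >= PROXIMITY_THRESHOLD_DB

# If the devices separated, each falls back from a stereo split to the full
# mix (features 45 and 56); otherwise the split is kept.
left, right = Device("left"), Device("right", moved=True)
if still_in_proximity(right, left):
    allocation = {"left": ["left"], "right": ["right"]}
else:
    allocation = {"left": ["left", "right"], "right": ["left", "right"]}
print(allocation)
```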
11943595
DETAILED DESCRIPTION
To provide a better understanding of the present invention to those skilled in the art, preferred embodiments and typical material or range parameters for key components will be detailed in the following description. These preferred embodiments of the present invention are illustrated in the accompanying drawings with numbered elements to elaborate on the contents and effects to be achieved. It should be noted that the drawings are simplified schematics, and the material and parameter ranges of key components are illustrative based on present-day technology; the drawings therefore show only the components and combinations associated with the present invention, so as to provide a clearer description of the basic structure, implementation or operation method of the present invention. The components would be more complex in reality, and the ranges of parameters or materials used may evolve as technology progresses in the future. In addition, for ease of explanation, the components shown in the drawings may not represent their actual number, shape, and dimensions; details may be adjusted according to design requirements.
In the following description and in the claims, the terms "include", "comprise" and "have" are used in an open-ended fashion, and thus should be interpreted to mean "include, but not limited to . . . ". Thus, when the terms "include", "comprise" and/or "have" are used in the description of the present invention, the existence of the corresponding features, areas, steps, operations and/or components is indicated, but the existence of one or a plurality of other features, areas, steps, operations and/or components is not excluded. In the following description and in the claims, when "a B1 component is formed by/of C1", C1 exists in the formation of the B1 component or C1 is used in the formation of the B1 component, and the existence and use of one or a plurality of other features, areas, steps, operations and/or components are not excluded in the formation of the B1 component.
In the following, the term "horizontal direction" generally means a direction parallel to a horizontal surface, the term "horizontal surface" generally means a surface parallel to a direction X and a direction Y in the drawings, and the term "vertical direction" generally means a direction parallel to a direction Z in the drawings, wherein the directions X, Y and Z are perpendicular to each other. In the following, the term "top view" generally means a viewing result along the vertical direction, and the term "side view" generally means a viewing result along the horizontal direction.
In the following description and in the claims, the term "substantially" generally means that a small deviation may or may not exist. For instance, the terms "substantially parallel" and "substantially along" mean that an angle between two components may be less than or equal to a certain degree threshold, e.g., 10 degrees, 5 degrees, 3 degrees or 1 degree. For instance, the term "substantially aligned" means that a deviation between two components may be less than or equal to a certain difference threshold, e.g., 2 μm or 1 μm. For instance, the term "substantially the same" means that a deviation is within, e.g., 10% of a given value or range, or within 5%, 3%, 2%, 1%, or 0.5% of a given value or range.
Although terms such as first, second, third, etc. may be used to describe diverse constituent elements, such constituent elements are not limited by the terms.
The terms are used only to discriminate a constituent element from other constituent elements in the specification, and the terms do not relate to the sequence of manufacture unless the specification describes otherwise. The claims may not use the same terms, but instead may use the terms first, second, third, etc. with respect to the order in which an element is claimed. Accordingly, in the following description, a first constituent element may be a second constituent element in a claim.
It should be noted that the technical features in different embodiments described in the following can be replaced, recombined, or mixed with one another to constitute another embodiment without departing from the spirit of the present invention.
In the present invention, the sound producing cell may perform an acoustic transformation converting signals (e.g., electric signals or signals of another suitable type) into an acoustic wave. In some embodiments, the sound producing cell may be a component in a sound producing device, a speaker, a microspeaker or other suitable device, so as to convert the electric signals into the acoustic wave, but not limited thereto. Note that an operation of the sound producing cell means that the acoustic transformation is performed by the sound producing cell (e.g., the acoustic wave is produced by actuating the sound producing cell with an electrical driving signal).
In the use of the sound producing cell, the sound producing cell may be disposed on a base. The base may be hard or flexible, wherein the base may include silicon, germanium, glass, plastic, quartz, sapphire, metal, polymer (e.g., polyimide (PI), polyethylene terephthalate (PET)), any other suitable material or a combination thereof. As an example, the base may be a circuit board including a laminate (e.g., copper clad laminate, CCL), a land grid array (LGA) board or any other suitable board containing conductive material, but not limited thereto. Note that a normal direction of the base may be parallel to the direction Z in the drawings.
Referring to FIG. 1 and FIG. 2, FIG. 1 is a schematic diagram of a top view illustrating a sound producing cell according to a first embodiment of the present invention, and FIG. 2 is an enlarged schematic diagram showing a structure in a region R1 in FIG. 1. As shown in FIG. 1, the sound producing cell 100 includes a membrane 110 and at least one anchor structure 120 outside the membrane 110, wherein the membrane 110 is connected to the anchor structure 120, so as to be anchored by the anchor structure 120. For example, the membrane 110 may be surrounded by the anchor structure 120, but not limited thereto. In the operation of the sound producing cell 100, the membrane 110 can be actuated to have a movement. In this embodiment, the membrane 110 may be actuated to move upwardly and downwardly, but not limited thereto. Note that, in the present invention, the terms "move upwardly" and "move downwardly" represent that the membrane 110 moves substantially along the direction Z. During the operation of the sound producing cell 100, the anchor structure 120 may be immobilized. Namely, the anchor structure 120 may be a fixed end (or fixed edge) with respect to the membrane 110 during the operation of the sound producing cell 100.
A shape of the membrane 110 may be designed based on requirement(s). In some embodiments, the shape of the membrane 110 may be a polygon (e.g., a rectangle or a rectangle with chamfers), a shape having a curved edge or other suitable shapes, but not limited thereto.
For example, the shape of the membrane 110 shown in FIG. 1 may be a rectangle with chamfers, but not limited thereto. The membrane 110 and the anchor structure 120 may include any suitable material(s). In some embodiments, the membrane 110 and the anchor structure 120 may individually include silicon (e.g., single crystalline silicon or poly-crystalline silicon), a silicon compound (e.g., silicon carbide, silicon oxide), germanium, a germanium compound, gallium, a gallium compound (e.g., gallium nitride or gallium arsenide) or a combination thereof, but not limited thereto. The membrane 110 and the anchor structure 120 may have the same material or different materials.
In the present invention, the membrane 110 may include a plurality of subparts. As shown in FIG. 1, the membrane 110 includes a first membrane subpart 112 and a second membrane subpart 114, wherein the first membrane subpart 112 and the second membrane subpart 114 are opposite to each other in the top view, only one edge of the first membrane subpart 112 is anchored by being connected to the anchor structure 120, only one edge of the second membrane subpart 114 is anchored by being connected to the anchor structure 120, and the other edges of the first membrane subpart 112 and of the second membrane subpart 114 are non-anchored and not connected to the anchor structure 120 (these edges are referred to as "non-anchored edges" in the following). Namely, in FIG. 1, a first anchored edge 112a of the first membrane subpart 112 is the only edge of the first membrane subpart 112 which is anchored, and a second anchored edge 114a of the second membrane subpart 114 is the only edge of the second membrane subpart 114 which is anchored, wherein the first membrane subpart 112 is directly connected to the anchor structure 120 through the first anchored edge 112a only, and the second membrane subpart 114 is directly connected to the anchor structure 120 through the second anchored edge 114a only. In the present invention, the first anchored edge 112a and the second anchored edge 114a may be fully or partially anchored. For example, in the embodiment shown in FIG. 1, the first anchored edge 112a and the second anchored edge 114a are fully anchored.
As shown in FIG. 1, the membrane 110 has a plurality of slits SL, wherein the membrane 110 may be divided into the subparts by the slit(s) SL. In the present invention, the slit SL may have at least one straight pattern, at least one curved pattern or a combination thereof, and a width of the slit SL should be sufficiently small. For example, the width of the slit SL may range from 1 μm to 5 μm, but not limited thereto. In FIG. 1 and FIG. 2, the membrane 110 may have a first slit SL1, at least one second slit SL2 and at least one third slit SL3, wherein the first slit SL1 may be formed between the first membrane subpart 112 and the second membrane subpart 114, the second slit SL2 may be formed between the first membrane subpart 112 and the anchor structure 120, the third slit SL3 may be formed between the second membrane subpart 114 and the anchor structure 120, an end of the second slit SL2 may be situated in a corner region CR (shown in FIG. 2) of the membrane 110, and an end of the third slit SL3 may be situated in another corner region CR of the membrane 110. For example, in FIG. 1, the membrane 110 may have one first slit SL1, two second slits SL2 and two third slits SL3, which are straight; the first membrane subpart 112 may be between the two second slits SL2 in the top view, and the second membrane subpart 114 may be between the two third slits SL3 in the top view, but not limited thereto.
In FIG. 1, the non-anchored edges of each subpart may be accomplished by the slits SL. Regarding the first membrane subpart 112, a first non-anchored edge 112n1 opposite to the first anchored edge 112a in the top view may be defined by the first slit SL1, and a second non-anchored edge 112n2 adjacent to the first anchored edge 112a is defined by the second slit SL2. Regarding the second membrane subpart 114, a third non-anchored edge 114n3 opposite to the second anchored edge 114a in the top view may be defined by the first slit SL1, and a fourth non-anchored edge 114n4 adjacent to the second anchored edge 114a is defined by the third slit SL3.
In the present invention, the shapes of the subparts of the membrane 110 may be designed based on requirement(s), wherein the shape of a subpart of the membrane 110 may be a polygon (e.g., a rectangle), a shape having a curved edge or other suitable shapes. For instance, in FIG. 1, the shape of the first membrane subpart 112 and the shape of the second membrane subpart 114 may substantially be rectangles, and the first membrane subpart 112 and the second membrane subpart 114 may be substantially congruent, but not limited thereto. Thus, in FIG. 1, the second non-anchored edge 112n2 may be adjacent to and between the first non-anchored edge 112n1 and the first anchored edge 112a, and the fourth non-anchored edge 114n4 may be adjacent to and between the third non-anchored edge 114n3 and the second anchored edge 114a, but not limited thereto.
In FIG. 1, the second slit SL2 and the third slit SL3 are connected to the first slit SL1. For example, the first slit SL1 may be connected between the two second slits SL2 and connected between the two third slits SL3, but not limited thereto. Since the shape of the first membrane subpart 112 and the shape of the second membrane subpart 114 may substantially be rectangles, the first anchored edge 112a, the first non-anchored edge 112n1, the second anchored edge 114a and the third non-anchored edge 114n3 are substantially parallel to each other and have substantially the same length, and the second non-anchored edge 112n2 and the fourth non-anchored edge 114n4 are substantially parallel to each other (i.e., parallel to the direction X) and have substantially the same length. That is to say, the first slit SL1 defining the first non-anchored edge 112n1 and the third non-anchored edge 114n3 is parallel to the first anchored edge 112a and the second anchored edge 114a. In some embodiments, in FIG. 1, the second slit SL2 and the third slit SL3 may be connected, such that the second slit SL2 and the third slit SL3 may be combined to form a long straight slit, but not limited thereto.
As shown in FIG. 1, the first anchored edge 112a of the first membrane subpart 112 is one of the edges of the membrane 110, and the second anchored edge 114a of the second membrane subpart 114 is another one of the edges of the membrane 110. The second non-anchored edge 112n2 of the first membrane subpart 112 may or may not be one of the edges of the membrane 110, and the fourth non-anchored edge 114n4 of the second membrane subpart 114 may or may not be one of the edges of the membrane 110.
For example, in FIG. 1, the second non-anchored edge 112n2 of the first membrane subpart 112 may not be an edge of the membrane 110, and the fourth non-anchored edge 114n4 of the second membrane subpart 114 may not be an edge of the membrane 110, such that the second slit SL2 may be between the first membrane subpart 112 and one of the edges of the membrane 110 in the top view, and the third slit SL3 may be between the second membrane subpart 114 and one of the edges of the membrane 110 in the top view, but not limited thereto. Note that the slit SL may release the residual stress of the membrane 110, wherein the residual stress is generated during the manufacturing process of the membrane 110 or originally exists in the membrane 110.
The sound producing cell 100 may include an actuating layer 130 disposed on the membrane 110 and configured to actuate the membrane 110. In some embodiments, as shown in FIG. 1, the actuating layer 130 may not totally overlap the membrane 110 in the top view. For example, in FIG. 1, the actuating layer 130 may be disposed on the first membrane subpart 112 and the second membrane subpart 114, and the actuating layer 130 may overlap a portion of the first membrane subpart 112 and a portion of the second membrane subpart 114 in the top view. Optionally, in FIG. 1, the actuating layer 130 may be disposed on and overlap the anchor structure 120, and the actuating layer 130 may overlap the anchored edge of each subpart of the membrane 110, but not limited thereto. As shown in FIG. 1, in the top view, a distance may exist between the actuating layer 130 and the slit SL, so as to enhance the reliability of the slit SL and the actuating layer 130, but not limited thereto.
The actuating layer 130 may include an actuator having a monotonic electromechanical converting function with respect to the movement of the membrane 110 along the direction Z. In some embodiments, the actuating layer 130 may include a piezoelectric actuator, an electrostatic actuator, a nanoscopic-electrostatic-drive (NED) actuator, an electromagnetic actuator or any other suitable actuator, but not limited thereto. For example, in an embodiment, the actuating layer 130 may include a piezoelectric actuator; the piezoelectric actuator may contain, for example, two electrodes and a piezoelectric material layer (e.g., lead zirconate titanate, PZT) disposed between the electrodes, wherein the piezoelectric material layer may actuate the membrane 110 based on driving signals (e.g., driving voltages) received by the electrodes, but not limited thereto. For example, in another embodiment, the actuating layer 130 may include an electromagnetic actuator (such as a planar coil), wherein the electromagnetic actuator may actuate the membrane 110 based on a received driving signal (e.g., a driving current) and a magnetic field (i.e., the membrane 110 may be actuated by the electromagnetic force), but not limited thereto. For example, in still another embodiment, the actuating layer 130 may include an electrostatic actuator (such as a conducting plate) or a NED actuator, wherein the electrostatic actuator or the NED actuator may actuate the membrane 110 based on a received driving signal (e.g., a driving voltage) and an electrostatic field (i.e., the membrane 110 may be actuated by the electrostatic force), but not limited thereto.
The membrane 110 is actuated by the actuating layer 130 so as to move along the direction Z, thereby performing the acoustic transformation. Namely, the subparts of the membrane 110 may be actuated to perform an up-and-down movement, such that the acoustic transformation is performed.
Note that the acoustic wave is produced due to the movement of the membrane 110 actuated by the actuating layer 130, and the movement of the membrane 110 is related to a sound pressure level (SPL) of the acoustic wave. When a subpart performs the up-and-down movement, openings in the direction Z may be formed adjacent to all of its non-anchored edges. For example, in the operation of the sound producing cell 100, a central opening may be formed between the first non-anchored edge 112n1 of the first membrane subpart 112 and the third non-anchored edge 114n3 of the second membrane subpart 114, and side openings may be respectively formed between the second non-anchored edge 112n2 of the first membrane subpart 112 and the anchor structure 120 and between the fourth non-anchored edge 114n4 of the second membrane subpart 114 and the anchor structure 120.
The subparts of the membrane 110 may move along the same direction or opposite directions based on requirement(s). In some embodiments, the first membrane subpart 112 and the second membrane subpart 114 may move up and down in the direction Z synchronously (i.e., the first membrane subpart 112 and the second membrane subpart 114 may be actuated to move toward the same direction), so as to avoid a big central opening between the first membrane subpart 112 and the second membrane subpart 114 from being formed, but not limited thereto.
The actuating layer 130 may actuate the membrane 110 to produce the acoustic wave based on received driving signal(s). The acoustic wave corresponds to an input audio signal, and the driving signal applied on the actuating layer 130 corresponds (is related) to the input audio signal.
Note that a short side of the sound producing cell 100 (or membrane 110) may be beneficial for obtaining a higher resonant frequency, and a long side of the sound producing cell 100 (or membrane 110) may be beneficial for enlarging the SPL. In other words, the sound producing cell 100 (or membrane 110) with a large aspect ratio, i.e., the ratio of the length of its long side to the length of its short side, may achieve both a higher resonant frequency and a larger SPL, compared to a cell with a smaller aspect ratio. The aspect ratio of the sound producing cell 100 (or membrane 110) may depend on practical requirements. For example, the aspect ratio of the sound producing cell 100 (or membrane 110) may be larger than 2, so as to enhance the performance of the sound producing cell 100, but not limited thereto.
In the following, the details of a method of manufacturing the sound producing cell 100 will be further exemplarily explained. Note that in the following manufacturing method, the actuating layer 130 in the sound producing cell 100 may include a piezoelectric actuator for example, but not limited thereto. Any suitable type of actuator can be included in the actuating layer 130 of the sound producing cell 100. In the following manufacturing method, the forming process may include atomic layer deposition (ALD), chemical vapor deposition (CVD), other suitable process(es) or a combination thereof. The patterning process may include, for example, photolithography, an etching process, any other suitable process(es) or a combination thereof.
Referring to FIG. 3 to FIG. 8, FIG. 3 to FIG. 8 are schematic diagrams illustrating structures at different stages of a manufacturing method of a sound producing cell according to an embodiment of the present invention. In this embodiment, the sound producing cell 100 may be manufactured by at least one semiconductor process to be a MEMS chip, but not limited thereto.
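Before turning to the fabrication steps of FIG. 3 to FIG. 8, the aspect-ratio trade-off described above can be illustrated numerically. The sketch below treats each membrane subpart as a rectangular silicon cantilever spanning roughly half the short side and uses the standard Euler-Bernoulli estimate for its first resonant frequency; this model, the material constants, and all dimensions are assumptions for illustration only, not values taken from this disclosure.

```python
import math

# Assumed material and geometry values for illustration (not from the patent).
E = 169e9     # Young's modulus of silicon, Pa (orientation dependent)
RHO = 2330.0  # density of silicon, kg/m^3

def cantilever_f1(length_m: float, thickness_m: float) -> float:
    """First resonant frequency of a rectangular cantilever
    (Euler-Bernoulli beam, first mode constant lambda_1 ~= 1.8751)."""
    lam = 1.8751
    return (lam ** 2 / (2 * math.pi)) * (thickness_m / length_m ** 2) \
        * math.sqrt(E / (12 * RHO))

# Each subpart spans roughly half the short side of the membrane, so the
# resonant frequency tracks the short side, while lengthening the long side
# only adds radiating area (a proxy for attainable SPL).
t = 5e-6            # membrane thickness (assumed)
short_side = 500e-6 # short side of the membrane (assumed)
for long_side in (1e-3, 2e-3, 3e-3):  # growing aspect ratio
    f1 = cantilever_f1(short_side / 2, t)
    area = short_side * long_side
    print(f"aspect ratio {long_side / short_side:.0f}: "
          f"f1 ~= {f1 / 1e3:.0f} kHz, area = {area * 1e6:.2f} mm^2")
```

The loop makes the point of the paragraph above: the first resonance stays fixed as the long side (and hence the area) grows, so a high aspect ratio raises SPL capability without sacrificing resonant frequency.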
As shown in FIG. 3, a wafer WF is provided, wherein the wafer WF may include a first layer WL1 and a second layer WL2, and may optionally include an insulating layer WL3 between the first layer WL1 and the second layer WL2. The first layer WL1, the insulating layer WL3 and the second layer WL2 may individually include any suitable material, such that the wafer WF may be of any suitable type. For instance, the first layer WL1 and the second layer WL2 may individually include silicon (e.g., single crystalline silicon or poly-crystalline silicon), silicon carbide, germanium, gallium nitride, gallium arsenide, other suitable material or a combination thereof. In some embodiments, the first layer WL1 may include single crystalline silicon, such that the wafer WF may be a silicon on insulator (SOI) wafer, but not limited thereto. For instance, the insulating layer WL3 may include an oxide, such as silicon oxide (e.g., silicon dioxide), but not limited thereto. The thicknesses of the first layer WL1, the insulating layer WL3 and the second layer WL2 may be individually adjusted based on requirement(s).
In FIG. 3, a compensation oxide layer CPS may optionally be formed on an upper side of the wafer WF, wherein the upper side is above a top surface WL1a of the first layer WL1 opposite to the second layer WL2, such that the first layer WL1 is between the compensation oxide layer CPS and the second layer WL2. The material of the oxide contained in the compensation oxide layer CPS and the thickness of the compensation oxide layer CPS may be designed based on requirement(s).
In FIG. 3, a first conductive layer CT1 and an actuating material AM may be formed on the upper side of the wafer WF (on the first layer WL1) in sequence, such that the first conductive layer CT1 may be between the actuating material AM and the first layer WL1. In some embodiments, the first conductive layer CT1 may be in contact with the actuating material AM. The first conductive layer CT1 may include any suitable conductive material, and the actuating material AM may include any suitable material. In some embodiments, the first conductive layer CT1 may include metal (such as platinum), and the actuating material AM may include a piezoelectric material, but not limited thereto. For example, the piezoelectric material may include a lead-zirconate-titanate (PZT) material, but not limited thereto. Moreover, the thicknesses of the first conductive layer CT1 and the actuating material AM may be individually adjusted based on requirement(s). Then, in FIG. 3, the actuating material AM, the first conductive layer CT1 and the compensation oxide layer CPS may be patterned in sequence.
As shown in FIG. 4, a separating insulating layer SIL may be formed on the actuating material AM and be patterned. The thickness of the separating insulating layer SIL and the material of the separating insulating layer SIL may be designed based on requirement(s). For instance, the material of the separating insulating layer SIL may be an oxide, but not limited thereto. As shown in FIG. 4, a second conductive layer CT2 may be formed on the actuating material AM and the separating insulating layer SIL, and then the second conductive layer CT2 may be patterned. The thickness of the second conductive layer CT2 and the material of the second conductive layer CT2 may be designed based on requirement(s). For instance, the second conductive layer CT2 may include metal (such as platinum), but not limited thereto.
For instance, the second conductive layer CT2 may be in contact with the actuating material AM. The actuating material AM, the first conductive layer CT1 and the second conductive layer CT2 may be sub-layers in the actuating layer 130 of the sound producing cell 100, so as to make the actuating layer 130 have a piezoelectric actuator including two electrodes and the actuating material AM between the two electrodes. In FIG. 4, the separating insulating layer SIL may be configured to separate at least a portion of the first conductive layer CT1 from at least a portion of the second conductive layer CT2.
As shown in FIG. 5, the first layer WL1 of the wafer WF may be patterned, so as to form a trench line TL. In FIG. 5, the trench line TL is a portion where the first layer WL1 is removed. That is to say, the trench line TL is between two parts of the first layer WL1.
As shown in FIG. 6, the wafer WF is disposed on a substrate SB and an adhering layer AL, wherein the adhering layer AL is adhered between the substrate SB and the first layer WL1 of the wafer WF. In FIG. 6, the actuating layer 130 is between the wafer WF and the substrate SB. Due to this step, the first layer WL1 of the wafer WF and the structures on the upper side of the wafer WF (i.e., the structures above the top surface WL1a of the wafer WF) may be protected in subsequent steps.
As shown in FIG. 7, the second layer WL2 of the wafer WF may be patterned, so as to make the second layer WL2 form the anchor structure 120 and to make the first layer WL1 form the membrane 110 anchored by the anchor structure 120. In detail, the second layer WL2 of the wafer WF may have a first part and a second part; the first part of the second layer WL2 may be removed, and the second part of the second layer WL2 may form the anchor structure 120. Since the first part of the second layer WL2 is removed, the first layer WL1 forms the membrane 110, wherein the membrane 110 corresponds to the removed first part of the second layer WL2 in the top view. For example, the first part of the second layer WL2 may be removed by a deep reactive ion etching (DRIE) process, but not limited thereto. Note that the subparts (e.g., the first membrane subpart 112 and the second membrane subpart 114) of the membrane 110 are determined when patterning the first layer WL1 of the wafer WF to form the trench line(s) TL. Optionally, in FIG. 7, since the insulating layer WL3 of the wafer WF exists, after the second layer WL2 of the wafer WF is patterned, a part of the insulating layer WL3 corresponding to the first part of the second layer WL2 may also be removed, so as to make the first layer WL1 form the membrane 110, but not limited thereto. Furthermore, in FIG. 7, the second part of the second layer WL2, a portion of the insulating layer WL3 overlapping the second part of the second layer WL2 and a portion of the first layer WL1 overlapping the second part of the second layer WL2 may be combined to serve as the anchor structure 120.
As shown in FIG. 8, the substrate SB and the adhering layer AL are removed by a suitable process, so as to complete the manufacture of the sound producing cell 100. For example, the substrate SB and the adhering layer AL may be removed by a peel-off process, but not limited thereto. In FIG. 8, since the first part of the second layer WL2 is removed so that the membrane 110 included in the first layer WL1 is formed, the slit SL is formed within and penetrates through the membrane 110 because of the trench line TL.
Since the slit SL is formed because of the trench line TL, the width of the trench line TL may be designed based on the requirement of the slit SL. For example, the width of the trench line TL may be less than or equal to 5 μm, less than or equal to 3 μm, or less than or equal to 2 μm, so as to make the slit SL have the desired width, but not limited thereto. The sound producing cell and its manufacturing method of the present invention are not limited by the above embodiments. Other embodiments of the present invention are described below. For ease of comparison, the same components will be labeled with the same symbols in the following. The following descriptions relate to the differences between the embodiments, and repeated parts will not be redundantly described. Referring toFIG.9andFIG.10,FIG.9is a schematic diagram of a top view illustrating a sound producing cell according to a second embodiment of the present invention, andFIG.10is an enlarged schematic diagram showing a structure in a region R2inFIG.9. As shown inFIG.9andFIG.10, a difference between this embodiment and the first embodiment is that the sound producing cell200of this embodiment includes a recess structure RS disposed at a corner of the sound producing cell200and outside the membrane110, wherein the recess structure RS is directly connected to a slit segment SLs in the corner region CR of the membrane110. In the embodiment shown inFIG.9, the sound producing cell200may include four recess structures RS disposed at four corners of the sound producing cell200and outside the membrane110, but not limited thereto. The slit segment SLs in the corner region CR may be a slit SL connected to the second slit SL2or the third slit SL3, or the slit segment SLs in the corner region CR may be a portion of the second slit SL2or a portion of the third slit SL3. The slit segment SLs may have a curved pattern, a straight pattern or a combination thereof. For example, inFIG.10, the slit segment SLs may be connected between the end of the second slit SL2situated in the corner region CR and the recess structure RS, and the slit segment SLs may have a curved pattern, but not limited thereto. As shown inFIG.9andFIG.10, the recess structure RS may be formed on the anchor structure120and at a corner of the sound producing cell200. For example, the sound producing cell200may have a first layer WL1and a second layer WL2disposed under the first layer WL1(e.g.,FIG.8), wherein a portion of the first layer WL1may be configured to serve as the membrane110(i.e., the first layer WL1may include the membrane110), another portion of the first layer WL1may surround the membrane110and combine with the second layer WL2to be the anchor structure120, the slit segment SLs in the corner region CR of the membrane110may pass through the first layer WL1, and the recess structure RS may pass through the first layer WL1and have a bottom belonging to the anchor structure120(e.g., the second layer WL2), but not limited thereto. In this case, regarding the manufacturing method of the sound producing cell200, the slits SL of the membrane110and the recess structure RS may be patterned (etched) in the same process (the same etching process). As shown inFIG.9andFIG.10, the recess structure RS may have a curved pattern, and the curved pattern of the recess structure RS may be designed based on requirement(s). 
For instance, inFIG.10, the slit segment SLs in the corner region CR and the recess structure RS may be combined to form a pattern with a half circular arc, but not limited thereto. The existence of the curved recess structure RS connected to the slit segment SLs situated in the corner region CR may enhance the success rate of the manufacturing process of the sound producing cell200, thereby increasing the yield rate of the sound producing cell200. In detail, in the step of removing the substrate SB and the adhering layer AL (e.g., the peel-off process), due to the existence of the curved recess structure RS connected to the slit segment SLs situated in the corner region CR, the stress concentration position may be changed from the corner region CR of the membrane110(e.g., the end of the slit SL) to the recess structure RS, and the stress applied on the recess structure RS may be dispersed, so as to reduce the damage on the membrane110during this process. Moreover, since the recess structure RS has the curved pattern, the stress applied on the recess structure RS in this process may be dispersed effectively, so as to decrease the damage on the recess structure RS, thereby enhancing the success rate of the manufacturing process of the sound producing cell200. Referring toFIG.11,FIG.11is a schematic diagram of a top view illustrating a sound producing cell according to a third embodiment of the present invention. As shown inFIG.11, a difference between this embodiment and the first embodiment is that the membrane110of the sound producing cell300of this embodiment includes a latch structure310. Under the condition that the first membrane subpart112and the second membrane subpart114move along the direction Z (i.e., the normal direction of the base where the membrane110is disposed), the latch structure310may lock the first membrane subpart112and the second membrane subpart114when a moving distance of the first membrane subpart112along the direction Z and a moving distance of the second membrane subpart114along the direction Z are greater than a threshold value. Namely, the latch structure310is configured to limit the moving distances of the first membrane subpart112and the second membrane subpart114. Because the subpart of the membrane110only has one anchored edge, the subpart of the membrane110may be fragile and may be damaged in the manufacturing process. In this embodiment, the existence of the latch structure310may enhance the success rate of manufacturing the membrane110, thereby increasing the yield rate of the sound producing cell300. In detail, in the step of removing the substrate SB and the adhering layer AL (e.g., the peel-off process), the displacement of the first membrane subpart112and the displacement of the second membrane subpart114along the direction Z are caused by the adhering force of the adhering layer AL. In this case, the latch structure310may lock the first membrane subpart112and the second membrane subpart114when the first membrane subpart112and the second membrane subpart114move along the direction Z with a displacement greater than the threshold value, so as to limit the movement of the first membrane subpart112and the second membrane subpart114and provide a restoring force for the first membrane subpart112and the second membrane subpart114, thereby reducing the damage on the membrane110. The latch structure310may have any suitable design based on requirement(s). In this embodiment, the latch structure310shown inFIG.11may be formed because of the slit(s) SL. 
For example, inFIG.11, the latch structure310may be formed because of two first slits SL1and three fourth slits SL4and SL4′, wherein the first slits SL1and the fourth slits SL4and SL4′ may be between the first membrane subpart112and the second membrane subpart114, and three fourth slits SL4and SL4′ may be connected between two first slits SL1. InFIG.11, the first slits SL1may be parallel to each other, but not limited thereto. InFIG.11, the fourth slit SL4′ extending along the direction X may be connected between two fourth slits SL4extending along the direction Y, and the fourth slit SL4extending along the direction Y may be connected between the fourth slit SL4′ extending along the direction X and the first slit SL1extending along the direction X, but not limited thereto. As shown inFIG.11, the latch structure310may include a first latch component312and a second latch component314, the first latch component312may be a portion of the first membrane subpart112(equivalently, the first latch component312may belong to the first membrane subpart112), and the second latch component314may be a portion of the second membrane subpart114(equivalently, the second latch component314may belong to the second membrane subpart114). InFIG.11, the first latch component312may be disposed between the second latch component314of the second membrane subpart114and another portion of the second membrane subpart114, and the second latch component314may be disposed between the first latch component312of the first membrane subpart112and another portion of the first membrane subpart112. For example, inFIG.11, a length direction of the first latch component312and a length direction of the second latch component314may be substantially parallel to the direction X, but not limited thereto. When the first membrane subpart112and the second membrane subpart114move along the direction Z with a displacement greater than the threshold value, the first latch component312is buckled to the second latch component314, so as to lock the first membrane subpart112and the second membrane subpart114. Note that the width of the slit SL and the size of the latch component are related to the buckling effect of the latch structure310. Referring toFIG.12,FIG.12is a schematic diagram of a top view illustrating a sound producing cell according to a fourth embodiment of the present invention. As shown inFIG.12, a difference between this embodiment and the first embodiment is that the membrane110of the sound producing cell400of this embodiment includes at least one spring connected between the subparts of the membrane110, wherein the number of the spring(s) may be designed based on requirement(s). InFIG.12, the membrane110may include a first spring SPR1directly connected between the first membrane subpart112and the second membrane subpart114. Because of the existence of the first spring SPR1, the success rate of manufacturing the membrane110may be enhanced, thereby increasing the yield rate of the sound producing cell400. In detail, in the step of removing the substrate SB and the adhering layer AL, the displacement of the first membrane subpart112and the displacement of the second membrane subpart114along the direction Z are caused by the adhering force of the adhering layer AL. 
When the first membrane subpart112and the second membrane subpart114move along the direction Z with a large displacement, the first spring SPR1may limit the movement of the first membrane subpart112and the second membrane subpart114and provide a restoring force for the first membrane subpart112and the second membrane subpart114, thereby reducing the damage on the membrane110. The spring may have any suitable design based on requirement(s). As shown inFIG.12, the first spring SPR1may be formed because of the slit(s) SL. In this embodiment, the first spring SPR1shown inFIG.12may be formed because of two first slits SL1and two fifth slits SL5, wherein the fifth slit SL5may be connected to the first slit SL1, and the fifth slit SL5may have a curved pattern. For instance, the fifth slit SL5may include a hook-shaped curved pattern, and one end of the fifth slit SL5is not connected to another slit SL, but not limited thereto. For instance, the first slits SL1may be parallel to each other, but not limited thereto. When the membrane110moves, the stress caused by the deformation of the membrane110may be applied on the spring. InFIG.12, because the fifth slit SL5includes the curved pattern (i.e., the hook-shaped curved pattern), the effect of the stress concentration may be reduced, such that the damage on the membrane110and the first spring SPR1may be reduced, thereby increasing the yield rate of the sound producing cell400. In addition, as shown inFIG.12, a connecting direction from the first spring SPR1to the first membrane subpart112may be different from a connecting direction from the first spring SPR1to the second membrane subpart114. For example, inFIG.12, the connecting direction from the first spring SPR1to the first membrane subpart112may be opposite to the connecting direction from the first spring SPR1to the second membrane subpart114, but not limited thereto. For example, inFIG.12, the first spring SPR1may substantially be a 1-shape, but not limited thereto. Referring toFIG.13,FIG.13is a schematic diagram of a top view illustrating a sound producing cell according to a fifth embodiment of the present invention. As shown inFIG.13, a difference between this embodiment and the fourth embodiment is the design of the first spring SPR1. InFIG.13, the first spring SPR1of the membrane110of the sound producing cell500may be formed because of the two first slits SL1, two fifth slits SL5and a sixth slit SL6, wherein two fifth slits SL5may be connected to the same first slit SL1, the sixth slit SL6may be connected to another first slit SL1, the fifth slit SL5may have two curved patterns and one straight pattern, and the sixth slit SL6may be between two fifth slits SL5and have a curved pattern. For instance, the fifth slit SL5may include a hook-shaped curved pattern, and one end of the fifth slit SL5is not connected to another slit SL, but not limited thereto. In addition, in the first spring SPR1shown inFIG.13, the connecting direction from the first spring SPR1to the first membrane subpart112may be the same as the connecting direction from the first spring SPR1to the second membrane subpart114, but not limited thereto. For example, inFIG.13, the first spring SPR1may substantially be a U-shape, but not limited thereto. Due to this design, the size of the central opening between the first membrane subpart112and the second membrane subpart114may be decreased, so as to reduce the leakage of the air in the operation of the sound producing cell500. 
When the membrane110moves, the stress caused by the deformation of the membrane110may be applied on the spring. InFIG.13, because of the design of the U-shape first spring SPR1having curved slits SL, the effect of the stress concentration may be reduced, such that the damage on the membrane110and the first spring SPR1may be reduced, thereby increasing the yield rate of the sound producing cell500. Referring toFIG.14andFIG.15,FIG.14is a schematic diagram of a top view illustrating a sound producing cell according to a sixth embodiment of the present invention, andFIG.15is an enlarged schematic diagram showing a structure in a region R3inFIG.14. As shown inFIG.14andFIG.15, a difference between this embodiment and the first embodiment is that the membrane110of the sound producing cell600of this embodiment further includes a third membrane subpart116and a fourth membrane subpart118. The third membrane subpart116and the fourth membrane subpart118may be disposed between the first membrane subpart112and the second membrane subpart114in the top view, and the third membrane subpart116and the fourth membrane subpart118may be opposite to each other in the top view. In other words, the third membrane subpart116may be disposed at a first side (e.g., left side) of the sound producing cell600between the first membrane subpart112and the second membrane subpart114in the top view, the fourth membrane subpart118may be disposed at a second side (e.g., right side) of the sound producing cell600between the first membrane subpart112and the second membrane subpart114in the top view, and the first side and the second side of the sound producing cell600may be opposite to each other in the top view. InFIG.14, only one edge of the third membrane subpart116may be anchored by being connected to the anchor structure120, only one edge of the fourth membrane subpart118may be anchored by being connected to the anchor structure120, and other edges of the third membrane subpart116and other edges of the fourth membrane subpart118may be non-anchored and not connected to the anchor structure120. Namely, a third anchored edge116aof the third membrane subpart116may be the only edge of the third membrane subpart116which is anchored, and a fourth anchored edge118aof the fourth membrane subpart118is the only edge of the fourth membrane subpart118which is anchored, wherein the third membrane subpart116may be directly connected to the anchor structure120through the third anchored edge116aonly, and the fourth membrane subpart118may be directly connected to the anchor structure120through the fourth anchored edge118aonly. 
InFIG.14, one second slit SL2may be between the first membrane subpart112and the third membrane subpart116to define one second non-anchored edge112n2of the first membrane subpart112and one fifth non-anchored edge116n5of the third membrane subpart116, another second slit SL2may be between the first membrane subpart112and the fourth membrane subpart118to define another second non-anchored edge112n2of the first membrane subpart112and one sixth non-anchored edge118n6of the fourth membrane subpart118, one third slit SL3may be between the second membrane subpart114and the third membrane subpart116to define one fourth non-anchored edge114n4of the second membrane subpart114and another fifth non-anchored edge116n5of the third membrane subpart116, and another third slit SL3may be between the second membrane subpart114and the fourth membrane subpart118to define another fourth non-anchored edge114n4of the second membrane subpart114and another sixth non-anchored edge118n6of the fourth membrane subpart118. In some embodiments, the fifth non-anchored edge116n5of the third membrane subpart116may be adjacent to the third anchored edge116aof the third membrane subpart116, and the sixth non-anchored edge118n6of the fourth membrane subpart118may be adjacent to the fourth anchored edge118aof the fourth membrane subpart118, but not limited thereto. As shown inFIG.14, the shape of the first membrane subpart112and the shape of the second membrane subpart114may substantially be trapezoids, the shape of the third membrane subpart116and the shape of the fourth membrane subpart118may substantially be triangles, the first membrane subpart112and the second membrane subpart114may be substantially congruent, and the third membrane subpart116and the fourth membrane subpart118may be substantially congruent, but not limited thereto. During the operation of the sound producing cell600, side openings are respectively between the first membrane subpart112and the third membrane subpart116, between the second membrane subpart114and the third membrane subpart116, between the first membrane subpart112and the fourth membrane subpart118and between the second membrane subpart114and the fourth membrane subpart118. The size of the side opening is related to a low frequency roll-off (LFRO) effect in the frequency response of the sound producing cell600, wherein a strong LFRO effect may cause an evident SPL drop of the acoustic wave at low frequencies. In detail, regarding the side opening of the sound producing cell600, an acoustic resistance for low frequency may be expressed by the formula R ∝ L/(b×d³), wherein R is the acoustic resistance for low frequency, L is the thickness of the membrane110, b is the length of the second non-anchored edge112n2of the first membrane subpart112or the length of the fourth non-anchored edge114n4of the second membrane subpart114, and d is the maximum size of the side opening in the direction Z. If the acoustic resistance for low frequency is increased, the leakage of the air (e.g., acoustic leakage) in the operation of the sound producing cell600is decreased, so as to reduce the LFRO effect in the frequency response of the sound producing cell600. According to the formula, when d (i.e., the maximum size of the side opening in the direction Z) is decreased, the acoustic resistance for low frequency is increased. 
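Since the proportionality above underlies several of the following design choices, a minimal numeric sketch may help. The proportionality constant and the unit system are arbitrary here; only the scaling with L, b and d is taken from the text:

# R ∝ L/(b×d³): acoustic resistance of a side opening, per the formula above.
def acoustic_resistance(L: float, b: float, d: float, k: float = 1.0) -> float:
    """L: membrane thickness, b: non-anchored edge length, d: max opening size along Z."""
    return k * L / (b * d**3)

R_ref = acoustic_resistance(L=1.0, b=1.0, d=1.0)
R_half = acoustic_resistance(L=1.0, b=1.0, d=0.5)
print(R_half / R_ref)  # 8.0 -> halving d raises the acoustic resistance eightfold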
In the first embodiment shown inFIG.1, regarding the first membrane subpart112, the maximum size of the side opening in the direction Z is a maximum distance between the second non-anchored edge112n2and the anchor structure120in the direction Z. In the sixth embodiment shown inFIG.14, regarding the first membrane subpart112, the maximum size of the side opening in the direction Z is a maximum distance between the second non-anchored edge112n2of the first membrane subpart112and the fifth non-anchored edge116n5of the third membrane subpart116(or the sixth non-anchored edge118n6of the fourth membrane subpart118) in the direction Z. In the sixth embodiment shown inFIG.14, since the third membrane subpart116and the fourth membrane subpart118exist, d shown in the formula may be decreased by controlling the third membrane subpart116and the fourth membrane subpart118to be close to the first membrane subpart112and the second membrane subpart114in the direction Z during the operation of the sound producing cell600. That is to say, inFIG.14, the third membrane subpart116may be configured to reduce the acoustic leakage at the first side (left side) of the sound producing cell600, and the fourth membrane subpart118is configured to reduce the acoustic leakage at the second side (right side) of the sound producing cell600. The sound producing cell600may include at least one suitable structure to decrease d (i.e., the maximum size of the side opening in the direction Z), thereby enhancing the acoustic resistance for low frequency. In this embodiment, due to this suitable structure, during the operation of the sound producing cell600, the fifth non-anchored edges116n5of the third membrane subpart116may be respectively close to the second non-anchored edge112n2of the first membrane subpart112and the fourth non-anchored edge114n4of the second membrane subpart114in the direction Z, and the sixth non-anchored edges118n6of the fourth membrane subpart118may be respectively close to the second non-anchored edge112n2of the first membrane subpart112and the fourth non-anchored edge114n4of the second membrane subpart114in the direction Z. Accordingly, during the operation of the sound producing cell600, the sizes of the side openings may be reduced, so as to enhance the acoustic resistance for low frequency, thereby reducing the LFRO effect in the frequency response of the sound producing cell600. For example, in order to decrease d, the membrane110may include at least one spring connected between the subparts of the membrane110, such that the non-anchored edges of these subparts may be close to each other in the direction Z during the operation of the sound producing cell600. As shown inFIG.14, the membrane110may include at least one second spring SPR2and at least one third spring SPR3, the second spring SPR2may be directly connected between the first membrane subpart112and the third membrane subpart116or directly connected between the first membrane subpart112and the fourth membrane subpart118, and the third spring SPR3may be directly connected between the second membrane subpart114and the third membrane subpart116or between the second membrane subpart114and the fourth membrane subpart118. 
InFIG.14, the membrane110may include two second springs SPR2and two third springs SPR3, two second springs SPR2may be respectively connected between the first membrane subpart112and the third membrane subpart116and between the first membrane subpart112and the fourth membrane subpart118, and two third springs SPR3may be respectively connected between the second membrane subpart114and the third membrane subpart116and between the second membrane subpart114and the fourth membrane subpart118, but not limited thereto. Note that the second spring SPR2and the third spring SPR3are formed because of the slits SL (e.g., the slits SL other than the first slit SL1, the second slits SL2and the third slits SL3). In addition, in one spring shown inFIG.14, the connecting direction from this spring to one subpart may be the same as the connecting direction from this spring to another subpart, but not limited thereto. For example, inFIG.14, the spring may substantially be a U-shape, but not limited thereto. For example, the U-shape of the spring may have a large curvature, but not limited thereto. Due to this design, the size of the side opening between two subparts may be decreased (i.e., d is decreased), so as to reduce the leakage of the air in the operation of the sound producing cell600, thereby reducing the LFRO effect in the frequency response of the sound producing cell600. For example, in order to decrease d, the actuating layer130may be disposed on the first membrane subpart112, the second membrane subpart114, the third membrane subpart116and the fourth membrane subpart118. During the operation of the sound producing cell600, the actuating layer130may actuate these subparts to move along the direction Z, such that the non-anchored edges of these subparts may be close to each other in the direction Z. Moreover, in the region R3shown inFIG.15, the sound producing cell600may include a recess structure RS outside the membrane110, wherein the recess structure RS may be directly connected to a slit segment SLs in the corner region CR of the membrane110, and the recess structure RS may have a curved pattern (e.g., the recess structure RS may have a pattern with a half circular arc). For example, inFIG.15, the slit segment SLs may be connected between the end of the second slit SL2situated in the corner region CR and the recess structure RS, and the slit segment SLs may have a straight pattern, but not limited thereto. The existence of the curved recess structure RS connected to the slit segment SLs situated in the corner region CR may enhance the success rate of the manufacturing process of the sound producing cell600, thereby increasing the yield rate of the sound producing cell600. Referring toFIG.16,FIG.16is a schematic diagram of a top view illustrating a sound producing cell according to a seventh embodiment of the present invention. As shown inFIG.16, a difference between this embodiment and the sixth embodiment is the design of the spring. In the sound producing cell700shown inFIG.16, the fifth slits SL5including a hook-shaped curved pattern and a straight pattern may be individually connected to the first slit SL1, the second slit SL2or the third slit SL3, and the second springs SPR2and the third springs SPR3may be formed because of the first slit SL1, the second slits SL2, the third slits SL3and the fifth slits SL5, but not limited thereto. Furthermore, inFIG.16, the spring may substantially be a V-shape, but not limited thereto. 
Referring toFIG.17,FIG.17is a schematic diagram of a top view illustrating a sound producing cell according to an eighth embodiment of the present invention. As shown inFIG.17, a difference between this embodiment and the sixth embodiment is that the slits SL of the membrane110of the sound producing cell800further include at least one side slit SLi formed on the third membrane subpart116and/or the fourth membrane subpart118. Due to the existence of the side slits SLi, the structural strengths of the third membrane subpart116and the fourth membrane subpart118may be weakened, such that the second spring SPR2and the third spring SPR3may pull the third membrane subpart116and the fourth membrane subpart118to make their non-anchored edges closer to the non-anchored edges of the first membrane subpart112and the second membrane subpart114in the direction Z during the operation of the sound producing cell800. On the other hand, compared with a structure in which the side slit SLi does not exist, the membrane110of this embodiment may form a plurality of smaller openings replacing one original greater opening between two non-anchored edges of the subparts during the operation of the sound producing cell800, wherein at least one smaller opening may be formed between two non-anchored edges, and at least one smaller opening may be formed by side slit(s) SLi. Namely, d of the original greater opening is changed to a plurality of d′ of the smaller openings, and d′ is smaller than d. For example, according to the above formula, assuming that one original greater opening is replaced by three smaller openings and d of the original greater opening is three times d′ of the smaller opening, the acoustic resistance of the three smaller openings is nine times the acoustic resistance of the original greater opening. Thus, the acoustic resistance for low frequency may be increased by this design. As shown inFIG.17, the second spring SPR2may be formed because of the first slit SL1, the second slit SL2, the fifth slit SL5and the side slit(s) SLi, and the third spring SPR3may be formed because of the first slit SL1, the third slit SL3, the fifth slit SL5and the side slit(s) SLi, but not limited thereto. In some embodiments, as shown inFIG.17, the actuating layer130may be disposed on the first membrane subpart112and the second membrane subpart114, and the actuating layer130may not be disposed on the third membrane subpart116and the fourth membrane subpart118(i.e., no actuating layer is disposed on the third membrane subpart116and the fourth membrane subpart118), but not limited thereto. Moreover, inFIG.17, the membrane110may optionally include a first spring SPR1directly connected between the first membrane subpart112and the second membrane subpart114. For example, the first spring SPR1shown inFIG.17may be formed because of two first slits SL1and two fifth slits SL5, but not limited thereto. 
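The ninefold figure above can be checked directly from the formula R ∝ L/(b×d³), under the assumption (consistent with the stated result) that each of the three smaller openings keeps the same edge length b and that the three openings act as equal acoustic resistances in parallel:

# Verify: one opening of size d replaced by three openings of size d' = d/3.
d, b, L = 3.0, 1.0, 1.0
R_original = L / (b * d**3)
R_small = L / (b * (d / 3.0)**3)   # each smaller opening: 27x the original resistance
R_parallel = R_small / 3.0         # three equal resistances in parallel
print(R_parallel / R_original)     # 9.0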
Referring toFIG.18andFIG.19,FIG.18is a schematic diagram of a top view illustrating a sound producing cell according to a ninth embodiment of the present invention, andFIG.19is a schematic diagram of a side view illustrating the sound producing cell according to the ninth embodiment of the present invention, whereinFIG.18andFIG.19only show the first membrane subpart112, and the design of the second membrane subpart114may be similar to the design of the first membrane subpart112. As shown inFIG.18, a difference between this embodiment and the first embodiment is the design of the anchored edge of the subpart of the membrane110. In the sound producing cell900of this embodiment, the anchored edge of the subpart of the membrane110is partially anchored, such that the anchored edge includes at least one anchored part and at least one non-anchored part, wherein the anchored part of the anchored edge is anchored, and the non-anchored part of the anchored edge is non-anchored. For example, inFIG.18, the first anchored edge112aof the first membrane subpart112which is partially anchored may include two anchored parts AP and one non-anchored part NP between two anchored parts AP, but not limited thereto. The non-anchored part NP of the first anchored edge112amay move along the direction Z when the sound producing cell900is operated (i.e., the first membrane subpart112is actuated), so as to enhance the deformation of the membrane110, thereby increasing the SPL of the acoustic wave produced by the sound producing cell900. In order to make the anchored edge have the anchored part(s) AP and the non-anchored part(s) NP, the slits SL of the membrane110may include at least one inner slit. In this embodiment, the first membrane subpart112may have at least one first inner slit SLn1and at least one second inner slit SLn2, wherein the non-anchored part NP of the first anchored edge112amay be defined by the first inner slit SLn1, and the second inner slit SLn2is connected to the first inner slit SLn1, so as to make the first anchored edge112ahave the anchored part(s) AP and the non-anchored part(s) NP. Namely, the first inner slit SLn1may be parallel to the first anchored edge112aand between the first membrane subpart112and the anchor structure120, and the second inner slit SLn2may not be parallel to the first anchored edge112a. For example, inFIG.18, the first membrane subpart112may have one first slit SL1and two second slits SL2, and the second inner slit SLn2may be a straight slit perpendicular to the first anchored edge112a, but not limited thereto. For example, the second inner slit SLn2may extend from the first anchored edge112atoward the first slit SL1, and the second inner slit SLn2may not be connected to the first slit SL1. The first inner slit SLn1defining the non-anchored part NP of the first anchored edge112amay be connected between two slits SL. For example, inFIG.18, the first inner slit SLn1may be connected between two second inner slits SLn2, such that the anchored part AP and the non-anchored part NP of the first anchored edge112amay be divided by the second inner slit SLn2, but not limited thereto. Optionally, inFIG.18, the first inner slit SLn1and the second inner slit SLn2may be separated from the first slit SL1, the second slit SL2and the third slit SL3, but not limited thereto. As shown inFIG.18, the first membrane subpart112may be divided into a plurality of parts by the inner slits SL. For example, inFIG.18, the first membrane subpart112may be divided into three parts912p1,912p2and912p3, the part912p1and the part912p3may be between the second slit SL2and the second inner slit SLn2, and the part912p2may be between two second inner slits SLn2. For example, inFIG.18, the part912p1and the part912p3may have the anchored part AP of the first anchored edge112a, so as to be anchored by the anchor structure120. 
For example, inFIG.18, the part912p2may have the non-anchored part NP of the first anchored edge112a, such that the part912p2may move along the direction Z with greater displacement (compared with the parts912p1and912p3) during the operation of the sound producing cell900, thereby increasing the SPL of the acoustic wave produced by the sound producing cell900. As shown inFIG.18, the actuating layer130may include three portions respectively disposed on three parts912p1,912p2and912p3of the first membrane subpart112, so as to actuate the first membrane subpart112. InFIG.19showing the side view of the sound producing cell900during its operation, the part912p2may move along the direction Z with greater displacement (compared with the parts912p1and912p3) during the operation of the sound producing cell900, and the non-anchored part NP of the first anchored edge112amay be higher than the anchored part AP in the direction Z. Referring toFIG.20,FIG.20is a schematic diagram of a top view illustrating a sound producing cell according to a tenth embodiment of the present invention. As shown inFIG.20, a difference between this embodiment and the ninth embodiment is the design of the anchored edge of the subpart of the membrane110. In the sound producing cell900′ shown inFIG.20, the first anchored edge112aof the first membrane subpart112may include two non-anchored parts NP and one anchored part AP between two non-anchored parts NP, but not limited thereto. InFIG.20, the first membrane subpart112may have two first inner slits SLn1and two second inner slits SLn2, and the first inner slit SLn1may be connected between the second inner slit SLn2and the second slit SL2, but not limited thereto. InFIG.20, the part912p2may have the anchored part AP of the first anchored edge112a, so as to be anchored by the anchor structure120. InFIG.20, the part912p1and the part912p3may have the non-anchored part NP of the first anchored edge112a, such that the part912p1and the part912p3may move along the direction Z with greater displacement (compared with the part912p2) during the operation of the sound producing cell900′, thereby increasing the SPL of the acoustic wave produced by the sound producing cell900′. In summary, according to the design of the sound producing cell of the present invention, the sound producing cell may achieve higher resonant frequency, larger SPL, high yield rate and/or low air leakage. Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
63,668
11943596
DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS The present invention will hereinafter be described in detail with reference to exemplary embodiments. To make the technical problems to be solved, technical solutions and beneficial effects of the present invention more apparent, the present invention is described in further detail together with the figures and the embodiments. It should be understood that the specific embodiments described herein are only intended to explain the disclosure, not to limit the disclosure. Referring to theFIGS.1-5, the present invention provides one embodiment of an acoustic device100. The acoustic device100includes a frame1, a vibration system2supported on the frame1and a magnetic circuit system3having a magnetic gap30. The vibration system2and the magnetic circuit system3are supported on the frame1respectively, and the magnetic circuit system3drives the vibration system2to generate sounds. The frame1is a rectangular annular hollow structure. In a first embodiment of the present invention, the vibration system2includes a diaphragm21, a voice coil22at least partially located in the magnetic gap30to drive the diaphragm21to generate sounds, a voice coil holder23connecting the diaphragm21and the voice coil22, and a support assembly24connecting the frame1and the voice coil holder23. The diaphragm21includes a dome211, an annular suspension212, and a plurality of mounting portions213respectively disposed at the four corners of the suspension212. The suspension212is disposed around the dome211, and the mounting portions213are fixed to the frame1and arranged to cover a part of the frame1. The voice coil22includes a leading wire221. The voice coil holder23comprises a first fixing portion231extending toward the voice coil22, a second fixing portion232extending toward the support assembly24, a first connecting portion233connecting the first fixing portion231and the second fixing portion232, a flat portion234surrounded by the first fixing portion231and the first connecting portion233, and a second connecting portion235connecting the flat portion234and the first connecting portion233. The first fixing portion231is mounted to the voice coil22, and the second fixing portion232is mounted to the support assembly24. The first connecting portion233is a hollow annular structure. The flat portion234abuts against the diaphragm21. The first connecting portion233is provided with a reinforcing rib2331protruding and extending from the first fixing portion231. The reinforcing rib2331protrudes in a direction away from the diaphragm21. There is a plurality of reinforcing ribs2331, and adjacent reinforcing ribs2331are spaced apart from each other. The flat portion234of the voice coil holder23is attached to a side of the dome211proximal to the voice coil22. The first fixing portion231of the voice coil holder23bends and extends from an inner end of the first connecting portion233in a direction away from the diaphragm21. The second fixing portion232bends and extends from an outer end of the first connecting portion233in a direction away from the diaphragm21. The first fixing portion231and the second fixing portion232are disposed opposite to and spaced apart from each other. The first fixing portion231extends toward the voice coil22and is fixedly connected to an end of the voice coil22close to the diaphragm21, and the second fixing portion232extends toward the support assembly24and is fixedly connected to the support assembly24. 
At least a part of the first connecting portion233abuts against and supports the diaphragm21. In the present embodiment, the voice coil holder23is a plastic voice coil holder, and the reinforcing rib2331is a protruding structure integrally injection-molded with the first fixing portion231, the second fixing portion232and the first connecting portion233. In other embodiments, the reinforcing rib2331can also be a thickened part formed during the molding process. In this case, there is no obvious protruding shape, but the thickness of the first connecting portion at the reinforcing rib is greater than that of the other parts without the reinforcing rib. The acoustic device100can have a rectangular shape, and the reinforcing rib2331is arranged along a direction of a long axis of the first connecting portion233, and/or the reinforcing rib2331is arranged along a direction of a short axis of the first connecting portion233. Optionally, the reinforcing ribs2331are symmetrically arranged on both sides of the first connecting portion233. The support assembly24comprises a flexible circuit board241mounted on a side of the voice coil holder23away from the diaphragm21and a support membrane242mounted on a side of the flexible circuit board241away from the frame1. The flexible circuit board241comprises a first fixing end2411secured to the frame1, a second fixing end2412secured to the voice coil holder23, a third connecting portion2413connecting the first fixing end2411and the second fixing end2412, and a connecting arm2414bending and extending from the second fixing end2412. The leading wire221is soldered and fixed to the connecting arm2414. The second fixing end2412is fixedly connected with the second fixing portion232. There are two support membranes242, one arranged at each of the two sides of the acoustic device100in the short axis direction. There are also two flexible circuit boards241, one arranged at each of the two sides of the acoustic device100in the short axis direction, and only one of the flexible circuit boards241is provided with the connecting arm2414. The magnetic circuit system3comprises a yoke31, a main magnet32mounted on the yoke31, a plurality of auxiliary magnets33around and spaced apart from the main magnet32for forming the magnetic gap30, a main pole plate34covered on the main magnet32, and an auxiliary pole plate35covered on the auxiliary magnets33. The number of the auxiliary magnets33is four, the four auxiliary magnets33are arranged around the main magnet32, and an avoiding portion351is disposed on the auxiliary pole plate35corresponding to a position of the reinforcing rib2331. Referring toFIG.6, in a second embodiment of the present invention, the only difference between the second embodiment and the first embodiment is that the voice coil holder23′ in the second embodiment is a metal voice coil holder, and the reinforcing rib2331′ is a protruding structure formed by stamping from the first connecting portion233′. The auxiliary pole plate35′ is also provided with a corresponding avoiding portion351′. Compared with the related art, in the present invention, the acoustic device includes a frame, a vibration system and a magnetic circuit system supported on the frame. The vibration system includes a diaphragm, a voice coil driving the diaphragm to generate sounds, a voice coil holder connecting the diaphragm and the voice coil, and a support assembly connecting the frame and the voice coil holder. 
The magnetic circuit system has a magnetic gap, and the voice coil is partially located in the magnetic gap. The voice coil holder comprises a first fixing portion extending toward the voice coil, a second fixing portion extending toward the support assembly, and a first connecting portion connecting the first fixing portion and the second fixing portion. The first fixing portion is mounted to the voice coil, and the second fixing portion is mounted to the support assembly. The first connecting portion is provided with a reinforcing rib protruding and extending from the first fixing portion. Therefore, disposing reinforcing ribs on the first connecting portion of the voice coil holder is beneficial to enhancing the rigidity of the voice coil holder. To a certain extent, the influence of split vibration on performance can be reduced. The natural frequency can be increased by more than 10%. The high frequency range of the acoustic device is extended, thereby improving the high frequency performance of the acoustic device. It is to be understood, however, that even though numerous characteristics and advantages of the present exemplary embodiments have been set forth in the foregoing description, together with details of the structures and functions of the embodiments, the disclosure is illustrative only, and changes may be made in detail, especially in matters of shape, size, and arrangement of parts within the principles of the invention to the full extent indicated by the broad general meaning of the terms where the appended claims are expressed.
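As a worked note on the figure above: the description reports a natural-frequency gain of more than 10% but gives no underlying model. Under a simple single-degree-of-freedom assumption f ∝ sqrt(k/m), with the moving mass m unchanged, a 10% frequency gain corresponds to roughly a 21% increase in the stiffness k contributed by the reinforced holder:

import math

def frequency_gain(stiffness_ratio: float) -> float:
    """Natural-frequency ratio for a stiffness ratio k_new/k_old, assuming f ∝ sqrt(k/m)."""
    return math.sqrt(stiffness_ratio)

print(frequency_gain(1.21))  # ~1.10, i.e., a 10% natural-frequency increase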
8,616
11943597
DETAILED DESCRIPTION OF THE INVENTION To make the objectives, features, and advantages of the present invention more obvious and comprehensible, the present invention is further described in detail below with reference to the accompanying drawings and specific implementations. The phrase “an embodiment”, “one embodiment”, or “embodiments” as used herein refers to a particular feature, structure, or characteristic that can be included in at least one implementation of the present invention. The phrase “in an embodiment” appearing in different places throughout the specification does not necessarily refer to the same embodiment, nor to an independent or optional embodiment that is mutually exclusive with other embodiments. Unless otherwise specified, the terms “connection”, “connecting”, and “connected” in this specification that indicate electrical connection all indicate direct or indirect electrical connection. FIG.2is a first longitudinal schematic cross-sectional view of the receiver according to one embodiment of the present invention, andFIG.3is a second longitudinal schematic cross-sectional view of the receiver according to one embodiment of the present invention. The receiver shown inFIG.2andFIG.3includes a housing210, a diaphragm mechanism (or a diaphragm)220, and an electromagnetic driving mechanism (not labelled). The housing210has a hollow inner cavity230. The diaphragm mechanism220is disposed in the hollow inner cavity230and partitions the hollow inner cavity230into a first cavity232and a second cavity234. The diaphragm mechanism220includes a vibration plate222. A fixed end2224of the vibration plate222is connected to an inner wall of the housing210, and a free end (or a vibration end)2222of the vibration plate222is suspended in the hollow inner cavity230. In the specific embodiment shown inFIG.2andFIG.3, the housing210includes a cover plate212and a hollow box214with a top opening. The hollow box214includes a bottom surface and a side wall. The cover plate212covers the top opening of the hollow box214, and the hollow box214and the cover plate212form the hollow inner cavity230. For example, the cover plate212and the hollow box214are fixedly connected by using adhesives or through electric welding. In a preferred embodiment, the cover plate212and the hollow box214are both made of magnetic permeable materials. In the specific embodiment shown inFIG.2andFIG.3, the diaphragm mechanism220is disposed within the hollow box214, and the diaphragm mechanism220partitions the hollow inner cavity230into the first cavity232close to the cover plate212and the second cavity234close to a bottom surface of the hollow box214. A plurality of bosses216are provided on an inner wall surface of the side wall of the hollow box214, and are configured to support the diaphragm mechanism220. The electromagnetic driving mechanism is disposed in the hollow inner cavity230and includes a coil assembly240and at least one magnetic field generation member250,260. The magnetic field generation members250,260are respectively disposed in the first cavity232and the second cavity234, and the magnetic field generation members250,260are close to the free end2222of the vibration plate222. The coil assembly240is disposed in the second cavity234. The coil assembly240includes a coil242and a magnetic core244. The coil242and the vibration plate222are placed in the same direction (that is, the coil242is placed horizontally or in parallel relative to the vibration plate222). 
The magnetic core244is inserted in a hollow inner hole of the coil242. A first end of the magnetic core244extends out of the hollow inner hole of the coil242and is fixed in the second cavity234, and a second end of the magnetic core244extends out of the hollow inner hole of the coil242and serves as a support for the vibration plate222. The magnetic core244is preferably an iron core. In the specific embodiment shown inFIG.2andFIG.3, the electromagnetic driving mechanism includes the first magnetic field generation member250disposed in the first cavity232and close to the free end2222of the vibration plate222and the second magnetic field generation member260disposed in the second cavity234and close to the free end2222of the vibration plate222. The first magnetic field generation member250and the second magnetic field generation member260are opposite to each other. The first magnetic field generation member250is fixed to the cover plate212(or the top surface of the housing210) and faces the free end2222of the vibration plate222, and a required gap is reserved between the first magnetic field generation member250and the free end2222of the vibration plate222, wherein the required gap is 0.05-0.2 mm. The second magnetic field generation member260is fixed to the bottom surface of the hollow box214(or a bottom surface of the housing210) and faces the free end2222of the vibration plate222, and a required gap is reserved between the second magnetic field generation member260and the free end2222of the vibration plate222, wherein the required gap is 0.05-0.2 mm. The second magnetic field generation member260and the coil assembly240are arranged side by side, and the coil assembly240is closer to the fixed end2224of the vibration plate222than the second magnetic field generation member260. In a preferred embodiment, the magnetic field generation member250,260is a permanent magnet. In one embodiment, only the first magnetic field generation member250may be adopted, or only the second magnetic field generation member260may be adopted, as long as a fixed magnetic field (or the DC magnetic field) can be provided. In the specific embodiment shown inFIG.2andFIG.3, the electromagnetic driving mechanism further includes a magnetic permeable assembly270. The magnetic permeable assembly270is located between the second magnetic field generation member260and the bottom surface of the hollow box214. The magnetic permeable assembly270includes a first magnetic permeable block272and a second magnetic permeable block274sequentially arranged between the second magnetic field generation member260and the bottom surface of the hollow box214. The first magnetic permeable block272and the second magnetic permeable block274are arranged opposite to each other and are spaced apart from each other, and the first end of the magnetic core244extends out of the hollow inner hole of the coil242and is clamped between the first magnetic permeable block272and the second magnetic permeable block274. It should be particularly noted that in the specific embodiment shown inFIG.2andFIG.3, the magnetic core244is an L-shaped magnetic core. The L-shaped magnetic core244includes a horizontal portion and a vertical portion forming an L-shaped structure. The horizontal portion of the L-shaped magnetic core244is inserted in the hollow inner hole of the coil242. 
One end of the horizontal portion of the L-shaped magnetic core244extends out of the hollow inner hole of the coil242and is clamped between the first magnetic permeable block272and the second magnetic permeable block274. The other end of the horizontal portion of the L-shaped magnetic core244is connected to the vertical portion of the L-shaped magnetic core244. The vertical portion of the L-shaped magnetic core244extends out of the hollow inner hole of the coil242and is connected to the fixed end2224of the vibration plate222. One end of the horizontal portion of the L-shaped magnetic core244is referred to as a first end of the L-shaped magnetic core244, and the vertical portion of the L-shaped magnetic core244is referred to as a second end of the L-shaped magnetic core244. In the specific embodiment shown inFIG.2andFIG.3, a side of the diaphragm mechanism220that is located at the free end2222of the vibration plate222is supported by the boss216, and a side of the diaphragm mechanism220that is located at the fixed end2224of the vibration plate222is supported by the vertical portion of the L-shaped magnetic core244. A periphery of the diaphragm mechanism220is fixed and sealingly connected with the inner wall of the housing210by using an adhesive. Referring toFIG.2andFIG.3, the diaphragm mechanism220further includes a fixed frame224. The fixed frame224is connected to the inner side surfaces of the side walls of the hollow box214and has an inner space (not labelled) formed through the fixed frame in a thickness direction of the fixed frame224. The fixed frame224is made of a non-magnetic permeable material that may be stainless steel, aluminum, or other non-magnetic permeable metal or non-metal materials. The fixed end2224of the vibration plate222is fixed to an inner side of the fixed frame224, and the free end2222of the vibration plate222is suspended in the inner space of the fixed frame224. A reserved gap226is formed between an outer side surface of the free end2222of the vibration plate222and an inner side surface of the fixed frame224. In the embodiment shown inFIG.2andFIG.3, the vibration plate222and the fixed frame224are of a one-piece design, and a U-shaped reserved gap226is a slot formed on the one-piece design. In another embodiment, the diaphragm mechanism220further includes a hinge (not labelled), and the fixed end2224of the vibration plate222is hinged to the inner side of the fixed frame224through the hinge. The hinge is disposed on the fixed frame224, and a protrusion and a groove matching the hinge are respectively arranged on the fixed end of the vibration plate222and the fixed frame224. The principle by which the electromagnetic driving mechanism shown inFIG.2andFIG.3drives the vibration plate222to vibrate is as follows: when an alternating current is applied to the coil242, the generated AC magnetic field enters the vibration plate222through the L-shaped magnetic core244, so that the vibration plate222is polarized. Under the action of the fixed magnetic field (or the DC magnetic field) generated by the magnetic field generation member250,260, a driving force is generated to push the vibration plate222to vibrate repeatedly in the vertical direction, thereby driving a sounding diaphragm (not labelled) of the diaphragm mechanism220to push the air to produce sound. 
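The drive principle above can be illustrated with the standard magnetic gap-force relation F = B²·A/(2·μ0); the patent itself states only that the AC field interacts with the DC field. In this hedged sketch, the permanent magnets contribute a fixed flux density B_dc across the gap, the coil adds an alternating component B_ac(t), and the cross term 2·B_dc·B_ac(t) in the squared total is the force component linear in the audio signal; all numbers are illustrative placeholders:

import math

MU0 = 4e-7 * math.pi  # permeability of free space, in H/m

def gap_force(b_dc: float, b_ac: float, area: float) -> float:
    """Attractive force on the magnetic permeable plate for total flux density b_dc + b_ac."""
    return (b_dc + b_ac)**2 * area / (2.0 * MU0)

# Net alternating force between the two half-cycles of the coil current:
f_up = gap_force(b_dc=0.5, b_ac=+0.05, area=1e-6)
f_down = gap_force(b_dc=0.5, b_ac=-0.05, area=1e-6)
print(f_up - f_down)  # the signal-dependent push that vibrates the plate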
FIG.4is a schematic exploded view of the receiver shown inFIG.2andFIG.3. Compared withFIG.1, the assemblies inside the receiver shown inFIG.4are clearly structured, and the stacked design makes the assembly process simple, which is very suitable for automated production. FIG.5is a schematic longitudinal cross-sectional view of the receiver according to another embodiment of the present invention. The embodiment shown inFIG.5is an extension of the embodiment shown inFIG.2. A main difference between the two is: the vibration plate222inFIG.2is a straight plate, and the magnetic core244is an L-shaped structure; the vibration plate522inFIG.5is an inverted L-shaped structure, and the magnetic core544is a straight rod or a straight plate. As shown inFIG.5, the coil assembly540is disposed in the second cavity234. The coil assembly540includes a coil542and a magnetic core544. The coil542and the vibration plate522are placed in the same direction (that is, the coil542is placed horizontally or in parallel relative to the vibration plate522). The magnetic core544is a straight rod or a straight plate inserted in a hollow inner hole of the coil542. A first end of the magnetic core544extends out of the hollow inner hole of the coil542and is clamped between the first magnetic permeable block272and the second magnetic permeable block274, and a second end of the magnetic core544extends out of the hollow inner hole of the coil542. FIG.6is a structural implementation diagram of the diaphragm mechanism520inFIG.5in one embodiment. The diaphragm mechanism inFIG.5andFIG.6includes a fixed frame524and an inverted L-shaped vibration plate522. The fixed frame524is connected to the inner side surfaces of the side walls of the hollow box214and has an inner space (not labelled) formed through the fixed frame in a thickness direction of the fixed frame524. The inverted L-shaped vibration plate522includes a horizontal portion and a vertical portion forming an inverted L-shaped structure. One end of the horizontal portion of the inverted L-shaped vibration plate522is a free end5222of the vibration plate522, the free end5222being suspended in the inner space of the fixed frame524, and a reserved gap526is formed between an outer side surface of the free end5222and an inner side surface of the fixed frame524. The other end of the horizontal portion that is connected to the vertical portion is a fixed end5224of the inverted L-shaped vibration plate522, the fixed end5224being fixed to an inner side of the fixed frame524, and the vertical portion of the inverted L-shaped vibration plate522is connected to a second end of the magnetic core544as a connecting end of the inverted L-shaped vibration plate522. In the specific embodiment shown inFIG.5, a side of the diaphragm mechanism520that is located at the free end5222of the vibration plate522is supported by the boss216, and a side of the diaphragm mechanism520that is located at the fixed end5224of the vibration plate522is supported by the second end of the magnetic core544. FIG.7is a schematic exploded view of the receiver shown inFIG.5. Compared withFIG.1, the assemblies inside the receiver shown inFIG.7are clearly structured, and the stacked design makes the assembly process simple, which is very suitable for automated production. 
In summary, the vibration plates222,522in the present invention are made of the magnetic permeable material, and the fixed end of the vibration plate is connected to the magnetic core of the coil assembly, so that the alternating current magnetic field generated by the coil after being energized enters the vibration plate and interacts with the DC magnetic field to generate a driving force that pushes the vibration plate to vibrate and produce sound without additional driving rods and reeds; the vibration plate and the reed are thus combined into one. As a result, the receiver in the present invention has the following advantages or beneficial effects:
(1) The assemblies inside the receiver are clearly structured, and the stacked design makes the assembly process simple, which is very suitable for automated production;
(2) The connection between the movable parts (for example, the driving rod and the reed) is reduced, and the reliability is higher;
(3) Fewer component parts and a simpler assembly process lead to higher production efficiency; and
(4) Fewer components and a simpler assembly process facilitate cost reduction.
In the present invention, unless otherwise specified, the terms such as “connection”, “connected”, “connecting”, “connect” and the like that indicate electrical connection indicate direct or indirect electrical connection. It should be noted that any modifications made by a person skilled in the art to the specific implementations of the present invention shall fall within the scope of the claims of the present invention. Correspondingly, the scope of the claims of the present invention is not merely limited to the foregoing specific implementations.
11943598
wherein,1—frame;10—magnetic circuit mounting hole;11—flange;2—diaphragm;20—voice coil mounting hole;21—diaphragm bottom;22—tapered edge portion;23—reinforcing rib;3—input drive mechanism;31—dust cover;32—voice coil;320—lead;33—damper;34—secondary neodymium magnetic steel;35—magnetic pole core;36—main neodymium magnetic steel;37—U-yoke;4—yoke ring;5—audio signal input terminal.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
In the following, the preferred embodiments of the present disclosure are explained in detail in combination with the accompanying drawings so that the advantages and features of the present disclosure can be easily understood by skilled persons in the art. It should be noted that the explanation of these implementations is to help understanding of the present disclosure, and is not intended to limit the present disclosure. This embodiment provides a multi-input-driving loudspeaker; herein, “multi-input” refers to multiple audio signal inputs: multiple audio signals are input to multiple voice coils, and the multiple voice coils jointly drive the loudspeaker to produce sound. Referring toFIG.1toFIG.4, the multi-input-driving loudspeaker comprises a frame1, a diaphragm2, and a plurality of input driving mechanisms3. The diaphragm2is used to vibrate to produce sound, and is fixedly arranged on the frame1. Each input driving mechanism3comprises a voice coil32and a magnetic circuit assembly for driving the voice coil32to vibrate; wherein, the frame1is provided with a plurality of magnetic circuit mounting holes10, and at most one magnetic circuit assembly is arranged at each magnetic circuit mounting hole10; the diaphragm2is provided with a plurality of voice coil mounting holes20, and at most one voice coil32is provided at each voice coil mounting hole20. That is, the plurality of input-driving mechanisms is mounted on the frame1and the diaphragm2. There are three or more input driving mechanisms3to increase the driving energy of the loudspeaker, and the three or more input driving mechanisms3are arranged at equal intervals along a circumference. The diaphragm has a diaphragm bottom21that is circular as a whole and shaped as a flat plate, and the center of the circumference coincides with the center of the diaphragm bottom21; that is, the plurality of input driving mechanisms3is arranged at equal intervals along the circumference of the diaphragm bottom21. Correspondingly, the diaphragm bottom21is provided with three or more voice coil mounting holes20, the center lines of the voice coil mounting holes20pass through the circumference, and each of the voice coil mounting holes20is provided with one voice coil32so that the voice coil32is connected with the diaphragm bottom21; the frame1is provided with three or more magnetic circuit mounting holes10, the magnetic circuit mounting holes10are arranged at equal intervals along the circumference, and each of the magnetic circuit mounting holes10is provided with one magnetic circuit assembly. Specifically, as shown inFIGS.1-4, the numbers of the input driving mechanisms3, the voice coil mounting holes20and the magnetic circuit mounting holes10are all four, and they are arranged in a ring around the center of the diaphragm bottom21. In this embodiment, the frame1is made of plastic using processes such as injection molding, which is easy to form and has a certain strength, and the magnetic circuit mounting holes10are through holes that penetrate the frame1from top to bottom.
The diaphragm2further comprises a tapered edge portion22extending obliquely upwards from the outer edge of the diaphragm bottom21, and the tapered edge portion22is arranged in a circle around the diaphragm bottom21. The diaphragm2is made of paper pulp, PP (polypropylene), ballistic fiber or aluminum alloy; the resulting diaphragm2is light in weight, has good damping elasticity and rigidity, resists both high and low temperatures, and is waterproof and mildew-proof. In addition, the tapered edge portion22of the diaphragm2is fixedly connected to the frame1through a yoke ring4, which is made of sponge, rubber, or cloth. With the diaphragm2of the above-mentioned shape, the directional expansion width is superior to that of the traditional conical loudspeaker, and the height is lower than that of the traditional conical diaphragm, which is beneficial to reducing the overall height of the loudspeaker. Each input-driving mechanism3also comprises a dust cover31and a damper33. The specific structure of the input-driving mechanisms3will be described in detail below. As shown inFIG.1, each input-driving mechanism3consists of a dust cover31, a voice coil32, a damper33, a secondary neodymium magnetic steel34, a magnetic pole core35, a main neodymium magnetic steel36, and a U-yoke37. In each input-driving mechanism3, the dust cover31is fixedly connected to the diaphragm2, each voice coil mounting hole20is covered with one dust cover31, and the voice coil32is covered under the dust cover31. The upper end portion of the voice coil32is inserted into and closely fitted with the voice coil mounting hole20of the diaphragm2, and the voice coil32is connected to the diaphragm2to drive the diaphragm2to vibrate. The damper33is provided with a through hole in the middle so as to be sleeved on the voice coil32, and the outer periphery of the voice coil is tightly connected with the through hole; with the restriction of the damper33, the voice coil can only move up and down, and cannot move in the horizontal direction. The damper33is specifically located in a cavity formed between the frame1and the diaphragm2after they are connected. The upper surface of the frame1has a plurality of upwardly extending flanges11surrounding the magnetic circuit mounting holes10, each of the magnetic circuit mounting holes10is surrounded by one flange11, and each damper33and the corresponding flange11fit closely together, so that the damper33is embedded within the inner wall of the flange11(as shown inFIG.3), to prevent the damper33from shaking.
The U-yoke37has an inner cavity and an open upper end. The upper edge of the U-yoke is fixedly connected at the magnetic circuit mounting hole10(such as to the hole wall of the magnetic circuit mounting hole10, or to the lower surface of the frame1close to the magnetic circuit mounting hole10), and the magnetic circuit mounting hole10is in communication with the inner cavity of the U-yoke37. The secondary neodymium magnetic steel34, the magnetic pole core35, and the main neodymium magnetic steel36are stacked from top to bottom, and are fixedly arranged in the inner cavity of the U-yoke37, to form a magnetic circuit assembly. The lower surface of the secondary neodymium magnetic steel34is closely attached to the upper surface of the magnetic pole core35, and the lower surface of the magnetic pole core35is closely attached to the upper surface of the main neodymium magnetic steel36. There is a gap between the secondary neodymium magnetic steel34, the magnetic pole core35and the main neodymium magnetic steel36on the one hand and the inner wall of the U-yoke37on the other, thereby forming a magnetic gap surrounding the secondary neodymium magnetic steel34, the magnetic pole core35and the main neodymium magnetic steel36. The lower end of the voice coil32is inserted into the magnetic gap downward from the magnetic circuit mounting hole10; there is a gap between the voice coil32and the secondary neodymium magnetic steel34, the magnetic pole core35and the main neodymium magnetic steel36, and there is also a gap between the voice coil32and the inner wall of the U-yoke37, so that the voice coil32can move up and down in the magnetic gap. As shown inFIG.2andFIG.3, the frame1is provided with multiple pairs of audio signal input terminals5, and each pair of audio signal input terminals5is electrically connected to the leads of one voice coil32. Each pair of audio signal input terminals5comprises a positive terminal and a negative terminal; one lead of each voice coil32is electrically connected to the positive terminal of one pair of audio signal input terminals5, and the other lead is electrically connected to the negative terminal of this pair of audio signal input terminals5, to receive the audio signal (analog signal or digital signal) input from that pair of audio signal input terminals5. Thus, four voice coils32are simultaneously driven through the four pairs of audio signal input terminals5. By providing multiple integrated terminals for audio signal input in the frame1, the positive and negative leads of each voice coil32can be connected to the intermediate terminals at the bottom of the frame1; this connection method simplifies the manufacture of multi-input-driving loudspeakers, and is also convenient for the connection of the audio signal input. As shown inFIG.4, a plurality of reinforcing ribs23are arranged on the diaphragm2, which can increase the working strength of the diaphragm2. Specifically, as shown inFIG.4, the reinforcing ribs23are arranged at equal intervals along the circumferential direction of the diaphragm2, and each reinforcing rib23extends along the radial direction of the diaphragm2. The ribs23are located between the voice coil mounting holes20. The working principle of the multi-input-driving loudspeaker is as follows: the audio signal is input to the plurality of voice coils32through the audio signal input terminals5on the frame1, and the plurality of voice coils32move up and down synchronously under the action of the magnetic circuit assemblies, thereby driving the diaphragm2to vibrate to produce sound.
The multi-input-driving loudspeaker of the present disclosure adopts a diaphragm2with a flat-plate-shaped bottom. Three or more voice coil mounting holes20are provided on the plane formed by the diaphragm bottom21and tightly fitted with three or more voice coils32, and the voice coils32are in turn tightly fitted with three or more dampers33to form three or more input-driving mechanisms3. By using three or more magnetic circuit assemblies to drive the voice coils32, and three or more voice coils32to drive the diaphragm2, the loudspeaker can not only be reduced in height but also broadened in directivity; and through multiple audio signal inputs, the distortion of the product is reduced, the sensitivity of the loudspeaker is increased, and the intelligibility of the loudspeaker is improved. The use of integrated terminals simplifies the connection of the product and facilitates the connection of the audio signal input. The loudspeaker structure is ingenious and rational. Through the use of a flat-bottom conical diaphragm structure, the directivity is better than that of traditional loudspeakers; by receiving the audio signal input via three or more voice coils, the original sound reproduction and distortion are better than those of traditional loudspeakers; by adopting a diaphragm with a flat bottom, the height of the diaphragm is lower than that of the traditional conical diaphragm, and the reduction of the height of the diaphragm can also reduce the height of the product; and by using an input-driving structure composed of three or more voice coils and three or more magnetic circuit assemblies, the sensitivity of the loudspeaker is increased. By closely connecting the flat-bottom conical diaphragm with three or more voice coils, the three or more voice coils are driven through three or more audio signal inputs to move up and down in the U-yoke magnetic circuits to drive the diaphragm to produce sound. The embodiments described above are only for illustrating the technical concepts and features of the present disclosure; they are preferred embodiments, and are intended to enable those skilled in the art to understand the present disclosure and thereby implement it, and should not be construed to limit the protective scope of this disclosure.
11943599
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS FIG.1shows an actuator5according to one exemplary embodiment of the invention in a lateral exploded view. The actuator5has a housing10. The housing10is formed by a first cover cap12, a second cover cap16and a ring14. The two cover caps12,16are arranged on the outside and are composed of non-thermally conductive, non-magnetically conductive material. It should be mentioned that one or both of said two cover caps12,16could, however, also be composed, for example, of thermally conductive, non-magnetically conductive material. The ring14is composed of thermally conductive, magnetically conductive material. The actuator5has a spring arrangement20which is formed by a first spring element22and a second spring element24. The design thereof will be discussed in more detail further below. The actuator5has a coil which is formed by a coil carrier32, a first coil section34and a second coil section36. The two coil sections34,36are attached here to the coil carrier32. Electric current can flow through the coil sections34,36, such that a magnetic field is generated in the coil30. The actuator5has a magnet40. The latter is formed by a magnetic central part42and by a first non-magnetic pole plate44and a second non-magnetic pole plate46. The central part42is accommodated here between the two pole plates44,46. Two sets of four screws18,19each are used to fasten the components mentioned. Alternatively, for example, fastening by adhesive bonding, welding or riveting would also be possible. FIG.2shows the actuator5in a perspective exploded view. It can be seen here that the first spring element22has a total of four spring arms26. Accordingly, the second spring element24has a total of four spring arms28. In the assembled state, the magnet40is designed in such a way that the two pole plates44,46directly adjoin the magnetic central part42. The magnet40is then held as a whole by the two axially adjacent spring elements22,24. As a result, the magnet40is movable only in one axial direction, wherein it is biased by the spring elements22,24into a central inoperative position. As shown, the pole plates44,46are designed to be curved concavely on their outwardly directed surface. This enables a particularly space-saving arrangement of the magnet40between the spring elements22,24and allows a particularly high magnetic flux density in the edge region of the pole plates. In the assembled state, the coil30surrounds the magnet40radially. The coil30here is fixedly secured in the housing10. By application of an electrical voltage to the coil30, the magnet40can be deflected out of its inoperative position, as a result of which vibrations occur. In particular, a voltage to which an audio signal is modulated can be applied here. The magnet40then vibrates in accordance with this audio signal and generates corresponding vibrations. The ring14made of magnetically conductive material is used here to provide an advantageous magnetic closure. A first cylinder-like projection13, which extends inward from the first cover cap12, and a second cylindrical projection17, which extends inward from the second cover cap16, serve to define the axial direction along which the magnet40is movable. FIG.3shows the actuator5in the assembled state. It can be seen here that three cylindrical contact points7are arranged on the outside of the first cover cap12. With said contact points, the actuator5can adjoin a component of a motor vehicle. 
Furthermore, a bore8in which a thread is formed is arranged in the center. The actuator5can thus be fastened to a component. The second cover cap16is also configured accordingly. By fastening the actuator5to a component of a motor vehicle by means of the bore8, the vibrations already mentioned further above, which the magnet40can generate, can be transmitted to the component. In this way, the component itself can be excited to vibrate, which leads to it emitting sound waves. These sound waves can typically be heard in the interior of a vehicle. In this way, sound can be generated without the provision of a separate loudspeaker, which is particularly appropriate at low frequencies and leads to a significant saving on space and weight. It should be mentioned that, for example, the bore8can also be used to connect the actuator5to a rigid component, such as, for example, a body part of a vehicle, and the actuator5can be connected on the opposite side to a component which is to be excited into vibrating. In this way, the stationary component, such as, for example, a body of the vehicle, can serve as a reference, relative to which the vibrations are excited. FIG.4shows an alternative embodiment of a spring element, here by way of example the first spring element22. This can be used in the context of the embodiment described with reference toFIGS.1to3instead of the first spring element22shown there and/or instead of the second spring element24shown there. In contrast to the star-shaped design which can be seen inFIG.2, the spring arms26of the spring element22illustrated inFIG.4are designed in a spiral shape. A different spring characteristic can thus be achieved. FIG.5shows schematically an exemplary pole plate44,46in cross section. The pole plate is, by way of example, configured to be substantially planar61on the side facing the magnet (not illustrated). The outer side or surface62facing away from the magnet is concave, and thus the entire cross section of the pole plate is concave. The outer side has a collar63on the circumferential edge, at which collar the pole plate has a greater thickness or material thickness than in the center. The pole plate has a substantially planar plateau64in the region of the center of the outer side. The transition65between collar63and plateau64can be designed in various ways; a linear transition, a circular-arc-shaped transition and a parabolic transition are illustrated on the left-hand side by way of example. If it is found in the course of the proceedings that a feature or a group of features is not absolutely necessary, then the applicant aspires right now to a wording of at least one independent claim that no longer has the feature or the group of features. This may be, for example, a subcombination of a claim present on the filing date or a subcombination of a claim present on the filing date that is restricted by further features. Claims or combinations of features of this kind requiring rewording are intended to be understood as also covered by the disclosure of this application. It should also be pointed out that refinements, features and variants of aspects of the invention which are described in the various embodiments or exemplary embodiments and/or shown in the figures can be combined with one another in any desired manner. Single or multiple features are interchangeable with one another in any desired manner. Combinations of features arising therefrom are intended to be understood as also covered by the disclosure of this application.
Back-references in dependent claims are not intended to be understood as a relinquishment of the attainment of independent substantive protection for the features of the back-referenced dependent claims. These features may also be combined with other features in any desired manner. Features which are only disclosed in the description, or which are only disclosed in the description or in a claim in conjunction with other features, may in principle be of independent significance essential to aspects of the invention. They may therefore also be individually included in claims for the purpose of delimitation from the prior art.
11943600
DETAILED DESCRIPTION Described herein are techniques for audio rendering. In the following description, for purposes of explanation, numerous examples and specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one skilled in the art that the present invention as defined by the claims may include some or all of the features in these examples alone or in combination with other features described below, and may further include modifications and equivalents of the features and concepts described herein. In the following description, various methods, processes and procedures are detailed. Although particular steps may be described in a certain order, such order is mainly for convenience and clarity. A particular step may be repeated more than once, may occur before or after other steps (even if those steps are otherwise described in another order), and may occur in parallel with other steps. A second step is required to follow a first step only when the first step must be completed before the second step is begun. Such a situation will be specifically pointed out when not clear from the context. In this document, the terms “and”, “or” and “and/or” are used. Such terms are to be read as having an inclusive meaning. For example, “A and B” may mean at least the following: “both A and B”, “at least both A and B”. As another example, “A or B” may mean at least the following: “at least A”, “at least B”, “both A and B”, “at least both A and B”. As another example, “A and/or B” may mean at least the following: “A and B”, “A or B”. When an exclusive-or is intended, such will be specifically noted (e.g., “either A or B”, “at most one of A and B”). FIG.1is a block diagram of a rendering system100. The rendering system100includes a distribution module110, a number of renderers120(three shown:120a,120band120c), and a routing module130. The renderers120are categorized into a number of different categories, which are discussed in more detail below. The rendering system100receives an audio signal150, renders the audio signal150, and generates a number of loudspeaker signals170. Each of the loudspeaker signals170drives a loudspeaker (not shown). The audio signal150is an object audio signal and includes one or more audio objects. Each of the audio objects includes object metadata152and object audio data154. The object metadata152includes position information for the audio object. The position information corresponds to the desired perceived position for the object audio data154of the audio object. The object audio data154corresponds to the audio data that is to be rendered by the rendering system100and output by the loudspeakers (not shown). The audio signal150may be in one or more of a variety of formats, including the Dolby® Atmos™ format, the Ambisonics format (e.g., B-format), the DTS:X™ format from Xperi Corp., etc. For brevity, the following refers to a single audio object in order to describe the operation of the rendering system100, with the understanding that multiple audio objects may be processed concurrently, for example by instantiating multiple instances of one or more of the renderers120. For example, an implementation of the Dolby® Atmos™ system may reproduce up to 128 simultaneous audio objects in the audio signal150. The distribution module110receives the object metadata152from the audio signal150. The distribution module110also receives loudspeaker configuration information156. 
The loudspeaker configuration information156generally indicates the configuration of the loudspeakers connected to the rendering system100, such as their numbers, configurations or physical positions. When the loudspeaker positions are fixed (e.g., being components physically attached to a device that includes the rendering system100), the loudspeaker configuration information156may be static, and when the loudspeaker positions may be adjusted, the loudspeaker configuration information156may be dynamic. The dynamic information may be updated as desired, e.g. when the loudspeakers are moved. The loudspeaker configuration information156may be stored in a memory (not shown). Based on the object metadata152and the loudspeaker configuration information156, the distribution module110determines selection information162and position information164. The selection information162selects two or more of the renderers120that are appropriate for rendering the audio object for the given position information in the object metadata152, given the arrangement of the loudspeakers according to the loudspeaker configuration information156. The position information164corresponds to the source position to be rendered by each of the selected renderers120. In general, the position information164may be considered to be a weighting function that weights the object audio data154among the selected renderers120. The renderers120receive the object audio data154, the loudspeaker configuration information156, the selection information162and the position information164. The renderers120use the loudspeaker configuration information156to configure their outputs. The selection information162selects two or more of the renderers120to render the object audio data154. Based on the position information164, each of the selected renderers120renders the object audio data154to generate rendered signals166. (E.g., the renderer120agenerates the rendered signals166a, the renderer120bgenerates the rendered signals166b, etc.). Each of the rendered signals166from each of the renderers120corresponds to a driver signal for one of the loudspeakers (not shown), as configured according to the loudspeaker configuration information156. For example, if the rendering system100is connected to 14 loudspeakers, the renderer120agenerates up to 14 rendered signals166a. (If a given audio object is rendered such that it is not to be output from a particular loudspeaker, then that one of the rendered signals166may be considered to be zero or not present, as indicated by the loudspeaker configuration information156.) The routing module130receives the rendered signals166from each of the renderers120and the loudspeaker configuration information156. Based on the loudspeaker configuration information156, the routing module130combines the rendered signals166to generate the loudspeaker signals170. To generate each of the loudspeaker signals170, the routing module130combines, for each loudspeaker, each one of the rendered signals166that correspond to that loudspeaker. For example, a given loudspeaker may be related to one of the rendered signals166a, one of the rendered signals166b, and one of the rendered signals166c; the routing module130combines these three signals to generate the corresponding one of the loudspeaker signals170for that given loudspeaker. In this manner, the routing module130performs a mixing function of the appropriate rendered signals166to generate the respective loudspeaker signals170. 
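As an illustration only (the patent does not prescribe an implementation), the following is a minimal Python sketch of the distribution step just described, with a hypothetical suitability() score standing in for the position-dependent selection rules detailed in the embodiments below:

```python
import numpy as np

def distribute(position, renderers):
    """Select the renderers appropriate for an object position and derive a
    weight per selected renderer (i.e., the selection information and the
    position information).  Each renderer exposes a hypothetical
    suitability(position) score; zero means it cannot usefully render there."""
    scores = np.array([r.suitability(position) for r in renderers], float)
    mask = scores > 0.0
    weights = scores[mask]
    norm = np.linalg.norm(weights)
    if norm > 0.0:
        weights = weights / norm       # constant-energy normalization
    selected = [r for r, m in zip(renderers, mask) if m]
    return selected, weights
```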
Due to the linearity of acoustics, the principle of superposition allows the rendering system100to use any given loudspeaker concurrently for any number of the renderers120. The routing module130implements this by summing, for each loudspeaker, the contribution from each of the renderers120. As long as the sum of those signals does not overload the loudspeaker, the result corresponds, in terms of the impression for the listener, to a situation where independent loudspeakers are allocated to each renderer. When multiple audio objects are rendered to be output concurrently, the routing module130combines the rendered signals166in a manner similar to the single audio object case discussed above. FIG.2is a flowchart of a method200of audio processing. The method200may be performed by the rendering system100(seeFIG.1). The method200may be implemented by one or more computer programs, for example programs that the rendering system100executes to control its operation. At202, one or more audio objects are received. Each of the audio objects respectively includes position information. (For example, two audio objects A and B may have respective position information PA and PB.) As an example, the rendering system100(seeFIG.1) may receive one or more audio objects in the audio signal150. For each of the audio objects, the method continues with204. At204, for a given audio object, at least two renderers are selected based on the position information of the given audio object. Optionally, the at least two renderers have at least two categories. (Of course, a particular audio object may be rendered using a single category of renderer; such a situation operates similarly to the multiple category situation discussed herein.) For example, when the position information indicates that a particular two renderers (having a particular two categories) would be appropriate for rendering that audio object, then those two renderers are selected. The renderers may be selected based on the loudspeaker configuration information156(seeFIG.1). As an example, the distribution module110may generate the selection information162to select at least two of the renderers120, based on the position information in the object metadata152and the loudspeaker configuration information156. At206, for the given audio object, at least two weights are determined based on the position information. The weights are related to the renderers selected at204. As an example, the distribution module110(seeFIG.1) may generate the position information164(corresponding to the weights) based on the position information in the object metadata152and the loudspeaker configuration information156. At208, the given audio object is rendered, based on the position information, using the selected renderers (see204) weighted according to the weights (see206), to generate a plurality of rendered signals. As an example, the renderers120(seeFIG.1, selected according to the selection information162) generate the rendered signals166from the object audio data154, weighted according to the position information164. Continuing the example, when the renderers120aand120bare selected, the rendered signals166aand166bare generated. At210, the plurality of rendered signals (see208) are combined to generate a plurality of loudspeaker signals. For a given loudspeaker, the corresponding rendered signals166are summed to generate the loudspeaker signal. The loudspeaker signals may be attenuated when above a maximum signal level, in order to prevent overloading a given loudspeaker.
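A minimal sketch of this combining step, assuming simple array shapes (per-speaker attenuation is simplified here to a single global scale, which is one possible reading of the attenuation described above):

```python
import numpy as np

def route(rendered, num_speakers, num_samples, max_level=1.0):
    """Sum, for each loudspeaker, the contribution of every renderer
    (superposition), then attenuate if the peak would overload a speaker.
    rendered: list of dicts, one per renderer, mapping speaker index ->
              signal array for the speakers that renderer drives."""
    out = np.zeros((num_speakers, num_samples))
    for per_renderer in rendered:
        for k, sig in per_renderer.items():
            out[k, : len(sig)] += sig      # superposition of renderer outputs
    peak = np.max(np.abs(out))
    if peak > max_level:
        out *= max_level / peak            # prevent loudspeaker overload
    return out
```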
As an example, the routing module130may combine the rendered signals166to generate the loudspeaker signals170. At212, the plurality of loudspeaker signals (see210) are output from a plurality of loudspeakers. When multiple audio objects are to be output concurrently, the method200operates similarly. For example, multiple given audio objects may be processed using multiple paths of204-206-208in parallel, with the rendered signals corresponding to the multiple audio objects being combined (see210) to generate the loudspeaker signals. FIG.3is a block diagram of a rendering system300. The rendering system300may be used to implement the rendering system100(seeFIG.1) or to perform one or more of the steps of the method200(seeFIG.2). The rendering system300may store and execute one or more computer programs to implement the rendering system100or to perform the method200. The rendering system300includes a memory302, a processor304, an input interface306, and an output interface308, connected by a bus310. The rendering system300may include other components that (for brevity) are not shown. The memory302generally stores data used by the rendering system300. The memory302may also store one or more computer programs that control the operation of the rendering system300. The memory302may include volatile components (e.g., random access memory) and non-volatile components (e.g., solid state memory). The memory302may store the loudspeaker configuration information156(seeFIG.1) or the data corresponding to the other signals inFIG.1, such as the object metadata152, the object audio data154, the rendered signals166, etc. The processor304generally controls the operation of the rendering system300. When the rendering system300implements the rendering system100(seeFIG.1), the processor304implements the functionality corresponding to the distribution module110, the renderers120, and the routing module130. The input interface306receives the audio signal150, and the output interface308outputs the loudspeaker signals170. FIG.4is a block diagram of a loudspeaker system400. The loudspeaker system400includes a rendering system402and a number of loudspeakers404(six shown,404a,404b,404c,404d,404eand404f). The loudspeaker system400may be configured as a single device that includes all of the components (e.g., a soundbar form factor). The loudspeaker system400may be configured as separate devices (e.g., the rendering system402is one component, and the loudspeakers404are one or more other components). The rendering system402may correspond to the rendering system100(seeFIG.1), receiving the audio signal150, and generating loudspeaker signals406that correspond to the loudspeaker signals170(seeFIG.1). The components of the rendering system402may be similar to those of the rendering system300(seeFIG.3). The loudspeakers404output auditory signals (not shown) corresponding to the loudspeaker signals406(six shown,406a,406b,406c,406d,406eand406f). The loudspeaker signals406may correspond to the loudspeaker signals170(seeFIG.1). The loudspeakers404may output the loudspeaker signals as discussed above regarding212inFIG.2.
Categories of Renderers
As mentioned above, the renderers (e.g., the renderers120ofFIG.1) are classified into various categories. Four general categories of renderers include sound field renderers, binaural renderers, panning renderers, and beamforming renderers. As discussed above (see204inFIG.2), for a given audio object, the selected renderers have at least two categories.
For example, based on the object metadata152and the loudspeaker configuration information156(seeFIG.1), the distribution module110may select a sound field renderer and a beamforming renderer (of the renderers120) to render a given audio object. Additional details of the four general categories of renderers are provided below. Note that where a category includes sub-categories of renderers, it is to be understood that the references to different categories of renderers are similarly applicable to different sub-categories of renderers. The rendering systems described herein (e.g., the rendering system100ofFIG.1) may implement one or more of these categories of renderers.
Sound Field Renderers
In general, sound field rendering aims to reproduce a specific acoustic pressure (sound) field in a given volume of space. Sub-categories of sound field renderers include wave field synthesis, near-field compensated high-order Ambisonics, and spectral division. One important capability of sound field rendering methods is the ability to project virtual sources in the near field, that is, to generate sources that the listener will localize at a position between himself and the speakers. While such an effect is also possible for binaural renderers (see below), the particularity here is that the correct localization impression can be generated over a wide listening area.
Binaural Renderers
Binaural rendering methods focus on delivering to the listener's ears a signal carrying the source signal processed to mimic the binaural cues associated with the source location. While the simplest way to deliver such signals is over headphones, it can be successfully done over a speaker system as well, through the use of crosstalk cancellers in order to deliver individual left and right ear feeds to the listener.
Panning Renderers
Panning methods make direct use of the basic auditory mechanisms (e.g., changing interaural loudness and temporal differences) to move sound images around through delay and/or gain differentials applied to the source signal before being fed to multiple speakers. Amplitude panners, which use only gain differentials, are popular due to their simple implementation and stable perceptual impressions. They have been deployed in many consumer audio systems such as stereo systems and traditional cinema content rendering. (An example of a suitable amplitude panner design for arbitrary speaker arrays is provided by V. Pulkki, “Virtual sound source positioning using vector base amplitude panning,” Journal of the Audio Engineering Society, vol. 45, no. 6, pp. 456-466, 1997.) Finally, methods that use reflections from the reproduction environment generally rely on similar principles to manipulate the spatial impression from the system.
Beamforming Renderers
Beamforming was originally designed for sensor arrays (e.g., microphone arrays) as a means to amplify the signal coming from a set of preferred directions. Thanks to the principle of reciprocity in acoustics, the same principle can be used to create directional acoustic signals. U.S. Pat. No. 7,515,719 describes the use of beamforming to create virtual speakers through the use of focused sources.
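Returning to the amplitude panners described above, the following is a small illustration of the gain-differential idea using a standard constant-power pan law (not taken from the patent):

```python
import numpy as np

def constant_power_pan(x):
    """Stereo amplitude-panning gains for a pan position x in [0, 1]
    (x=0 hard left, x=1 hard right); gL**2 + gR**2 == 1 for every x,
    so the perceived loudness stays constant as the image moves."""
    angle = x * np.pi / 2.0
    return np.cos(angle), np.sin(angle)
```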
Rendering System Considerations
The rendering system categories discussed above have a number of considerations regarding the sweet spot and the source location to be rendered. The sweet spot generally corresponds to the space where the rendering is considered acceptable according to a listener perception metric. While the exact extent of such an area is generally imperfectly defined, due to the absence of analytic metrics that capture the perceptual quality of the rendering well, it is generally possible to derive qualitative information from typical error metrics (e.g., square error) and compare different systems in different configurations. For example, a common observation is that the sweet spot is smaller (for all categories of renderers) at higher frequencies. Generally, it can also be observed that the sweet spot grows with the number of speakers available in the system, except for panning methods, for which the addition of speakers has different advantages. The different rendering system categories also vary in their capabilities to deliver audio to be perceived at various source locations. Sound field rendering methods generally allow for the creation of virtual sources anywhere in the direction of the speaker array from the point of view of the listener. One aspect of those methods is that they allow for the manipulation of the perceived distance of the source in a transparent way and from the perspective of the entire listening area. Binaural rendering methods can theoretically deliver any source location in the sweet spot, as long as the binaural information related to those positions has been previously stored. Finally, the panning methods can deliver any source direction for which a sufficiently close pair/trio of speakers (e.g., at an approximately 60 degree angle, such as between 55-65 degrees) is available from the point of view of the listener. (However, panning methods generally do not define specific ways to handle source distance, so additional strategies need to be used if a distance component is desired.) In addition, some rendering system categories exhibit an interdependence between the source location and the sweet spot. For example, for a linear array of loudspeakers implementing a wave field synthesis process (in the sound field rendering category), a source location in the center behind the array may be perceived in a large sweet spot in front of the array, whereas a source location in front of the array and displaced to the side may be perceived in a smaller, off-center sweet spot.
Detailed Embodiments
Given the above considerations, embodiments are directed toward using two or more rendering methods in combination, where the relative weight between the selected rendering methods depends on the audio object location. With the increasing availability of hardware allowing for the use of large numbers of speakers in consumer applications, the possibility of using complex rendering strategies becomes more and more appealing. Indeed, the number of speakers still remains limited, so that using a single rendering method generally leads to strong limitations, particularly with regard to the extent of the sweet spot. Additionally, complex strategies can potentially deal with complex speaker setups, for example setups missing surround coverage in some region, or simply lacking speaker density. However, the standard limitations of those reproduction methods remain, leading to the necessary compromise between coverage (the largest array possible, to have a wider range of possible source locations) and density (the densest array possible, to avoid as much as possible high frequency distortion due to aliasing) for a given number of channels. In view of the above issues, embodiments are directed to using multiple types of renderers driven together to render object-based audio content.
For example, in the rendering system100(seeFIG.1), the distribution module110processes the object-based audio content based on the object metadata152and the loudspeaker configuration information156in order to determine (1) which of the renderers120to activate (the selection information162), and (2) the source position to be rendered by each activated renderer (the position information164). Each selected renderer then renders the object audio data154according to the position information164and generates the rendered signals166that the routing module130routes to the appropriate loudspeaker in the system. The routing module130allows the use of a given loudspeaker by multiple renderers. In this manner, the rendering system100uses the distribution module110to distribute each audio object to the renderers120that will effectively convey the intended spatial impression in the desired listening area. For a system of K speakers (k=1 . . . K), rendering O objects (o=1 . . . O) with R distinct renderers (r=1 . . . R), the output $s_k$ of each speaker k is given by:

$$s_k(t) = \sum_{o=1}^{O} \sum_{r=1}^{R} w_r(\vec{x}_o) * \left[ \delta_{k \in r} \, D_k^{(r)}(\vec{x}_r(o)) * s_o(t) \right]$$

In the above equation:
$s_k(t)$: output signal from speaker k
$s_o(t)$: object signal
$w_r$: activation of renderer r as a function of the object position $\vec{x}_o$ (can be a real scalar or a real filter)
$\delta_{k \in r}$: indicator function, equal to 1 if speaker k is attached to renderer r, 0 otherwise
$D_k^{(r)}$: driving function of speaker k as directed by renderer r, as a function of an object position $\vec{x}_r(o)$ (can be a real scalar or a real filter)
$\vec{x}_o$: object position according to its metadata
$\vec{x}_r(o)$: object position used to drive renderer r for object o (can be equal to $\vec{x}_o$)

The type of renderer for renderer r is reflected in the driving function $D_k^{(r)}$. The specific behavior of a given renderer is determined by its type and the available setup of speakers it is driving (as determined by $\delta_{k \in r}$). The distribution of a given object among the renderers is controlled by the distribution algorithm, through the activation coefficient $w_r$ and the mapping $\vec{x}_r(o)$ of a given object o in the space controlled by renderer r. Applying the above equation to the rendering system100(seeFIG.1), each $s_k$ corresponds to one of the loudspeaker signals170, $s_o$ corresponds to the object audio data154for a given audio object, $w_r$ corresponds to the selection information162, $\delta_{k \in r}$ corresponds to the loudspeaker configuration information156(e.g., configuring the routings performed by the routing module130), $D_k^{(r)}$ corresponds to a rendering function for each of the renderers120, and $\vec{x}_o$ and $\vec{x}_r(o)$ correspond to the position information164. The combination of $w_r$ and $D_k^{(r)}$ may be considered to be weights that provide the relative weight between the selected renderers for the given audio object. Although the above equation is written in the time domain, an example implementation may operate in the frequency domain, for example using a filter bank. Such an implementation may transform the object audio data154to the frequency domain, perform the operations of the above equation in the frequency domain (e.g., the convolutions become multiplications, etc.), and then inverse transform the results to generate the rendered signals166or the loudspeaker signals170.
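The following minimal time-domain sketch implements the summation above under a simplifying assumption: the activations $w_r$ and driving functions $D_k^{(r)}$ are reduced to real scalars (in general they are filters applied by convolution), and the renderer interface (weight, map_pos, drives, gain) is hypothetical:

```python
import numpy as np

def speaker_outputs(objects, renderers, K):
    """Computes s_k(t) = sum_o sum_r w_r(x_o) * [delta_{k in r} *
    D_k^(r)(x_r(o)) * s_o(t)] with scalar weights.

    objects:   list of (s_o, x_o) pairs (signal array, metadata position)
    renderers: objects exposing weight(x_o), map_pos(x_o), drives(k),
               gain(k, x_ro) -- an assumed, illustrative interface
    K:         number of speakers
    """
    n = max(len(s_o) for s_o, _ in objects)
    out = np.zeros((K, n))
    for s_o, x_o in objects:
        for r in renderers:
            w = r.weight(x_o)             # activation w_r(x_o)
            x_ro = r.map_pos(x_o)         # mapped position x_r(o)
            for k in range(K):
                if r.drives(k):           # indicator delta_{k in r}
                    out[k, : len(s_o)] += w * r.gain(k, x_ro) * s_o
    return out
```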
FIGS.5A and5Bare respectively a top view and a side view of a soundbar500. The soundbar500may implement the rendering system100(seeFIG.1). The soundbar500includes a number of loudspeakers including a linear array502(having 12 loudspeakers502a,502b,502c,502d,502e,502f,502g,502h,502i,502j,502kand502l) and an upward firing group504(including 2 loudspeakers504aand504b). The loudspeaker502amay be referred to as the far left loudspeaker, the loudspeaker502lmay be referred to as the far right loudspeaker, the loudspeaker504amay be referred to as the upward left loudspeaker, and the loudspeaker504bmay be referred to as the upward right loudspeaker. The number of loudspeakers and their arrangement may be adjusted as desired. The soundbar500is suitable for consumer use, for example in a home theater configuration, and may receive its input from a connected television or audio/video receiver. The soundbar500may be placed above or below the television screen, for example. FIGS.6A,6B and6Care respectively a first top view, a second top view and a side view showing the output coverage for the soundbar500(seeFIGS.5A and5B) in a room.FIG.6Ashows a near field output602generated by the linear array502. The near field output602is generally projected outward from the front of the linear array502.FIG.6Bshows virtual side outputs604aand604bgenerated by the linear array502using beamforming. The virtual side outputs604aand604bresult from beamforming against the walls.FIG.6Cshows a virtual top output606generated by the upward firing group504. (Also shown is the near field output602ofFIG.6A, generally in the plane of the listener.) The virtual top output606results from reflecting against the ceiling. For a given audio object, the soundbar500may combine two or more of these outputs together, e.g. using a routing module such as the routing module130(seeFIG.1), in order to conform the audio object's perceived position with its position metadata. FIG.7is a block diagram of a rendering system700. The rendering system700is a specific embodiment of the rendering system100(seeFIG.1) suitable for the soundbar500(seeFIG.5A). The rendering system700may be implemented using the components of the rendering system300(seeFIG.3). As with the rendering system100, the rendering system700receives the audio signal150. The rendering system700includes a distribution module710, four renderers720a,720b,720cand720d(collectively the renderers720), and a routing module730. The distribution module710, in a manner similar to the distribution module110(seeFIG.1), receives the object metadata152and the loudspeaker configuration information156, and generates the selection information162and the position information164. The renderers720receive the object audio data154, the loudspeaker configuration information156, the selection information162and the position information164, and generate rendered signals766a,766b,766cand766d(collectively the rendered signals766). The renderers720otherwise function similarly to the renderers120(seeFIG.1). The renderers720include a wave field renderer720a, a left beamformer720b, a right beamformer720c, and a vertical panner720d. The wave field renderer720agenerates the rendered signals766acorresponding to the near field output602(seeFIG.6A). The left beamformer720bgenerates the rendered signals766bcorresponding to the virtual side output604a(seeFIG.6B). The right beamformer720cgenerates the rendered signals766ccorresponding to the virtual side output604b(seeFIG.6B). The vertical panner720dgenerates the rendered signals766dcorresponding to the virtual top output606(seeFIG.6C).
The routing module730receives the loudspeaker configuration information156and the rendered signals766, and combines the rendered signals766in a manner similar to the routing module130(seeFIG.1) to generate loudspeaker signals770aand770b(collectively the loudspeaker signals770). The routing module730combines the rendered signals766a,766band766cto generate the loudspeaker signals770athat are provided to the loudspeakers of the linear array502(seeFIG.5A). The routing module730routes the rendered signals766dto the loudspeakers of the upward firing group504(seeFIG.5A) as the loudspeaker signals770b. As an audio object's perceived position changes across the listening environment, the distribution module710performs cross-fading (using the position information164) among the various renderers720to result in smooth perceived source motion between the different regions ofFIGS.6A,6B and6C. FIGS.8A and8Bare respectively a top view and a side view showing an example of the source distribution for the soundbar500(seeFIG.5A). For a particular audio object in the audio signal150(seeFIG.1), the object metadata152defines a desired perceived position within a virtual cube of size 1×1×1. This virtual cube is mapped to a cube in the listening environment, e.g. by the distribution module110(seeFIG.1) or the distribution module710(seeFIG.7) using the position information164. FIG.8Ashows the horizontal plane (x,y), with the point902at (0,0), point904at (1,0), point906at (0,−0.5), and point908at (1,−0.5). (These points are marked with the “X”.) The perceived position of the audio object is then mapped from the virtual cube to the rectangular area920defined by these four points. Note that this plane is only half the virtual cube in this dimension, and that sources where y>0.5 (e.g., behind the listener positions910) are placed on the line between the points906and908, in front of the listener positions910. The points902and904may be considered to be at the front wall of the listening environment. The width of the area920(e.g., between points902and904) is roughly aligned with (or slightly inside of) the sides of the linear array502(see alsoFIG.5A). FIG.8Bshows the vertical plane (x,z), with the point902at (0,0), point906at (−0.5,0), point912at (0,1), and point916at (−0.5,1). The perceived position of the audio object is then mapped from the virtual cube to the rectangular area930defined by these four points. As withFIG.8A, inFIG.8Bsources where y>0.5 (e.g., behind the listener positions910) are placed on the line between the points906and916. The points912and916may be considered to be at the ceiling of the listening environment. The bottom of the area930is aligned at the level of the linear array502. InFIG.8A, note the trapezoid922in the horizontal plane, with its wide base aligned with one side of the area920between points902and904, and its narrow base aligned in front of the listener positions910(on the line between points906and908). The system distinguishes sources with desired perceived positions inside the trapezoid922from those outside the trapezoid922(but still within the area920). Within the trapezoid922, the source is reproduced without using the beamformers (e.g.,720band720cinFIG.7); instead, the sound field renderer (e.g.,720ainFIG.7) is used to reproduce the source. Outside the trapezoid922, the source may be reproduced using both the beamformers (e.g.,720band720c) and the sound field renderer (e.g.,720a) in the horizontal plane. 
In particular, the sound field renderer720aplaces a source at the same coordinate y, at the very left of the trapezoid922, if the source is located on the left (or the very right if the source is located on the right), while the two beamformers720band720ccreate a stereo phantom source between each other through panning. The left-right panning factor between the two beamformers720band720cmay follow a constant energy amplitude panning rule mapping x=0 to the left beamformer720bonly and x=1 to the right beamformer720conly. (The distribution module710may use the position information164to implement this amplitude panning rule, e.g., using the weights.) The system applies a constant-energy cross-fading rule between the sound field renderer720aand the pair of beamformers720b-720c, so that the sound energy from the beamformers720b-720cincreases while the sound energy from the sound field renderer720adecreases as the source is placed further from the trapezoid922. (The distribution module710may use the position information164to implement this cross-fading rule.) In the z dimension (seeFIG.8B), the system applies a constant-energy cross-fade rule between the signal fed to the combination of the beamformers720b-720cand the sound field renderer720a, and the rendered signals766drendered by the vertical panner720dthat are fed to the upward firing group504(seeFIGS.5A and5B). The cross-fade factor is proportional to the z coordinate, with z=0 corresponding to all of the signal being rendered through the beamformers720b-720cand the sound field renderer720a, and z=1 corresponding to all of the signal being rendered using the vertical panner720d. The rendered signal766dproduced by the vertical panner720dis distributed between the two channels (to the two loudspeakers504aand504b) using a constant-energy amplitude panning rule, mapping x=0 to the left loudspeaker504aonly and x=1 to the right loudspeaker504bonly. (The distribution module710may use the position information164to implement this amplitude panning rule.)
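One way to realize the constant-energy rules just described, as a hedged sketch (the patent does not spell out the gain law; square-root gains are assumed so that the energy split is proportional to the cross-fade factor):

```python
import numpy as np

def constant_energy_crossfade(t):
    """Gains (g_a, g_b) for a constant-energy cross-fade driven by a factor
    t in [0, 1]: g_a**2 + g_b**2 == 1, with t=0 all signal to path a and
    t=1 all signal to path b.  For the z dimension above, path a is the
    sound field renderer plus beamformers and path b the vertical panner."""
    return np.sqrt(1.0 - t), np.sqrt(t)
```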
FIGS.9A and9Bare top views showing a mapping of object-based audio (FIG.9A) to a loudspeaker array (FIG.9B).FIG.9Ashows a horizontal square region1000defined by point1002at (0,0), point1004at (1,0), point1006at (0,1), and point1008at (1,1). Point1003is at (0,0.5), at the midpoint between points1002and1006, and point1007is at (1,0.5), at the midpoint between points1004and1008. Point1005is at (0.5,0.5), the center of the square region1000. Points1002,1004,1012and1014define a trapezoid1016. Adjacent to the sides of the trapezoid1016are two zones1020and1022, which have a width of 0.25 units in the specified x direction. Adjacent to the sides of the zones1020and1022are the triangles1024and1026. An audio object may have a desired perceived position within the square region1000according to its metadata (e.g., the object metadata152ofFIG.1). An example object audio system that uses the horizontal square1000is the Dolby Atmos® system. FIG.9Bshows the mapping of a portion of the square region1000(seeFIG.9A) to a region1050defined by points1052,1054,1053and1057. Note that only half of the square region1000(defined by the points1002,1004,1003and1007) is mapped to the region1050; the perceived positions in the other half of the square region1000are mapped on the line between points1053and1057. (This is similar to what was described above inFIG.8A.) A loudspeaker array1059is within the region1050; the width of the loudspeaker array1059corresponds to the width L of the region1050. Similarly to the square region1000(seeFIG.9A), the region1050includes a trapezoid1056, two zones1070and1072adjacent to the sides of the trapezoid1056, and two triangles1074and1076. The zones1070and1072correspond to the zones1020and1022(seeFIG.9A), and the triangles1074and1076correspond to the triangles1024and1026(seeFIG.9A). A wide base of the trapezoid1056corresponds to the width L of the region1050, and a narrow base corresponds to a width l. The height of the trapezoid1056is (H−h), where H corresponds to a large triangle that includes the trapezoid1056and extends from the wide base (having width L) to a point1075, and h corresponds to the height of a small triangle that extends from the narrow base (having width l) to the point1075. As will be detailed more below, within the zones1070and1072, the system implements a constant-energy cross-fading rule between the categories of renderers. More precisely, the output of the loudspeaker array1059(seeFIG.9B) may be described as follows. The loudspeaker array1059has M speakers (m=1, . . . , M from left to right). Those speakers are driven as follows:

$$s_m(t) = \sum_{o=1}^{O} s_o(t) * \sin z_o \cdot \left[ \sin\!\big(\theta_{NF}(x_o,y_o)\big) \cdot D_m^{NF}\big(x_{NF}(o), y_{NF}(o)\big) + \cos\!\big(\theta_{NF}(x_o,y_o)\big) \cdot D_m^{B} \right]$$

The factor $\theta_{NF/B}(x_o,y_o)$ drives the balance between the near-field wave field synthesis renderer720aand the beamformers720b-720c(seeFIG.7). It is defined using the notation presented inFIG.9Bfor the trapezoid1056, so that for $y_o \le \tfrac{1}{2}$:

$$\theta_{NF/B}(x_o,y_o) = \begin{cases} 1, & \text{if } \left|x_o-\tfrac{1}{2}\right| < \tfrac{1}{2} - y_o\tfrac{L-l}{L} \\[4pt] \left|4x_o-2\right| - 2 + 4y_o\tfrac{L-l}{L}, & \text{if } \left|x_o-\tfrac{1}{2}\right| \in \left[\tfrac{1}{2} - y_o\tfrac{L-l}{L},\ \tfrac{3}{4} - y_o\tfrac{L-l}{L}\right] \\[4pt] 0, & \text{if } \left|x_o-\tfrac{1}{2}\right| > \tfrac{3}{4} - y_o\tfrac{L-l}{L} \end{cases}$$

Then, for $y_o > \tfrac{1}{2}$:

$$\theta_{NF/B}(x_o,y_o) = \left|4x_o-2\right| - \frac{2l}{L}$$

The positioning of the sources in the near-field, using the wave field renderer720a, follows the rule:

$$x_{NF}(o) = x_o\,\frac{l}{L} \quad\text{and}\quad y_{NF}(o) = \min\!\left(y_o, \tfrac{1}{2}\right)\cdot H$$

The driving functions are written in the frequency domain. For sources behind the array plane (e.g., behind the loudspeaker array1059such as on the line between points1052and1054):

$$D_m^{NF}(\vec{x}_{NF}(o);\omega) = \alpha\big(\vec{x}_{NF}(o);\vec{x}_l\big)\cdot EQ_m(\omega)\cdot PreEQ\big(\vec{x}_{NF}(o);\omega\big)\cdot \underbrace{\frac{e^{-j\frac{\omega}{c}\left\|\vec{x}_m-\vec{x}_{NF}(o)\right\|_2}}{\left\|\vec{x}_m-\vec{x}_{NF}(o)\right\|_2^{3/2}}}_{\text{WFS driving function}} \quad (1)$$

with $\vec{x}_{NF}(o)=(x_{NF}(o),\,y_{NF}(o),\,0)$ and $c$ the speed of sound. And in front of the array plane (e.g., in front of the loudspeaker array1059), note that only the last term changes:

$$D_m^{NF}(\vec{x}_{NF}(o);\omega) = \alpha\big(\vec{x}_{NF}(o);\vec{x}_l\big)\cdot EQ_m(\omega)\cdot PreEQ\big(\vec{x}_{NF}(o);\omega\big)\cdot \underbrace{\frac{e^{+j\frac{\omega}{c}\left\|\vec{x}_m-\vec{x}_{NF}(o)\right\|_2}}{\left\|\vec{x}_m-\vec{x}_{NF}(o)\right\|_2^{3/2}}}_{\text{WFS driving function}} \quad (2)$$

with $\vec{x}_{NF}(o)=(x_{NF}(o),\,y_{NF}(o),\,0)$. In these expressions, the last term corresponds to the amplitude and delay control values in the 2.5D Wave Field Synthesis theory for localized sources in front of and behind the array plane (e.g., defined by the loudspeaker array1059). (An overview of Wave Field Synthesis theory is provided by H. Wierstorf, “Perceptual Assessment of Sound Field Synthesis,” Technische Universitat Berlin, 2014.) The other coefficients are defined as follows:
$\omega$: frequency (in rad/s)
$\alpha$: window function, limits truncation artifacts and implements local wave field synthesis, as a function of source and listening positions
$EQ_m$: equalization filter compensating for speaker response distortion
$PreEQ$: pre-equalization filter compensating for 2.5-dimension effects and truncation effects
$\vec{x}_l$: arbitrary listening position
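As a concrete reading of the equations above, the following sketch (hypothetical helper names; the window $\alpha$ and the equalization/pre-equalization filters are omitted) transcribes the piecewise balance factor for the $y_o \le \tfrac{1}{2}$ case and the delay/amplitude pair implied by the underbraced WFS term:

```python
import numpy as np

def theta_nf_b(x_o, y_o, L, l):
    """Balance factor between the near-field WFS renderer and the beamformers,
    transcribing the piecewise definition above (y_o <= 1/2) with the middle
    branch clamped to [0, 1]."""
    d = abs(x_o - 0.5)               # lateral distance from the center line
    shrink = y_o * (L - l) / L       # trapezoid half-width lost at depth y_o
    if d < 0.5 - shrink:
        return 1.0                   # inside the trapezoid: near-field only
    if d > 0.75 - shrink:
        return 0.0                   # outside the cross-fade zone: beams only
    return min(1.0, max(0.0, abs(4.0 * x_o - 2.0) - 2.0 + 4.0 * shrink))

def wfs_delay_and_gain(x_m, x_nf, c=343.0):
    """Delay (seconds) and amplitude from the underbraced WFS driving term:
    e^{-j(w/c)||x_m - x_nf||_2} / ||x_m - x_nf||_2^{3/2}, i.e. a pure delay
    of ||x_m - x_nf|| / c and a 1/r^{3/2} rolloff.  Per Equation (2), the
    sign of the delay flips for focused sources in front of the array."""
    r = np.linalg.norm(np.asarray(x_m, float) - np.asarray(x_nf, float))
    return r / c, r ** -1.5
```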
Regarding the beamformers720b-720c, the system pre-computes a set of M/2 speaker delays and amplitudes adapted to the configuration of the left half of the linear loudspeaker array1059. In the frequency domain, this gives filter coefficients $B_m(\omega)$ for each speaker m and frequency $\omega$. The beamformer driving function for the left half of the speaker array (m=1 . . . M/2) is then a filter defined in the frequency domain as:

$$D_m^{B}(\omega)=EQ_m(\omega)\cdot B_m(\omega)$$

In the above equation, $EQ_m$ is the equalization filter compensating for speaker response distortion (the same filter as in Equations (1) and (2)). The system is designed for a symmetric setup, so that the beam filters can simply be flipped for the right half of the array to obtain the other beam; for m=M/2 . . . M:

$$D_m^{B}(\omega)=EQ_m(\omega)\cdot B_{M-m+1}(\omega)$$

The rendered signals766d(seeFIG.7), which correspond to the loudspeaker signals770bprovided to the two upward firing speakers504a-504b(seeFIG.5), are the signals $s_{UL}$ and $s_{UR}$ given as follows:

$$\begin{cases} s_{UL}(t)=\displaystyle\sum_{o=1}^{O}\cos z_o\cdot \sin y_o\cdot D_{UL}^{H}\big(z_H(o)\big) * s_o(t)\\[6pt] s_{UR}(t)=\displaystyle\sum_{o=1}^{O}\cos z_o\cdot \cos y_o\cdot D_{UR}^{H}\big(z_H(o)\big) * s_o(t)\end{cases}$$

According to an embodiment, the vertical panner720d(seeFIG.7) includes a pre-filtering stage. The pre-filtering stage applies a height perceptual filter H proportionally to the height coordinate $z_o$. In such a case, the applied filter for a given $z_o$ is $\sqrt{(1-z_o)+z_o H^2}$. FIG.10is a block diagram of a rendering system1100. The rendering system1100is a modification of the rendering system700(seeFIG.7) suitable for implementation in the soundbar500(seeFIG.5A). The rendering system1100may be implemented using the components of the rendering system300(seeFIG.3). The components of the rendering system1100are similar to those of the rendering system700and use similar reference numbers. The rendering system1100also includes a second pair of beamformers1120eand1120f. The left beamformer1120egenerates rendered signals1166d, and the right beamformer1120fgenerates rendered signals1166e, which the routing module730combines with the other rendered signals766a,766band766cto generate the loudspeaker signals770a. When their output is considered on its own, the left beamformer1120ecreates a virtual left rear source, and the right beamformer1120fcreates a virtual right rear source, as shown inFIG.11. FIG.11is a top view showing the output coverage for the beamformers1120eand1120f, implemented in the soundbar500(seeFIGS.5A and5B) in a room. (The output coverage for the other renderers of the rendering system1100is as shown inFIGS.6A-6C.) The virtual left rear output1206aresults from the left beamformer1120e(seeFIG.10) generating signals that are reflected from the left wall and back wall of the room. The virtual right rear output1206bresults from the right beamformer1120f(seeFIG.10) generating signals that are reflected from the right wall and back wall of the room. (Note the triangular area where1206aand1206boverlap behind the listeners.) For a given audio object, the soundbar500may combine the output coverage ofFIG.11with one or more of the output coverages ofFIGS.6A-6C, e.g. using a routing module such as the routing module730(seeFIG.10). The output coverages ofFIGS.6A-6C and11show how the soundbar500(seeFIGS.5A and5B) may be used in place of the loudspeakers in a traditional 7.1-channel (or 7.1.2-channel) surround sound system.
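Two of the steps above lend themselves to a short sketch: mirroring the precomputed left-half beam filters for a symmetric array, and blending the height perceptual filter with a flat response. This is a minimal illustration; the constant-energy reading of the height blend follows the expression reconstructed above and is an interpretation, and the function names are not from the present disclosure.

```python
import numpy as np

def mirror_beam_filters(B_left: np.ndarray) -> np.ndarray:
    """Builds beam filters for all M speakers of a symmetric array from
    the precomputed left-half filters (shape (M//2, n_freqs)): the
    right-half speakers reuse the left-half filters in reverse order,
    i.e., speaker m uses B_{M-m+1}."""
    return np.vstack([B_left, B_left[::-1]])

def height_prefilter(H_response: np.ndarray, z_o: float) -> np.ndarray:
    """Magnitude response of the vertical panner's pre-filter for height
    z_o: a constant-energy mix of a flat response and the height
    perceptual filter H."""
    return np.sqrt((1.0 - z_o) + z_o * np.abs(H_response) ** 2)

# Example: a 12-speaker array with 4 frequency bins per beam filter.
B_left = np.random.default_rng(0).standard_normal((6, 4))
assert mirror_beam_filters(B_left).shape == (12, 4)
```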
The left, center and right loudspeakers of the 7.1-channel system may be replaced by the linear array502driven by the sound field renderer720a(seeFIG.7), resulting in the output coverage shown inFIG.6A. The top loudspeakers of the 7.1.2-channel system may be replaced by the upward firing group504driven by the vertical panner720d, resulting in the output coverage shown inFIG.6C. The left and right surround loudspeakers of the 7.1-channel system may be replaced by the linear array502driven by the beamformers720band720c, resulting in the output coverage shown inFIG.6B. The left and right rear surround loudspeakers of the 7.1-channel system may be replaced by the linear array502driven by the beamformers1120eand1120f(seeFIG.10), resulting in the output coverage shown inFIG.11. As discussed above, the system enables multiple renderers to render an audio object, according to their combined output coverages, in order to generate an appropriate perceived position for the audio object. In summary, the systems described herein have an advantage of having the rendering system with the most resolution (e.g., the near field renderer) at the front where most of the cinematographic content is expected to be located (as it matches the screen location) and where human localization accuracy is maximal, while rear, lateral and height rendering remains coarser, which may be less critical for typical cinematographic content. Many of these systems also remain relatively compact and can sensibly be integrated alongside typical visual devices (e.g., above or below the television screen). One feature to keep in mind is that the speaker array can be used to generate concurrently a large number of beams thanks to the superposition principle (e.g., combined using the routing module), to create much more complex systems. Beyond the output coverages shown above, further configurations may model other loudspeaker setups using other combinations of renderers. FIG.12is a top view of a soundbar1200. The soundbar1200may implement the rendering system100(seeFIG.1). The soundbar1200is similar to the soundbar500(seeFIG.5A), and includes the linear array502(having 12 loudspeakers502a,502b,502c,502d,502e,502f,502g,502h,502i,502j,502kand502l) and the upward firing group504(including 2 loudspeakers504aand504b). The soundbar1200also includes two side firing loudspeakers1202aand1202b, with the loudspeaker1202areferred to as the left side firing loudspeaker and the loudspeaker1202breferred to as the right side firing loudspeaker. As compared to the soundbar500(seeFIG.5A), the soundbar1200uses the side firing loudspeakers1202aand1202bto generate the virtual side outputs604aand604b(seeFIG.6B). FIG.13is a block diagram of a rendering system1300. The rendering system1300is a modification of the rendering system1100(seeFIG.10) suitable for implementation in the soundbar1200(seeFIG.12). The rendering system1300may be implemented using the components of the rendering system300(seeFIG.3). The components of the rendering system1300are similar to those of the rendering system1100and use similar reference numbers. As compared to the rendering system1100, the rendering system1300replaces the beamformers720band720cwith a binaural renderer1320. The binaural renderer1320receives the loudspeaker configuration information156, the object audio data154, the selection information162, and the position information164. 
The binaural renderer1320performs binaural rendering on the object audio data154and generates a left binaural signal1366band a right binaural signal1366c. Considering only the side firing loudspeakers1202aand1202b(seeFIG.12), the left binaural signal1366bgenerally corresponds to the output from the left side firing loudspeaker1202a, and the right binaural signal1366cgenerally corresponds to the output from the right side firing loudspeaker1202b. (Recall that the routing module730will then combine the binaural signals1366band1366cwith the other rendered signals766to generate the loudspeaker signals770to the full set of loudspeakers502,504and1202.) FIG.14is a block diagram of a renderer1400. The renderer1400may correspond to one or more of the renderers discussed above, such as the renderers120(seeFIG.1), the renderers720(seeFIG.7), the renderers1120(seeFIG.10), etc. The renderer1400illustrates that a renderer may include more than one renderer as components thereof. As shown here, the renderer1400includes a renderer1402in series with a renderer1404. Although two renderers1402and1404are shown, the renderer1400may include additional renderers, in assorted serial and parallel configurations. The renderer1400receives the loudspeaker configuration information156, the selection information162, and the position information164; the renderer1400may provide these signals to one or more of the renderers1402and1404, depending upon their particular configurations. The renderer1402receives the object audio data154, and one or more of the loudspeaker configuration information156, the selection information162, and the position information164. The renderer1402performs rendering on the object audio data154and generates rendered signals1410. The rendered signals1410generally correspond to intermediate rendered signals. For example, the rendered signals1410may be virtual speaker feed signals. The renderer1404receives the rendered signals1410, and one or more of the loudspeaker configuration information156, the selection information162, and the position information164. The renderer1404performs rendering on the rendered signals1410and generates rendered signals1412. The rendered signals1412correspond to the rendered signals discussed above, such as the rendered signals166(seeFIG.1), the rendered signals766(seeFIG.7), the rendered signals1166(seeFIG.10), etc. The renderer1400may then provide the rendered signals1412to a routing module (e.g., the routing module130ofFIG.1, the routing module730ofFIG.7orFIG.10orFIG.13), etc. in a manner similar to that discussed above. In general, the renderers1402and1404have different types in a manner similar to that discussed above. For example, the types may include amplitude panners, vertical panners, wave field renderers, binaural renderers, and beamformers. A specific example configuration is shown inFIG.15. FIG.15is a block diagram of a renderer1500. The renderer1500may correspond to one or more of the renderers discussed above, such as the renderers120(seeFIG.1), the renderers720(seeFIG.7), the renderers1120(seeFIG.10), the renderer1400(seeFIG.14), etc. The renderer1500includes an amplitude panner1502, a number N of binaural renderers1504(three shown:1504a,1504band1504c), and a number M of beamformer sets that include a number of left beamformers1506(three shown:1506a,1506band1506c) and right beamformers1508(three shown:1508a,1508band1508c). The amplitude panner1502receives the object audio data154, the selection information162, and the position information164. 
The amplitude panner1502performs rendering on the object audio data154and generates virtual speaker feeds1520(three shown:1520a,1520band1520c), in a manner similar to the other amplitude panners described herein. The virtual speaker feeds1520may correspond to canonical loudspeaker feed signals such as 5.1-channel surround signals, 7.1-channel surround signals, 7.1.2-channel surround signals, 7.1.4-channel surround signals, 9.1-channel surround signals, etc. The virtual speaker feeds1520are referred to as “virtual” since they need not be provided directly to actual loudspeakers, but instead may be provided to the other renderers in the renderer1500for further processing. The specifics of the virtual speaker feeds1520may differ among the various embodiments and implementations of the renderer1500. For example, when the virtual speaker feeds1520include a low-frequency effects channel signal, the amplitude panner1502may provide that channel signal to one or more loudspeakers directly (e.g., bypassing the binaural renderers1504and the beamformers1506and1508). As another example, when the virtual speaker feeds1520include a center channel signal, the amplitude panner1502may provide that channel signal to one or more loudspeakers directly, or may provide that signal directly to a set of one of the left beamformers1506and one of the right beamformers1508(e.g., bypassing the binaural renderers1504). The binaural renderers1504receive the virtual speaker feeds1520and the loudspeaker configuration information156. (In general, the number N of binaural renderers1504depends upon the specifics of the embodiments of the renderer1500, such as the number of virtual speaker feeds1520, the type of virtual speaker feed, etc., as discussed above.) The binaural renderers1504perform rendering on the virtual speaker feeds1520and generate left binaural signals1522(three shown:1522a,1522band1522c) and right binaural signals1524(three shown:1524a,1524band1524c), in a manner similar to the other binaural renderers described herein. The left beamformers1506receive the left binaural signals1522and the loudspeaker configuration information156, and the right beamformers1508receive the right binaural signals1524and the loudspeaker configuration information156. Each of the left beamformers1506may receive one or more of the left binaural signals1522, and each of the right beamformers1508may receive one or more of the right binaural signals1524, again depending on the specifics of the embodiments of the renderer1500as discussed above. (These one-or-more relationships are indicated by the dashed lines for1522and1524inFIG.15.) The left beamformers1506perform rendering on the left binaural signals1522and generate rendered signals1566(three shown:1566a,1566band1566c). The right beamformers1508perform rendering on the right binaural signals1524and generate rendered signals1568(three shown:1568a,1568band1568c). The beamformers1506and1508otherwise operate in a manner similar to the other beamformers described herein. The rendered signals1566and1568correspond to the rendered signals discussed above, such as the rendered signals166(seeFIG.1), the rendered signals766(seeFIG.7), the rendered signals1166(seeFIG.10), the rendered signals1412(seeFIG.14), etc. The renderer1500may then provide the rendered signals1566and1568to a routing module (e.g., the routing module130ofFIG.1, the routing module730ofFIG.7orFIG.10orFIG.13), etc. in a manner similar to that discussed above. 
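The serial arrangement of the renderer1400and the panner-binaural-beamformer chain of the renderer1500can be summarized in a short sketch. This is illustrative only: the callables stand in for the actual renderer stages (which would be filters operating on sampled signals), and the shapes and names are assumptions rather than an implementation of the present disclosure.

```python
import numpy as np

def render_chain(obj_audio, pan_gains, binaural_renderers, left_beams, right_beams):
    """Sketch of the renderer1500 chain: an amplitude panner produces
    virtual speaker feeds, each feed is binauralized, and the left/right
    binaural signals are beamformed into per-speaker driving signals,
    which the routing module then sums."""
    feeds = [g * obj_audio for g in pan_gains]          # virtual speaker feeds (1520)
    rendered = []
    for feed, binaural, beam_l, beam_r in zip(
            feeds, binaural_renderers, left_beams, right_beams):
        sig_l, sig_r = binaural(feed)                   # binaural signals (1522/1524)
        rendered.append(beam_l(sig_l) + beam_r(sig_r))  # per-array driving signals
    return np.sum(rendered, axis=0)                     # combined, as by a routing module
```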
The number M of left beamformers1506and right beamformers1508depends upon the specifics of the embodiments of the renderer1500, as discussed above. For example, the number M may be varied based on the form factor of the device that includes the renderer1500, on the number of loudspeaker arrays that are connected to the renderer1500, on the capabilities and arrangement of those loudspeaker arrays, etc. As a general guideline, the number M (of beamformers1506and1508) may be less than or equal to the number N (of binaural renderers1504). As another general guideline, the number of separate loudspeaker arrays may be less than or equal to twice the number N (of binaural renderers1504). As one example form factor, a device may have physically separate left and right loudspeaker arrays, where the left loudspeaker array produces all the left beams and the right loudspeaker array produces all the right beams. As another example form factor, a device may have physically separate front and rear loudspeaker arrays, where the front loudspeaker array produces the left and right beams for all front binaural signals, and the rear loudspeaker array produces the left and right beams for all rear binaural signals. FIG.16is a block diagram of a rendering system1600. The rendering system1600is similar to the rendering system100(seeFIG.1), with the renderers120(seeFIG.1) replaced by a renderer arrangement similar to that of the renderer1500(seeFIG.15); there are also differences relating to the distribution module110(seeFIG.1). The rendering system1600includes an amplitude panner1602, a number N of binaural renderers1604(three shown:1604a,1604band1604c), a number M of beamformer sets that include a number of left beamformers1606(three shown:1606a,1606band1606c) and right beamformers1608(three shown:1608a,1608band1608c), and a routing module1630. The amplitude panner1602receives the object metadata152and the object audio data154, performs rendering on the object audio data154according to the position information in the object metadata152, and generates virtual speaker feeds1620(three shown:1620a,1620band1620c), in a manner similar to the other amplitude panners described herein. Similarly, the specifics of the virtual speaker feeds1620may differ among the various embodiments and implementations of the rendering system1600, in a manner similar to that described above regarding the renderer1500(seeFIG.15). (As compared to the rendering system100(seeFIG.1), the rendering system1600omits the distribution module110, but uses the amplitude panner1602to weight the virtual speaker feeds1620among the binaural renderers1604.) The binaural renderers1604receive the virtual speaker feeds1620and the loudspeaker configuration information156. (In general, the number N of binaural renderers1604depends upon the specifics of the embodiments of the rendering system1600, such as the number of virtual speaker feeds1620, the type of virtual speaker feed, etc., as discussed above.) The binaural renderers1604perform rendering on the virtual speaker feeds1620and generate left binaural signals1622(three shown:1622a,1622band1622c) and right binaural signals1624(three shown:1624a,1624band1624c), in a manner similar to the other binaural renderers described herein. The left beamformers1606receive the left binaural signals1622and the loudspeaker configuration information156, and the right beamformers1608receive the right binaural signals1624and the loudspeaker configuration information156.
Each of the left beamformers1606may receive one or more of the left binaural signals1622, and each of the right beamformers1608may receive one or more of the right binaural signals1624, again depending on the specifics of the embodiments of the rendering system1600as discussed above. (These one-or-more relationships are indicated by the dashed lines for1622and1624inFIG.16.) The left beamformers1606perform rendering on the left binaural signals1622and generate rendered signals1666(three shown:1666a,1666band1666c). The right beamformers1608perform rendering on the right binaural signals1624and generate rendered signals1668(three shown:1668a,1668band1668c). The beamformers1606and1608otherwise operate in a manner similar to the other beamformers described herein. The routing module1630receives the loudspeaker configuration information156, the rendered signals1666and the rendered signals1668. The routing module1630generates loudspeaker signals1670, in a manner similar to the other routing modules described herein. FIG.17is a flowchart of a method1700of audio processing. The method1700may be performed by the rendering system1600(seeFIG.16). The method1700may be implemented by one or more computer programs, for example computer programs that the rendering system1600executes to control its operation. At1702, one or more audio objects are received. Each of the audio objects respectively includes position information. As an example, the rendering system1600(seeFIG.16) may receive the audio signal150, which includes the object metadata152and the object audio data154. For each of the audio objects, the method continues with1704. At1704, for a given audio object, the given audio object is rendered, based on the position information, using a first category of renderer to generate a first plurality of signals. For example, the amplitude panner1602(seeFIG.16) may render the given audio object (in the object audio data154) based on the position information (in the object metadata152) to generate the virtual loudspeaker signals1620. At1706, for the given audio object, the first plurality of signals are rendered using a second category of renderer to generate a second plurality of signals. For example, the binaural renderers1604(seeFIG.16) may render the virtual speaker feeds1620to generate the left binaural signals1622and the right binaural signals1624. At1708, for the given audio object, the second plurality of signals are rendered using a third category of renderer to generate a third plurality of signals. For example, the left beamformers1606may render the left binaural signals1622to generate the rendered signals1666, and the right beamformers1608may render the right binaural signals1624to generate the rendered signals1668. At1710, the third plurality of signals are combined to generate a plurality of loudspeaker signals. For example, the routing module1630(seeFIG.16) may combine the rendered signals1666and the rendered signals1668to generate the loudspeaker signals1670. At1712, the plurality of loudspeaker signals (see1710) are output from a plurality of loudspeakers. When multiple audio objects are to be output concurrently, the method1700operates similarly. For example, multiple given audio objects may be processed using multiple paths of1704-1706-1708in parallel, with the rendered signals corresponding to the multiple audio objects being combined (see1710) to generate the loudspeaker signals.
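The parallel-path reading of steps 1704-1710 can be sketched as follows. The stage callables are placeholders standing in for the amplitude panner, binaural renderers, and beamformers described above; this is an illustration of the control flow, not an implementation of the present disclosure.

```python
def method_1700(objects, panner, binauralize, beamform_left, beamform_right):
    """Render each audio object through the three renderer categories in
    parallel paths and sum the per-object results into one set of
    loudspeaker signals (step 1710)."""
    loudspeaker_signals = None
    for audio, position in objects:
        feeds = panner(audio, position)                          # step 1704
        left, right = binauralize(feeds)                         # step 1706
        rendered = beamform_left(left) + beamform_right(right)   # step 1708
        loudspeaker_signals = (rendered if loudspeaker_signals is None
                               else loudspeaker_signals + rendered)  # step 1710
    return loudspeaker_signals
```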
As another example, multiple given audio objects may be processed by combining the rendered signal for each audio object at the output of one or more of the rendering stages. Applying this example to the rendering system1600(seeFIG.16), the amplitude panner1602may render the multiple given audio objects such that each of the virtual loudspeaker signals1620corresponds to a combined rendering of the multiple given audio objects, and the binaural renderers1604and the beamformers1606and1608then operate on the combined rendering.

Implementation Details

An embodiment may be implemented in hardware, executable modules stored on a computer readable medium, or a combination of both (e.g., programmable logic arrays). Unless otherwise specified, the steps executed by embodiments need not inherently be related to any particular computer or other apparatus, although they may be in certain embodiments. In particular, various general-purpose machines may be used with programs written in accordance with the teachings herein, or it may be more convenient to construct more specialized apparatus (e.g., integrated circuits) to perform the required method steps. Thus, embodiments may be implemented in one or more computer programs executing on one or more programmable computer systems each comprising at least one processor, at least one data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device or port, and at least one output device or port. Program code is applied to input data to perform the functions described herein and generate output information. The output information is applied to one or more output devices, in known fashion. Each such computer program is preferably stored on or downloaded to a storage medium or device (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage medium or device is read by the computer system to perform the procedures described herein. The inventive system may also be considered to be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer system to operate in a specific and predefined manner to perform the functions described herein. (Software per se and intangible or transitory signals are excluded to the extent that they are unpatentable subject matter.) The above description illustrates various embodiments of the present invention along with examples of how aspects of the present invention may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the present invention as defined by the following claims. Based on the above disclosure and the following claims, other arrangements, embodiments, implementations and equivalents will be evident to those skilled in the art and may be employed without departing from the spirit and scope of the invention as defined by the claims. Various aspects of the present invention may be appreciated from the following enumerated example embodiments (EEEs):1.
A method of audio processing, the method comprising:receiving one or more audio objects, wherein each of the one or more audio objects respectively includes position information;for a given audio object of the one or more audio objects:selecting, based on the position information of the given audio object, at least two renderers of a plurality of renderers, wherein the at least two renderers have at least two categories;determining, based on the position information of the given audio object, at least two weights;rendering, based on the position information, the given audio object using the at least two renderers weighted according to the at least two weights, to generate a plurality of rendered signals; andcombining the plurality of rendered signals to generate a plurality of loudspeaker signals; andoutputting, from a plurality of loudspeakers, the plurality of loudspeaker signals.2. The method of EEE 1, wherein the at least two categories include a sound field renderer, a beamformer, a panner, and a binaural renderer.3. The method of any one of EEEs 1-2, wherein a given rendered signal of the plurality of rendered signals includes at least one component signal,wherein each of the at least one component signal is associated with a respective one of the plurality of loudspeakers, andwherein a given loudspeaker signal of the plurality of loudspeaker signals corresponds to combining, for a given loudspeaker of the plurality of loudspeakers, all of the at least one component signal that are associated with the given loudspeaker.4. The method of EEE 3, wherein a first renderer generates a first rendered signal, wherein the first rendered signal includes a first component signal associated with a first loudspeaker and a second component signal associated with a second loudspeaker,wherein a second renderer generates a second rendered signal, wherein the second rendered signal includes a third component signal associated with the first loudspeaker and a fourth component signal associated with the second loudspeaker,wherein a first loudspeaker signal associated with the first loudspeaker corresponds to combining the first component signal and the third component signal, andwherein a second loudspeaker signal associated with the second loudspeaker corresponds to combining the second component signal and the fourth component signal.5. The method of any one of EEEs 1-4, wherein rendering the given audio object includes, for a given renderer of the plurality of renderers, applying a gain based on the position information to generate a given rendered signal of the plurality of rendered signals.6. The method of any one of EEEs 1-5, wherein the plurality of loudspeakers includes a dense linear array of loudspeakers.7. The method of any one of EEEs 1-6, wherein the at least two categories includes a sound field renderer, wherein the sound field renderer performs a wave field synthesis process.8. The method of any one of EEEs 1-7, wherein the plurality of loudspeakers are arranged in a first group that is directed in a first direction and a second group that is directed in a second direction that differs from the first direction.9. The method of EEE 8, wherein the first direction includes a forward component and the second direction includes a vertical component.10. 
The method of EEE 8, wherein the second direction includes a vertical component, wherein the at least two renderers includes a wave field synthesis renderer and an upward firing panning renderer, and wherein the wave field synthesis renderer and the upward firing panning renderer generate the plurality of rendered signals for the second group.11. The method of EEE 8, wherein the second direction includes a vertical component, wherein the at least two renderers includes a wave field synthesis renderer, an upward firing panning renderer and a beamformer, and wherein the wave field synthesis renderer, the upward firing panning renderer and the beamformer generate the plurality of rendered signals for the second group.12. The method of EEE 8, wherein the second direction includes a vertical component, wherein the at least two renderers includes a wave field synthesis renderer, an upward firing panning renderer and a side firing panning renderer, and wherein the wave field synthesis renderer, the upward firing panning renderer and the side firing panning renderer generate the plurality of rendered signals for the second group.13. The method of EEE 8, wherein the first direction includes a forward component and the second direction includes a side component.14. The method of EEE 8, wherein the first direction includes a forward component, wherein the at least two renderers includes a wave field synthesis renderer, and wherein the wave field synthesis renderer generates the plurality of rendered signals for the first group.15. The method of EEE 8, wherein the second direction includes a side component, wherein the at least two renderers includes a wave field synthesis renderer and a beamformer, and wherein the wave field synthesis renderer and the beamformer generate the plurality of rendered signals for the second group.16. The method of EEE 8, wherein the second direction includes a side component, wherein the at least two renderers includes a wave field synthesis renderer and a side firing panning renderer, and wherein the wave field synthesis renderer and the side firing panning renderer generate the plurality of rendered signals for the second group.17. The method of any one of EEEs 1-16, further comprising:combining the plurality of rendered signals for the one or more audio objects to generate the plurality of loudspeaker signals.18. The method of any one of EEEs 1-17, wherein the at least two renderers includes renderers in series.19. The method of any one of EEEs 1-18, wherein the at least two renderers includes an amplitude panner, a plurality of binaural renderers, and a plurality of beamformers;wherein the amplitude panner is configured to render, based on the position information, the given audio object to generate a first plurality of signals;wherein the plurality of binaural renderers is configured to render the first plurality of signals to generate a second plurality of signals;wherein the plurality of beamformers is configured to render the second plurality of signals to generate a third plurality of signals; andwherein the third plurality of signals are combined to generate the plurality of loudspeaker signals.20. 
An apparatus for processing audio, the apparatus comprising:a plurality of loudspeakers;a processor; anda memory,wherein the processor is configured to control the apparatus to receive one or more audio objects, wherein each of the one or more audio objects respectively includes position information;wherein for a given audio object of the one or more audio objects:the processor is configured to control the apparatus to select, based on the position information of the given audio object, at least two renderers of a plurality of renderers, wherein the at least two renderers have at least two categories;the processor is configured to control the apparatus to determine, based on the position information of the given audio object, at least two weights;the processor is configured to control the apparatus to render, based on the position information, the given audio object using the at least two renderers weighted according to the at least two weights, to generate a plurality of rendered signals; andthe processor is configured to control the apparatus to combine the plurality of rendered signals to generate a plurality of loudspeaker signals; andwherein the processor is configured to control the apparatus to output, from the plurality of loudspeakers, the plurality of loudspeaker signals.21. A method of audio processing, the method comprising:receiving one or more audio objects, wherein each of the one or more audio objects respectively includes position information;for a given audio object of the one or more audio objects:rendering, based on the position information, the given audio object using a first category of renderer to generate a first plurality of signals;rendering the first plurality of signals using a second category of renderer to generate a second plurality of signals;rendering the second plurality of signals using a third category of renderer to generate a third plurality of signals; andcombining the third plurality of signals to generate a plurality of loudspeaker signals; andoutputting, from a plurality of loudspeakers, the plurality of loudspeaker signals.22. The method of EEE 21, wherein the first category of renderer corresponds to an amplitude panner, wherein the second category of renderer corresponds to a plurality of binaural renderers, and wherein the third category of renderer corresponds to a plurality of beamformers.23. A non-transitory computer readable medium storing a computer program that, when executed by a processor, controls an apparatus to execute processing including the method of any one of EEEs 1-19, 21 or 22.24. 
An apparatus for processing audio, the apparatus comprising:a plurality of loudspeakers;a processor; anda memory,wherein the processor is configured to control the apparatus to receive one or more audio objects, wherein each of the one or more audio objects respectively includes position information;wherein for a given audio object of the one or more audio objects:the processor is configured to control the apparatus to render, based on the position information, the given audio object using a first category of renderer to generate a first plurality of signals,the processor is configured to control the apparatus to render the first plurality of signals using a second category of renderer to generate a second plurality of signals,the processor is configured to control the apparatus to render the second plurality of signals using a third category of renderer to generate a third plurality of signals, andthe processor is configured to control the apparatus to combine the third plurality of signals to generate a plurality of loudspeaker signals; andwherein the processor is configured to control the apparatus to output, from the plurality of loudspeakers, the plurality of loudspeaker signals.

REFERENCES

U.S. Application Pub. No. 2016/0300577.
U.S. Application Pub. No. 2017/0048640.
International Application Pub. No. WO 2017/087564 A1.
U.S. Application Pub. No. 2015/0245157.
H. Wittek, F. Rumsey, and G. Theile, "Perceptual Enhancement of Wavefield Synthesis by Stereophonic Means," Journal of the Audio Engineering Society, vol. 55, no. 9, pp. 723-751, 2007.
U.S. Pat. No. 7,515,719.
U.S. Application Pub. No. 2015/0350804.
M. N. Montag, "Wave Field Synthesis in Three Dimensions by Multiple Line Arrays," University of Miami, 2011.
R. Ranjan and W. S. Gan, "A hybrid speaker array-headphone system for immersive 3D audio reproduction," Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1836-1840, April 2015.
V. Pulkki, "Virtual sound source positioning using vector base amplitude panning," Journal of the Audio Engineering Society, vol. 45, no. 6, pp. 456-466, 1997.
H. Wierstorf, "Perceptual Assessment of Sound Field Synthesis," Technische Universität Berlin, 2014.
74,023
11943601
In the figures, elements and procedures having the same or similar reference numerals have the same or similar attributes and description, unless explicitly stated otherwise. SUMMARY In a first embodiment, a computer-implemented method includes receiving, from an immersive reality application, a first audio waveform from a first acoustic source to provide to a user of a headset. The computer-implemented method also includes identifying a direction of the first acoustic source relative to the headset based on a location of the first acoustic source; and providing, to a first speaker in a client device, an audio signal including the first audio waveform, wherein the audio signal includes a time delay and an amplitude of the first audio waveform based on the direction of the first acoustic source. In a second embodiment, a headset includes a processor configured to receive, from an immersive reality application, a first audio waveform from a first acoustic source in a first location. The headset also includes a left speaker, configured to provide the first audio waveform to a left ear of a headset user, and a right speaker, configured to provide the first audio waveform to a right ear of the headset user, wherein the processor is configured to adjust a time delay of the first audio waveform between the left speaker and the right speaker, and to modulate an amplitude of the first audio waveform in the left speaker and the right speaker based on the first location of the first acoustic source. In a third embodiment, a computer-implemented method includes generating, in a server, a first audio waveform from a first acoustic source in an immersive reality application installed in the server. The computer-implemented method includes generating an environmental datum that places the first acoustic source within a virtual world, based on the immersive reality application, determining a perceived direction for the first audio waveform, based on the environmental datum, and providing an acoustic signal with the first audio waveform including a delay and an amplitude of the first audio waveform based on the perceived direction, to one or more speakers in a client device that is communicatively coupled with the server. In another embodiment, a system includes a memory storing instructions and a processor to execute the instructions to cause the system to perform a method. The method includes receiving, from an immersive reality application, a first audio waveform from a first acoustic source to provide to a user of a headset, identifying a direction for the first acoustic source relative to the headset based on a location of the first acoustic source, and providing, to a first speaker in a client device, an audio signal including the first audio waveform, wherein the audio signal includes a time delay and an amplitude of the first audio waveform based on the direction for the first acoustic source. In yet another embodiment, a system includes a first means to store instructions, and a second means to execute the instructions to cause the system to perform a method.
The method includes receiving, from an immersive reality application, a first audio waveform from a first acoustic source to provide to a user of a headset, identifying a direction for the first acoustic source relative to the headset based on a location of the first acoustic source, and providing, to a first speaker in a client device, an audio signal including the first audio waveform, wherein the audio signal includes a time delay and an amplitude of the first audio waveform based on the direction for the first acoustic source. DETAILED DESCRIPTION In the following detailed description, numerous specific details are set forth to provide a full understanding of the present disclosure. It will be apparent, however, to one ordinarily skilled in the art, that embodiments of the present disclosure may be practiced without some of these specific details. In other instances, well-known structures and techniques have not been shown in detail so as not to obscure the disclosure. Audio is a primary interaction modality for enhanced reality applications, including virtual reality (VR) and augmented reality (AR). As part of this, spatial audio can play an important role in allowing for hardware/software-based audio filters, giving users access to novel audio experiences, and enabling better, more immersive user content creation. Wearable devices as disclosed herein include multiple acoustic and contact microphones combined with at least two speakers intended to present an audio signal to a user (one for each of the ears of a user or for each of two speakers in a binaural sound system) configured to provide a perceived audio signal for a user in an immersive reality application. The perceived audio signal may include multiple acoustic effects including Doppler frequency shifts and spatial amplitude modulation for moving acoustic sources, and placement of one or more participants in the immersive reality application in different locations of a virtual world. In some embodiments, a server may collect signals from multiple users of smart glasses and combine the signals in a virtual reality for a podcast, using an immersive reality application as disclosed herein. The podcast may include sound effects placing each of the participants in different locations within the virtual reality. Embodiments as disclosed herein include placing one or more moving sound sources in the virtual reality, and adding the corresponding sound effects to a broadcast of the virtual reality. Accordingly, embodiments as disclosed herein provide the ability to send metadata with streams to influence the rendering of an audio stream, the ability for the user to select and move between different filters, and the ability of the device to interpret the 'filter' application and apply user settings to the content. Being able to augment a user's video is highly desirable for immersive reality applications. With the inclusion of spatial audio into significantly more devices, there is a wide range of possible spatial audio 'filters' that can be applied to make the immersive applications highly appealing to users. In some embodiments, a remote server sends, to a recipient device (e.g., a smart glass or other headset device), information about an audio filter to allow the device to provide an acoustic rendition of the user's choice. This could be done via any mix of networked devices and systems, including a mobile device tied to the user and paired with the smart glass.
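A minimal sketch of what such descriptive filter metadata could look like is shown below; the field names and values are invented for illustration and are not defined by the present disclosure.

```python
# Hypothetical filter metadata a server might attach to an audio stream.
# All keys and values are illustrative placeholders.
filter_metadata = {
    "filter": "animated_position",  # e.g., basic position, orbit, dive-bomb
    "position": {"azimuth_deg": 45.0, "elevation_deg": 10.0, "distance_m": 1.5},
    "animation": {"path": "orbit", "speed_mps": 0.5},  # adjustable speed
    "room_acoustics": "virtual_hall",  # real room, virtual environment, or preset
    "voice_effects": {"doppler": True, "reverb_wet_dry": 0.3},
    "user_overrides_allowed": True,  # users may disable effects (accessibility)
}
```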
Possible filters include such concepts as: basic positional information (two people speaking to the right and left of the listener, respectively); animated positions ('make my voice walk around the person's head'), wherein the speed is adjustable; making a user's and/or a caller's voice appear to jump up and down; making the user or caller seem (acoustically) to go far away and then come up very close to the device; and more. These filters can then be applied and allowed to run during either a live audio interaction (VoIP) or offline playback (a podcast, a render of a voicemail recording, and the like) to enhance the immersive experience for the user. Additionally, immersive experiences can be further augmented by the user's settings and personalized data, such as applying their own spatial audio adjustments for HRTFs or device settings. Maintaining these as descriptive metadata also allows users to disable such effects if they are distracting or impede accessibility. In some embodiments, the filters can apply spatial audio such as a room acoustic effect (from the real-life room wherein the user is located, a 'virtual' environment, or another premade set), voice changes (Doppler, reverb wet/dry mix), or additional audio effects. FIG.1illustrates an architecture10including one or more wearable devices100-1and100-2(hereinafter, collectively referred to as "wearable devices100") with a user101, coupled to one another, to a mobile device110, to a remote server130, and to a database152, according to some embodiments. Wearable devices100may include a smart glass or augmented reality headset100-1and a wrist-band (100-2or "watch"), and mobile device110may be a smart phone, all of which may communicate with one another via wireless communications and exchange a first dataset103-1. In some embodiments, mobile device110may belong to user101as well. Dataset103-1may include a recorded video, audio, or some other file or streaming media. The user of wearable devices100is also the owner or is associated with mobile device110. Mobile device110may be communicatively coupled with remote server130and database152via a network150, and transmit/share information, files, and the like with one another (e.g., dataset103-2and dataset103-3). In some embodiments, smart glass100-1may include multiple sensors121such as inertial measurement units (IMUs), gyroscopes, microphones, cameras, and the like, mounted within the frame of AR headset100-1or in the wrist-watch or wrist-band100-2. Other sensors121that can be included in wearable devices100(e.g., smart glasses100-1, wrist-bands100-2, and the like) may be magnetometers, microphones, photodiodes and cameras, touch sensors, and other electromagnetic devices such as capacitive sensors, pressure sensors, and the like. Smart glass100-1may include an acoustic microphone125-1and a contact microphone125-2(hereinafter, collectively referred to as "microphones125"). Acoustic microphone125-1receives acoustic signals propagating through the air, as pressure waves. Contact microphone125-2may be mechanically coupled to the skin and a bone of the user, e.g., in a nose pad or in an arm of smart glass100-1, in contact with the user's temple, and the like. In addition, smart glass100-1or any other wearable device100, or mobile device110may include a memory circuit120storing instructions, and a processor circuit112configured to execute the instructions to cause smart glass100-1to perform, at least partially, some of the steps in methods consistent with the present disclosure.
In some embodiments, smart glass100-1, wrist-watch100-2, wrist-band, or wearable device100, mobile device110, server130, and/or database152may further include a communications module118enabling the device to wirelessly communicate with remote server130via network150. In some embodiments, communications module118can include, for example, radio-frequency hardware (e.g., antennas, filters, analog-to-digital converters, and the like) and software (e.g., signal processing software). Smart glass100-1may thus download multimedia online content (e.g., dataset103-1) from remote server130, to perform at least partially some of the operations in methods as disclosed herein. Network150may include, for example, any one or more of a local area network (LAN), a wide area network (WAN), the Internet, and the like. Further, the network can include, but is not limited to, any one or more of the following network topologies: a bus network, a star network, a ring network, a mesh network, a star-bus network, a tree or hierarchical network, and the like. FIG.2illustrates a smart glass200in an environment20including multiple acoustic sources205-1,205-2, and205-3(hereinafter, collectively referred to as "acoustic sources205") and noise (e.g., background interference)207, according to some embodiments. Smart glass200may belong to a user201and may communicate wirelessly with a mobile device210also with user201. Smart glass200includes a camera222, one or more acoustic microphones225-1, contact microphone225-2(hereinafter, collectively referred to as "microphones225"), an inertial motion unit (IMU) sensor or gyroscope221, and at least one speaker223, mounted on the frame (e.g., nose pads, arms, rim, and the like) of smart glass200. Acoustic sources205may include a person205-1talking to the user of the smart glass, a music band205-2playing in the background, and a moving source205-3(e.g., a car, train, plane, toy, drone, or a moving person). A noise source207may be background noise, environmental noise, and the like (e.g., kitchen noise in a restaurant, the humming of a motor engine or machine). Smart glass200may also include a memory circuit220storing instructions and a processor circuit212configured to execute the instructions to perform one or more operations consistent with methods as disclosed herein. For example, by collecting the acoustic signals from each of acoustic sources205, including noise207, processor circuit212may determine a direction of arrival (DA)215-1,215-2,215-3, and215-4(hereinafter, collectively referred to as "DAs215") for a sound waveform from each of acoustic sources205and noise207, respectively. To do this, processor212may also provide a common clock signal to microphones225, so that the time of arrival at each microphone225of the different waveforms from each acoustic source205and noise207may be registered and stored in memory circuit220. By determining the different times of arrival of a waveform from each source to microphones225, the direction of the source relative to smart glass200may be established (e.g., DAs215). In some embodiments, memory circuit220may include an immersive reality application that provides instructions to processor212to project a virtual feature onto the display in at least one of the eyepieces of smart glass200. Accordingly, at least one or more of acoustic sources205and noise207may be a virtual feature embedded in a display of smart glass200.
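The time-of-arrival approach above can be illustrated with a short sketch. This assumes a far-field (plane-wave) source and solves a small least-squares problem against a reference microphone; the constants and function name are assumptions for illustration, not part of the present disclosure.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, near room temperature (assumed)

def estimate_direction(mic_positions: np.ndarray, arrival_times: np.ndarray) -> np.ndarray:
    """Least-squares direction of arrival from times of arrival at spatially
    distributed microphones sharing a common clock.

    For a plane wave arriving from unit direction u (pointing toward the
    source), t_i - t_0 = -((p_i - p_0) . u) / c. Stacking the differences
    against microphone 0 gives an overdetermined linear system in u."""
    p0, t0 = mic_positions[0], arrival_times[0]
    A = -(mic_positions[1:] - p0) / SPEED_OF_SOUND
    b = arrival_times[1:] - t0
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u / np.linalg.norm(u)

# Example: five microphones on a glasses frame (coordinates in meters).
mics = np.array([[0.00, 0.00, 0.0], [0.07, 0.00, 0.0], [0.14, 0.00, 0.0],
                 [0.02, 0.03, 0.0], [0.12, 0.03, 0.0]])
true_u = np.array([0.6, 0.8, 0.0])                      # source direction
times = -(mics @ true_u) / SPEED_OF_SOUND + 1.0         # synthetic arrivals
print(estimate_direction(mics, times))                  # ~ [0.6, 0.8, 0.0]
```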
Moreover, in some embodiments, at least one or more of acoustic sources205and noise207may be a virtual feature which, while not displayed in one of the eyepieces of smart glass200, may still provide acoustic signals to the user via the one or two speakers223in smart glass200, positioned near each of the user's ears. In addition to determining DAs from different acoustic sources205and noise207, processor circuit212may use signals from IMU sensor221to determine position, location, and orientation of smart glass200relative to the real world (e.g., as defined by gravity). Accordingly, by integrating signals from IMU sensor221or communicating with a geolocation system, processor circuit212may identify a location for smart glass200, and a position and location of each of acoustic sources205and noise207. Using this information, processor circuit212may further be able to provide and update a virtual DA215for a virtual acoustic source, as user201moves along and rotates the head (and consequently, the smart glasses as well). Processor212then provides, to speakers223on each of the ears of user201, an appropriate delay and relative intensity consistent with the virtual or updated direction of arrival of the waveform from the virtual source (e.g., stereo sound). Using this information, processor circuit212may provide audio beam steering (to provide a virtual sound/noise source in a virtual DA215to speakers223), tracking (to select/enhance/suppress a specific acoustic source205at microphones225), and other audio effects for immersive reality applications. FIG.3illustrates a menu322for a user to choose tracking of a selected acoustic source305-1,305-2, and305-3(hereinafter, collectively referred to as "acoustic sources305") in an immersive reality application350, according to some embodiments. In some embodiments, while the user has the different acoustic sources305on display in an eyepiece of a smart glass as disclosed herein, immersive reality application350may be running in the smart glass or in a mobile device paired with the smart glass (e.g., mobile devices110and210, and smart glass100or200). Immersive reality application350may provide, in real time, a menu322for the user to select, from multiple acoustic sources305, at least one that the user desires to track and pay attention to. When the user selects one or more of acoustic sources305, the system may be configured to enhance the signal of the microphones in the smart glass (e.g., microphones125and225) based on the time delays and sound wave amplitudes associated with the corresponding DA (e.g., DAs215). This may be done in real time. In addition, the processor in the smart glass may apply noise cancelation procedures for DAs associated with other acoustic sources305or noise (e.g., processors112or212, and noise207). Moreover, the processor in the smart glass may apply speech transcription and/or enhancement of the selected acoustic source305. Moreover, in some embodiments, immersive reality application350may detect and identify the noise source (e.g., kitchen noise or a noisy table at a restaurant, a motor engine, or a buzzing or humming sound coming from an engine, person, animal, or other environmental feature) and provide, in menu322, an option340for the user to cancel or suppress the noise source. FIG.4illustrates a selection of a direction of arrival415of an audio source405from multiple microphones425-1,425-2,425-3,425-4, and425-5(hereinafter, collectively referred to as "microphones425") on a smart glass400, according to some embodiments.
Accordingly, DA415may be selected based on the difference in time of arrival of a sound waveform to each of spatially distributed microphones425on smart glass400. In some embodiments, it may suffice to know the difference in time of arrival to assess DA415as a unit vector having two direction cosines. In some embodiments, the system may be able to determine the specific location of acoustic source405relative to smart glass400and even relative to geocoordinates. In some embodiments, the assessment of DA415and location of acoustic source405may include resolving a linear regression problem associating times of arrival of sound signals to each of microphones425based on DA415and the speed of sound. To determine the time of arrival, the system may be configured to select a characteristic portion of the waveform generated by acoustic source405that may be easily identifiable using digital filters at each microphone425. In some embodiments, and to enhance accuracy, the entire waveform or a substantial portion of it may be used to match the origin of acoustic source405. Other filtering techniques using hardware or software may be implemented to identify distinct acoustic sources405involved in any given event. In some embodiments, the software may include non-linear techniques such as non-linear regression, neural networks, machine learning, and artificial intelligence. Accordingly, in some embodiments, the system may include geolocation sensors and devices (e.g., IMU sensors121or221) to better identify location and distances in the user environment at the time of the event recording. Speakers423-1and423-2may be associated with each of the left and right ears of the user. Accordingly, the processor may provide a delay between speaker423-1and speaker423-2to an acoustic waveform from acoustic source405, to provide to the user the impression that the source is located along DA415(e.g., stereo sound). In addition to a time delay, the processor may also adjust a frequency of the audio waveform according to a Doppler shift in the signal, based on the direction of motion and speed of the acoustic source relative to the smart glass. In some embodiments, the Doppler adjustment may be provided to a virtual acoustic source moving at a virtual speed, in a virtual direction relative to the smart glass. FIGS.5A-5Dillustrate different environments50A,50B,50C, and50D (hereinafter, collectively referred to as "environments50") of spatial audio filter effects for immersive reality applications550A,550B,550C, and550D (hereinafter, collectively referred to as "IR applications550") running in smart glasses500-1and500-2(hereinafter, collectively referred to as "smart glasses500"), according to some embodiments. Network150, data set503-1, and mobile devices510-1and510-2(hereinafter, collectively referred to as "mobile devices510") are consistent with the above descriptions of the same, throughout this disclosure. Users501-1,501-2,501D-1and501D-2(hereinafter, collectively referred to as "users501") are associated with a mobile device510and with a smart glass500, a caller502, and a sound source505. IR applications550may include stereo sound provided via speakers523in smart glasses500, wherein users501acquire a "persona" or avatar511-1and511-2(hereinafter, collectively referred to as "avatars511"), and caller502acquires a persona or avatar512A,512B,512C-1, or512C-2(hereinafter, collectively referred to as "avatars512").
Avatars511and512may be "audio avatars." In some embodiments, smart glasses500may include a display in at least one of the eyepieces, such that avatars511and512provided by IR applications550are image representations of users501and caller502. These image representations may be a face, a cartoon, a drawing, or a virtual reality, three-dimensional rendition of a face, head, or the full body including gestures. In that regard, the image representations of avatars511and512may be displayed in a virtual reality representation on the display. An IMU sensor521may provide data to locate and position users501in IR applications550. FIG.5Aillustrates smart glass user501-1and a caller502in a live voice over internet protocol (VoIP) chat, through network150. In some embodiments, caller502may also be using a smart glass (e.g., or an AR/VR headset) to communicate with user501-1. User501may activate a filter in IR application550A that makes smart glass500-1render audio to sound like caller502is on the user's shoulder (e.g., avatar512A on the shoulder of avatar511). In a similar embodiment, user501may select a filter in IR application550A so that speaker523renders a sound such that avatar512A appears as a magical pixie of caller502, floating around, including a small twinkle or chirp added to the audio waveform, to sound like the voice is coming from different places, or like caller502is jumping around user501(e.g., avatar512A jumping around avatar511). In some embodiments, the audio filters are world-locked so that the spatial audio uses three-degrees-of-freedom tracking521in smart glass500-1to make it sound like the caller's voice is coming from a selected virtual-world direction as user501-1moves or changes his/her head pose. FIG.5Billustrates an audio filter in IR application550B, wherein smart glass500-1includes an acoustic rendition where caller502sounds like avatar512B is dive bombing avatar511by coming from above, getting close, and then getting farther away, relative to avatar511for user501-1. In some embodiments, these sound effects may be applied to a moving caller502(e.g., whether the caller is moving in the real world, or in a virtual world). FIG.5Cillustrates an audio filter in IR application550C, wherein smart glass500-1includes an acoustic rendition in which user501-1is broadcasting a podcast that includes caller502. User501-1(or caller502) may start telling a story with two different characters (e.g., character512C-1and character512C-2, hereinafter, collectively referred to as "characters512C") talking. Characters512C can toggle so that the voice from character512C-1sounds from the right side of user avatar511while character512C-2sounds from the left side of user avatar511. FIG.5Dillustrates users501D-1and501D-2(hereinafter, collectively referred to as "users501D") listening to an acoustic source505(e.g., a music band) in surround sound or spatial audio format, while wearing smart glasses500-1and500-2(hereinafter, collectively referred to as "smart glasses500") as disclosed herein. User501D-2can toggle her smart glasses500-2such that they are now acoustically positioned at the same place where the vocals from the music band originate. This creates the immersive effect of acoustically rendering users501D 'within the band'. FIG.6illustrates spatial audio filtering effects in smart glasses600-1,600-2, and600-3(hereinafter, collectively referred to as "smart glasses600") for a podcast application650, according to some embodiments.
Accordingly, users601-1,601-2, and601-3(hereinafter, collectively referred to as “users601”) are broadcasting a podcast650to an audience602. Each of users601may access podcast application650via a mobile device610-1,610-2, and610-3(hereinafter, collectively referred to as “mobile devices610”). Each of users601is perceptually heard from a different spatial position (say, left, right, and center, around a table, in a restaurant, and the like), making their voices easier to discern and tell apart, and making podcast650feel more ‘real’ rather than everyone sounding monaural and ‘inside my head’ (from the perspective of anyone in audience602). In some embodiments, a remote server630providing podcast650to audience602may further provide virtual images showing avatars611-1,611-2, and611-3(hereinafter, collectively referred to as “avatars611”) for users601, respectively, in a virtual scene, wherein each of their voices is appropriately placed in a binaural sound system (e.g., assuming that one or more of the podcast audience is wearing a smart glass, or is watching the podcast from a display that includes a binaural stereo sound system) according to their disposition in the virtual scene. Network150and mobile devices610are consistent with the above descriptions of the same, throughout this disclosure. FIG.7is a flowchart illustrating steps in a method700for audio beam steering, tracking, and audio effects for an immersive reality application, according to some embodiments. In some embodiments, at least one or more of the steps in method700may be performed by a processor executing instructions stored in a memory in either one of a smart glass or other wearable device on a user's body part (e.g., head, arm, wrist, leg, ankle, finger, toe, knee, shoulder, chest, back, and the like). In some embodiments, at least one or more of the steps in method700may be performed by a processor executing instructions stored in a memory, wherein either the processor or the memory, or both, are part of a mobile device for the user, a remote server, or a database, communicatively coupled with each other via a network (e.g., processor112, memory120, mobile device110, server130, and network150). Moreover, the mobile device, the smart glass, and the wearable devices may be communicatively coupled with each other via a wireless communication system and protocol (e.g., communications module118including a radio, Wi-Fi, Bluetooth, near-field communication—NFC—and the like). In some embodiments, a method consistent with the present disclosure may include one or more steps from method700performed in any order, simultaneously, quasi-simultaneously, or overlapping in time. Step702includes receiving, from an immersive reality application, a first audio waveform from a first acoustic source to provide to a user of a headset. In some embodiments, step702includes receiving the location of the first acoustic source from the immersive reality application. In some embodiments, step702further includes receiving multiple audio waveforms from multiple acoustic sources, and providing the audio signal to a first speaker in a client device includes inserting each of the audio waveforms with a time delay and an amplitude based on multiple perceived directions associated with a location for each of the acoustic sources provided by the immersive reality application.
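The multi-waveform insertion just described (a time delay and an amplitude per perceived direction) may be sketched as a simple stereo mixer. The tuple layout, sample rate, and whole-sample delays below are illustrative assumptions rather than the disclosed implementation.

import numpy as np

SAMPLE_RATE = 48_000  # Hz; an assumed output rate

def mix_stereo(sources, sample_rate=SAMPLE_RATE):
    """Mix mono waveforms into stereo with per-source delay and amplitude.

    Each source is a tuple (waveform, itd_s, left_gain, right_gain); a
    positive itd_s delays the right channel relative to the left, shifting
    the perceived direction toward the left ear. Delays are rounded to
    whole samples for simplicity.
    """
    max_delay = max(int(round(abs(itd) * sample_rate)) for _, itd, _, _ in sources)
    length = max(len(w) for w, _, _, _ in sources) + max_delay
    out = np.zeros((length, 2))
    for wave, itd, left_gain, right_gain in sources:
        wave = np.asarray(wave, dtype=float)
        delay = int(round(itd * sample_rate))
        left_off, right_off = (0, delay) if delay >= 0 else (-delay, 0)
        out[left_off:left_off + len(wave), 0] += left_gain * wave
        out[right_off:right_off + len(wave), 1] += right_gain * wave
    return out

A production renderer would typically use fractional delays and head-related transfer functions instead of whole-sample delays and plain gains.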
In some embodiments, the immersive reality application is a podcast and the first acoustic source includes a first participant of the podcast, and step702includes providing a second audio signal from the user of the headset as a second participant of the podcast in a second perceived direction indicative of a relative position between the first participant and the second participant, within the podcast. Step704includes identifying a perceived direction of the first acoustic source relative to the headset based on a location of the first acoustic source. The perceived direction may be the incident direction to the headset user in a real or virtual space, which is used to apply spatial audio to create a stereo sound for the headset user. In some embodiments, step704further includes receiving, from the immersive reality application, a second audio waveform from a second acoustic source, identifying a second perceived direction for the second acoustic source based on a location of the second acoustic source, and inserting the second audio waveform in the audio signal with a time delay and an amplitude based on the second perceived direction. In some embodiments, the first acoustic source is a caller in communication with the user of the headset, and step704includes placing the caller in a selected virtual position relative to the user of the headset. In some embodiments, the immersive reality application includes a capture of an event attended by the user of the headset, the first acoustic source is a second user of a second headset, the first audio waveform is provided by the second headset, and step704includes placing the second user in a selected location within the event. Step706includes providing, to a first speaker in a client device, an audio signal including the first audio waveform, wherein the audio signal includes a time delay and an amplitude of the first audio waveform based on the perceived direction. In some embodiments, the client device is the headset, and step706includes providing the audio signal to a second speaker in the headset, based on the perceived direction. In some embodiments, the client device is a perceived system including at least two speakers and communicatively coupled to the headset via a network, and step706includes providing the audio signal to a second speaker in the perceived system, based on the perceived direction. In some embodiments, the headset includes a display in at least one eyepiece, and step706includes providing an image on the display of the first acoustic source for the user of the headset that is consistent with the location of the first acoustic source. FIG.8is a flowchart illustrating steps in a method800for audio beam steering, tracking, and audio effects for an immersive reality application, according to some embodiments. In some embodiments, at least one or more of the steps in method800may be performed by a processor executing instructions stored in a memory in either one of a smart glass or other wearable device on a user's body part (e.g., head, arm, wrist, leg, ankle, finger, toe, knee, shoulder, chest, back, and the like). 
In some embodiments, at least one or more of the steps in method800may be performed by a processor executing instructions stored in a memory, wherein either the processor or the memory, or both, are part of a mobile device for the user, a remote server, or a database, communicatively coupled with each other via a network (e.g., processors112and212, memories120and220, mobile devices110,210, server130, and network150). Moreover, the mobile device, the smart glass, and the wearable devices may be communicatively coupled with each other via a wireless communication system and protocol (e.g., communications module118including a radio, Wi-Fi, Bluetooth, near-field communication—NFC—and the like). In some embodiments, a method consistent with the present disclosure may include one or more steps from method800performed in any order, simultaneously, quasi-simultaneously, or overlapping in time. Step802includes generating, in a server, a first audio waveform from a first acoustic source in an immersive reality application installed in the server. In some embodiments, step802includes receiving the first audio waveform from a headset that is communicatively coupled with the server, wherein a user of the headset is a participant of the immersive reality application. In some embodiments, step802includes generating a second audio waveform from a second acoustic source in the immersive reality application installed in the server. Step804includes generating an environmental datum that places the first acoustic source within a virtual world, based on the immersive reality application. Step806includes determining a perceived direction for the first audio waveform, based on the environmental datum. In some embodiments, step806includes determining a perceived direction for the second audio waveform, based on a second environmental datum that places the second acoustic source within the virtual world. Step808includes providing an acoustic signal with the first audio waveform including a delay and an amplitude of the first audio waveform based on the perceived direction, to one or more speakers in a client device that is communicatively coupled with the server. In some embodiments, the first acoustic source is a moving source, and step808includes adjusting a frequency of the first audio waveform based on a Doppler effect induced by the moving source. In some embodiments, step808includes providing, to a display of the client device, an image of the first acoustic source. In some embodiments, step808includes inserting the second audio waveform in the acoustic signal with a delay and an amplitude based on the perceived direction for the second audio waveform. Hardware Overview FIG.9is a block diagram illustrating a computer system for implementing a headset and methods for use thereof, according to some embodiments. In certain aspects, computer system900may be implemented using hardware or a combination of software and hardware, either in a dedicated server, or integrated into another entity, or distributed across multiple entities. Computer system900may include a desktop computer, a laptop computer, a tablet, a phablet, a smartphone, a feature phone, a server computer, or otherwise. A server computer may be located remotely in a data center or be stored locally. Computer system900includes a bus908or other communication mechanism for communicating information, and a processor902(e.g., processor112) coupled with bus908for processing information.
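Referring back to the Doppler adjustment of step808, one simple stationary-listener approximation is shown below; the constant, function name, and sign convention are assumptions for illustration.

SPEED_OF_SOUND = 343.0  # m/s in air (assumed nominal value)

def doppler_shifted_frequency(source_freq_hz, radial_speed_mps):
    """Frequency perceived from a moving virtual source, listener at rest.

    radial_speed_mps > 0 means the source approaches the listener (pitch
    rises); < 0 means it recedes (pitch falls): f' = f * c / (c - v_r).
    """
    return source_freq_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - radial_speed_mps)

In a block-based audio pipeline, the same ratio would typically be applied by resampling the waveform rather than by shifting a single frequency.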
By way of example, the computer system900may be implemented with one or more processors902. Processor902may be a general-purpose microprocessor, a microcontroller, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a state machine, gated logic, discrete hardware components, or any other suitable entity that can perform calculations or other manipulations of information. Computer system900can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them stored in an included memory904(e.g., memory120), such as a Random Access Memory (RAM), a flash memory, a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable PROM (EPROM), registers, a hard disk, a removable disk, a CD-ROM, a DVD, or any other suitable storage device, coupled with bus908for storing information and instructions to be executed by processor902. The processor902and the memory904can be supplemented by, or incorporated in, special purpose logic circuitry. The instructions may be stored in the memory904and implemented in one or more computer program products, e.g., one or more modules of computer program instructions encoded on a computer-readable medium for execution by, or to control the operation of, the computer system900, and according to any method well known to those of skill in the art, including, but not limited to, computer languages such as data-oriented languages (e.g., SQL, dBase), system languages (e.g., C, Objective-C, C++, Assembly), architectural languages (e.g., Java, .NET), and application languages (e.g., PHP, Ruby, Perl, Python). Instructions may also be implemented in computer languages such as array languages, aspect-oriented languages, assembly languages, authoring languages, command line interface languages, compiled languages, concurrent languages, curly-bracket languages, dataflow languages, data-structured languages, declarative languages, esoteric languages, extension languages, fourth-generation languages, functional languages, interactive mode languages, interpreted languages, iterative languages, list-based languages, little languages, logic-based languages, machine languages, macro languages, metaprogramming languages, multiparadigm languages, numerical analysis languages, non-English-based languages, object-oriented class-based languages, object-oriented prototype-based languages, off-side rule languages, procedural languages, reflective languages, rule-based languages, scripting languages, stack-based languages, synchronous languages, syntax handling languages, visual languages, Wirth languages, and XML-based languages. Memory904may also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor902. A computer program as discussed herein does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. Computer system900further includes a data storage device906such as a magnetic disk or optical disk, coupled with bus908for storing information and instructions. Computer system900may be coupled via input/output module910to various devices. Input/output module910can be any input/output module. Exemplary input/output modules910include data ports such as USB ports. The input/output module910is configured to connect to a communications module912. Exemplary communications modules912include networking interface cards, such as Ethernet cards and modems. In certain aspects, input/output module910is configured to connect to a plurality of devices, such as an input device914and/or an output device916. Exemplary input devices914include a keyboard and a pointing device, e.g., a mouse or a trackball, by which a consumer can provide input to the computer system900. Other kinds of input devices914can be used to provide for interaction with a consumer as well, such as a tactile input device, visual input device, audio input device, or brain-computer interface device. For example, feedback provided to the consumer can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the consumer can be received in any form, including acoustic, speech, tactile, or brain wave input. Exemplary output devices916include display devices, such as an LCD (liquid crystal display) monitor, for displaying information to the consumer. According to one aspect of the present disclosure, smart glass100-1can be implemented, at least partially, using a computer system900in response to processor902executing one or more sequences of one or more instructions contained in memory904. Such instructions may be read into memory904from another machine-readable medium, such as data storage device906. Execution of the sequences of instructions contained in main memory904causes processor902to perform the process steps described herein. One or more processors in a multi-processing arrangement may also be employed to execute the sequences of instructions contained in memory904. In alternative aspects, hard-wired circuitry may be used in place of or in combination with software instructions to implement various aspects of the present disclosure. Thus, aspects of the present disclosure are not limited to any specific combination of hardware circuitry and software. Various aspects of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical consumer interface or a Web browser through which a consumer can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. 
The communication network (e.g., network150) can include, for example, any one or more of a LAN, a WAN, the Internet, and the like. Further, the communication network can include, but is not limited to, for example, any one or more of the following network topologies, including a bus network, a star network, a ring network, a mesh network, a star-bus network, tree or hierarchical network, or the like. The communications modules can be, for example, modems or Ethernet cards. Computer system900can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. Computer system900can be, for example, and without limitation, a desktop computer, laptop computer, or tablet computer. Computer system900can also be embedded in another device, for example, and without limitation, a mobile telephone, a PDA, a mobile audio player, a Global Positioning System (GPS) receiver, a video game console, and/or a television set top box. The term “machine-readable storage medium” or “computer-readable medium” as used herein refers to any medium or media that participates in providing instructions to processor902for execution. Such a medium may take many forms, including, but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as data storage device906. Volatile media include dynamic memory, such as memory904. Transmission media include coaxial cables, copper wire, and fiber optics, including the wires forming bus908. Common forms of machine-readable media include, for example, floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH EPROM, any other memory chip or cartridge, or any other medium from which a computer can read. The machine-readable storage medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter affecting a machine-readable propagated signal, or a combination of one or more of them. In one aspect, a method may be an operation, an instruction, or a function and vice versa. In one aspect, a claim may be amended to include some or all of the words (e.g., instructions, operations, functions, or components) recited in other one or more claims, one or more words, one or more sentences, one or more phrases, one or more paragraphs, and/or one or more claims. To illustrate the interchangeability of hardware and software, items such as the various illustrative blocks, modules, components, methods, operations, instructions, and algorithms have been described generally in terms of their functionality. Whether such functionality is implemented as hardware, software, or a combination of hardware and software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. As used herein, the phrase “at least one of” preceding a series of items, with the terms “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (e.g., each item). 
The phrase “at least one of” does not require selection of at least one item; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and alike are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases. A reference to an element in the singular is not intended to mean “one and only one” unless specifically stated, but rather “one or more.” Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. The term “some” refers to one or more. Underlined and/or italicized headings and subheadings are used for convenience only, do not limit the subject technology, and are not referred to in connection with the interpretation of the description of the subject technology. Relational terms such as first and second and the like may be used to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. All structural and functional equivalents to the elements of the various configurations described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and intended to be encompassed by the subject technology. Moreover, nothing disclosed herein is intended to be dedicated to the public, regardless of whether such disclosure is explicitly recited in the above description. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” While this specification contains many specifics, these should not be construed as limitations on the scope of what may be described, but rather as descriptions of particular implementations of the subject matter. 
Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially described as such, one or more features from a described combination can in some cases be excised from the combination, and the described combination may be directed to a subcombination or variation of a subcombination. The subject matter of this specification has been described in terms of particular aspects, but other aspects can be implemented and are within the scope of the following claims. For example, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. The actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the aspects described above should not be understood as requiring such separation in all aspects, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. The title, background, brief description of the drawings, abstract, and drawings are hereby incorporated into the disclosure and are provided as illustrative examples of the disclosure, not as restrictive descriptions. It is submitted with the understanding that they will not be used to limit the scope or meaning of the claims. In addition, in the detailed description, it can be seen that the description provides illustrative examples and the various features are grouped together in various implementations for the purpose of streamlining the disclosure. The method of disclosure is not to be interpreted as reflecting an intention that the described subject matter requires more features than are expressly recited in each claim. Rather, as the claims reflect, inventive subject matter lies in less than all features of a single disclosed configuration or operation. The claims are hereby incorporated into the detailed description, with each claim standing on its own as a separately described subject matter. The claims are not intended to be limited to the aspects described herein, but are to be accorded the full scope consistent with the language claims and to encompass all legal equivalents. Notwithstanding, none of the claims are intended to embrace subject matter that fails to satisfy the requirements of the applicable patent law, nor should they be interpreted in such a way.
51,297
11943602
DETAILED DESCRIPTION Before turning to the figures, which illustrate certain embodiments in detail, it should be understood that the present disclosure is not limited to the details or methodology set forth in the description or illustrated in the figures. It should also be understood that the terminology used herein is for the purpose of description only and should not be regarded as limiting. Various systems that generate and output audio for a user, including but not limited to head worn displays (HWDs) or head mounted displays (HMDs) that provide audio as part of an augmented reality (AR) or virtual reality (VR) system, may provide the audio in a manner in which the audio is expected to be perceived by the user at particular locations. For example, the system may provide image content that includes an object to be displayed to the user, at a particular angle relative to an angle of the head of the user, such that the user is expected to perceive the audio to be coming from the object. Such systems may, for example, provide world-locked spatial audio. Various devices may provide output that includes audio or haptic output to a user. For example, wearable devices, including HMDs, may provide such output via direct or indirect contact with skin, cartilage, or bone of the user (or using a combination of some of these). The devices may be associated with or form part of an augmented reality (AR) or virtual reality (VR) system. For example, content delivered by such systems may include image, video, audio, or haptic output, or any combination thereof, any of which may be presented in one or more channels. AR and VR systems can use an HMD (which may also be referred to as a head-worn display (HWD)) to present images to a user to represent an augmented or virtual environment (e.g., simulated environment). The HMD can present the images so that the images are perceived with realistic depth. For example, the HMD can be used to present images that can be viewed stereoscopically, such as by sequentially or simultaneously presenting left eye images and right eye images, enabling a user to perceive a 3D environment. An AR system can present images using an at least partially transparent display, enabling the presented images to be perceived together with a real-world environment. A VR system can generate the images based on operating an application that generates display data regarding the virtual environment, and updates the display data responsive to interactions of the user with the virtual environment. The AR or VR system can include the HMD (e.g., headset), which can be worn by the user to present the display data to the user, as well as one or more hand devices, such as hand-held controllers, that can be manipulated by the user as the user interacts with the virtual environment. The AR or VR system can use any of a variety of audio and haptic output devices, such as transducers, speakers, and other movable members, to provide audio or haptic output together with or independently from images. As the system tracks the head of the user, generates an audio signal (e.g., performs a room acoustic computation; generates a binaural audio signal), and causes audio (e.g., sounds) to be outputted based on the audio signal, there may be latency. For example, there may be latency between when and where the audio is outputted and when and where the audio is expected to be outputted relative to other information being provided to the user, such as image data being provided to the user.
The result can be an auditory environment that does not appear stable as the head turns. Latency can occur when head tracking, room acoustics computation, and any other audio processing, such as head related transfer function (HRTF) filtering, are all performed in a single device, and may be even longer if any information is to be transmitted back and forth between more than one device. For example, because the processing operations involved in rendering realistic room acoustics and performing HRTF convolution can have high demands on processing hardware, at least some of these operations may be performed by hardware remote from other hardware (e.g., remote from the HWD), which can result in additional latency in the transmission of head position data from the HWD and the packaging and transmission of the audio signal back to the HWD. For example, when the spatial update of a sound source location is delayed by more than 60 ms in VR (e.g., a motion-to-audio latency), or more than 30 ms in AR, the sound image can be perceived to drag with the head and then reset in its location. With respect to communication between multiple devices involved in processing and outputting the audio, such communications may be performed using a Bluetooth protocol, which can take upwards of 150 ms to transmit audio, which can be far above the perceptual thresholds noted above. As such, latencies generated in such systems can noticeably affect realism and externalization, which can result in an apparently intracranial percept of a moving sound. Systems and methods in accordance with certain aspects of the present solution can perform operations to compensate for various such sources of latency to cause an improved perception of audio, including head-tracked audio. For example, a system can include a position sensor configured to output position data of a HWD. The system can include one or more processors configured to identify a first head angle of the HWD using the position sensor, generate an audio signal using the first head angle, identify a second head angle of the HWD using the position sensor, determine an angle error based at least on the first head angle and the second head angle, and apply at least one of a time difference or a level difference to the audio signal based at least on the angle error to adjust the audio signal. The system can include an audio output device configured to output the adjusted audio signal. By adjusting the audio signal using the angle error, the system can correct for long spatial update latencies and reduce the perceptual impact of such latencies for the user. The system can receive a measure of head angle from a position sensor, which can be part of the HWD. The system can provide the head angle to processors that generate the binaural audio signal for output by the HWD in accordance with the head angle. The processors can be onboard the HWD or implemented by a remote device, such as a phone in communication with the HWD by Bluetooth. The system can receive the audio signal, and compare the head angle that was used to generate the audio data to a current head angle to determine the angle error. The system may also compare the current head angle to a predicted head angle (e.g., predicted based on a state associated with a point in time at which the audio data is to be outputted) to determine the angle error. 
The system can measure the round trip time between outputting the head angle and receiving the audio signal in order to identify the head angles to use to determine the angle error. The system can tag the audio signal with the time or angle at which it was rendered to facilitate determining the angle error. The system can determine ITDs, ILDs, or both, to apply to the audio signal or portions thereof based on the angle error, which can enable the system to correct a perceived location of the audio signal for the user to compensate for the angle error. The system can apply various heuristics when generating and applying the corrections, such as to account for multiple sources (e.g., as more sources are present, less compensation may be needed), focusing on a prominent or other particular source, or adjusting compensation based on a height of the location of the source (e.g., in addition to azimuthal angle). Referring now toFIG.1, a system100can be used to perform compensation of spatial update latency for audio signals, including head-tracked audio signals provided using binaural audio. While the system100is depicted inFIG.1as performing such operations along with an image processing pipeline, various aspects of the system100may or may not be performed together with image processing operations. The system100can include a plurality of sensors104a . . . n, processing circuitry116, and one or more displays152. The system100can be implemented using the HMD system200described with reference toFIG.2, the headset300described with reference toFIG.3, the audio system400described with reference toFIG.4, the computing environment described with reference toFIG.6, or any combination thereof. The system100can incorporate features of and be used to implement features of AR and VR systems. At least some of the processing circuitry116can be implemented using a graphics processing unit (GPU). The functions of the processing circuitry116can be executed in a distributed manner using a plurality of processing units. The processing circuitry116may include one or more circuits, processors, and/or hardware components. The processing circuitry116may implement any logic, functions or instructions to perform any of the operations described herein. The processing circuitry116can include any type and form of executable instructions executable by any of the circuits, processors or hardware components. The executable instructions may be of any type including applications, programs, services, tasks, scripts, libraries, processes, and/or firmware. Any of the components of the processing circuitry116including but not limited to head angle detector120, audio signal generator124, angle error detector128, audio signal modifier132, simulation generator144, and image renderer148may be any combination or arrangement of hardware, circuitry and executable instructions to perform their respective functions and operations. At least some portions of the processing circuitry116can be used to implement image processing executed by the sensors104. The processing circuitry116and components thereof may be implemented using multiple hardware devices, which can enable certain devices to have relatively lightweight form factors while using other devices to perform more computationally intensive operations.
For example, a portion of the processing circuitry116implemented using hardware of the HWD can provide head angle data (or position or orientation data used to determine head angle) to other portions of processing circuitry116remote from the HWD that generates audio signal data using the head angle data. The portions of the processing circuitry116can communicate using various communication protocols, including but not limited to Bluetooth protocols (which may have latency associated with communicating head angle data and audio signal data). In some implementations, audio signal generator124and simulation generator144are implemented using processing circuitry116of a device remote from the HWD, while audio signal modifier132is implemented using processing circuitry116of the HWD, as the operations described with reference to audio signal modifier132may have relatively low computational intensity, and thus can be performed with relatively lightweight processing electronics without introducing significant processing latencies. The sensors104a . . . ncan be image capture devices or cameras, including video cameras. The sensors104a . . . nmay be cameras that generate images of relatively low quality (e.g., relatively low sharpness, resolution, or dynamic range), which can help reduce the SWAP of the system100. For example, the sensors104a . . . ncan generate images having resolutions on the order of hundreds of pixels by hundreds of pixels. At the same time, the processes executed by the system100as described herein can be used to generate display images for presentation to a user that have desired quality characteristics, including depth characteristics. The sensors104a . . . n(generally referred herein as sensors104) can include any type of one or more cameras. The cameras can be visible light cameras (e.g., color or black and white), infrared cameras, or combinations thereof. The sensors104a . . . ncan each include one or more lenses108a . . . j(generally referred herein as lens108). In some embodiments, the sensor104can include a camera for each lens108. In some embodiments, the sensor104can include a single camera with multiple lenses108a . . . j. In some embodiments, the sensor104can include multiple cameras, each with multiple lenses108. The one or more cameras of the sensor104can be selected or designed to have a predetermined resolution and/or a predetermined field of view. In some embodiments, the one or more cameras are selected and/or designed to have a resolution and field of view for detecting and tracking objects, such as in the field of view of a HMD. The one or more cameras may be used for multiple purposes, such as tracking objects in a scene or an environment captured by the image capture devices and performing the collision detection techniques described herein. The one or more cameras of the sensor104and lens108may be mounted, integrated, incorporated or arranged on an HMD to correspond to a left-eye view of a user or wearer of the HMD and a right-eye view of the user or wearer. For example, an HMD may include a first camera with a first lens mounted forward-facing on the left side of the HMD corresponding to or near the left eye of the wearer and a second camera with a second lens mounted forward-facing on the right side of the HMD corresponding to or near the right eye of the wearer. The left camera and right camera may form a front-facing pair of cameras providing for stereographic image capturing.
In some embodiments, the HMD may have one or more additional cameras, such as a third camera between the first and second cameras and offset towards the top of the HMD, forming a triangular shape between the first, second and third cameras. This third camera may be used for triangulation techniques in performing the depth buffer generation techniques of the present solution, as well as for object tracking. The system100can include a first sensor (e.g., image capture device)104athat includes a first lens108a, the first sensor104aarranged to capture a first image112aof a first view, and a second sensor104bthat includes a second lens108b, the second sensor104barranged to capture a second image112bof a second view. The first view and the second view may correspond to different perspectives, enabling depth information to be extracted from the first image112aand second image112b. For example, the first view may correspond to a left eye view, and the second view may correspond to a right eye view. The system100can include a third sensor104cthat includes a third lens108c, the third sensor104carranged to capture a third image112cof a third view. As described with reference toFIG.2, the third view may correspond to a top view that is spaced from an axis between the first lens108aand the second lens108b, which can enable the system100to more effectively handle depth information that may be difficult to address with the first sensor104aand second sensor104b, such as edges (e.g., an edge of a table) that are substantially parallel to the axis between the first lens108aand the second lens108b. Light of an image to be captured by the sensors104a . . . ncan be received through the one or more lenses108a . . . j. The sensors104a . . . ncan include sensor circuitry, including but not limited to charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) circuitry, which can detect the light received via the one or more lenses108a . . . jand generate images112a . . . kbased on the received light. For example, the sensors104a . . . ncan use the sensor circuitry to generate the first image112acorresponding to the first view and the second image112bcorresponding to the second view. The one or more sensors104a . . . ncan provide the images112a . . . kto the processing circuitry116. The one or more sensors104a . . . ncan provide the images112a . . . kwith a corresponding timestamp, which can facilitate synchronization of the images112a . . . kwhen image processing is executed on the images112a . . . k.
For example, the sensors104can include at least one fourth sensor104d(e.g., as illustrated inFIG.2) which can be oriented towards the eyes of the user to detect sensor data regarding the eyes of the user. In some embodiments, the head tracking sensors104generate motion data including at least one of a position, a velocity, or an acceleration of the head (e.g., of the HMD). The sensors104can include an inertial measurement unit (IMU), simultaneous localization and mapping (SLAM) camera system, magnetic tracker (e.g., magnetometer), or any combination thereof to perform the head tracking. The sensors104can include hand tracking sensors104that can provide information such as positions or orientations of one or more hands of the user. The hand tracking sensors104can generate motion data including at least one of a position, a velocity, or an acceleration of a respective hand (e.g., of a hand device224manipulated by the hand as described with reference toFIG.2). The head tracking sensors104and hand tracking sensors104can include any of a variety of position sensors, such as an inertial measurement unit (IMU), an accelerometer, a gyroscope, a magnetometer (e.g., magnetic compass), or any combination thereof. The sensors104can include various body position sensors such as leg sensors or torso sensors. The sensors104can capture images112of an environment around the sensors104. For example, the sensors104can capture images112of an environment in or around a field of view of the user of the HMD. The images112can be representations of the environment, such as color or grayscale array or matrix of pixels representing parameters of light captured from the environment (e.g., color, brightness, intensity). The environment can be an indoor or outdoor environment, including both natural and man-made structures, terrain, or other objects, including sky, clouds, roads, buildings, streets, pedestrians, or cyclists. The environment can include one or more objects (e.g., real-world objects), which can be represented by the images112captured by the sensors. The processing circuitry116can update, maintain, and selectively allow or prevent access to transmission of data associated with a user, including but not limited to head tracking data, eye tracking data, user profile data, or various data associated with models124. For example, the processing circuitry116may use as inputs, personal or biometric information of a user for user-authentication or experience-personalization purposes. A user may opt to make use of these functionalities, such as to enhance their experience using the system100or various devices associated with or in communication with one or more components of the system100. As an example, a user may provide personal or biometric information to the system100. The user's privacy settings may specify that such information may be used only for particular processes, such as authentication, and further specify that such information may not be shared with any third-party system or used for other processes or applications associated with the system100(or another system in communication with the system100, such as a social network). 
The user's privacy setting may specify that data received or detected by the system100, such as images, sensor data, eye tracking data, biometric data, or other data, may be used only for a limited purpose (e.g., authentication, operation of selected component(s) of the system100), and further specify that such data may not be shared with any third-party system or used by other processes or applications associated with the system100or devices in communication with the system100. The user's privacy setting may specify that the system100does not perform operations to detect (or store, or transmit) particular data, such as head tracking data or eye tracking data, unless the system100identifies that the privacy setting indicates permission to detect (or store, or transmit) the data. The processing circuitry116can include a head angle detector120. The head angle detector120can include any function, operation, routine, logic, or instructions to perform functions such as identifying or determining a head angle of the user or a device worn by the user (e.g., HWD) using information from one or more sensors104. WhileFIG.1depicts the head angle detector120as implemented by the processing circuitry116, at least a portion of the head angle detector120can be implemented by the sensors104, such as to provide a head angle directly from the sensors104to the processing circuitry116and components thereof. The head angle detector120can identify a head angle of the HWD. The head angle detector120can identify the head angle using position or orientation data from the sensor104, such as by requesting an orientation angle from the sensor104or periodically receiving the head angle from the sensor104. The head angle detector120can assign a time stamp to the head angle based on a time at which the head angle is identified, which can enable the head angle detector120to provide values associated with changes in head angle between points in time. The head angle detector120can provide the head angle as an azimuth angle in a plane parallel to the ground (e.g., perpendicular to a gravity vector). The head angle detector120can provide the head angle relative to a reference angle, which may be an angle associated with a forward direction of movement, or based on angles provided by the sensor104. The head angle detector120can identify the head angle at various points in time. For example, the head angle detector120can identify the head angle responsive to receiving a request for the angle from various other components of the processing circuitry116, such as audio signal generator124. The head angle detector120can maintain a database (e.g., head angle buffer) of head angles and identifiers assigned to the head angles, such as time stamps, unique identifiers, metadata received from the sensor104with respect to the head angles, or various combinations thereof. The head angle detector120may maintain head angle values for at least as far back as a round trip latency (e.g., if the round trip latency is 100 ms, maintain head angle values for the last 100 ms, or for a threshold or factor applied to the round trip latency, such as 100 ms plus a 50 ms threshold for a total of 150 ms of head angles, or 100 ms times a factor of two for a total of 200 ms of head angles). The head angle detector120can identify the head angle based at least on a latency associated with generating the audio signal (e.g., by audio signal generator124).
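The time-stamped head-angle bookkeeping described above may be sketched as a small ring buffer; the class, method names, and clock choice below are hypothetical, and a real implementation would share a clock with the audio transport.

import time
from collections import deque

class HeadAngleBuffer:
    """Keeps recent (timestamp, angle) samples for latency compensation."""

    def __init__(self, max_age_s=0.2):
        self.max_age_s = max_age_s   # e.g., round trip latency times a safety factor
        self._samples = deque()      # entries of (monotonic time, angle in degrees)

    def push(self, angle_deg):
        now = time.monotonic()
        self._samples.append((now, angle_deg))
        while self._samples and now - self._samples[0][0] > self.max_age_s:
            self._samples.popleft()  # discard samples older than max_age_s

    def angle_at(self, t):
        """Angle whose timestamp is closest to t; assumes at least one sample."""
        return min(self._samples, key=lambda s: abs(s[0] - t))[1]

When an audio signal tagged with its render time arrives, the head angle in effect at rendering can be recovered with angle_at(render time) and compared with the current head angle.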
For example, the head angle detector120can determine the latency, subtract the latency from a current time to determine the previous time at which the audio signal was generated, and retrieve the head angle corresponding to the previous time to identify the head angle. As discussed further herein, by identifying the head angle corresponding to the previous time (at which the audio signal was generated), the processing circuitry116can determine how to compensate for latency effects when causing output of the audio. The head angle detector120can determine the latency using information such as time used to generate the audio signal and time used to transmit information to component(s) that generate the audio signal (e.g., via Bluetooth or other network communications between the HWD and a remote device that implements audio signal generator124). For example, the head angle detector120, angle error detector128, or audio signal generator124can measure a round trip time between providing a head angle to audio signal generator124and receiving the audio signal from audio signal generator124. The head angle detector120can provide the head angle to the audio signal generator124using a data structure that includes a time stamp, a unique identifier, or a combination thereof assigned to the head angle, and the audio signal generator124can output the audio signal using a data structure that includes the time stamp, unique identifier, or combination thereof assigned to the head angle, to facilitate determination of the round trip time when the head angle is received. The processing circuitry116can include an audio signal generator124. The audio signal generator124can include any function, operation, routine, logic, or instructions to perform functions such as generating an audio signal to be outputted by audio output devices136for perception by the user. WhileFIG.1depicts the audio signal generator124as a component separate from simulation generator144, the audio signal generator124can be implemented as part of simulation generator144. The audio signal generator124can generate the audio signal using information received from simulation generator144, such as information indicating a spatial location (e.g., at least one of azimuth angle or elevation angle) at which to provide the audio signal. The audio signal generator124can generate the audio signal to include binaural audio data. The audio signal generator124can generate the audio signal to include amplitude and frequency information assigned to various spatial locations relative to the head of the user. The audio signal generator124can generate the audio signal as a multiplexed audio signal that can include multiple audio streams, such as multiple audio streams each assigned to separate audio channels. The audio signal generator124can generate the audio signal using the head angle identified by the head angle detector120. For example, the audio signal generator124can determine initial audio signal data in a frame of reference of the head of the user (e.g., frame of reference of the HWD), and apply the head angle to the initial audio signal data to map the audio signal to a location at which the audio is to be perceived by the user. For example, the audio signal generator124can identify at least one of an azimuth angle or an elevation angle assigned to the audio signal data (or a portion thereof), and adjust the at least one of the azimuth angle or the elevation angle using the head angle.
For example, if an audio signal is to be perceived from an azimuth angle of ten degrees in a global frame of reference (e.g., to be perceived as coming from ten degrees to the right of a north direction), and the head angle is negative thirty degrees in azimuth (e.g., thirty degrees left of north), the audio signal generator124can cause the audio signal to be outputted to be perceived at negative twenty degrees in azimuth. The processing circuitry116can include an angle error detector128. The angle error detector128can include any function, operation, routine, logic, or instructions to perform operations such as determining errors between angles used to generate the audio signals and angles to which the HWD has moved, enabling the processing circuitry116to compensate for latency in the audio processing pipeline and more accurately output the audio signals. The angle error detector128can sample the head angle detector120(or sample the sensor104directly, where the sensor104implements at least some functionality of the head angle detector120) to retrieve head angles, such as to request head angles using a time stamp or other identifier that corresponds to the head angle to be retrieved. The angle error detector128can determine the angle error by comparing a head angle used to generate the audio signal to a head angle measured at a time at which the audio signal is to be outputted or a head angle predicted to be the angle at which the HWD will be located when the audio signal is outputted. The angle error detector128can determine an angle error using head angles detected by the head angle detector120. The angle error detector128can determine the angle error using previous head angles used to generate the audio signal, predictions of where the HWD is expected to be angled, current head angles, or various combinations thereof to determine the angle error. The angle error detector128can use time stamps assigned to head angles to retrieve the head angles to be compared to one another to determine the angle error. As discussed below, the angle error detector128can determine various types of angle errors (e.g., based on known head angles, predicted angles, or various combinations thereof), and may assign an identifier of the type of angle error to the angle error, enabling other portions of the processing circuitry116, such as audio signal modifier132, to effectively perform operations in response to the angle error (e.g., apply modifications of different magnitude depending on whether the angle error is determined using actual head angles or predicted head angles). For example, the angle error detector128can determine the angle error using a difference between a head angle of a current point in time (e.g., a second head angle) and a previous head angle (e.g., a first head angle) used to generate the audio signal to be outputted at the current point in time. The angle error detector128can determine the angle error by comparing the head angles, such as by subtracting one of the head angles from the other (e.g., subtract the first head angle from the second head angle).
The angle error detector128can use the head angle detector120to identify the first head angle using a time stamp (e.g., first time stamp) assigned to the first head angle, such as by causing the head angle detector120to determine a latency between when the first head angle was provided to the audio signal generator124and when the resulting audio signal is received to be outputted at the current point in time (e.g., subtract the latency from a second time stamp corresponding to the current point in time to identify the first time stamp, and retrieve the head angle corresponding to the first time stamp to use as the first head angle). In some implementations, the head angle detector120predicts what the head angle is expected to be at the current point in time (e.g., a future point in time at which the audio signal generated using the head angle is expected to be outputted), in order to provide the predicted head angle to the angle error detector128, which uses it to determine the head angle error. For example, the head angle detector120can include a head angle model. The head angle model can be any function, filter, algorithm, or machine learning model (e.g., neural network, regression function, classifier) that receives inputs and outputs predicted head angles responsive to the inputs. The head angle model can include a Kalman filter. The head angle model can receive various inputs, including but not limited to one or more previous head angles, information regarding measured movement of the head (e.g., position, velocity, or acceleration information received from sensors104), information regarding expected movements of the head (which may be received or determined based on information from simulation generator144), information regarding latency or other indications of time that will have passed since the audio signal was generated, distributions of expected head angles or movements of the head given a starting head angle (e.g., a histogram or function indicating a likelihood of the head angle being a particular head angle or range of head angles, such as given the starting head angle), or various combinations thereof. As an example, the head angle model can use the first head angle and an angular velocity of the HWD to predict the current head angle. For example, the head angle detector120can determine a predicted head angle (e.g., third head angle) indicating where the HWD is expected to be at the current point in time (which may be a future point in time relative to when the head angle detector120predicts the predicted head angle) using (1) the first head angle that is provided to the audio signal generator124to generate the audio signal for output at the current point in time and (2) a time difference between when the first head angle is measured or provided to the head angle detector120and an expected time at which the audio signal is to be outputted (e.g., between when the first head angle is measured and the current point in time); this time difference can be used by the head angle detector120in determining how much the head angle is expected to change given the time difference. The head angle detector120can provide the first head angle to the head angle model to determine the predicted head angle. The angle error detector128can compare the predicted head angle to the current head angle to determine the angle error (e.g., subtract the predicted head angle from the current head angle to determine the angle error).
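As one hedged illustration of such a head angle model (far simpler than a full Kalman filter), a constant angular velocity extrapolation could look like the sketch below; the function name predict_head_angle and the example numbers are assumptions for illustration only.

    def predict_head_angle(first_angle_deg, angular_velocity_dps, dt_s):
        """Extrapolate where the head is expected to be dt_s seconds
        after first_angle_deg was measured, assuming the measured
        angular velocity (degrees per second) stays roughly constant.
        A Kalman filter would refine this with measurement updates."""
        predicted = first_angle_deg + angular_velocity_dps * dt_s
        return (predicted + 180.0) % 360.0 - 180.0

    # Example: head at -30 degrees, turning right at 100 deg/s, with a
    # 50 ms pipeline latency -> predicted angle of -25 degrees.
    print(predict_head_angle(-30.0, 100.0, 0.050))  # -25.0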
As such, the angle error detector128can determine the angle error by comparing the current head angle to the previous head angle (e.g., comparing second head angle to first head angle), or by comparing the current head angle to the predicted head angle (e.g., comparing second head angle to third head angle, which can be used instead of the comparison of the current head angle and the previous head angle). The processing circuitry116can include an audio signal modifier132. The audio signal modifier132can include any function, operation, routine, logic, or instructions to perform functions such as adjusting how the audio signal is outputted, such as to use the angle error determined by the angle error detector128(e.g., based on comparing the current head angle to the previous head angle or to the predicted head angle) to compensate for the angle error. The audio signal modifier132can adjust the audio signal so that angles at which the audio is perceived by a user more accurately correspond to where a user would expect the audio to be perceived, such as to more closely match spatial locations of objects presented to the user (e.g., via images) that the user would expect to be generating the audio signals. The audio signal modifier132can adjust the audio signal by controlling timing, levels, or various combinations thereof associated with output of the audio signal. The audio signal modifier132can be implemented as part of the audio signal generator124. As depicted inFIG.1, the audio signal modifier132can receive the audio signal from the audio signal generator124, receive the angle error from the angle error detector128, and adjust the audio signal based at least on the angle error to output an adjusted audio signal for output by audio output devices136. The audio signal modifier132can apply various adjustments to various audio channels or streams, such as if the audio signal is a multiplexed audio signal. The audio signal modifier132can apply at least one of a time difference or a level difference to the audio signal, using the angle error, to adjust the audio signal. For example, the audio signal modifier132can modify an interaural time difference (ITD) of the audio signal to apply the time difference to the audio signal. The ITD can be a difference in arrival time of sound corresponding to the audio signal between ears of the user, such as a difference between when the audio signal is provided via one or more left channels and via one or more right channels. The audio signal modifier132can use various mappings, lookup tables, functions, or other operations to determine the ITD using the angle error. For example, the audio signal modifier132can use a magnitude and direction (e.g., positive or negative value of the angle error) to determine the ITD. For example, if the angle error is negative twenty degrees in azimuth (indicating that the HWD is angled twenty degrees to the left in azimuth relative to the head angle that was used to generate the audio signal, such that the audio signal may be perceived to the right of where it should be), the audio signal modifier132can determine the ITD to at least partially decrease the angle error from negative twenty degrees to zero degrees, such as to delay a right channel of the audio signal relative to a left channel (or advance the left channel relative to the right channel). The audio signal modifier132can modify an interaural level difference (ILD) to apply the level difference to the audio signal.
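One classical mapping from angle error to time difference that such a function or lookup table could embody is Woodworth's spherical head approximation; the sketch below is illustrative only, and the head radius, speed of sound, and sample rate values are assumptions rather than parameters of the disclosure.

    import math

    HEAD_RADIUS_M = 0.0875   # assumed average head radius
    SPEED_OF_SOUND = 343.0   # m/s at room temperature

    def itd_seconds(angle_error_deg):
        """Woodworth's approximation: ITD = (a / c) * (theta + sin(theta)),
        reasonable for |theta| <= 90 degrees; the sign of the result
        indicates which ear leads."""
        theta = math.radians(max(-90.0, min(90.0, angle_error_deg)))
        return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

    def itd_samples(angle_error_deg, sample_rate_hz=48000):
        return round(itd_seconds(angle_error_deg) * sample_rate_hz)

    # A -20 degree azimuth error maps to roughly -0.000176 s, i.e. a
    # shift of about eight samples at 48 kHz.
    print(itd_samples(-20.0))  # -8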
The ILD can correspond to a difference in perceived loudness and frequency distribution between ears of the user. The ILD can correspond to a difference in level of the audio signal provided via one or more left channels and via one or more right channels. The audio signal modifier132can use various mappings, lookup tables, functions, or other operations to determine the ILD using the angle error. For example, the audio signal modifier132can use a magnitude and direction (e.g., positive or negative value of the angle error) to determine the ILD. For example, if the angle error is negative twenty degrees in azimuth (indicating that the HWD is angled twenty degrees to the left in azimuth relative to the head angle that was used to generate the audio signal, such that the audio signal may be perceived to the right of where it should be), the audio signal modifier132can determine the ILD to at least partially decrease the angle error from negative twenty degrees to zero degrees, such as to decrease a level of a right channel of the audio signal relative to a left channel (or increase the left channel relative to the right channel). The audio signal modifier132can adjust the audio signal using both ITDs and ILDs. For example, the audio signal modifier132can modify both the ITD and the ILD (or apply changes using determined ITD and ILD values) to adjust the audio signal. The audio signal modifier132can use a single mapping, lookup table, function, or other operation that receives the angle error as an input and outputs both an ITD and an ILD that the audio signal modifier132applies to the audio signal to adjust the audio signal. For example, the audio signal modifier132can provide the angle error as input to a system of two equations that fit ITD and ILD outputs to the angle error (e.g., since time and level differences may not be uniform across azimuth, which may make it useful to cause both ITD and ILD changes) to determine the ITD and ILD changes. The audio signal modifier132can perform the adjustment to the audio signal, using ITDs, using a playback buffer. For example, the audio signal modifier132can maintain a playback buffer of a predetermined duration of audio data (e.g., 500 µs; 700 µs, which can correspond to the maximum possible interaural time difference). The audio signal modifier132can retrieve a portion of audio data from the playback buffer corresponding to the determined ITD and delay (or advance) one or more audio channels using the retrieved portion of the audio data. The audio signal modifier132can perform the adjustment to the audio signal, using ILDs, by applying a scalar value (e.g., scalar multiplier) to one or both of left and right channels. The audio signal modifier132can perform the adjustment to the audio signal, using ILDs, using a biquad filter or zero-pole filter that allows an approximation of the frequency-dependent level differences that result from the acoustic shadow of the head. In some implementations, the audio signal modifier132identifies a frequency characteristic of the audio signal and controls the ITD and ILD using the frequency characteristic. For example, this can enable the audio signal modifier132to selectively combine ITD and ILD modifications at various frequency bands in which ITD or ILD modifications may have a more significant effect on the user's perception of the location of the audio signal.
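A minimal sketch of applying a determined ITD (by shifting one channel against a short playback buffer) and a determined broadband ILD (by a scalar gain) might read as follows; numpy and the helper name apply_itd_ild are assumptions, and a production system would typically use fractional rather than whole-sample delays.

    import numpy as np

    def apply_itd_ild(left, right, itd_samples, ild_db):
        """Delay the lagging channel by |itd_samples| (positive values
        delay the right channel) and tilt levels by ild_db (positive
        values attenuate the right channel relative to the left)."""
        n = abs(int(itd_samples))
        if n:
            pad = np.zeros(n, dtype=left.dtype)
            if itd_samples > 0:   # right ear hears the sound later
                right = np.concatenate([pad, right])[:len(right)]
            else:                 # left ear hears the sound later
                left = np.concatenate([pad, left])[:len(left)]
        gain = 10.0 ** (abs(ild_db) / 20.0)
        if ild_db > 0:
            right = right / gain
        elif ild_db < 0:
            left = left / gain
        return left, right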
The frequency characteristic can include various characteristics, such as mean, standard deviation, or frequencies or frequency bands for which the level or intensity is above a threshold value. For example, the audio signal modifier132can implement one or more of the filters described above using the frequency characteristic as an input to the one or more filters. In some implementations, the audio signal modifier132selectively determines whether to adjust the audio signal. For example, if the angle error is relatively small, the audio signal modifier132may determine to not adjust the audio signal, but rather provide the audio signal as generated by the audio signal generator124to audio output devices136for output without adjustments. This can enable the system100to avoid any computational processing times associated with determining the adjustments to the audio signal, which may be useful if such computational processing times are of a magnitude similar to the latency associated with the angle error. For example, the audio signal modifier132can compare the angle error to a threshold angle error, perform the operations described herein for adjusting the audio signal responsive to the angle error being greater than the threshold angle error, and not perform such operations (e.g., control operation of audio output devices136using the audio signal generated by the audio signal generator124rather than an adjusted audio signal) responsive to the angle error being less than the threshold angle error. The threshold angle error may correspond to an error small enough that a user may not be expected to perceive latency. The threshold angle error may be an angle error representative of a delay of 60 ms in VR or 30 ms in AR operations. The direction in which ITDs or ILDs change may depend on whether the sound source being rendered is located in a front hemifield (e.g., azimuth angle greater than negative ninety degrees and less than positive ninety degrees) or a rear hemifield (e.g., azimuth angle less than negative ninety degrees or greater than positive ninety degrees). The audio signal modifier132may selectively determine whether to adjust the audio signal based on the angle for which the audio is to be perceived, such as if the angle is in the front hemifield or the rear hemifield. For example, the audio signal modifier132can determine to adjust the audio signal responsive to the angle of the audio signal being in the front hemifield (e.g., only perform latency compensation for signals at the front of the listener). The audio signal can include a multiplexed audio signal that includes multiple audio streams. The audio signal may be multiplexed (e.g., by audio signal generator124or simulation generator144) based on spatial locations (e.g., left and right channels), frequency spectra (e.g., low and high frequencies; low, mid, and high frequencies), or any combination thereof. This may enable the system100to perform a more faithful spatial rendering of the sound to be perceived by the user. The audio signal modifier132can perform adjustments to each of the audio streams, such as to perform ITD and ILD corrections to each of the audio streams. The processing circuitry116may compress each of the audio streams (e.g., using downsampling, decreased bitrate, psychoacoustically inspired spectrum multiplexing, or any combination thereof) of the multiplexed audio stream.
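The selective behavior described above (skip small errors, optionally restrict compensation to the front hemifield) could be captured in a small gate, sketched here with assumed names and an assumed 5 degree threshold chosen purely for illustration.

    FRONT_HEMIFIELD = (-90.0, 90.0)

    def should_adjust(angle_error_deg, source_azimuth_deg,
                      threshold_deg=5.0, front_only=True):
        """Skip the adjustment when the error is too small to perceive,
        or (optionally) when the source sits in the rear hemifield."""
        if abs(angle_error_deg) <= threshold_deg:
            return False
        if front_only and not (FRONT_HEMIFIELD[0] < source_azimuth_deg
                               < FRONT_HEMIFIELD[1]):
            return False
        return True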
As such, the system100can preserve a more stable spatial image (e.g., maximize source separation to increase spatial release from masking), even if the audio streams are compressed. For example, the audio signal can be multiplexed such that the audio scene is provided with a front hemisphere and a rear hemisphere (which may each be provided with respective binaural streams, such as using four total audio channels). The audio signal modifier132can modify the respective binaural stream provided to the front hemisphere and to the rear hemisphere, as the modification applied by the audio signal modifier132can be different for the front hemisphere and rear hemisphere (e.g., ITD/ILD changes in a first direction for signals at the front and a second direction opposite the first direction for signals at the rear). The audio signal modifier132can receive the head angle error from the angle error detector128, determine a first modification (e.g., at least one of a first change to ITD or a first change to ILD) for the front hemisphere using the head angle error, modify the audio channels for the front hemisphere using the first modification, determine a second modification (e.g., at least one of a second change to ITD or a second change to ILD) for the rear hemisphere using the head angle error, and modify the audio channels for the rear hemisphere using the second modification. The system100can include one or more audio output devices136that output the audio signal, including receiving the adjusted audio signal from the audio signal modifier132and outputting the adjusted audio signal. The audio output devices136can include various features of the audio system400described with reference toFIG.4, such as transducers, including speakers, that output the audio signal for perception by the user. The audio output devices136can include multiple audio output devices136that may each provide sound corresponding to distinct audio channels, such as left and right channels. By outputting the audio signal that has been adjusted by the audio signal modifier132using the angle error determined by the angle error detector128, the system100can compensate for spatial update latencies that may be introduced by movement of the HWD (e.g., movement of the head of the user) between when a head angle is used to generate the audio signal and when the audio signal is received for output by the audio output devices136. The audio output devices136may include haptic devices that generate haptic feedback (e.g., vibrations) responsive to control signals from audio signal generator124. The processing circuitry116can include a simulation generator144. The simulation generator144can include any function, operation, routine, logic, or instructions to perform functions such as operating an application (such as a game, trainer, or simulator), receiving user input data, updating the operation of the application based on the user input data, providing display data to the image renderer148to enable the image renderer148to render display images for displaying the virtual environment, and providing audio data to the audio signal generator124to enable the audio signal generator124to generate audio signals for output using audio output devices136. The simulation generator144can receive sensor data from the sensors104, such as data regarding movement of the head or hands of the user, process the sensor data or motion data to identify the user input data, and update the operation of the application based on the identified user input data.
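For the four-channel (front and rear binaural pair) multiplexing described above, a hedged sketch of hemifield-specific correction could reuse the apply_itd_ild helper sketched earlier; the function name and channel layout are illustrative assumptions.

    def adjust_multiplexed(front_l, front_r, rear_l, rear_r,
                           itd_front, ild_front, itd_rear, ild_rear):
        """Apply one ITD/ILD modification to the front binaural stream
        and a second (typically opposite-signed) modification to the
        rear stream, since a given head turn shifts front and rear
        sources in opposite perceived directions."""
        front_l, front_r = apply_itd_ild(front_l, front_r,
                                         itd_front, ild_front)
        rear_l, rear_r = apply_itd_ild(rear_l, rear_r,
                                       itd_rear, ild_rear)
        return front_l, front_r, rear_l, rear_r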
For example, the simulation generator144can detect a movement of a hand of the user, such as a swing, push, or pull, and use the movement as a user input for the application. The simulation generator144can generate depth buffer information corresponding to display data, enabling the image renderer148to render 3D image data. The processing circuitry116can include an image renderer148. The image renderer148can be a 3D image renderer. The image renderer148may use image-related input data to process, generate and render display or presentation images to display or present on one or more display devices, such as via an HMD. The image renderer148can generate or create 2D images of a scene or view for display on display152and representing the scene or view in a 3D manner. The image renderer148can generate images for display on display164based on display data received from the simulation generator144(e.g., depth buffers received from the simulation generator144). The display or presentation data to be rendered can include geometric models of 3D objects in the scene or view. The image renderer148may determine, compute or calculate the pixel values of the display or image data to be rendered to provide the desired or predetermined 3D image(s), such as 3D display data for the images112captured by the sensor104. The image renderer148can render frames of display data to one or more displays152based on temporal and/or spatial parameters. The image renderer148can render frames of image data sequentially in time, such as corresponding to times at which images are captured by the sensors104or at which frames of display data are received from simulation generator144. The image renderer148can render frames of display data based on changes in position and/or orientation, such as the position and orientation of the HMD as indicated by sensors104. The image renderer148can render frames of display data based on left-eye view(s) and right-eye view(s) such as displaying a left-eye view followed by a right-eye view or vice-versa. The image renderer148can generate the display images using motion data regarding movement of the sensors104. For example, the sensors104may change in at least one of position or orientation due to movement of a head of the user wearing an HMD that includes the sensors104(e.g., as described with reference to HMD system200ofFIG.2). The processing circuitry116can receive the sensor data from a position sensor (e.g., position sensor220described with reference toFIG.2). Although the image renderer148is shown as part of the processing circuitry116, the image renderer may be formed as part of other processing circuitry of a separate device or component, such as the display device, for example within the HMD. The system100can include one or more displays152. The one or more displays152can be any type and form of electronic visual display. The displays may have or be selected with a predetermined resolution, refresh rate, and size. The one or more displays can be of any type of technology such as LCD, LED, ELED or OLED based displays. The form factor of the one or more displays may be such as to fit within the HMD as glasses or goggles in which the display(s) are the lenses within the frame of the glasses or goggles. The displays152may have a refresh rate the same or different than a rate of refresh or frame rate of the processing circuitry116or the image renderer148, the simulation generator144, or the sensors104.
Referring now toFIG.2, in some implementations, an HMD system200can be used to implement the system100. The HMD system200can include an HMD body202, a left sensor104a(e.g., left image capture device), a right sensor104b(e.g., right image capture device), and the display164. The HMD body202can have various form factors, such as glasses or a headset. The sensors104a,104bcan be mounted to or integrated in the HMD body202. The left sensor104acan capture first images corresponding to a first view (e.g., left eye view), and the right sensor104bcan capture images corresponding to a second view (e.g., right eye view). In some embodiments, the HMD system200does not include image capture devices. The HMD system200can be used to implement VR functionality, such as to present a virtual environment via the display164. The HMD system200can include a top sensor104c(e.g., top image capture device). The top sensor104ccan capture images corresponding to a third view different than the first view or the second view. For example, the top sensor104ccan be positioned between the left sensor104aand right sensor104band above a baseline between the left sensor104aand right sensor104b. This can enable the top sensor104cto capture images with depth information that may not be readily available to be extracted from the images captured by the left and right sensors104a,104b. For example, it may be difficult for depth information to be effectively extracted from images captured by the left and right sensors104a,104bin which edges (e.g., an edge of a table) are parallel to a baseline between the left and right sensors104a,104b. The top sensor104c, being spaced from the baseline, can capture the third image to have a different perspective, and thus enable different depth information to be extracted from the third image, than the left and right sensors104a,104b. The HMD system200can include processing circuitry116, which can perform at least some of the functions described with reference toFIG.1, including receiving sensor data from position sensors104(e.g., head tracking sensors) to detect movement of the HMD and generate warnings regarding potential collisions with obstacles based on the movement of the HMD. The HMD system200can include communications circuitry204. The communications circuitry204can be used to transmit electronic communication signals to and receive electronic communication signals from at least one of a client device208or a server212. The communications circuitry204can include wired or wireless interfaces (e.g., jacks, antennas, transmitters, receivers, transceivers, wire terminals) for conducting data communications with various systems, devices, or networks. For example, the communications circuitry204can include an Ethernet card and port for sending and receiving data via an Ethernet-based communications network. The communications circuitry204can communicate via local area networks (e.g., a building LAN), wide area networks (e.g., the Internet, a cellular network), and/or conduct direct communications (e.g., NFC, Bluetooth). The communications circuitry204can conduct wired and/or wireless communications. For example, the communications circuitry204can include one or more wireless transceivers (e.g., a Wi-Fi transceiver, a Bluetooth transceiver, an NFC transceiver, a cellular transceiver). For example, the communications circuitry204can establish wired or wireless connections with the at least one of the client device208or the server212.
The communications circuitry204can establish a USB connection with the client device208. The HMD system200can be deployed using different architectures. In some embodiments, the HMD (e.g., HMD body202and components attached to the HMD body202) comprises the processing circuitry116and is a self-contained portable unit. In some embodiments, the HMD has portions of the processing circuitry116that work in cooperation with or in conjunction with any type of portable or mobile computing device or companion device that has the processing circuitry or portions thereof, such as in the form of a staging device, a mobile phone or wearable computing device. In some embodiments, the HMD has portions of the processing circuitry116that work in cooperation with or in conjunction with processing circuitry, or portions thereof, of a desktop computing device. In some embodiments, the HMD has portions of the processing circuitry116that work in cooperation with or in conjunction with processing circuitry, or portions thereof, of a server computing device, which may be deployed remotely in a data center or cloud computing environment. In any of the above embodiments, the HMD or any computing device working in conjunction with the HMD may communicate with one or more servers in performing any of the functionality and operations described herein. The client device208can be any type and form of general purpose or special purpose computing device in any form factor, such as a mobile or portable device (phone, tablet, laptop, etc.), or a desktop or personal computing (PC) device. In some embodiments, the client device can be a special purpose device, such as in the form of a staging device, which may have the processing circuitry or portions thereof. The special purpose device may be designed to be carried by the user while wearing the HMD, such as by attaching the client device208to clothing or the body via any type and form of accessory attachment. The client device208may be used to perform any portion of the image and rendering processing pipeline described in connection withFIGS.1and3. The HMD may perform some or other portions of the image and rendering processing pipeline such as generating display images of a virtual environment and rendering the display images to the display164. The HMD can transmit and receive data with the client device208to leverage the client device208's computing power and resources which may have higher specifications than those of the HMD. The server212can be any type and form of computing device that provides applications, functionality or services to one or more client devices208or other devices acting as clients. In some embodiments, the server212can be a client device208. The server212can be deployed in a data center or cloud computing environment accessible via one or more networks. The HMD and/or client device208can use and leverage the computing power and resources of the server212. The HMD and/or client device208can implement any portion of the image and rendering processing pipeline described in connection withFIGS.1and3. The server212can implement any portion of the image and rendering processing pipeline described in connection withFIGS.1and3, and in some cases, any portions of the image and rendering processing pipeline not performed by client device208or HMD. The server212may be used to update the HMD and/or client device208with any updates to the applications, software, executable instructions and/or data on the HMD and/or client device208.
The system200can include a position sensor220. The position sensor220can output at least one of a position or an orientation of the body202. As the image capture devices104a,104b,104ccan be fixed to the body202(e.g., at predetermined locations relative to the position sensor220), the position sensor220can output at least one of a position or an orientation of each sensor104a,104b,104c, which can be used for depth mapping of obstacles detected via the image capture devices104a,104b,104c. The position sensor220can include at least one of an inertial measurement unit (IMU), an accelerometer, a gyroscope, or a magnetometer (e.g., magnetic compass). The system200can include at least one hand device224. The hand device224can be sized and shaped to be held by one or more hands of a user. The hand device224can operate as a user control device; for example, the hand device224can include various user interface elements (e.g., buttons, switches, toggles, etc.) that can be manipulated by a user to generate user inputs. For example, the hand device224can be used as a controller for interacting with a virtual environment being presented via the display164based on operation of an application by the HMD system200. The hand device224can communicate with the communications circuitry204, client device208, and/or server212using various wired or wireless connections. The hand device224can include one or more position sensors228, which can be similar to the position sensor220. For example, the position sensor228can include at least one of an inertial measurement unit (IMU), an accelerometer, a gyroscope, or a magnetometer (e.g., magnetic compass), which can output sensor data including at least one of a position, a velocity, an acceleration, or an orientation of the hand device224in order for processing circuitry116to use the sensor data to detect movement of one or more hands of the user to determine whether to generate warnings regarding potential collisions between the one or more hands of the user and obstacles in a real world environment around the HMD200. Referring now toFIG.3, a headset300can be implemented as an HWD (e.g., as an eyewear device). In some embodiments, the eyewear device is a near eye display (NED). The headset300may be worn on the face of a user such that content (e.g., media content) is presented using a display assembly and/or an audio system. Examples of media content presented by the headset300include one or more images, video, audio, or some combination thereof. The headset300includes a frame, and may include, among other components, a display assembly including one or more display elements320, a depth camera assembly (DCA), an audio system, and a position sensor390. WhileFIG.3illustrates the components of the headset300in example locations on the headset300, the components may be located elsewhere on the headset300, on a peripheral device paired with the headset300, or some combination thereof. The frame310holds the other components of the headset300. The frame310includes a front part that holds the one or more display elements320and end pieces (e.g., temples) to attach to a head of the user. The front part of the frame310bridges the top of a nose of the user. The length of the end pieces may be adjustable (e.g., adjustable temple length) to fit different users. The end pieces may also include a portion that curls behind the ear of the user (e.g., temple tip, ear piece). The one or more display elements320provide light to a user wearing the headset300. 
As illustrated, the headset includes a display element320for each eye of a user. In some embodiments, a display element320generates image light that is provided to an eyebox of the headset300. The eyebox is a location in space that an eye of a user occupies while wearing the headset300. For example, a display element320may be a waveguide display. A waveguide display includes a light source (e.g., a two-dimensional source, one or more line sources, one or more point sources, etc.) and one or more waveguides. Light from the light source is in-coupled into the one or more waveguides, which output the light in a manner such that there is pupil replication in an eyebox of the headset300. In-coupling and/or outcoupling of light from the one or more waveguides may be done using one or more diffraction gratings. In some embodiments, the waveguide display includes a scanning element (e.g., waveguide, mirror, etc.) that scans light from the light source as it is in-coupled into the one or more waveguides. One or both of the display elements320may be opaque and not transmit light from a local area around the headset300. The local area is the area surrounding the headset300. For example, the local area may be a room that a user wearing the headset300is inside, or the user wearing the headset300may be outside and the local area is an outside area. One or both of the display elements320can be at least partially transparent, such that light from the local area may be combined with light from the one or more display elements to produce AR content. The display element320may include an additional optics block (not shown). The optics block may include one or more optical elements (e.g., lens, Fresnel lens, etc.) that direct light from the display element320to the eyebox. The optics block may, e.g., correct for aberrations in some or all of the image content, magnify some or all of the image, or some combination thereof. The DCA determines depth information for a portion of a local area surrounding the headset300. The DCA includes one or more imaging devices330and a DCA controller (not shown inFIG.3), and may also include an illuminator340. In some embodiments, the illuminator340illuminates a portion of the local area with light. The light may be, e.g., structured light (e.g., dot pattern, bars, etc.) in the infrared (IR), IR flash for time-of-flight, etc. In some embodiments, the one or more imaging devices330capture images of the portion of the local area that include the light from the illuminator340. The DCA controller computes depth information for the portion of the local area using the captured images and one or more depth determination techniques. The depth determination technique may be, e.g., direct time-of-flight (ToF) depth sensing, indirect ToF depth sensing, structured light, passive stereo analysis, active stereo analysis (uses texture added to the scene by light from the illuminator340), some other technique to determine depth of a scene, or some combination thereof. The audio system provides audio content. The audio system includes a transducer array, a sensor array, and an audio controller350. Functionality described with reference to the components of the audio system can be distributed among the components in a different manner than is described here. For example, some or all of the functions of the controller may be performed by a remote server. The transducer array presents sound to the user. The transducer array includes a plurality of transducers.
A transducer may be a speaker360or a tissue transducer370(e.g., a bone conduction transducer or a cartilage conduction transducer). Although the speakers360are shown exterior to the frame310, the speakers360may be enclosed in the frame310. In some embodiments, instead of individual speakers for each ear, the headset300includes a speaker array comprising multiple speakers integrated into the frame310to improve directionality of presented audio content. The tissue transducer370couples to the head of the user and directly vibrates tissue (e.g., bone or cartilage) of the user to generate sound. The sensor array detects sounds within the local area of the headset300. The sensor array includes a plurality of acoustic sensors380. An acoustic sensor380captures sounds emitted from one or more sound sources in the local area (e.g., a room). Each acoustic sensor is configured to detect sound and convert the detected sound into an electronic format (analog or digital). The acoustic sensors380may be acoustic wave sensors, microphones, sound transducers, or similar sensors that are suitable for detecting sounds. In some embodiments, one or more acoustic sensors380may be placed in an ear canal of each ear (e.g., acting as binaural microphones). In some embodiments, the acoustic sensors380may be placed on an exterior surface of the headset300, placed on an interior surface of the headset300, separate from the headset300(e.g., part of some other device), or some combination thereof. The number of acoustic detection locations may be increased to increase the amount of audio information collected and the sensitivity and/or accuracy of the information. The acoustic detection locations may be oriented such that the microphone is able to detect sounds in a wide range of directions surrounding the user wearing the headset300. The audio controller350processes information from the sensor array that describes sounds detected by the sensor array. The audio controller350may comprise a processor and a computer-readable storage medium. The audio controller350may be configured to generate direction of arrival (DOA) estimates, generate acoustic transfer functions (e.g., array transfer functions and/or head-related transfer functions), track the location of sound sources, form beams in the direction of sound sources, classify sound sources, generate sound filters for the speakers360, or some combination thereof. The position sensor390generates one or more measurement signals in response to motion of the headset300. The position sensor390may be located on a portion of the frame310of the headset300. The position sensor390may include an inertial measurement unit (IMU). Examples of position sensor390include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor that detects motion, a type of sensor used for error correction of the IMU, or some combination thereof. The position sensor390may be located external to the IMU, internal to the IMU, or some combination thereof. In some embodiments, the headset300may provide for simultaneous localization and mapping (SLAM) for a position of the headset300and updating of a model of the local area. For example, the headset300may include a passive camera assembly (PCA) that generates color image data. The PCA may include one or more RGB cameras that capture images of some or all of the local area. In some embodiments, some or all of the imaging devices330of the DCA may also function as the PCA. 
The images captured by the PCA and the depth information determined by the DCA may be used to determine parameters of the local area, generate a model of the local area, update a model of the local area, or some combination thereof. Furthermore, the position sensor390tracks the position (e.g., location and pose) of the headset300within the room. Referring now toFIG.4, an audio system400generates one or more acoustic transfer functions for a user. The audio system400may then use the one or more acoustic transfer functions to generate audio content for the user. The audio system400can include a transducer array410, a sensor array420, and an audio controller430. The transducer array410can be configured to present audio content. The transducer array410includes a plurality of transducers, such as a speaker (e.g., the speaker360), a tissue transducer (e.g., the tissue transducer370), some other device that provides audio content, or some combination thereof. A tissue transducer may function as a bone conduction transducer or a cartilage conduction transducer. The transducer array410may present audio content via air conduction (e.g., via one or more speakers), via bone conduction (via one or more bone conduction transducers), via cartilage conduction (via one or more cartilage conduction transducers), or some combination thereof. In some embodiments, the transducer array410may include one or more transducers to cover different parts of a frequency range. For example, a piezoelectric transducer may be used to cover a first part of a frequency range and a moving coil transducer may be used to cover a second part of a frequency range. The bone conduction transducers can generate acoustic pressure waves by vibrating bone/tissue in the user's head. A bone conduction transducer may be coupled to a portion of a headset, and may be configured to be coupled to a portion of the user's skull behind the auricle. The bone conduction transducer receives vibration instructions from the audio controller430, and vibrates a portion of the user's skull based on the received instructions. The vibrations from the bone conduction transducer generate a tissue-borne acoustic pressure wave that propagates toward the user's cochlea, bypassing the eardrum. The cartilage conduction transducers generate acoustic pressure waves by vibrating one or more portions of the auricular cartilage of the ears of the user. A cartilage conduction transducer may be coupled to a portion of a headset, and may be configured to be coupled to one or more portions of the auricular cartilage of the ear. For example, the cartilage conduction transducer may couple to the back of an auricle of the ear of the user. The cartilage conduction transducer may be located anywhere along the auricular cartilage around the outer ear (e.g., the pinna, the tragus, some other portion of the auricular cartilage, or some combination thereof). Vibrating the one or more portions of auricular cartilage may generate: airborne acoustic pressure waves outside the ear canal; tissue-borne acoustic pressure waves that cause some portions of the ear canal to vibrate thereby generating an airborne acoustic pressure wave within the ear canal; or some combination thereof. The generated airborne acoustic pressure waves propagate down the ear canal toward the ear drum. The transducer array410can generate audio content in accordance with instructions from the audio controller430. In some embodiments, the audio content is spatialized.
Spatialized audio content can appear to originate from a particular direction and/or target region (e.g., an object in the local area and/or a virtual object). For example, spatialized audio content can make it appear that sound is originating from a virtual singer across a room from a user of the audio system400. The transducer array410may be coupled to a wearable device. The transducer array410may be a plurality of speakers that are separate from the wearable device (e.g., coupled to an external console). The sensor array420can detect sounds within a local area surrounding the sensor array420. The sensor array420may include a plurality of acoustic sensors that each detect air pressure variations of a sound wave and convert the detected sounds into an electronic format (analog or digital). The plurality of acoustic sensors may be positioned on a headset (e.g., headset300), on a user (e.g., in an ear canal of the user), on a neckband, or some combination thereof. An acoustic sensor may be, e.g., a microphone, a vibration sensor, an accelerometer, or any combination thereof. In some embodiments, the sensor array420is configured to monitor the audio content generated by the transducer array410using at least some of the plurality of acoustic sensors. Increasing the number of sensors may improve the accuracy of information (e.g., directionality) describing a sound field produced by the transducer array410and/or sound from the local area. The audio controller430controls operation of the audio system400. The audio controller430can include a data store435, a DOA estimator440, a transfer function450, a tracker460, a beamformer470, and a sound filter480. The audio controller430may be located inside a headset, in some embodiments. Some embodiments of the audio controller430have different components than those described here. Similarly, functions can be distributed among the components in different manners than described here. For example, some functions of the controller may be performed external to the headset. The data store435can store data for use by the audio system400. Data in the data store435may include sounds recorded in the local area of the audio system400, audio content, head-related transfer functions (HRTFs), transfer functions for one or more sensors, array transfer functions (ATFs) for one or more of the acoustic sensors, sound source locations, virtual model of local area, direction of arrival estimates, sound filters, and other data relevant for use by the audio system400, or any combination thereof. The DOA estimator440can localize sound sources in the local area based in part on information from the sensor array420. Localization is a process of determining where sound sources are located relative to the user of the audio system400. The DOA estimator440performs a DOA analysis to localize one or more sound sources within the local area. The DOA analysis may include analyzing the intensity, spectra, and/or arrival time of each sound at the sensor array420to determine the direction from which the sounds originated. In some cases, the DOA analysis may include any suitable algorithm for analyzing a surrounding acoustic environment in which the audio system400is located. For example, the DOA analysis may be designed to receive input signals from the sensor array420and apply digital signal processing algorithms to the input signals to estimate a direction of arrival. 
These algorithms may include, for example, delay and sum algorithms where the input signal is sampled, and the resulting weighted and delayed versions of the sampled signal are averaged together to determine a DOA. A least mean squared (LMS) algorithm may also be implemented to create an adaptive filter. This adaptive filter may then be used to identify differences in signal intensity, for example, or differences in time of arrival. These differences may then be used to estimate the DOA. The DOA may be determined by converting the input signals into the frequency domain and selecting specific bins within the time-frequency (TF) domain to process. Each selected TF bin may be processed to determine whether that bin includes a portion of the audio spectrum with a direct path audio signal. Those bins having a portion of the direct-path signal may then be analyzed to identify the angle at which the sensor array420received the direct-path audio signal. The determined angle may then be used to identify the DOA for the received input signal. Other algorithms not listed above may also be used alone or in combination with the above algorithms to determine DOA. The DOA estimator440may determine the DOA with respect to an absolute position of the audio system400within the local area. The position of the sensor array420may be received from an external system (e.g., some other component of a headset, an artificial reality console, a mapping server, a position sensor (e.g., the position sensor390), etc.). The external system may create a virtual model of the local area, in which the local area and the position of the audio system400are mapped. The received position information may include a location and/or an orientation of some or all of the audio system400(e.g., of the sensor array420). The DOA estimator440may update the estimated DOA based on the received position information. The transfer function450can generate one or more acoustic transfer functions. Generally, a transfer function is a mathematical function giving a corresponding output value for each possible input value. Based on parameters of the detected sounds, the transfer function450generates one or more acoustic transfer functions associated with the audio system. The acoustic transfer functions may be array transfer functions (ATFs), head-related transfer functions (HRTFs), other types of acoustic transfer functions, or some combination thereof. An ATF characterizes how the microphone receives a sound from a point in space. An ATF can include a number of transfer functions that characterize a relationship between the sound source and the corresponding sound received by the acoustic sensors in the sensor array420. Accordingly, for a sound source there is a corresponding transfer function for each of the acoustic sensors in the sensor array420. The sound source may be, e.g., someone or something generating sound in the local area, the user, or one or more transducers of the transducer array410. The ATF for a particular sound source location relative to the sensor array420may differ from user to user due to a person's anatomy (e.g., ear shape, shoulders, etc.) that affects the sound as it travels to the person's ears. Accordingly, the ATFs of the sensor array420can be personalized for each user of the audio system400. The transfer function450can determine one or more HRTFs for a user of the audio system400. The HRTF can characterize how an ear receives a sound from a point in space.
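As a toy illustration of the delay and sum family of algorithms mentioned above, the sketch below steers a two-microphone pair over candidate azimuths with whole-sample delays and picks the angle with the most energy; real arrays use more sensors, fractional delays, and windowing, and all names here are illustrative assumptions.

    import numpy as np

    def delay_and_sum_doa(mic_a, mic_b, spacing_m, fs_hz,
                          speed_of_sound=343.0):
        """Return the candidate azimuth (degrees) whose steered sum of
        the two microphone signals carries the most energy."""
        best_angle, best_energy = 0.0, -np.inf
        for angle_deg in range(-90, 91, 5):
            # Inter-microphone delay implied by a plane wave arriving
            # from angle_deg (np.roll wraps at the edges, a tolerable
            # simplification for a sketch on long buffers).
            delay_s = spacing_m * np.sin(np.radians(angle_deg)) / speed_of_sound
            shift = int(round(delay_s * fs_hz))
            steered = mic_a + np.roll(mic_b, -shift)
            energy = float(np.sum(steered ** 2))
            if energy > best_energy:
                best_angle, best_energy = angle_deg, energy
        return best_angle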
The HRTF for a particular source location relative to a person can be unique to each ear of the person (and is unique to the person) due to the person's anatomy (e.g., ear shape, shoulders, etc.) that affects the sound as it travels to the person's ears. In some embodiments, the transfer function450may determine HRTFs for the user using a calibration process. In some embodiments, the transfer function450may provide information about the user to a remote system. The remote system determines a set of HRTFs that are customized to the user using, e.g., machine learning, and provides the customized set of HRTFs to the audio system400. The tracker460can track locations of one or more sound sources. The tracker460may take current DOA estimates and compare them with a stored history of previous DOA estimates. In some embodiments, the audio system400may recalculate DOA estimates on a periodic schedule, such as once per second, or once per millisecond. The tracker460may compare the current DOA estimates with previous DOA estimates, and in response to a change in a DOA estimate for a sound source, the tracker460may determine that the sound source moved. In some embodiments, the tracker460may detect a change in location based on visual information received from the headset or some other external source. The tracker460may track the movement of one or more sound sources over time. The tracker460may store values for a number of sound sources and a location of each sound source at each point in time. In response to a change in a value of the number or locations of the sound sources, the tracker460may determine that a sound source moved. The tracker460may calculate an estimate of the localization variance. The localization variance may be used as a confidence level for each determination of a change in movement. The beamformer470can process one or more ATFs to selectively emphasize sounds from sound sources within a certain area while de-emphasizing sounds from other areas. In analyzing sounds detected by the sensor array420, the beamformer470may combine information from different acoustic sensors to emphasize sound associated with a particular region of the local area while de-emphasizing sound that is from outside of the region. The beamformer470may isolate an audio signal associated with sound from a particular sound source from other sound sources in the local area based on, e.g., different DOA estimates from the DOA estimator440and the tracker460. The beamformer470may thus selectively analyze discrete sound sources in the local area. In some embodiments, the beamformer470may enhance a signal from a sound source. For example, the beamformer470may apply sound filters which eliminate signals above, below, or between certain frequencies. Signal enhancement acts to enhance sounds associated with a given identified sound source relative to other sounds detected by the sensor array420. The sound filter480can determine sound filters for the transducer array410. In some embodiments, the sound filters cause the audio content to be spatialized, such that the audio content appears to originate from a target region. The sound filter480may use HRTFs and/or acoustic parameters to generate the sound filters. The acoustic parameters describe acoustic properties of the local area. The acoustic parameters may include, e.g., a reverberation time, a reverberation level, a room impulse response, etc. In some embodiments, the sound filter480calculates one or more of the acoustic parameters.
In some embodiments, the sound filter480requests the acoustic parameters from a mapping server (e.g., as described below with regard toFIG.6). The sound filter480can provide the sound filters to the transducer array410. In some embodiments, the sound filters may cause positive or negative amplification of sounds as a function of frequency. Referring now toFIG.5, a method500for spatial update latency compensation for head-tracked audio is illustrated. In some embodiments, the method500can include one or more of the following operations which may or may not be performed in sequence. The method500can include identifying a current head angle of a device, such as an HWD, that outputs sound for perception by a user (505). The method500can include identifying a latency associated with an audio signal to be outputted at a current point in time (510). The method500can include retrieving a previous head angle used to generate the audio signal based at least on the latency (515). The method500can include determining an angle error between the current head angle and the previous head angle (520). The method500can include predicting an expected value of the current head angle (525). The method can include determining the angle error between the current head angle and the expected value of the current head angle (530). The method can include determining ITDs and ILDs using the angle error (535). The method500can include applying the ITDs and ILDs to the audio signal to adjust the audio signal (540). The method can include outputting the adjusted audio signal (545). The method500can be executed using various devices and systems described herein, including but not limited to the system100, the HMD200, the headset300, the audio system400, and the computing environment described with reference toFIG.6. In more detail, at505, a head angle is identified. The head angle can be an angle of a head of a user, which may correspond to an angle of an HWD worn by the user. The head angle can be identified for a current point in time (e.g., a point in time at which a most current audio signal is to be outputted). The head angle can be identified by receiving the head angle from a head tracker, such as a position sensor that may be coupled to the HWD or the head of the user, or that may be remote from the HWD or head of the user and monitors the head angle remotely (e.g., a camera-based head tracker). For example, the head tracker may periodically provide the head angle, or the head angle can be received by requesting the head angle. The head angle can be requested on a periodic basis or responsive to a request condition, such as initialization of an application used to generate audio content. At510, a latency associated with an audio signal to be outputted at the current point in time is identified. The latency may correspond to processing and/or network transmission times (e.g., round trip time) associated with providing a head angle to an audio engine that generates the audio signal, the audio engine generating the audio signal, and receipt of the audio signal from the audio engine. For example, the latency may include network transmission times for communicating the head angles and audio signals via a Bluetooth connection between the HWD and a device remote from the HWD that implements the audio engine. The latency may be a predetermined value, which can be retrieved in order to identify the latency.
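Tying the enumerated operations of the method500together, one hedged end-to-end sketch follows; the objects head_tracker, audio_engine, and output_device and the helper ild_db_from_error are hypothetical stand-ins, and angle_error, should_adjust, itd_samples, apply_itd_ild, and HeadAngleBuffer refer to the illustrative sketches given earlier in this description.

    import time

    def compensate_and_output(head_tracker, angle_buffer, audio_engine,
                              output_device):
        now = time.monotonic()
        current = head_tracker.read()                    # step 505
        angle_buffer.record(current, now)
        latency = audio_engine.measured_latency()        # step 510
        previous = angle_buffer.angle_at(now - latency)  # step 515
        error = angle_error(previous, current)           # step 520
        # Steps 525-530: a prediction of the current head angle could
        # be compared against the measurement instead of, or alongside,
        # the previous-angle comparison above.
        left, right = audio_engine.latest()
        if should_adjust(error, audio_engine.source_azimuth()):
            itd = itd_samples(error)                     # step 535
            ild = ild_db_from_error(error)               # assumed mapping
            left, right = apply_itd_ild(left, right, itd, ild)  # step 540
        output_device.play(left, right)                  # step 545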
The latency may be measured, such as by assigning a time stamp or other identifier to a head angle, and causing the audio engine to include the identifier with the audio signal when the audio engine outputs the audio signal so that the time stamp of the head angle can be compared to a time stamp at which the audio signal is received in order to measure the latency. The latency may be measured and updated periodically or responsive to a trigger condition. For example, the latency may be measured on a periodic basis (e.g., every ten seconds; every minute; every hour) and a value of the latency maintained in memory may be updated responsive to measuring the latency. The latency may be measured responsive to trigger conditions such as initialization of the HWD or an application operated by the HWD, or detection of user behavior indicative of a request to calibrate the audio output, such as irregular or unexpected movements. At515, a previous head angle that was used to generate the audio signal that is now to be outputted at the current point in time is retrieved based at least on the latency. For example, a buffer of head angles may be maintained, for a duration of head angles at least as long as the measured latency, that maps head angles to time stamps. The previous head angle can be retrieved by comparing the latency to the current point in time, in order to identify a previous point in time at which the previous head angle was measured, and retrieve the previous head angle using a time stamp corresponding to the previous point in time. For example, if the latency is measured to be 50 ms, the previous head angle can be identified as the head angle assigned a time stamp of negative 50 ms in the head angle buffer. The previous head angle can be a first head angle, while the current head angle is a second head angle. At520, an angle error is determined between the current head angle and the previous head angle. The angle error can be determined by comparing the current head angle to the previous head angle. For example, the angle error can be determined by subtracting the previous head angle from the current head angle. At525, an expected value of the current head angle can be predicted. For example, additionally or alternatively to determining the angle error using the current head angle and the previous head angle, the expected value of the current head angle can be predicted in order to determine the angle error. The expected value can be predicted using the previous head angle and the latency (e.g., a time difference between when the previous head angle was detected and a current time) to predict where the head would be expected to be. The expected value can be predicted and provided to an audio engine used to generate the audio signal so that the audio engine generates the audio signal using the expected value. The expected value can be predicted using any of a variety of functions or models, such as a Kalman filter model, which can maintain a state of the head angle and update the state of the head angle using measured head angle data. At530, the angle error can be determined between the current head angle (e.g., as measured by sampling the head tracker) and the expected value of the current head angle (e.g., as provided by a model for the head angle). The angle error can be determined by subtracting the expected value of the current head angle from the measured value of the current head angle.
The angle error can be determined responsive to determining the expected value of the current head angle. At535, one or more angle errors may be used to determine how to adjust the audio signal to compensate for the angle error (and in turn the latency represented by the angle error). For example, an ITD (e.g., change to ITD), an ILD (e.g., change to ILD), or a combination thereof may be determined using the angle errors. The ITD and ILD may be determined using the angle error determined by comparing the current head angle to the previous head angle, the angle error determined by comparing the current head angle to the expected value of the current head angle, or a combination thereof (e.g., a weighted average of the angle errors). The ITD and ILD may be determined responsive to determining the angle error. The ITD and ILD may be determined responsive to the angle error being greater than a threshold angle error. The ITD and ILD may be determined using various functions, models, lookup tables, or other operations that map the angle error to ITD, ILD, or a combination thereof. For example, the ITD and ILD may be determined using a system of equations that receive the angle error as input and output ITD and ILD (e.g., adjustments thereto) as output. In some implementations, the audio signal may include a multiplexed binaural audio signal that includes multiple audio streams or channels, and one or more of an ITD adjustment or an ILD adjustment can be determined for each audio stream or channel. At540, the adjustments to the audio signal, such as one or more ITD adjustments, ILD adjustments, or a combination thereof, can be applied to the audio signal or to each audio stream of the audio signal. For example, the ITD adjustment can be applied to the audio signal to delay one or more left channels or streams relative to one or more right channels or streams. The ITD adjustment can be applied by retrieving a predetermined duration of audio data corresponding to the ITD adjustment from a playback buffer (e.g., a playback buffer of 700 ms of audio data) and using the predetermined duration to delay the left channel(s) or right channel(s) (e.g., depending on the sign of the angle error). The ILD adjustment can be applied to the audio signal to increase or decrease a level or intensity of one or more left channels or streams relative to one or more right channels or streams. For example, the ILD adjustment can be applied by applying one or more of a scalar value, a biquad filter, or a zero-pole filter to one or more channels or streams of the audio signal. The ITD and ILD adjustments can be applied to reduce the angle error. In some implementations, the ITD and ILD adjustments are selectively applied responsive to a frequency characteristic of the audio signal (or of a particular audio channel or audio stream). At545, the adjusted audio signal is outputted. The adjusted audio signal can be outputted using one or more audio output devices, such as speakers, bone conducting transducers, cartilage conducting transducers, haptic feedback devices, or any combination thereof. The adjusted audio signal can be outputted at a rate corresponding to a frame rate of images displayed (e.g., in VR or AR applications), or a rate of generation of the audio signal.
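The following hedged Python sketch illustrates steps 535 and 540 for a two-channel signal, using a Woodworth-style approximation to map the angle error to an ITD and a simple per-degree gain for the ILD; the disclosure leaves the exact mapping open (functions, models, or lookup tables), so the particular formula, the assumed head radius, and the 0.05 dB-per-degree ILD slope are assumptions.

```python
import numpy as np

SPEED_OF_SOUND_M_S = 343.0
HEAD_RADIUS_M = 0.0875          # assumed average head radius

def itd_seconds(angle_deg):
    # Woodworth-style approximation mapping an azimuth offset to an
    # interaural time difference (the formula itself is an assumption).
    a = np.radians(angle_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (a + np.sin(a))

def apply_itd_ild(left, right, angle_error_deg, fs_hz, ild_db_per_deg=0.05):
    """Steps 535-540: delay and attenuate one channel relative to the
    other so the spatial image shifts by angle_error_deg."""
    delay = int(round(abs(itd_seconds(angle_error_deg)) * fs_hz))
    delay = min(delay, len(left))   # clamp for very large errors
    gain = 10.0 ** (-abs(angle_error_deg) * ild_db_per_deg / 20.0)
    if angle_error_deg > 0:     # shift the image toward the right ear
        left = gain * np.concatenate([np.zeros(delay), left[:len(left) - delay]])
    elif angle_error_deg < 0:   # shift the image toward the left ear
        right = gain * np.concatenate([np.zeros(delay), right[:len(right) - delay]])
    return left, right
```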
The adjusted audio signal can be outputted at a rate less than the rate of generation of the audio signal, such as if the audio signal is downsampled prior to adjusting the audio signal, which may facilitate reducing processing demands associated with adjusting the audio signal while maintaining a target spatial image and compensating for latency effects. The adjusted audio signal can be outputted by controlling at least one of a timing (e.g., delay) and level of each channel (e.g., left and right channels) using the determined ITDs and ILDs. The method500or steps or operations thereof can be performed based on various conditions. For example, operations can be performed responsive to initialization of the HWD, an audio system associated with the HWD, or an application executed to provide audio content. Operations can be performed on a periodic basis, such as to recalibrate measurement of latency in the system. Operations can be performed at, above, or below an audio bitrate or audio sample rate associated with generation of or outputting of audio signals, or a video frame rate associated with generation and output of video frames. For example, if audio signals are generated every millisecond, head angle detection and compensation (e.g., by adjusting the audio signal) can be performed every millisecond, while latency measurement can be performed every ten seconds. Various operations described herein can be implemented on computer systems.FIG.6shows a block diagram of a representative server system600and client computer system614usable to implement the present disclosure. Server system600or similar systems can implement services or servers described herein or portions thereof. Client computer system614or similar systems can implement clients described herein. Each of the systems100,200and others described herein can incorporate features of the systems600,614. Server system600can have a modular design that incorporates a number of modules602(e.g., blades in a blade server); while two modules602are shown, any number can be provided. Each module602can include processing unit(s)604and local storage606. Processing unit(s)604can include a single processor, which can have one or more cores, or multiple processors. Processing unit(s)604can include a general-purpose primary processor as well as one or more special-purpose co-processors such as graphics processors, digital signal processors, or the like. Some or all processing units604can be implemented using customized circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). Such integrated circuits execute instructions that are stored on the circuit itself. Processing unit(s)604can execute instructions stored in local storage606. Any type of processors in any combination can be included in processing unit(s)604. Local storage606can include volatile storage media (e.g., conventional DRAM, SRAM, SDRAM, or the like) and/or non-volatile storage media (e.g., magnetic or optical disk, flash memory, or the like). Storage media incorporated in local storage606can be fixed, removable or upgradeable as desired. Local storage606can be physically or logically divided into various subunits such as a system memory, a read-only memory (ROM), and a permanent storage device. The system memory can be a read-and-write memory device or a volatile read-and-write memory, such as dynamic random-access memory. The system memory can store some or all of the instructions and data that processing unit(s)604need at runtime.
The ROM can store static data and instructions that are needed by processing unit(s)604. The permanent storage device can be a non-volatile read-and-write memory device that can store instructions and data even when module602is powered down. The term “storage medium” as used herein includes any medium in which data can be stored indefinitely (subject to overwriting, electrical disturbance, power loss, or the like) and does not include carrier waves and transitory electronic signals propagating wirelessly or over wired connections. Local storage606can store one or more software programs to be executed by processing unit(s)604, such as an operating system and/or programs implementing various server functions such as functions of the system100, or any other system described herein, or any other server(s) associated with the system100or any other system described herein. “Software” refers generally to sequences of instructions that, when executed by processing unit(s)604, cause server system600(or portions thereof) to perform various operations, thus defining one or more specific machine implementations that execute and perform the operations of the software programs. The instructions can be stored as firmware residing in read-only memory and/or program code stored in non-volatile storage media that can be read into volatile working memory for execution by processing unit(s)604. Software can be implemented as a single program or a collection of separate programs or program modules that interact as desired. From local storage606(or non-local storage described below), processing unit(s)604can retrieve program instructions to execute and data to process in order to execute various operations described above. In some server systems600, multiple modules602can be interconnected via a bus or other interconnect608, forming a local area network that supports communication between modules602and other components of server system600. Interconnect608can be implemented using various technologies including server racks, hubs, routers, etc. A wide area network (WAN) interface610can provide data communication capability between the local area network (interconnect608) and a larger network, such as the Internet. Conventional or other communication technologies can be used, including wired (e.g., Ethernet, IEEE 802.3 standards) and/or wireless technologies (e.g., Wi-Fi, IEEE 802.11 standards). Local storage606can provide working memory for processing unit(s)604, providing fast access to programs and/or data to be processed while reducing traffic on interconnect608. Storage for larger quantities of data can be provided on the local area network by one or more mass storage subsystems612that can be connected to interconnect608. Mass storage subsystem612can be based on magnetic, optical, semiconductor, or other data storage media. Direct attached storage, storage area networks, network-attached storage, and the like can be used. Any data stores or other collections of data described herein as being produced, consumed, or maintained by a service or server can be stored in mass storage subsystem612. Additional data storage resources may be accessible via WAN interface610(potentially with increased latency). Server system600can operate in response to requests received via WAN interface610. For example, one of modules602can implement a supervisory function and assign discrete tasks to other modules602in response to received requests. Conventional work allocation techniques can be used.
As requests are processed, results can be returned to the requester via WAN interface610. Such operation can generally be automated. WAN interface610can connect multiple server systems600to each other, providing scalable systems capable of managing high volumes of activity. Conventional or other techniques for managing server systems and server farms (collections of server systems that cooperate) can be used, including dynamic resource allocation and reallocation. Server system600can interact with various user-owned or user-operated devices via a wide-area network such as the Internet. An example of a user-operated device is shown inFIG.6as client computing system614. Client computing system614can be implemented, for example, as a consumer device such as a smartphone, other mobile phone, tablet computer, wearable computing device (e.g., smart watch, eyeglasses), desktop computer, laptop computer, and so on. For example, client computing system614can communicate via WAN interface610. Client computing system614can include conventional computer components such as processing unit(s)616, storage device618, network interface620, user input device622, and user output device624. Client computing system614can be a computing device implemented in a variety of form factors, such as a desktop computer, laptop computer, tablet computer, smartphone, other mobile computing device, wearable computing device, or the like. Processing unit(s)616and storage device618can be similar to processing unit(s)604and local storage606described above. Suitable devices can be selected based on the demands to be placed on client computing system614; for example, client computing system614can be implemented as a “thin” client with limited processing capability or as a high-powered computing device. Client computing system614can be provisioned with program code executable by processing unit(s)616to enable various interactions with server system600of a message management service such as accessing messages, performing actions on messages, and other interactions described above. Some client computing systems614can also interact with a messaging service independently of the message management service. Network interface620can provide a connection to a wide area network (e.g., the Internet) to which WAN interface610of server system600is also connected. Network interface620can include a wired interface (e.g., Ethernet) and/or a wireless interface implementing various RF data communication standards such as Wi-Fi, Bluetooth, or cellular data network standards (e.g., 3G, 4G, LTE, etc.). User input device622can include any device (or devices) via which a user can provide signals to client computing system614; client computing system614can interpret the signals as indicative of particular user requests or information. User input device622can include any or all of a keyboard, touch pad, touch screen, mouse or other pointing device, scroll wheel, click wheel, dial, button, switch, keypad, microphone, and so on. User output device624can include any device via which client computing system614can provide information to a user. For example, user output device624can include a display to display images generated by or delivered to client computing system614.
The display can incorporate various image generation technologies, e.g., a liquid crystal display (LCD), light-emitting diode (LED) including organic light-emitting diodes (OLED), projection system, cathode ray tube (CRT), or the like, together with supporting electronics (e.g., digital-to-analog or analog-to-digital converters, signal processors, or the like). A device such as a touchscreen that functions as both an input and an output device can be used. Output devices624can be provided in addition to or instead of a display. Examples include indicator lights, speakers, tactile “display” devices, printers, and so on. Some implementations include electronic components, such as microprocessors, storage and memory that store computer program instructions in a computer readable storage medium. Many of the features described in this specification can be implemented as processes that are specified as a set of program instructions encoded on a computer readable storage medium. When these program instructions are executed by one or more processing units, they cause the processing unit(s) to perform various operations indicated in the program instructions. Examples of program instructions or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter. Through suitable programming, processing unit(s)604and616can provide various functionality for server system600and client computing system614, including any of the functionality described herein as being performed by a server or client, or other functionality associated with message management services. It will be appreciated that server system600and client computing system614are illustrative and that variations and modifications are possible. Computer systems used in connection with the present disclosure can have other capabilities not specifically described here. Further, while server system600and client computing system614are described with reference to particular blocks, it is to be understood that these blocks are defined for convenience of description and are not intended to imply a particular physical arrangement of component parts. For instance, different blocks can be but need not be located in the same facility, in the same server rack, or on the same motherboard. Further, the blocks need not correspond to physically distinct components. Blocks can be configured to perform various operations, e.g., by programming a processor or providing appropriate control circuitry, and various blocks might or might not be reconfigurable depending on how the initial configuration is obtained. Implementations of the present disclosure can be realized in a variety of apparatus including electronic devices implemented using any combination of circuitry and software. Having now described some illustrative implementations, it is apparent that the foregoing is illustrative and not limiting, having been presented by way of example. In particular, although many of the examples presented herein involve specific combinations of method acts or system elements, those acts and those elements can be combined in other ways to accomplish the same objectives. Acts, elements and features discussed in connection with one implementation are not intended to be excluded from a similar role in other implementations or embodiments.
The hardware and data processing components used to implement the various processes, operations, illustrative logics, logical blocks, modules and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor or any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some embodiments, particular processes and methods may be performed by circuitry that is specific to a given function. The memory (e.g., memory, memory unit, storage device, etc.) may include one or more devices (e.g., RAM, ROM, Flash memory, hard disk storage, etc.) for storing data and/or computer code for completing or facilitating the various processes, layers and modules described in the present disclosure. The memory may be or include volatile memory or non-volatile memory, and may include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. According to an exemplary embodiment, the memory is communicably connected to the processor via a processing circuit and includes computer code for executing (e.g., by the processing circuit and/or the processor) the one or more processes described herein. The present disclosure contemplates methods, systems and program products on any machine-readable media for accomplishing various operations. The embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions.
The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing,” “involving,” “characterized by,” “characterized in that,” and variations thereof herein is meant to encompass the items listed thereafter, equivalents thereof, and additional items, as well as alternate implementations consisting of the items listed thereafter exclusively. In one implementation, the systems and methods described herein consist of one, each combination of more than one, or all of the described elements, acts, or components. Any references to implementations or elements or acts of the systems and methods herein referred to in the singular can also embrace implementations including a plurality of these elements, and any references in plural to any implementation or element or act herein can also embrace implementations including only a single element. References in the singular or plural form are not intended to limit the presently disclosed systems or methods, their components, acts, or elements to single or plural configurations. References to any act or element being based on any information, act or element can include implementations where the act or element is based at least in part on any information, act, or element. Any implementation disclosed herein can be combined with any other implementation or embodiment, and references to “an implementation,” “some implementations,” “one implementation” or the like are not necessarily mutually exclusive and are intended to indicate that a particular feature, structure, or characteristic described in connection with the implementation can be included in at least one implementation or embodiment. Such terms as used herein are not necessarily all referring to the same implementation. Any implementation can be combined with any other implementation, inclusively or exclusively, in any manner consistent with the aspects and implementations disclosed herein. Where technical features in the drawings, detailed description or any claim are followed by reference signs, the reference signs have been included to increase the intelligibility of the drawings, detailed description, and claims. Accordingly, neither the reference signs nor their absence have any limiting effect on the scope of any claim elements. Systems and methods described herein may be embodied in other specific forms without departing from the characteristics thereof. Further, relative parallel, perpendicular, vertical or other positioning or orientation descriptions include variations within +/−10% or +/−10 degrees of pure vertical, parallel or perpendicular positioning. References to “approximately,” “about,” “substantially” or other terms of degree include variations of +/−10% from the given measurement, unit, or range unless explicitly indicated otherwise. Coupled elements can be electrically, mechanically, or physically coupled with one another directly or with intervening elements. Scope of the systems and methods described herein is thus indicated by the appended claims, rather than the foregoing description, and changes that come within the meaning and range of equivalency of the claims are embraced therein. The term “coupled” and variations thereof includes the joining of two members directly or indirectly to one another. Such joining may be stationary (e.g., permanent or fixed) or moveable (e.g., removable or releasable).
Such joining may be achieved with the two members coupled directly with or to each other, with the two members coupled with each other using a separate intervening member and any additional intermediate members coupled with one another, or with the two members coupled with each other using an intervening member that is integrally formed as a single unitary body with one of the two members. If “coupled” or variations thereof are modified by an additional term (e.g., directly coupled), the generic definition of “coupled” provided above is modified by the plain language meaning of the additional term (e.g., “directly coupled” means the joining of two members without any separate intervening member), resulting in a narrower definition than the generic definition of “coupled” provided above. Such coupling may be mechanical, electrical, or fluidic. References to “or” can be construed as inclusive so that any terms described using “or” can indicate any of a single, more than one, and all of the described terms. A reference to “at least one of ‘A’ and ‘B’” can include only ‘A’, only ‘B’, as well as both ‘A’ and ‘B’. Such references used in conjunction with “comprising” or other open terminology can include additional items. Modifications of described elements and acts such as variations in sizes, dimensions, structures, shapes and proportions of the various elements, values of parameters, mounting arrangements, use of materials, colors, orientations can occur without materially departing from the teachings and advantages of the subject matter disclosed herein. For example, elements shown as integrally formed can be constructed of multiple parts or elements, the position of elements can be reversed or otherwise varied, and the nature or number of discrete elements or positions can be altered or varied. Other substitutions, modifications, changes and omissions can also be made in the design, operating conditions and arrangement of the disclosed elements and operations without departing from the scope of the present disclosure. References herein to the positions of elements (e.g., “top,” “bottom,” “above,” “below”) are merely used to describe the orientation of various elements in the FIGURES. The orientation of various elements may differ according to other exemplary embodiments, and such variations are intended to be encompassed by the present disclosure.
11943603
DETAILED DESCRIPTION The following embodiments are exemplifying. Although the specification may refer to “an”, “one”, or “some” embodiment(s) in several locations of the text, this does not necessarily mean that each reference is made to the same embodiment(s), or that a particular feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments. Example embodiments relate to controlling an audio source device in order to reduce an effect of background noise generated by the audio source device on user experience of spatial audio content. According to an example embodiment, an apparatus is configured to receive information on a position of at least one user with respect to a spatial audio field provided by an audio source device, determine an audio volume level at the position of the at least one user, receive information relating to background noise generated by the audio source device when providing the spatial audio field, determine, based on the audio volume level at the position of the at least one user and the information relating to the background noise, control information for controlling the audio source device, and control the audio source device based on the control information when providing the spatial audio field. As spatial audio is becoming more popular in games, movies and music, a need for computational power increases as well. However, as a consequence of increased computational power, the need for cooling also increases. Many devices include a cooling fan for cooling the device, but the fan noise may be very disturbing, especially if a user is listening to audio content. FIG.1is a block diagram depicting an apparatus100operating in accordance with an example embodiment of the invention. The apparatus100may be, for example, an electronic device such as a chip or a chipset. The apparatus100comprises one or more control circuitry, such as at least one processor110and at least one memory160, including one or more algorithms such as computer program code120, wherein the at least one memory160and the computer program code120are configured, with the at least one processor110, to cause the apparatus100to carry out any of the example functionalities described below. In the example ofFIG.1, the processor110is a control unit operatively connected to read from and write to the memory160. The processor110may also be configured to receive control signals received via an input interface and/or the processor110may be configured to output control signals via an output interface. In an example embodiment, the processor110may be configured to convert the received control signals into appropriate commands for controlling functionalities of the apparatus100. The at least one memory160stores computer program code120which, when loaded into the processor110, controls the operation of the apparatus100as explained below. In other examples, the apparatus100may comprise more than one memory160or different kinds of storage devices. Computer program code120for enabling implementations of example embodiments of the invention or a part of such computer program code may be loaded onto the apparatus100by the manufacturer of the apparatus100, by a user of the apparatus100, or by the apparatus100itself based on a download program, or the code can be pushed to the apparatus100by an external device.
The computer program code120may arrive at the apparatus100via an electromagnetic carrier signal or be copied from a physical entity such as a computer program product, a memory device or a record medium such as a Compact Disc (CD), a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disk (DVD) or a Blu-ray disk. FIG.2is a block diagram depicting an apparatus200in accordance with an example embodiment of the invention. The apparatus200may be an electronic device such as a hand-portable device, a mobile phone or a Personal Digital Assistant (PDA), a Personal Computer (PC), a laptop, a desktop, a tablet computer, a wireless terminal, a communication terminal, a game console, a music player, an electronic book reader (e-book reader), a positioning device, a digital camera, a household appliance, a loudspeaker, a CD, DVD or Blu-ray player, or a media player. In the example embodiment ofFIG.2, the apparatus200is illustrated as comprising the apparatus100, a microphone array210and at least one loudspeaker230. Instead of comprising the microphone array210and/or the at least one loudspeaker230, the apparatus200may be operatively connected to the microphone array210and/or the at least one loudspeaker230. For example, the apparatus200may be configured to communicate with the microphone array210and/or the loudspeaker230over a wireless radio connection, or the like. The apparatus200may further comprise a display configured to act as a user interface. For example, the display may be a touch screen display. In an example embodiment, the display and/or the user interface may be external to the apparatus200, but in communication with it. The user interface may also comprise a manually operable control such as a button, a key, a touch pad, a joystick, a stylus, a pen, a roller, a rocker, a keypad, a keyboard or any suitable input mechanism for inputting and/or accessing information. Further examples include a camera, a speech recognition system, an eye movement recognition system, and acceleration-, tilt- and/or movement-based input systems. Therefore, the apparatus200may also comprise different kinds of sensors such as one or more gyro sensors, accelerometers, magnetometers, position sensors and/or tilt sensors. According to an example embodiment, the apparatus200is configured to establish radio communication with at least one device using, for example, a Bluetooth, Wi-Fi, radio frequency identification (RFID), or a near field communication (NFC) connection. According to an example embodiment, the apparatus200comprises an audio source device. According to another example embodiment, the apparatus200is operatively connected to an audio source device. The audio source device may comprise, for example, a gaming console, a computer, a household appliance, or the like. An audio source device may comprise a device providing audio content such as spatial audio or multimedia content such as video content for playback. An audio source device may comprise an audio, video or other media player with a built-in hard disk or an audio/video/media player operatively connected to a hard disk such as a network server. The audio source device may be configured to control reproduction of audio content. Controlling reproduction of spatial audio may comprise, for example, providing a spatial audio field and/or controlling one or more loudspeakers230configured to create a spatial audio field.
A spatial audio field may be provided in a physical space such as a room using one or more loudspeakers located in the space or using headphones. Spatial audio may comprise a full sphere surround-sound to mimic the way people perceive audio in real life. Spatial audio may comprise audio that appears from a user's position to be assigned to a certain direction and/or distance. Therefore, the perceived audio may change with the movement of the user or with the user turning. Spatial audio may comprise audio created by sound sources, ambient audio or a combination thereof. Ambient audio may comprise audio that might not be identifiable in terms of a sound source such as traffic humming, wind or waves, for example. The full sphere surround-sound may comprise a spatial audio field and the position of the user or the position of a capturing device may be considered as a reference point in the spatial audio field. According to an example embodiment, a reference point comprises the center of the audio field. According to an example embodiment, the apparatus200is configured to communicate with other devices using, for example, a wireless radio connection such as Bluetooth. According to an example embodiment, the apparatus200is configured to receive information relating to a spatial audio field. Information relating to a spatial audio field may comprise, for example, one or more characteristics of the spatial audio field, audio content provided in the spatial audio field, information on one or more users consuming audio content provided in the spatial audio field, or the like. The apparatus200may be configured to receive the information relating to the spatial audio field from one or more other devices or the apparatus200may be configured to determine the information relating to the spatial audio field based on, for example, measurement data. According to an example embodiment, the apparatus200is configured to receive information on a position of at least one user with respect to a spatial audio field provided by the audio source device. The information on a position of the at least one user may comprise information indicating the position of the at least one user or data based on which the apparatus200may determine the position of the at least one user. The apparatus200may be configured to receive the information on a position of the at least one user from at least one microphone, camera and/or a mobile computing device of the user. According to an example embodiment, the apparatus200is configured to determine a position of the at least one user with respect to a spatial audio field based on the information on a position of the at least one user. A position of the at least one user may comprise a physical position or a virtual position. A physical position of the at least one user may comprise a physical position of the at least one user with respect to a spatial audio field provided in a particular space such as a room and a virtual position of the at least one user may comprise a position of the at least one user with respect to one or more audio objects in the spatial audio field. An audio object may comprise one or more audio signals and associated metadata. An audio object may be associated with metadata that defines a location or trajectory of that object in the audio field. 
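As a concrete illustration of an audio object carrying one or more audio signals and associated metadata, the following Python sketch defines a minimal data structure; the field names and defaults are assumptions for illustration and do not reflect any standardized metadata format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AudioObject:
    """One or more audio signals plus associated metadata defining the
    object's place in the spatial audio field (illustrative only)."""
    samples: List[float]              # mono audio signal
    azimuth_deg: float = 0.0          # direction from the reference point
    distance_m: float = 1.0           # distance from the reference point
    object_type: str = "point"        # e.g., acoustic characteristics
    renderer_class: str = "default"   # class of renderer to be used
    trajectory: List[Tuple[float, float]] = field(default_factory=list)
                                      # optional (time s, azimuth deg) pairs
```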
As well as specifying a location and/or movement of an object, the metadata may also define the type of object, for example, acoustic characteristics of an object, and/or the class of renderer that is to be used to render the object. According to an example embodiment, the position of at least one user comprises a physical position of the at least one user in a space where the spatial audio field is provided. The space may comprise a physical space. The physical position of the at least one user may comprise a position of the at least one user with respect to one or more physical devices in the space. For example, the position of the at least one user may comprise a position of the at least one user with respect to positions of one or more loudspeakers or the audio source device. A position of the at least one user with respect to a spatial audio field may comprise a position of the at least one user with respect to a reference point in the spatial audio field. For example, a position of the at least one user may comprise a position of the at least one user with respect to the center of the spatial audio field. As another example, a position of the at least one user may comprise a position of the at least one user with respect to at least one physical device providing the spatial audio field such as a position with respect to one or more loudspeakers, the audio source device, or the like. For example, a position of the at least one user may comprise a distance between the at least one user and one or more loudspeakers and/or a distance between the at least one user and the audio source device. According to an example embodiment, the information on the position of the at least one user comprises an orientation of the at least one user. An orientation of the at least one user may comprise an orientation of the at least one user with respect to a reference point in the spatial audio field or an orientation of the at least one user with respect to a physical device providing the spatial audio field such as one or more loudspeakers or the audio source device. According to an example embodiment, the apparatus is configured to analyze a direction of interest of the at least one user in the spatial audio field. A direction of interest of the at least one user may comprise a direction in the spatial audio field to which the at least one user pays attention. The apparatus200may be configured to analyze a direction of interest of the at least one user by modifying the spatial audio field and determining whether a position and/or orientation of the at least one user changes in response to the modification. Without limiting the scope of the claims, an advantage of receiving information on a position of at least one user with respect to a spatial audio field is that one or more characteristics of audio content at the position of the at least one user may be determined and the spatial audio field may be controlled based on the one or more characteristics. According to an example embodiment, the apparatus200is configured to determine one or more characteristics of the spatial audio field. For example, the apparatus200may be configured to determine an audio volume level in different parts or different directions of the spatial audio field. The apparatus200may be configured to analyze spatial audio in different directions or parts of the spatial audio field by analyzing one or more loudspeaker outputs.
Analyzing spatial audio may comprise, for example, analyzing a volume level or energy of the spatial audio in different directions or parts of the spatial audio field. For example, the apparatus200may be configured to estimate energy for 5.1 multi-channel audio in a time-frequency domain denoted as Si(k, n), where i is the channel index, k the frequency band index, and n the temporal frame index. The estimated energy may be used for estimating directions θj(k, n) for a pair of loudspeakers, given an angle θ0between the loudspeakers in the pair and a mean angle θ12of the loudspeaker pair, as follows:

$$\theta_j(k,n) = \arctan\!\left(\tan\theta_0\,\frac{g_1 - g_2}{g_1 + g_2}\right) + \theta_{12}, \qquad g_1 = E_1(k,n), \quad g_2 = E_2(k,n)$$

The estimated directions may be used for determining where reproduced spatial audio is perceived, and the energy for the direction may be obtained as a sum of the energies Ei(k, n) of the loudspeaker pair. In some examples, coherence of loudspeaker signals in a pair of loudspeakers may be utilized in analyzing the direction, in terms of coherent sounds being perceived to form a phantom source in between the loudspeakers, whereas incoherent sounds may be perceived to originate from the directions of the loudspeakers. The apparatus200may also be configured to determine a dominant direction θ(k, n) in the spatial audio field. For example, the apparatus200may be configured to estimate energy for 5.1 multi-channel audio by forming direction vectors as follows:

$$V(k,n) = [x(k,n),\, y(k,n)], \qquad x(k,n) = \sum_i E_i(k,n)\cos(\theta_i), \quad y(k,n) = \sum_i E_i(k,n)\sin(\theta_i)$$

where Ei(k, n) is the energy of the audio signal in a loudspeaker channel i and θiis an azimuth direction of the loudspeaker i. The dominant direction θ(k, n) may then be determined as follows:

$$\theta(k,n) = \operatorname{atan2}\big(y(k,n),\, x(k,n)\big)$$

According to an example embodiment, the apparatus200is configured to determine an audio volume level at the position of the at least one user. Determining an audio volume level at the position of the at least one user may comprise, for example, measuring a volume level at the position of the at least one user or receiving information indicating an audio volume level at the position of the at least one user. The volume level at the position of the at least one user may be measured, for example, using one or more microphones. According to an example embodiment, determining an audio volume level at the position of the at least one user comprises analyzing output provided by at least one audio rendering device. The at least one audio rendering device may comprise, for example, one or more loudspeakers. The audio source device may be configured to provide a spatial audio field and/or, for example, perform graphics rendering, which may be a computationally heavy process. In order to avoid overheating, the audio source device may comprise one or more cooling systems such as a cooling fan, a cooling water pump, or the like. The audio source device may be configured to turn on a cooling system when a temperature of the audio source device is above a threshold value such as a threshold temperature. However, the cooling system may cause audio interference such as background noise that may be disturbing for a user. According to an example embodiment, the apparatus200is configured to receive information relating to background noise generated by the audio source device when providing the spatial audio field. The background noise may comprise noise that is separate from the audio content provided in the spatial audio field. The background noise may be disturbing for a user.
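The direction estimates above translate directly into code. The following Python sketch computes the dominant direction from per-channel energies for a single time-frequency tile and the tangent-law direction of one loudspeaker pair; sign conventions and the example energies are assumptions.

```python
import numpy as np

def dominant_direction_deg(energies, azimuths_deg):
    """Dominant direction theta(k, n) from per-channel energies E_i(k, n)
    and loudspeaker azimuths theta_i, transcribing the direction-vector
    formulas above for one time-frequency tile."""
    az = np.radians(np.asarray(azimuths_deg, dtype=float))
    e = np.asarray(energies, dtype=float)
    x = float(np.sum(e * np.cos(az)))   # x(k, n)
    y = float(np.sum(e * np.sin(az)))   # y(k, n)
    return float(np.degrees(np.arctan2(y, x)))

def pair_direction_deg(e1, e2, theta0_deg, theta12_deg):
    # Tangent-law direction for one loudspeaker pair with opening angle
    # theta0 and mean angle theta12 (sign conventions are assumptions).
    g1, g2 = e1, e2
    t = np.arctan(np.tan(np.radians(theta0_deg)) * (g1 - g2) / (g1 + g2))
    return float(np.degrees(t)) + theta12_deg

# Example: 5.0 layout at azimuths 0, +/-30 and +/-110 degrees; symmetric
# energies place the dominant direction at 0 degrees (front).
print(dominant_direction_deg([1.0, 0.4, 0.4, 0.1, 0.1], [0, 30, -30, 110, -110]))
```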
According to an example embodiment, the apparatus200is configured to receive information relating to background noise generated by the audio source in response to analyzing spatial audio in different directions or parts of the spatial audio field. For example, the apparatus200may be configured to determine that a direction of a particular type of audio corresponds to the direction of the audio source device and thereby deduce that the detected audio is background noise. According to an example embodiment, analyzing spatial audio in different directions or parts of the spatial audio field may comprise determining a rate of change of an audio signal amplitude in a particular frequency band. According to an example embodiment, the apparatus200is configured to receive information relating to background noise generated by the audio source from one or more microphones associated with the audio source device. For example, the one or more microphones may be close to the audio source device such that the background noise is dominating the sound level in the microphone signal, thereby enabling the apparatus200to use the level of the microphone signal as a noise estimate. According to an example embodiment, the apparatus200is configured to receive information relating to background noise generated by the audio source using acoustic echo cancellation (AEC). For example, the apparatus200may be configured to remove reproduced spatial audio from a captured microphone signal in order to determine the background noise level. According to an example embodiment, the apparatus200is configured to receive information relating to background noise generated by the audio source in response to estimating the noise level based on a rotation speed of a cooling fan. For example, the apparatus200may receive information on different noise levels associated with different cooling fan speeds and estimate a noise level based on the cooling fan speed. The apparatus200may be configured to receive information relating to background noise generated by the audio source device based on different combinations of analyzing spatial audio, receiving information from a microphone associated with the audio source device, AEC and/or estimating the noise level based on a rotation speed of a cooling fan. According to an example embodiment, the background noise comprises sound generated by at least one system used for preventing the audio source device from overheating. According to another example embodiment, the background noise comprises sound generated by at least one system used for providing media content for playback by the audio source device. For example, the apparatus200may comprise a feature that causes indexing of a database on a hard disk. The database may comprise, for example, an audio database such as a music database or a graphics database. Indexing may take minutes or hours and thereby cause background noise. According to an example embodiment, the audio source device comprises at least one cooling fan generating the background noise. According to an example embodiment, the information relating to the background noise generated by the audio source device comprises a noise level. According to an example embodiment, the apparatus200is configured to receive the information relating to the background noise generated by the audio source device from at least one microphone.
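By way of illustration, the following Python sketch estimates the background-noise level from the rotation speed of a cooling fan by interpolating between calibrated operating points; the speed-to-noise pairs shown are placeholder assumptions, since real values would come from measurements of the specific device.

```python
import numpy as np

# Assumed calibration of a specific device: cooling-fan speed (RPM)
# versus measured background-noise level (dB SPL). Placeholder values.
FAN_RPM = [800.0, 1500.0, 2500.0, 4000.0]
NOISE_DB = [18.0, 26.0, 35.0, 44.0]

def noise_estimate_db(fan_rpm):
    """Estimate the background-noise level from the cooling-fan rotation
    speed by interpolating between calibrated operating points."""
    return float(np.interp(fan_rpm, FAN_RPM, NOISE_DB))

print(noise_estimate_db(2000.0))  # -> 30.5, between the 1500 and 2500 RPM points
```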
The microphone may comprise, for example, a microphone of the audio source device or a separate microphone such as a microphone of a mobile computing device of the at least one user. According to an example embodiment, the apparatus200is configured to determine, based on the audio volume level at the position of the at least one user and the information relating to the background noise, control information for controlling the audio source device. The control information may comprise information for controlling a function of the audio source device or controlling the spatial audio field provided by the audio source device. For example, the control information may comprise an instruction to control the audio source device or a parameter value such as a target noise level, a maximum allowed noise level, or an instruction to modify the spatial audio field provided by the audio source device. According to an example embodiment, the control information comprises a particular noise level. The particular noise level may comprise, for example, a maximum allowed noise level. According to an example embodiment, the control information comprises a particular direction of particular audio content in the spatial audio field. For example, the control information may comprise an instruction to provide particular audio content in a particular direction. Without limiting the scope of the claims, an advantage of determining control information for controlling the audio source device based on the audio volume level at the position of the at least one user and the information relating to the background noise is that customized control may be provided taking into account the position of the at least one user. According to an example embodiment, the apparatus200is configured to control the audio source device based on the control information when providing the spatial audio field. In other words, the apparatus200may be configured to control the audio source device based on the control information during playback of audio content. Controlling the audio source device may comprise controlling at least one function of the audio source device or controlling the spatial audio field provided by the audio source device. The at least one function may comprise, for example, a cooling fan of the audio source device, and controlling the spatial audio field provided by the audio source device may comprise, for example, controlling rendering of the spatial audio field. According to an example embodiment, controlling the audio source device comprises controlling a rotation speed of the cooling fan. Controlling a rotation speed of the cooling fan may comprise increasing the speed of the cooling fan or decreasing the speed of the cooling fan. The audio source device may comprise a plurality of cooling methods such as a plurality of cooling fans. In such a case, controlling the audio source device may comprise controlling the plurality of cooling methods by, for example, applying a control curve for controlling the plurality of cooling methods or switching one or more of the cooling methods from a first mode to a second mode. According to an example embodiment, controlling the audio source device comprises controlling rendering of the spatial audio field. Controlling rendering of the spatial audio field may comprise, for example, providing instructions and/or parameters to one or more rendering devices. According to an example embodiment, controlling rendering of the spatial audio field comprises rotating the spatial audio field.
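One possible control-information policy, sketched in Python below, picks the fastest fan speed whose estimated noise remains masked by the audio volume level at the position of the user; the 6 dB masking margin and the fallback to the slowest speed are illustrative assumptions, as the disclosure leaves the exact rule open.

```python
def choose_fan_rpm(volume_at_user_db, noise_db_for_rpm, masking_margin_db=6.0):
    """Pick the fastest candidate fan speed whose estimated noise stays
    at least masking_margin_db below the audio volume level at the
    position of the user; fall back to the slowest speed otherwise.
    noise_db_for_rpm maps candidate speeds (RPM) to noise levels (dB),
    e.g., from the interpolation sketch above."""
    allowed = [rpm for rpm, noise_db in noise_db_for_rpm.items()
               if noise_db <= volume_at_user_db - masking_margin_db]
    return max(allowed) if allowed else min(noise_db_for_rpm)

# Example: at a 45 dB program level at the user's position, 2500 RPM
# (35 dB estimated noise) is the fastest speed still masked by 6 dB.
print(choose_fan_rpm(45.0, {800: 18.0, 1500: 26.0, 2500: 35.0, 4000: 44.0}))
```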
Rotating the spatial audio field comprises modifying one or more audio parameters such that the orientation of the spatial audio field with respect to a reference point is changed. Modifying one or more audio parameters may be performed in different manners for different formats of audio. For example, rotatable audio may be rotated by modifying metadata and ambisonics may be rotated by modifying rotation matrices. According to an example embodiment, rotating a spatial audio field comprises moving an audio object from a first direction to a second direction. For example, assuming an audio object in a spatial audio field located to the right of the reference point, rotating the spatial audio field may comprise moving the audio object to the left of the reference point. Without limiting the scope of the claims, an advantage of moving an audio object from a first direction to a second direction is that even if the user does not move, the user can better concentrate on the audio object as it is in a direction different from the device generating background noise. Another advantage of moving an audio object from a first direction to a second direction is that if the user moves away from the device generating background noise, it is easier to mask the background noise. The apparatus200may be configured to rotate the spatial audio field based on one or more directions with the largest amount of energy. For example, the apparatus200may be configured to rotate the spatial audio field such that the directions with the largest amount of energy correspond with a direction of the audio source device. The apparatus200may further be configured to determine whether the audio content in a direction with the largest energy remains relatively static in order to avoid constantly rotating the spatial audio field. The apparatus200may be configured to maintain the rotated spatial audio field until the background noise is below a predetermined threshold value. According to an example embodiment, the apparatus200is configured to provide a plurality of spatial audio fields and rotate the plurality of spatial audio fields independent of each other. Without limiting the scope of the claims, an advantage of rotating a spatial audio field is that, for example, a cooling system may continue running as needed, but the disturbance caused by the cooling system for a user may be reduced by masking the background noise. Another advantage is that by rotating the spatial audio field, a user may be lured to a position with less disturbance caused by the background noise. According to an example embodiment, the apparatus200comprises means for performing the features of the claimed invention, wherein the means for performing comprises at least one processor110, at least one memory160including computer program code120, the at least one memory160and the computer program code120configured to, with the at least one processor110, cause the performance of the apparatus200.
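As a minimal sketch of rotating ambisonics by modifying rotation matrices, the following Python function applies a yaw rotation to a first-order signal; the channel convention is an assumption, and metadata-based (object) audio would instead be rotated by adding the same yaw offset to each object's azimuth.

```python
import numpy as np

def rotate_foa_yaw(w, x, y, z, yaw_deg):
    """Rotate a first-order ambisonics signal about the vertical axis.
    Channels follow the common W (omni), X (front), Y (left), Z (up)
    convention, which is an assumption; other orderings need a
    permuted rotation matrix."""
    c = np.cos(np.radians(yaw_deg))
    s = np.sin(np.radians(yaw_deg))
    x_rot = c * x - s * y
    y_rot = s * x + c * y
    return w, x_rot, y_rot, z   # W and Z are unchanged by a yaw rotation
```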
The means for performing the features of the claimed invention may comprise means for receiving information on a position of at least one user with respect to a spatial audio field provided by an audio source device, means for determining an audio volume level at the position of the at least one user, means for receiving information relating to background noise generated by the audio source device when providing the spatial audio field, means for determining, based on the audio volume level at the position of the at least one user and the information relating to the background noise, control information for controlling the audio source device, and means for controlling the audio source device based on the control information when providing the spatial audio field. The apparatus200may further comprise means for receiving the information relating to the background noise generated by the audio source device from at least one microphone. FIG.3illustrates an example of a spatial audio field provided by an audio source device and rendered by a plurality of loudspeakers. In the example ofFIG.3, it is assumed that the audio source device comprises the apparatus200and a cooling fan that generates background noise320. The spatial audio field is provided in a physical space such as a room300. In the example ofFIG.3, a spatial audio field315is provided by the apparatus200and rendered by the plurality of loudspeakers302,303. The apparatus200receives information on a position of a user301in terms of a distance305between the user301and the apparatus200and a distance310between the user301and the loudspeaker302, and determines that the user301is closer to the apparatus200than to the loudspeaker302. The apparatus200may receive the information on a position of the user301from, for example, at least one microphone, camera and/or a mobile computing device of the user301. The apparatus200determines an audio volume level at the position of the user301and receives information relating to background noise320generated by the apparatus200when providing the spatial audio field315. In the example ofFIG.3, it is assumed that the information relating to the background noise320comprises a noise level. The apparatus200determines, based on the audio volume level at the position of the user301and the noise level, control information for controlling the cooling fan of the apparatus200. As the user301is closer to the apparatus200than to the loudspeaker302, the apparatus200may determine, depending on the noise level and the audio volume level at the position of the user301, that the rotation speed of the cooling fan needs to be reduced in order to reduce disturbances caused by the cooling fan for the user. FIG.4illustrates another example of a spatial audio field provided by an audio source device and rendered by a plurality of loudspeakers. Similarly toFIG.3, it is assumed that the audio source device comprises the apparatus200and a cooling fan that generates background noise320. The spatial audio field is provided in a physical space such as a room300. In the example ofFIG.4, a spatial audio field315is provided by the apparatus200and rendered by the plurality of loudspeakers302,303. The apparatus200receives information on a position of a user301in terms of a distance405between the user301and the apparatus200and a distance410between the user301and the loudspeaker302, and determines that the user301is closer to the loudspeaker302than to the apparatus200.
FIGS.5A and5Billustrate an example of a spatial audio field provided by an audio source device and rendered by a plurality of loudspeakers. Similarly toFIGS.3and4, it is assumed that the audio source device comprises the apparatus200and a cooling fan that generates background noise320. The spatial audio field is provided in a physical space such as a room300. In the example ofFIG.5A, a spatial audio field315is provided by the apparatus200and rendered by the plurality of loudspeakers302,303. The apparatus200is further configured to analyze audio content in different directions of the spatial audio field and determine a direction with the most energy. In the example ofFIG.5Aa direction with the most energy is illustrated by arrow510. The apparatus200receives information on a position of a user301and determines an audio volume level at the position of the user301and receives information relating to background noise320generated by the apparatus200when providing the spatial audio field315. In the example ofFIG.5A, it is assumed that the information relating to the background noise320comprises a noise level. The apparatus200determines, based on the audio volume level at the position of the user301and the noise level, control information for controlling rendering of the spatial audio field315. In the example ofFIG.5A, the apparatus200determines that the background noise level is relatively low and may be masked by the spatial audio field315. Thereby, the apparatus200determines that the spatial audio field315needs to be rotated in order to mask the background noise320. FIG.5Billustrates a rotated spatial audio field315. In the example ofFIG.5Bthe spatial audio field is rotated such that the direction510with the most energy corresponds to the direction of the apparatus200thereby masking the background noise320. Rotating the spatial audio field may comprise modifying one or more audio parameters such that the orientation of the spatial audio field with respect to a reference point is changed. In the example ofFIG.5B, the reference point comprises the position of the user301. As mentioned above, modifying one or more audio parameters may be performed in different manners for different formats of audio. For example, rotatable audio may be rotated by modifying metadata, and ambisonics may be rotated by modifying rotation matrices.
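The rotation decision ofFIGS.5A and5Bcan be sketched as follows, assuming the energy of the spatial audio field has already been estimated per azimuth sector; the sector resolution and the function name are assumptions of the example.

```python
import numpy as np

def rotation_towards_device(sector_energy: np.ndarray,
                            device_azimuth_deg: float) -> float:
    """Return the rotation (in degrees) that aligns the direction with
    the most energy with the direction of the audio source device.

    sector_energy[i] holds the energy of the spatial audio field in the
    sector centred at i * (360 / len(sector_energy)) degrees.
    """
    sector_width = 360.0 / len(sector_energy)
    loudest_azimuth = np.argmax(sector_energy) * sector_width
    # Rotate so that the loudest direction coincides with the device
    # direction (masking the noise); wrap the result into (-180, 180].
    return (device_azimuth_deg - loudest_azimuth + 180.0) % 360.0 - 180.0

# Example: most energy at 90 degrees, device at 0 degrees -> rotate by -90.
energies = np.array([0.2, 0.9, 0.1, 0.3])    # sectors at 0, 90, 180, 270 deg
angle = rotation_towards_device(energies, device_azimuth_deg=0.0)
```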
FIGS.6A and6Billustrate an example of a spatial audio field provided by an audio source device and rendered by a plurality of loudspeakers. It is assumed that the audio source device comprises the apparatus200and a cooling fan that generates background noise320. The spatial audio field is provided in a physical space such as a room300. In the example ofFIG.6A, a spatial audio field315is provided by the apparatus200and rendered by the plurality of loudspeakers302,303. The apparatus200is further configured to analyze audio content in different directions of the spatial audio field and determine a direction with the most energy. In the example ofFIG.6Aa direction with the most energy is illustrated by arrow610. The apparatus200receives information on a position of a user301and determines an audio volume level at the position of the user301and receives information relating to background noise320generated by the apparatus200when providing the spatial audio field315. In the example ofFIG.6A, it is assumed that the information relating to the background noise320comprises a noise level. The apparatus200determines, based on the audio volume level at the position of the user301and the noise level, control information for controlling rendering of the spatial audio field. In the example ofFIG.6A, the apparatus200determines that the background noise level is relatively high and it cannot be properly masked by the spatial audio field315assuming the position of the user301remains substantially the same. In the example ofFIG.6A, the apparatus200determines that the spatial audio field needs to be rotated in order to lure the user301further away from the apparatus200. FIG.6Billustrates a rotated spatial audio field315. In the example ofFIG.6Bthe spatial audio field315is rotated such that the direction610with the most energy corresponds to a direction different from the direction of the apparatus200. In this way the user may be lured away from the apparatus200such that the user experience is less affected by the background noise320. Rotating the spatial audio field315may comprise modifying one or more audio parameters such that the orientation of the spatial audio field315with respect to a reference point is changed. In the example ofFIG.6B, the reference point comprises the position of the user301. As mentioned above, modifying one or more audio parameters may be performed in different manners for different formats of audio. For example, rotatable audio may be rotated by modifying metadata and ambisonics may be rotated by modifying rotation matrices. FIG.7illustrates an example method700incorporating aspects of the previously disclosed embodiments. More specifically the example method700illustrates controlling an audio source device. The method may be performed by the apparatus200. The method starts with receiving705information on a position of at least one user with respect to a spatial audio field provided by an audio source device. The apparatus200may receive the information on a position of the user from, for example, at least one microphone, camera and/or a mobile computing device of the user. The method continues with determining710an audio volume level at the position of the at least one user. Determining the audio volume level at the position of the at least one user may comprise analyzing output provided by at least one audio rendering device. The method continues with receiving715information relating to background noise generated by the audio source device when providing the spatial audio field. The information relating to background noise may comprise, for example, a background noise level. The method further continues with determining720, based on the audio volume level at the position of the at least one user and the information relating to the background noise, control information for controlling the audio source device. The method further continues with controlling725the audio source device based on the control information when providing the spatial audio field. Controlling the audio source device may comprise, for example, controlling at least one cooling fan generating the background noise or controlling rendering of the spatial audio field. Controlling the fan may comprise controlling a rotation speed of the cooling fan. Controlling rendering of the spatial audio field may comprise rotating the spatial audio field.
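A compact, hedged sketch of the determining step (block720) is given below; the masking margin, the rotation angle and the fan-speed step are illustrative assumptions, and blocks705to715are represented only by the function arguments.

```python
from dataclasses import dataclass

@dataclass
class ControlInfo:
    rotate_field_deg: float = 0.0   # control rendering of the spatial audio field
    fan_rpm_delta: int = 0          # control the rotation speed of the cooling fan

def determine_control_info(volume_at_user_db: float,
                           noise_level_db: float,
                           masking_margin_db: float = 6.0) -> ControlInfo:
    """Block 720: derive control information from the audio volume level
    at the user's position (block 710) and the background noise level
    (block 715)."""
    if volume_at_user_db - noise_level_db >= masking_margin_db:
        # The noise can be masked: rotate the spatial audio field towards
        # the audio source device (cf. FIGS. 5A and 5B).
        return ControlInfo(rotate_field_deg=-90.0)
    # The noise cannot be masked: reduce the fan speed instead (cf. FIG. 3).
    return ControlInfo(fan_rpm_delta=-200)

# Block 725 would then apply this control information to the device.
control = determine_control_info(volume_at_user_db=45.0, noise_level_db=35.0)
```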
Without limiting the scope of the claims, an advantage of controlling an audio source device based on an audio volume level at a position of at least one user and information relating to the background noise is that disturbance caused by the audio source device during playback of audio content may be reduced. Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the example embodiments disclosed herein is that an audio source device may be controlled in different ways in different situations while minimizing disturbance caused by the audio source device. As used in this application, the term “circuitry” may refer to one or more or all of the following: (a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and (b) combinations of hardware circuits and software, such as (as applicable): (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions and (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation. This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device. Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on the apparatus, a separate device or a plurality of devices. If desired, part of the software, application logic and/or hardware may reside on the apparatus, part of the software, application logic and/or hardware may reside on a separate device, and part of the software, application logic and/or hardware may reside on a plurality of devices.
In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a ‘computer-readable medium’ may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of a computer described and depicted inFIG.2. A computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer. If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined. Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims. It will be obvious to a person skilled in the art that, as the technology advances, the inventive concept can be implemented in various ways. The invention and its embodiments are not limited to the examples described above but may vary within the scope of the claims.
DESCRIPTION OF SOME EMBODIMENTS FIG.1illustrates a block diagram of some components and/or entities of a spatial audio processing system100that may serve as a framework for various embodiments of a spatial audio processing technique described in the present disclosure. The audio processing system comprises an audio capturing entity110for capturing a plurality of input audio signals115-jthat represent an audio scene in proximity of the audio capturing entity110, an external audio capturing entity112for capturing one or more further input audio signals117-kthat represent at least part of the audio scene represented by the input audio signals115-j, a spatial audio processing entity120for processing the captured input audio signals115-jinto a spatial audio signal125and for processing the further input audio signal(s)117-kinto a complementary audio signal127, a spatial mixer140for combining the spatial audio signal125and the complementary signal127into a reconstructed spatial audio signal145, and an audio reproduction entity150for rendering the reconstructed spatial audio signal145. The audio capturing entity110may comprise e.g. a microphone array of a plurality of microphones arranged in predefined positions with respect to each other. The audio capturing entity110may further include processing means for recording a plurality of digital audio signals that represent the sound captured by the respective microphone of the microphone array. The recorded digital audio signals carry information that may be processed into one or more signals that enable conveying the audio scene at the location of capture for presentation to a human listener. The audio capturing entity110provides the plurality of digital audio signals to the spatial processing entity120as the respective input audio signals115-jand/or stores these digital audio signals in a storage means for subsequent use. Each microphone of the microphone array employs a respective predefined directional pattern, selected according to the desired audio capturing characteristics. As non-limiting examples, all microphones of the microphone array may be omnidirectional microphones, all microphones of the microphone array may be directional microphones, or the microphone array may include a mix of omnidirectional and directional microphones. The external audio capturing entity112may comprise one or more further microphones arranged into predefined positions with respect to each other and with respect to the plurality of microphones of the microphone array of the audio capturing entity110. The one or more further microphones may comprise one or more separate, independent microphones and/or a further microphone array. The external audio capturing entity112may further include processing means for recording one or more further digital audio signals that represent the sound captured by the respective ones of the one or more further microphones. The recorded one or more further digital audio signals carry information that may be processed into one or more signals that enable complementing or modifying the audio scene derivable (or derived) from the input audio signals115-jprovided by the audio capturing entity110. The external audio capturing entity112provides the one or more further digital audio signals to the spatial processing entity120as the respective one or more further input audio signals117-kand/or stores these further digital audio signals in a storage means for subsequent use.
Each of the one or more further microphones provided in the external audio capturing entity112employs a respective predefined directional pattern, selected according to the desired audio capturing characteristics. The further microphone(s) may comprise omnidirectional microphones, directional microphones, or a mix of omnidirectional and directional microphones. In this regard, any directional microphone may be further arranged to have its directional pattern pointed towards a respective predefined part of the audio scene. In case the audio capturing entity110and/or the external audio capturing entity112makes use of one or more directional microphones, a directional microphone may be provided using any suitable microphone type known in the art that provides a directional pattern, for example, a cardioid directional pattern, a super cardioid directional pattern or a hyper cardioid directional pattern. The spatial audio processing entity120may comprise spatial audio processing means for processing the plurality of the input audio signals115-jinto the spatial audio signal125that conveys the audio scene represented by the input audio signals115-j, possibly modified in view of spatial audio analysis carried out in the spatial audio processing entity120and/or in view of user input received therein. The spatial audio processing entity120may further process the further one or more input audio signals117-kinto the complementary audio signal127in view of the spatial audio analysis carried out on basis of the input audio signal115and/or in view of user input received in the spatial audio processing entity120. The spatial processing entity120may also be referred to as a spatial encoder or as a spatial encoding entity. The spatial audio processing entity120may provide the spatial audio signal125and the complementary audio signal127for further processing by the spatial mixer140and/or for storage in a storage means for subsequent use. The spatial mixer140may process the spatial audio signal125and the complementary audio signal127into the reconstructed spatial audio signal145in a predefined format that is suitable for audio reproduction by the audio reproduction entity150. The audio reproduction entity150may comprise, for example, headphones, a headset or a loudspeaker arrangement of one or more loudspeakers. Instead of using the audio capturing entity110as a source of the input audio signals115-jand the further input audio signal(s)117-k, the audio processing system100may include a storage means for storing a pre-captured or pre-created plurality of input audio signals115-jtogether with the corresponding one or more further input audio signals117-k. Hence, the audio processing chain may be based on the audio input signals115-jand the further audio input signal(s)117-kthat are read from the storage means instead of relying on input audio signals115-j,117-kreceived (directly) from the respective audio capturing entity110,112. In the following, some aspects of operation of the spatial audio processing entity120are described via a number of examples, whereas other entities of the audio processing system100are referred to insofar as they are necessary for understanding of the respective aspect of operation of the spatial audio processing entity120. In this regard,FIG.2illustrates a block diagram of some components and/or entities of a spatial audio encoder220according to an example.
The spatial audio encoder220may include further components and/or entities in addition to those depicted inFIG.2. The spatial audio encoder220may be provided, for example, as the audio encoding entity120or as part thereof in the framework of the audio processing system100. In other examples, the spatial audio encoder220may be provided e.g. as an element of an audio processing system different from the audio processing system100or it may be provided as an independent processing entity that reads the input audio signals115-jand the further input audio signal(s)117-kfrom and/or writes the spatial audio signal125and the complementary audio signal127to a storage means (e.g. a memory). FIG.2further illustrates a block diagram of some components and/or entities of an audio capturing entity210and an external audio capturing entity212according to respective examples. Each of the audio capturing entity210and the external audio capturing entity212may include further components and/or entities in addition to those depicted inFIG.2. The audio capturing entity210may be employed, for example, as the audio capturing entity110or as a part thereof in the framework of the audio processing system100, whereas the external audio capturing entity212may be employed, for example, as the external audio capturing entity112or as a part thereof in the framework of the audio processing system100. In an example, the audio capturing entity210is arranged in the same device with the spatial audio encoder220, whereas the external audio capturing entity212is provided in another device that is communicatively coupled to the device hosting the spatial audio encoder220and the audio capturing entity210. In an example, the audio capturing entity210is arranged to write the plurality of input audio signals115-jto a storage means (e.g. a memory) and the external audio capturing entity212is arranged to write the one or more further input audio signals117-kto the storage means. In the example ofFIG.2, the audio capturing entity210is illustrated with a microphone array111that includes microphones111-1,111-2and111-3arranged in predefined positions with respect to each other. The microphones111-1,111-2and111-3serve to capture sounds that are recorded as respective digital audio signals and conveyed from the audio capturing entity210to the spatial audio encoder220as respective input audio signals115-1,115-2and115-3. The external audio capturing entity212includes further microphones113-1and113-2that serve to capture sounds that are recorded as respective further digital audio signals and conveyed from the external audio capturing entity212to the spatial audio encoder220as respective further input audio signals117-1and117-2. The example ofFIG.2generalizes into receiving, at the spatial audio encoder220, two or more input audio signals115-jthat may be jointly referred to as an input audio signal115and one or more further input audio signals117-kthat may be jointly referred to as a further input audio signal117. In the spatial audio encoder220, the input audio signals115-jare received by a spatial analysis portion222, whereas the further input audio signal(s)117-kare received by an ambience generation portion224. The input audio signals115-jserve to represent an audio scene captured by the microphone array111. The audio scene may also be referred to as a spatial audio image.
The spatial analysis portion222operates to process the input audio signals115-jto form two or more processed audio signals that convey the audio scene represented by the input audio signals115-j. The further input audio signals117-kserve to represent at least part of the audio scene represented by the digital audio signals115-j. The audio scene represented by the input audio signals115-jmay be considered to comprise a directional sound component and an ambient sound component, where the directional sound component represents one or more directional sound sources that each have a respective certain position in the audio scene and where the ambient sound component represents non-directional sounds in the audio scene. Each of the directional sound component and the ambient sound component may be represented by one or more respective audio signals, possibly complemented by spatial audio parameters that further characterize the audio scene. The directional and ambient sound components may be formulated into the spatial audio signal125in a number of ways. An example in this regard involves processing the input audio signals115-jinto a first signal and a second signal such that they jointly convey information that can be employed by the spatial mixer140to create the reconstructed spatial audio signal145that represents or at least approximates the audio scene. In such an approach the first signal may be employed to (predominantly) represent the one or more directional sound sources while the second signal may be employed to represent the ambience. In an example, the first signal may comprise a mid signal and the second signal may comprise a side signal. As a non-limiting example, the operation of the spatial encoder220to generate the spatial audio signal125on basis of the plurality of input audio signals115-jand to generate the complementary audio signal127on basis of the one or more further input audio signals117-kis outlined by steps of a method300depicted by the flow diagram ofFIG.3. The method300proceeds from receiving the plurality of input audio signals115-jthat represent an audio scene and the one or more further input audio signals117-kthat represent at least part of the audio scene, as indicated in block302. The method300continues by identification of a portion of interest (POI) in the audio scene, as indicated in block304, and processing of the input audio signal115into the spatial audio signal125where the POI in the audio scene is suppressed, as indicated in block306. Moreover, the method300further proceeds into generating one or more audio signals on basis of the further input audio signal117to serve as the complementary audio signal127that represents the POI in the audio scene, as indicated in block308, the complementary audio signal127hence serving as a substitute for the POI in the audio scene represented by the input audio signal115. The method300further proceeds to combining the complementary audio signal127with the spatial audio signal125to create the reconstructed spatial audio signal145, as indicated in block310. While examples pertaining to operations of block302are described in the foregoing, examples pertaining to operations of each of the blocks304to310are provided in the following.
In the following, the description of examples pertaining to operations of blocks304to310assumes the above-described approach of using the first signal to represent the one or more directional sound sources of the audio scene and the second signal to represent the ambience of the audio scene by referring to the first signal as the mid signal and to the second signal as the side signal of the spatial audio signal125. This, however, serves as a non-limiting example chosen for clarity and brevity of the description and a different format of the spatial audio signal125may be applied instead without departing from the scope of the present disclosure. The spatial analysis portion222may carry out a spatial audio analysis that involves deriving the one or more spatial audio parameters and identification of the POI at least in part on basis of the derived spatial audio parameters. In this regard, the derived spatial audio parameters may be such that they are useable both for creation of the spatial audio signal125on basis of the input audio signals115-jand for identification of the POI within the audio scene they serve to represent. As a pre-processing step before the actual spatial audio analysis, the spatial analysis portion222may subject each of the digital audio signals115-jto short-time discrete Fourier transform (STFT) to convert the input audio signals115-jinto respective frequency domain signals using a predefined analysis window length (e.g. 20 milliseconds), thereby segmenting each of the input audio signals115-jinto a respective time series of frames. For each of the input audio signals115-j, each frame is further divided into a predefined number of frequency bands (e.g. 32 frequency bands), thereby resulting in a time-frequency representation of the input audio signals115-jthat serves as a basis for the spatial audio analysis. A certain frequency band in a certain frame may be referred to as a time-frequency tile. The spatial analysis by the spatial analysis portion222may involve deriving at least the following spatial parameters for each time-frequency tile:
a direction of arrival (DOA), defined by an azimuth angle and/or an elevation angle derived on basis of the input audio signals115-jin the respective time-frequency tile; and
a direct-to-ambient ratio (DAR) derived at least in part on basis of coherence between the digital audio signals115-jin the respective time-frequency tile.
The DOA may be derived e.g. on basis of time differences between two or more audio signals that represent the same sound(s) and that are captured using respective microphones having known positions with respect to each other (e.g. the input audio signals115-jobtained from the respective microphones111-j). The DAR may be derived e.g. on basis of coherence between pairs of input audio signals115-jand stability of DOAs in the respective time-frequency tile. In general, the DOA and the DAR are spatial parameters known in the art and they may be derived by using any suitable technique. An exemplifying technique for deriving the DOA and the DAR is described in WO 2017/005978. The spatial analysis may optionally involve derivation of one or more further spatial parameters for at least some of the time-frequency tiles.
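As a sketch of the pre-processing described above, the following segments one input audio signal into 20 ms Hann-windowed frames and groups the STFT bins into 32 frequency bands; the 50 % overlap and the equal-width bands are assumptions of the example (a real system could use e.g. perceptually motivated band edges).

```python
import numpy as np

def stft_tiles(x: np.ndarray, fs: int = 48000, n_bands: int = 32) -> np.ndarray:
    """Return a (frames x bands) map of time-frequency tile energies
    for one input audio signal."""
    frame_len = int(0.020 * fs)      # 20 ms analysis window
    hop = frame_len // 2             # assumed 50 % overlap
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    spectra = np.array([np.fft.rfft(window * x[i * hop:i * hop + frame_len])
                        for i in range(n_frames)])
    # Divide each frame into equal-width frequency bands and sum the
    # bin energies within each band to obtain one value per tile.
    bands = np.array_split(np.abs(spectra) ** 2, n_bands, axis=1)
    return np.stack([band.sum(axis=1) for band in bands], axis=1)

tiles = stft_tiles(np.random.randn(48000))   # 1 s of audio -> 99 x 32 tiles
```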
As an example in this regard, the spatial analysis portion222may compute one or more delay values that serve to indicate respective delays (or time shift values) that maximize coherence between a reference signal selected from a subset of the input audio signals115-jand the other signals of the subset of the input audio signals115-j. Regarding an example of selecting the subset of the input audio signals115-j, please refer to the following description regarding derivation of the mid and side signals to represent, respectively, the directional sounds of the audio scene and the ambience of the audio scene. For each time-frequency tile, the spatial analysis portion222selects a subset of the input audio signals115-jfor derivation of a respective mid signal component. The selection is made in dependence of the DOA, for example such that a predefined number of input audio signals115-j(e.g. three) obtained from respective microphones111-jthat are closest to the DOA in the respective time-frequency tile are selected. Among the selected input audio signals115-jthe one originating from the microphone111-jthat is closest to the DOA in the respective time-frequency tile is selected as a reference signal and the other selected input audio signals115-jare time-aligned with the reference signal. The mid signal component for the respective time-frequency tile is derived as a combination (e.g. a linear combination) of the time-aligned versions of the selected input audio signals115-jin the respective time-frequency tile. In an example, the combination is provided as a sum or as an average of the selected (time-aligned) input audio signals115-jin the respective time-frequency tile. In another example, the combination is provided as a weighted sum of the selected (time-aligned) input audio signals115-jin the respective time-frequency tile such that a weight assigned for a given selected input audio signal115-jis inversely proportional to the distance between the DOA and the position of the microphone111-jfrom which the given selected input audio signal115-jis obtained. The weights are typically selected or scaled such that their sum is equal or approximately equal to unity. The weighting may facilitate avoiding audible artefacts in the reconstructed spatial audio signal145in a scenario where the DOA changes from frame to frame. For each time-frequency tile, the spatial analysis portion222makes use of all input audio signals115-jfor derivation of a respective side signal component. The side signal component for the respective time-frequency tile is derived as a combination (e.g. a linear combination) of the input audio signals115-jin the respective time-frequency tile. In an example, the combination is provided as a weighted sum of the input audio signals115-jin the respective time-frequency tile such that the weights are assigned in an adaptive manner, e.g. such that the weight assigned for a given input audio signal115-jin a given time-frequency tile is inversely proportional to the DAR derived for the given input audio signal115-jin the respective time-frequency tile. The weights are typically selected or scaled such that their sum is equal or approximately equal to unity. The side signal components may be further subjected to decorrelation processing before using them for constructing the side signal.
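The derivation of a mid signal component described above can be sketched as follows; the time-alignment is simplified to an integer circular shift with precomputed delays, and all names are assumptions of the example.

```python
import numpy as np

def mid_signal_component(signals: list, mic_azimuths_deg: np.ndarray,
                         doa_deg: float, delays_samples: np.ndarray,
                         n_select: int = 3) -> np.ndarray:
    """Select the n_select microphones closest to the DOA, time-align
    their signals to the reference (the closest microphone) and combine
    them with weights inversely proportional to the angular distance
    between the DOA and the microphone position."""
    ang_dist = np.abs((mic_azimuths_deg - doa_deg + 180.0) % 360.0 - 180.0)
    chosen = np.argsort(ang_dist)[:n_select]
    # Circular shift stands in for proper (fractional-delay) alignment.
    aligned = [np.roll(signals[i], int(delays_samples[i])) for i in chosen]
    weights = 1.0 / (ang_dist[chosen] + 1e-6)
    weights /= weights.sum()         # weights sum (approximately) to unity
    return np.sum([w * s for w, s in zip(weights, aligned)], axis=0)
```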
In this regard, there may be a respective predefined decorrelation filter for each of the frequency bands (and hence for the side signal component of the respective frequency band), and the spatial analysis portion222may provide the decorrelation by convolving each side signal component with the respective predefined decorrelation filter. The spatial analysis portion222may derive the mid signal for a given frame by combining the mid signal components derived for frequency bands of the given frame, in other words by combining the mid signal components across frequency tiles of the given frame. Along similar lines, the spatial analysis portion222may derive the side signal for the given frame by combining the side signal components derived for frequency bands of the given frame, in other words by combining the side signal components across frequency tiles of the given frame. The mid signal and the side signal so derived constitute an initial spatial audio signal223for the respective frame. The initial spatial audio signal223typically further comprises spatial parameters derived for the respective frame, e.g. one or more of the DOA and DAR or derivatives thereof to enable creating the reconstructed spatial audio signal145by the spatial mixer140. Referring to operations pertaining to block304, according to an example, the identification of the POI comprises identifying the POI at least in part on basis of one or more spatial parameters extracted from the input audio signal115(e.g. the input audio signals115-j). In another example, the identification of the POI comprises receiving an indication of the POI from an external source, e.g. as user input received via a user interface. In an example, the POI may serve to indicate a problematic portion in the audio scene that is to be replaced in order to improve perceivable quality of the audio scene in the reconstructed spatial audio signal145. In such a scenario, the POI may be identified, for example, via analysis of one or more extracted spatial parameters or on basis of input from an external source. In another example, the POI may serve to indicate a portion of the audio scene that is to be replaced for aesthetic and/or artistic reasons. In such a scenario, the POI is typically identified on basis of input from an external source. The POI may concern e.g. one of the following:
a specified spatial portion in the ambient sound component of the audio scene;
a specified spatial portion in the directional sound component of the audio scene;
a specified spatial portion in both the ambient sound component and in the directional sound component of the audio scene.
Regardless of whether the POI concerns the ambient sound component, the directional sound component or both, the POI may be defined to cover a specific direction or a range of directions. The direction covered by the POI may be expressed by an azimuth angle and/or an elevation angle that identify a specific direction of arrival that constitutes a spatial region of interest within the audio scene. In another example, the direction(s) covered by the POI may be defined via a range of azimuth angles and/or a range of elevation angles that identify a sector within the audio scene that constitutes the region of interest therein. A range of angles (either azimuth or elevation) may be defined, for example, by a pair of angles that specify respective endpoints of the range or by a center angle that defines a specific direction of arrival together with the width of the range.
In case the POI is defined only by its direction, it theoretically defines a spatial portion of the audio scene that spatially extends from the listening point to infinity. In another example, a POI is further defined to cover the specified direction(s) up to a first specified radius that hence defines the spatial distance from the listening point, thereby leaving a spatial portion of the audio scene that is in the direction covered by the POI but that is further away from the listening point than the first specified radius outside of the POI. In a further example, a POI is further defined to cover the specified direction(s) from a second specified radius to infinity, thereby leaving a spatial portion of the audio scene that is in the direction covered by the POI but that is closer to the listening point than the second specified radius outside the POI. According to an example, the spatial analysis portion222may further employ at least some of the DOA and the DAR in identification of the POI for a frame of the input audio signal115. The identification of the POI may rely on one or more POI identification criteria pertaining to one or more of the above-mentioned spatial parameters. The audio scene may be divided into predefined spatial portions (or spatial segments) for the POI identification, and the spatial analysis portion222may apply the POI identification criteria separately for each of the predefined spatial portions of the audio scene. The predefined spatial portions may be fixed e.g. such that the same predefined division into spatial portions is applied regardless of the audio scene under consideration. In another example, the division to the spatial portion is predefined in that it is fixed for analysis of the audio scene under consideration. In the latter scenario, the information that defines the division into the spatial portions may be received and/or derived on basis of input received from an external source, e.g. as user input received via a user interface. As an example of predefined spatial portions, the spatial portions may be defined as spherical sectors of a (conceptual) sphere that surrounds the position of the audio capturing entity210(and hence position of the assumed listening point of the reconstructed audio signal145). In this regard, the full range of azimuth angles (360°) and/or the full range of elevation angles (360°) may be equally divided into a respective predefined number of sectors of equal width, e.g. to four sectors (of 90°) or to eight sectors (of 45°). In another example, an uneven division into sectors may be applied for one or both of the azimuth angle and the elevation angle, e.g. such that narrower sectors are used in an area of the audio scene that is considered (perceptually) more important (e.g. in front of the assumed listening point) whereas wide sectors are used in an area of the audio scene that is considered (perceptually) less important (e.g. behind the assumed listening point). 
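A membership test matching the POI definitions above (a direction or a range of directions, optionally bounded by radii from the listening point) could be sketched as follows; the parameterization by a center angle and a width is one of the representations mentioned above, and the function name is an assumption of the example.

```python
def doa_in_poi(doa_deg: float, poi_center_deg: float, poi_width_deg: float,
               distance=None, min_radius: float = 0.0,
               max_radius: float = float("inf")) -> bool:
    """Does a direction of arrival fall inside a POI defined by a center
    angle and a width, optionally bounded by radii from the listening
    point? Wrap-around at +/-180 degrees is handled explicitly."""
    offset = (doa_deg - poi_center_deg + 180.0) % 360.0 - 180.0
    if abs(offset) > poi_width_deg / 2.0:
        return False
    if distance is not None and not (min_radius <= distance <= max_radius):
        return False
    return True

# A POI covering azimuth angles from -45 to 45 degrees (center 0, width 90):
assert doa_in_poi(30.0, 0.0, 90.0) and not doa_in_poi(120.0, 0.0, 90.0)
```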
According to an example, the identification criteria applied by the spatial analysis portion222may require that a certain spatial portion in a certain frame is designated as the POI in case one or more of the following conditions are met:
the DOAs computed for the frequency bands of the certain frame within the certain spatial portion of the audio scene are stable;
the DARs computed for the frequency bands of the certain frame within the certain spatial portion of the audio scene are sufficiently high;
the input audio signals115-jof the certain frame represent an undesired directional sound source in the certain spatial portion of the audio scene.
As an example of a POI identification criterion concerning stability of the DOAs, the stability may be estimated in dependence of circular variance computed over DOAs within the spatial portion under consideration: this POI identification criterion may be considered met in response to the circular variance exceeding a predefined threshold. As an example in this regard, the circular variance may have a value in the range from 0 to 1 and the predefined threshold may be e.g. 0.9. The circular variance may be computed according to the following equation

$g_a = 1 - \left| \frac{1}{N} \sum_{n=1}^{N} \theta_n \right|,$

where $\theta_n$ denote the DOAs considered in the computation and $N$ denotes the number of DOAs considered in the computation. In an example, the DOAs considered in the computation include all DOAs (across the frequency bands) that fall within the spatial portion under consideration. In a variation of this example, the circular variance is computed separately for two or more subgroups or clusters of DOAs that fall within the spatial portion under consideration and the criterion is met in response to each of the respective circular variances exceeding the predefined threshold. In this regard, the subgroups or clusters may be defined based on closeness of the circular mean of the DOAs, for example by using a suitable clustering algorithm. In an example, the k-means clustering method known in the art may be employed for subgroup definition. As a first step, a predefined number of initial cluster centers are defined. The predefined number may be a predefined value stored in the spatial analysis portion222or a value received from an external source, e.g. as user input received via a user interface, while the initial cluster centers may be e.g. randomly selected from the DOAs computed in the spatial analysis portion222. Each of the remaining DOAs is assigned to the closest cluster center, and after having assigned all DOAs each of the cluster centers is recomputed as an average of the DOAs assigned to the respective cluster. The clustering method continues by running one or more iteration rounds such that at each iteration round each of the DOAs is assigned to the closest cluster center and after having assigned all DOAs the iteration round is completed by re-computing the cluster centers as an average of the DOAs assigned to the respective cluster. The iteration may be repeated until the cluster centers do not change from the previous iteration round or until the change (e.g. a maximum change or an average change) from the previous iteration round is less than a predefined threshold. The circular variance may be computed according to the equation above separately for each cluster, thereby implementing the DOA stability estimation.
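For illustration, the circular variance and the k-means subgrouping described above could be computed as in the following sketch, which interprets the DOAs as unit phasors exp(i*theta), the usual convention in circular statistics; the iteration count and the seeding are assumptions of the example.

```python
import numpy as np

def circular_variance(doas_rad: np.ndarray) -> float:
    """Circular variance in [0, 1]; 0 means all DOAs point the same way."""
    return 1.0 - float(np.abs(np.mean(np.exp(1j * doas_rad))))

def cluster_doas(doas_rad: np.ndarray, k: int = 2, iters: int = 20) -> np.ndarray:
    """Tiny k-means on the unit circle; cluster centers are circular means."""
    rng = np.random.default_rng(0)
    centers = rng.choice(doas_rad, size=k, replace=False)
    for _ in range(iters):
        # Assign each DOA to the closest center (shortest angular distance).
        diff = doas_rad[:, None] - centers[None, :]
        labels = np.abs(np.angle(np.exp(1j * diff))).argmin(axis=1)
        # Re-compute each center as the circular mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = np.angle(np.mean(np.exp(1j * doas_rad[labels == j])))
    return labels

# The criterion above would then evaluate circular_variance() per cluster.
```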
As an example of a POI identification criterion concerning sufficiently high values of the DARs, this criterion may be considered met in response to an average of the DARs (across frequency bands) within the spatial portion under consideration exceeding a predefined threshold. As an example, the predefined threshold in this regard may be set on basis of experimental data, e.g. such that first DAR values within a spatial portion of interest are derived on basis of a first set of training data known to have one or more directional sound sources within the spatial portion of interest and second DAR values within the same spatial portion are derived on basis of second training data that is known not to have any directional sound sources within the spatial portion of interest. The predefined threshold that denotes a sufficiently high value of DAR may be defined in view of the first DAR values and the second DAR values such that the threshold serves to sufficiently discriminate between the DARs derived for the first and second sets. As another example, the predefined threshold value for the POI identification criterion that concerns sufficiently high DAR values may be received from an external source, e.g. as user input received via a user interface, or the threshold value defined on basis of experimental data may be adjusted on basis of information received from an external source (e.g. as user input received via the user interface). As an example of a POI identification criterion concerning a spatial portion under consideration including an undesired directional sound source, this condition may be considered met in response to a directional sound source identified within the spatial portion under consideration (e.g. based on DOAs) exhibiting predefined audio characteristics, e.g. with respect to its frequency content. According to an example, the predefined audio characteristics in this regard may be defined based on experimental data that represents sound sources considered to represent an undesired signal type. A suitable classifier type known in the art may be arranged to carry out detection of signals that exhibit the predefined audio characteristics so defined. In another example, an indication of presence of an undesired directional sound source within a spatial portion under consideration may be received from an external source, e.g. as user input received via a user interface. In case the POI identification criteria are not met, there is no identified POI in the certain frame and the initial spatial audio signal223(e.g. one including the mid and side signals together with the spatial parameters) may be provided as the spatial audio signal125from the spatial audio encoder220without further processing or modification. In case the POI identification criteria are met, the certain frame is identified as one including a POI that is to be suppressed from the audio scene. Consequently, information that defines the POI identified in the audio scene is passed to a spatial filter226for modification of the audio scene therein. The information that defines the POI may be further passed to an ambience generator224and/or to the spatial mixer140. The information that defines the POI may identify one of the predefined spatial portions of the audio scene as the POI. The spatial analysis portion222may further pass the initial spatial audio signal223derived therein and/or at least some of the input audio signals115-jto the spatial filter226to facilitate modification of the audio scene therein.
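One simple way to set the DAR threshold from the two training sets described above is to place it midway between the set means, as in this sketch; a more careful choice could e.g. minimize the classification error on the training data.

```python
import numpy as np

def dar_threshold(dars_with_source: np.ndarray,
                  dars_without_source: np.ndarray) -> float:
    """Pick a threshold halfway between the mean DAR of frames known to
    contain a directional source and the mean DAR of frames known not
    to, so that the two training sets are discriminated."""
    return 0.5 * (float(dars_with_source.mean()) +
                  float(dars_without_source.mean()))

# Hypothetical training data: high DARs with a source present, low without.
threshold = dar_threshold(np.array([0.80, 0.90, 0.85]),
                          np.array([0.20, 0.30, 0.25]))   # -> 0.55
```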
In some examples, the spatial analysis portion222may proceed to derivation of the side signal (as described in the foregoing) after having applied the POI identification criteria: the spatial analysis portion222may proceed with deriving the side signal for inclusion in the initial spatial audio signal223for the certain frame in case there is no identified POI in the certain frame, whereas the spatial analysis portion222may refrain from deriving the side signal in case the certain frame is identified as one including a POI. In the latter scenario, the side signal may be derived by the spatial filter226on basis of at least some of the audio input signals115-j, as described in the following. Referring now to operations pertaining to block306, the spatial filter226may process the input audio signals115-jin order to suppress the POI in the audio scene in response to receiving an indication of the POI being present therein. Herein, the expression ‘spatial filtering’ is to be construed in a broad sense, encompassing various approaches for providing the spatial audio signal125such that it conveys an audio scene different from that directly derivable from the input audio signals115-jand that may have been encoded in the side signal by the spatial analysis portion222, as described in the foregoing. As an example of spatial filtering in this framework, the spatial filter226may modify the side signal provided as part of the initial spatial audio signal223such that the signal components that represent the POI therein are suppressed, e.g. completely removed or at least significantly attenuated. As an example in this regard, beamforming in the parametric domain may be applied, for example according to a technique described in Politis, A. et al., “Parametric spatial audio effects”, Proceedings of the 15th International Conference on Digital Audio Effects (DAFx-12), York, UK, Sep. 17-21, 2012. In another example, the spatial filter226may derive (or re-derive) the side signal on basis of the input audio signal115(e.g. the digital audio signals115-j) such that the signal components that represent the POI are suppressed or excluded, thereby deriving the side signal for the spatial audio signal125. In an example of the latter approach, the spatial filter226may process the input audio signals115-jusing a beamforming technique known in the art, arranged to suppress the portion of the audio scene indicated by the POI, e.g. such that one or more nulls of the beamformer are steered towards direction(s) of arrival that correspond to the POI. Such beamforming results in providing a respective steered audio signal for each of the input audio signals115-j, where the steered audio signals serve to represent a modified audio scene where the spatial portion of the audio scene corresponding to the POI is completely cancelled or at least significantly attenuated and hence substantially excluded from the resulting modified audio scene, thereby creating a gap in the audio scene. Such beamforming may be referred to as brickwall beamforming due to cancellation or substantial attenuation of the desired spatial portion of the audio scene recorded in the input audio signals115-j.
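The beamforming step can take many forms; the following is a deliberately minimal two-microphone differential sketch that steers a single null towards the DOA of the POI, not the multi-null brickwall beamformer an actual implementation might use, and it approximates the steering delay by an integer number of samples.

```python
import numpy as np

def null_steer_pair(x1: np.ndarray, x2: np.ndarray, doa_rad: float,
                    mic_spacing_m: float, fs: int = 48000,
                    c: float = 343.0) -> np.ndarray:
    """Differential beamformer for one microphone pair: delay the second
    signal by the inter-microphone travel time for the POI direction and
    subtract, cancelling a plane wave arriving from that direction."""
    delay_s = mic_spacing_m * np.cos(doa_rad) / c
    delay_n = int(round(delay_s * fs))   # integer-sample approximation
    return x1 - np.roll(x2, delay_n)     # circular shift as a simplification
```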
The spatial audio filter226may proceed into creating the side signal components and combining them into the side signal as described in the foregoing, with the exception of basing the side signal component creation on the steered audio signals obtained from the beamformer instead of using the respective input audio signals115-jas such as the basis for creating the side signal. The side signal so generated may be provided together with the mid signal of the initial spatial audio signal223and the spatial parameters of the initial spatial audio signal223as the spatial audio signal125to the spatial mixer140for generation of the reconstructed spatial audio signal145therein. Referring to operations pertaining to block308, according to an example, generation of the complementary audio signal127on basis of the one or more further input audio signals117-kis carried out by the ambience generator224. Generation of the complementary audio signal127comprises identifying one or more of the further input audio signal(s)117-kthat originate from respective further microphones113-kthat are within or close to the POI, thereby representing audio content that is relevant for the POI. In this regard, the ambience generator224may have a priori knowledge regarding positions of the respective further microphones113-kwith respect to the audio scene represented by the input audio signal115, and identification of the further microphones113-kthat are applicable for generation of the complementary audio signal127may be based on their position information, such that the further input audio signal(s)117-kto be applied for generating the complementary audio signal127are those received from the identified further microphones113-k. Identification of the further microphone(s)113-kand hence the further input audio signal(s)117-kapplicable for generation of the complementary audio signal127may be carried out by the ambience generator224on basis of the information regarding respective positions of the further microphones113-k, based on an indication received from an external source, e.g. as user input received via a user interface, or as a combination of these two approaches (e.g. such that an automated identification of the applicable further microphones113-kis refined or confirmed by the user). As an example of microphone identification by the ambience generator224, the identification may involve identifying one or more further microphones113-kthat have respective positions coinciding with the POI. Optionally, the microphone identification by the ambience generator224may further consider the directional pattern of the further microphones113-k: in an example, in case there are two or more microphones, the one(s) having a directional pattern pointing away from the microphone array111(that serves to capture the input audio signals115-j) may be preferred and hence identified as source(s) for the further input audio signals117-kthat are applicable for generation of the complementary audio signal127. The microphone identification by the ambience generator224may further consider position of the further microphones113-kwithin the POI: as an example, in case several further microphone(s) are identified within the POI, the one that is closest to the center of the POI may be identified as the one that is most suitable for generation of the complementary audio signal127. In this regard, the center of the POI may be indicated e.g.
by a circular mean of the (azimuth and/or elevation) angles that define the edges of the spatial portion identified as the POI. As another example of further microphone identification based on microphone position, the ambience generator224may identify multiple further microphones113-kwithin the POI and use the respective further input audio signals117-kfor generation of a respective intermediate complementary audio signal for the respective sub-portions of the POI, which intermediate complementary audio signals are further combined to form the complementary audio signal127. As an example in this regard, respective further input audio signals117-kfrom two further microphones113-kmay be applied such that a first further input audio signal117-k1is applied for generating a first intermediate complementary audio signal for (azimuth and/or elevation) angles from one edge of the spatial portion identified as the POI to the center of the spatial portion (see an example of defining the center in the foregoing) whereas a second further input audio signal117-k2is applied for generating a second intermediate complementary audio signal for (azimuth and/or elevation) angles from the center of the spatial portion to the other edge of the spatial portion. In another example, a first further input audio signal117-k1is applied for generating a first intermediate complementary audio signal that represents the spatial portion identified as the POI up to a certain radius, whereas a second further input audio signal117-k2is applied for generating a second intermediate complementary audio signal that represents the spatial portion from the certain radius. The ambience generator224carries out ambience signal synthesis on basis of the respective further input audio signals117-kfrom the identified ones of the further microphones113-kto generate the complementary audio signal127that is applicable for filling the gap in the audio scene resulting from operation of the spatial filter226. In other words, the complementary audio signal127serves to substitute the POI of the audio scene in the reconstructed spatial audio signal145. In this regard, the ambience signal synthesis is further provided with an indication of the POI within the audio scene to be covered by the complementary audio signal127. The ambience generator224passes the generated complementary audio signal127to the spatial mixer140for generation of the reconstructed spatial audio signal145therein. The ambience generator224may carry out the ambience signal synthesis by using the technique described in the co-pending patent application no. GB 1706290.2. An outline of this ambience synthesis technique is provided in the following. In this regard, the ambience synthesis makes use of the one or more selected further input audio signals117-k, originating from respective ones of the identified further microphones113-kdescribed in the foregoing. Ambience synthesis involves computing a further ambience signal as a weighted sum of the selected further input audio signals117-kand applying spatial extent synthesis to the further ambience signal. The ambience synthesis may further comprise application of reverberation processing to the further ambience signal before using it as a source signal for the spatial extent synthesis processing. Computation of the further ambience signal comprises deriving a respective weight for each of the selected further input audio signals117-k, preferably such that the sum of the weights is equal or substantially equal to unity.
In case there is only one selected further input audio signal117-k, derivation of the weights may be omitted and the selected further input audio signal117-kmay be used as such as the further ambience signal. The weights may be obtained via analyses of the respective selected further input audio signals117-k, where the analysis determines a likelihood of the respective selected further input audio signal117-krepresenting ambient background noise instead of representing a specific sound source: in case the likelihood is high(er), the respective weight is assigned a high(er) value, whereas a low(er) likelihood results in assigning the respective weight a low(er) value. The analysis for determination of the weights is carried out using frames of predefined (temporal) length, which may be different from the frame length applied in processing the input audio signals115-jand the further input audio signal(s)117-kfor generation of the reconstructed spatial audio signal145. As an example, the determination of weights may be carried out using frames of one second. As an example, the procedure of assigning the weights may commence from setting a predefined initial value for each of the weights, followed by one or more analysis steps that each may change the weight value according to an outcome of the respective analysis step. As a non-limiting example in this regard, one or more of the following analysis steps may be applied for deriving the final weight for each selected further input audio signal117-k (a sketch of one such step follows this list):
A selected further input audio signal117-kmay be subjected to voice activity detection (VAD) processing: in case the VAD indicates inactivity (i.e. indicates a signal that does not include speech), the respective weight may be increased, whereas in case the VAD indicates activity (i.e. indicates a signal that does include speech) the respective weight may be decreased. In this regard, any VAD technique known in the art may be applied.
A selected further input audio signal117-kmay be subjected to analysis of spectral flatness: in case the analysis suggests a noise-like signal (e.g. a flatness that is close to one), the respective weight may be increased, whereas in case the analysis suggests a tone-like signal (e.g. a flatness that is close to zero), the respective weight may be decreased. In this regard, any spectral flatness analysis technique known in the art may be applied.
A selected further input audio signal117-kmay be subjected to harmonicity analysis: in case the analysis suggests harmonic signal content (such as presence of features like fundamental frequency (pitch), harmonic concentration, harmonicity, ...) the respective weight may be decreased, whereas in case the analysis suggests absence of harmonic signal content the respective weight may be increased. In this regard, any harmonicity analysis technique known in the art may be employed.
A selected further input audio signal117-kmay be subjected to percussiveness analysis: in case the analysis suggests rhythmic signal content, the respective weight may be decreased, whereas in case the analysis does not suggest rhythmic signal content, the respective weight may be increased. In this regard, any percussiveness analysis technique known in the art may be applied.
A selected further input audio signal117-kmay be subjected to a classifier that serves to classify the respective signal into one of two or more predefined classes. The predefined classes may include, for example, noise, speech and music: in case the classification suggests noise content, the respective weight may be increased, whereas in case the classification suggests speech or music content, the respective weight may be decreased. The classifier is pre-trained using suitable training data that represents signals in the above-mentioned predefined classes. In this regard, a suitable classifier known in the art, such as a deep neural network, may be employed.
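As a sketch of the spectral flatness step in the list above (and of the subsequent normalization of the weights), under the assumption that flatness alone drives the weight adjustment:

```python
import numpy as np

def spectral_flatness(x: np.ndarray) -> float:
    """Geometric mean over arithmetic mean of the power spectrum:
    close to 1 for noise-like signals, close to 0 for tonal signals."""
    p = np.abs(np.fft.rfft(x)) ** 2 + 1e-12
    return float(np.exp(np.mean(np.log(p))) / np.mean(p))

def ambience_weights(signals: list, initial: float = 0.5,
                     step: float = 0.25) -> np.ndarray:
    """Start from a predefined initial weight, increase it for noise-like
    signals and decrease it for tone-like signals, then normalize the
    weights so that their sum is (approximately) unity."""
    w = np.array([initial + step if spectral_flatness(s) > 0.5
                  else initial - step for s in signals])
    w = np.clip(w, 0.0, None)
    return w / w.sum()

# The further ambience signal is the weighted sum of the selected signals.
sigs = [np.random.randn(48000),
        np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)]
weights = ambience_weights(sigs)
ambience = sum(wi * si for wi, si in zip(weights, sigs))
```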
The predefined classes may include, for example, noise, speech and music: in case the classification suggests noise content, the respective weight may be increased, whereas in case the classification suggests speech or music content, the respective weight may be decreased. The classifier is pre-trained using suitable training data that represents signals in the above-mentioned predefined classes. In this regard, a suitable classifier known in the art, such as a deep neural network, may be employed. After having derived the weights for the selected further input audio signals117-k, the weights may be normalized such that their sum is equal or substantially equal to one. In addition to or instead of one or more of the exemplifying analysis steps outlined in the foregoing, the derived weights may be adjusted or set on basis of information received from an external source, e.g. as user input received via a user interface. The further ambience signal is created by computing a weighted sum of the selected further input audio signals117-kusing the derived weights, thereby providing the further ambience signal to be employed as the source signal for the spatial extent synthesis processing. As pointed out in the foregoing, optional reverberation processing may be applied to the further ambience signal before using it for spatial synthesis. In this regard, a suitable (digital) reverberator known in the art may be employed. Reverberation introduced by this processing serves to improve spaciousness of the further ambience signal. The further ambience signal may be subjected spatial extent synthesis, for example by using a spatial extent synthesizer400according to a block diagram depicted inFIG.4, operation of which is outlined in the following. The spatial extent synthesizer400may be applied to implement the spatial extent synthesis described in detail e.g. in Pihlajamäki, T. et al., “Synthesis of Spatially Extended Virtual Sources with Time-Frequency Decomposition of Mono Signals”, the Journal of Audio Engineering Society (JAES), Volume 62, Issue 7/8, pp. 467-484, July 2014. The spatial extent synthesizer400receives the further ambience signal and processes it in frames of predefined (temporal) length (i.e. duration). Assuming 48 kHz sampled further ambience signal, the processing may be carried out on overlapping 1024-sample analysis frames, such that each analysis frame includes 512 new samples together with the most recent 512 samples of the immediately preceding frame. The analysis frame is zero-padded to twice its size (to 2048 samples) and windowed using a suitable analysis window, such as the Hann window. Each analysis frame is subjected to the STFT402, thereby obtaining a frequency-domain representation of the analysis frame including 2048 frequency-domain samples. Due to symmetry of the frequency-domain representation, it is sufficient to process a truncated frequency-domain frame that is formed by its positive (first) half of 1024 samples together with the DC component, including 1025 frequency-domain samples per frame. The truncated frequency-domain frame is processed by a filterbank404, thereby decomposing the frequency-domain representation into predefined number of non-overlapping frequency bands. In an example, nine frequency bands may be used. 
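The framing, windowing and band decomposition just outlined may be sketched as follows; the rectangular placeholder band coefficients are an assumption for illustration (the filterbank coefficient sets themselves are predefined, as described next):

```python
import numpy as np

FS = 48000
FRAME = 1024      # analysis frame: 512 new samples + 512 most recent previous samples
FFT_LEN = 2048    # analysis frame zero-padded to twice its size
BINS = FFT_LEN // 2 + 1   # positive half + DC component = 1025 bins
NUM_BANDS = 9

def analysis_frame_to_bands(frame, band_coeffs):
    """frame: 1024 time-domain samples of the further ambience signal.
    band_coeffs: (NUM_BANDS, BINS) per-band filterbank coefficients.
    Returns the per-band frequency-domain signals."""
    win = np.hanning(FRAME)            # suitable analysis window (Hann)
    padded = np.zeros(FFT_LEN)
    padded[:FRAME] = frame * win       # window, then zero-pad to 2048 samples
    spectrum = np.fft.rfft(padded)     # rfft keeps the 1025 non-redundant bins
    return band_coeffs * spectrum      # multiply by each band's coefficient set

# Placeholder: nine non-overlapping rectangular bands over the 1025 bins
edges = np.linspace(0, BINS, NUM_BANDS + 1, dtype=int)
band_coeffs = np.zeros((NUM_BANDS, BINS))
for b in range(NUM_BANDS):
    band_coeffs[b, edges[b]:edges[b + 1]] = 1.0

bands = analysis_frame_to_bands(np.random.randn(FRAME), band_coeffs)  # (9, 1025)
```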
The operation of the filterbank 404 may be implemented, for example, by storing a respective set of predefined filterbank coefficients for each of the frequency bands and by multiplying the frequency-domain samples of the truncated frequency-domain frame by the respective sets of predefined filterbank coefficients to derive the respective frequency band outputs from the filterbank 404.

In parallel, information that defines the POI identified for the (temporally) corresponding frame of the input audio signals 115-j is provided to a band position calculator 406. As described in the foregoing, the POI may be defined, for example, as a spatial portion that spans a range of certain azimuth and/or elevation angles. In this regard, the band position calculator 406 computes a respective spatial position for each of the frequency band signals obtained from the filterbank 404. As an example, the frequency band signals may be evenly distributed across the range of azimuth and/or elevation angles that define the POI. As a concrete example in this regard, assuming a POI that covers a sector having a width of 90 degrees positioned directly in front of the assumed listening point (e.g. azimuth angles from −45 to 45 degrees), the band position calculator 406 may set nine frequency band signals to be centered, respectively, at the following azimuth angles: 45, 33.75, 22.5, 11.25, 0, −11.25, −22.5, −33.75 and −45 degrees.

The band position calculator 406 provides an indication of the computed frequency band positions to a coefficient computation portion 408, which derives gain coefficients that implement spatial extent synthesis on basis of the frequency band signals provided from the filterbank 404 in view of loudspeaker positions of a predefined loudspeaker arrangement. As a non-limiting example, the spatial extent synthesizer 400 of FIG. 4 employs four output channels (e.g. front left (FL), front right (FR), rear left (RL) and rear right (RR) channels/loudspeakers). The gain coefficients that implement panning to a desired spatial position (i.e. the spatial portion defined by the POI) may be computed by using Vector Base Amplitude Panning (VBAP) in view of the frequency band positions obtained from the band position calculator 406. The output of the VBAP is a respective audio channel signal for each loudspeaker of the predefined loudspeaker arrangement, which audio channel signals are further subjected to inverse STFT by a respective one of the inverse STFT entities 410-1 to 410-4, thereby arriving at respective time-domain audio signals that constitute the complementary audio signal 127.

In an example, the ambience generator 224 may generate a plurality of (e.g. two or more) candidate complementary audio signals and select one of the candidate complementary audio signals as the complementary audio signal 127 based on a similarity measure that compares one or more characteristics of each candidate complementary audio signal to those of the POI in the audio scene conveyed by the input audio signals 115-j. In this regard, each of the candidate complementary audio signals may be generated on basis of a different further input audio signal 117-k or on basis of a different combination of two or more further input audio signals 117-k. The similarity measure may consider, for example, spectral and/or timbral similarity between a candidate complementary audio signal and the POI in the audio scene conveyed by the input audio signals 115-j.
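Before continuing with the candidate selection, the band position computation described above can be sketched as follows; the tangent-law pairwise panning at the end is a simplified stand-in for full VBAP over the four-channel arrangement, included only to indicate where per-band gains would come from:

```python
import numpy as np

def band_positions(num_bands, az_start, az_end):
    """Distribute band centers evenly across the POI's azimuth range.
    For nine bands over a 90-degree sector (+45..-45 degrees) this yields
    45, 33.75, 22.5, 11.25, 0, -11.25, -22.5, -33.75 and -45 degrees."""
    return np.linspace(az_start, az_end, num_bands)

def pairwise_pan_gains(azimuth_deg, spk_left_deg, spk_right_deg):
    """Tangent-law amplitude panning between one loudspeaker pair, valid for
    azimuths inside the pair's arc (a simplified stand-in for VBAP)."""
    base = np.radians((spk_left_deg + spk_right_deg) / 2.0)
    half = np.radians((spk_left_deg - spk_right_deg) / 2.0)
    phi = np.radians(azimuth_deg) - base
    ratio = np.tan(phi) / np.tan(half)     # tangent panning law
    g_left = (1.0 + ratio) / 2.0
    g_right = (1.0 - ratio) / 2.0
    norm = np.hypot(g_left, g_right)       # energy-preserving normalization
    return g_left / norm, g_right / norm

print(band_positions(9, 45.0, -45.0))
print(pairwise_pan_gains(11.25, spk_left_deg=45.0, spk_right_deg=-45.0))
```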
The ambience generator 224 may select the candidate complementary audio signal that, according to the similarity measure, provides the closest match with the POI in the audio scene conveyed by the input audio signals 115-j.

In an example, the ambience generator 224 may generate the complementary audio signal in two or more parts, such that each part is generated on basis of a different further input audio signal 117-k or on basis of a different combination of two or more further input audio signals 117-k. As an example in this regard, a first complementary signal may be derived on basis of a first further input audio signal 117-k1, a second complementary signal may be derived on basis of a second further input audio signal 117-k2, and the first and second complementary signals may be combined (e.g. summed) to form the complementary audio signal 127 for provision to the spatial mixer 140. In such a scenario, as an example, the first further input audio signal 117-k1 and the second further input audio signal 117-k2 may originate from respective further microphones 113-k1, 113-k2 that are arranged on opposite sides of the audio scene.

The ambience generator 224 may further carry out spectral envelope matching for the generated complementary audio signal 127 before passing it to the spatial mixer 140. The spectral envelope matching may comprise estimating the spectral envelope of the POI in the audio scene conveyed by the input audio signals 115-j and modifying the spectral envelope of the generated complementary audio signal 127 to match or substantially match the estimated spectral envelope. This may serve to provide a more naturally-sounding complementary audio signal 127, thereby facilitating improved perceivable quality of the reconstructed spatial audio signal 145.

Referring to operations pertaining to block 310, the manner and details of combining the complementary audio signal 127 with the spatial audio signal 125 depend on the format applicable for the audio reproduction entity 150. As an example, in case the audio reproduction entity 150 comprises headphones or a headset, the spatial mixer 140 may prepare the reconstructed audio signal 145 for binaural rendering. In this regard, the spatial mixer may store a plurality of pairs of head-related transfer functions (HRTFs), each pair corresponding to a respective predefined DOA, select a predefined pair of HRTFs in view of the DOA received in the spatial audio signal 125, and apply the selected pair of HRTFs to the spatial audio signal 125 and to the complementary audio signal 127 to generate the left and right channels of the reconstructed spatial audio signal 145. As an example, the selected pair of HRTFs may be applied to the main signal to generate left and right main signal components, to the side signal to generate left and right side signal components, and to the complementary audio signal to generate left and right complementary signal components. The spatial mixer 140 may compose the left channel of the reconstructed spatial audio signal 145 as a sum of the left main signal component, the left side signal component and the left complementary signal component, whereas the right channel of the reconstructed spatial audio signal 145 may be composed as a sum of the right main signal component, the right side signal component and the right complementary signal component.
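A minimal sketch of the binaural composition just described is given below; the hrtf_for lookup, the FIR representation of the HRTFs, the azimuth sign convention (positive taken as the listener's left) and the toy level-difference "HRTFs" are assumptions for illustration only:

```python
import numpy as np

def render_binaural(main, side, complementary, doa_deg, hrtf_for):
    """Compose left/right channels of the reconstructed spatial audio signal
    as the sum of the main, side and complementary signal components, each
    filtered with the HRTF pair selected for the given direction of arrival."""
    h_left, h_right = hrtf_for(doa_deg)   # pre-stored pair closest to the DOA
    left = sum(np.convolve(sig, h_left, mode="full")[:len(sig)]
               for sig in (main, side, complementary))
    right = sum(np.convolve(sig, h_right, mode="full")[:len(sig)]
                for sig in (main, side, complementary))
    return left, right

def toy_hrtf_for(doa_deg):
    """Toy lookup: single-tap 'HRTFs' realizing a level difference only."""
    pan = np.clip(doa_deg / 90.0, -1.0, 1.0)
    return np.array([1.0 + 0.5 * pan]), np.array([1.0 - 0.5 * pan])

n = 480
L, R = render_binaural(np.random.randn(n), np.random.randn(n),
                       np.random.randn(n), doa_deg=30.0, hrtf_for=toy_hrtf_for)
```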
As an example, in case the audio reproduction entity 150 comprises a multi-channel loudspeaker arrangement, the spatial mixer 140 may employ Vector Base Amplitude Panning (VBAP) in view of the DOA received in the spatial audio signal 125 to derive respective components of the main signal, the side signal and the complementary audio signal 127 for each output channel, and compose, for each output channel, the respective channel of the reconstructed spatial audio signal 145 as a sum of the main signal component, the side signal component and the complementary signal component derived for the respective output channel.

In an example, the spatial audio signal 125 and the complementary audio signal 127 are combined into the reconstructed spatial audio signal 145 in the frequency domain. In such a scenario, the spatial mixer 140 may convert the reconstructed spatial audio signal 145 from the frequency domain to the time domain using an inverse STFT, e.g. by using the overlap-add method known in the art, before passing the reconstructed spatial audio signal 145 to the audio reproduction entity 150 (and/or providing it for storage in a storage means). In another example, the spatial mixer 140 may transform each of the spatial audio signal 125 and the complementary audio signal 127 from the frequency domain to the time domain before combining them into the reconstructed spatial audio signal 145, and carry out the combination procedure to obtain the reconstructed spatial audio signal 145 along the lines described in the foregoing, mutatis mutandis, for the respective time domain signals.

In the foregoing, the method 300 has been described, at least implicitly, with reference to a single POI in the spatial audio scene. The method 300, however, readily generalizes into an approach where the operations pertaining to block 304 may serve to identify two or more POIs within the audio scene represented by the input audio signal 115. In such a scenario, operations pertaining to block 306 are carried out to suppress all identified POIs from the audio scene, operations pertaining to block 308 are carried out to generate a respective complementary audio signal 127 for each of the identified POIs, while operations pertaining to block 310 are carried out to combine each of the generated complementary audio signals 127 with the spatial audio signal 125.

In another variation, alternatively or additionally, the operations pertaining to blocks 304 to 310 are based on a spatial audio signal format different from that described in the foregoing. As an example in this regard, the spatial analysis portion 222 may extract a dedicated set of spatial parameters, e.g. the DOAs, the DARs and the delay values described in the foregoing, for a plurality of predefined spatial portions, e.g. for a plurality of spherical sectors. In such a scenario, identification of the POI via usage of the POI identification criteria may hence be carried out directly for each predefined spatial portion by considering the set of spatial parameters extracted for the respective predefined spatial portion (block 304), whereas suppressing the identified POI (block 306) may be carried out in a straightforward manner by excluding the spatial parameters extracted for the predefined spatial portion identified as the POI. Operations pertaining to blocks 308 and 310 may be carried out as described in the foregoing also for this scenario.

FIG. 5 illustrates a block diagram of some components of an exemplifying apparatus 600. The apparatus 600 may comprise further components, elements or portions that are not depicted in FIG. 5.
The apparatus 600 may be employed in implementing the spatial audio encoder 220, possibly together with the spatial mixer 140 and/or further audio processing entities. The apparatus 600 comprises a processor 616 and a memory 615 for storing data and computer program code 617. The memory 615 and a portion of the computer program code 617 stored therein may be further arranged to, with the processor 616, implement the function(s) described in the foregoing in context of the spatial audio encoder 220 and/or the spatial mixer 140.

The apparatus 600 may comprise a communication portion 612 for communication with other devices. The communication portion 612 comprises at least one communication apparatus that enables wired or wireless communication with other apparatuses. A communication apparatus of the communication portion 612 may also be referred to as a respective communication means.

The apparatus 600 may further comprise user I/O (input/output) components 618 that may be arranged, possibly together with the processor 616 and a portion of the computer program code 617, to provide a user interface for receiving input from a user of the apparatus 600 and/or providing output to the user of the apparatus 600 to control at least some aspects of operation of the spatial audio encoder 220 and/or the spatial mixer 140 implemented by the apparatus 600. The user I/O components 618 may comprise hardware components such as a display, a touchscreen, a touchpad, a mouse, a keyboard, and/or an arrangement of one or more keys or buttons, etc. The user I/O components 618 may also be referred to as peripherals.

The processor 616 may be arranged to control operation of the apparatus 600, e.g. in accordance with a portion of the computer program code 617 and possibly further in accordance with the user input received via the user I/O components 618 and/or in accordance with information received via the communication portion 612. The apparatus 600 may comprise the audio capturing entity 110, e.g. the microphone array 111 including the microphones 111-j that serve to record the digital audio signals 115-j that constitute the input audio signal 115.

Although the processor 616 is depicted as a single component, it may be implemented as one or more separate processing components. Similarly, although the memory 615 is depicted as a single component, it may be implemented as one or more separate components, some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage.

The computer program code 617 stored in the memory 615 may comprise computer-executable instructions that control one or more aspects of operation of the apparatus 600 when loaded into the processor 616. As an example, the computer-executable instructions may be provided as one or more sequences of one or more instructions. The processor 616 is able to load and execute the computer program code 617 by reading the one or more sequences of one or more instructions included therein from the memory 615. The one or more sequences of one or more instructions may be configured to, when executed by the processor 616, cause the apparatus 600 to carry out operations, procedures and/or functions described in the foregoing in context of the spatial audio encoder 220 and/or the spatial mixer 140.
Hence, the apparatus 600 may comprise at least one processor 616 and at least one memory 615 including the computer program code 617 for one or more programs, the at least one memory 615 and the computer program code 617 configured to, with the at least one processor 616, cause the apparatus 600 to perform operations, procedures and/or functions described in the foregoing in context of the spatial audio encoder 220 and/or the spatial mixer 140.

The computer programs stored in the memory 615 may be provided e.g. as a respective computer program product comprising at least one computer-readable non-transitory medium having the computer program code 617 stored thereon, which computer program code, when executed by the apparatus 600, causes the apparatus 600 at least to perform operations, procedures and/or functions described in the foregoing in context of the spatial audio encoder 220 and/or the spatial mixer 140. The computer-readable non-transitory medium may comprise a memory device or a record medium such as a CD-ROM, a DVD, a Blu-ray disc or another article of manufacture that tangibly embodies the computer program. As another example, the computer program may be provided as a signal configured to reliably transfer the computer program.

Herein, reference(s) to a processor should not be understood to encompass only programmable processors, but also dedicated circuits such as field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), signal processors, etc.

Features described in the preceding description may be used in combinations other than the combinations explicitly described.

In the following, further illustrative and non-limiting example embodiments of the spatial audio processing technique described in the present disclosure are described in a form of a list of numbered clauses.

Clause 1. An apparatus for spatial audio processing on basis of two or more input audio signals that represent an audio scene and at least one further input audio signal that represents at least part of the audio scene, the apparatus configured to:
identify a portion of interest, POI, in the audio scene;
process the two or more input audio signals into a spatial audio signal where the POI in the audio scene is suppressed;
generate, on basis of the at least one further input audio signal, a complementary audio signal that represents the POI in the audio scene; and
combine the complementary audio signal with the spatial audio signal to create a reconstructed spatial audio signal.

Clause 2. An apparatus according to clause 1, further comprising a microphone array of two or more microphones, configured to record said two or more input audio signals on basis of a sound captured by a respective microphone of the microphone array.

Clause 3. An apparatus according to clause 1 or 2, further configured to receive the at least one further input audio signal from one or more external microphones configured to record a respective further input audio signal on basis of a sound captured by a respective one of said one or more external microphones.

Clause 4. An apparatus according to any of clauses 1 to 3, wherein identification of the POI comprises identifying, for a plurality of predefined spatial portions of the audio scene, whether the respective spatial portion represents a POI.

Clause 5. An apparatus according to clause 4, wherein said plurality of predefined spatial portions comprises a plurality of spherical sectors.

Clause 6.
An apparatus according to any of clauses 1 to 5, wherein identification of the POI comprises receiving an indication of the POI as user input.

Clause 7. An apparatus according to any of clauses 1 to 5, wherein identification of the POI comprises:
extracting, on basis of the two or more input audio signals, spatial parameters that are descriptive of the audio scene represented by the two or more input audio signals; and
identifying the POI on basis of one or more POI identification criteria evaluated at least in part on basis of the extracted spatial parameters.

Clause 8. An apparatus according to clause 7, wherein:
extracting said spatial parameters comprises extracting a respective dedicated set of spatial parameters for the plurality of predefined spatial portions of the audio scene; and
identifying the POI comprises identifying a predefined spatial portion at least in part on basis of the dedicated set of spatial parameters extracted for the respective predefined spatial portion.

Clause 9. An apparatus according to clause 7 or 8, wherein said spatial parameters include a respective direction of arrival, DOA, and a direct to ambient ratio, DAR, for a plurality of frequency bands, and wherein said POI identification criteria comprise one or more of the following:
the DOAs across the plurality of frequency bands exhibit variation that is smaller than a respective predefined threshold;
the DARs across the plurality of frequency bands are higher than a respective predefined threshold.

Clause 10. An apparatus according to clause 9, wherein the DOAs across the plurality of frequency bands are considered to exhibit variation that is smaller than said respective predefined threshold in response to a circular variance computed over said DOAs being smaller than a respective predefined threshold value.

Clause 11. An apparatus according to clause 9 or 10, wherein the DARs across the plurality of frequency bands are considered to be higher than said respective predefined threshold in response to the average of said DARs exceeding a respective predefined threshold value.

Clause 12. An apparatus according to any of clauses 1 to 11, wherein processing the two or more input audio signals comprises suppressing ambience of the audio scene within the POI.

Clause 13. An apparatus according to any of clauses 1 to 12, wherein processing the two or more input audio signals comprises generating, on basis of the two or more input audio signals:
a first signal that represents directional sound sources of the audio scene, and
a second signal that represents ambience of the audio scene such that the ambience corresponding to the POI is suppressed.

Clause 14. An apparatus according to clause 13, wherein generating the first signal comprises:
identifying a predefined number of input audio signals originating from respective microphones that are closest to the direction of arrival identified for a directional sound source of the audio scene;
time-aligning other identified input audio signals with the one that originates from a microphone that is closest to the direction of arrival identified for said directional sound source; and
providing the first signal as a linear combination of the identified and time-aligned input audio signals.

Clause 15. An apparatus according to clause 13 or 14, wherein generating the second signal comprises providing the second signal as a linear combination of said two or more input audio signals.

Clause 16.
An apparatus according to any of clauses 13 to 15, wherein generating the second signal comprises applying beamforming to the two or more input audio signals such that directions of arrival corresponding to the POI are suppressed.

Clause 17. An apparatus according to clause 16, wherein applying the beamforming comprises steering one or more nulls of a beamformer towards directions of arrival corresponding to the POI.

Clause 18. An apparatus according to any of clauses 1 to 17, wherein generating the complementary audio signal comprises:
identifying at least one of the at least one further input audio signal that originates from a respective microphone that is within or close to the POI; and
generating, on basis of the identified at least one further input audio signal, the complementary audio signal that represents the POI in the audio scene.

Clause 19. An apparatus according to clause 18, wherein generating the complementary audio signal comprises:
deriving an ambience signal as a weighted sum of said identified at least one further input audio signal;
defining a respective spatial position within the POI for a plurality of frequency bands of the ambience signal;
deriving, in dependence of the respective spatial position, respective one or more gain coefficients that implement panning to said spatial position; and
generating the complementary audio signal by multiplying the ambience signal in each of said plurality of frequency bands by the respective one or more gain coefficients.

Clause 20. An apparatus for spatial audio processing on basis of two or more input audio signals that represent an audio scene and at least one further input audio signal that represents at least part of the audio scene, the apparatus comprising:
means for identifying a portion of interest, POI, in the audio scene;
means for processing the two or more input audio signals into a spatial audio signal where the POI in the audio scene is suppressed;
means for generating, on basis of the at least one further input audio signal, a complementary audio signal that represents the POI in the audio scene; and
means for combining the complementary audio signal with the spatial audio signal to create a reconstructed spatial audio signal.

Clause 21. An apparatus for spatial audio processing on basis of two or more input audio signals that represent an audio scene and at least one further input audio signal that represents at least part of the audio scene, wherein the apparatus comprises at least one processor and at least one memory including computer program code, which when executed by the at least one processor, causes the apparatus to:
identify a portion of interest, POI, in the audio scene;
process the two or more input audio signals into a spatial audio signal where the POI in the audio scene is suppressed;
generate, on basis of the further audio signal, a complementary audio signal that represents the POI in the audio scene; and
combine the complementary audio signal with the spatial audio signal to create a reconstructed audio signal.

Clause 22. A computer program product comprising computer readable program code tangibly embodied on a non-transitory computer readable medium, the program code configured to cause performing the method according to any of clauses 1 to 19 when run on a computing apparatus.

Throughout the present disclosure, although functions have been described with reference to certain features, those functions may be performable by other features whether described or not.
Although features have been described with reference to certain embodiments, those features may also be present in other embodiments whether described or not.
11943605
DESCRIPTION OF EXAMPLE EMBODIMENTS

System Overview

The present invention relates to a system and method of rendering an audio signal for a reproduction audio environment defined by a target loudspeaker system. The methodologies (described below) are adapted to be performed by one or more computer processors or a dedicated rendering device in an object-based audio system such as the Dolby Atmos™ cinema or Dolby Atmos™ home system. A system-level overview of such an audio system, from audio capture to audio playback, is illustrated schematically in FIG. 2.

System 1 includes an audio content capture subsystem 3 responsible for the initial capture of audio from an array of spatially separated microphones 5-7. Optional storage, processing and format conversion can also be applied at block 9. Additional mixing is also possible within some embodiments of subsystem 3. The output of capture subsystem 3 is a plurality of output audio channels 11 corresponding to the signals captured from each microphone. These channel signals are input to a content authoring subsystem 13, which, amongst other functions, performs spatial audio processing 15 to identify audio objects from the channel signals and determine position data corresponding to those audio objects. The output of spatial audio processing block 15 is a number of audio objects 17 having associated metadata. The metadata includes position data, which indicates the two-dimensional or three-dimensional position of the audio object in an audio environment (typically initially based on the environment in which the audio was captured), rendering constraints as well as content type (e.g. dialog, effects etc.). Depending on the implementation, the metadata may include other types of data, such as object width data, gain data, trajectory data, etc. Some audio objects may be static, whereas others may move through an audio scene.

The number of output audio objects 17 may be greater than, fewer than or the same as the number of input channels 11. Although the outputs are designated as audio objects 17, it will be appreciated that, in some embodiments, the audio data associated with each audio object 17 includes data relating to more than one object source in the captured audio scene. For example, one object 17 may include audio data indicative of two different vehicles passing through the audio scene. Furthermore, a single object source from the captured audio scene may be present in more than one audio object 17. For example, audio data for a single person speaking may be encapsulated into two separate objects 17 to define a stereo object having two audio signals with metadata.

Objects 17 are able to be stored on non-transient media and distributed as data for various additional content authoring such as mixing, and subsequent rendering by an audio rendering subsystem 19. At subsystem 19, rendering 21 is performed on objects 17 to facilitate representation and playback of the audio on a target loudspeaker system 23. Rendering 21 may be performed by a dedicated rendering tool or by a computer configured with software to perform audio rendering. The rendered signals are output to loudspeaker system 23 of a playback subsystem 25. Loudspeaker system 23 includes a predefined spatial layout of loudspeakers to reproduce the audio signal within an audio environment 27 defined by the loudspeaker system.
Although five loudspeakers are illustrated in system 23, it will be appreciated that the methodologies described herein are applicable to a range of loudspeaker layouts including layouts with two surround loudspeakers (as illustrated), four surround loudspeakers or more, height plane loudspeakers, etc., in addition to the front loudspeaker pair.

Audio object details may be authored or rendered according to the associated metadata which, among other things, may indicate the position of the audio object in a three-dimensional space at a given point in time. When audio objects are monitored or played back in a reproduction loudspeaker environment, the audio objects may be rendered according to the position metadata using the reproduction loudspeakers that are present in the reproduction environment, rather than being output to a predetermined physical channel, as is the case with traditional channel-based systems such as Dolby 5.1.x and Dolby 7.1.x systems.

Typically, the functions of the various subsystems are performed by separate hardware devices, often at separate locations. In some embodiments, additional processes are performed by the hardware of either subsystem, including initial rendering at subsystem 13 and further signal manipulation at subsystem 19. In alternative implementations, subsystem 13 may send only the metadata to subsystem 19 and subsystem 19 may receive audio from another source (e.g., via a pulse-code modulation (PCM) channel, via analog audio or over a computer network). In such implementations, subsystem 19 may be configured to group the audio data and metadata to form the audio objects. The present invention is primarily concerned with the rendering 21 performed on objects 17 to facilitate playback of audio on loudspeaker system 23 in a manner that is independent of the recording system used to capture the audio data.

Method Overview

Referring to FIG. 3, there is illustrated a process flow diagram illustrating the primary steps in a method 30 of rendering an audio signal for a reproduction audio environment defined by a target loudspeaker system. Method 30 is adapted to be performed by a rendering device such as a dedicated rendering tool or a computer configured to perform a rendering operation. The operations of method 30 are not necessarily performed in the order shown. Moreover, method 30 (and other processes provided herein) may include more or fewer operations than those that are indicated in the drawings and/or described. Further, although method 30 is described herein as processing a single audio channel containing a single audio object, it will be appreciated that this description is for the purposes of simplifying the operation, and method 30 is capable of being performed, simultaneously or sequentially, on a plurality of audio channels, each of which may include a plurality of audio objects.

Method 30 includes the initial step 31 of receiving the audio signal in the form of an audio object 17. As mentioned above, the audio signal includes audio data relating to an audio object and associated position metadata indicative of a position of the object within a defined audio environment. Initially, the audio environment is defined by the specific layout of microphones 5-7 used to capture the audio. However, this may be modified in the content authoring stage so that the audio environment differs from the initially defined environment. The position metadata includes coordinates of the object in the current audio environment.
Depending on the environment, the coordinates may be two-dimensional or three-dimensional.

At step 32, loudspeaker layout data is received for the target loudspeaker system 23 for which the audio signal is to be reproduced. In some embodiments, the layout data is provided automatically from loudspeaker system 23 upon connection of a computer to system 23. In other embodiments, the layout data is input by a user through a user interface (not shown), or received from a system, either internal or external to the rendering subsystem, configured to perform an automated detection and calibration process for determining loudspeaker setup information, such as size, number, location, frequency response, etc. of loudspeakers.

At step 33, control data is received that is indicative of a position modification to be applied to the audio object in the reproduction audio environment during the audio rendering process. The control data is specified during the content authoring stage and is received from an authoring device in the content authoring subsystem 13. In some embodiments, the control data is packaged into the metadata and sent in object 17. In other embodiments, the control data is transmitted from a content authoring device to a renderer separately from the audio channel. The control data may be user specified or automatically generated. When user specified, the control data may include a specification of what degree of position modification to perform and what type of position modification to perform. One manner of specifying a degree of position modification is to specify a preference to preserve audio timbre over the spatial accuracy of an audio object or vice versa. Such preservation would be achieved by imposing limitations on the position modification such that degradation to spatial accuracy is favored over degradation to audio timbre or vice versa.

Generally, the greater the modification to the position of an audio object in the direction from an original object position towards a loudspeaker, the better the audio timbre and the lower the spatial object accuracy during playback. Thus, with no position modification applied, the spatial object accuracy is maximized. A maximum position modification, on the other hand, favors reproduction of the object by a single loudspeaker by increasing the panning gain of one loudspeaker, preferably one relatively close to the object position indicated by the metadata, at the expense of reducing the panning gains of remote loudspeakers. Such a change in effective panning gains, which effectively increases the dominance of one loudspeaker in reproducing the object, reduces the magnitude of the comb-filter interactions perceived by the listener as a result of differences in acoustical pathway length, compared to the comb-filter interactions of the unmodified position, thereby improving the timbre of the perceived object at the expense of a less accurate perceived position.

Further, the control data may be object specific or object independent. For example, in object-specific position modification, the control data may include data to apply a position modification to voice audio that is different to a modification applied to background audio. Further, the control data may specify a degree of position modification to be applied to the audio object during the rendering of the audio signal. The control data also includes a position modification control flag which indicates whether position modification should be performed.
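By way of illustration, the object metadata and control data described above might be represented along the following lines; the field names and types are assumptions for illustration and do not reflect any actual Dolby Atmos metadata format:

```python
from dataclasses import dataclass, field

@dataclass
class PositionControlData:
    """Control data constraining position modification during rendering."""
    modify_position: bool = False        # position modification control flag (decision 34)
    degree: float = 0.0                  # 0.0 = preserve spatial accuracy, 1.0 = preserve timbre
    modification_type: str = "clamping"  # e.g. "clamping" or "warping" (described below)
    object_specific: bool = False        # e.g. treat voice differently from background audio

@dataclass
class AudioObject:
    """An audio object: audio data plus associated position metadata."""
    samples: list                        # time-varying object audio signal x(t)
    position: tuple                      # (x, y) or (x, y, z) in the defined audio environment
    content_type: str = "effects"        # e.g. dialog, effects, music
    control: PositionControlData = field(default_factory=PositionControlData)
```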
In some embodiments, the position modification flag is conditional based on the loudspeaker layout data. By way of example, the position modification flag may indicate that position modification is required for a speaker layout with only two surround speakers, while it should not be applied when the speaker layout has four surround speakers.

At decision 34, it is determined whether the flag is set or not. If the flag is not set, no position modification is applied and, at step 35, rendering of the audio signal is performed based on the original position coordinates of the object. In this case, at block 36 the audio object is output at the original object position within the reproduction audio environment. If, at decision 34, the position modification flag is set, the process proceeds to step 37, where a determination is made as to an amount and/or type of position modification to be applied during rendering. This determination is made based on control data specified during the content authoring stage and may be dependent upon user-specified preferences and factors including the type of audio object and the overall audio scene in which the audio signal is to be played.

At step 38, rendering modification data is generated in response to the received object position data, loudspeaker layout data and control data (including the determination made in step 37 above). As will be described below, this rendering modification data and the method of modifying the object position can take a number of different forms. In some embodiments, steps 37 and 38 are performed together as a single process. Finally, at step 35, rendering of the audio signal is performed with the rendering modification data. In this case, at block 39 the audio signal is output with the audio object at a modified object position that is between loudspeakers within the reproduction audio environment. For example, the modified object position may be a position nearer to one or more loudspeakers in the audio environment than the original object position or may be a position nearer to a closest loudspeaker in the audio environment relative to the original object position. In some embodiments, the modified object position can be made equal to a specific loudspeaker position such that the entire audio signal corresponding to that audio object is produced from that single loudspeaker.

The rendering modification data is applied as a rendering constraint during the rendering process. The effect of the rendering modification data is to modify a drive signal for one or more of the loudspeakers within loudspeaker system 23 by modifying their respective panning gains as a function of time. This results in the audio object appearing to originate from a source location different to that of its original intended position.

As mentioned above, to reproduce the audio signal each loudspeaker is driven with a drive signal s(t) which is a combination of a time-varying panning gain g(t) and a time-varying object audio signal x(t). That is, for a single loudspeaker and a single audio object:

s(t) = g(t)x(t)  (Eq 6)

More generally, for a plurality of audio objects represented across a plurality of loudspeakers, the rendered audio signal is expressed by equation 1. Thus, a loudspeaker drive signal is modified by modifying the panning gain applied to that loudspeaker. The panning gain applied to an individual speaker is expressed as a predefined panning law f, which is dependent upon the loudspeaker layout data P and object position metadata M(t). That is:

g(t) = f(P, M(t))  (Eq 7)
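A minimal sketch of the drive-signal computation of Eq 6 and Eq 7, generalized to several objects, is given below; the inverse-distance panning law is an assumed stand-in for whatever predefined panning law f a given renderer actually uses:

```python
import numpy as np

def panning_law(speaker_positions, object_pos):
    """Stand-in panning law f(P, M(t)): gains fall off with distance from
    each loudspeaker and are normalized for constant energy."""
    d = np.linalg.norm(np.asarray(speaker_positions) - np.asarray(object_pos), axis=-1)
    g = 1.0 / np.maximum(d, 1e-6)
    return g / np.linalg.norm(g)

def drive_signals(speaker_positions, object_positions, object_signals):
    """s_i(t) = sum_j g_ij(t) x_j(t): each loudspeaker drive signal is the
    panning-gain-weighted sum of the object audio signals (cf. Eq 6)."""
    P = np.asarray(speaker_positions)           # (num_speakers, dims)
    out = np.zeros((len(P), len(object_signals[0])))
    for M_j, x_j in zip(object_positions, object_signals):
        g = panning_law(P, M_j)                 # one gain per loudspeaker (Eq 7)
        out += np.outer(g, x_j)
    return out

# Dolby 5.1-style layout (x, y): L, R, C, Ls, Rs in normalized room coordinates
P = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.0), (0.0, 1.0), (1.0, 1.0)]
s = drive_signals(P, [(0.7, 0.6)], [np.random.randn(480)])  # shape (5, 480)
```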
The loudspeaker layout data P is represented in the same coordinate system as the audio object position metadata M(t). Thus, in a five-loudspeaker Dolby 5.1 system, P includes coordinates for the five loudspeakers. From equation 7 it can be seen that modification of the panning gain requires modification of one or more of the position metadata M(t), the loudspeaker layout data P or the panning law f itself. A decision as to which parameter to vary is based upon a number of factors including the type of audio object to be rendered (voice, music, background effects etc.), the original position of the audio object relative to the loudspeaker positions and the number of loudspeakers. This decision is made in steps 37 and 38 of method 30. Typically, there is a preference to modify the position metadata or loudspeaker layout data over modifying the panning law itself.

In one embodiment, the amount of position modification to be applied is dependent upon the target speaker layout data. By way of example, a position modification applied to a loudspeaker system having two surround loudspeakers is larger than a position modification applied to a loudspeaker system having four surround loudspeakers.

The flexible control of these three factors permits the continuous mapping of an audio object position from its original intended position to another position anywhere within the reproduction audio environment. For example, an audio object moving in a smooth trajectory through the audio environment can be mapped to move in a modified but similarly smooth trajectory. Of particular importance is the ability to reposition an audio object in the front-rear direction of the reproduction audio environment, which is otherwise difficult to achieve without significant loss to signal timbre or spatial object position accuracy.

The flexibility described above permits a number of different position modification routines to be performed. In particular, the option is provided to trade off audio timbre or the size of a listener's ‘sweet spot’ against the accuracy of the spatial intent of the audio object, or vice versa. If a preference for timbre is provided, the sweet spot within which a listener can hear an accurate reproduction of the audio signal is enhanced. However, if a preference for accuracy of spatial object intent is provided, then the timbre and sweet spot size are traded off for more accurate object position reproduction in the rendered audio. In the latter case, the rendering is ideally performed such that the azimuth angle of the audio object, from the perspective of a listener, is substantially unchanged between the object position and the modified object position, so that the perceived object position remains essentially the same.

Clamping

A first position modification routine that can be performed is referred to as ‘clamping’. In this routine, the rendering modification data determines an effective position of the rear loudspeaker pairs in the reproduction audio environment in terms of their Y coordinate (or front-rear position) depending on the loudspeaker layout. As a result, during rendering the perceived loudspeaker layout is clamped into a smaller sized arrangement. This process is illustrated in FIG. 4, which illustrates a five-loudspeaker system 40 being driven with six audio channels (the ‘Rss’ channel having no corresponding loudspeaker). System 40 defines reproduction audio environment 27.
The original position of surround loudspeakers ‘Ls’ and ‘Rs’ is modified within the audio environment 27, resulting in modified positions ‘Ls*’ and ‘Rs*’. The magnitude of the displacement is controlled by the control data and is dependent upon the original object position (in the front-rear direction) and the loudspeaker layout. The result of modifying the positions of ‘Ls’ and ‘Rs’ is that the new positions ‘Ls*’ and ‘Rs*’ are much closer to the audio object and the right side surround ‘Rss’ audio channel (which has no corresponding loudspeaker). Mathematically, this transformation is performed by modifying P in equation 7. As a result, the panning gains of these channels for loudspeakers ‘Ls*’ and ‘Rs*’ will increase, and hence comb-filter artifacts will generally reduce. This improved timbre comes at the cost of a displacement of the perceived location of the audio object and/or the ‘Rss’ channel, because the actual location of the physical loudspeakers is not being modified; hence the perceived location of the object and ‘Rss’ will move backwards and the object position accuracy decreases during playback. A second consequence is that moving audio objects having a time-varying trajectory through audio environment 27 involving changes in Y coordinate beyond the Y coordinate of ‘Ls*’ or ‘Rs*’ will not have an effect, and therefore object trajectories may become discontinuous over time.

As one example, the Y coordinate of the surround loudspeakers (that is, a Y value of P in equation 7) is controlled by one or more of the object position metadata and control data, provided that the target loudspeaker setup has only two surround loudspeakers (such as a Dolby 5.1.x setup). This control results in a dependency curve such as that illustrated in FIG. 5. The ordinate gives the Y coordinate of the surround loudspeakers, while the abscissa reflects the (normalized) control value (determined from the object position metadata and received control data). By way of example, an object position may be at a normalized position of 0.6 on the Y axis and the control data may permit a 50% modification to the speaker layout. This would result in a modification of the Y coordinate of the surround speakers from a position of 1.0 to 0.8. Alternatively, if the control data permits a 100% modification, then the Y coordinate of the surround speakers would be modified from a position of 1.0 to 0.6. The output of this calculation is the rendering modification data which is applied during the rendering of the audio signal. For the above example, the clamping process would be applied only when two surround loudspeakers are provided, and would not be applied when ‘Lss’ and ‘Rss’ (side surround) loudspeakers are available. Hence the modification of loudspeaker positions is dependent on the target loudspeaker layout, the object position and the control data.

Generally speaking, methods referred to above as Clamping may include a manipulation (modification) of the (real) loudspeaker layout data relating to an audio environment, wherein generating a modified speaker drive signal is based on the modified loudspeaker layout data, resulting in a modified object position. During rendering of an audio object, a rendering system may thus make use of modified loudspeaker layout data which does not correspond to the real layout of loudspeakers in the audio environment: the loudspeaker layout data may be based on the positions of the loudspeakers in the audio environment, whereas the modified loudspeaker layout data do not correspond to those positions.
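The clamping computation of the worked example above can be sketched as a simple interpolation; the linear dependency curve assumed here is only one possible shape for the curve of FIG. 5:

```python
def clamped_surround_y(object_y, modification_fraction, original_y=1.0):
    """Interpolate the effective Y coordinate of the surround loudspeakers
    between their original position and the object's Y position.
    modification_fraction: 0.0 = no modification, 1.0 = 100% modification.
    Assumes a linear dependency curve (cf. FIG. 5)."""
    return original_y + modification_fraction * (object_y - original_y)

# Worked example from the text: object at normalized Y = 0.6
print(clamped_surround_y(0.6, 0.5))   # 50% modification  -> 0.8
print(clamped_surround_y(0.6, 1.0))   # 100% modification -> 0.6
```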
Warping

A similar effect to clamping, referred to as ‘warping’, can be obtained by modifying or warping the Y coordinates of the audio object depending on (1) the target loudspeaker layout and (2) the control data. This warping process is depicted in FIG. 6, which illustrates loudspeaker system 60. In this warping procedure, the Y coordinate values of objects are modified prior to calculating panning gains for the loudspeakers. As shown in FIG. 6, the Y coordinates are increased (i.e. audio objects are moved towards the rear of audio environment 27) to increase their amplitude panning gains for the surround loudspeakers.

Exemplary warping functions are shown in FIG. 7. The warping functions map an input object position to an output modified object position for various amounts of warping. Which curve is to be employed is controlled by the control data. Note that the illustrated warping functions are exemplary only; in principle, substantially any input-output function can be applied, including piece-wise linear functions, trigonometric functions, polynomials, spline functions, and the like. Furthermore, instead of, or in addition to, control data indicating one of a number of pre-defined warping functions to use, warping may be controlled by control data indicating a degree and/or type of interpolation to be applied between two pre-defined warping functions (e.g., the no warping and max warping curves of FIG. 7). Such control data may be provided as metadata, and/or determined by a user through, e.g., a user interface.

In the previous sections, coordinate warping was discussed in the context of processing Y coordinates. In a general sense, all object coordinates can be processed by some function that (1) depends on provided position metadata, (2) is conditional upon the target loudspeaker setup and (3) is constrained by the control data. Warping of Y coordinates for Dolby 5.1 loudspeaker systems is, in this context, one specific embodiment of a generic function:

M′j(t) = H(P, Mj(t), Cj(t)),  (Eq 8)

with H a coordinate processing function, Mj the object position metadata, Cj the warping metadata, P the target loudspeaker setup, and M′j denoting the processed audio object position metadata for object j that is used to compute panning gains gi,j as in equations 3 or 7. In an alternative formulation, the panning gain function can be expressed as follows:

gi,j(t) = f(P, M′j(t)) = f(P, H(P, Mj(t), Cj(t))).  (Eq 9)

In this formulation, the modified position metadata M′j is used to produce panning gains for loudspeaker setup P and warping metadata Cj.

In addition to simply modifying Y coordinates as described in the previous sections, other types of position modification are possible. In a first alternative position modification arrangement, generic warping of coordinates is performed to move audio objects in two or three dimensions towards the corners or walls of the audio reproduction environment. In general, if the number of available loudspeakers is small (such as in a Dolby 5.1 rendering setup), it can be beneficial to modify audio object position metadata in such a way that the modified position is closer to the walls or the corners of the audio environment. An example of such a modification process is illustrated in FIG. 8 in loudspeaker system 80.
Here an appropriate warping function modifies the audio object position coordinates in such a way that the modified object position is closer to a side and/or corner of the environment. In one embodiment, this process is applied such that the object's azimuth angle, as seen from the listener's position, is essentially unchanged. Although the example in FIG. 8 is applied in a 2-dimensional plane, the same concept can be equivalently applied in 3 dimensions.

Another alternative position modification arrangement includes performing generic warping of position coordinates to move object positions closer to the actual loudspeaker positions or a nearest loudspeaker position. In this embodiment, the warping functions are designed such that the object is moved in two or three dimensions towards the closest loudspeaker based on the distance between the object and its nearest neighbor loudspeaker location.

Generally speaking, methods referred to above as Warping may include modifying object position data by moving the object towards the rear side of an audio environment and/or by moving the object closer to an actual loudspeaker position in the audio environment and/or by moving the object closer to a side boundary and/or a corner of the audio environment. Side boundaries and corners of the audio environment may thereby be defined by loudspeaker layout data based on the positions of the loudspeakers in the audio environment.
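As a sketch of the generic warping of Eq 8 restricted to the Y coordinate, the following interpolates between a ‘no warping’ identity function and an assumed ‘max warping’ curve; the particular square-root curve is an illustrative assumption, not the curve of FIG. 7:

```python
def warp_y(y, control):
    """Warp an object's Y coordinate towards the rear (cf. Eq 8).
    control: interpolation between no warping (0.0) and max warping (1.0).
    The max-warping curve used here (a square-root bulge towards Y = 1,
    the rear of the environment) is an assumed example; any monotonic
    input-output function could be used instead."""
    no_warp = y              # identity: output position equals input position
    max_warp = y ** 0.5      # pushes intermediate Y values towards the rear
    return (1.0 - control) * no_warp + control * max_warp

for y in (0.25, 0.5, 0.75):
    print(y, warp_y(y, 0.5), warp_y(y, 1.0))
```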
Specifying Control Data During Audio Content Authoring

As mentioned above, in some embodiments the control data which constrains the position modification during rendering can be received from a content authoring system or apparatus. Accordingly, referring to FIG. 9, one aspect of the invention relates to an audio content creation system 90. System 90 includes an input 92 for receiving audio data 94 from one or more audio input devices 96. The audio data includes data indicative of one or more audio objects. Example input devices include microphones generating raw audio data or databases of stored pre-captured audio. An audio processing module 98 processes the audio data and, in response, generates an audio signal 100 having associated metadata including object position data indicative of a spatial position of the one or more audio objects. The audio signal 100 may include single or plural audio channels. The position data is specified in coordinates of a predefined audio environment, which may be the environment in which the audio data was captured or an environment of an intended playback system. Module 98 is configured to perform spatial audio analysis to extract the object metadata and also to perform various other audio content authoring routines. A user interface 102 allows users to provide input to the content authoring of the audio data.

System 90 includes a control module 104 configured to generate rendering control data to control the audio object position modification to be performed on the audio signal during rendering of that signal in an audio reproduction environment. The rendering control data is indicative of the control data referred to above in relation to the rendering process. Module 104 is configured to perform automatic generation of rendering control data based on the metadata. Module 104 is also able to receive user input from interface 102 for receiving user preferences for the rendering modification and other user control. The object position modification may be dependent upon a type of audio object identified in the audio data.

The rendering control data is adapted to perform a number of functions, including:
- Providing an instruction to perform audio object position modification on a subset or each of the audio objects identified within the audio data. That is, whether or not to perform position modification during subsequent audio rendering. This is received at a rendering device as the position modification control flag in step 34 of method 30.
- Determining a type of object position modification to be performed during rendering. For example, a clamping operation may be preferred over a warping operation or vice versa.
- Determining a degree of object position modification to be applied to the one or more audio objects. The user may wish to allow full modification of the object position or partial position modification. The degree of position modification to be applied inherently controls the trade-off between audio timbre and spatial object accuracy. If no position modification is applied, the spatial object accuracy is preserved at the expense of audio timbre. If full position modification is applied, the spatial object accuracy is compromised to preserve audio timbre.

The rendering control data is attached to the metadata and output as part of the output audio signal 106 through output 108. Alternatively, the rendering control data may be sent separately from the audio signal. The audio signal output from system 90 is transmitted (directly or indirectly) to a rendering system for subsequent rendering of the signal.

Referring still to FIG. 9, another aspect of the invention relates to an audio rendering system 110 for rendering audio signals including the rendering control data. System 110 includes an input 112 configured to receive audio signal 106 including the rendering control data. System 110 also includes a rendering module 114 configured to render the audio signal based on the rendering control data. Module 114 outputs a rendered audio signal 116 through output 118 to a reproduction audio environment where the audio objects are reproduced at respective modified object positions within the reproduction audio environment. Preferably, the modified object positions are between the positions of the loudspeakers in the reproduction audio environment. A user interface 120 is provided for allowing user input such as specification of a desired loudspeaker layout, control of clamping/warping, etc.

As such, systems 90 and 110 are configured to work together to provide a full audio processing system which provides for authoring audio content and embedding selected rendering control for selectively modifying the spatial position of objects within an audio reproduction environment. The present invention is particularly adapted for use in a Dolby Atmos™ audio system. Audio content authoring system 90 and rendering system 110 are able to be realized as dedicated hardware devices or may be created from existing computer hardware through the installation of appropriate software.

Conclusions

It will be appreciated that the above-described invention provides significant methods and systems for providing spatial position modification of audio objects during rendering of an audio signal. The invention allows a mixing engineer to provide a controllable trade-off between spatial object position intent and timbre of dynamic and static objects within an audio signal. In one extreme case, spatial intent is maintained to the full extent, at the cost of a small sweet spot and timbre degradation due to (position-dependent) comb-filter problems.
Conclusions

It will be appreciated that the above described invention provides significant methods and systems for providing spatial position modification of audio objects during rendering of an audio signal. The invention allows a mixing engineer to provide a controllable trade-off between spatial object position intent and timbre of dynamic and static objects within an audio signal. In one extreme case, spatial intent is maintained to the full extent, at the cost of a small sweet spot and timbre degradation due to (position-dependent) comb-filter problems. The other extreme case is optimal timbre and a large sweet spot, obtained by reducing or eliminating the application of phantom imaging, at the expense of a modification of the perceived position of audio objects. These two extreme cases and intermediate scenarios can be controlled by adding dedicated control metadata alongside the audio content that controls how a renderer should render the content.

Interpretation

Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “analyzing” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities into other data similarly represented as physical quantities. In a similar manner, the term “processor” may refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory. A “computer” or a “computing machine” or a “computing platform” may include one or more processors. The methodologies described herein are, in one embodiment, performable by one or more processors that accept computer-readable (also called machine-readable) code containing a set of instructions that when executed by one or more of the processors carry out at least one of the methods described herein. Any processor capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken is included. Thus, one example is a typical processing system that includes one or more processors. Each processor may include one or more of a CPU, a graphics processing unit, and a programmable DSP unit. The processing system further may include a memory subsystem including main RAM and/or a static RAM, and/or ROM. A bus subsystem may be included for communicating between the components. The processing system further may be a distributed processing system with processors coupled by a network. If the processing system requires a display, such a display may be included, e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT) display. If manual data entry is required, the processing system also includes an input device such as one or more of an alphanumeric input unit such as a keyboard, a pointing control device such as a mouse, and so forth. The term memory unit as used herein, if clear from the context and unless explicitly stated otherwise, also encompasses a storage system such as a disk drive unit. The processing system in some configurations may include a sound output device, and a network interface device. The memory subsystem thus includes a computer-readable carrier medium that carries computer-readable code (e.g., software) including a set of instructions to cause performing, when executed by one or more processors, one or more of the methods described herein. Note that when the method includes several elements, e.g., several steps, no ordering of such elements is implied, unless specifically stated. The software may reside in the hard disk, or may also reside, completely or at least partially, within the RAM and/or within the processor during execution thereof by the computer system.
Thus, the memory and the processor also constitute a computer-readable carrier medium carrying computer-readable code. Furthermore, a computer-readable carrier medium may form, or be included in, a computer program product. In alternative embodiments, the one or more processors operate as a standalone device or may be connected, e.g., networked, to other processor(s). In a networked deployment, the one or more processors may operate in the capacity of a server or a user machine in a server-user network environment, or as a peer machine in a peer-to-peer or distributed network environment. The one or more processors may form a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Note that while the diagrams only show a single processor and a single memory that carries the computer-readable code, those in the art will understand that many of the components described above are included, but not explicitly shown or described, in order not to obscure the inventive aspect. For example, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. Thus, one embodiment of each of the methods described herein is in the form of a computer-readable carrier medium carrying a set of instructions, e.g., a computer program that is for execution on one or more processors, e.g., one or more processors that are part of a web server arrangement. Thus, as will be appreciated by those skilled in the art, embodiments of the present invention may be embodied as a method, an apparatus such as a special purpose apparatus, an apparatus such as a data processing system, or a computer-readable carrier medium, e.g., a computer program product. The computer-readable carrier medium carries computer-readable code including a set of instructions that when executed on one or more processors cause the processor or processors to implement a method. Accordingly, aspects of the present invention may take the form of a method, an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a carrier medium (e.g., a computer program product on a computer-readable storage medium) carrying computer-readable program code embodied in the medium. The software may further be transmitted or received over a network via a network interface device. While the carrier medium is shown in an example embodiment to be a single medium, the term “carrier medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “carrier medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by one or more of the processors and that cause the one or more processors to perform any one or more of the methodologies of the present invention. A carrier medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media.
Non-volatile media includes, for example, optical disks, magnetic disks, and magneto-optical disks. Volatile media includes dynamic memory, such as main memory. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise a bus subsystem. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications. For example, the term “carrier medium” shall accordingly be taken to include, but not be limited to, solid-state memories; a computer product embodied in optical and magnetic media; a medium bearing a propagated signal detectable by at least one processor or one or more processors and representing a set of instructions that, when executed, implement a method; and a transmission medium in a network bearing a propagated signal detectable by at least one processor of the one or more processors and representing the set of instructions. It will be understood that the steps of methods discussed are performed in one embodiment by an appropriate processor (or processors) of a processing (e.g., computer) system executing instructions (computer-readable code) stored in storage. It will also be understood that the invention is not limited to any particular implementation or programming technique and that the invention may be implemented using any appropriate techniques for implementing the functionality described herein. The invention is not limited to any particular programming language or operating system. Reference throughout this specification to “one embodiment”, “some embodiments” or “an embodiment” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases “in one embodiment”, “in some embodiments” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments. As used herein, unless otherwise specified, the use of the ordinal adjectives “first”, “second”, “third”, etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner. In the claims below and the description herein, any one of the terms comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others. Thus, the term comprising, when used in the claims, should not be interpreted as being limitative to the means or elements or steps listed thereafter. For example, the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B. Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.
It should be appreciated that in the above description of example embodiments of the disclosure, various features of the disclosure are sometimes grouped together in a single embodiment, FIG., or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this disclosure. Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the disclosure, and form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination. In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the disclosure may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. Similarly, it is to be noticed that the term coupled, when used in the claims, should not be interpreted as being limited to direct connections only. The terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Thus, the scope of the expression a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means. “Coupled” may mean that two or more elements are either in direct physical, electrical or optical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other. Thus, while there has been described what are believed to be the best modes of the disclosure, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the disclosure, and it is intended to claim all such changes and modifications as fall within the scope of the disclosure. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added to or deleted from methods described within the scope of the present disclosure.
11943606
DETAILED DESCRIPTION OF SOME EMBODIMENTS OF THE INVENTION

Audio rendering aimed at providing natural and realistic effects to a listener typically includes rendering of an acoustic environment. The rendering is based on a model of the acoustic environment which typically includes modelling a direct path, (early) reflections, and reverberation. The following description will focus on an efficient approach for generating a suitable model for (early) reflections in a real or virtual room. The approach will be described with reference to an audio rendering apparatus as disclosed inFIG.2. The audio rendering apparatus comprises a receiver201which is arranged to receive room data characterizing a room which represents an acoustic environment to be emulated by the rendering. The room data specifically describes boundaries of the first room as well as at least one sound source position for a sound source in the room. The room will in the following also be referred to as the original room (or the first room) and the sound source in the original room will also be referred to as the original sound source in order to differentiate from the generated virtual (mirrored) rooms and virtual (mirrored) sound sources generated for the described reflection model. The receiver201may be implemented in any suitable way including e.g. using discrete or dedicated electronics. The receiver201may for example be implemented as an integrated circuit such as an Application Specific Integrated Circuit (ASIC). In some embodiments, the circuit may be implemented as a programmed processing unit, such as for example as firmware or software running on a suitable processor, such as a central processing unit, digital signal processing unit, or microcontroller etc. It will be appreciated that in such embodiments, the processing unit may include on-board or external memory, clock driving circuitry, interface circuitry, user interface circuitry etc. Such circuitry may further be implemented as part of the processing unit, as integrated circuits, and/or as discrete electronic circuitry. The receiver201may receive the room data from any suitable source and in any suitable form, including e.g. as part of an audio signal. The room data may be received from an internal or external source. The receiver201may for example be arranged to receive the room data via a network connection, radio connection, or any other suitable connection to an external source. In many embodiments, the receiver may receive the data from a local source, such as a local memory. In many embodiments, the receiver201may for example be arranged to retrieve the room data from local memory, such as local RAM or ROM memory. The boundaries define the outline of the room and typically represent walls, ceiling, and floor (or for a 2D application typically only walls). The room is a 2D or 3D orthotope, such as a 2D rectangle or a 3D rectangular box. The boundaries are pairwise parallel and are substantially planar. Further, the boundaries of one pair of parallel boundaries are perpendicular to the boundaries of the other pair(s) of parallel boundaries. The boundaries specifically define an orthotope (2D or 3D). The boundaries may reflect any physical property, such as any material etc. The boundaries may also represent any acoustic property. The room being described by the room data corresponds to the intended acoustic environment for the rendering and as such may represent a real room/environment or a virtual room/environment.
The room may be any region/area/environment which can be delimited/demarcated by four (for 2D) or six (for 3D) substantially planar boundaries that are pairwise parallel and substantially perpendicular between the pairs. The room data may in some embodiments represent a suitable approximation of an intended room that is not pairwise parallel and/or does not exhibit right angles between connected boundaries. In most embodiments, the room data may further include acoustic data for one, more, or typically all of the boundaries. The acoustic property data may specifically include a reflection attenuation measure for each wall which indicates the attenuation caused by the boundary when sound is reflected by the boundary. Alternatively, a reflection coefficient may indicate the portion of signal energy that is reflected in a specular reflection off of the boundary surface. In many embodiments, the attenuation measure may be frequency dependent to model that the reflection may be different for different frequencies. Furthermore, the acoustic property may be dependent on the position on the boundary surface. The receiver201is coupled to a processing circuit203which is arranged to generate a reflection model for the room/acoustic environment representing the (early) reflections in the room and allowing these to be emulated when performing the rendering. Specifically, the processing circuit203is arranged to determine virtual sound sources that represent reflections of the original sound source in the original room. The processing circuit203may be implemented in any suitable form including e.g. using discrete or dedicated electronics. The processing circuit203may for example be implemented as an integrated circuit such as an Application Specific Integrated Circuit (ASIC). In some embodiments, the circuit may be implemented as a programmed processing unit, such as for example as firmware or software running on a suitable processor, such as a central processing unit, digital signal processing unit, or microcontroller etc. It will be appreciated that in such embodiments, the processing unit may include on-board or external memory, clock driving circuitry, interface circuitry, user interface circuitry etc. Such circuitry may further be implemented as part of the processing unit, as integrated circuits, and/or as discrete electronic circuitry. The processing circuit203is coupled to a rendering circuit205which is arranged to render an audio signal representing the audio source, and typically also a number of other audio sources, to provide a rendering of an audio scene. The rendering circuit205may specifically receive audio data characterizing the audio from the original sound source and may render this in accordance with any suitable rendering approach and technique. The rendering of the original sound source may include the generation of reflected audio based on the reflection model generated by the processing circuit203. In addition, signal components for the original sound source corresponding to the direct path and reverberation will typically also be rendered. The person skilled in the art will be aware of many different approaches for rendering audio (including for spatial speaker configurations and headphones, e.g. using binaural processing) and for brevity these will not be described in further detail. The rendering circuit205may be implemented in any suitable form including e.g. using discrete or dedicated electronics.
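A minimal sketch of how such room data might be represented in code is shown below. The type names and fields are assumptions for illustration (one planar boundary per wall/floor/ceiling, each with a point, a unit normal, and a scalar reflection coefficient), not a format prescribed by the embodiment; a frequency dependent measure would replace the scalar coefficient with e.g. per-band values.

import numpy as np
from dataclasses import dataclass
from typing import List

@dataclass
class Boundary:
    point: np.ndarray        # any point on the planar boundary
    normal: np.ndarray       # unit normal of the boundary plane
    reflection_coeff: float  # portion of energy specularly reflected (0..1)

@dataclass
class RoomData:
    boundaries: List[Boundary]   # four (2D) or six (3D) pairwise parallel boundaries
    source_position: np.ndarray  # position of the original sound source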
The rendering circuit205may for example be implemented as an integrated circuit such as an Application Specific Integrated Circuit (ASIC). In some embodiments, the circuit may be implemented as a programmed processing unit, such as for example as firmware or software running on a suitable processor, such as a central processing unit, digital signal processing unit, or microcontroller etc. It will be appreciated that in such embodiments, the processing unit may include on-board or external memory, clock driving circuitry, interface circuitry, user interface circuitry etc. Such circuitry may further be implemented as part of the processing unit, as integrated circuits, and/or as discrete electronic circuitry. The processing circuit203is specifically arranged to generate a mirror source model for the reflections. In a mirror source model, reflections are modelled by separate virtual sound sources where each virtual sound source is a replica of the original sound source and has a (virtual) position that is outside of the original room but at such a position that the direct path from the virtual position to a listening position exhibits the same properties as the reflected path from the original sound source to the listening position. Specifically, the path length for the virtual sound source representing a reflection will be equal to the path length of the reflected path from the original source to the listening position. Further, the direction of arrival at the listening position for the virtual sound source path will be equal to the direction of arrival for the reflected path. Further, for each reflection by a boundary (e.g. wall) for the reflected path, the direct path will pass through a boundary corresponding to the reflection boundary. The transmission through the model boundary can accordingly be used to directly model the reflection effect; for example, an attenuation corresponding to the reflection attenuation for the boundary may be assigned to the transmission through the corresponding model boundary. A particularly significant property of the mirror source model is that it can be independent of the listening position. The determined positions and room structures are such that they will provide correct results for all positions in the original room. Specifically, virtual mirror sound sources and virtual mirror rooms are generated, and these can be used to model the reflection performance for any position in the original room, i.e. they can be used to determine path length, reflections, and direction of arrival for any position in the original room. Thus, the generation of the mirror source model may be done during an initialization process and the generated model may be used and evaluated continuously and dynamically as e.g. the user is considered to move around (translation and/or rotation) in the original room. The generation of the mirror source model is thus performed without any consideration of the actual listening position; rather, a more general model is generated. The process of generating the mirror source model is an iterative process and an example of the method for generating the model by generating virtual sound sources representing reflections is shown inFIG.3. The method starts in step301wherein the process is initialized. This includes for example initializing the method to use the specific properties of the room, i.e. to initialize the method to be based on the properties retrieved from the room data.
The process is based on an iterative mirroring of rooms around boundaries of those rooms and on the corresponding mirroring of sound sources around boundaries of rooms. In each iteration, new rooms and sound sources are generated by mirroring the rooms and sound sources (and specifically sound source positions) generated in the immediately previous iteration around (some of) the boundaries of the rooms generated in the previous iteration. When the process is initialized, the original room is initialized/considered to be a room of an immediately previous iteration and the original sound source is initialized/considered to be a sound source of an immediately previous iteration. Thus, the first iteration is based on considering the single original room and sound source as the outcome/result of an immediately previous iteration. The first iteration starts in step303wherein a set of mirror boundaries is determined for a room that was generated in the previous iteration. Specifically, a set of source rooms is determined as the rooms that were generated in the previous iteration. For the first iteration, the set of source rooms comprises the original room (and only this). One of these source rooms is then processed in step303. All of the boundaries for the source room are initially candidate boundaries for the set of mirror boundaries and out of these none, one, some, or all may be selected to be included in the set of mirror boundaries. The selection will be described in detail later. Step303is followed by step305in which a mirroring is performed around each boundary in the set of mirror boundaries (henceforth referred to as a mirror boundary). Each mirroring comprises a mirroring of the source room around the mirror boundary. In addition, it includes a mirroring of the sound source of the source room around the mirror boundary. Thus, the mirroring around a mirror boundary generates a new (virtual) mirror room as well as a new (virtual) mirror sound source. The mirroring is accordingly of a source room and source sound source into a new mirror room and mirror sound source (being mirrored duplicates of the source room and source sound source respectively). Mirroring of the source sound source can be done by determining a line going through the boundary and the source sound source such that the line is perpendicular to the surface of the boundary and then positioning the mirror sound source at the same distance from the boundary (but on the opposite side, i.e. in the mirror room). The mirroring inherently defines a direction from one side of the mirror boundary to the other, i.e. a direction from the source room to the mirror room. The direction can be considered as the direction perpendicular to the mirror surface or, equivalently, the relative position of the mirror boundary in the room can be considered to indicate the direction. The direction may for example be related to the original room. For example, the position of each boundary in the original room may be considered to represent a direction, i.e. six discrete directions may be defined for a 3D room and four discrete directions may be defined for a 2D room. As the mirror rooms are generated by mirroring, the alignments of the boundaries do not change and thus the boundaries of the mirror rooms also align with the four or six directions of the original room (although a mirroring of course reverses the relative position of the boundaries, e.g. the positions of the left and right boundaries reverse when mirroring around the left or right boundary).
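The geometric mirroring of a sound source position described above amounts to reflecting a point across the boundary plane. A minimal sketch, assuming the plane is given by a point on it and a unit normal (the function name is illustrative):

import numpy as np

def mirror_point(point, boundary_point, boundary_normal):
    # Signed distance from the point to the boundary plane along the normal;
    # the mirrored point lies at the same distance on the opposite side,
    # on the perpendicular line through the original point.
    p = np.asarray(point, dtype=float)
    n = np.asarray(boundary_normal, dtype=float)
    n = n / np.linalg.norm(n)
    signed_distance = np.dot(p - np.asarray(boundary_point, dtype=float), n)
    return p - 2.0 * signed_distance * n

Mirroring a room can reuse the same operation by reflecting each of the room's boundary points and flipping the boundary normals.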
In the first iteration, the original room is considered a mirror room of an immediately previous iteration and the original sound source is considered a mirror sound source of an immediately previous iteration, and accordingly a set of mirror boundaries may be generated comprising boundaries of the original room. Typically, the set of mirror boundaries of the first iteration will include all boundaries of the original room. Mirroring is then performed around the boundaries of the set of mirror boundaries, and this results in a number of (typically up to four or six) new mirror rooms being generated, each comprising a new mirror sound source. The method then continues in step307wherein it is determined if all source rooms of the set of source rooms have been processed, i.e. if all the mirror rooms that were generated in the previous iteration have been processed. If not, the method proceeds in step309wherein the next source room is selected and the method then returns to step303. Otherwise it proceeds to step311wherein it is determined whether more iterations are to be performed. If so, the method continues to step313wherein the next iteration is set up, e.g. by determining a new set of source rooms comprising all the mirror rooms that were generated in the current iteration. The method then returns to step303wherein this new set of source rooms is processed and potentially mirrored. Thus, in each iteration, the number of mirror rooms/mirror sound sources is grown based on mirroring of the results of the previous iteration. The iterations may for example continue until a predetermined number of iterations have been performed. If this is detected in step311, the method may proceed to step315where the method may e.g. stop or where rendering based on the generated model may be performed. The approach may generate a mirror source model where the reflections in the original room can be emulated by direct paths from the virtual mirror sound sources. As illustrated inFIG.4, a reflected sound component can be rendered as the direct path of a mirrored sound source, with this representing the correct distance and direction of incidence for the listener. This will be true for all positions in the original room and no new mirror sound source positions need to be determined for different listening positions. Rather, the virtual mirror sources are valid for every user position within the original room. When generating this virtual mirror source, the reflection effect may, as mentioned, be taken into account. This may typically be by assigning to each transition between rooms an attenuation or frequency dependent filtering representing the portion of the sound source's energy that is specularly reflected by the surface of the boundary being crossed. As sound may reach the user through multiple boundary reflections, the approach can be repeated as illustrated inFIG.5. The described iterative approach allows for multiple “layers” of mirror rooms and sources to be generated, thereby allowing multiple reflections to be modelled. Each iteration increases the number of reflections of the path, i.e. the first iteration represents sound components reaching the listening position through one reflection, the second iteration represents sound components reaching the listening position through two reflections etc. The approach typically results in a diamond-shaped representation of the original room and mirrored rooms when subsequently mirroring rooms until a certain order (fixed number of iterations).
This is illustrated in 2D inFIG.6for up to the 2nd order, i.e. with two iterations. In 3D there would be a similar structure when looking at a cross-section through the original room (i.e. the same pattern would be seen in a perpendicular plane through the row of five rooms). However, whereas the principles of the described approach may seem relatively straightforward, the practical implementation is not, and indeed the practical considerations are critical to the performance of the approach. For example, in many applications, the coordinate system used to represent the room and sound source may not be aligned with the directions of the boundaries. This makes the mirroring less straightforward to calculate as it impacts more than one dimension at a time. In such cases, either the room boundaries and the sound sources have to be rotated to align with the coordinate system and all subsequently determined virtual mirror sources have to be rotated inversely, or the mirroring itself has to be performed in more than one dimension (e.g. using normal vectors of the boundary). In many situations, the latter approach will be more efficient. A particular issue with the approach is that it tends to be resource demanding and specifically has a high computational resource requirement. The Inventor has realized that a substantial issue is that a high number of duplicate mirror rooms are generated, and that the high resource usage is not only due to the resource usage in performing the many mirror operations but also due to the requirement for post-processing of the resulting mirror rooms and mirror sound sources in order to identify and eliminate duplicates. As an example,FIG.7illustrates 2D examples showing how 2nd and 4th order mirroring sequences result in duplication of mirror rooms. The relative number of duplicates grows progressively with the order of the reflections, and indeed for 3D rooms and fifth order reflections, up to 7776 virtual sources are found with the straightforward application of the image source method, of which only 230 are unique. In the method ofFIG.3, a specific approach is used to select boundaries for the set of mirror boundaries such that the generation of duplicate mirror rooms and mirror sound sources can be reduced and in most applications can be prevented completely. The method is accordingly configured to choose a subset of room boundaries for subsequent mirrors of the original room and each mirrored room. The subset for each room is chosen such that it does not result in duplicate rooms and therefore avoids any duplicate virtual mirror sources. This is achieved by selecting the boundaries for the set of mirror boundaries in accordance with a selection criterion that includes a number of rules/constraints for the selection. The selection criterion is used to control the progression of mirroring of rooms/sound sources through the array of rooms within a certain number of steps from the original room. The approach may specifically be seen as selecting one path to each (potential) mirror room and excluding all other paths to that room. As the different possible paths all cross corresponding boundaries, the different possible paths all include the same boundaries although in different orders. However, as reflections can typically be considered to be linear operations, the order in which they are reached from the sound source to the listener is not important, and thus the order of the boundaries being crossed does not matter.
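The duplicate counts quoted above can be sanity-checked with a short calculation. In a 3D shoebox model, naive mirroring of all six boundaries yields 6^k candidate sources at order k, whereas the distinct image rooms at L1 lattice distance d from the original room number 4d^2 + 2 (a standard lattice-counting result, stated here as background rather than taken from the embodiment):

naive_fifth_order = 6 ** 5                                    # 7776 candidate sources at order 5
unique_up_to_fifth = sum(4 * d * d + 2 for d in range(1, 6))  # 6+18+38+66+102 = 230 unique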
The selection criterion may first be considered for a 2D application, which in the following will be considered to have four directions, namely Left and Right which correspond to two pairwise parallel boundaries, and Backwards and Forwards which correspond to another two pairwise parallel boundaries (perpendicular to the first pair). The selection criterion specifically includes a constraint/requirement that for a candidate boundary of the source room to be included in the set of mirror boundaries, a first direction of mirroring for the candidate boundary must not be in an opposite direction of a direction of mirroring for any previous mirroring leading to the source room. Thus, the method may in step303e.g. sequentially consider all mirror rooms generated in the previous iteration as the source room for potential further mirroring. It may then for the currently considered source room evaluate all the boundaries for inclusion, e.g. for the 2D example it will consider all the walls of this source room, and for a 3D example it may further consider the ceiling and floor. Further, (apart from the first iteration) the current source room is a mirror room that has been generated by a sequence of one or more mirrorings, and thus the current source room is linked to a sequence of one or more mirror directions reflecting which mirror operations have led to the source room. The requirement then results in all boundaries of the source room which correspond to a mirror direction that is in the opposite direction to a direction already included in the sequence of past mirror directions being excluded from further consideration. For example, if the source room has been generated by a sequence that includes a mirroring in the Left direction, then the boundary corresponding to a Right mirror direction is excluded from being selected for the set of mirror boundaries. Similarly, if the prior direction sequence includes a mirroring in the Forward direction, then a boundary corresponding to a Backward mirror direction is excluded. Thus, considering the sequential generation of the mirror rooms through a path/sequence of mirrorings, once a mirroring has been performed in a given direction, no mirroring is allowed in the opposite direction. The selection criterion further comprises a requirement which relates to the direction of mirroring that was performed for the original room and which led to the current source room. Specifically, each of the boundaries of the first room is linked to an excluded linked direction. The excluded direction for a given boundary is specifically one that is perpendicular to the direction of mirroring for that boundary. Thus, the mirror direction of a boundary belonging to a first pair of parallel boundaries is an excluded mirror direction for a boundary belonging to a different pair of parallel boundaries. The two pairs of boundaries correspond to the two dimensions of a 2D application or to two dimensions out of three for a 3D case. Specifically, four boundaries belonging to two pairs of parallel boundaries of the source room each have a linked excluded direction with the excluded direction for each boundary being a mirror direction for a boundary belonging to the other pair of parallel boundaries. Further, the four linked excluded directions for the four boundaries are all different and thus the four excluded directions correspond to the four mirror directions.
As a specific example, the linked directions may be as follows:

Boundary Mirror Direction    Linked Excluded Direction
Forward                      Right
Left                         Forward
Backward                     Left
Right                        Backward

The selection criterion includes a constraint/requirement that for the candidate boundary to be included in the set of mirror boundaries, the direction of mirroring for the candidate boundary must not be in an excluded direction, where the excluded direction is dependent on the boundary of the first room around which the mirroring leading to the source room was performed. Thus, when the method in step303considers all the boundaries of a given source room in order to select the boundaries for the set of mirror boundaries, it specifically considers the first mirroring that was performed, i.e. the mirroring of the original room that eventually led to the current source room. It may then identify the excluded direction. For example, if the first mirroring was in the Forward direction, it determines that the linked excluded direction was Right. It then proceeds to exclude the boundary that has a mirror direction corresponding to the excluded direction. The requirement then results in the boundary of the source room which corresponds to a mirror direction in the excluded direction being excluded from further consideration; it will not be included in the set of mirror boundaries. Thus, no mirroring will be performed in the excluded direction. For example, if the source room has been generated by a sequence that started with a mirroring in the Left direction, then the boundary corresponding to the Forward direction is excluded from being selected for the set of mirror boundaries, and thus the progression of generation of mirror rooms will always be in one direction for the dimension/boundary pair that corresponds to the excluded direction. The selection criterion further includes a constraint/requirement that for a candidate boundary to be included in the set of mirror boundaries, the direction of mirroring for the candidate boundary must not be the same direction as any previous direction of mirroring that has led to the source room, except for the direction of mirroring that led to the source room being generated in the immediately previous iteration. Thus, the direction of mirroring for a mirror boundary must not be the same as a direction in which a previous mirroring was performed, unless that direction is the same as the mirror direction that was applied in the previous iteration, i.e. unless it is the same direction as the one that was used to generate the source room itself. Thus, the selection requirement is such that a direction of mirroring is never repeated unless it was also used in the previous iteration, i.e. unless it is a continuation of a mirroring in the given direction. Thus, the selection criterion includes a requirement such that a given mirroring sequence never returns to a previously applied mirroring direction that was then deviated from. Thus, once mirroring begins in a first direction, this may be continued for as long as desired, but once a mirroring occurs in a different direction, the mirroring cannot return to the first direction. As mirroring in one direction excludes mirroring in the opposite direction, this leads to a situation wherein only one direction of mirroring is allowed for each dimension, and once the mirroring sequence switches from one dimension to mirroring in a direction of another dimension, it cannot return to the first dimension, i.e.
mirroring in one dimension is only possible in one direction and in a continuous sequence of mirrorings. Thus, when the method in step303considers all the boundaries of a given source room in order to select the boundaries for the set of mirror boundaries, it specifically considers all the previous mirror directions that led to the source room and it excludes all the boundaries that have a mirror direction the same as a previous mirror direction, except for the boundary having a mirror direction which is the same as that of the mirroring generating the source room. For example, the source room may have been generated by a sequence that started with a mirroring in the Forwards direction twice, then in the Left direction twice, corresponding to a sequence of (F, F, L, L). The requirement not to return to a previous mirror direction except the most recent one will then lead to an exclusion of the boundary with a Forwards mirror direction, but it will not exclude the boundary with a Left mirror direction. The described constraints and requirements may closely interwork to ensure that the mirroring performed in step305does not generate any duplicate rooms (when considering the 2D application). In addition, it allows all the possible mirror rooms to be generated and thus may automatically result in all potential reflections (e.g. up to a given number of reflections) being modelled. Specifically, in many embodiments, the set of mirror boundaries is selected to include all boundaries that meet the selection criterion. Thus, for any mirror room when being considered as a source room, the set of mirror boundaries is generated to include all the boundaries that are not excluded by the requirements. Typically, this includes one or two boundaries for the 2D case. As previously mentioned, in the first iteration, the original room is considered as the only source room. Further, the set of mirror boundaries is generated to include all the boundaries of the original room. It is also noted that for the first iteration, there is no previous direction or excluded direction and thus all four boundaries will inherently meet the described criterion. Further, the first iteration will determine the excluded direction for each new mirror room. The selection requirements may interact closely and synergistically to allow a determination in a 2D plane of mirror rooms and virtual sources which represents all reflections up to a given order without any duplication. This may be illustrated byFIG.8which shows the rooms of a model generated by the described process after four iterations, i.e. representing up to fourth order reflections. The requirement introducing the excluded direction essentially divides the space into four quadrants and the other requirements ensure that each mirror room can only be reached through one specific sequence/path of mirroring. In addition, they provide for all of the possible mirror rooms to be generated. The approach may in many embodiments be used to generate a 3D model and also include modelling of reflections that include e.g. the ceiling and floor of the original room. In this case, the above requirements described with a focus on a 2D model will still be used, but in addition the selection criterion may include a specific requirement to deal with the third dimension.
Specifically, the processing circuit203may comprise a requirement that if a second direction of mirroring for any previous mirroring leading to the source room is perpendicular to the excluded direction and to a mirror direction of the mirroring for the first room leading to the source room, then for the candidate boundary to be included in the set of mirror boundaries, the direction of mirroring for the candidate boundary must be the same as the second direction. The previous requirements considered mainly a 2D scenario where each room had two pairs of parallel boundaries. However, in the more typical 3D case, each room also has a third pair of parallel boundaries around which mirroring can be performed. Thus, each set of mirror boundaries can also include additional boundaries in the two directions of the third dimension, such as specifically in the Up and Down directions corresponding to the ceiling and floor of the room. The previous requirements do not prevent any such mirroring from being performed from a room in the original 2D plane. Thus, for every new mirror room in the original 2D plane generated in the previous iteration, a new mirror room may be generated respectively above and below in the current iteration. The previously described requirements operate in two dimensions and provide an approach of expanding the model in two dimensions by mirroring of previously generated rooms. Specifically, the requirements allow a mirroring to be performed in the two dimensions as long as the requirements are met, and this leads to a diamond coverage of the 2D plane. The 2D plane is determined by the direction of the first mirroring performed, i.e. the direction of the mirroring of the original room that led to the current source room being considered, and by the excluded direction which is associated therewith. In the specific case, the direction of the first mirroring is one of (Forward, Left, Backward, Right) and similarly the excluded direction is one of (Forward, Left, Backward, Right). The requirement thus considers whether any mirrorings have previously been performed in other directions than these, i.e. whether there has been a mirroring in the Up or Down direction. If not, the requirement imposes no restrictions, and thus it does not restrict any mirroring in the 2D plane, nor does it restrict a first mirroring out of the 2D plane, i.e. it does not restrict a first Up or Down mirroring. However, if a previous mirroring has been performed out of the 2D plane, i.e. if a first Up or Down mirroring has been performed, then the requirement poses the strict constraint that only a mirroring in the same direction can be performed. Thus, once a mirroring in the Up (or Down) direction has been performed, all subsequent mirrorings must be in the Up (or Down) direction. Thus, once the direction has changed out of the 2D plane, this direction must be maintained, and a change of direction is not allowed. Thus, the direction of the first step in a mirror sequence in a third dimension disallows any other direction following from this first step. In some embodiments, the third dimension may not be specifically determined by the excluded direction and the direction of the first mirroring. Rather, the third dimension may simply be a designated reference dimension, and specifically may be a predetermined reference dimension.
Since a dimension represents mirroring around opposite boundaries, it represents directions of mirroring in two directions, namely in opposite directions. Thus, the original room may be linked/associated with a pair of reference directions of mirroring that are in opposite directions. In the specific example, the pair of reference directions may specifically be the Up and Down directions. In such an embodiment, the selection criterion may comprise a requirement that if a second direction of mirroring for any previous mirroring leading to the source room is in a direction that is one of the two reference directions of mirroring, then for the candidate boundary to be included in the set of mirror boundaries, the direction of mirroring for the candidate boundary must be the same as the second direction. Thus, in such an embodiment, once a mirroring has been performed in a reference direction, specifically Up or Down in the example, then all subsequent mirrorings must be in the same direction. Thus, as soon as a mirroring in the Up direction has been performed, only the Up direction is possible for subsequent mirrorings and no mirroring in any other direction (whether Right, Left, Backwards, Forwards, or Down) can be performed. This approach ensures that the method may generate a typically symmetric set of mirror rooms for modelling reflections in three dimensions. It may interact closely and synergistically with the previously described requirements to result in an efficient generation of a 3D model which may include accurate modelling of reflections yet have low complexity. In particular, a 3D model of mirror rooms can be generated without duplication. In addition, the requirements provide for all of the possible mirror rooms to be generated.
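The complete selection criterion (the 2D rules plus the reference-direction variant of the third-dimension rule) can be summarized in a few lines of code. This is a sketch under stated assumptions: directions are encoded as the letters F, B, L, R, U and D, the excluded-direction table is the one given above, and the function and variable names are invented for the example.

OPPOSITE = {"F": "B", "B": "F", "L": "R", "R": "L", "U": "D", "D": "U"}
# Linked excluded direction, keyed by the direction of the first mirroring.
EXCLUDED_AFTER_FIRST = {"F": "R", "L": "F", "B": "L", "R": "B"}

def allowed_directions(history):
    # Mirror directions allowed for a room reached via the sequence `history`.
    if not history:
        return ["F", "B", "L", "R", "U", "D"]  # first iteration: all boundaries
    # Reference-direction rule: once an Up or Down mirroring has occurred,
    # only that same direction may be continued.
    if "U" in history or "D" in history:
        return ["U"] if "U" in history else ["D"]
    allowed = []
    excluded = EXCLUDED_AFTER_FIRST[history[0]]  # set by the first mirroring
    for d in ["F", "B", "L", "R", "U", "D"]:
        if OPPOSITE[d] in history:             # never reverse a previous direction
            continue
        if d == excluded:                      # never mirror in the excluded direction
            continue
        if d in history and d != history[-1]:  # never return to an abandoned direction
            continue
        allowed.append(d)
    return allowed

def count_rooms(depth):
    # Expand all mirror sequences up to `depth` iterations and count the rooms.
    frontier, total = [()], 0
    for _ in range(depth):
        frontier = [h + (d,) for h in frontier for d in allowed_directions(h)]
        total += len(frontier)
    return total

Expanding three iterations with these rules, count_rooms(3) gives 62 rooms with no duplicates, matching the tree ofFIG.9discussed below.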
The criterion used to determine how many iterations are performed may depend on the preferences and requirements of the individual application. In many embodiments, a predetermined number of iterations may be performed corresponding to a predetermined maximum number of reflections. In other embodiments, a more adaptive criterion may be used, such as for example the iterations being continued until the combined attenuation factor (the combined reflection factor) for all the generated mirror sound sources is below a threshold. Thus, for such an implementation, the iterations may be repeated until the reflected signal is considered so weak that it can be ignored. It will be appreciated that any stop criterion may be used to generate a model with desired properties and/or to ensure that the process has desired properties. For example, the iterations may be continued until either all attenuation factors are below a threshold or a predetermined number of iterations have been performed. In many embodiments, the generated model may be used to render an audio signal for the original sound source at a given listening position in the original room. Step315may specifically include a rendering by the renderer205based on the model generated in the previous steps. The rendering may specifically include determining audio components for each sound source corresponding to direct (non-reflected) paths from each source to the listening position. Further, for each path, the signal may be attenuated by an attenuation factor that is determined to correspond directly to the path length and a combined attenuation factor which corresponds to the combined attenuation by all boundaries crossed by the path. Furthermore, many embodiments may delay the signal by a delay which corresponds directly to the path length, simulating the time-of-flight from the (virtual) sound source to the listener, using the speed of sound to determine the delay from the path length. Thus, each audio component emulates one early reflection and the combined audio reaching the listening position can be generated by combining all the audio components (including that directly from the original source) plus optionally a late reverberation component (which may be generated using any suitable means, such as e.g. a Jot reverberator). The rendering of audio components as direct, non-reflected propagation from the virtual sound sources provides an efficient emulation of reflections in the room/acoustic environment, thereby allowing a rendering to be generated that is perceived as natural and realistic sound. It will be appreciated that many rendering algorithms are known (including spatial rendering algorithms using spatial speaker configurations or binaural processing for headphone reproduction), and that any suitable approach may be used. As described, each boundary of the original room may be associated with acoustic properties, and specifically the room data may describe an attenuation or reflection factor for each boundary. The attenuation/reflection factor may specifically indicate an attenuation of an acoustic signal being reflected by the wall, i.e. it may indicate the level difference/ratio between an incoming audio signal and a reflected audio signal. The attenuation/reflection factor may be frequency dependent and may for example directly correspond to a frequency dependent filtering of the incoming audio signal. The attenuation factor for a boundary will depend on the acoustic properties of the boundary, and specifically the material of the element making up the boundary. Some materials will result in a strong reflection (e.g. tiles) whereas other materials are more acoustically dead (e.g. shag carpet) and will attenuate the sound so that only a substantially smaller signal is reflected. This may be indicated by the attenuation factor. For each virtual source, the path to the original room crosses a number of boundaries, with the number being equal to the iteration in which the room was generated. Further, each boundary being crossed corresponds to/models a reflection in the real room. For example, a virtual sound source crossing two boundaries in order to reach the original room models a path in the original room formed by two reflections. Further, the two reflections have attenuation factors, and when assigning these attenuation factors to the mirror boundaries, the attenuation factor for crossing a mirror boundary directly reflects the effect of the reflection that it models. In many embodiments, a combined reflection/attenuation factor may be determined for each virtual sound source by combining the attenuation factors of the boundaries around which mirroring has been performed in order to generate the virtual mirror sound source and corresponding mirror room. Thus, a combined attenuation factor can be generated for a mirror sound source by combining attenuation factors for all boundaries included in the mirroring leading to the mirror room comprising the mirror sound source. Thus, this combined attenuation factor reflects the combined reflection attenuation of all reflections for the early reflection modelled by the virtual sound source.
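As a small illustration of these per-source rendering parameters, the sketch below derives the time-of-flight delay and a gain for one virtual source. The 1/distance spreading law and the speed-of-sound constant are assumptions commonly used for point sources, not values prescribed by the embodiment:

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at room temperature

def virtual_source_params(virtual_pos, listener_pos, combined_reflection_coeff):
    distance = np.linalg.norm(np.asarray(virtual_pos, dtype=float)
                              - np.asarray(listener_pos, dtype=float))
    delay_s = distance / SPEED_OF_SOUND  # delay proportional to path length
    # Distance-dependent path loss (assumed 1/r) scaled by the combined
    # reflection attenuation of all boundaries crossed by the path.
    gain = combined_reflection_coeff / max(distance, 1e-6)
    return delay_s, gain

Note that only the distance-dependent part changes when the listener moves; the combined reflection coefficient is fixed per virtual source, as described above.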
The combined attenuation factor may thus be used by the rendering to determine e.g. the signal level and/or frequency distribution for the audio component reaching the listening position. Further, this may be independent of the specific position of the listening position in the original room, and thus only the distance dependent path loss attenuation needs to be specifically determined for the specific current listening position. In some embodiments, the selection criterion may comprise a requirement that a combined attenuation factor for the source sound source combined with an attenuation factor for the mirror boundary must indicate an attenuation below a threshold. Thus, in order for a boundary to be accepted as a mirror boundary to generate a new mirror room and mirror sound source, it is required that this mirror sound source will not be attenuated by more than a given amount. Thus, the mirroring progression is terminated when it results in a reflected path which attenuates the original sound source to an extent where it can be considered not to contribute to the perception of the original sound. This may result in reduced complexity and resource demands in many embodiments. The use of an attenuation factor may also allow modelling of very specific scenarios. Specifically, it may allow efficient modelling of a room in which one (or more) boundaries are acoustically dead or transparent with no reflections being generated whatsoever. Specifically, an acoustically non-reflective boundary may be represented by an attenuation factor which is indicative of complete attenuation, i.e. with no reflected signal being generated. Thus, for a non-reflective element forming a boundary, an attenuation factor of 100% may be assigned (corresponding to a reflection coefficient of zero). Accordingly, any virtual mirrored sound source generated by a sequence of mirrorings that includes this boundary as a mirror boundary will result in a combined attenuation factor of 100% and thus will not generate any audio component, corresponding to the fact that any reflected path that includes this boundary will not reach the listening position. Indeed, the approach of setting the attenuation factor to 100% attenuation may also be applied to a boundary that does not include any physical element, such as a missing wall or ceiling. In many embodiments, this may be combined with the selection of the boundaries for the set of mirror boundaries excluding boundaries for which the resulting combined reflection coefficient falls below a given threshold, such that any mirror sequence is stopped when it reaches a non-reflective wall. In some embodiments, the threshold may be adaptive, for example dependent on the order of the reflection, or dependent on the relative (current, time-limited) level of the original sound source signal. The described approach may thus be used to generate an acoustic image source model for early reflections in a room by iteratively mirroring rooms around boundaries (e.g. walls) of the rooms of the previous iteration. The boundaries around which to mirror in each iteration are determined by a specific selection criterion including requiring that mirror directions cannot be reversed, cannot be in an excluded direction, and cannot be repeated unless in a continuous series of mirrorings. The approach may sequentially expand a model to include modelling of higher and higher order (i.e. more) reflections.
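A sketch of the attenuation-based pruning rule mentioned above, using the reflection coefficient convention of the pseudo-code below (1.0 = fully reflective, 0.0 = acoustically dead) and an illustrative threshold value:

def keep_mirroring(combined_coeff, boundary_coeff, min_coeff=0.01):
    # Continue the mirror sequence only while the product of all reflection
    # coefficients along the path stays audible; a non-reflective or missing
    # boundary (coefficient 0.0) terminates the sequence immediately.
    return combined_coeff * boundary_coeff >= min_coeff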
An example of a tree representing the model is shown in FIG. 9, which shows the algorithm's progression to find all the mirrored rooms for third order reflections (i.e. a tree depth of 3 mirroring iterations). In FIG. 9, the Up, Down, Left, Right, Forwards, and Backwards directions are represented by respectively U, D, L, R, F, and B. In this example, the dimensions and directions are represented by forward-backward, left-right and up-down. Each node in the graph represents a room, with the first node being the original room and the remaining nodes representing 62 mirrored versions of the original room.

The approach may allow a very efficient algorithm to be used to generate a model that may be highly accurate, and which may be used to render audio such that this is perceived as realistic and natural sounding. The approach may in particular reduce the computational complexity and/or the required computational resources, allowing an implementation using less computational power. Specifically, in comparison to other approaches for generating an image source model, it may typically reduce the number of mirror operations that are required to determine the mirrored virtual sound sources, thereby substantially reducing the computational resource requirement. It may also avoid the post-processing typically associated with having to resolve duplicate virtual sound sources. A much more efficient process can typically be achieved.

The described approach for generating an image source model may typically be part of an initialization component/routine that needs to be run at least once for a room and which derives a set of virtual mirror sources representing reflections of the original sound sources. In cases where sound sources are moving, the image source model may be recalculated, or partially recalculated for the one or more moving sources.

Thus, in many embodiments, the approach may be based on an iterative process with each iteration comprising two steps for each mirror room defined in the previous iteration. These steps may, for a given room (a source room), include:

1. Determine a set of mirror boundaries across which the source room and at least one point in the room are to be mirrored to find higher order reflections.

2. Mirror the source room and at least one (source) position/sound source across each of the boundaries in the set of mirror boundaries, and update the combined reflection coefficient (attenuation factor) to include the reflection coefficient corresponding to the boundary mirrored across.

Many embodiments may perform the iterations with a recursive process, where the mirrored rooms are used as the source rooms for the next iteration.
An example of a process in pseudo-code is provided in the following:

    function [reflectionList, reflectionAttList] = optimalImageSource(roomDef, srcPos, reflectionAtt, state)
        [mirrorBoundarySet, state] = getSetOfMirrorBoundaries(roomDef, state);
        idx = 0;
        for b = mirrorBoundarySet
            idx = idx + 1;
            mirroredRoomDef(idx) = mirrorRoom(roomDef, roomDef.boundary(b));
            mirroredSrcPos(idx) = mirrorSrc(srcPos, roomDef.boundary(b));
            mirroredReflAtt(idx) = reflectionAtt * roomDef.reflectionCoeff(b);
        end
        reflectionList = mirroredSrcPos;
        reflectionAttList = mirroredReflAtt;
        state.order = state.order + 1;
        if (state.order < state.maxOrder)
            for idx = 1:length(mirrorBoundarySet)
                [reflectionListPart, reflectionAttListPart] = optimalImageSource(mirroredRoomDef(idx), mirroredSrcPos(idx), mirroredReflAtt(idx), state);
                reflectionList = concat(reflectionList, reflectionListPart);
                reflectionAttList = concat(reflectionAttList, reflectionAttListPart);
            end
        end

which could be initialized and started with:

    maxOrder = 5;
    state = initOptimalImageSource(maxOrder);
    [reflectionList, reflectionAttList] = optimalImageSource(roomDef, srcPos, 1, state);

Rooms are often rectangular (a.k.a. the shoebox model) or can be approximated by a rectangular equivalent. The boundaries of such a rectangular room model are not necessarily aligned with the coordinate system in which they are defined, as illustrated in FIG. 10. This introduces two issues that complicate the image source method:

1. Mirroring a point across a boundary is no longer a straightforward subtraction and addition in a single dimension but involves calculations in two or even three dimensions at the same time.

2. The three mirroring dimensions (forward-backward, left-right and up-down) of the source room do not map directly to the coordinate system.

One approach would be to align the room with the coordinate system by rotating every room definition coordinate and the source position coordinates, calculate the virtual sound source positions, and rotate all of these back with the inverse rotation. Another approach would be to assign the mirroring dimensions arbitrarily to the three parallel boundary pairs defining the room, and to perform the mirroring using geometry math. Exemplary approaches based on the latter option will be described in the following.

As a first step, the three pairs of parallel boundaries defining the original room must be found and assigned to the three mirroring dimensions. This mapping can be arbitrarily chosen; no specific order is needed. An exception is if the reflections are only calculated in the horizontal plane, e.g. in order to further reduce computational complexity. In that case, the floor and ceiling pair must be detected, which is possible by finding the boundaries of which the normal vectors are closest to the up-axis of the coordinate system, i.e. the boundaries for which the normalized normal vectors have the largest absolute dot product with the up-axis:

\max_a \left| \bar{N}_a \cdot \bar{i}_z \right|

To find the pairs, the normal vector of each boundary is calculated (a more detailed explanation is given in the section on mirroring points, below). A correlation matrix of all pairs of normal vectors allows finding the pairs and furthermore allows verification that the room model is rectangular. Each element c_{ij} = c_{ji} of the correlation matrix C contains the dot product of the normal vectors of the boundaries with indices i and j. All values should be very close to either 0, 1 or −1 for rectangular room definitions.
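To make the pair-finding step concrete, the sketch below computes the boundary normals and their pairwise dot products, and also includes the point-mirroring calculation that is derived in detail in the following paragraphs. It assumes each boundary is given as a list of its corner coordinates and uses numpy; all names are illustrative.

    import numpy as np

    def boundary_normal(corners):
        # Normalized normal vector of a planar boundary from three of its corners.
        p, q, r = (np.asarray(c, dtype=float) for c in corners[:3])
        n = np.cross(q - p, r - p)      # orthogonal to two in-plane vectors
        return n / np.linalg.norm(n)

    def find_parallel_pairs(boundaries, tol=1e-3):
        # Pair up boundaries whose normals have a dot product close to +1 or -1;
        # dot products near 0 indicate perpendicular walls, as expected for a
        # rectangular (shoebox) room definition.
        normals = [boundary_normal(b) for b in boundaries]
        pairs, used = [], set()
        for i in range(len(normals)):
            if i in used:
                continue
            for j in range(i + 1, len(normals)):
                if j not in used and abs(abs(float(np.dot(normals[i], normals[j]))) - 1.0) < tol:
                    pairs.append((i, j))
                    used.update((i, j))
                    break
        return pairs

    def mirror_point(s, corners):
        # Mirror point s across the plane through a boundary: s' = s + 2*alpha*N.
        n = boundary_normal(corners)
        d = float(np.dot(n, np.asarray(corners[0], dtype=float)))  # plane: N . m = d
        alpha = d - float(np.dot(np.asarray(s, dtype=float), n))   # signed distance
        return np.asarray(s, dtype=float) + 2.0 * alpha * n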
Pairs for which the values are close to 1 or −1 are parallel pairs and can therefore be considered as one of the three dimensions along which to mirror.

As part of the process, the source room definition has to be mirrored across a plane defined by one of the source room's boundaries. Other positions inside the source room, e.g. a (reflection) sound source position, can be mirrored across the same plane. Mirroring points across planes is a well-known mathematical procedure and can be performed using the normal vector of the plane. This vector, a direction vector, is perpendicular to all vectors inside the plane and can be used to determine the point in the plane with the shortest distance to the point to be mirrored. This is the mirror point. Finding this point allows mirroring of the point to the opposite side of the plane by flipping the sign of the direction vector connecting the mirror point to the point to be mirrored, or equivalently by doubling the length of the direction vector connecting the point to be mirrored to the mirror point.

The normal vector of a plane can be derived from two vectors, or three points, in the plane. A particularly advantageous way to define the boundaries of a rectangular room is by the coordinates of the four corners of the (rectangular) boundary. Choosing three of these four coordinates is then sufficient to calculate the normal vector. In alternative representations, a room and its boundaries may be defined as a mesh of 3-point polygons; similarly, the three vertices of one of the polygons defining a boundary can be used to calculate the normal vector.

With the three selected points in the plane denoted as \bar{p}, \bar{q} and \bar{r}, the normal vector is the vector which is orthogonal to two vectors defined by the points, e.g.:

\bar{V} = [v_1\ v_2\ v_3] = \bar{q} - \bar{p}

\bar{W} = [w_1\ w_2\ w_3] = \bar{r} - \bar{p}

Taking the cross-product of the two vectors results in this orthogonal normal vector:

\bar{N}_s = [n_1\ n_2\ n_3] = \bar{V} \times \bar{W} = [v_2 w_3 - w_2 v_3,\ v_3 w_1 - w_3 v_1,\ v_1 w_2 - w_1 v_2]

As a means to reduce complexity in the further calculations, the normal vector can be normalized:

\bar{N} = \frac{\bar{N}_s}{\sqrt{n_1^2 + n_2^2 + n_3^2}}

The result is a normalized direction vector starting in the origin, perpendicular to the (infinite) plane through the room boundary. The normal vector itself is not sufficient to define the plane. The plane's equation is:

\bar{N} \cdot \bar{m} = n_1 m_1 + n_2 m_2 + n_3 m_3 = d

for any point \bar{m} in the plane. Hence, for example using any of the points \bar{p}, \bar{q} or \bar{r}, the value d can be calculated. Next, the vector connecting the point to be mirrored (\bar{s}) to the mirror point (\bar{u}) can be expressed as

\bar{s} + \alpha \bar{N} = \bar{u}

where \alpha is used to scale the direction vector to the correct length and sign. The mirror point \bar{u} must lie in the plane, therefore:

\bar{N} \cdot \bar{u} = \bar{N} \cdot (\bar{s} + \alpha \bar{N}) = d

Working this out results in:

\alpha = d - \bar{s} \cdot \bar{N}

With this approach the mirrored point (\bar{s}') is found by calculating:

\bar{s}' = \bar{s} + 2 \alpha \bar{N}

In most embodiments, the attenuation resulting from the reflections is also calculated for each of the mirror sources. Thus, in many embodiments, for each mirroring operation the reflection attenuation for the sources in the mirrored room is calculated and represented by a combined attenuation factor. In the following, the attenuation factors are represented directly by reflection coefficients, but it will be appreciated that the attenuation may in many embodiments e.g. be frequency dependent. The reflection attenuation of the source room is combined with the reflection coefficient of the boundary across which it was mirrored. The reflection coefficients of the boundaries may be broadband or frequency dependent. For example,
the attenuation factor may be represented by FIR/IIR filter coefficients or by attenuation coefficients in frequency bands/bins, e.g.:

reflAtt(f) = reflAttIn(f) * reflCoeff(mirrorBoundIdx, f)

Reflection coefficients may not necessarily be uniform across the entire boundary. In those cases, the reflection coefficients used could be made uniform for the entire boundary by calculating an average reflection coefficient over the surface of the room boundary. Similarly, an average for all the boundaries of the room could be calculated. In more accurate embodiments, the mirror point in the boundary plane (\bar{u}) can be calculated for each mirrored sound source position and used to determine which reflection coefficient is applicable for that source.

Following the rules outlined earlier, a subset of the (typically) six boundaries is selected as the set of mirror boundaries for every source room in the iterative process. In each iteration, each mirror room generated by the (immediately) previous iteration may be considered a source room, i.e. each newly created mirror room may be evaluated for potential further mirroring in the next iteration.

Each boundary of the original room designates/represents a direction (e.g. the direction of the normal vector for the boundary, pointing outwards from the room). As the boundaries are pairwise parallel, each pair of boundaries defines one dimension with two directions (corresponding to the two boundaries defining the mirroring dimension). For example, the pairs of parallel boundaries may be represented by (where b_i represents boundary i):

D_1 = [b_1, b_5]

D_2 = [b_2, b_4]

D_3 = [b_3, b_6]

In the first iteration (i.e. generating the first order reflections), all directions are allowed, so the set of mirror boundaries for the source room corresponding to the original room contains all boundaries of the source room. This results in six (or four in the case of 2D modelling) branches from which further iterations calculate higher order reflections, e.g. B = [b_1, b_5, b_2, b_4, b_3, b_6]. Thus, six new mirror rooms and six new mirror sound sources are generated in the first iteration.

In any of these branches (e.g. after mirroring over boundary b_4), the next (i.e. second) iteration may continue in the same direction with respect to the original room. This corresponds to a mirror across the other boundary in the corresponding dimension boundary pair (since it is a mirroring over a boundary of a room generated by mirroring the original room), in this example b_2. This alternation of boundaries in each dimension is also shown in FIGS. 6 and 7.

Depending on the dimension along which the previous iteration in the branch went, the directions may be more limited. E.g. if the previous step was along the first dimension, only a single direction of the second dimension is allowed by virtue of the excluded direction, which is in the second dimension. However, as directions have only been in two dimensions, both directions of the third dimension are acceptable. For example, when going in a first direction of the first dimension, the associated excluded direction results in only a first direction being allowed in the second dimension, and when going in the second direction of the first dimension, only a second direction of the second dimension is allowed.
In the same example, when the first step was in the first direction of the second dimension, only the second direction of the first dimension is allowed, and when the first step was in the second direction of the second dimension, only the first direction of the first dimension is allowed. This inverse relation between the allowed direction in the second step, depending on whether the first step was in the first dimension (first-first, second-second) or in the second dimension (first-second, second-first), prevents the overlapping of mirrored rooms while not omitting mirrored rooms. Still in the same example, if any mirroring step is along the third dimension, all subsequent mirrorings may only be in that direction, and not along any other directions or dimensions.

It should be clear that the notion of first, second and third dimension in the above does not have to relate to the order in which the dimension pairs have been defined, and 'first', 'second' and 'third' may be interchanged when related to the dimensions. Similarly, 'first' and 'second' may also be interchanged when related to the directions within the dimensions. It is repeated that directions within dimensions are, in the above example, considered with reference to the original room, while the boundaries related to a certain direction in a dimension alternate with each mirroring step in that direction. Branches that have changed dimension cannot go back to mirroring in an earlier dimension. E.g. a branch that in the first step mirrored in the second dimension and in the second step along the first dimension may only continue to mirror in that direction of the first dimension, and in any direction of the third dimension.

Advanced embodiments may consider attenuation factors, such as reflection coefficients or the total reflection attenuation, when determining the set of allowed directions. This may further reduce computational complexity. For example, if a certain direction is allowed according to the rules described above, but the reflection coefficient of the corresponding boundary is below a certain threshold (e.g. smaller than 0.05 or, alternatively, smaller than or equal to −20 dB), the direction may be excluded from the set of mirror boundaries. Additionally, or alternatively, a rule may be included that when the combined reflection attenuation is below a certain threshold (e.g. smaller than 0.02) the boundary is excluded. For frequency dependent coefficients, the threshold may be frequency dependent, or may relate to a weighted average coefficient over all frequency bands, a maximum coefficient over all frequency bands, or the coefficient at a certain frequency (e.g. 1000 Hz). Similarly, for reflection coefficients that differ between regions within the boundary, the threshold may apply to individual sound source positions or use a reflection coefficient averaged over the whole room boundary.

It will be appreciated that the above description for clarity has described embodiments of the invention with reference to different functional circuits, units and processors. However, it will be apparent that any suitable distribution of functionality between different functional circuits, units or processors may be used without detracting from the invention. For example, functionality illustrated to be performed by separate processors or controllers may be performed by the same processor or controller.
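The direction bookkeeping above can be sketched as follows, with the mirroring history of a branch stored as (dimension, direction) steps relative to the original room. This is a partial illustration: it implements the no-reversal and no-repetition rules together with the optional attenuation pruning, while the excluded-direction requirement is omitted for brevity; all names and threshold values are illustrative.

    def allowed_mirror_boundaries(history, boundary_for, refl_coeff, combined_att,
                                  min_coeff=0.05, min_combined=0.02):
        # history: list of (dimension, direction) steps leading to this source
        #          room, with direction +1 or -1 relative to the original room.
        # boundary_for: dict mapping (dimension, direction) -> boundary index.
        # refl_coeff: reflection coefficient per boundary index.
        # combined_att: combined reflection attenuation of this source room.
        allowed = []
        for (dim, direction), boundary in boundary_for.items():
            # Never reverse a direction already taken in this branch.
            if (dim, -direction) in history:
                continue
            # A direction may only be repeated as a continuous run, i.e. if it
            # was also the immediately preceding step.
            if (dim, direction) in history and history[-1] != (dim, direction):
                continue
            # Optional pruning on attenuation (advanced embodiments).
            if refl_coeff[boundary] < min_coeff:
                continue
            if combined_att * refl_coeff[boundary] < min_combined:
                continue
            allowed.append(boundary)
        return allowed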
Hence, references to specific functional units or circuits are only to be seen as references to suitable means for providing the described functionality rather than indicative of a strict logical or physical structure or organization. The invention can be implemented in any suitable form including hardware, software, firmware or any combination of these. The invention may optionally be implemented at least partly as computer software running on one or more data processors and/or digital signal processors. The elements and components of an embodiment of the invention may be physically, functionally and logically implemented in any suitable way. Indeed, the functionality may be implemented in a single unit, in a plurality of units or as part of other functional units. As such, the invention may be implemented in a single unit or may be physically and functionally distributed between different units, circuits and processors. Generally, examples of an apparatus and method for determining virtual sound sources are indicated by the embodiments below.

EMBODIMENTS

1. A method of determining virtual sound sources representing reflections of a first sound source in a first room, the method comprising a computer performing the steps of: receiving data describing boundaries of the first room and a sound source position for the first sound source in the room; iteratively determining the virtual sound sources as mirrored sound sources by performing sound source mirroring of sound sources determined in a previous iteration, each iteration comprising, for each source room of a set of source rooms comprising mirror rooms determined in an immediately previous iteration, performing the steps of: determining (303) a set of mirror boundaries for the source room; for each mirror boundary of the set of mirror boundaries determining (305) a mirror room by mirroring the source room around the mirror boundary, and determining a mirror sound source by mirroring a source sound source around the mirror boundary, the source sound source being a mirror sound source of the source room, the mirroring having a direction of mirroring from the source room to the mirror room; wherein the determining (303) of the set of mirror boundaries includes selecting boundaries of the source room in accordance with a selection criterion comprising: a requirement that for a candidate boundary of the source room to be included in the set of mirror boundaries, a first direction of mirroring for the candidate boundary must not be in an opposite direction of a direction of mirroring for any previous mirroring leading to the source room; a requirement that for the candidate boundary to be included in the set of mirror boundaries, the first direction must not be in an excluded direction, the excluded direction being dependent on a boundary of the first room around which mirroring leading to the source room was performed; and a requirement that for the candidate boundary to be included in the set of mirror boundaries, the first direction must not be in a same direction as any direction of mirroring for any previous mirroring leading to the source room except for a direction of mirroring of a mirroring generating the source room in the immediately previous iteration. 2.
The method of claim 1 wherein the selection criterion comprises a requirement that if a second direction of mirroring for any previous mirroring leading to the source room is perpendicular to the excluded direction and to a mirror direction of the mirroring for the first room leading to the source room, then for the candidate boundary to be included in the set of mirror boundaries, the first direction must be the same as the second direction. 3. The method of claim 1 wherein the first room has a pair of reference directions of mirroring being in opposite directions, and the selection criterion comprises a requirement that if a second direction of mirroring for any previous mirroring leading to the source room is in a direction belonging to the pair of associated reference directions of mirroring, then for the candidate boundary to be included in the set of mirror boundaries, the first direction must be the same as the second direction. 4. The method of any previous claim wherein for a first iteration the first room is designated a source room of the set of source rooms for the first iteration. 5. The method of any previous claim wherein all boundaries of the first room are included in the set of mirror boundaries for the first iteration. 6. The method of any previous claim wherein each boundary of the first room is associated with an attenuation factor, and the method comprises determining a combined attenuation factor for each mirror sound source by combining attenuation factors for all boundaries included in mirroring leading to the mirror room comprising the mirror sound source. 7. The method of claim 6 wherein the selection criterion comprises a requirement that for the candidate boundary to be included in the set of mirror boundaries, a combined attenuation factor for the source sound source combined with an attenuation factor for the candidate boundary must indicate an attenuation below a threshold. 8. The method of claim 6 or 7 wherein the combined attenuation factor is frequency dependent. 9. The method of any of claims 6 to 8 wherein an attenuation factor for an acoustically non-reflective boundary is indicative of complete attenuation. 10. The method of any previous claim further comprising rendering (309) an audio signal for a listening position in the first room, the audio signal including at least one audio component representing audio from at least one mirror audio source arriving at the listening position. 11. The method of any previous claim wherein the set of mirror boundaries include all boundaries that meet the selection criterion. 12. The method of any previous claim wherein a predetermined number of iterations are performed. 13. The method of any previous claim wherein the first room is an orthotope. 14.
An apparatus for determining virtual sound sources representing reflections of a first sound source in a first room, the apparatus comprising: a receiver arranged to receive data (201) describing boundaries of the first room and a sound source position for the first sound source in the room; a processing circuit (203) arranged to iteratively determine the virtual sound sources as mirrored sound sources by performing sound source mirroring of sound sources determined in a previous iteration, each iteration comprising, for each source room of a set of source rooms comprising mirror rooms determined in an immediately previous iteration, performing the steps of: determining (303) a set of mirror boundaries for the source room; for each mirror boundary of the set of mirror boundaries determining (305) a mirror room by mirroring the source room around the mirror boundary, and determining a mirror sound source by mirroring a source sound source around the mirror boundary, the source sound source being a mirror sound source of the source room, the mirroring having a direction of mirroring from the source room to the mirror room; wherein the determining of the set of mirror boundaries includes selecting boundaries of the source room in accordance with a selection criterion comprising: a requirement that for a candidate boundary of the source room to be included in the set of mirror boundaries, a first direction of mirroring for the candidate boundary must not be in an opposite direction of a direction of mirroring for any previous mirroring leading to the source room; a requirement that for the candidate boundary to be included in the set of mirror boundaries, the first direction must not be in an excluded direction, the excluded direction being dependent on a boundary of the first room around which mirroring leading to the source room was performed; and a requirement that for the candidate boundary to be included in the set of mirror boundaries, the first direction must not be in a same direction as any direction of mirroring for any previous mirroring leading to the source room except for a direction of mirroring of a mirroring generating the source room in the immediately previous iteration. 15. A computer program product comprising computer program code means adapted to perform all the steps of claims 1-13 when said program is run on a computer. 1.
A method of determining virtual sound sources representing reflections of a first sound source in a first room, the method comprising a computer performing the steps of: receiving data describing boundaries of the first room and a sound source position for the first sound source in the room; iteratively determining the virtual sound sources as mirrored sound sources by performing sound source mirroring of sound sources determined in a previous iteration, each iteration comprising, for each source room of a set of source rooms comprising mirror rooms determined in an immediately previous iteration, performing the steps of: determining (303) a set of mirror boundaries for a source room of a/the current step; for each mirror boundary of the set of mirror boundaries determining (305) a mirror room by mirroring the source room around the mirror boundary, and determining a mirror sound source by mirroring a source sound source around the mirror boundary, the source sound source being a mirror sound source of the source room, the mirroring having a direction of mirroring from the source room to the mirror room; wherein the determining (303) of the set of mirror boundaries includes selecting boundaries of the source room in accordance with a selection criterion comprising: a requirement that for a candidate boundary of the source room to be included in the set of mirror boundaries, a first direction of mirroring for the candidate boundary must not be in an opposite direction of a direction of mirroring for any previous mirroring leading to the source room; a requirement that for the candidate boundary to be included in the set of mirror boundaries, the first direction must not be in an excluded direction, the excluded direction being dependent on a boundary of the first room around which mirroring leading to the source room was performed; and a requirement that for the candidate boundary to be included in the set of mirror boundaries, the first direction must not be in a same direction as any direction of mirroring for any previous mirroring leading to the source room except for a direction of mirroring of a mirroring generating the source room in the immediately previous iteration. 1. 
A method of determining virtual sound sources representing reflections of a first sound source in a first room, the method comprising: receiving data describing boundaries of the first room and a sound source position for the first sound source in the room; iteratively determining the virtual sound sources as mirrored sound sources by performing sound source mirroring of sound sources determined in a previous iteration, each iteration comprising, for each source room of a set of source rooms comprising mirror rooms determined in an immediately previous iteration, performing the steps of: determining (303) a set of mirror boundaries for the each source room; for each mirror boundary of the set of mirror boundaries determining (305) a mirror room by mirroring the each source room around the mirror boundary, and determining a mirror sound source by mirroring a source sound source around the mirror boundary, the source sound source being a mirror sound source of the each source room, the mirroring having a direction of mirroring from the each source room to the mirror room; wherein the determining (303) of the set of mirror boundaries includes selecting boundaries of the each source room in accordance with a selection criterion comprising: a requirement that for a candidate boundary of the each source room to be included in the set of mirror boundaries, a first direction of mirroring for the candidate boundary must not be in an opposite direction of a direction of mirroring for any previous mirroring leading to the each source room; a requirement that for the candidate boundary to be included in the set of mirror boundaries, the first direction must not be in an excluded direction, the excluded direction being dependent on a boundary of the first room around which mirroring leading to the each source room was performed; and a requirement that for the candidate boundary to be included in the set of mirror boundaries, the first direction must not be in a same direction as any direction of mirroring for any previous mirroring leading to the each source room except for a direction of mirroring of a mirroring generating the each source room in the immediately previous iteration. More specifically, the invention is defined by the appended CLAIMS. Although the present invention has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present invention is limited only by the accompanying claims. Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognize that various features of the described embodiments may be combined in accordance with the invention. In the claims, the term comprising does not exclude the presence of other elements or steps. Furthermore, although individually listed, a plurality of means, elements, circuits or method steps may be implemented by e.g. a single circuit, unit or processor. Additionally, although individual features may be included in different claims, these may possibly be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. Also, the inclusion of a feature in one category of claims does not imply a limitation to this category but rather indicates that the feature is equally applicable to other claim categories as appropriate. 
Furthermore, the order of features in the claims does not imply any specific order in which the features must be worked, and in particular the order of individual steps in a method claim does not imply that the steps must be performed in this order. Rather, the steps may be performed in any suitable order. In addition, singular references do not exclude a plurality. Thus references to "a", "an", "first", "second" etc. do not preclude a plurality. Reference signs in the claims are provided merely as a clarifying example and shall not be construed as limiting the scope of the claims in any way.
11943607
SUMMARY

Example embodiments include methods and apparatus that switch between binaural sound and mono or stereo sound. One example embodiment is a method that switches from binaural sound to one of mono sound or stereo sound in response to head movements of a listener. The method provides the listener with binaural sound at a sound localization point (SLP) in a field-of-view of the listener. Binaural sound at the SLP switches to one of mono or stereo sound when head movements of the listener cause the SLP to move outside the field-of-view. Other example embodiments are discussed herein.

DETAILED DESCRIPTION

Binaural sound or three-dimensional (3D) sound externally localizes away from a head of the listener, unlike stereo or mono sound that localizes inside the head of the listener or localizes to a physical sound speaker or loudspeaker. Thus, when a listener hears binaural sound, a source or location of the sound occurs outside the head of the listener even though this location may be in empty space or space not occupied with a physical sound speaker or loudspeaker. Binaural sound has many technical challenges and problems, especially when users exchange binaural sound in an electronic communication or play binaural sound in an augmented reality (AR) or virtual reality (VR) environment. Example embodiments offer solutions to these challenges and problems.

Problems occur during an electronic communication and in AR and VR environments when listeners move their heads while listening to binaural or three-dimensional (3D) sound. As one example, when the head of the listener moves with respect to a sound localization point (SLP) of binaural sound, the sound must be repeatedly processed or convolved in order for the listener to continue to hear the sound as originating from the SLP. Maintaining binaural sound at the SLP while the listener moves his or her head is a process intensive task. If the processor cannot process the sound fast enough, then the SLP can unexpectedly move and create confusion or an unrealistic audio environment.

As another example, processing binaural sound to have accurate and consistent SLPs behind the listener is challenging. A precise location or origin of a sound source is more difficult to determine when this sound source occurs behind the head of the listener. This difficulty occurs, in part, because the listener is unable to see the sound source and must rely on hearing to determine the SLP or the location of the sound source. By contrast, when the sound source occurs in front of the listener, such as in the field-of-view (FOV), the listener can determine a location of this sound source based on both visual and audio information (e.g., the listener can both see and hear the sound source).

Example embodiments solve these problems and others. These example embodiments include methods and apparatus that switch or change a format of how sound is provided to the listener (e.g., mono, stereo, and binaural sound) based on changes to head movements and/or to a field-of-view of the listener. For example, one example provides the listener with binaural sound at a sound localization point (SLP) in a field-of-view of the listener. Binaural sound at the SLP switches to one of mono or stereo sound when head movements of the listener cause the SLP to move outside the field-of-view. The mono or stereo sound switches back to binaural sound when the head movements of the listener cause the SLP to move back inside the field-of-view.
In this way, the binaural sound is maintained in the field-of-view of the listener and switched to another format when the SLP is no longer visible or in the field-of-view. These movements of the SLP can also occur when the SLP itself moves (e.g., an image in the FOV of the listener moves outside the FOV).

FIG. 1 is a method that switches to and from binaural sound and mono or stereo sound in accordance with an example embodiment.

Block 100 states provide sound as binaural sound to a listener.

Binaural sound is provided to the listener through one or more electronic devices including, but not limited to, one or more of headphones, earphones, earbuds, bone conduction devices, or other electronic devices with speakers at, in, or near the ears of the listener. Binaural sound can be processed for crosstalk cancellation and provided through speakers separate or away from the listener (e.g., dipole stereo speakers). Electronic devices in communication with or formed as part of headphones, earphones, and earbuds can provide binaural sound to the listener (e.g., a smartphone in wireless communication with earphones). Various types of electronic devices can include or be in communication with speakers to provide binaural sound to listeners. Examples of these electronic devices include, but are not limited to, wearable electronic glasses, smartphones, head mounted displays (HMDs), optical head mounted displays (OHMDs), wearable electronic devices (WEDs), portable electronic devices (PEDs), handheld portable electronic devices (HPEDs), laptop computers, tablet computers, desktop computers, and other electronic devices.

From the point-of-view of the listener, the sound originates or emanates from an object, point, area, or direction. This location for the origin of the sound is the sound localization point (SLP). By way of example, the SLP can be an actual point in space (e.g., an empty point in space 1-2 meters away from the head of the listener) or a point on or at a physical or virtual object (e.g., a mouth or head of an augmented reality (AR) or virtual reality (VR) image). The SLP does not have to be so precise since humans are not always able to localize sound to a particular point. As such, the SLP can also be a specific or general area (e.g., a location next to and on the right side of the listener) or a specific or general direction from where the sound originates to the listener (e.g., a location several meters behind the listener).

When binaural sound is provided to the listener, the listener will hear the sound as if it originates from the sound source, the source of sound, or the SLP. The sound, however, does not originate from the sound source since the sound source or SLP may be an inanimate object with no electronics or an animate object with no electronics. Alternatively, the sound source or SLP has electronics but does not have the capability to generate sound (e.g., the sound source has no speakers or sound system). As yet another example, the sound source or SLP has speakers and the ability to provide sound but is not providing sound to the listener. In each of these examples, the listener perceives the sound to originate from the sound source or SLP, but the sound source or SLP does not produce the sound. Instead, the sound is processed or convolved and provided to the listener so the sound appears to originate from the sound source or SLP.
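For context, the convolution mentioned here is typically performed with a pair of head-related impulse responses (HRIRs), one per ear. The sketch below illustrates the idea in Python with numpy; the hrir_left and hrir_right arrays are placeholders for measured responses for the direction of the SLP.

    import numpy as np

    def render_binaural(mono_signal, hrir_left, hrir_right):
        # Convolving a mono signal with a left/right HRIR pair makes the sound
        # externally localize at the direction for which the HRIRs were measured.
        left = np.convolve(mono_signal, hrir_left)
        right = np.convolve(mono_signal, hrir_right)
        return np.stack([left, right])  # two-channel output for headphones

Stereo or mono playback skips this per-direction convolution, which is one reason switching formats frees up processing resources.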
Consider an example in which the sound externally localizes away from the head of the listener in empty space (e.g., where no physical or tangible object exists) or occupied space. For example, the sound externally localizes proximate or near the listener, such as localizing within a few meters of the listener. For instance, the SLP where the listener localizes the sound is stationary or fixed in space (e.g., fixed in space with respect to the user, fixed in space with respect to an object in a room, fixed in space with respect to an electronic device, fixed in space with respect to another object or person). By way of example, the SLP can be an actual point in space (e.g., an empty point in space 1-2 meters away from the head of the listener) or a point on a physical or virtual object (e.g., a mouth or head of an augmented reality (AR) or virtual reality (VR) image). The SLP does not have to be so precise since humans are not always able to localize sound to a particular point. As such, the SLP can also be a general area (e.g., a location next to and on the right side of the listener) or a general direction from where the sound originates to the listener (e.g., a location several meters behind the listener).

Block 110 makes a determination whether to switch the sound from binaural sound to mono or stereo sound. If the answer to this determination is "no" then flow proceeds back to block 100. If the answer to this determination is "yes" then flow proceeds to block 120, which states provide the sound as mono or stereo sound.

The sound being provided to the listener can switch or change between binaural, stereo, and mono sound. Further, the listener, a user, an electronic device, a process, or a software program can make this determination and initiate the switching of the sound from one format (e.g., binaural) to another format (e.g., stereo or mono).

Block 130 makes a determination whether to switch the sound back to binaural sound. If the answer to this determination is "no" then flow proceeds back to block 120, and the sound continues to play as mono or stereo sound. If the answer to this determination is "yes" then flow proceeds back to block 100, and the sound switches to play as the binaural sound.
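The decision flow of blocks 100-130 can be summarized as a small update step; the sketch below is illustrative only, with the switching determination abstracted into a single boolean input.

    from enum import Enum

    class SoundFormat(Enum):
        BINAURAL = 1
        STEREO = 2
        MONO = 3

    def update_format(play_binaural, fallback=SoundFormat.STEREO):
        # One pass of the FIG. 1 logic: provide binaural sound while the
        # determination of block 110/130 says so (block 100), otherwise fall
        # back to stereo or mono sound (block 120).
        return SoundFormat.BINAURAL if play_binaural else fallback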
By way of example, a determination to switch the sound includes, but is not limited to, one or more of the following: an instruction or command from a user or listener (e.g., the listener interacts with a user interface to switch the sound); a sensor sensing an action (e.g., a sensor senses a user donning headphones or a wearable electronic device); activation of a physical or virtual switch (e.g., a switch toggles, activates, or moves to switch the sound); head tracking activating the switching (e.g., switch when the listener moves his or her head a certain amount or to a certain view); a user interface receiving a voice command to switch the sound; a timer or clock initiating the switching (e.g., switch at a certain time of day); a global positioning system (GPS) or Internet of Things (IoT) location activating the switching (e.g., switch the sound when the listener enters a predetermined area or location); user preferences indicating a switch (e.g., memory stores a user's preference to hear telephone calls in stereo but software games in 3D sound); a user agent initiating the switching; a software program causing the switching (e.g., while playing a software game a user takes an action that causes the game to switch sound); bandwidth availability (e.g., switch to stereo or mono when bandwidth drops to a predetermined level); processing resource availability or consumption (e.g., switch to stereo or mono when processing resource usage exceeds a predetermined level); and other examples discussed herein.

Consider an example in which an electronic device tracks eye movement, focus, or gaze of the listener. The sound being provided to the listener switches based on the eye movements or gaze. For example, switching of the sound occurs when the eyes of the listener focus on a particular object or area. As another example, switching of the sound occurs when the eyes of the listener close or open for a predetermined amount of time. For instance, a camera captures images of the face of the listener, and facial recognition software or eye tracking software categorizes the eyes as being open or closed. When the eyes are closed for a predetermined amount of time, the sound automatically switches (e.g., switches from binaural sound to stereo or mono sound). Such switching occurs while the listener continues to hear the sound uninterrupted.

FIG. 2 is a method that reduces processing of binaural sound to a sound localization point (SLP) in accordance with an example embodiment.

Block 200 states provide binaural sound to a listener at a sound localization point (SLP) that occurs in a field-of-view of the listener.

One or more processors process and/or convolve the sound so the sound originates or emanates to the listener from a SLP that is in the field-of-view (FOV) of the listener. This SLP can include an image (such as a 2D or 3D image), a picture, video, text, symbol, graphical representation, icon, emoji, etc. The SLP can also occur in empty space where no physical or tangible object resides. As noted, the sound can be provided to the listener through various types of electronic devices, such as headphones, earphones, speakers, etc.

Consider an example in which two users communicate with each other while wearing head mounted displays or wearable electronic devices. These electronic devices execute software that enables voice exchanges between the two users. The voice of the second user originates to the first user from a SLP that includes an image representing the second user.
In this way, the first user sees the second user and also hears the voice originating from this image. The image, and thus the SLP, are in the field-of-view of the first user since the first user sees the image from where the sound emanates.

Block 210 states track head movements of the listener and/or a location of the SLP to detect when the SLP moves outside the field-of-view of the listener.

An example embodiment executes head tracking to track head movement of the listener while the listener listens to the sound. Head tracking monitors or tracks the position and/or orientation of the head of the listener. Various methods and electronics can be used to track head movement. By way of example, such electronics include, but are not limited to, one or more of accelerometers, gyroscopes, magnetometers, cameras, and infrared LEDs.

An example embodiment also tracks a location of the SLP and/or an object associated with the SLP. For example, the SLP occurs at a coordinate location associated with the coordinates of the HRTFs convolving or processing the sound being provided to the listener. As another example, the SLP occurs at a coordinate location on or through a display that includes an object at the SLP. For instance, the SLP is or includes a talking graphical representation, such as a talking emoji, animoji, emoticon, person, character, image, etc. As yet another example, the SLP can occur at a location of a physical or tangible object, such as sound externally localizing to a physical object proximate to the listener.

Consider an example in which the listener wears a head mounted display, electronic glasses, or other wearable electronic device that displays a field-of-view to the listener. Initially, the SLP occurs in this field-of-view. For example, the wearable electronic device includes a display or displays an image or graphical representation with or at the SLP. This SLP and graphical representation can remain at a fixed location in this field-of-view such that head movements of the listener cause the SLP and graphical representation to leave the field-of-view of the listener. For instance, the SLP and graphical representation are visible since they appear within several meters in front of the listener. When the listener turns or rotates her head 180° (e.g., turning to look behind her), the field-of-view no longer includes the location of the SLP and graphical representation. Further, the SLP and graphical representation can move even though the head of the listener remains fixed or stationary. For instance, while a head of the listener remains motionless in a forward-looking direction, the SLP and accompanying graphical representation disappear and are no longer visible to the listener (e.g., they move behind the listener or off to one side).

Humans have a visual field of about 210° in the forward-facing horizontal range and about 150° in the vertical range. Further, the ability to perceive or to identify shape and motion across the visual field varies. Example embodiments are not limited to executing within a full field-of-view or visual field of the listener but include subsets or smaller areas within the field-of-view or visual field. For example, a listener may have a field-of-view that extends 180° horizontally and 150° vertically, but a subset of this FOV is limited to 120° horizontally and 90° vertically. Example embodiments can execute in such subsets.
For example, the listener moves his or her head, and this movement causes the SLP to move outside the subset of the FOV or visual field while remaining in the full FOV. Movement of the SLP outside the subset of the FOV initiates a switch or change in sound as discussed herein.

An example embodiment tracks the location of the SLP based on coordinate locations derived from the head movements and/or the SLP, which can be fixed with respect to the listener or moving. The SLP can also be provided with a coordinate location (e.g., based on or derived from the HRTFs processing the sound). Pixel locations from a display also provide coordinate or location information (e.g., a location or area on the display where the graphical representation and SLP are provided to the listener).

Block 220 states reduce processing of the binaural sound to the SLP by switching the binaural sound to one of mono sound and stereo sound upon detecting that the head movements of the listener and/or the location of the SLP changed and caused the SLP to move outside the field-of-view of the listener.

Processing or convolving binaural sound and then providing this binaural sound to the listener is more process intensive than providing mono or stereo sound to the listener. This difference becomes exacerbated when the listener moves his or her head and/or the SLP moves with respect to the listener, since the sound is continually processed to emanate from the SLP. Example embodiments reduce this processing or convolution, and hence free up processing resources, by switching the binaural sound to mono or stereo sound.

Switching the sound from binaural sound to mono or stereo sound, and switching the sound from mono or stereo sound to binaural sound, also provides a mechanism for informing the listener of the current location of the SLP. For example, when the switch occurs from binaural sound to stereo sound, this switch audibly informs the listener that the location of the sound is no longer in the field-of-view or visual field. This switch could have occurred, for example, if the SLP or object from which the sound emanates moved and/or the head of the listener moved.

Consider an example in which the listener wears an HMD while playing a VR card game or another game in a virtual environment. In this virtual environment, for example, the listener sits at a blackjack or poker table with other people also present at the table (e.g., a dealer and other players). The voices of these other people externally localize to the listener as binaural sound at the respective images seated around the table. While the table and/or people remain in the field-of-view of the listener, the voices continue to externally localize as binaural sound. Processing the voices to these locations is process intensive, especially since the listener moves his or her head while seated at the table playing the game. The listener then turns his or her head such that the table and/or the other people are no longer in the field-of-view of the listener. This movement causes the binaural sound to switch to stereo sound. The listener still hears the voices of the people (or other sounds associated with the game), but these sounds are now provided in stereo sound, not binaural sound. While the table and/or people remain out of the field-of-view of the listener, the sound continues to be provided to the listener in stereo sound.
When the listener moves his or her head such that the table and/or people re-appear in the field-of-view, the sound switches from stereo sound back to binaural sound.

Consider further the above example of the listener playing a VR card game. While the listener is seated at the table and viewing the other players, voices of the other players externally localize as binaural sound to the respective images of the players. During the game, one player (e.g., Player A) decides to take a break. Player A stands up and walks to another part of the virtual environment or temporarily leaves the virtual environment. In response to this movement, the voice of Player A switches from binaural sound that previously localized to the image of Player A to stereo sound that now localizes inside the head of the listener. Switching saves processing resources and further signifies to the listener that Player A is no longer in the current field-of-view of the listener.

In the above example of the listener playing a VR card game, switching the sound performs two tasks. First, switching from binaural sound to stereo sound reduces the processing resources required to provide the sound to the listener. This occurs while the head or sight of the listener is not directed to the card table and/or other people. Or, this situation occurs when a SLP (here, one of the players) moves away from the table and outside the FOV of the listener. During this time, the listener continues to hear the sounds of the game (including the voices of the other people), but these sounds occur in stereo. Second, switching between binaural and stereo sounds notifies the listener that the people are no longer within the field-of-view of the listener. The listener knows or learns that sound externally localizes as binaural sound to objects (such as the people at the card game) when these objects are within the field-of-view (or a subset of this FOV). By contrast, when an object moves outside this field-of-view, the sound switches to or is provided in stereo sound. In this way, the format of the sound alone (e.g., binaural or stereo) informs the listener of a location of the object. For example, when the listener hears the voice of a person in stereo sound, then the listener knows that this person is not presently in a field-of-view of the listener. This information, for instance, could prompt the listener to turn his or her head to locate the person.

Head movements of the listener and/or movements of the SLP can cause other actions to occur. For example, one or both of these actions cause the volume or intensity of the sound to reduce. For instance, the volume of a voice of a person emanating from the SLP reduces in response to detecting that head movements of the listener and/or movements of the SLP caused the SLP to move outside the field-of-view of the listener. This reduction in volume notifies or alerts the listener that the SLP moved outside the field-of-view of the listener. Conventionally, a reduction in volume occurs when the distance between the listener and the source of the sound increases (i.e., the sound intensity level decreases with a ratio of 1/r to the distance r). By contrast, with an example embodiment, a reduction in volume of sound occurs even when the relative distance (r) between the listener and the source of sound does not change. For example, the listener plays an AR or VR software game in which the listener views an image of a talking person two meters away directly in front of the listener.
When the listener rotates his or her head to the left or right, the listener can no longer see the image of the talking person. This action causes the sound to switch to stereo sound and the volume of the sound to reduce. The reduction in volume of the sound occurred even though the distance between the listener and the source of sound (here the image located two meters away) remained constant. The reduction in the volume of the sound was provided instead to notify the listener that the image of the talking person was no longer in the field-of-view of the listener.

An intensity of the sound decreases by an amount sufficient to be perceivable to the listener so as to notify him or her of the change to the SLP. For example, the amount of reduction is based on a percentage of the current sound level being provided to the listener. For instance, an example embodiment reduces the sound by 10%, 20%, 30%, 40%, or 50% of its current intensity or volume.

Consider an example in which the SLP occurs within an area or a boundary within a physical or real environment, an AR environment, or a VR environment. For example, this area is defined according to a geometric shape that occurs in 2D or 3D space (e.g., a circle, sphere, oval, square, etc.). As another example, this area is defined according to a VR or AR image or location (e.g., a VR room or VR space). As yet another example, this area is defined according to a perimeter or boundary of a display. For instance, a perimeter or edge of AR glasses or an HMD defines an area in which the SLP occurs. As another example, a display shows the area with a visibly perceivable indication (e.g., with the use of color, shading, brightness, words, symbols, etc.). When the SLP and/or its coordinate location moves outside of this area or boundary, then an example embodiment executes a switch from binaural sound to stereo sound or from stereo sound to binaural sound.

By way of example, an example embodiment defines an area inside an outer perimeter (e.g., a perimeter of a display, a FOV, or an object). A coordinate location of the SLP occurs inside or within the perimeter. When movements of the head of the listener and/or the SLP cause the coordinate location of the SLP to move outside the perimeter, then this action executes switching of the sound from being provided to the listener as binaural sound to being provided to the listener as stereo sound or mono sound.

Consider an example in which a user wears electronic glasses that display an AR image of a talking person located on a physical chair in front of the listener. An edge or perimeter of the glasses defines a field-of-view of the listener. A voice of the talking person emanates from the image and represents the SLP to the listener. The glasses include a camera that captures an image of the chair, and object recognition software tracks the location of the chair in order to maintain the AR image of the talking person at this location. When the head of the listener sufficiently rotates in one or more directions, the chair and accompanying AR image are no longer visible in the FOV through the glasses. Here, the SLP moved outside the perimeter of the FOV. In response to detecting this occurrence, software providing the sound switches the voice of the talking person from binaural sound that localizes at the location of the chair to stereo sound that localizes inside the head of the listener.
When the head of the listener rotates back so that the chair is within the FOV, the software switches the voice of the talking person back to binaural sound that externally localizes to the AR image at the chair. As noted, switching sound in this manner saves processing resources and notifies the listener that the SLP is no longer in the FOV. Additionally, this switching mitigates the need to localize sound to a location that is behind the listener or not in the FOV of the listener. Localizing binaural sound to such locations can be difficult since the origin of the sound and accompanying image occur outside the FOV. Consider an example in which an SLP and accompanying image occur directly in front of a face of a person along a forward-looking line of sight. This line of sight extends as a straight line from the listener's eyes to the SLP and image. A location of the SLP and image along the line of sight define a coordinate location (e.g., with polar or spherical coordinates). Head tracking and/or object recognition software enables an example embodiment to determine how much a coordinate location of the SLP moves with respect to a line-of-sight of the listener while the head of the listener moves. When movement of the coordinate location of the SLP with respect to the line-of-sight exceeds a threshold, then sound switches from binaural sound to stereo sound or from stereo sound to binaural sound. This switching can occur even if the SLP remains in the FOV of the listener. Consider further this example in which the SLP is directly in front of the listener along the forward-looking line of sight. For example, a location of the SLP is 1.5 meters away and hence has spherical coordinates of (1.5 m, 0, 0). An example embodiment is set to execute switching of sound when a head of the listener rotates more than a threshold amount (e.g., 49° in the azimuth direction). A head of the listener rotates along the horizontal plane or azimuth direction by 50° toward a right side of the listener. Here, the distance (1.5 m) and elevation angle (0) remain unchanged, but the azimuth angle changed by fifty degrees, which is larger than the threshold amount. Since this change in azimuth angle of fifty degrees exceeded the threshold value, the example embodiment switches the sound from playing as the binaural sound to playing as the stereo sound. This change occurs even though the SLP is still within the FOV of the listener. This change notifies the listener that the SLP is no longer in a predetermined range of the line of sight. Switching sound in this manner enables the listener to control which sounds are provided in binaural sound and which sounds are provided in stereo sound. The listener is thus able to switch how he or she hears the sound based on head movements (e.g., based on an amount and/or direction of head movement). Consider an example in which the listener simultaneously talks to three different images of people A, B, and C who are located 2 meters in front of the listener. A is located at (2.0 m, −45°, 0); B is located at (2.0 m, 0, 0); and C is located at (2.0 m, 45°, 0). All three images simultaneously occur within the FOV of the listener. When the listener rotates his or her head to look directly at A, the voice of A occurs in binaural sound while the voices of B and C occur in stereo sound. When the listener rotates his or her head to look directly at C, then the voice of C occurs in binaural sound while the voices of A and B occur in stereo sound.
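By way of illustration only, the threshold test in the preceding examples can be sketched as follows. The 49° value matches the example above; the function names are assumptions, and the sketch is not the implementation of any example embodiment.

```python
# Minimal sketch: switch to stereo when the head rotates away from the SLP
# by more than a preset azimuth threshold, per the 49/50 degree example.

def format_for_head_rotation(azimuth_change_deg: float,
                             threshold_deg: float = 49.0) -> str:
    """Return the playback format after a head rotation.

    azimuth_change_deg -- rotation of the line-of-sight away from the SLP
    threshold_deg      -- preset switching threshold in the azimuth direction
    """
    return "stereo" if abs(azimuth_change_deg) > threshold_deg else "binaural"

print(format_for_head_rotation(50.0))  # stereo: 50 degrees exceeds the threshold
print(format_for_head_rotation(20.0))  # binaural: still within the threshold
```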
An example embodiment switches sound when a line-of-sight of the listener moves more than or equal to a threshold amount or predetermined amount. For example, change or switch the sound from binaural to stereo or from stereo to binaural upon detecting or determining that the line-of-sight of the listener moves more than a predetermined amount in the azimuth and/or elevation direction. Examples of predetermined amounts include, but are not limited to, 10°, 20°, 30°, 40°, 50°, 60°, 70°, 80°, 90°, . . . 180°. An example embodiment switches sound when a line-of-sight of the listener moves to a specific direction. For example, change or switch the sound from binaural to stereo or from stereo to binaural upon detecting or determining that the line-of-sight of the listener moves to a certain compass heading. Consider an example of a wearable electronic device with a display that displays an image of a person at an SLP that remains at a fixed location to the listener while a head of the listener moves. One or more processors execute instructions to determine when the image of the person is no longer being displayed in the field-of-view of the listener and to change the sound from playing in the binaural sound to playing in the stereo sound. FIG.3is a method to warn a listener when sound will switch format in accordance with an example embodiment. Block300states provide sound to a listener in a format of binaural, stereo, or mono sound. As noted herein, headphones, earphones, HMDs, wearable electronic devices, and other electronic devices provide sound to the listener. Block310makes a determination whether a switch will occur or is occurring. If the answer to this determination is “no” then flow proceeds to block300and the sound continues to play to the listener as the binaural, stereo, or mono sound. If the answer to this determination is “yes” then flow proceeds to block320that states provide a warning of the switch to the listener. An example embodiment provides the listener with a warning before sound switches and/or while sound is switching. This warning can be a visual warning (e.g., display a notification on a display) or an audio warning (e.g., generate a sound that signifies the switch will occur). In this way, the listener knows of the change to sound in advance of the switching or while the switching occurs. Consider an example in which an electronic device displays a visual warning to the listener when the SLP in the field-of-view of the listener moves to a perimeter of the field-of-view of the listener. This visual warning notifies the listener of the switching of the binaural sound to the stereo sound and activates when the SLP is near or at the perimeter (e.g., activate the warning when the SLP touches the perimeter). An example embodiment notifies the listener of the location and/or direction of the SLP and/or graphical representation accompanying the SLP. For example, the display displays a visual indication that points to or provides a location to the SLP. This visual indication can appear on or near the perimeter of the field-of-view. For instance, an arrow or other pointing symbol located near the perimeter points to a direction or location of the SLP. In this way, the listener knows which way to turn his or her head so the SLP appears in the FOV. As another example, the display displays a light along a perimeter of the display to inform the listener of the SLP when the SLP is outside the field-of-view of the listener.
When the listener moves his or her head in the direction of the light, the SLP appears or reappears in the FOV (e.g., the image reappears in the display). FIG.4is a method that switches sound based on a direction of gaze of the listener in accordance with an example embodiment. Block400states display an area or location that represents one of binaural, stereo, and mono sound. The display of the electronic device provides a visual indication of one or more different areas or locations that represent different formats for hearing the sound. For example, binaural sound appears at one location; stereo sound appears at a second location; and mono sound appears at a third location. These locations can be simultaneously displayed to enable the listener to select the format for hearing the sound. These locations can also be or include images, graphical representations, icons, etc. Block410states detect when a listener is directed to the area or the location. For example, head tracking or gaze tracking detects when a listener is looking at one of the areas or locations. As another example, a camera and facial recognition software determine where the listener is looking. As another example, one or more sensors (e.g., in an Internet of Things (IoT) environment) detect when the listener moves into the area or location. For example, the area occurs at specific coordinates in a room or at a specific object, such as occurring at a sofa or chair. As yet another example, the listener moves to or otherwise selects a VR or AR location or object that represents one of binaural, stereo, or mono sound. Block420states switch the sound to the binaural, stereo, or mono sound as indicated by the area or the location. Sound switches to the format per the area or location upon detecting that the listener is looking at this area or location. For example, when the listener moves his or her head to the area representing binaural sound, then switch the sound being played to the listener to binaural sound. When the listener moves his or her head to the area representing stereo sound, then switch the sound being played to the listener to stereo sound. Consider an example embodiment that facilitates easy and convenient switching between different formats of sound that the listener hears. The listener interacts with the user interface and switches how he or she hears the sound based on head movements or eye gaze. Looking in a particular direction or at a particular location or area being displayed activates or deactivates the format of sound. In this way, the listener can change the format of sound via a hands-free operation. Consider an example in which the listener wears an HMD or AR glasses that provide 3D sound. When the listener speaks the word “sound” to a natural language user interface, a left side of the FOV on the display shows binaural sound and a right side of the FOV shows stereo sound. In order to select one of these formats, the listener merely needs to look in the direction of the desired format. Thus, looking left selects binaural sound, and looking right selects stereo sound. Consider an example in which the listener is playing a VR or AR game in which the sounds localize as binaural sound. A relatively small area of the displayed area or FOV shows or represents stereo sound. The listener can switch the format of sound from binaural to stereo by moving and orientating his or her head to be directed at this small area.
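By way of illustration only, the area-directed selection described above can be sketched with two display regions. The rectangle coordinates and the gaze-point representation are assumptions for the example, not any claimed implementation.

```python
# Minimal sketch of the FIG. 4 flow: displayed areas map to sound formats,
# and head or gaze tracking selects the format (left half = binaural,
# right half = stereo, matching the example above).

AREAS = {
    "binaural": (0.0, 0.0, 0.5, 1.0),  # left half of the display (x0, y0, x1, y1)
    "stereo":   (0.5, 0.0, 1.0, 1.0),  # right half of the display
}

def format_at_gaze(gaze_xy, current_format: str) -> str:
    """Switch the format when the gaze lands in a designated area."""
    x, y = gaze_xy
    for fmt, (x0, y0, x1, y1) in AREAS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return fmt
    return current_format  # gaze outside all areas: keep playing as before

print(format_at_gaze((0.2, 0.5), "stereo"))  # looking left selects binaural
```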
For instance, head tracking detects when a head of the listener is directed to the area that represents the stereo sound. The VR or AR game automatically switches the sound to stereo upon detecting the head of the listener is directed to the area that represents the stereo sound. In this way, the listener can quickly change the format of sound while continuing to hear the sound and while continuing to play the VR or AR game. Consider an example in which the wearable electronic device displays an area in the field-of-view that represents one or more of binaural and stereo sound. The electronic device tracks the head movements of the listener to determine when a line-of-sight of the listener is directed to the area in the field-of-view that represents the stereo sound. When this event occurs, the electronic device changes the sound from playing in the binaural sound to playing in stereo sound. The area or location to change sound can also be outside the FOV of the listener. Consider an example in which the area or location to change sound is not displayed. The user interacts with the electronic device and provides a command or instruction for viewing the format of sound or changing the format of sound. In response to this command or instruction, the electronic device displays one or more of an area, location, or option for binaural sound and stereo sound. The user selects the area, location, or option by looking at the desired format of sound. The listener can select the format of sound in other ways as well. Consider an example in which the user plays an AR or VR game that includes shooting objects with a gun. The game provides the sounds in stereo. A perimeter of the displayed area or FOV displays a “3D” indication. This indication represents 3D sound or binaural sound. When the user points and shoots the gun at this indication, the sound switches from playing as stereo sound to playing as 3D or binaural sound. The perimeter of the displayed area or FOV then displays a “stereo” indication. When the user points and shoots the gun at this indication, the sound switches from playing as binaural sound to playing as stereo sound. Consider further this example in which the listener is a player in the AR or VR game in which one object of the game is to obtain gold coins. One gold coin signifies achieving or winning 3D sound. When the listener runs to or thru the gold coin, sound switches to 3D sound. This example of an AR or VR game shows that the user is able to switch the format of sound without disrupting the game, or that switching sound occurs as part of the game. The user interface for switching sound appears in the game itself. As such, the user can select how the sound is provided while continuing to enjoy the game. Instead of shooting at the visual indication, the user can select the format in other ways depending on the game (e.g., throwing an object at the indication, hitting the indication, shooting an arrow or other projectile at the indication, etc.). FIG.5is a method that switches sound based on detecting a voice in accordance with an example embodiment. Block500states provide sound to a listener in one of binaural, stereo, and mono sound with an electronic device. As noted, an example embodiment provides sound to the listener thru an electronic device, such as headphones, earphones, HMD, AR glasses, speakers, bone conduction, etc. Block510states detect, with the electronic device, a voice of a person proximate to the listener.
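By way of illustration only, the detection at block 510 can be stood in for by a naive energy detector. The threshold and frame handling below are assumptions for the example; an embodiment could instead use ASR or a natural language user interface, as described next.

```python
# Minimal sketch of the FIG. 5 flow: when microphone frames exceed an RMS
# level threshold, treat it as a nearby voice and fall back to stereo.
import numpy as np

def voice_detected(frame: np.ndarray, rms_threshold: float = 0.05) -> bool:
    """Crude stand-in for voice detection: RMS energy over a threshold."""
    return float(np.sqrt(np.mean(frame ** 2))) > rms_threshold

def next_format(frame: np.ndarray, current_format: str) -> str:
    """Block 520: switch away from binaural when a proximate voice is heard."""
    if current_format == "binaural" and voice_detected(frame):
        return "stereo"  # lets the listener distinguish the real voice
    return current_format

quiet = np.zeros(1024)
speech = 0.2 * np.random.randn(1024)
print(next_format(quiet, "binaural"))   # binaural
print(next_format(speech, "binaural"))  # stereo
```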
A natural language user interface and/or automatic speech recognition (ASR) of the electronic device detects the voice of the person proximate to the listener. For example, the electronic device includes one or more microphones that detect sound. Block520states switch the format of the binaural, stereo, or mono sound in response to detecting the voice of the person proximate to the listener. The electronic device switches the sound in response to detecting the voice of the person proximate to the listener. When the listener hears binaural sound, this sound localizes to different locations around the listener. It may be difficult or even impossible for the listener to distinguish between these electronically generated sounds and the voice of a person proximate to the listener. When the electronic device detects the voice of the proximate person, the format of the sound being provided to the listener switches from binaural sound to one of stereo or mono sound. In this way, the listener can distinguish between the electronically generated sounds (which are now in stereo or mono) and the voice of the person. Consider an example in which the listener wears an HMD while playing a game and hearing binaural sound. The game provides a multitude of different sounds that include voices that originate around the listener in a VR environment. While playing the game, a person proximate to the listener speaks to the listener. The listener may not be able to distinguish whether the voice of the person is coming from a real person near the listener or coming from a character in the VR environment. The HMD includes one or more microphones that detect the voice. Upon making this detection, the HMD automatically switches the sounds to stereo. This switching enables the listener to distinguish the sounds in the game from the sound of the voice of the person. At the same time, the listener is able to continue to play the game uninterrupted while he or she talks to the person. FIG.6is a method that enables a listener to switch the format of sound being played in accordance with an example embodiment. Block600states display an indication of the format of sound as binaural, stereo, or mono sound. An electronic device displays a visual indication that when selected enables the listener to select one or more of binaural, stereo, or mono sound. This visual indication appears on, with, or thru the display. For example, the electronic device displays the visual indication in a FOV of the wearer or user of the electronic device. Block610states receive, from the listener, a selection of the format of the sound. The user or listener interacts with the electronic device to make a selection of the format of the sound. For example, this selection comes from or thru a user interface, such as a voice-activated user interface, graphical user interface (GUI), user interface on an HMD or AR glasses, handheld wand or other handheld device, a switch, etc. Block620states play the sound to the listener in the selected format. The electronic device plays the sound to the listener in the selected format. Consider an example in which a wearable electronic device (WED) includes or communicates with one or more processors that execute instructions to display a symbol, graphical representation, or indicia for selecting a format of sound. For example, the WED displays the word “stereo” or the symbol “S” that when selected by the listener changes the sound from playing in the binaural sound to playing in the stereo sound.
As another example, the WED displays the word “3D” or other indication that when selected plays the sound in binaural sound or switches to binaural sound. One example embodiment is an electronic device with a user interface that informs the listener how and/or where sound will play to the listener. For example, a display of the electronic device displays a visual indication and/or graphical representation that informs the listener how and/or where the sound will play. For instance, the listener knows in advance of hearing the sound that it will play as mono sound, stereo sound, or binaural sound. The listener can also know in advance a sound localization point (SLP) or location from where the sound will originate to the listener. In this way, the listener knows the format of how the sound will play and/or location from where it will originate in advance of hearing the sound. The user interface can also assist the listener in selecting the format for how the sound will play and/or selecting the SLP or location from where the listener will hear the sound. For example, the electronic device displays options to hear the sound as mono sound, stereo sound, or binaural sound and also provides a mechanism wherein the listener can move the SLP or select where the SLP occurs. In this way, the listener can control the location of the sound and the format for how he or she hears it. Consider an example in which an electronic device displays a graphical representation that plays sound to the listener when activated. Along with the graphical representation, the electronic device also displays options for hearing the sound as mono sound, stereo sound, or binaural sound. Selection of the mono option plays the sound in mono sound; selection of the stereo option plays the sound in stereo sound; and selection of the binaural or 3D option plays the sound in binaural sound. Consider an example in which the electronic device displays the graphical representation that provides information to the listener or user. This information includes one or more of where the binaural sound will externally localize or is externally localizing with respect to the listener, a format for how the sound will localize or play to the listener, and options for selecting the format and/or location (SLP) for where or how the sound will play to the listener. This information can be presented in the graphical representation itself and/or in a visual indication or indication along with the graphical representation. In order to select the desired format of sound, the listener activates or selects the graphical representation (e.g., by looking at the graphical representation, shooting the graphical representation, speaking at or to the graphical representation, orientating a head position toward or at the graphical representation, or interacting with a user interface to select the graphical representation). One or more processors or processing units can convolve or process sound to provide this sound as 3D sound or binaural sound. For example, a processor (such as a DSP) processes or convolves the sound with one or more of head-related transfer functions (HRTFs), head-related impulse responses (HRIRs), room impulse responses (RIRs), room transfer functions (RTFs), binaural room impulse responses (BRIRs), binaural room transfer functions (BRTFs), interaural time delays (ITDs), interaural level differences (ILDs), and a sound impulse response.
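By way of illustration only, the following sketch applies an interaural time delay and convolves a monaural signal with a pair of HRIRs, one simple way to realize the processing named above. The two-tap HRIRs are placeholders, the ITD uses the frequency-independent Woodworth approximation discussed later in this description, and all names are assumptions rather than any claimed implementation.

```python
# Minimal sketch: delay the far-ear signal by the ITD and convolve each
# channel with its HRIR (FFT-based convolution via scipy).
import math
import numpy as np
from scipy.signal import fftconvolve

def woodworth_itd_samples(azimuth_deg, fs=48000, head_radius=0.0875, c=343.0):
    """ITD in whole samples for a source at the given azimuth (Woodworth model)."""
    theta = math.radians(abs(azimuth_deg))
    return round(fs * (head_radius / c) * (theta + math.sin(theta)))

def synthesize_binaural(mono, hrir_left, hrir_right, azimuth_deg, fs=48000):
    """Return (left, right) channels; the ear away from the source is delayed."""
    pad = np.zeros(woodworth_itd_samples(azimuth_deg, fs))
    if azimuth_deg >= 0:  # source to the right: the left ear hears it later
        left = fftconvolve(np.concatenate([pad, mono]), hrir_left)
        right = fftconvolve(np.concatenate([mono, pad]), hrir_right)
    else:                 # source to the left: the right ear hears it later
        left = fftconvolve(np.concatenate([mono, pad]), hrir_left)
        right = fftconvolve(np.concatenate([pad, mono]), hrir_right)
    return left, right

mono = np.random.randn(4800)  # 0.1 s of noise at 48 kHz
left, right = synthesize_binaural(mono, np.array([0.6, 0.2]), np.array([1.0, 0.3]), 45.0)
print(left.shape, right.shape)  # equal-length binaural channels
```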
Sound includes, but is not limited to, one or more of stereo sound, mono sound, binaural sound, computer-generated sound, sound captured with microphones, and other sound. Furthermore, sound includes different types including, but not limited to, music, background sound or background noise, human voice, computer-generated voice, and other naturally occurring or computer-generated sound. When the sound is recorded or generated in mono sound or stereo sound, convolution changes the sound to binaural sound. For example, one or more microphones record a human person speaking in mono sound or stereo sound, and a processor processes this sound with filters to change the sound into binaural sound. The processor or sound hardware processing or convolving the sound can be located in one or more electronic devices or computers including, but not limited to, headphones, smartphones, tablet computers, electronic speakers, head mounted displays (HMDs), optical head mounted displays (OHMDs), electronic glasses (e.g., glasses that provide augmented reality (AR)), servers, portable electronic devices (PEDs), handheld portable electronic devices (HPEDs), wearable electronic devices (WEDs), and other portable and non-portable electronic devices. These electronic devices can also be used to execute example embodiments. For example, a DSP processes or convolves stereo sound or mono sound with a process known as binaural synthesis or binaural processing to provide the sound with sound localization cues (ILD, ITD, and/or HRTFs) so the listener externally localizes the sound as binaural sound or 3D sound. Other technologies exist as well to provide 3D sound to listeners. An example embodiment models the HRTFs with one or more filters, such as a digital filter, a finite impulse response (FIR) filter, an infinite impulse response (IIR) filter, etc. Further, an ITD can be modeled as a separate delay line. When the sound is not captured as binaural sound (e.g., with a dummy head or human head), the captured sound is convolved with sound localization information (SLI). This information includes one or more of HRTFs, HRIRs, BRTFs, BRIRs, ILDs, ITDs, and/or other information discussed herein. By way of example, SLI is retrieved, obtained, or received from memory, a database, a file, an electronic device (such as a server, cloud-based storage, or another electronic device in the computer system or in communication with a PED providing the sound to the user through one or more networks), etc. Instead of being retrieved from memory, this information can also be calculated in real-time. A central processing unit (CPU), processor (such as a DSP), or microprocessor processes and/or convolves the sound with the SLI, such as a pair of head related transfer functions (HRTFs), ITDs, and/or ILDs so that the sound will localize to a zone, area, or sound localization point (SLP). For example, the sound localizes to a specific point (e.g., localizing to point (r, θ, ϕ)) or a general location or area (e.g., localizing to far-field location (θ, ϕ) or near-field location (θ, ϕ)). As an example, a lookup table that stores a set of HRTF pairs includes a field/column that specifies the coordinates associated with each pair, and the coordinates indicate the location for the origination of the sound. These coordinates include a distance (r) or near-field or far-field designation, an azimuth angle (θ), and/or an elevation angle (ϕ).
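By way of illustration only, the lookup table described above can be sketched as a mapping from coordinates to an HRTF pair. Real tables hold filter coefficients; the two-tap entries below are placeholders, and all names are assumptions.

```python
# Minimal sketch: HRTF pairs stored with the coordinates of the sound's
# origination, per the field/column arrangement described above.

# key: (distance_m, azimuth_deg, elevation_deg) -> (left HRTF, right HRTF)
HRTF_TABLE = {
    (1.5, 0.0, 0.0):  ([1.0, 0.2], [1.0, 0.2]),
    (1.5, 45.0, 0.0): ([0.6, 0.1], [1.0, 0.3]),
}

def lookup_hrtf_pair(r: float, azimuth: float, elevation: float):
    """Retrieve the HRTF pair for a coordinate location, if one is stored."""
    return HRTF_TABLE.get((r, azimuth, elevation))

print(lookup_hrtf_pair(1.5, 45.0, 0.0))  # the pair measured at 45 degrees
print(lookup_hrtf_pair(1.5, 40.0, 0.0))  # None: not stored (see interpolation below)
```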
The complex and unique shape of the human pinnae transforms sound waves through spectral modifications as the sound waves enter the ear. These spectral modifications are a function of the position of the source of sound with respect to the ears along with the physical shape of the pinnae that together cause a unique set of modifications to the sound called head related transfer functions or HRTFs. A unique pair of HRTFs (one for the left ear and one for the right ear) can be modeled or measured for each position of the source of sound with respect to a listener as the customized HRTFs. A HRTF is a function of frequency (f) and three spatial variables, by way of example (r, θ, ϕ) in a spherical coordinate system. Here, r is the radial distance from a recording point where the sound is recorded or a distance from a listening point where the sound is heard to an origination or generation point of the sound; θ (theta) is the azimuth angle between a forward-facing user at the recording or listening point and the direction of the origination or generation point of the sound relative to the user; and ϕ (phi) is the polar angle, elevation, or elevation angle between a forward-facing user at the recording or listening point and the direction of the origination or generation point of the sound relative to the user. By way of example, the value of (r) can be a distance (such as a numeric value) from an origin of sound to a recording point (e.g., when the sound is recorded with microphones) or a distance from a SLP to a head of a listener (e.g., when the sound is generated with a computer program or otherwise provided to a listener). When the distance (r) is greater than or equal to about one meter (1 m) as measured from the capture point (e.g., the head of the person) to the origination point of a sound, the sound attenuates inversely with the distance. One meter or thereabout defines a practical boundary between near-field and far-field distances and corresponding HRTFs. A “near-field” distance is one measured at about one meter or less; whereas a “far-field” distance is one measured at about one meter or more. Example embodiments are implemented with near-field and far-field distances. The coordinates for external sound localization can be calculated or estimated from an interaural time difference (ITD) of the sound between two ears. ITD is related to the azimuth angle according to, for example, the Woodworth model that provides a frequency independent ray tracing methodology. The coordinates (r, θ, ϕ) for external sound localization can also be calculated from a measurement of an orientation of and a distance to the face of the person when a head related impulse response (HRIR) is captured. The coordinates can also be calculated or extracted from one or more HRTF data files, for example by parsing known HRTF file formats, and/or HRTF file information. For example, HRTF data is stored as a set of angles that are provided in a file or header of a file (or in another predetermined or known location of a file or computer readable medium). The data can include one or more of time domain impulse responses (FIR filter coefficients), filter feedback coefficients, and an ITD value. This information can also be referred to as “a” and “b” coefficients. By way of example, these coefficients are stored or ordered according to lowest azimuth to highest azimuth for different elevation angles. 
The HRTF file can also include other information, such as the sampling rate, the number of elevation angles, the number of HRTFs stored, ITDs, a list of the elevation and azimuth angles, a unique identification for the HRTF pair, and other information. The data can be arranged according to one or more standard or proprietary file formats, such as AES69, and extracted from the file. The coordinates and other HRTF information can be calculated or extracted from the HRTF data files. A unique set of HRTF information (including r, θ, ϕ) is determined for each unique HRTF. The coordinates and other HRTF information are also stored in and retrieved from memory, such as storing the information in a look-up table. The information is quickly retrieved to enable real-time processing and convolving of sound using HRTFs and hence improves computer performance in the execution of binaural sound. The SLP represents a location where a person will perceive an origin of the sound. For an external localization, the SLP is away from the person (e.g., the SLP is away from but proximate to the person or away from but not proximate to the person). The SLP can also be located inside the head of the person (e.g., when the sound is provided as mono sound or stereo sound). Sound can also switch between externally localizing and internally localizing, such as appearing to move and pass through a head of a listener. SLI can also be approximated or interpolated based on known data or known SLI, such as SLI for other coordinate locations. For example, an SLP is desired to localize at coordinate location (2.0 m, 0°, 40°), but HRTFs for the location are not known. HRTFs are known for two neighboring locations, such as known for (2.0 m, 0°, 35°) and (2.0 m, 0°, 45°), and the HRTFs for the desired location of (2.0 m, 0°, 40°) are approximated from the two known locations. These approximated HRTFs are provided to convolve sound to localize at the desired coordinate location (2.0 m, 0°, 40°). Sound is convolved either directly in the time domain with a finite impulse response (FIR) filter or in the frequency domain with a Fast Fourier Transform (FFT). For example, an electronic device convolves the sound to one or more SLPs using a set of HRTFs, HRIRs, BRIRs, or RIRs and provides the person with binaural sound. In an example embodiment, convolution involves an audio input signal and one or more impulse responses of a sound originating from various positions with respect to the listener. The input signal is a limited-length audio signal (such as a pre-recorded digital audio file or sound clip) or an ongoing audio signal (such as sound from a microphone or streaming audio over the Internet from a continuous source). The impulse responses are a set of HRIRs, BRIRs, RIRs, etc. Convolution applies one or more FIR filters to the input signals and convolves the input signals into binaural audio output or binaural stereo tracks. For example, the input signals are convolved into binaural audio output that is specific or individualized for the listener based on one or more of the impulse responses to the listener. The FIR filters are derived from binaural impulse responses. Alternatively, or additionally, the FIR filters are obtained from another source, such as generated from a computer simulation or estimation, generated from a dummy head, retrieved from storage, computed based on known impulse responses captured from people, etc.
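By way of illustration only, the approximation described above (estimating the pair at 40° from measured neighbors at 35° and 45°) can be sketched with linear interpolation. Linear weighting is one simple choice among several, and the short arrays stand in for real impulse responses.

```python
# Minimal sketch: approximate an HRIR at a desired elevation by linearly
# interpolating the two neighboring measured HRIRs.
import numpy as np

def interpolate_hrir(hrir_a, hrir_b, angle_a, angle_b, angle):
    """Weighted average of two neighboring HRIRs at the requested angle."""
    w = (angle - angle_a) / (angle_b - angle_a)
    return (1.0 - w) * hrir_a + w * hrir_b

hrir_35 = np.array([1.00, 0.30, 0.10])  # placeholder HRIR measured at 35 degrees
hrir_45 = np.array([0.80, 0.40, 0.20])  # placeholder HRIR measured at 45 degrees
print(interpolate_hrir(hrir_35, hrir_45, 35.0, 45.0, 40.0))  # [0.9  0.35 0.15]
```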
Further, convolution of an input signal into binaural output can include sound with one or more of reverberation, single echoes, frequency coloring, and spatial impression. Processing of the sound also includes calculating and/or adjusting an interaural time difference (ITD), an interaural level difference (ILD), and/or other aspects of the sound in order to alter the cues and artificially alter the point of localization. Consider an example in which the ITD is calculated for a location (θ, ϕ) with discrete Fourier transforms (DFTs) calculated for the left and right ears. The ITD is located at the point for which the function attains its maximum value, known as the argument of the maximum or arg max, as follows: $\mathrm{ITD} = \arg\max_{\tau} \sum_{n} d_{l,\theta,\phi}(n)\, d_{r,\theta,\phi}(n+\tau)$. Subsequent sounds are filtered with the left HRTF, right HRTF, and/or ITD so that the sound localizes at (r, θ, ϕ). Such sounds include filtering stereo and monaural sound to localize at (r, θ, ϕ). For example, given an input signal as a monaural sound signal s(n), this sound is convolved to appear at (θ, ϕ) when the left ear is presented with $s_l(n) = s(n - \mathrm{ITD}) \ast d_{l,\theta,\phi}(n)$ and the right ear is presented with $s_r(n) = s(n) \ast d_{r,\theta,\phi}(n)$. Consider an example in which a dedicated digital signal processor (DSP) executes frequency domain processing to generate real-time convolution of monophonic sound to binaural sound. By way of example, a continuous audio input signal x(t) is convolved with a linear filter of an impulse response h(t) to generate an output signal y(t) as follows: $y(\tau) = x(\tau) \ast h(\tau) = \int_{0}^{\infty} x(\tau - t)\, h(t)\, dt$. This reduces to a summation when the impulse response has a given length N and the input signal and the impulse response are sampled at $t = i\,\Delta t$, as follows: $y(i) = \sum_{j=0}^{N-1} x(i-j)\, h(j)$. Execution time of convolution further reduces with a Fast Fourier Transform (FFT) algorithm and/or Inverse Fast Fourier Transform (IFFT) algorithm. Consider another example of binaural synthesis in which recorded or synthesized sound is filtered with a binaural impulse response (e.g., HRIR or BRIR) to generate a binaural output sound to the person. The input sound is preprocessed to generate left and right audio streams that are mapped to one or more sound sources or sound localization points (known as SLPs). These streams are convolved with a binaural impulse response for the left ear and the right ear to generate the left and right binaural output sound signal. The output sound signal is further processed depending on a final destination. For example, a cross-talk cancellation algorithm is applied to the output sound signal when it will be provided through loudspeakers, or artificial binaural reverberation is applied to provide 3D spatial context to the sound. As noted herein, a user or listener can activate and/or switch the format of sound using a variety of different methods and apparatus. For instance, the user clicks on the graphical representation, issues a voice command to play the sound or activate the graphical representation, uses a mouse or pointer to activate or play the sound, commands or instructs a software program to activate or play the sound, issues a body gesture (e.g., hand gesture, eye movement, etc.), etc. Activation or playing of the sound can occur in other ways as well. For example, the sound plays when the second person views the graphical representation, opens or enlarges a window, or opens a software program.
For example, the sound plays upon occurrence of another event, such as playing at a certain time of day, playing when the user proceeds to a geographical or internet of things (IoT) location, the user enters a virtual space, the user focuses a window, the user dons a PED, the user activates a program, the user turns on or wakes an electronic device from sleep, or other events discussed herein. The HRTFs can be generic HRTFs, customized HRTFs, or HRTFs that are customized to the listener. Customized HRTFs or HRTFs that are customized to the listener are specific to an anatomy of a particular listener and are based on a size and/or shape of the head and/or ears of the listener. Customized HRTFs can be obtained from actual measurements (e.g., measuring HRIRs and/or BRIRs from a head of the user) or from computational modeling (e.g., modeled from a photo of the user or modeled from measurements or approximations of the listener, such as a size and/or shape of the listener's head or ears). Customized HRTFs are also known as individualized HRTFs. Generic HRTFs are not specific to an anatomy of the listener. Generic HRTFs can be obtained from actual measurements (e.g., measuring HRIRs and/or BRIRs from a head of the user or a dummy head) or from computational modeling. Generic HRTFs can work for a large group of people since these HRTFs are not customized or individualized to each person. These HRTFs are often stored in public databases and available to the general public to use free of charge. One or more example embodiments expedite playing of sound to a user by prefetching, decrypting, and/or caching the sound before the sound is played to the listener. For example, an electronic device receives or obtains the sound from local memory (e.g., memory on the electronic device), local storage (e.g., memory directly attached to the electronic device), remote storage (e.g., memory accessed over an Ethernet or wireless network), a server, a database, a data center, etc. When sound is already convolved into binaural sound, this sound can be converted back into mono or stereo sound or played as mono or stereo sound. For example, the electronic device plays the sound through a single speaker. As another example, the electronic device plays the same channel through both speakers (e.g., play the left channel sound to both the left and right speakers of the headphones or play the right channel sound to both the left and right speakers of the headphones). As another example, the sound is filtered through cross-talk canceling filters. Filters, for example, can eliminate crosstalk and the HRTFs (e.g., by utilizing an inverse filter, such as a Nelson/Kirkeby inverse filter).
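By way of illustration only, the same-channel fallback described above can be sketched as follows. The 2 x N channel layout is an assumption for the example.

```python
# Minimal sketch: play already-convolved binaural sound as stereo by
# duplicating one channel into both speakers (one option named above).
import numpy as np

def play_as_stereo_from_left(binaural: np.ndarray) -> np.ndarray:
    """Duplicate the left channel to both the left and right speakers."""
    left = binaural[0]
    return np.stack([left, left])

binaural = np.random.randn(2, 480)  # placeholder binaural block (2 channels)
print(play_as_stereo_from_left(binaural).shape)  # (2, 480)
```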
As shown inFIG.7B, the head of the listener700rotated to the listener's left in a horizontal or azimuth direction. This movement caused the cabinet730to move toward a center of the field of view720, but also caused the person740to move outside the field of view. For example, the person may still be present in the AR or VR environment of the listener but is no longer visible to the listener. In response to this movement, the voice750of the person switches from being provided to the listener as binaural sound to being provided to the listener as stereo sound. As such, the voice of the person now localizes inside the head of the listener (e.g., the WED includes or is in communication with headphones or earphones that the listener wears). In response to this switching, the visual designation760changes to show sound being provided to the listener in stereo, shown as “stereo”. If the listener were to move his or her head back to the position shown inFIG.7A, then the sound would switch back to binaural sound, and the visual designation760would change back to “3D” to inform the listener of this change. The visual designation760provides the listener with a visual cue or visual indication of the switching of sound to different formats and also shows the listener the current format for the sound. This switching and change in visual designation can occur when the head orientation moves and causes the SLP (e.g., person740) to move outside the field of view and/or when the SLP moves outside the field of view. FIGS.8A-8Bshow switching the format of sound when an object moves a predetermined amount in a field of view of a listener in accordance with an example embodiment. A listener800wears a wearable electronic device (WED)810and has a field of view820that includes a person840that is communicating with the listener. By way of example, the person is a graphical representation, such as an AR or VR image provided by the wearable electronic device810. For instance, two individuals communicate during a telephone call or an electronic call. FIG.8Ashows the person840being along a line of sight850that is directly in front of the listener such that a forward-looking direction of the listener is directed to the person. While the person840is in this position in the field of view820, the listener hears a voice860of the person as binaural or 3D sound that has a SLP originating from the person (e.g., emanating from the AR or VR image that the listener sees). A display of the WED includes a visual designation870of binaural sound, shown as “spatial audio”. This designation shows the listener that the voice he or she hears from the person840is electronically generated binaural or 3D sound. As shown inFIG.8B, the head of the listener800rotated to the listener's left in a horizontal or azimuth direction. This movement caused the listener to have a new or different line of sight880. An amount of this movement is shown at890and represents an amount of horizontal movement or change in azimuth angle of the head of the listener. For example, this change in azimuth angle is the angular difference between line of sight850and line of sight880. Visual indication870indicates the sound being provided to the listener is provided in stereo sound. Here, the listener hears the voice of the person840in stereo, and the display provides this visual indication to the listener. When head movements of the listener change by a predetermined amount, then the format of sound switches as indicated herein. 
These amounts can be preset or predetermined for azimuth, elevation, or an axis of head rotation (e.g., yaw, pitch, roll of the head). For example, when the azimuth angle and/or elevation angle of the head of the listener changes by a predetermined amount, then switch the sound. Examples of these angles include, but are not limited to, one or more of 30°, 35°, 40°, 45°, 50°, 55°, 60°, 65°, . . . 180°. This switching of sound can occur even while the person (or image) is still in the field of view of the listener. Consider an example in which the listener800talks to person840and hears a voice of the person in 3D sound as indicated inFIG.8A. While the line of sight850of the listener800remains fixed, the person840moves 70° azimuth with respect to the listener and line of sight. At this position, the person is still in a peripheral area of the field of view of the listener but not in an ideal area of the field of view for the conversation. A default is set such that when the SLP moves more than 69° azimuth with respect to the line of sight of the listener, then a switch occurs from binaural sound to stereo sound. Since the person moved 70° azimuth (which is more than the default azimuth amount), a switch occurs to the format of the sound. When the listener or the SLP moves back into a more ideal area of the field of view (in this example, 69° azimuth or less), the sound switches back to binaural sound. FIGS.9A and9Bshow an electronic device that provides an alert when a sound localization point (SLP) moves to a periphery of the field of view of the listener in accordance with an example embodiment. An electronic device900includes a display910that displays or provides objects, such as images, video, AR and/or VR graphical representations, etc. For illustration, the display910shows a field of view that includes a cabinet920and a person930. FIG.9Ashows the electronic device900moving to the left, as shown with arrow940. As such, objects viewable in, on, or thru the display move to the right with respect to the field of view of the display. When the electronic device900rotates sufficiently to the left in the direction of arrow940, the person930is no longer visible in, on, or thru the display as shown inFIG.9B. When the person930is at or near a periphery or edge of the display, an alert950occurs. For example, the display910displays an alert or warning that visually notifies the user that the person930is about to move outside the field of view of the display. This alert can include a visual alert and/or an audio alert. For example, the electronic device provides the listener with one or more beeps. These beeps can be in stereo sound. Alternatively, these beeps can externally localize as binaural sound to the location of the person930. This audio alert informs the listener to which object the alert is being directed because the sound emanates from the object itself. FIG.9Bshows the situation after the person930moved outside the field of view of the display910. When this occurs, the user may no longer know where the object exists relative to the display since the object (here, a person930) is no longer visible. In response to this occurrence, the display910provides a visual indication960notifying the user that the person930is outside the field of view of the display. The visual indication960can provide location information that includes a direction of where the object (here a person930) exists outside of the field of view of the display. 
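By way of illustration only, placing the indicator on the display edge can be sketched by projecting the direction to the off-screen SLP onto the display perimeter. Normalized display coordinates and the function name are assumptions for the example.

```python
# Minimal sketch: clamp the direction from the display center to the SLP
# onto the unit display border so the indicator shows which way to turn.

def edge_indicator_position(slp_dir_x: float, slp_dir_y: float):
    """Return a point on the display edge in the direction of the SLP."""
    scale = 1.0 / max(abs(slp_dir_x), abs(slp_dir_y), 1e-9)
    return slp_dir_x * scale, slp_dir_y * scale

# SLP to the right and slightly above: the light appears on the right edge.
print(edge_indicator_position(2.0, 0.4))  # (1.0, 0.2)
```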
As shown inFIG.9B, the light blinks or occurs on the edge or periphery of the display at a location that shows the user where to look or where to move the display to recapture the person930. When the user moves his or her head and/or electronic device in the direction of imaginary line970, the person930will reappear in the display. A position of the alert or visual indication960on or near the outer circumference or edge of the display shows the user the direction of the hidden or unseen person. For example, as shown inFIG.9B, movement of the display to the right and slightly upward will recapture the image of the person. The position of the visual indication960can appear in other locations of the display to indicate the location of the corresponding hidden object. For example, if the visual indication appeared at the top of the display, then this position would indicate to the user that the object is above the display. If the visual indication appeared at the bottom of the display, then this position would indicate to the user that the object is below the display. The display can simultaneously display multiple visual indications to indicate locations of multiple sources of sound (e.g., multiple SLPs) that are out of the current field of view. In this way, the user can track locations of multiple objects that are no longer visible but may or may not be generating sound. FIGS.10A-10Cshow an electronic device that provides switching between binaural and stereo sound in accordance with an example embodiment. The electronic device1000includes a display1010that displays or provides one or more objects1020, such as images, video, AR and/or VR graphical representations, etc. Objects shown on, with, or thru the display can be real, such as objects captured with a camera of the electronic device or objects seen thru the display (e.g., when the display forms part of electronic glasses). These objects can also be electronically generated and provided on or with the display (e.g., objects in VR or AR). Such objects can also be a mix of real and electronically generated (e.g., an AR image overlaid on a real object). The display1010also includes different areas, zones, points, locations, graphical representations, images, etc. that designate different formats of sound. For illustration, these are shown as areas1030and1040. Area1030corresponds to stereo sound, and area1040corresponds to 3D or binaural sound. Area1030is bolded or highlighted as compared to area1040to visually indicate that stereo sound is selected and the current format for how sound is provided to the listener. The area provides a mechanism thru which a user can select the format of sound designated per the area. For example, a user interacts with the area as part of a user interface to select the format corresponding to the area. For illustration, display1010designates area1030as having a format of stereo sound and area1040as having a format of 3D sound. Selection of area1030provides the sound to the listener in stereo sound, and selection of area1040provides the sound to the listener in 3D or binaural sound. Users can interact with the electronic device1000and/or display1010in a variety of ways to select the format of sound, such as pointing to the area, clicking on the area, gesturing to the area, speaking a command or instruction to select the area, looking at the area, interacting with a handheld portable electronic device to select the area, and performing other actions per a user interface. 
Example embodiments include methods and apparatus that enable the user to select the format of sound while playing a software game or executing a software application (e.g., a mobile messaging application). The user can select or change the format of sound without interrupting execution of the game or application. Consider an example embodiment shown inFIG.10Bin which the display1010includes an image that represents the listener, user, or player1050. For example, the image1050represents the head of the listener and moves in unison with or corresponding to head movements of the listener. When the head of the listener moves to the right, the head of the image1050moves to the right. PerFIG.10B, the listener can select or change a format of sound with a line of sight1060. For example, when the head of the listener and hence head of image1050has a line of sight directed to the area1030, then the electronic device selects stereo sound as the format. Consider an example in which the listener (via an avatar or other graphical representation) navigates thru an AR or VR world. This world includes designated areas or zones1030and1040that enable the listener to change the format of sound while navigating the world. For example, such areas display or appear periodically, continuously, continually, randomly, upon a command or instruction from the listener, or upon an action occurring in the world in which the listener navigates. When the listener and hence image1050looks at the area, the electronic device provides the sound in the world to the listener in the format of the selected area. In this way, the listener can easily select or change the format of sound (e.g., look or stare at the designated area). Alternatively, the listener can ignore the area and proceed thru the world. Consider an example embodiment shown inFIG.10Cin which the display1010includes an image that represents the listener, user, or player1070. For example, the image1070is a person that the listener controls through interaction with the electronic device1000. PerFIG.10C, the listener can select or change a format of sound by shooting a gun or weapon1080at the area. When the weapon1080targets or fires1090at area1030, then the listener selects stereo sound. Firing at the area1040selects 3D or binaural sound. Consider an example in which the listener (via an avatar or other graphical representation) plays an AR or VR software game in which the listener or user is or controls image1070. For example, the listener can compete with or play with other players that appear in the game. While playing the game, the display shows images, areas, icons, graphical representations, etc. that represent the format of sound (e.g., areas1030and1040). In order to select or change a format of sound, the listener fires the weapon at the selected format. In this way, the listener can easily select or change the format of sound (e.g., firing a weapon at the designated area) while playing the game. Alternatively, the listener can ignore the area while continuing to play the game in an uninterrupted way. Selection of the format of sound perFIGS.10B and10Cdiffers from conventional approaches that would require, for example, the listener to navigate thru a series of dropdown menus or other UI selections in order to select the format of sound. For example, the listener would have to select “settings” and then “sound” and then “3D sound” if the listener wanted to hear binaural sound.
By contrast,FIGS.10B and10Cshow examples in which the listener can make such selections or changes while continuing to play the game or software application without interrupting play. FIGS.11A-11Bshow an electronic device that provides switching between binaural and stereo sound based on a presence or voice of a third person in accordance with an example embodiment. The electronic device1100includes a display1110that displays or provides one or more objects1120, such as images, video, AR and/or VR graphical representations, etc. The display1110also provides or displays a visual indication1130that shows the format of sound.FIG.11Ashows the visual indication showing the format of sound being in stereo, andFIG.11Bshows the visual indication showing the format of sound being in 3D. Sound switches format based on the presence of a third person1140and/or presence of a voice1150of the third person. As one example, when the third person1140physically comes within a predefined or predetermined distance of the listener and/or electronic device1100, sound switches. For instance, switch sound when the third person1140comes within 1.0 meter of the listener, 1.5 meters of the listener, or 2.0 meters of the listener. As another example, sound switches when the third person1140speaks. For instance,FIG.11Bshows the third person1140speaking the word “Hello” which would cause the sound being provided to the listener to switch from 3D sound to stereo sound. Consider an example embodiment ofFIG.11Ain which the listener navigates thru a VR world while wearing a head mounted display (HMD)1100. The listener is not aware of or is unable to see real people around or proximate to the listener. Sensors on the HMD detect motion, movement, or presence of other people when such people come within a predetermined distance of the listener while the listener dons the HMD. For example, when the third person1140comes within one to two meters of the listener, then sound automatically switches from 3D to stereo. This switching performs several functions. First, the switching signifies to the listener that someone is physically approaching or is near. Second, the switching assists the listener in hearing the third person or knowing that this person is a real person, as opposed to a person or sound originating in the VR environment. Consider an example embodiment ofFIG.11Bin which the listener navigates thru a VR world while wearing a head mounted display (HMD)1100. The listener is not aware of or is unable to see real people around or proximate to the listener. A microphone on the HMD detects sound in the physical environment of the listener while the listener dons the HMD. For example, the third person1140speaks “Hello” to the listener (as indicated at1150). Upon detecting the voice and/or detecting a keyword, the electronic device automatically switches the sound from 3D to stereo. This switching performs several functions. First, the switching signifies to the listener that someone is talking (e.g., talking to the listener). Second, the switching assists the listener in distinguishing the voice of the third person from voices or sounds in the VR environment. If the sound did not switch, the listener would have difficulty talking to the third person since the listener could become confused about whether the voice originated in the real world or the VR world. FIG.11Bincludes a visual indication1160that shows a direction and/or location of the third person1140and/or voice1150.
For example, the visual indication is an arrow that points to where the third person is located or where the voice originated. The visual indication can also be provided at a location on the display that provides the direction or location information. For instance, the electronic device displays the visual indication on a right side of the FOV or display when the location of the person and/or voice is to a right side of the listener. FIG.12is an example computer system1200in accordance with an example embodiment. The computer system1200includes one or more of a server1210, an electronic device1230, and an electronic device1240in communication over one or more networks1250. User1239is with or uses electronic device1230, and user1249is with or uses electronic device1240. For illustration, a single server1210, two electronic devices1230and1240, and two users1239and1249are shown, but example embodiments can include one or more of a server, electronic device, and user. Server1210includes a processing unit1212and memory1214. The memory includes sound switching1216(e.g., software and/or hardware to execute example embodiments that switch and/or change a format of sound as discussed herein) and HRTFs1218. Electronic device1230includes a processing unit1232and memory1234with sound switching1236and HRTFs1238. Electronic device1240includes a processing unit1242and memory1244with sound switching1246and HRTFs1248. Sound switching can occur in the server, in one of the electronic devices, or in combinations of these devices. FIG.13is an example of an electronic device1300in accordance with an example embodiment. The electronic device1300includes a processor or processing unit1310, memory1320, a display1330, one or more interfaces1340, a wireless transmitter/receiver1350, head tracking1360(such as one or more of an inertial sensor, accelerometer, gyroscope, and magnetometer), HRTFs1370, speakers1380, one or more microphones1390, gaze and/or eye tracker1392, sound switching1394, one or more sensors1396(such as one or more of a proximity sensor, infrared sensor, and camera), and a voice detection and/or voice recognition1398. Memory includes computer readable medium (CRM). Examples of an interface include, but are not limited to, a network interface, a graphical user interface, a natural language user interface, a natural user interface, a phone control interface, a reality user interface, a kinetic user interface, a touchless user interface, an augmented reality user interface, and/or an interface that combines reality and virtuality. The processor or processing unit includes a processor and/or a digital signal processor (DSP). For example, the processing unit includes one or more of a central processing unit (CPU), digital signal processor (DSP), microprocessor, microcontroller, field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), etc. for controlling the overall operation of memory (such as random access memory (RAM) for temporary data storage, read only memory (ROM) for permanent data storage, and firmware). Consider an example embodiment in which the processing unit includes both a processor and DSP that communicate with each other and memory and perform operations and tasks that implement one or more blocks of the flow diagrams discussed herein. The memory, for example, stores applications, data, programs, sound clips, algorithms (including software to implement or assist in implementing example embodiments) and other data.
For example, a processor or DSP executes a convolving process with the retrieved HRTFs or HRIRs (or other transfer functions or impulse responses) to process sound clips so that the sound is adjusted, placed, or localized for a listener away from but proximate to the head of the listener. For example, the DSP converts mono or stereo sound to binaural sound so this binaural sound externally localizes to the user. The DSP can also receive binaural sound and move its localization point, add or remove impulse responses (such as RIRs), and perform other functions.
For example, an electronic device or software program convolves and/or processes the sound captured at the microphones of an electronic device and provides this convolved sound to the listener so the listener can localize and hear the sound. The listener can experience a resulting localization externally (such as at a sound localization point (SLP) associated with near field HRTFs and far field HRTFs) or internally (such as monaural sound or stereo sound).
The memory stores HRTFs, HRIRs, BRTFs, BRIRs, RTFs, RIRs, or other transfer functions and/or impulse responses for processing and/or convolving sound. The memory can also store instructions for executing one or more example embodiments. Further, the memory can store the sound, graphical representations, and other information and instructions discussed herein (e.g., sound switching).
The electronic device provides sound to the users through one or more speakers. Alternatively, or in addition to the speakers, the electronic device can communicate with headphones, earphones, earbuds, bone conduction devices, or another electronic device that provides sound to the user.
The networks include one or more of a cellular network, a public switched telephone network, the Internet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a personal area network (PAN), a home area network (HAN), and other public and/or private networks. Additionally, the electronic devices need not communicate with each other through a network. As one example, electronic devices couple together via one or more wires, such as a direct wired connection. As another example, electronic devices communicate directly through a wireless protocol, such as Bluetooth, near field communication (NFC), or other wireless communication protocol.
By way of example, a computer and an electronic device include, but are not limited to, handheld portable electronic devices (HPEDs), wearable electronic glasses, electronic or smart watches, wearable electronic devices (WEDs), smart earphones or hearables, electronic devices with cellular or mobile phone capabilities or subscriber identification module (SIM) cards, desktop computers, servers, portable computers (such as tablet and notebook computers), smartphones, head mounted displays (HMDs), optical head mounted displays (OHMDs), headphones, and other electronic devices with a processor or processing unit, a memory, and/or a DSP.
Example embodiments are not limited to HRTFs but also include other sound transfer functions and sound impulse responses including, but not limited to, head related impulse responses (HRIRs), room transfer functions (RTFs), room impulse responses (RIRs), binaural room impulse responses (BRIRs), binaural room transfer functions (BRTFs), headphone transfer functions (HPTFs), etc.
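The convolving step described above can be illustrated with a short sketch. The following Python fragment is a minimal illustration rather than the patent's implementation: it assumes a mono sound clip and a measured left/right HRIR pair are available as NumPy arrays, and the array contents, HRIR length, and sample rate below are placeholders.

```python
# A minimal sketch of the convolving step, assuming a mono clip and a
# left/right HRIR pair as NumPy arrays. The HRIRs below are crude stand-ins
# (a unit impulse and a delayed, attenuated impulse) that mimic interaural
# time and level cues; real HRIRs would come from measurement or a database.
import numpy as np
from scipy.signal import fftconvolve

def binauralize(mono: np.ndarray, hrir_left: np.ndarray,
                hrir_right: np.ndarray) -> np.ndarray:
    """Convolve a mono signal with an HRIR pair; returns an (N, 2) binaural signal."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=-1)

fs = 48_000
t = np.arange(fs) / fs
tone = 0.1 * np.sin(2 * np.pi * 1000.0 * t)   # one-second 1 kHz test clip
hrir_l = np.zeros(256); hrir_l[0] = 1.0       # stand-in left HRIR
hrir_r = np.zeros(256); hrir_r[24] = 0.5      # stand-in right HRIR (delay + attenuation)
binaural = binauralize(tone, hrir_l, hrir_r)
```

The same two-convolution pattern applies when the transfer functions are BRIRs or other impulse responses; only the filter coefficients change.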
Example embodiments can be executed with one or more integrated circuits that are specifically customized, designed, or configured to execute one or more blocks discussed herein. For example, the electronic devices include a specialized or custom processor or microprocessor or semiconductor intellectual property (SIP) core or digital signal processor (DSP) with a hardware architecture optimized for convolving sound and executing one or more example embodiments (e.g., switching and/or changing a format of sound).
Consider an example in which the HPED (including headphones) includes a customized or dedicated DSP that executes one or more blocks discussed herein (including processing and/or convolving sound into binaural sound for sound clips). Such a DSP has a better power performance or power efficiency compared to a general-purpose microprocessor and is more suitable for an HPED or WED due to power consumption constraints of the HPED or WED. The DSP can also include a specialized hardware architecture, such as a special or specialized memory architecture that fetches or pre-fetches multiple data and/or instructions concurrently to increase execution speed and sound processing efficiency and to quickly correct errors while sound externally localizes to the user. By way of example, streaming sound data (such as sound data in a telephone call or software game application) is processed and convolved with a specialized memory architecture (such as the Harvard architecture or the Modified von Neumann architecture). The DSP can also provide a lower-cost solution compared to a general-purpose microprocessor that executes digital signal processing and convolving algorithms. The DSP can also function as an application processor or microcontroller. The DSP can also prefetch sound clips and other sound from memory to expedite convolution.
Consider an example in which a customized DSP includes one or more special instruction sets for multiply-accumulate (MAC) operations, such as convolving with transfer functions and/or impulse responses (such as HRTFs, HRIRs, BRIRs, etc.), executing Fast Fourier Transforms (FFTs), executing finite impulse response (FIR) filtering, and executing instructions to increase parallelism.
As used herein, “empty space” is a location that is not occupied by a tangible object.
As used herein, “field-of-view” or “FOV” is the observable area a person can see with his or her eyes or via an optical device.
As used herein, “graphical representations” include, but are not limited to, emoji, emoticons, animoji, icons, stickers, folders, documents, files, text or words, pictures, pictograms, ideograms, holograms, images, and other visible indicia that display on, through, or with an electronic device. Furthermore, these graphical representations can be two-dimensional (2D), three-dimensional (3D), virtual reality (VR) images, augmented reality (AR) images, static or non-moving, moving, and other types of images.
As used herein, “headphones” or “earphones” include a left and right over-ear ear cup, on-ear pad, or in-ear monitor (IEM) with one or more speakers or drivers for a left and a right ear of a wearer. The left and right cup, pad, or IEM may be connected with a band, connector, wire, or housing, or one or both cups, pads, or IEMs may operate wirelessly being unconnected to the other. The drivers may rest on, in, or around the ears of the wearer, or be mounted near the ears without touching the ears.
As used herein, the word “proximate” means near.
For example, binaural sound that externally localizes away from but proximate to a user localizes within three meters of the head of the user.
As used herein, a “sound localization point” or “SLP” is a location where a listener localizes sound. A SLP can be internal (such as monaural sound that localizes inside a head of a listener), or a SLP can be external (such as binaural sound that externally localizes to a point or an area that is away from but proximate to the person or away from but not near the person). A SLP can be a single point, such as one defined by a single pair of HRTFs, or a SLP can be a zone, shape, volume, or general area. Further, in some instances, multiple impulse responses or transfer functions can be processed to convolve sounds to a place within the boundary of the SLP. In some instances, a particular HRTF necessary to localize sound at the SLP for a particular user may not be available or may not have been created. A SLP may not require a HRTF in order to localize sound for a user, such as an internalized SLP, or a SLP may be rendered by adjusting an interaural time difference (ITD), an interaural level difference (ILD), and/or other human audial cues.
As used herein, “sound localization information” or “SLI” is information that is used to process or convolve sound so the sound externally localizes as binaural sound to a listener.
As used herein, a “telephone call” or an “electronic call” is a connection over a wired and/or wireless network between a calling person or user and a called person or user. Telephone calls can use landlines, mobile phones, satellite phones, HPEDs, voice personal assistants (VPAs), computers, and other portable and non-portable electronic devices. Further, telephone calls can be placed through one or more of a public switched telephone network, the internet, and various types of networks (such as Wide Area Networks or WANs, Local Area Networks or LANs, Personal Area Networks or PANs, Campus Area Networks or CANs, etc.). Telephone calls include other types of telephony including Voice over Internet Protocol (VoIP) calls, internet telephone calls, in-game calls, telepresence, etc.
As used herein, a “user” or a “listener” is a person (i.e., a human being). These terms can also refer to a software program (including an IPA or IUA), hardware (such as a processor or processing unit), an electronic device, or a computer (such as a speaking robot or avatar shaped like a human with microphones in its ears or about six inches apart).
In some example embodiments, the methods illustrated herein, and data and instructions associated therewith, are stored in respective storage devices that are implemented as computer-readable and/or machine-readable storage media, physical or tangible media, and/or non-transitory storage media. These storage media include different forms of memory including semiconductor memory devices such as DRAM or SRAM, Erasable and Programmable Read-Only Memories (EPROMs), Electrically Erasable and Programmable Read-Only Memories (EEPROMs) and flash memories; magnetic disks such as fixed and removable disks; other magnetic media including tape; and optical media such as Compact Disks (CDs) or Digital Versatile Disks (DVDs). Note that the instructions of the software discussed above can be provided on one computer-readable or machine-readable storage medium, or alternatively, can be provided on multiple computer-readable or machine-readable storage media distributed in a large system having possibly plural nodes.
Such computer-readable or machine-readable storage media are considered to be part of an article (or article of manufacture). An article or article of manufacture can refer to a manufactured single component or multiple components.
Blocks and/or methods discussed herein can be executed and/or performed by a user, a user agent (including machine learning agents and intelligent user agents), a software application, an electronic device, a computer, firmware, hardware, a process, a computer system, and/or an intelligent personal assistant. Furthermore, blocks and/or methods discussed herein can be executed automatically with or without instruction from a user.
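The switching rule described in connection withFIGS.11A and11Breduces to a small decision function. The following Python sketch is illustrative only: the threshold distance, the keyword set, and the function and variable names are assumptions, and a real device would feed the inputs from its proximity sensors and voice detection rather than from literals.

```python
# A minimal sketch of the 3D/stereo switching rule, assuming a proximity
# reading in meters (None when no one is detected) and a set of words
# recognized by the device's voice detection. Threshold and keyword values
# are illustrative assumptions, not values mandated by the description.
THRESHOLD_METERS = 1.5        # e.g., 1.0, 1.5, or 2.0 meters per the examples
KEYWORDS = {"hello"}

def select_sound_format(distance_m: float | None, detected_words: set[str]) -> str:
    """Return 'stereo' when a third person is near or speaking, else '3d'."""
    if distance_m is not None and distance_m <= THRESHOLD_METERS:
        return "stereo"       # a real person is physically approaching
    if detected_words & KEYWORDS:
        return "stereo"       # a voice/keyword was detected in the room
    return "3d"               # otherwise keep binaural (3D) sound

assert select_sound_format(2.5, set()) == "3d"
assert select_sound_format(1.2, set()) == "stereo"
assert select_sound_format(None, {"hello"}) == "stereo"
```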
96,857
11943608
DETAILED DESCRIPTION
Reference is made in detail to embodiments of the invention, which are illustrated in the accompanying drawings. The same reference numbers may be used throughout the drawings to refer to the same or like parts, components, or operations.
FIG.1shows a simplified functional block diagram of a Bluetooth communication system100according to one embodiment of the present disclosure. The Bluetooth communication system100comprises a Bluetooth host device110and a Bluetooth device set102, wherein the Bluetooth device set102comprises a plurality of member devices. In practical applications, the plurality of member devices in the Bluetooth device set102may utilize various approaches complying with the Bluetooth communication standards to create a Bluetooth piconet, and may conduct various instruction transmission or data transmission through the Bluetooth piconet. Alternatively, the plurality of member devices in the Bluetooth device set102may collectively form a coordinated set complying with Bluetooth communication standards.
In this embodiment, the Bluetooth host device110and all member devices in the Bluetooth device set102support the Bluetooth LE Audio (BLE Audio) technology (hereinafter referred to as BLE Audio technology) specified by the Bluetooth Core Specification Version 5.2 or newer versions. Accordingly, a user may connect the Bluetooth host device110with the Bluetooth device set102to utilize the Bluetooth device set102to conduct various audio playback operations. For example, two member devices in the Bluetooth device set102may cooperate with appropriate audio playback circuits to collectively form a pair of Bluetooth earphones or a 2.0 channel speaker set. For another example, three member devices in the Bluetooth device set102may cooperate with appropriate audio playback circuits to collectively form a 2.1 channel speaker set. For another example, six member devices in the Bluetooth device set102may cooperate with appropriate audio playback circuits to collectively form a 5.1 channel speaker set. For another example, eight member devices in the Bluetooth device set102may cooperate with appropriate audio playback circuits to collectively form a 7.1 channel speaker set.
In order to reduce the complexity of the drawing, only three exemplary member devices are shown inFIG.1, which are a first member device120, a second member device130, and a third member device140. In the embodiment ofFIG.1, the first member device120is coupled with a first audio playback circuit162and a first voice receiving circuit164, the second member device130is coupled with a second audio playback circuit172and a second voice receiving circuit174, while the third member device140is coupled with a third audio playback circuit182and a third voice receiving circuit184. The user may connect the Bluetooth host device110with the first member device120, the second member device130, and the third member device140in the Bluetooth device set102, so as to utilize the above member devices to control related audio playback circuits to playback audio data transmitted from the Bluetooth host device110by adopting the BLE Audio technology.
In the embodiment ofFIG.1, the Bluetooth host device110comprises a host-side communication circuit111, an input circuit113, a host-side cypher key generation circuit115, and a processing circuit117. The first member device120comprises a first communication circuit121, a first cypher key generation circuit123, a first control circuit125, and a first audio processing circuit127.
The second member device130comprises a second communication circuit131, a second cypher key generation circuit133, a second control circuit135, and a second audio processing circuit137.
In the Bluetooth host device110, the host-side communication circuit111is arranged to operably receive and transmit various Bluetooth packets. The input circuit113is arranged to operably receive various commands issued by the user. The host-side cypher key generation circuit115is arranged to operably execute various selected or predetermined cypher key algorithms to generate cypher keys required by the Bluetooth host device110for conducting subsequent Bluetooth data transmissions with respective member devices in the Bluetooth device set102. The processing circuit117is coupled with the host-side communication circuit111, the input circuit113, and the host-side cypher key generation circuit115. The processing circuit117is arranged to operably generate various Bluetooth packets to be transmitted by the host-side communication circuit111, arranged to operably parse various Bluetooth packets received by the host-side communication circuit111to obtain related data or instructions, and further arranged to operably control operations of the host-side cypher key generation circuit115. The processing circuit117is further arranged to operably control operations of the Bluetooth host device110according to various operating commands issued by the user through the input circuit113. The term “Bluetooth packet” used throughout the description and the claims also encompasses various protocol data units (PDUs) specified by various Bluetooth communication standards.
In some embodiments, the processing circuit117is further coupled with a display device150, and arranged to operably control operations of the display device150, so as to display related information or images to the user.
In the first member device120, the first communication circuit121is arranged to operably receive and transmit various Bluetooth packets. The first cypher key generation circuit123is arranged to operably execute various selected or predetermined cypher key algorithms to generate cypher keys required by the first member device120for conducting subsequent Bluetooth data transmissions with the Bluetooth host device110. The first control circuit125is coupled with the first communication circuit121and the first cypher key generation circuit123. The first control circuit125is arranged to operably generate various Bluetooth packets to be transmitted by the first communication circuit121, arranged to operably parse various Bluetooth packets received by the first communication circuit121to acquire related data or instructions, and further arranged to operably control the cypher key generating operations of the first cypher key generation circuit123. In addition, the first control circuit125is further arranged to operably adjust the clock signals employed by the first member device120, so as to synchronize a piconet clock utilized among the first member device120and other Bluetooth devices. The first audio processing circuit127is coupled with the first control circuit125, the first audio playback circuit162, and the first voice receiving circuit164.
The first audio processing circuit127is arranged to operably process the audio data transmitted from the Bluetooth host device110(e.g., to encode or decode the audio data, and/or to conduct format conversion on the audio data) according to the instructions of the first control circuit125, and arranged to operably control the first audio playback circuit162to playback contents of the audio data. The first audio processing circuit127is further arranged to operably encode the sounds received by the first voice receiving circuit164to generate related sound data.
In the second member device130, the second communication circuit131is arranged to operably receive and transmit various Bluetooth packets. The second cypher key generation circuit133is arranged to operably execute various selected or predetermined cypher key algorithms to generate cypher keys required by the second member device130for conducting subsequent Bluetooth data transmissions with the Bluetooth host device110. The second control circuit135is coupled with the second communication circuit131and the second cypher key generation circuit133. The second control circuit135is arranged to operably generate various Bluetooth packets to be transmitted by the second communication circuit131, and arranged to operably parse various Bluetooth packets received by the second communication circuit131to acquire related data or instructions, and further arranged to operably control the cypher key generating operations of the second cypher key generation circuit133. In addition, the second control circuit135is further arranged to operably adjust the clock signals employed by the second member device130, so as to synchronize a piconet clock utilized among the second member device130and other Bluetooth devices. The second audio processing circuit137is coupled with the second control circuit135, the second audio playback circuit172, and the second voice receiving circuit174. The second audio processing circuit137is arranged to operably process the audio data transmitted from the Bluetooth host device110(e.g., to encode or decode the audio data, and/or to conduct format conversion on the audio data) according to the instructions of the second control circuit135, and arranged to operably control the second audio playback circuit172to playback contents of the audio data. The second audio processing circuit137is further arranged to operably encode the sounds received by the second voice receiving circuit174to generate related sound data.
In some embodiments, the first control circuit125is further arranged to operably control the first member device120to act as a Bluetooth Central in a Bluetooth piconet, and to operably adjust the clock signals employed by the first member device120, so as to synchronize a piconet clock utilized among the first member device120and other Bluetooth devices. In this situation, the second control circuit135is further arranged to operably control the second member device130to act as a Bluetooth Peripheral in the Bluetooth piconet, and to operably adjust the clock signals employed by the second member device130, so as to synchronize the piconet clock utilized between the second member device130and the first member device120.
In this embodiment, each of the Bluetooth host device110, the first member device120, and the second member device130supports the BLE Audio technology.
In this situation, the processing circuit117of the Bluetooth host device110is further arranged to operably generate audio data complying with related specifications of the BLE Audio technology (hereinafter referred to as BLE audio data), and to operably utilize the host-side communication circuit111to transmit the BLE audio data to all member devices in the Bluetooth device set102. The first control circuit125of the first member device120is further arranged to operably utilize the first audio processing circuit127to process the BLE audio data transmitted from the Bluetooth host device110, and to operably instruct the first audio processing circuit127to control the first audio playback circuit162to playback the contents of the BLE audio data. Similarly, the second control circuit135of the second member device130is further arranged to operably utilize the second audio processing circuit137to process the BLE audio data transmitted from the Bluetooth host device110, and to operably instruct the second audio processing circuit137to control the second audio playback circuit172to playback the contents of the BLE audio data.
In some embodiments, the host-side communication circuit111of the Bluetooth host device110is further arranged to operably adopt various wired network transmission technologies or various Radio Access Technologies (RATs) to receive the voice data transmitted from a remote device (not shown in figures) through various networks (e.g., Internet, mobile communication networks, or various private networks). The processing circuit117is arranged to operably decode the voice data received by the host-side communication circuit111, and arranged to operably utilize the host-side communication circuit111to transmit decoded voice data to the first member device120and/or the second member device130in the Bluetooth device set102in the form of Bluetooth packets, and to operably instruct the first member device120and/or the second member device130to utilize the first audio playback circuit162and/or the second audio playback circuit172to playback the contents of the voice data.
The aforementioned RAT may be various 2nd Generation (2G) mobile communication technologies, various 3rd Generation (3G) mobile communication technologies, various 4th Generation (4G) mobile communication technologies, various 5th Generation (5G) mobile communication technologies, various wireless networking technologies specified by the IEEE 802.11 series standards, various Internet-of-Things (IoT) communication technologies, various Narrow Band Internet of Things (NB-IoT) communication technologies, various Vehicle-to-Vehicle communication technologies, various Vehicle-to-Everything (V2X) communication technologies, various satellite communication technologies, various wireless communication technologies proposed by other standard setting organizations, or the like.
On the other hand, the first member device120and/or the second member device130may utilize the first voice receiving circuit164and/or the second voice receiving circuit174to receive the user's voice, and may utilize the first audio processing circuit127and/or the second audio processing circuit137to generate related sound data. The first member device120and/or the second member device130may further utilize the first communication circuit121and/or the second communication circuit131to transmit the aforementioned sound data to the Bluetooth host device110.
In this situation, the processing circuit117of the Bluetooth host device110may further adopt the aforementioned wired network transmission technologies or RATs to transmit the sound data generated by the Bluetooth device set102to the remote device through various appropriate networks. As a result, the user is enabled to utilize the cooperation of the Bluetooth host device110and the Bluetooth device set102to realize voice communication with the remote device.
In practice, the host-side communication circuit111in the Bluetooth host device110may be realized with appropriate wireless transceiver circuits supporting the Bluetooth communication protocol of the Bluetooth Core Specification Version 5.2 or a newer version. Alternatively, the host-side communication circuit111may be realized with various hybrid communication circuits supporting the above Bluetooth communication protocol and also supporting the aforementioned wired network transmission technologies or RATs. If needed, the host-side communication circuit111may be coupled with an additional antenna (not shown in figures). The input circuit113may be realized with various appropriate circuits capable of receiving the commands issued by the user, such as a keyboard, a mouse, a touch screen, a voice activated device, a gesture sensing device, or a hybrid of the above various devices. The host-side cypher key generation circuit115may be realized with various digital computing circuits, microprocessors, security modules, or Application Specific Integrated Circuits (ASICs) having cypher key computing capabilities. The processing circuit117may be realized with an appropriate packet demodulation circuit, a digital computing circuit, a microprocessor, an ASIC, a single processor module, a combination of multiple processor modules, a single computer system, a combination of multiple computer systems, a single server, a combination of multiple servers, or a cloud computing system having appropriate computing capabilities and capable of parsing and generating Bluetooth packets adopting the BLE Audio technology specified by the Bluetooth Core Specification Version 5.2 or newer versions.
In practical applications, different functional blocks of the aforementioned Bluetooth host device110may be realized with separate circuits or may be integrated into a single IC chip or a single device. For example, the input circuit113and/or the host-side cypher key generation circuit115may be integrated into the processing circuit117. For another example, the input circuit113and the display device150may be integrated into a single touch screen. Alternatively, all functional blocks of the Bluetooth host device110may be integrated into a single IC chip, a mobile communication device (e.g., a cell phone), a wearable device, a tablet computer, a notebook computer, a desktop computer, an audio broadcast system, a voice guidance system, a voice broadcasting system, a vehicular communication device, a satellite communication device, a smart TV, a Bluetooth smart speaker, or the like.
In practice, each of the first communication circuit121and the second communication circuit131in the Bluetooth device set102may be realized with an appropriate Bluetooth communication circuit capable of supporting the Bluetooth communication protocol of the Bluetooth Core Specification Version 5.2 or newer versions. If needed, the first communication circuit121and the second communication circuit131may be respectively coupled with additional antennas (not shown in figures).
Each of the first cypher key generation circuit123and the second cypher key generation circuit133may be realized with appropriate digital computing circuits, microprocessors, security modules, or ASICs having cypher key computing capabilities. Each of the first control circuit125and the second control circuit135may be realized with an appropriate packet demodulation circuit, a digital computing circuit, a microprocessor, a single processor module, a combination of multiple processor modules, or an ASIC having appropriate computing capabilities and capable of parsing and generating Bluetooth packets adopting the BLE Audio technology specified by the Bluetooth Core Specification Version 5.2 or newer versions.
In some embodiments, the aforementioned first communication circuit121and second communication circuit131may be realized with appropriate Bluetooth transmission circuits that also support the Bluetooth communication protocol of earlier Bluetooth versions (e.g., Bluetooth 2.0, Bluetooth 3.0, Bluetooth 4.0, Bluetooth 4.2, or the like). In this situation, the aforementioned first control circuit125and second control circuit135should be designed to be able to parse and generate Bluetooth packets defined by the Bluetooth communication protocol of earlier Bluetooth versions.
Each of the first audio processing circuit127and the second audio processing circuit137may be realized with digital computing circuits, microprocessors, ASICs, or digital-to-analog converters (DACs) capable of conducting various encoding/decoding processing and/or data format conversion on audio data. In some embodiments, the first audio processing circuit127and the second audio processing circuit137may be respectively integrated into the first control circuit125and the second control circuit135.
Different functional blocks of the aforementioned first member device120may be realized with separate circuits or may be integrated into a single IC chip, a single wearable Bluetooth device, or a single Bluetooth speaker. Similarly, different functional blocks of the aforementioned second member device130may be realized with separate circuits or may be integrated into a single IC chip, a single wearable Bluetooth device, or a single Bluetooth speaker.
In addition, each of the first audio playback circuit162and the second audio playback circuit172may be realized with various appropriate circuits capable of receiving and playing back audio data, such as various types of speakers. Each of the first voice receiving circuit164and the second voice receiving circuit174may be realized with various appropriate circuits capable of receiving sound and converting sound into corresponding audio signals, such as various types of microphones. In some embodiments, the first member device120, the first audio playback circuit162, and the first voice receiving circuit164may be integrated into a single device (e.g., a wearable Bluetooth device or a Bluetooth speaker). Similarly, the second member device130, the second audio playback circuit172, and the second voice receiving circuit174may be integrated into a single device (e.g., a wearable Bluetooth device or a Bluetooth speaker).
The main circuit structure and implementations of other member devices (e.g., the third member device140), other audio playback circuits (e.g., the third audio playback circuit182), and other voice receiving circuits (e.g., the third voice receiving circuit184) in the Bluetooth device set102may be similar to the aforementioned corresponding member devices and circuits, but different additional circuit components may be provided in different member devices, different audio playback circuits, and/or different voice receiving circuits. The circuit structure of all member devices is not required to be exactly identical with each other. The circuit structure of all audio playback circuits is not required to be exactly identical with each other. The circuit structure of all voice receiving circuits is not required to be exactly identical with each other.
When the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the user may utilize the Bluetooth communication system100to conduct various audio playback operations adopting the BLE Audio technology to reduce the power consumption of the Bluetooth communication system100while improving the overall audio playback quality.
As described previously, when a traditional Bluetooth device set that supports the BLE Audio technology wants to connect with a traditional Bluetooth host device, the traditional Bluetooth host device has to negotiate with individual member devices in the traditional Bluetooth device set one by one regarding the relevant parameters for generating cypher keys. Therefore, it takes a lengthy time for the traditional Bluetooth host device to respectively conduct Bluetooth pairing with respective member devices in the traditional Bluetooth device set. In order to address this low pairing efficiency between the traditional Bluetooth host device and the different member devices in the traditional Bluetooth device set, the Bluetooth host device110and the Bluetooth device set102in the disclosed Bluetooth communication system100adopt different approaches to improve the generation efficiency of related cypher keys.
The operations of the Bluetooth communication system100will be further described in the following by reference toFIG.2andFIG.3.FIG.2andFIG.3collectively show a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a first embodiment of the present disclosure. In the flowchart ofFIG.2andFIG.3, operations within a column under the name of a specific device are operations to be performed by the specific device. For example, operations within a column under the label “Bluetooth host device” are operations to be performed by the Bluetooth host device110; operations within a column under the label “first member device” are operations to be performed by the first member device120; operations within a column under the label “second member device” are operations to be performed by the second member device130; and so forth. The same analogous arrangement also applies to the subsequent flowcharts.
When the user wants to utilize the Bluetooth communication system100to playback various audio data adopting the BLE Audio technology, the Bluetooth host device110should be paired with respective member devices in the Bluetooth device set102in advance.
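Before the detailed walkthrough of the operations inFIG.2, the overall shape of the simplified pairing flow can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the class and method names, the auto-pair request format, and the SHA-256 derivation merely stand in for the unspecified "predetermined cypher key algorithm" that both sides apply to the parameter decided by the host.

```python
# A self-contained, illustrative sketch of the privileged pairing flow
# elaborated in operations 202-220 below. All names and formats are assumed.
import hashlib
import os

class MemberDevice:
    def __init__(self, addr: bytes):
        self.addr = addr
    def advertise(self) -> tuple[bytes, bytes]:
        # operations 202/204: device information plus an auto-pair request
        return self.addr, b"AUTO_PAIR/vendorX/model1/fw3"

class HostDevice:
    def __init__(self, addr: bytes):
        self.addr = addr
    def is_privileged(self, auto_pair: bytes) -> bool:
        # operation 208: inspect the request's format/content against a
        # predetermined condition (brand, vendor, circuit model, firmware)
        return auto_pair.startswith(b"AUTO_PAIR/vendorX")
    def decide_p1(self) -> bytes:
        # operation 210: e.g., a first random value decided by the host
        return os.urandom(4)

def cypher_key(p1: bytes, host_addr: bytes, member_addr: bytes) -> bytes:
    # operations 214/216: the same algorithm and inputs on both sides
    return hashlib.sha256(p1 + host_addr + member_addr).digest()

host, member = HostDevice(b"\x11" * 6), MemberDevice(b"\x22" * 6)
member_addr, auto_pair = member.advertise()
if host.is_privileged(auto_pair):        # skip key parameter negotiation
    p1 = host.decide_p1()                # sent with the privileged pairing notice
    key_1 = cypher_key(p1, host.addr, member_addr)   # host side (operation 214)
    key_2 = cypher_key(p1, host.addr, member_addr)   # member side (operation 216)
    assert key_1 == key_2                # the keys correspond to each other
```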
To initiate the pairing, the processing circuit117of the Bluetooth host device110may generate a Bluetooth inquiry request containing a device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110), may utilize the host-side communication circuit111to transmit the Bluetooth inquiry request to other nearby Bluetooth devices, and may then wait for responses from the member devices of the Bluetooth device set102. In practice, the processing circuit117may also fill in other data or messages in the above Bluetooth inquiry request depending on functional design requirements. Alternatively, the processing circuit117may control the host-side communication circuit111to operate in a predetermined receiving mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs. For example, the predetermined receiving mode may be an operating mode capable of receiving various Bluetooth advertising packets, such as an LE Extended Passive Scan mode, an LE Extended Active Scan mode, an LE Extended Initiator mode, or a Periodic Scanning mode.
On the other hand, all member devices in the Bluetooth device set102may enter a predetermined transmitting mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs, or may operate in the predetermined transmitting mode after receiving the Bluetooth inquiry request generated by the Bluetooth host device110. The predetermined transmitting mode may be various operating modes capable of transmitting various Bluetooth advertising packets and/or Bluetooth protocol data units (PDUs). For example, the predetermined transmitting mode may be an Advertising mode, a Scannable mode, a Connectable mode, a Non-connectable mode, a Non-scannable mode, a Periodic Advertising mode, an LE Extended Advertising mode, or an LE Periodic Advertising mode.
The first member device120may perform the operation202ofFIG.2after entering the predetermined transmitting mode. In the operation202, the first control circuit125may generate one or more target Bluetooth packets, wherein the one or more target Bluetooth packets contain a device information of the first member device120(e.g., a Bluetooth device address of the first member device120) and an auto-pair request that can be utilized to identify the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125. The first control circuit125may define the content and format of the auto-pair request by itself according to preset rules. The first control circuit125may insert the auto-pair request and the device information of the first member device120into one or more specific fields of a single target Bluetooth packet, or may insert them into specific fields of multiple target Bluetooth packets in a distributed manner.
In operations, the first control circuit125may utilize predetermined Bluetooth advertising packets to be the above target Bluetooth packets. For example, the one or more target Bluetooth packets mentioned in the operation202may be one or more auxiliary advertising indication (AUX_ADV_IND) packets, or may be a group of packets formed by one or more extended advertising indication (ADV_EXT_IND) packets and one or more auxiliary advertising indication (AUX_ADV_IND) packets.
For another example, the aforementioned one or more target Bluetooth packets may be one or more auxiliary chain indication (AUX_CHAIN_IND) packets, or may be a group of packets formed by one or more extended advertising indication (ADV_EXT_IND) packets, one or more auxiliary advertising indication (AUX_ADV_IND) packets, and one or more auxiliary chain indication (AUX_CHAIN_IND) packets. For another example, the aforementioned one or more target Bluetooth packets may be one or more auxiliary scan response (AUX_SCAN_RSP) packets, or may be a group of packets formed by one or more extended advertising indication (ADV_EXT_IND) packets, one or more auxiliary advertising indication (AUX_ADV_IND) packets, and one or more auxiliary scan response (AUX_SCAN_RSP) packets. For another example, the aforementioned one or more target Bluetooth packets may be a group of packets formed by one or more auxiliary scan response (AUX_SCAN_RSP) packets and one or more auxiliary chain indication (AUX_CHAIN_IND) packets. For another example, the aforementioned one or more target Bluetooth packets may be a group of packets formed by one or more extended advertising indication (ADV_EXT_IND) packets, one or more auxiliary advertising indication (AUX_ADV_IND) packets, one or more auxiliary scan response (AUX_SCAN_RSP) packets, and one or more auxiliary chain indication (AUX_CHAIN_IND) packets. For another example, the aforementioned one or more target Bluetooth packets may be one or more auxiliary synchronous indication (AUX_SYNC_IND) packets, or may be a group of packets formed by one or more extended advertising indication (ADV_EXT_IND) packets, one or more auxiliary advertising indication (AUX_ADV_IND) packets, and one or more auxiliary synchronous indication (AUX_SYNC_IND) packets.
For another example, the aforementioned one or more target Bluetooth packets may be one or more advertising indication (ADV_IND) packets, one or more non-connectable advertising indication (ADV_NONCONN_IND) packets, or one or more discoverable advertisement indication (ADV_DISCOVER_IND) packets. For another example, the aforementioned one or more target Bluetooth packets may be a group of packets formed by one or more advertising indication (ADV_IND) packets and one or more non-connectable advertising indication (ADV_NONCONN_IND) packets. For another example, the aforementioned one or more target Bluetooth packets may be a group of packets formed by one or more advertising indication (ADV_IND) packets and one or more discoverable advertisement indication (ADV_DISCOVER_IND) packets. For another example, the aforementioned one or more target Bluetooth packets may be a group of packets formed by one or more advertising indication (ADV_IND) packets, one or more non-connectable advertising indication (ADV_NONCONN_IND) packets, and one or more discoverable advertisement indication (ADV_DISCOVER_IND) packets.
In the operation204, the first control circuit125may utilize the first communication circuit121to transmit the aforementioned one or more target Bluetooth packets to the Bluetooth host device110. In the operation206, the host-side communication circuit111of the Bluetooth host device110may receive the one or more target Bluetooth packets. In the operation208, the processing circuit117of the Bluetooth host device110may parse the one or more target Bluetooth packets to acquire the auto-pair request and the device information of the first member device120transmitted from the first member device120.
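Operations202through208leave open exactly which packet fields carry the auto-pair request and the device information (they may even be spread over multiple packets). As one illustrative possibility, the sketch below packs both into a single Manufacturer Specific Data AD structure, the standard length/type/data container used in Bluetooth advertising payloads; the company identifier and the payload layout are assumptions, not fields mandated by the description.

```python
# Hypothetical packing of the auto-pair request and device information into
# one advertising payload. The AD structure layout ([length][type][data]) and
# AD type 0xFF (Manufacturer Specific Data, which begins with a 16-bit
# company identifier) follow the Bluetooth advertising data format; the
# company ID 0x1234 and the payload contents are invented for illustration.
import struct

def ad_structure(ad_type: int, data: bytes) -> bytes:
    # Each AD structure is: [length = 1 + len(data)] [AD type] [data]
    return struct.pack("BB", 1 + len(data), ad_type) + data

def build_auto_pair_ad(company_id: int, device_addr: bytes,
                       auto_pair: bytes) -> bytes:
    payload = struct.pack("<H", company_id) + device_addr + auto_pair
    return ad_structure(0xFF, payload)

ad = build_auto_pair_ad(0x1234, b"\xaa\xbb\xcc\xdd\xee\xff",
                        b"AUTO_PAIR/vendorX/model1/fw3")
# Legacy advertising payloads are capped at 31 bytes; larger auto-pair data
# would ride in extended packets such as AUX_ADV_IND or AUX_CHAIN_IND.
print(len(ad), ad.hex())
```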
Having acquired the auto-pair request and the device information, the processing circuit117may inspect the format and content of the auto-pair request to determine whether the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125matches a predetermined condition (e.g., whether they correspond to the brand, the vendor, the circuit model, and/or the firmware version of the Bluetooth host device110and/or the processing circuit117). For example, the processing circuit117may inspect whether the format of the auto-pair request matches a predetermined feature or not, or whether the auto-pair request contains a predetermined content or not.
In one embodiment, if the format of the auto-pair request matches the predetermined feature, and/or the auto-pair request contains the predetermined content, then the processing circuit117may determine that the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125matches the predetermined condition. In this situation, the processing circuit117may identify the first member device120as a first privileged device according to the aforementioned auto-pair request, and then perform the operation210. In this embodiment, when the first member device120is identified as a privileged device by the processing circuit117, it means that when the Bluetooth host device110and the first member device120conduct a Bluetooth pairing procedure, the Bluetooth host device110and the first member device120can skip many traditional key parameter negotiation steps, and are permitted to directly adopt a pre-defined simplified method to generate the cypher keys. Relevant operations will be further described in the operation210through the operation216.
On the contrary, if the format of the auto-pair request does not match the predetermined feature, and the auto-pair request does not contain the predetermined content, then the processing circuit117may determine that the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125does not match the predetermined condition. In this situation, the processing circuit117may identify the first member device120as an ordinary Bluetooth device, and then adopt various existing approaches to conduct Bluetooth pairing with the first member device120so as to generate related cypher keys.
In the operation210, the processing circuit117may establish a connection with the first member device120through the host-side communication circuit111, and may decide a first parameter P1and generate a first privileged pairing notice. In one embodiment, the processing circuit117may generate a first predetermined value, a first random value, a first predetermined address, a first random address, a first predetermined string, a first random string, a first predetermined token, a first random token, or a first access address corresponding to the first member device120to be the first parameter P1. In another embodiment, the processing circuit117may opt to use the content of a predetermined field in a certain Bluetooth packet transmitted from the first member device120to the Bluetooth host device110to be the first parameter P1, or may instead opt to use the content of a predetermined field in a certain Bluetooth packet transmitted from the Bluetooth host device110to the first member device120to be the first parameter P1.
For example, the processing circuit117may opt to use an initial value of a cyclic redundancy check (CRCInit), a window size (WinSize), a window offset (WinOffset), a connection event interval (Connection Interval), a slave latency, a timeout value, a channel map, a hop, or a sleep clock accuracy (SCA) in a connection indication (Connect_IND) packet or in an auxiliary connection request (AUX_Connect_REQ) packet generated by the processing circuit117to be the first parameter P1. For another example, the processing circuit117may opt to use the value of the cyclic redundancy check (CRC) in the aforementioned connection indication (Connect_IND) packet or auxiliary connection request (AUX_Connect_REQ) packet to be the first parameter P1. For another example, the processing circuit117may opt to use the value of the cyclic redundancy check (CRC) in an auxiliary connection response (AUX_Connect_RSP) packet or in a specific Bluetooth advertising packet generated by the first member device120to be the first parameter P1.
The processing circuit117may also transmit the first privileged pairing notice to the first member device120through the host-side communication circuit111in the operation210. Additionally, in the operation210, the processing circuit117may also transmit the first parameter P1or a first field indication to the first member device120through the host-side communication circuit111, wherein the first field indication is utilized for indicating a specific packet field whose content is to be utilized as the first parameter P1.
In this situation, the first communication circuit121of the first member device120may perform the operation212to receive the first privileged pairing notice transmitted from the Bluetooth host device110. In addition, the first communication circuit121may also receive the first parameter P1or a related first field indication transmitted from the Bluetooth host device110in the operation212, so that the first control circuit125is enabled to learn the first parameter P1decided by the Bluetooth host device110accordingly.
In the operation214, the processing circuit117of the Bluetooth host device110may generate a first cypher key Key-1required for conducting subsequent Bluetooth data transmissions with the first member device120according to the first parameter P1. For example, the processing circuit117may execute a predetermined cypher key algorithm to generate the first cypher key Key-1according to the first parameter P1and the device information of the Bluetooth host device110. For another example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm to generate the first cypher key Key-1according to the first parameter P1, the device information of the Bluetooth host device110, and the device information of the first member device120.
In the operation216, the first control circuit125of the first member device120may generate a second cypher key Key-2required for conducting subsequent Bluetooth data transmissions with the Bluetooth host device110according to the first parameter P1. In other words, the second cypher key Key-2generated by the first control circuit125and the first cypher key Key-1generated by the processing circuit117will correspond to each other. For example, the first control circuit125may execute the aforementioned predetermined cypher key algorithm to generate the second cypher key Key-2according to the first parameter P1and the device information of the first member device120.
For another example, the first control circuit125may execute the aforementioned predetermined cypher key algorithm to generate the second cypher key Key-2according to the first parameter P1, the device information of the first member device120, and the device information of the Bluetooth host device110.
In other words, after the first member device120is identified as the first privileged device by the Bluetooth host device110, the Bluetooth host device110and the first member device120can omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. That is, the Bluetooth host device110can directly generate the first cypher key Key-1based on the first parameter P1decided by the Bluetooth host device110, while the first member device120can directly generate the second cypher key Key-2based on the same first parameter P1. As a result, this approach can significantly reduce the required time for generating the first cypher key Key-1and the second cypher key Key-2.
In the operation218, the processing circuit117of the Bluetooth host device110may use the first cypher key Key-1to conduct Bluetooth data transmissions with the first member device120through the host-side communication circuit111. In the operation220, the first control circuit125of the first member device120may use the second cypher key Key-2to conduct Bluetooth data transmissions with the Bluetooth host device110through the first communication circuit121.
For example, in the embodiments where both the Bluetooth host device110and the first member device120support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the first member device120, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110and the first member device120to thereby extend the service time of the Bluetooth host device110and the first member device120, but also effectively improves the overall quality of the audio playback operations.
As shown inFIG.3, after the second cypher key Key-2is generated by the first control circuit125, the first control circuit125may further perform the operation302to utilize the first communication circuit121to transmit a device set identification information Set-ID corresponding to the Bluetooth device set102. For example, the first control circuit125may utilize a Set Identity Resolving Key (SIRK) of the Bluetooth device set102to be the device set identification information Set-ID of the Bluetooth device set102. In this situation, the host-side communication circuit111of the Bluetooth host device110may perform the operation304to receive the device set identification information Set-ID transmitted from the first member device120.
In operations, the first control circuit125of the first member device120may generate a resolvable set identifier (RSI) corresponding to the first member device120at an appropriate time point (e.g., at any time point between the operation202and the operation220, or at a certain time point before the operation202).
For example, the first control circuit125may perform a predetermined target algorithm according to the device set identification information Set-ID of the Bluetooth device set102to generate a random address, and utilize the random address to be a resolvable set identifier RSI-1corresponding to the first member device120. In practice, the first control circuit125may utilize the first communication circuit121to transmit the resolvable set identifier RSI-1corresponding to the first member device120to the Bluetooth host device110at any time point after the operation202. Alternatively, the first control circuit125may also insert the resolvable set identifier RSI-1corresponding to the first member device120into the one or more target Bluetooth packets to be transmitted to the Bluetooth host device110in the operation202. As a result, the Bluetooth host device110is enabled to receive the resolvable set identifier RSI-1corresponding to the first member device120in the operation206.
Similarly, the second control circuit135of the second member device130may perform the operation306ofFIG.3at any appropriate time point to generate a resolvable set identifier RSI-2corresponding to the second member device130. For example, the second control circuit135may perform the aforementioned target algorithm according to the device set identification information Set-ID of the Bluetooth device set102to generate a random address, and utilize the random address to be the resolvable set identifier RSI-2corresponding to the second member device130. In practice, the second control circuit135may perform the operation306at any time point between the operation202and the operation220, or at a certain time point before the operation202.
As described previously, all member devices in the Bluetooth device set102may operate in a predetermined transmitting mode. The second member device130may perform the operation308ofFIG.3during a time period while the second member device130operates in the predetermined transmitting mode. In the operation308, the second control circuit135may utilize the second communication circuit131to transmit a device information of the second member device130(e.g., a Bluetooth device address of the second member device130) and the resolvable set identifier RSI-2to the Bluetooth host device110.
In operations, the second control circuit135may generate one or more target Bluetooth packets containing the device information of the second member device130and the resolvable set identifier RSI-2by adopting the approach described in the aforementioned operation202. For example, the second control circuit135may insert the resolvable set identifier RSI-2and the device information of the second member device130into one or more specific fields of a single target Bluetooth packet, or may insert them into specific fields of multiple target Bluetooth packets in a distributed manner. Then, the second control circuit135may utilize the second communication circuit131to transmit the one or more target Bluetooth packets to the Bluetooth host device110. The type of the target Bluetooth packets referred to in the operation308may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here.
In this situation, the host-side communication circuit111of the Bluetooth host device110may perform the operation310to receive the one or more target Bluetooth packets transmitted from the second member device130.
The processing circuit117may parse the one or more target Bluetooth packets to acquire the device information of the second member device130and the resolvable set identifier RSI-2. Then, in the operation312, the processing circuit117may inspect the resolvable set identifier RSI-2of the second member device130according to the device set identification information Set-ID transmitted from the first member device120, so as to determine whether the second member device130belongs to the Bluetooth device set102or not. For example, in this embodiment, the processing circuit117may inspect whether the resolvable set identifier RSI-2is a random address calculated based on the device set identification information Set-ID or not.
If the processing circuit117determines that the resolvable set identifier RSI-2is a random address generated based on the device set identification information Set-ID, then the processing circuit117may determine that the second member device130belongs to the Bluetooth device set102. In this situation, the processing circuit117may identify the second member device130as a member device of the Bluetooth device set102in the operation312according to the device set identification information Set-ID and the resolvable set identifier RSI-2, and then perform the operation314. In this embodiment, when the second member device130is identified as a member device of the Bluetooth device set102by the processing circuit117, it means that when the Bluetooth host device110and the second member device130conduct a Bluetooth pairing procedure, the Bluetooth host device110and the second member device130can skip many traditional key parameter negotiation steps, and are permitted to directly adopt a pre-defined simplified method to generate the cypher keys. Relevant operations will be further described in the operation314through the operation320.
On the contrary, if the processing circuit117determines that the resolvable set identifier RSI-2is not a random address generated based on the device set identification information Set-ID, then the processing circuit117may determine that the second member device130does not belong to the Bluetooth device set102. In this situation, the processing circuit117may identify the second member device130as an ordinary Bluetooth device, and then adopt various existing approaches to conduct Bluetooth pairing with the second member device130so as to generate related cypher keys.
In the operation314, the processing circuit117may establish a connection with the second member device130through the host-side communication circuit111, and may decide a second parameter P2and generate a second privileged pairing notice. In one embodiment, the processing circuit117may generate a second predetermined value, a second random value, a second predetermined address, a second random address, a second predetermined string, a second random string, a second predetermined token, a second random token, or a second access address corresponding to the second member device130to be the second parameter P2. In another embodiment, the processing circuit117may opt to use the content of a predetermined field in a certain Bluetooth packet transmitted from the second member device130to the Bluetooth host device110to be the second parameter P2, or may instead opt to use the content of a predetermined field in a certain Bluetooth packet transmitted from the Bluetooth host device110to the second member device130to be the second parameter P2.
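The RSI generation and inspection of operations306and312can be illustrated concretely. The following Python sketch, using the pyca/cryptography package, assumes a CSIP-style construction in which the RSI consists of a 3-byte random part plus a 3-byte hash computed from the set key (the SIRK used as Set-ID here); the description above requires only "a predetermined target algorithm", so this AES-based hash is an assumption borrowed from how resolvable addresses are commonly built, not the mandated algorithm.

```python
# Hypothetical RSI generation (member side, operation 306) and verification
# (host side, operation 312). The AES-128 hash over a 3-byte random part is
# an assumed stand-in for the patent's "predetermined target algorithm".
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def _hash24(key: bytes, prand: bytes) -> bytes:
    # AES-128(key, 0..0 || prand), truncated to the low 3 bytes
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(b"\x00" * 13 + prand)[-3:]

def generate_rsi(sirk: bytes) -> bytes:
    """Operation 306: build a random address derived from the Set-ID."""
    prand = os.urandom(3)
    return prand + _hash24(sirk, prand)

def rsi_matches_set(sirk: bytes, rsi: bytes) -> bool:
    """Operation 312: check whether an RSI was calculated from the Set-ID."""
    prand, received_hash = rsi[:3], rsi[3:]
    return _hash24(sirk, prand) == received_hash

sirk = os.urandom(16)         # Set-ID shared by the members of the device set
rsi_2 = generate_rsi(sirk)    # advertised by the second member device
assert rsi_matches_set(sirk, rsi_2)   # host identifies a set member
```

Because the hash can only be reproduced by a device holding the Set-ID, the host can recognize set membership without any per-device negotiation.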
For example, the processing circuit117may opt to use an initial value of a cyclic redundancy check (CRCInit), a window size (WinSize), a window offset (WinOffset), a connection event interval (Connection Interval), a slave latency, a timeout value, a channel map, a hop, or a sleep clock accuracy (SCA) in a connection indication (Connect_IND) packet or in an auxiliary connection request (AUX_Connect_REQ) packet generated by the processing circuit117to be the second parameter P2. For another example, the processing circuit117may opt to use the value of the cyclic redundancy check (CRC) in the aforementioned connection indication (Connect_IND) packet or auxiliary connection request (AUX_Connect_REQ) packet to be the second parameter P2. For another example, the processing circuit117may opt to use the value of the cyclic redundancy check (CRC) in an auxiliary connection response (AUX_Connect_RSP) packet or in a specific Bluetooth advertising packet generated by the second member device130to be the second parameter P2. The processing circuit117may also transmit the second privileged pairing notice to the second member device130through the host-side communication circuit111in the operation314. Additionally, in the operation314, the processing circuit117may also transmit the second parameter P2or a second field indication to the second member device130through the host-side communication circuit111, wherein the second field indication is utilized for indicating a specific packet field whose content is to be utilized as the second parameter P2. In practice, the second parameter P2may be identical to the first parameter P1, or may be different from the first parameter P1. In this situation, the second communication circuit131of the second member device130may perform the operation316to receive the second privileged pairing notice transmitted from the Bluetooth host device110. In addition, the second communication circuit131may also receive the second parameter P2or a related second field indication transmitted from the Bluetooth host device110in the operation316, so that the second control circuit135is enabled to learn the second parameter P2decided by the Bluetooth host device110accordingly. In the operation318, the processing circuit117of the Bluetooth host device110may generate a third cypher key Key-3required for conducting subsequent Bluetooth data transmissions with the second member device130according to the second parameter P2. For example, the processing circuit117may execute a predetermined cypher key algorithm according to the second parameter P2and the device information of the Bluetooth host device110to generate the third cypher key Key-3. For another example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm according to the second parameter P2, the device information of the second member device130, and the device information of the Bluetooth host device110to generate the third cypher key Key-3. In the operation320, the second control circuit135of the second member device130may generate a fourth cypher key Key-4required for conducting subsequent Bluetooth data transmissions with the Bluetooth host device110according to the second parameter P2. In other words, the fourth cypher key Key-4generated by the second control circuit135and the third cypher key Key-3generated by the processing circuit117will correspond to each other. 
For example, the second control circuit135may execute the aforementioned predetermined cypher key algorithm according to the second parameter P2and the device information of the second member device130to generate the fourth cypher key Key-4. For another example, the second control circuit135may execute the aforementioned predetermined cypher key algorithm according to the second parameter P2, the device information of the second member device130, and the device information of the Bluetooth host device110to generate the fourth cypher key Key-4. In other words, after the second member device130is identified as a member device of the Bluetooth device set102by the Bluetooth host device110, the Bluetooth host device110and the second member device130may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. That is, the Bluetooth host device110can directly generate the third cypher key Key-3based on the second parameter P2decided by the Bluetooth host device110while the second member device130can directly generate the fourth cypher key Key-4based on the second parameter P2decided by the Bluetooth host device110. As a result, it can significantly reduce the required time for generating the third cypher key Key-3and the fourth cypher key Key-4. In the operation322, the processing circuit117of the Bluetooth host device110may use the third cypher key Key-3to conduct Bluetooth data transmissions with the second member device130through the host-side communication circuit111. In the operation324, the second control circuit135of the second member device130may use the fourth cypher key Key-4to conduct Bluetooth data transmissions with the Bluetooth host device110through the second communication circuit131. In practice, the Bluetooth host device110and other member devices in the Bluetooth device set102(e.g., the third member device140) may establish connections according to the aforementioned interaction between the Bluetooth host device110and the second member device130to respectively generate required cypher keys for conducting subsequent Bluetooth data transmissions between both parties. In the embodiments where both the Bluetooth host device110and the second member device130support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the second member device130, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110and the second member device130to thereby extend the serving time of the Bluetooth host device110and the second member device130, but also effectively improves the overall quality of the audio playback operations. In another embodiment, each of the other member devices in the Bluetooth device set102(e.g., the second member device130and the third member device140) may generate one or more target Bluetooth packets containing the aforementioned auto-pair request, the device information of respective member device, and the resolvable set identifiers corresponding to respective member device, and transmit the one or more target Bluetooth packets to the Bluetooth host device110according to the approach adopted by the first member device120in the operation202.
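As an illustration of the operation314through the operation320, the sketch below lifts the CRCInit field out of a CONNECT_IND-style Link Layer data block to serve as the second parameter P2, and then derives the two keys on both sides from the same inputs. The field offsets follow the usual Link Layer data layout, while the hash-based derivation, the label string, and the sample addresses are assumptions standing in for the unspecified predetermined cypher key algorithm.

```python
import hashlib

def p2_from_connect_ind(ll_data: bytes, field: str = "crc_init") -> bytes:
    """Lift one field out of a CONNECT_IND LL Data block to serve as P2.
    Offsets follow the usual Link Layer layout (AA, CRCInit, WinSize,
    WinOffset, Interval, Latency, Timeout, ChM, Hop/SCA); the packet
    header and the device addresses are assumed already stripped."""
    fields = {"access_address": (0, 4), "crc_init": (4, 3),
              "win_size": (7, 1), "win_offset": (8, 2), "interval": (10, 2),
              "latency": (12, 2), "timeout": (14, 2), "channel_map": (16, 5),
              "hop_sca": (21, 1)}
    start, length = fields[field]
    return ll_data[start:start + length]

def derive_key(p2: bytes, member_info: bytes, host_info: bytes) -> bytes:
    # Illustrative stand-in for the predetermined cypher key algorithm:
    # both sides hash the same agreed inputs, so no negotiation is needed.
    return hashlib.sha256(b"set-pairing" + p2 + member_info + host_info).digest()

ll_data = bytes(range(22))                       # placeholder LL Data block
p2 = p2_from_connect_ind(ll_data, "crc_init")    # the second parameter P2

host_info   = bytes.fromhex("c0ffee010203")      # hypothetical host address
member_info = bytes.fromhex("aabbccddeeff")      # hypothetical member address
key_3 = derive_key(p2, member_info, host_info)   # computed on the host side
key_4 = derive_key(p2, member_info, host_info)   # computed on the member side
assert key_3 == key_4    # Key-3 and Key-4 correspond without negotiation
```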
In other words, all member devices in the Bluetooth device set102may conduct the same operations in the operation202. In this situation, the Bluetooth host device110may identify a member device that transmits the auto-pair request first as the first privileged device, and then conduct the simplified pairing procedure with the first privileged device first. Afterwards, the Bluetooth host device110may identify other member devices as member devices of the Bluetooth device set102according to the device set identification information Set-ID transmitted from the first privileged device and the resolvable set identifiers transmitted from other member devices, and then conduct the simplified pairing procedure with other member devices. It can be appreciated from the foregoing descriptions ofFIG.2throughFIG.3that the Bluetooth host device110is enabled to determine whether the first member device120is a privileged device or not according to the auto-pair request transmitted from the first member device120. After the first member device120is identified as a privileged device by the Bluetooth host device110, the Bluetooth host device110and the first member device120can omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. That is, the Bluetooth host device110is enabled to generate the first cypher key Key-1by simply performing the aforementioned operation210and operation214while the first member device120is enabled to generate the second cypher key Key-2by simply performing the aforementioned operation212and operation216. As a result, it can significantly reduce the required time for generating the first cypher key Key-1and the second cypher key Key-2. On the other hand, after the second member device130is identified as a member device of the Bluetooth device set102by the Bluetooth host device110, the Bluetooth host device110and the second member device130may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. That is, the Bluetooth host device110is enabled to generate the third cypher key Key-3by simply performing the aforementioned operation314and operation318while the second member device130is enabled to generate the fourth cypher key Key-4by simply performing the aforementioned operation316and operation320. As a result, it can significantly reduce the required time for generating the third cypher key Key-3and the fourth cypher key Key-4. Apparently, the above method ofFIG.2throughFIG.3can effectively simplify the Bluetooth pairing procedure between the Bluetooth host device110and respective member device of the Bluetooth device set102, thereby significantly reducing the required time for completing the pairing procedure between the Bluetooth host device110and the Bluetooth device set102. According to the method described inFIG.2throughFIG.3, the Bluetooth host device110and respective member devices of the Bluetooth device set102do not need to use any display device. Therefore, the display device150may be omitted, and the hardware structure, the weight, and the volume of respective member devices of the Bluetooth device set102can be greatly simplified.
Additionally, in the embodiments where the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the member devices of the Bluetooth device set102, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110and the member devices of the Bluetooth device set102to thereby extend the serving time of the Bluetooth host device110and the member devices of the Bluetooth device set102, but also effectively improves the overall quality of the audio playback operations. Please refer toFIG.4andFIG.5, which collectively show a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a second embodiment of the present disclosure. As described previously, when the user wants to utilize the Bluetooth device set102to playback audio data transmitted from the Bluetooth host device110by adopting the BLE Audio technology, the Bluetooth host device110should be paired with respective member devices in the Bluetooth device set102in advance. In this situation, as described above, the processing circuit117may generate a Bluetooth inquiry request containing the device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110), and may utilize the host-side communication circuit111to transmit the Bluetooth inquiry request to other nearby Bluetooth devices, and then wait for responses from the member devices of the Bluetooth device set102. Alternatively, the processing circuit117may control the host-side communication circuit111to operate in the aforementioned predetermined receiving mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs. On the other hand, all member devices in the Bluetooth device set102may enter a predetermined transmitting mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs, or may operate in the predetermined transmitting mode after receiving the Bluetooth inquiry request generated by the Bluetooth host device110. The first member device120may perform the operation402ofFIG.4after entering the predetermined transmitting mode. In the operation402, the first control circuit125may generate one or more target Bluetooth packets, wherein the one or more target Bluetooth packets contain a resolvable set identifier RSI-1corresponding to the first member device120and a device information of the first member device120(e.g., a Bluetooth device address of the first member device120). In operations, the first control circuit125of the first member device120may generate a resolvable set identifier RSI-1corresponding to the first member device120in the operation402or at a certain time point before the operation402. For example, the first control circuit125may perform a predetermined target algorithm according to the device set identification information Set-ID of the Bluetooth device set102to generate a random address, and utilize the random address as a resolvable set identifier RSI-1corresponding to the first member device120.
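A minimal sketch of assembling such a target Bluetooth packet follows, assuming the customary length/type/value advertising-data structures; the type code used here for the resolvable set identifier is an assumption, while 0xFF is the generic manufacturer-specific-data type.

```python
def ad_structure(ad_type: int, value: bytes) -> bytes:
    # One advertising-data structure: length octet, type octet, value.
    return bytes([len(value) + 1, ad_type]) + value

AD_TYPE_RSI      = 0x2E  # assumed type code for the resolvable set identifier
AD_TYPE_MFG_DATA = 0xFF  # manufacturer-specific data

def build_target_packet(rsi: bytes, device_info: bytes) -> bytes:
    """Assemble one advertising payload carrying RSI-1 and the device
    information of the first member device (operation 402)."""
    return (ad_structure(AD_TYPE_RSI, rsi)
            + ad_structure(AD_TYPE_MFG_DATA, device_info))

payload = build_target_packet(bytes(6), bytes.fromhex("aabbccddeeff"))
print(payload.hex())
```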
The first control circuit125may insert the resolvable set identifier RSI-1and the device information of the first member device120into one or more specific fields of a single target Bluetooth packet, or may insert them into specific fields of multiple target Bluetooth packets in a distributed manner. In practice, the first control circuit125may also insert the device set identification information Set-ID of the Bluetooth device set102, and/or the device information of other member devices in the Bluetooth device set102(e.g., the second member device130or the third member device140) into the aforementioned one or more target Bluetooth packets. The type of the target Bluetooth packets referred to in the operation402may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here. In some embodiments, each of the other member devices in the Bluetooth device set102(e.g., the second member device130and the third member device140) may generate one or more target Bluetooth packets containing the device information of respective member device and the resolvable set identifiers corresponding to respective member device, and transmit the one or more target Bluetooth packets to the Bluetooth host device110according to the approach adopted by the first member device120in the operation402. Similarly, each of the other member devices in the Bluetooth device set102(e.g., the second member device130and the third member device140) may also insert the device set identification information Set-ID of the Bluetooth device set102, and/or the device information of other member devices of the Bluetooth device set102into the one or more target Bluetooth packets to be transmitted to the Bluetooth host device110. In other words, all member devices in the Bluetooth device set102may conduct the same operations in the operation402. In the operation404, the first control circuit125may utilize the first communication circuit121to transmit the aforementioned one or more target Bluetooth packets to the Bluetooth host device110. In the operation406, the host-side communication circuit111of the Bluetooth host device110may receive the one or more target Bluetooth packets. In the operation408, the processing circuit117of the Bluetooth host device110may parse the one or more target Bluetooth packets to acquire the resolvable set identifier RSI-1and the device information of the first member device120transmitted from the first member device120. Then, the processing circuit117may inspect the position of the resolvable set identifier RSI-1in the one or more target Bluetooth packets, to determine whether the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125matches a predetermined condition (e.g., whether it or they correspond to the brand, the vendor, the circuit model, and/or the firmware version of the Bluetooth host device110and/or the processing circuit117). For example, the processing circuit117may inspect whether the position of the resolvable set identifier RSI-1in the one or more target Bluetooth packets matches a predetermined rule or not.
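The predetermined rule itself is not spelled out in the disclosure. The sketch below therefore assumes one concrete possibility, namely that a device of the matching brand and firmware places its resolvable set identifier in the first advertising-data structure of the payload, and shows how the host could locate that position and test it against the rule.

```python
def rsi_position(payload: bytes, ad_type_rsi: int = 0x2E) -> int:
    """Return the index of the AD structure that carries the RSI,
    or -1 when absent; `payload` is a chain of AD structures."""
    index, offset = 0, 0
    while offset + 1 < len(payload):
        length, ad_type = payload[offset], payload[offset + 1]
        if length == 0:
            break                      # malformed structure: stop parsing
        if ad_type == ad_type_rsi:
            return index
        offset += 1 + length
        index += 1
    return -1

# Hypothetical predetermined rule: devices of the matching vendor and
# firmware place the RSI in the first AD structure of the payload.
EXPECTED_RSI_INDEX = 0

def is_privileged(payload: bytes) -> bool:
    return rsi_position(payload) == EXPECTED_RSI_INDEX

privileged = bytes([7, 0x2E]) + bytes(6) + bytes([2, 0xFF, 0x01])
ordinary   = bytes([2, 0xFF, 0x01]) + bytes([7, 0x2E]) + bytes(6)
assert is_privileged(privileged) and not is_privileged(ordinary)
```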
In one embodiment, if the position of the resolvable set identifier RSI-1in the one or more target Bluetooth packets matches the predetermined rule, then the processing circuit117may determine that the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125matches the predetermined condition. In this situation, the processing circuit117may identify the first member device120as a first privileged device according to the position of the resolvable set identifier RSI-1, and then perform the operation410ofFIG.4. In this embodiment, when the first member device120is identified as a privileged device by the processing circuit117, it means that when the Bluetooth host device110and the first member device120conduct a Bluetooth pairing procedure, the Bluetooth host device110and the first member device120can skip many traditional key parameter negotiation steps, and may directly adopt a pre-defined simplified method to generate the cypher keys. The operations of this portion are substantially the same as those in the operation210through the operation216described previously. On the contrary, if the position of the resolvable set identifier RSI-1in the one or more target Bluetooth packets does not match the predetermined rule, then the processing circuit117may determine that the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125does not match the predetermined condition. In this situation, the processing circuit117may identify the first member device120as an ordinary Bluetooth device, and then adopt various existing approaches to conduct Bluetooth pairing with the first member device120so as to generate related cypher keys. In the operation410, the processing circuit117may generate a corresponding candidate device list according to messages transmitted from multiple nearby Bluetooth devices (e.g., responses to the Bluetooth inquiry request sent by the Bluetooth host device110), and control the display device150to display the candidate device list. The processing circuit117may also conduct filtering on the device items to be displayed in the candidate device list in the operation410, and control the display device150to display a single device item for representing the entire Bluetooth device set102in the candidate device list, without simultaneously displaying a plurality of device items for respectively representing a plurality of member devices of the Bluetooth device set102in the candidate device list, so as to simplify the complexity of the user's manipulations during the Bluetooth pairing procedure. As described previously, all member devices in the Bluetooth device set102may conduct the same operations in the operation402, that is, transmitting one or more target Bluetooth packets containing the device set identification information Set-ID of the Bluetooth device set102, their own device information, their own resolvable set identifier, and the device information of other member devices to the Bluetooth host device110. In the aforementioned operation410, the processing circuit117may determine which member devices belong to the same Bluetooth device set102as the first member device120according to the contents of the target Bluetooth packets transmitted from different member devices.
For example, the processing circuit117may inspect the resolvable set identifier RSI-2provided by the second member device130according to the device set identification information Set-ID transmitted from the first member device120to determine whether the second member device130belongs to the Bluetooth device set102or not. In this embodiment, the processing circuit117may inspect whether the resolvable set identifier RSI-2is a random address generated based on the device set identification information Set-ID. If the processing circuit117determines that the resolvable set identifier RSI-2is a random address generated based on the device set identification information Set-ID, then the processing circuit117may determine that both the first member device120and the second member device130belong to the Bluetooth device set102. For another example, the processing circuit117may compare the device information of the second member device130provided by the first member device120with the device information of the second member device130provided by the second member device130itself, so as to determine whether the second member device130belongs to the Bluetooth device set102or not. In this embodiment, if the device information of the second member device130provided by the first member device120is identical to the device information of the second member device130provided by the second member device130itself, then the processing circuit117may determine that both the first member device120and the second member device130belong to the Bluetooth device set102. The user can know which Bluetooth devices can be paired with the Bluetooth host device110from the candidate device list displayed on the display device150. If the processing circuit117does not conduct filtering on the device items to be displayed in the candidate device list in the operation410, multiple device items respectively representing multiple member devices of the Bluetooth device set102may be shown in the candidate device list. Such a Bluetooth pairing method is likely to be too complicated (because the user has to select multiple member devices to be paired with the Bluetooth host device110one by one), and even makes it difficult for the user to find the correct pairing object. From another aspect, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation410can simplify the complexity of user's manipulation during the Bluetooth pairing procedure, and can reduce the possibility of user's erroneous manipulation. The user may manipulate the input circuit113to select the Bluetooth device set102as the object to be paired with the Bluetooth host device110. In this situation, the input circuit113may perform the operation412to receive a selection command issued by the user, and transmit the selection command to the processing circuit117. Then, the operations of the Bluetooth host device110in the following operation210and operation214ofFIG.4are the same as in the corresponding operations inFIG.2, while the operations of the first member device120in the following operation212and operation216ofFIG.4are the same as in the corresponding operations inFIG.2. For the sake of brevity, the descriptions will not be repeated here.
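The membership inspection described above can be pictured as follows, reusing the illustrative hash construction from the earlier resolvable set identifier sketch; the construction remains an assumption rather than the claimed algorithm.

```python
import hashlib
import os

def _hash24(set_id: bytes, prand: bytes) -> bytes:
    # Same illustrative hash as in the RSI generation sketch above.
    return hashlib.sha256(set_id + prand).digest()[:3]

def rsi_matches_set(set_id: bytes, rsi: bytes) -> bool:
    """Decide whether a 6-byte RSI was derived from the given Set-ID."""
    prand, hash_part = rsi[:3], rsi[3:]
    return _hash24(set_id, prand) == hash_part

SET_ID = bytes.fromhex("00112233445566778899aabbccddeeff")
prand = os.urandom(3)
rsi_2 = prand + _hash24(SET_ID, prand)        # an RSI derived from Set-ID

assert rsi_matches_set(SET_ID, rsi_2)         # identified as a set member
assert not rsi_matches_set(bytes(16), rsi_2)  # wrong Set-ID: rejected
```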
In other words, after the first member device120is identified as the first privileged device by the Bluetooth host device110, the Bluetooth host device110and the first member device120can omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. That is, the Bluetooth host device110is enabled to generate the first cypher key Key-1by simply performing the aforementioned operation210and operation214while the first member device120is enabled to generate the second cypher key Key-2by simply performing the aforementioned operation212and operation216. As a result, it can significantly reduce the required time for generating the first cypher key Key-1and the second cypher key Key-2. As shown inFIG.5, the Bluetooth host device110may perform the operation218ofFIG.5and subsequent operations after generating the first cypher key Key-1, and the first member device120may perform the operation220ofFIG.5and subsequent operations after generating the second cypher key Key-2. Similarly, the second control circuit135of the second member device130may perform the operation306ofFIG.5at an appropriate time to generate a resolvable set identifier RSI-2corresponding to the second member device130. For example, the second control circuit135may perform the aforementioned target algorithm according to the device set identification information Set-ID of the Bluetooth device set102to generate a random address, and utilize the random address to be the resolvable set identifier RSI-2corresponding to the second member device130. In practice, the second control circuit135may perform the operation306ofFIG.5at any time point between the operation402ofFIG.4and the operation220ofFIG.5, or at a certain time point before the operation402ofFIG.4. The operations of the Bluetooth communication system100in respective operations ofFIG.5are the same as in the corresponding operations of the aforementionedFIG.2andFIG.3. For the sake of brevity, the descriptions will not be repeated here. In other words, in the embodiment ofFIG.5, after the second member device130is identified as a member device of the Bluetooth device set102by the Bluetooth host device110, the Bluetooth host device110and the second member device130may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. That is, the Bluetooth host device110is enabled to generate the third cypher key Key-3by simply performing the aforementioned operation314and operation318while the second member device130is enabled to generate the fourth cypher key Key-4by simply performing the aforementioned operation316and operation320. As a result, it can significantly reduce the required time for generating the third cypher key Key-3and the fourth cypher key Key-4. In practice, the Bluetooth host device110and other member devices in the Bluetooth device set102(e.g., the third member device140) may establish connections according to the aforementioned interaction between the Bluetooth host device110and the second member device130to respectively generate required cypher keys for conducting subsequent Bluetooth data transmissions between both parties.
Similarly, in the embodiments where the Bluetooth host device110, the first member device120, and the second member device130support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the first member device120and the second member device130, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110, the first member device120, and the second member device130to thereby extend the serving time of the Bluetooth host device110, the first member device120, and the second member device130, but also effectively improves the overall quality of the audio playback operations. In another embodiment, other member devices in the Bluetooth device set102(e.g., the second member device130and the third member device140) may generate one or more target Bluetooth packets containing the device information of respective member device and the resolvable set identifiers corresponding to respective member device, and transmit the one or more target Bluetooth packets to the Bluetooth host device110according to the approach adopted by the first member device120in the operation402ofFIG.4. In other words, all member devices in the Bluetooth device set102may conduct the same operations in the operation402. In this situation, the Bluetooth host device110may identify a member device that transmits the aforementioned target Bluetooth packets first as the first privileged device, and then conduct the simplified pairing procedure with the first privileged device first. Afterwards, the Bluetooth host device110may identify other member devices as member devices of the Bluetooth device set102according to the device set identification information Set-ID transmitted from the first privileged device and the resolvable set identifiers transmitted from other member devices, and then conduct the simplified pairing procedure with other member devices. It can be appreciated from the foregoing descriptions ofFIG.4throughFIG.5that the Bluetooth host device110is enabled to determine whether the first member device120is a privileged device or not according to the position of the resolvable set identifier RSI-1in the one or more target Bluetooth packets transmitted from the first member device120. After the first member device120is identified as a privileged device by the Bluetooth host device110, the Bluetooth host device110and the first member device120can omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. That is, the Bluetooth host device110is enabled to generate the first cypher key Key-1by simply performing the aforementioned operation210and operation214while the first member device120is enabled to generate the second cypher key Key-2by simply performing the aforementioned operation212and operation216. As a result, it can significantly reduce the required time for generating the first cypher key Key-1and the second cypher key Key-2.
On the other hand, after the second member device130is identified as a member device of the Bluetooth device set102by the Bluetooth host device110, the Bluetooth host device110and the second member device130may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. That is, the Bluetooth host device110is enabled to generate the third cypher key Key-3by simply performing the aforementioned operation314and operation318while the second member device130is enabled to generate the fourth cypher key Key-4by simply performing the aforementioned operation316and operation320. As a result, it can significantly reduce the required time for generating the third cypher key Key-3and the fourth cypher key Key-4. Apparently, the above method ofFIG.4throughFIG.5can also effectively simplify the Bluetooth pairing procedure between the Bluetooth host device110and respective member device of the Bluetooth device set102, thereby significantly reducing the required time for completing the pairing procedure between the Bluetooth host device110and the Bluetooth device set102. Furthermore, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation410can simplify the complexity of user's manipulation during the Bluetooth pairing procedure, and also reduce the possibility of user's erroneous manipulation. Additionally, in the embodiments where the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the member devices of the Bluetooth device set102, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110and the member devices of the Bluetooth device set102to thereby extend the serving time of the Bluetooth host device110and the member devices of the Bluetooth device set102, but also effectively improves the overall quality of the audio playback operations. Please refer toFIG.6, which shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a third embodiment of the present disclosure. As described previously, when the Bluetooth host device110wants to conduct pairing with respective member devices of the Bluetooth device set102, the processing circuit117may generate a Bluetooth inquiry request containing the device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110), and may utilize the host-side communication circuit111to transmit the Bluetooth inquiry request to other nearby Bluetooth devices. Alternatively, the processing circuit117may control the host-side communication circuit111to operate in the aforementioned predetermined receiving mode. On the other hand, all member devices in the Bluetooth device set102may enter a predetermined transmitting mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs, or may operate in the predetermined transmitting mode after receiving the Bluetooth inquiry request generated by the Bluetooth host device110.
The first member device120may perform the operation602ofFIG.6after entering the predetermined transmitting mode. In the operation602, the first control circuit125may utilize the first communication circuit121to transmit a device information of the first member device120(e.g., a Bluetooth device address of the first member device120), and a device information of the second member device130(e.g., a Bluetooth device address of the second member device130) to the Bluetooth host device110. For example, the first control circuit125may generate one or more target Bluetooth packets containing the device information of the first member device120, and the device information of the second member device130, and utilize the first communication circuit121to transmit the one or more target Bluetooth packets to the Bluetooth host device110. The type of the target Bluetooth packets referred to in the operation602may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here. In the operation604, the host-side communication circuit111of the Bluetooth host device110may receive the device information of the first member device120and the device information of the second member device130transmitted from the first member device120. In practice, other member devices in the Bluetooth device set102(e.g., the second member device130and the third member device140) may transmit their own device information and the device information of other member devices to the Bluetooth host device110according to the approach adopted by the first member device120in the operation602. In other words, all member devices in the Bluetooth device set102may conduct the same operations in the operation602. In this situation, the host-side communication circuit111may receive the device information of multiple member devices transmitted from different member devices in the operation604. In the operation606, the processing circuit117may generate a corresponding candidate device list according to messages transmitted from multiple nearby Bluetooth devices (e.g., responses to the Bluetooth inquiry request sent by the Bluetooth host device110), and control the display device150to display the candidate device list. The processing circuit117may also filter the device items to be displayed in the candidate device list in the operation606, and control the display device150to display a single device item for representing the entire Bluetooth device set102in the candidate device list, without simultaneously displaying a plurality of device items for respectively representing a plurality of member devices of the Bluetooth device set102in the candidate device list, so as to simplify the complexity of the user's manipulations during the Bluetooth pairing procedure. As described previously, all member devices in the Bluetooth device set102may conduct the same operations in the operation602, that is, transmitting their own device information and the device information of other member devices to the Bluetooth host device110. The processing circuit117may determine which member devices belong to the same Bluetooth device set102as the first member device120according to the device information of multiple member devices transmitted from different member devices in the operation604.
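The filtering conducted in the operation606(and equally in the aforementioned operation410) may be pictured with the sketch below, while the membership comparison itself is exemplified in the next paragraph; the Discovered record, the membership predicate, and the set label are illustrative scaffolding only.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Discovered:
    name: str
    rsi: Optional[bytes] = None  # None for ordinary Bluetooth devices

def filter_candidates(devices: List[Discovered],
                      is_set_member: Callable[[bytes], bool],
                      set_label: str = "Bluetooth device set") -> List[str]:
    """Collapse every device whose RSI resolves against the Set-ID into
    a single candidate item; ordinary devices are listed unchanged."""
    def in_set(d: Discovered) -> bool:
        return d.rsi is not None and is_set_member(d.rsi)
    shown = [d.name for d in devices if not in_set(d)]
    if any(in_set(d) for d in devices):
        shown.append(set_label)      # one item represents the whole set
    return shown

devices = [Discovered("Keyboard"),
           Discovered("Left earbud", rsi=b"\x01" * 6),
           Discovered("Right earbud", rsi=b"\x01" * 6)]
print(filter_candidates(devices, lambda rsi: rsi == b"\x01" * 6))
# -> ['Keyboard', 'Bluetooth device set']
```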
For example, the processing circuit117may compare the device information of the second member device130provided by the first member device120with the device information of the second member device130provided by the second member device130itself, to determine whether the second member device130belongs to the Bluetooth device set102or not. In this embodiment, if the device information of the second member device130provided by the first member device120is identical to the device information of the second member device130provided by the second member device130itself, then the processing circuit117may determine that both the first member device120and the second member device130belong to the Bluetooth device set102. The user can know which Bluetooth devices can be paired with the Bluetooth host device110from the candidate device list displayed on the display device150. If the processing circuit117does not conduct filtering on the device items to be displayed in the candidate device list in the operation606, multiple device items respectively representing multiple member devices of the Bluetooth device set102may be shown in the candidate device list. Such a Bluetooth pairing method is likely to be too complicated (because the user has to select multiple member devices to be paired with the Bluetooth host device110one by one), and even makes it difficult for the user to find the correct pairing object. From another aspect, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation606can simplify the complexity of user's manipulation during the Bluetooth pairing procedure, and can reduce the possibility of user's erroneous manipulation. The user may manipulate the input circuit113to select the Bluetooth device set102as the object to be paired with the Bluetooth host device110. In this situation, the input circuit113may perform the operation608to receive a selection command issued by the user, and transmit the selection command to the processing circuit117. In the operation610, the processing circuit117may establish a connection with the first member device120through the host-side communication circuit111according to the selection command, and conduct a pairing procedure to generate a first cypher key Key-1. In this situation, the first control circuit125may perform the operation612to establish a connection with the Bluetooth host device110through the first communication circuit121, and conduct the pairing procedure to generate a second cypher key Key-2corresponding to the first cypher key Key-1. Please note that in the aforementioned operation610and operation612, the Bluetooth host device110and the first member device120may adopt various appropriate approaches to conduct the Bluetooth pairing procedure, and are not restricted to follow the pairing approach adopted in the aforementioned embodiment ofFIG.2andFIG.4. In addition, the Bluetooth host device110and the first member device120may adopt various appropriate approaches to negotiate the parameters of key generation to respectively generate the first cypher key Key-1and the second cypher key Key-2, and are not restricted to follow the key generation mechanism adopted in the aforementioned embodiment ofFIG.2andFIG.4. As shown inFIG.6, the processing circuit117of this embodiment further performs the operation614after generating the first cypher key Key-1to create a correlation between the second member device130and the first cypher key Key-1.
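The correlation created in the operation614amounts to simple bookkeeping on the host side, sketched below under the assumption of a symmetric pairing in which Key-1and Key-2name one shared secret; the forwarding of Key-2to the peer in the operation616is described next, and all addresses here are placeholders.

```python
# A symmetric pairing is assumed, so Key-1 (host side) and Key-2
# (member side) name the same shared secret.
key_1 = key_2 = bytes.fromhex("11" * 32)

# Operation 614 on the host: correlate further set members with Key-1,
# so the host can later talk to them without any pairing of their own.
key_table = {}                                       # member address -> key
second_member_addr = bytes.fromhex("aabbccddeeff")   # hypothetical address
key_table[second_member_addr] = key_1

# The peer that later receives Key-2 (operation 616) will match this entry.
assert key_table[second_member_addr] == key_2
```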
On the other hand, the first control circuit125may further perform the operation616after generating the second cypher key Key-2to utilize the first communication circuit121to transmit a device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110) and the second cypher key Key-2to the second member device130. In this situation, the second communication circuit131of the second member device130may perform the operation618to receive the second cypher key Key-2and the device information of the Bluetooth host device110transmitted from the first member device120. Then, the processing circuit117may perform the operation620to establish a connection with the second member device130through the host-side communication circuit111, and directly use the first cypher key Key-1to conduct Bluetooth data transmissions with the second member device130. The second control circuit135may perform the operation622to establish a connection with the Bluetooth host device110through the second communication circuit131according to the device information of the Bluetooth host device110, and directly use the second cypher key Key-2to conduct Bluetooth data transmissions with the Bluetooth host device110. In practice, the first control circuit125may adopt the aforementioned approach to transmit the aforementioned second cypher key Key-2to other member devices in the Bluetooth device set102(e.g., the third member device140), so that other member devices in the Bluetooth device set102can directly use the second cypher key Key-2generated by the first member device120to conduct Bluetooth data transmissions with the Bluetooth host device110. In the embodiments where the Bluetooth host device110, the first member device120, and the second member device130support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the first member device120and the second member device130, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110, the first member device120, and the second member device130to thereby extend the serving time of the Bluetooth host device110, the first member device120, and the second member device130, but also effectively improves the overall quality of the audio playback operations. In the above embodiment ofFIG.6, the first member device120transmits the device information of the first member device120and the device information of the second member device130to the Bluetooth host device110in the operation602. But this is merely an exemplary embodiment, rather than a restriction to practical implementations. In practice, the first member device120may instead transmit the device information of the second member device130to the Bluetooth host device110at a different time point. For example,FIG.7shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a fourth embodiment of the present disclosure. The method ofFIG.7is similar to the method of the aforementionedFIG.6, but in the embodiment ofFIG.7, the first member device120performs the operation702instead of the operation602.
In the operation702, the first control circuit125utilizes the first communication circuit121to transmit a device information of the first member device120to the Bluetooth host device110, but does not transmit the device information of other member devices (e.g., the second member device130) to the Bluetooth host device110. For example, the first control circuit125may generate one or more target Bluetooth packets containing the device information of the first member device120but not containing the device information of the second member device130, and utilize the first communication circuit121to transmit the one or more target Bluetooth packets to the Bluetooth host device110. The type of the target Bluetooth packets referred to in the operation702may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here. In the operation704, the host-side communication circuit111of the Bluetooth host device110may receive the device information of the first member device120transmitted from the first member device120. In the embodiment ofFIG.7, the first control circuit125performs the operation708to utilize the first communication circuit121to transmit the device information of the second member device130(e.g., a Bluetooth device address of the second member device130) to the Bluetooth host device110after generating the second cypher key Key-2. In this situation, the host-side communication circuit111may perform the operation710to receive the device information of the second member device130transmitted from the first member device120. Then, the processing circuit117may perform the operation614ofFIG.7to create a correlation between the second member device130and the first cypher key Key-1. The operations of the Bluetooth communication system100in the other operations ofFIG.7are the same as in the corresponding operations of the aforementioned embodiment ofFIG.6. Accordingly, the aforementioned descriptions regarding corresponding operations inFIG.6and related advantages are also applicable to the embodiment ofFIG.7. For the sake of brevity, the descriptions will not be repeated here. According to the foregoing descriptions ofFIG.6andFIG.7, it can be appreciated that only the Bluetooth host device110and the first member device120are required to respectively generate the corresponding first cypher key Key-1and second cypher key Key-2in this embodiment. Other member devices (e.g., the second member device130and the third member device140) would directly use the second cypher key Key-2generated by the first member device120to conduct subsequent Bluetooth data transmissions with the Bluetooth host device110, without generating related cypher keys by themselves. Accordingly, by adopting the method ofFIG.6orFIG.7, it can significantly reduce the time and computing load of other member devices of the Bluetooth device set102(e.g., the second member device130and the third member device140) required for negotiating the key parameters with the Bluetooth host device110, and also save their time and computing load required for generating the cypher keys.
Additionally, in the embodiments ofFIG.6andFIG.7, the Bluetooth host device110only needs to negotiate the parameters of key generation with a single member device in the Bluetooth device set102(i.e., the first member device120), and does not need to negotiate the parameters of key generation with other member devices of the Bluetooth device set102(e.g., the second member device130and the third member device140). In other words, by adopting the method ofFIG.6orFIG.7, it can also greatly reduce the time and computing load of the Bluetooth host device110required for negotiating the key parameters with other member devices and required for generating cypher keys. Apparently, the above method ofFIG.6andFIG.7can effectively simplify the Bluetooth pairing procedure between the Bluetooth host device110and respective member device of the Bluetooth device set102, thereby significantly reducing the required time for completing the pairing procedure between the Bluetooth host device110and the Bluetooth device set102. Furthermore, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation606can simplify the complexity of user's manipulation during the Bluetooth pairing procedure, and also reduce the possibility of user's erroneous manipulation. Additionally, in the embodiments where the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the member devices of the Bluetooth device set102, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110and the member devices of the Bluetooth device set102to thereby extend the serving time of the Bluetooth host device110and the member devices of the Bluetooth device set102, but also effectively improves the overall quality of the audio playback operations. Please refer toFIG.8, which shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a fifth embodiment of the present disclosure. As described previously, when the Bluetooth host device110wants to conduct pairing with respective member devices of the Bluetooth device set102, the processing circuit117may generate a Bluetooth inquiry request containing the device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110), and may utilize the host-side communication circuit111to transmit the Bluetooth inquiry request to other nearby Bluetooth devices. Similarly, the processing circuit117may control the host-side communication circuit111to operate in the aforementioned predetermined receiving mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs. On the other hand, all member devices in the Bluetooth device set102may enter a predetermined transmitting mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs, or may operate in the predetermined transmitting mode after receiving the Bluetooth inquiry request generated by the Bluetooth host device110. The first member device120may perform the operation602ofFIG.8after entering the predetermined transmitting mode.
The operations of the Bluetooth communication system100in the operation602through the operation612ofFIG.8are the same as in the corresponding operations of the aforementioned embodiment ofFIG.6. Accordingly, the foregoing descriptions regarding corresponding operations inFIG.6and related advantages are also applicable to the embodiment ofFIG.8. For the sake of brevity, the descriptions will not be repeated here. As shown inFIG.8, after generating the second cypher key Key-2in the operation612, the first control circuit125of this embodiment may perform the operation802. In the operation802, the first control circuit125may execute a predetermined cypher key algorithm to generate a third cypher key Key-3and a corresponding fourth cypher key Key-4. Then, the first control circuit125may perform the operation804and the operation806. In the operation804, the first control circuit125may utilize the first communication circuit121to transmit the third cypher key Key-3to the Bluetooth host device110. In the operation806, the first control circuit125may utilize the first communication circuit121to transmit a device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110) and the fourth cypher key Key-4to the second member device130. In this situation, the Bluetooth host device110may perform the operation808and the operation810, and the second member device130may perform the operation812. In the operation808, the host-side communication circuit111may receive the third cypher key Key-3transmitted from the first member device120. In the operation810, the processing circuit117may create a correlation between the second member device130and the third cypher key Key-3. In the operation812, the second communication circuit131of the second member device130may receive the fourth cypher key Key-4and the device information of the Bluetooth host device110transmitted from the first member device120. Then, the processing circuit117may perform the operation814to establish a connection with the second member device130through the host-side communication circuit111, and directly use the third cypher key Key-3generated by the first member device120to conduct Bluetooth data transmissions with the second member device130. The second control circuit135may perform the operation816to establish a connection with the Bluetooth host device110through the second communication circuit131according to the device information of the Bluetooth host device110, and directly use the fourth cypher key Key-4generated by the first member device120to conduct Bluetooth data transmissions with the Bluetooth host device110. In practice, the first control circuit125may generate required key pairs for conducting subsequent Bluetooth data transmissions for the Bluetooth host device110and other member devices by adopting the same approach described above. In the embodiments where the Bluetooth host device110, the first member device120, and the second member device130support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the first member device120and the second member device130, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data.
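How the first member device120might mint a corresponding key pair single-handedly in the operation802can be sketched as follows. Since the disclosure does not define the correspondence between the two keys, a symmetric scheme in which Key-3and Key-4come out equal is assumed, and the derivation itself is illustrative.

```python
import hashlib
import os

def make_key_pair():
    """Operation 802 on the first member device: derive a corresponding
    key pair from local randomness alone, with no parameter negotiation.
    A symmetric scheme is assumed, so Key-3 and Key-4 come out equal."""
    secret = os.urandom(16)
    key = hashlib.sha256(b"set-data-key" + secret).digest()
    return key, key          # (Key-3 for the host, Key-4 for the member)

key_3, key_4 = make_key_pair()
# Operation 804: transmit key_3 to the Bluetooth host device.
# Operation 806: transmit key_4 plus the host's address to the second member.
assert key_3 == key_4
```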
As a result, it not only reduces the power consumption of the Bluetooth host device110, the first member device120, and the second member device130to thereby extend the serving time of the Bluetooth host device110, the first member device120, and the second member device130, but also effectively improves the overall quality of the audio playback operations. In the above embodiment ofFIG.8, the first member device120transmits the device information of the first member device120and the device information of the second member device130to the Bluetooth host device110in the operation602. But this is merely an exemplary embodiment, rather than a restriction to practical implementations. In practice, the first member device120may instead transmit the device information of the second member device130to the Bluetooth host device110at a different time point. For example,FIG.9shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a sixth embodiment of the present disclosure. The method ofFIG.9is similar to the method of the aforementionedFIG.8, but in the embodiment ofFIG.9, the first member device120performs the operation702instead of the operation602. As described previously, in the operation702, the first control circuit125utilizes the first communication circuit121to transmit a device information of the first member device120to the Bluetooth host device110, but does not transmit the device information of other member devices (e.g., the second member device130) to the Bluetooth host device110. For example, the first control circuit125may generate one or more target Bluetooth packets containing the device information of the first member device120but not containing the device information of the second member device130, and utilize the first communication circuit121to transmit the one or more target Bluetooth packets to the Bluetooth host device110. The type of the target Bluetooth packets referred to in the operation702may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here. In the operation704, the host-side communication circuit111of the Bluetooth host device110may receive the device information of the first member device120transmitted from the first member device120. In the embodiment ofFIG.9, the first control circuit125performs the operation904to utilize the first communication circuit121to transmit the device information of the second member device130(e.g., a Bluetooth device address of the second member device130) and the third cypher key Key-3to the Bluetooth host device110after generating the second cypher key Key-2in the operation612. In this situation, the host-side communication circuit111may perform the operation908to receive the device information of the second member device130and the third cypher key Key-3transmitted from the first member device120. Then, the processing circuit117may perform the operation810ofFIG.9to create a correlation between the second member device130and the third cypher key Key-3. Afterwards, the processing circuit117may perform the operation814ofFIG.9to establish a connection with the second member device130through the host-side communication circuit111, and directly use the third cypher key Key-3generated by the first member device120to conduct Bluetooth data transmissions with the second member device130.
The second control circuit135may perform the operation816ofFIG.9to establish a connection with the Bluetooth host device110through the second communication circuit131according to the device information of the Bluetooth host device110, and directly use the fourth cypher key Key-4generated by the first member device120to conduct Bluetooth data transmissions with the Bluetooth host device110. The operations of the Bluetooth communication system100in other operations ofFIG.9are the same as in the corresponding operations of the aforementioned embodiments ofFIG.6,FIG.7, orFIG.8. Accordingly, the aforementioned descriptions regarding corresponding operations inFIG.6,FIG.7,FIG.8, and related advantages are also applicable to the embodiment ofFIG.9. For the sake of brevity, the descriptions will not be repeated here. According to the foregoing descriptions ofFIG.8andFIG.9, it can be appreciated that only the Bluetooth host device110and the first member device120are required to respectively generate the corresponding first cypher key Key-1and second cypher key Key-2in this embodiment. However, the required cypher keys for conducting subsequent Bluetooth data transmissions between the Bluetooth host device110and other member devices in the Bluetooth device set102(e.g., the second member device130and the third member device140) are generated by the first member device120alone. Accordingly, by adopting the method ofFIG.8orFIG.9, it can significantly reduce the time and computing load of other member devices of the Bluetooth device set102(e.g., the second member device130and the third member device140) required for negotiating the key parameters with the Bluetooth host device110, and also save their time and computing load required for generating the cypher keys. Additionally, in the embodiments ofFIG.8andFIG.9, the Bluetooth host device110only needs to negotiate the parameters of key generation with a single member device in the Bluetooth device set102(i.e., the first member device120), and does not need to negotiate the parameters of key generation with other member devices of the Bluetooth device set102(e.g., the second member device130and the third member device140). In other words, by adopting the method ofFIG.8orFIG.9, it can also greatly reduce the time and computing load of the Bluetooth host device110required for negotiating the key parameters with other member devices and required for generating cypher keys. Apparently, the above methods ofFIG.8andFIG.9can effectively simplify the Bluetooth pairing procedure between the Bluetooth host device110and the respective member devices of the Bluetooth device set102, thereby significantly reducing the required time for completing the pairing procedure between the Bluetooth host device110and the Bluetooth device set102. Furthermore, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation606can reduce the complexity of the user's manipulation during the Bluetooth pairing procedure, and also reduce the possibility of erroneous manipulation by the user. Additionally, in the embodiments where the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the member devices of the Bluetooth device set102, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data.
As a result, it not only reduces the power consumption of the Bluetooth host device110and the member devices of the Bluetooth device set102to thereby extend the serving time of the Bluetooth host device110and the member devices of the Bluetooth device set102, but also effectively improves the overall quality of the audio playback operations. Please refer toFIG.10, which shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a seventh embodiment of the present disclosure. As described previously, when the Bluetooth host device110wants to conduct pairing with respective member devices of the Bluetooth device set102, the processing circuit117may generate a Bluetooth inquiry request containing the device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110), and may utilize the host-side communication circuit111to transmit the Bluetooth inquiry request to other nearby Bluetooth devices. Similarly, the processing circuit117may control the host-side communication circuit111to operate in the aforementioned predetermined receiving mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs. On the other hand, all member devices in the Bluetooth device set102may enter a predetermined transmitting mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs, or may operate in the predetermined transmitting mode after receiving the Bluetooth inquiry request generated by the Bluetooth host device110. The first member device120may perform the operation602ofFIG.10after entering the predetermined transmitting mode. The operations of the Bluetooth communication system100in the operation602through the operation608ofFIG.10are the same as in the corresponding operations of the aforementioned embodiment ofFIG.6. Accordingly, the foregoing descriptions regarding corresponding operations inFIG.6and related advantages are also applicable to the embodiment ofFIG.10. For the sake of brevity, the descriptions will not be repeated here. As shown inFIG.10, the Bluetooth host device110of this embodiment may perform the operation1010after receiving a selection command issued by the user in the operation608. In the operation1010, the processing circuit117may establish a connection with the first member device120through the host-side communication circuit111according to the selection command, and transmit a device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110) to the first member device120. In this situation, the first communication circuit121may perform the operation1012to receive the device information of the Bluetooth host device110, and may establish a connection with the Bluetooth host device110under control of the first control circuit125. In addition, the first control circuit125further generates an indication value required for conducting the Bluetooth pairing between the Bluetooth host device110and the first member device120in the operation1012. In one embodiment, the aforementioned indication value is a predetermined value, a random value, a predetermined address, a random address, a predetermined string, a random string, a predetermined token, a random token, or the like for use in a predetermined cypher key algorithm.
In another embodiment, the aforementioned indication value is an algorithm identifier corresponding to a predetermined cypher key algorithm. After generating the indication value, the first member device120may perform the operation1014, the operation1016, and the operation1018. In the operation1014, the first control circuit125may generate a second cypher key Key-2according to the indication value and a device information of the first member device120(e.g., a Bluetooth device address of the first member device120). For example, the first control circuit125may execute the aforementioned predetermined cypher key algorithm to generate the second cypher key Key-2according to the indication value and the device information of the first member device120. For another example, the first control circuit125may execute the aforementioned predetermined cypher key algorithm to generate the second cypher key Key-2according to the indication value, the device information of the first member device120, and the device information of the Bluetooth host device110. For another example, the first control circuit125may select a predetermined cypher key algorithm from a plurality of pre-agreed key algorithms according to the indication value, and execute the selected predetermined cypher key algorithm to generate the second cypher key Key-2. In the operation1016, the first control circuit125may utilize the first communication circuit121to transmit the indication value to the Bluetooth host device110. In the operation1018, the first control circuit125may utilize the first communication circuit121to transmit the device information of the Bluetooth host device110and the indication value to the second member device130. In this situation, the Bluetooth host device110may perform the operation1020and the operation1022ofFIG.10, and the second member device130may perform the operation1024ofFIG.10. In the operation1020, the host-side communication circuit111may receive the indication value. In the operation1022, the processing circuit117may generate a first cypher key Key-1according to the indication value and the device information of the first member device120. For example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm to generate the first cypher key Key-1according to the indication value and the device information of the first member device120. For another example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm to generate the first cypher key Key-1according to the indication value, the device information of the first member device120, and the device information of the Bluetooth host device110. For another example, the processing circuit117may select a predetermined cypher key algorithm from a plurality of pre-agreed key algorithms according to the indication value, and execute the selected predetermined cypher key algorithm to generate the first cypher key Key-1. In the operation1024, the second communication circuit131may receive the device information of the Bluetooth host device110and the indication value transmitted from the first member device120. 
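As one concrete reading of operations 1014 and 1022, the sketch below derives the two corresponding keys by hashing the indication value together with the device information, so the host device and the first member device obtain matching keys without negotiating any further parameters. The SHA-256 construction, the 128-bit truncation, and the example token and addresses are illustrative assumptions, not the disclosed algorithm.

```python
import hashlib

def derive_key(indication_value: bytes, member_info: bytes,
               host_info: bytes = b"") -> bytes:
    """Hypothetical 'predetermined cypher key algorithm': hash the
    indication value with the device information into a 128-bit key."""
    return hashlib.sha256(indication_value + member_info + host_info).digest()[:16]

indication_value = bytes.fromhex("c0ffee00c0ffee00")  # e.g., a random token
member_address = bytes.fromhex("112233445566")        # first member device
host_address = bytes.fromhex("aabbccddeeff")          # Bluetooth host device

# Operation 1014 (first member device) and operation 1022 (host device)
# run the same derivation on the same inputs, so the keys correspond.
key_2 = derive_key(indication_value, member_address, host_address)
key_1 = derive_key(indication_value, member_address, host_address)
assert key_1 == key_2  # no traditional key parameter negotiation needed
```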
In the operation1026, the processing circuit117may establish a connection with the second member device130through the host-side communication circuit111according to a device information of the second member device130(e.g., a Bluetooth device address of the second member device130) transmitted from the first member device120in the operation602, and generate a third cypher key Key-3according to the indication value and the device information of the second member device130. For example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm to generate the third cypher key Key-3according to the indication value and the device information of the second member device130. For another example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm to generate the third cypher key Key-3according to the indication value, the device information of the second member device130, and the device information of the Bluetooth host device110. For another example, the processing circuit117may select a predetermined cypher key algorithm from a plurality of pre-agreed key algorithms according to the indication value, and execute the selected predetermined cypher key algorithm to generate the third cypher key Key-3. In this situation, the second member device130may perform the operation1028. In the operation1028, the second control circuit135may establish a connection with the Bluetooth host device110through the second communication circuit131, and generate a fourth cypher key Key-4corresponding to the third cypher key Key-3according to the indication value and the device information of the second member device130. For example, the second control circuit135may execute the aforementioned predetermined cypher key algorithm to generate the fourth cypher key Key-4according to the indication value and the device information of the second member device130. For another example, the second control circuit135may execute the aforementioned predetermined cypher key algorithm to generate the fourth cypher key Key-4according to the indication value, the device information of the second member device130, and the device information of the Bluetooth host device110. For another example, the second control circuit135may select a predetermined cypher key algorithm from a plurality of pre-agreed key algorithms according to the indication value, and execute the selected predetermined cypher key algorithm to generate the fourth cypher key Key-4. In other words, after the indication value is generated by the first member device120, the Bluetooth host device110and the first member device120may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. Similarly, the Bluetooth host device110and the second member device130may also omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. As a result, it can significantly reduce the required time for generating the first cypher key Key-1, the second cypher key Key-2, the third cypher key Key-3, and the fourth cypher key Key-4. In the operation1030, the processing circuit117of the Bluetooth host device110may use the third cypher key Key-3to conduct Bluetooth data transmissions with the second member device130through the host-side communication circuit111.
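The "select from a plurality of pre-agreed key algorithms" variant mentioned above can be pictured as a lookup table keyed by an algorithm identifier carried in the indication value. The sketch below is purely hypothetical: the identifiers, the two hash-based algorithms, and the example address are invented for illustration and are not specified by the disclosure.

```python
import hashlib

# Hypothetical table of pre-agreed key algorithms shared in advance by the
# host device and the member devices; the indication value selects one of
# them by identifier, so no algorithm negotiation happens at pairing time.
PRE_AGREED_ALGORITHMS = {
    0x01: lambda info: hashlib.sha256(info).digest()[:16],
    0x02: lambda info: hashlib.blake2s(info, digest_size=16).digest(),
}

def derive_key_by_id(algorithm_id: int, device_info: bytes) -> bytes:
    return PRE_AGREED_ALGORITHMS[algorithm_id](device_info)

second_member_address = bytes.fromhex("665544332211")  # illustrative address

# Operations 1026 and 1028: host and second member device pick the same
# algorithm from the indication value, so Key-3 and Key-4 correspond.
key_3 = derive_key_by_id(0x02, second_member_address)
key_4 = derive_key_by_id(0x02, second_member_address)
assert key_3 == key_4
```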
In the operation1032, the second control circuit135of the second member device130may use the fourth cypher key Key-4to conduct Bluetooth data transmissions with the Bluetooth host device110through the second communication circuit131. In practice, the Bluetooth host device110and other member devices in the Bluetooth device set102(e.g., the third member device140) may adopt the aforementioned approach to respectively generate the cypher keys required for conducting subsequent Bluetooth data transmission according to the indication value generated by the first member device120. Similarly, in the embodiments where the Bluetooth host device110, the first member device120, and the second member device130support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the first member device120and the second member device130, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110, the first member device120, and the second member device130to thereby extend the serving time of the Bluetooth host device110, the first member device120, and the second member device130, but also effectively improves the overall quality of the audio playback operations. In the above embodiment ofFIG.10, the first member device120transmits the device information of the first member device120and the device information of the second member device130to the Bluetooth host device110in the operation602. But this is merely an exemplary embodiment, rather than a restriction to practical implementations. In practice, the first member device120may instead transmit the device information of the second member device130to the Bluetooth host device110at a different time point. For example,FIG.11shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to an eighth embodiment of the present disclosure. The method ofFIG.11is similar to the method of the aforementionedFIG.10, but in the embodiment ofFIG.11, the first member device120performs the operation702instead of the operation602. As described previously, in the operation702, the first control circuit125utilizes the first communication circuit121to transmit a device information of the first member device120to the Bluetooth host device110, but does not transmit the device information of other member devices (e.g., the second member device130) to the Bluetooth host device110. For example, the first control circuit125may generate one or more target Bluetooth packets containing the device information of the first member device120but not containing the device information of the second member device130, and utilize the first communication circuit121to transmit the one or more target Bluetooth packets to the Bluetooth host device110. The type of the target Bluetooth packets referred to in the operation702may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here. In the operation704, the host-side communication circuit111of the Bluetooth host device110may receive the device information of the first member device120transmitted from the first member device120.
In the embodiment ofFIG.11, the first control circuit125performs the operation1118to utilize the first communication circuit121to transmit the device information of the second member device130(e.g., a Bluetooth device address of the second member device130) to the Bluetooth host device110after generating the second cypher key Key-2in the operation1014. In this situation, the host-side communication circuit111may perform the operation1120to receive the device information of the second member device130transmitted from the first member device120. The operations of the Bluetooth communication system100in other operations ofFIG.11are the same as in the corresponding operations of the aforementioned embodiments ofFIG.6,FIG.7, orFIG.10. Accordingly, the aforementioned descriptions regarding corresponding operations inFIG.6,FIG.7,FIG.10, and related advantages are also applicable to the embodiment ofFIG.11. For the sake of brevity, the descriptions will not be repeated here. According to the foregoing descriptions ofFIG.10andFIG.11, it can be appreciated that after the aforementioned indication value is generated by the first member device120, the Bluetooth host device110and the first member device120may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. That is, the Bluetooth host device110is enabled to generate the first cypher key Key-1by simply performing the aforementioned operation1020and operation1022, while the first member device120is enabled to generate the second cypher key Key-2by simply performing the aforementioned operation1014. As a result, it can significantly reduce the required time for generating the first cypher key Key-1and the second cypher key Key-2. Similarly, the Bluetooth host device110and the second member device130can also omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. That is, the Bluetooth host device110is enabled to generate the third cypher key Key-3by simply performing the aforementioned operation1026, while the second member device130is enabled to generate the fourth cypher key Key-4by simply performing the aforementioned operation1028. As a result, it can significantly reduce the required time for generating the third cypher key Key-3and the fourth cypher key Key-4. Apparently, the above methods ofFIG.10andFIG.11can effectively simplify the Bluetooth pairing procedure between the Bluetooth host device110and the respective member devices of the Bluetooth device set102, thereby significantly reducing the required time for completing the pairing procedure between the Bluetooth host device110and the Bluetooth device set102. Furthermore, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation606can reduce the complexity of the user's manipulation during the Bluetooth pairing procedure, and also reduce the possibility of erroneous manipulation by the user.
Additionally, in the embodiments where the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the member devices of the Bluetooth device set102, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110and the member devices of the Bluetooth device set102to thereby extend the serving time of the Bluetooth host device110and the member devices of the Bluetooth device set102, but also effectively improves the overall quality of the audio playback operations. Please refer toFIG.12, which shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a ninth embodiment of the present disclosure. As described previously, when the Bluetooth host device110wants to conduct pairing with respective member devices of the Bluetooth device set102, the processing circuit117may generate a Bluetooth inquiry request containing the device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110), and may utilize the host-side communication circuit111to transmit the Bluetooth inquiry request to other nearby Bluetooth devices. Similarly, the processing circuit117may control the host-side communication circuit111to operate in the aforementioned predetermined receiving mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs. On the other hand, all member devices in the Bluetooth device set102may enter a predetermined transmitting mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs, or may operate in the predetermined transmitting mode after receiving the Bluetooth inquiry request generated by the Bluetooth host device110. The first member device120may perform the operation602ofFIG.12after entering the predetermined transmitting mode. The operations of the Bluetooth communication system100in the operation602through the operation608ofFIG.12are the same as in the corresponding operations of the aforementioned embodiment ofFIG.6. Accordingly, the foregoing descriptions regarding corresponding operations inFIG.6and related advantages are also applicable to the embodiment ofFIG.12. For the sake of brevity, the descriptions will not be repeated here. As shown inFIG.12, the Bluetooth host device110of this embodiment may perform the operation1210after receiving a selection command issued by the user in the operation608. In the operation1210, the processing circuit117may establish a connection with the first member device120through the host-side communication circuit111according to the selection command, and may decide a first parameter P1. The processing circuit117may adopt the same approach as employed in the aforementioned operation210to decide the first parameter P1. Accordingly, the foregoing descriptions regarding how to decide the first parameter P1in the operation210are also applicable to the operation1210, and will not be repeated here for the sake of brevity. 
The processing circuit117may also transmit the first parameter P1or a first field indication to the first member device120through the host-side communication circuit111in the operation1210, wherein the first field indication is utilized for indicating a specific packet field whose content is to be utilized as the first parameter P1. In this situation, the first communication circuit121of the first member device120may perform the operation1212to establish a connection with the Bluetooth host device110, and to receive the first parameter P1or a related first field indication transmitted from the Bluetooth host device110, so that the first control circuit125is enabled to learn the first parameter P1decided by the Bluetooth host device110accordingly. As shown inFIG.12, the processing circuit117then may perform the operation214to generate a first cypher key Key-1required for conducting subsequent Bluetooth data transmissions with the first member device120according to the first parameter P1. For example, the processing circuit117may execute a predetermined cypher key algorithm to generate the first cypher key Key-1according to the first parameter P1and the device information of the Bluetooth host device110. For another example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm to generate the first cypher key Key-1according to the first parameter P1, the device information of the Bluetooth host device110, and the device information of the first member device120. On the other hand, the first control circuit125may perform the operation216to generate a second cypher key Key-2required for conducting subsequent Bluetooth data transmissions with the Bluetooth host device110according to the first parameter P1. In other words, the second cypher key Key-2generated by the first control circuit125and the first cypher key Key-1generated by the processing circuit117will correspond to each other. For example, the first control circuit125may execute the aforementioned predetermined cypher key algorithm to generate the second cypher key Key-2according to the first parameter P1and the device information of the first member device120. For another example, the first control circuit125may execute the aforementioned predetermined cypher key algorithm to generate the second cypher key Key-2according to the first parameter P1, the device information of the first member device120, and the device information of the Bluetooth host device110. In other words, after the first parameter P1is decided by the Bluetooth host device110, the Bluetooth host device110and the first member device120can omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. That is, the Bluetooth host device110can directly generate the first cypher key Key-1based on the first parameter P1decided by the Bluetooth host device110, and the first member device120can directly generate the second cypher key Key-2based on the first parameter P1decided by the Bluetooth host device110. As a result, it can significantly reduce the required time for generating the first cypher key Key-1and the second cypher key Key-2. 
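The "first field indication" of operation 1210 can be read as a pointer into packet content both sides already hold: rather than transmitting the parameter P1 itself, the host names a packet field and each side reads P1 out of that field. The sketch below illustrates this reading; the packet contents, the offset/length locator, and the function name are all invented for the example.

```python
# Hypothetical packet known to both sides (contents invented); the field
# indication names the byte range whose content serves as parameter P1.
shared_packet = bytes.fromhex("0201060a094d656d6265722d31aabbccdd")

first_field_indication = {"offset": 13, "length": 4}  # illustrative locator

def extract_parameter(packet: bytes, indication: dict) -> bytes:
    """Read the indicated field out of the packet; its content is P1."""
    start = indication["offset"]
    return packet[start:start + indication["length"]]

# Both the host device and the first member device recover the same P1
# from the same field, so P1 never has to travel as an explicit value.
p1 = extract_parameter(shared_packet, first_field_indication)
assert p1 == bytes.fromhex("aabbccdd")
```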
Afterwards, the processing circuit117may use the first cypher key Key-1to conduct Bluetooth data transmissions with the first member device120through the host-side communication circuit111, and the first control circuit125may use the second cypher key Key-2to conduct Bluetooth data transmissions with the Bluetooth host device110through the first communication circuit121. As shown inFIG.12, the first control circuit125may further perform the operation1216to utilize the first communication circuit121to transmit the device information of the Bluetooth host device110to the second member device130. In this situation, the second communication circuit131may perform the operation1218ofFIG.12to receive the device information of the Bluetooth host device110transmitted from the first member device120. As shown inFIG.12, the Bluetooth host device110of this embodiment may further perform the operation1220. In the operation1220, the processing circuit117may establish a connection with the second member device130through the host-side communication circuit111according to a device information of the second member device130(e.g., a Bluetooth device address of the second member device130) transmitted from the first member device120in the operation602, and may decide a second parameter P2. The processing circuit117may adopt the same approach as employed in the aforementioned operation314to decide the second parameter P2. Accordingly, the foregoing descriptions regarding how to decide the second parameter P2in the operation314are also applicable to the operation1220, and will not be repeated here for the sake of brevity. The processing circuit117may also transmit the second parameter P2or a second field indication to the second member device130through the host-side communication circuit111in the operation1220, wherein the second field indication is utilized for indicating a specific packet field whose content is to be utilized as the second parameter P2. In this situation, the second communication circuit131of the second member device130may perform the operation1222to establish a connection with the Bluetooth host device110, and to receive the second parameter P2or a related second field indication transmitted from the Bluetooth host device110, so that the second control circuit135is enabled to learn the second parameter P2decided by the Bluetooth host device110accordingly. As shown inFIG.12, the processing circuit117then may perform the operation318to generate a third cypher key Key-3required for conducting subsequent Bluetooth data transmissions with the second member device130according to the second parameter P2. For example, the processing circuit117may execute a predetermined cypher key algorithm according to the second parameter P2and the device information of the Bluetooth host device110to generate the third cypher key Key-3. For another example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm according to the second parameter P2, the device information of the second member device130, and the device information of the Bluetooth host device110to generate the third cypher key Key-3. On the other hand, the second control circuit135may perform the operation320to generate a fourth cypher key Key-4required for conducting subsequent Bluetooth data transmissions with the Bluetooth host device110according to the second parameter P2.
In other words, the fourth cypher key Key-4generated by the second control circuit135and the third cypher key Key-3generated by the processing circuit117will correspond to each other. For example, the second control circuit135may execute the aforementioned predetermined cypher key algorithm according to the second parameter P2and the device information of the second member device130to generate the fourth cypher key Key-4. For another example, the second control circuit135may execute the aforementioned predetermined cypher key algorithm according to the second parameter P2, the device information of the second member device130, and the device information of the Bluetooth host device110to generate the fourth cypher key Key-4. In other words, after the second parameter P2is decided by the Bluetooth host device110, the Bluetooth host device110and the second member device130may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. That is, the Bluetooth host device110can directly generate the third cypher key Key-3based on the second parameter P2decided by the Bluetooth host device110, while the second member device130can directly generate the fourth cypher key Key-4based on the second parameter P2decided by the Bluetooth host device110. As a result, it can significantly reduce the required time for generating the third cypher key Key-3and the fourth cypher key Key-4. Afterwards, the processing circuit117may perform the operation322ofFIG.12to use the third cypher key Key-3to conduct Bluetooth data transmissions with the second member device130through the host-side communication circuit111. On the other hand, the second control circuit135may perform the operation324ofFIG.12to use the fourth cypher key Key-4to conduct Bluetooth data transmissions with the Bluetooth host device110through the second communication circuit131. Similarly, in the embodiments where the Bluetooth host device110, the first member device120, and the second member device130support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the first member device120and the second member device130, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110, the first member device120, and the second member device130to thereby extend the serving time of the Bluetooth host device110, the first member device120, and the second member device130, but also effectively improves the overall quality of the audio playback operations. In the above embodiment ofFIG.12, the first member device120transmits the device information of the first member device120and the device information of the second member device130to the Bluetooth host device110in the operation602. But this is merely an exemplary embodiment, rather than a restriction to practical implementations. In practice, the first member device120may instead transmit the device information of the second member device130to the Bluetooth host device110at a different time point. For example,FIG.13shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a tenth embodiment of the present disclosure.
The method ofFIG.13is similar to the method of the aforementionedFIG.12, but in the embodiment ofFIG.13, the first member device120performs the operation702instead of the operation602. As described previously, in the operation702, the first control circuit125utilizes the first communication circuit121to transmit a device information of the first member device120to the Bluetooth host device110, but does not transmit the device information of other member devices (e.g., the second member device130) to the Bluetooth host device110. For example, the first control circuit125may generate one or more target Bluetooth packets containing the device information of the first member device120but not containing the device information of the second member device130, and utilize the first communication circuit121to transmit the one or more target Bluetooth packets to the Bluetooth host device110. The type of the target Bluetooth packets referred to in the operation702may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here. In the operation704, the host-side communication circuit111of the Bluetooth host device110may receive the device information of the first member device120transmitted from the first member device120. In the embodiment ofFIG.13, the first control circuit125performs the operation1118to utilize the first communication circuit121to transmit the device information of the second member device130(e.g., a Bluetooth device address of the second member device130) to the Bluetooth host device110after generating the second cypher key Key-2in the operation216. In this situation, the host-side communication circuit111may perform the operation1120to receive the device information of the second member device130transmitted from the first member device120. The operations of the Bluetooth communication system100in other operations ofFIG.13are the same as in the corresponding operations of the aforementioned embodiments ofFIG.2,FIG.3,FIG.6,FIG.7, orFIG.12. Accordingly, the aforementioned descriptions regarding corresponding operations inFIG.2,FIG.3,FIG.6,FIG.7,FIG.12, and related advantages are also applicable to the embodiment ofFIG.13. For the sake of brevity, the descriptions will not be repeated here. According to the foregoing descriptions ofFIG.12andFIG.13, it can be appreciated that after the first parameter P1is decided by the Bluetooth host device110, the Bluetooth host device110and the first member device120can omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. As a result, it can significantly reduce the required time for generating the first cypher key Key-1and the second cypher key Key-2. Similarly, after the second parameter P2is decided by the Bluetooth host device110, the Bluetooth host device110and the second member device130can also omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. As a result, it can significantly reduce the required time for generating the third cypher key Key-3and the fourth cypher key Key-4.
Apparently, the above methods ofFIG.12andFIG.13can effectively simplify the Bluetooth pairing procedure between the Bluetooth host device110and the respective member devices of the Bluetooth device set102, thereby significantly reducing the required time for completing the pairing procedure between the Bluetooth host device110and the Bluetooth device set102. Furthermore, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation606can reduce the complexity of the user's manipulation during the Bluetooth pairing procedure, and also reduce the possibility of erroneous manipulation by the user. Additionally, in the embodiments where the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the member devices of the Bluetooth device set102, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110and the member devices of the Bluetooth device set102to thereby extend the serving time of the Bluetooth host device110and the member devices of the Bluetooth device set102, but also effectively improves the overall quality of the audio playback operations. Please note that the aforementioned executing order of the operations in each flowchart is merely an exemplary embodiment, rather than a restriction to the practical implementations of the present disclosure. For example, inFIG.2, the operation214may be performed at the same time as the operation210, or may be performed before transmitting the first privileged pairing notice, the first parameter P1, and/or a first field indication related to the first parameter P1. For another example, inFIG.3andFIG.5, the operation306and the operation308may be performed before the operation302, or may be performed at the same time as the operation302. For another example, inFIG.3andFIG.5, the operation310and the operation304may be performed in a reverse order, or may be performed at the same time. For another example, inFIG.3andFIG.5, the operation318may be performed at the same time as the operation314, or may be performed before transmitting the second privileged pairing notice, the second parameter P2, and/or a second field indication related to the second parameter P2. For another example, inFIG.4, the operation408may be performed at the same time as the operation410or the operation412, or may be performed between the operation410and the operation412, or may be performed between the operation412and the operation210. For another example, inFIG.7, the operation708and the operation616may be performed in a reverse order, or may be performed at the same time. For another example, inFIG.8, the operation806and the operation804may be performed in a reverse order, or may be performed at the same time. For another example, inFIG.9, the operation806and the operation904may be performed in a reverse order, or may be performed at the same time. For another example, inFIG.10andFIG.11, the operation1018and the operation1016may be performed in a reverse order, or may be performed at the same time.
For another example, inFIG.11, the operation1118may be performed at the same time as the operation1016or the operation1018, or may be performed between the operation1016and the operation1018, or may be performed between the operation1014and the operation1016. For another example, inFIG.12andFIG.13, the operation1216and the operation216may be performed in a reverse order, or may be performed at the same time. For another example, inFIG.12andFIG.13, the operation214may be performed at the same time as the operation1210, or may be performed before transmitting the first parameter P1or a first field indication related to the first parameter P1. For another example, inFIG.13, the operation1118and the operation1216may be performed in a reverse order, or may be performed at the same time. For another example, inFIG.12andFIG.13, the operation318may be performed at the same time as the operation1220, or may be performed before transmitting the second parameter P2or a second field indication related to the second parameter P2. In addition, the quantity of functional blocks in the Bluetooth communication system100and the connections among the functional blocks may be modified based on actual circuit design requirements, and are not restricted to the case illustrated in the aforementioned embodiments. For example, in some embodiments where the Bluetooth device set102does not need to receive the user's voice or ambient sounds, the first voice receiving circuit164, the second voice receiving circuit174, and/or the third voice receiving circuit184may be omitted. For another example, in some embodiments where the Bluetooth device set102does not need to playback audio data, the first audio playback circuit162, the second audio playback circuit172, and/or the third audio playback circuit182may be omitted. For another example, the number of member devices in the Bluetooth device set102may be expanded to a larger number, or the Bluetooth device set102may be simplified to contain only the first member device120and the second member device130. Certain terms are used throughout the description and the claims to refer to particular components. One skilled in the art appreciates that a component may be referred to by different names. This disclosure does not intend to distinguish between components that differ in name but not in function. In the description and in the claims, the term “comprise” is used in an open-ended fashion, and thus should be interpreted to mean “include, but not limited to.” The term “couple” is intended to encompass any indirect or direct connection. Accordingly, if this disclosure mentions that a first device is coupled with a second device, it means that the first device may be directly or indirectly connected to the second device through electrical connections, wireless communications, optical communications, or other signal connections with/without other intermediate devices or connection means. The term “and/or” may comprise any and all combinations of one or more of the associated listed items. In addition, the singular forms “a,” “an,” and “the” herein are intended to comprise the plural forms as well, unless the context clearly indicates otherwise. Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein.
It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention indicated by the following claims.
149,731
11943609
DETAILED DESCRIPTION
Reference is made in detail to embodiments of the invention, which are illustrated in the accompanying drawings. The same reference numbers may be used throughout the drawings to refer to the same or like parts, components, or operations. FIG.1shows a simplified functional block diagram of a Bluetooth communication system100according to one embodiment of the present disclosure. The Bluetooth communication system100comprises a Bluetooth host device110and a Bluetooth device set102, wherein the Bluetooth device set102comprises a plurality of member devices. In practical applications, the plurality of member devices in the Bluetooth device set102may utilize various approaches complying with the Bluetooth communication standards to create a Bluetooth piconet, and may conduct various instruction transmission or data transmission through the Bluetooth piconet. Alternatively, the plurality of member devices in the Bluetooth device set102may collectively form a coordinated set complying with Bluetooth communication standards. In this embodiment, the Bluetooth host device110and all member devices in the Bluetooth device set102support the Bluetooth LE Audio (BLE Audio) technology (hereinafter referred to as BLE Audio technology) specified by the Bluetooth Core Specification Version 5.2 or newer versions. Accordingly, a user may connect the Bluetooth host device110with the Bluetooth device set102to utilize the Bluetooth device set102to conduct various audio playback operations. For example, two member devices in the Bluetooth device set102may cooperate with appropriate audio playback circuits to collectively form a pair of Bluetooth earphones or a 2.0 channel speaker set. For another example, three member devices in the Bluetooth device set102may cooperate with appropriate audio playback circuits to collectively form a 2.1 channel speaker set. For another example, six member devices in the Bluetooth device set102may cooperate with appropriate audio playback circuits to collectively form a 5.1 channel speaker set. For another example, eight member devices in the Bluetooth device set102may cooperate with appropriate audio playback circuits to collectively form a 7.1 channel speaker set. In order to reduce the complexity of the drawing, only three exemplary member devices are shown inFIG.1, which are a first member device120, a second member device130, and a third member device140. In the embodiment ofFIG.1, the first member device120is coupled with a first audio playback circuit162and a first voice receiving circuit164, the second member device130is coupled with a second audio playback circuit172and a second voice receiving circuit174, while the third member device140is coupled with a third audio playback circuit182and a third voice receiving circuit184. The user may connect the Bluetooth host device110with the first member device120, the second member device130, and the third member device140in the Bluetooth device set102, so as to utilize the above member devices to control related audio playback circuits to playback audio data transmitted from the Bluetooth host device110by adopting the BLE Audio technology. In the embodiment ofFIG.1, the Bluetooth host device110comprises a host-side communication circuit111, an input circuit113, a host-side cypher key generation circuit115, and a processing circuit117. The first member device120comprises a first communication circuit121, a first cypher key generation circuit123, a first control circuit125, and a first audio processing circuit127.
The second member device130comprises a second communication circuit131, a second cypher key generation circuit133, a second control circuit135, and a second audio processing circuit137. In the Bluetooth host device110, the host-side communication circuit111is arranged to operably receive and transmit various Bluetooth packets. The input circuit113is arranged to operably receive various commands issued by the user. The host-side cypher key generation circuit115is arranged to operably execute various selected or predetermined cypher key algorithms to generate cypher keys required by the Bluetooth host device110for conducting subsequent Bluetooth data transmissions with respective member devices in the Bluetooth device set102. The processing circuit117is coupled with the host-side communication circuit111, the input circuit113, and the host-side cypher key generation circuit115. The processing circuit117is arranged to operably generate various Bluetooth packets to be transmitted by the host-side communication circuit111, arranged to operably parse various Bluetooth packets received by the host-side communication circuit111to obtain related data or instructions, and further arranged to operably control operations of the host-side cypher key generation circuit115. The processing circuit117is further arranged to operably control operations of the Bluetooth host device110according to various operating commands issued by the user through the input circuit113. The term “Bluetooth packet” used throughout the description and the claims also encompasses various protocol data units (PDUs) specified by various Bluetooth communication standards. In some embodiments, the processing circuit117is further coupled with a display device150, and arranged to operably control operations of the display device150, so as to display related information or images to the user. In the first member device120, the first communication circuit121is arranged to operably receive and transmit various Bluetooth packets. The first cypher key generation circuit123is arranged to operably execute various selected or predetermined cypher key algorithms to generate cypher keys required by the first member device120for conducting subsequent Bluetooth data transmissions with the Bluetooth host device110. The first control circuit125is coupled with the first communication circuit121and the first cypher key generation circuit123. The first control circuit125is arranged to operably generate various Bluetooth packets to be transmitted by the first communication circuit121, and arranged to operably parse various Bluetooth packets received by the first communication circuit121to acquire related data or instructions, and further arranged to operably control the cypher key generating operations of the first cypher key generation circuit123. In addition, the first control circuit125is further arranged to operably adjust the clock signals employed by the first member device120, so as to synchronize a piconet clock utilized among the first member device120and other Bluetooth devices. The first audio processing circuit127is coupled with the first control circuit125, the first audio playback circuit162, and the first voice receiving circuit164.
The first audio processing circuit127is arranged to operably process the audio data transmitted from the Bluetooth host device110(e.g., to encode or decode the audio data, and/or to conduct format conversion on the audio data) according to the instructions of the first control circuit125, and arranged to operably control the first audio playback circuit162to playback contents of the audio data. The first audio processing circuit127is further arranged to operably encode the sounds received by the first voice receiving circuit164to generate related sound data. In the second member device130, the second communication circuit131is arranged to operably receive and transmit various Bluetooth packets. The second cypher key generation circuit133is arranged to operably execute various selected or predetermined cypher key algorithms to generate cypher keys required by the second member device130for conducting subsequent Bluetooth data transmissions with the Bluetooth host device110. The second control circuit135is coupled with the second communication circuit131and the second cypher key generation circuit133. The second control circuit135is arranged to operably generate various Bluetooth packets to be transmitted by the second communication circuit131, and arranged to operably parse various Bluetooth packets received by the second communication circuit131to acquire related data or instructions, and further arranged to operably control the cypher key generating operations of the second cypher key generation circuit133. In addition, the second control circuit135is further arranged to operably adjust the clock signals employed by the second member device130, so as to synchronize a piconet clock utilized among the second member device130and other Bluetooth devices. The second audio processing circuit137is coupled with the second control circuit135, the second audio playback circuit172, and the second voice receiving circuit174. The second audio processing circuit137is arranged to operably process the audio data transmitted from the Bluetooth host device110(e.g., to encode or decode the audio data, and/or to conduct format conversion on the audio data) according to the instructions of the second control circuit135, and arranged to operably control the second audio playback circuit172to playback contents of the audio data. The second audio processing circuit137is further arranged to operably encode the sounds received by the second voice receiving circuit174to generate related sound data. In some embodiments, the first control circuit125is further arranged to operably control the first member device120to act as a Bluetooth Central in a Bluetooth piconet, and to operably adjust the clock signals employed by the first member device120, so as to synchronize a piconet clock utilized among the first member device120and other Bluetooth devices. In this situation, the second control circuit135is further arranged to operably control the second member device130to act as a Bluetooth Peripheral in the Bluetooth piconet, and to operably adjust the clock signals employed by the second member device130, so as to synchronize the piconet clock utilized between the second member device130and the first member device120. In this embodiment, each of the Bluetooth host device110, the first member device120, and the second member device130supports the BLE Audio technology. 
In this situation, the processing circuit117of the Bluetooth host device110is further arranged to operably generate audio data complying with related specifications of the BLE Audio technology (hereinafter referred to as BLE audio data), and to operably utilize the host-side communication circuit111to transmit the BLE audio data to all member devices in the Bluetooth device set102. The first control circuit125of the first member device120is further arranged to operably utilize the first audio processing circuit127to process the BLE audio data transmitted from the Bluetooth host device110, and to operably instruct the first audio processing circuit127to control the first audio playback circuit162to playback the contents of the BLE audio data. Similarly, the second control circuit135of the second member device130is further arranged to operably utilize the second audio processing circuit137to process the BLE audio data transmitted from the Bluetooth host device110, and to operably instruct the second audio processing circuit137to control the second audio playback circuit172to playback the contents of the BLE audio data. In some embodiments, the host-side communication circuit111of the Bluetooth host device110is further arranged to operably adopt various wired network transmission technologies or various Radio Access Technologies (RATs) to receive the voice data transmitted from a remote device (not shown in figures) through various networks (e.g., the Internet, mobile communication networks, or various private networks). The processing circuit117is arranged to operably decode the voice data received by the host-side communication circuit111, and arranged to operably utilize the host-side communication circuit111to transmit decoded voice data to the first member device120and/or the second member device130in the Bluetooth device set102in the form of Bluetooth packets, and to operably instruct the first member device120and/or the second member device130to utilize the first audio playback circuit162and/or the second audio playback circuit172to playback the contents of the voice data. The aforementioned RAT may be various 2nd Generation (2G) mobile communication technologies, various 3rd Generation (3G) mobile communication technologies, various 4th Generation (4G) mobile communication technologies, various 5th Generation (5G) mobile communication technologies, various wireless networking technologies specified by the IEEE 802.11 series standards, various Internet-of-Things (IoT) communication technologies, various Narrow Band Internet of Things (NB-IoT) communication technologies, various Vehicle-to-Vehicle communication technologies, various Vehicle-to-Everything (V2X) communication technologies, various satellite communication technologies, various wireless communication technologies proposed by other standard setting organizations, or the like. On the other hand, the first member device120and/or the second member device130may utilize the first voice receiving circuit164and/or the second voice receiving circuit174to receive the user's voice, and may utilize the first audio processing circuit127and/or the second audio processing circuit137to generate related sound data. The first member device120and/or the second member device130may further utilize the first communication circuit121and/or the second communication circuit131to transmit the aforementioned sound data to the Bluetooth host device110.
In this situation, the processing circuit117of the Bluetooth host device110may further adopt the aforementioned wired network transmission technologies or RATs to transmit the sound data generated by the Bluetooth device set102to the remote device through various appropriate networks. As a result, the user is enabled to utilize the cooperation of the Bluetooth host device110and the Bluetooth device set102to realize voice communication with the remote device. In practice, the host-side communication circuit111in the Bluetooth host device110may be realized with appropriate wireless transceiver circuits supporting the Bluetooth communication protocol of the Bluetooth Core Specification Version 5.2 or a newer version. Alternatively, the host-side communication circuit111may be realized with various hybrid communication circuits supporting the above Bluetooth communication protocol and also supporting the aforementioned wired network transmission technologies or RATs. If needed, the host-side communication circuit111may be coupled with an additional antenna (not shown in figures). The input circuit113may be realized with various appropriate circuits capable of receiving the commands issued by the user, such as a keyboard, a mouse, a touch screen, a voice activated device, a gesture sensing device, or a hybrid of the above various devices. The host-side cypher key generation circuit115may be realized with various digital computing circuits, microprocessors, security modules, or Application Specific Integrated Circuits (ASICs) having cypher key computing capabilities. The processing circuit117may be realized with an appropriate packet demodulation circuit, a digital computing circuit, a microprocessor, an ASIC, a single processor module, a combination of multiple processor modules, a single computer system, a combination of multiple computer systems, a single server, a combination of multiple servers, or a cloud computing system having appropriate computing capabilities and capable of parsing and generating Bluetooth packets adopting the BLE Audio technology specified by the Bluetooth Core Specification Version 5.2 or newer versions. In practical applications, different functional blocks of the aforementioned Bluetooth host device110may be realized with separate circuits or may be integrated into a single IC chip or a single device. For example, the input circuit113and/or the host-side cypher key generation circuit115may be integrated into the processing circuit117. For another example, the input circuit113and the display device150may be integrated into a single touch screen. Alternatively, all functional blocks of the Bluetooth host device110may be integrated into a single IC chip, a mobile communication device (e.g., a cell phone), a wearable device, a tablet computer, a notebook computer, a desktop computer, an audio broadcast system, a voice guidance system, a voice broadcasting system, a vehicular communication device, a satellite communication device, a smart TV, a Bluetooth smart speaker, or the like. In practice, each of the first communication circuit121and the second communication circuit131in the Bluetooth device set102may be realized with an appropriate Bluetooth communication circuit capable of supporting the Bluetooth communication protocol of the Bluetooth Core Specification Version 5.2 or newer versions. If needed, the first communication circuit121and the second communication circuit131may be respectively coupled with additional antennas (not shown in figures).
Each of the first cypher key generation circuit123and the second cypher key generation circuit133may be realized with appropriate digital computing circuits, microprocessors, security modules, or ASICs having cypher key computing capabilities. Each of the first control circuit125and the second control circuit135may be realized with an appropriate packet demodulation circuit, a digital computing circuit, a microprocessor, a single processor module, a combination of multiple processor modules, or an ASIC having appropriate computing capabilities and capable of parsing and generating Bluetooth packets adopting the BLE Audio technology specified by the Bluetooth Core Specification Version 5.2 or newer versions. In some embodiments, the aforementioned first communication circuit121and second communication circuit131may be realized with appropriate Bluetooth transmission circuits that also support the Bluetooth communication protocol of earlier Bluetooth versions (e.g., Bluetooth 2.0, Bluetooth 3.0, Bluetooth 4.0, Bluetooth 4.2, or the like). In this situation, the aforementioned first control circuit125and second control circuit135should be designed to be able to parse and generate Bluetooth packets defined by the Bluetooth communication protocol of earlier Bluetooth versions. Each of the first audio processing circuit127and the second audio processing circuit137may be realized with digital computing circuits, microprocessors, ASICs, or digital-to-analog converters (DACs) capable of conducting various encoding/decoding processing and/or data format conversion on audio data. In some embodiments, the first audio processing circuit127and the second audio processing circuit137may be respectively integrated into the first control circuit125and the second control circuit135. Different functional blocks of the aforementioned first member device120may be realized with separate circuits or may be integrated into a single IC chip, a single wearable Bluetooth device, or a single Bluetooth speaker. Similarly, different functional blocks of the aforementioned second member device130may be realized with separate circuits or may be integrated into a single IC chip, a single wearable Bluetooth device, or a single Bluetooth speaker. In addition, each of the first audio playback circuit162and the second audio playback circuit172may be realized with various appropriate circuits capable of receiving and playing back audio data, such as various types of speakers. Each of the first voice receiving circuit164and the second voice receiving circuit174may be realized with various appropriate circuits capable of receiving sound and converting sound into corresponding audio signals, such as various types of microphones. In some embodiments, the first member device120, the first audio playback circuit162, and the first voice receiving circuit164may be integrated into a single device (e.g., a wearable Bluetooth device or a Bluetooth speaker). Similarly, the second member device130, the second audio playback circuit172, and the second voice receiving circuit174may be integrated into a single device (e.g., a wearable Bluetooth device or a Bluetooth speaker).
The main circuit structure and implementations of other member devices (e.g., the third member device140), other audio playback circuits (e.g., the third audio playback circuit182), and other voice receiving circuits (e.g., the third voice receiving circuit184) in the Bluetooth device set102, may be similar to the aforementioned corresponding member devices/corresponding circuits. But different additional circuit components may be provided in different member devices, different audio playback circuits, and/or different voice receiving circuits. The circuit structure of all member devices is not required to be exactly identical with each other. The circuit structure of all audio playback circuits is not required to be exactly identical with each other. The circuit structure of all voice receiving circuits is not required to be exactly identical with each other. When the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the user may utilize the Bluetooth communication system100to conduct various audio playback operations adopting the BLE Audio technology to reduce the power consumption of the Bluetooth communication system100while improving the overall audio playback quality. As described previously, when a traditional Bluetooth device set that supports the BLE Audio technology wants to connect with a traditional Bluetooth host device, the traditional Bluetooth host device has to negotiate with individual member devices in the traditional Bluetooth device set one by one regarding the relevant parameters for generating cypher keys. Therefore, it will take a lengthy time for the traditional Bluetooth host device to conduct Bluetooth pairing with the respective member devices in the traditional Bluetooth device set. In order to solve the problem of low pairing efficiency between the traditional Bluetooth host device and different member devices in the traditional Bluetooth device set, the Bluetooth host device110and the Bluetooth device set102in the disclosed Bluetooth communication system100will adopt different approaches to improve the generation efficiency of related cypher keys. The operations of the Bluetooth communication system100will be further described in the following by reference toFIG.2andFIG.3.FIG.2andFIG.3collectively show a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a first embodiment of the present disclosure. In the flowchart ofFIG.2andFIG.3, operations within a column under the name of a specific device are operations to be performed by the specific device. For example, operations within a column under the label “Bluetooth host device” are operations to be performed by the Bluetooth host device110; operations within a column under the label “first member device” are operations to be performed by the first member device120; operations within a column under the label “second member device” are operations to be performed by the second member device130; and so forth. The same analogous arrangement also applies to the subsequent flowcharts. When the user wants to utilize the Bluetooth communication system100to playback various audio data adopting the BLE Audio technology, the Bluetooth host device110should be paired with respective member devices in the Bluetooth device set102in advance.
In this situation, the processing circuit117of the Bluetooth host device110may generate a Bluetooth inquiry request containing a device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110), and may utilize the host-side communication circuit111to transmit the Bluetooth inquiry request to other nearby Bluetooth devices, and then wait for responses from the member devices of the Bluetooth device set102. In practice, the processing circuit117may also fill in other data or messages in the above Bluetooth inquiry request depending on the requirements of the functional design. Alternatively, the processing circuit117may control the host-side communication circuit111to operate in a predetermined receiving mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs. For example, the predetermined receiving mode may be an operating mode capable of receiving various Bluetooth advertising packets, such as an LE Extended Passive Scan mode, an LE Extended Active Scan mode, an LE Extended Initiator mode, or a Periodic Scanning mode. On the other hand, all member devices in the Bluetooth device set102may enter a predetermined transmitting mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs, or may operate in the predetermined transmitting mode after receiving the Bluetooth inquiry request generated by the Bluetooth host device110. The predetermined transmitting mode may be various operating modes capable of transmitting various Bluetooth advertising packets and/or Bluetooth protocol data units (PDUs). For example, the predetermined transmitting mode may be an Advertising mode, a Scannable mode, a Connectable mode, a Non-connectable mode, a Non-scannable mode, a Periodic Advertising mode, an LE Extended Advertising mode, or an LE Periodic Advertising mode. The first member device120may perform the operation202ofFIG.2after entering the predetermined transmitting mode. In the operation202, the first control circuit125may generate one or more target Bluetooth packets, wherein the one or more target Bluetooth packets contain a device information of the first member device120(e.g., a Bluetooth device address of the first member device120) and an auto-pair request that can be utilized to identify the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125. The first control circuit125may define the content and format of the auto-pair request by itself according to preset rules. The first control circuit125may insert the auto-pair request and the device information of the first member device120into one or more specific fields of a single target Bluetooth packet, or may insert them into specific fields of multiple target Bluetooth packets in a distributed manner. In operations, the first control circuit125may utilize predetermined Bluetooth advertising packets to be the above target Bluetooth packets. For example, the one or more target Bluetooth packets mentioned in the operation202may be one or more auxiliary advertising indication (AUX_ADV_IND) packets, or may be a group of packets formed by one or more extended advertising indication (ADV_EXT_IND) packets and one or more auxiliary advertising indication (AUX_ADV_IND) packets.
For another example, the aforementioned one or more target Bluetooth packets may be one or more auxiliary chain indication (AUX_CHAIN_IND) packets, or may be a group of packets formed by one or more extended advertising indication (ADV_EXT_IND) packets, one or more auxiliary advertising indication (AUX_ADV_IND) packets, and one or more auxiliary chain indication (AUX_CHAIN_IND) packets. For another example, the aforementioned one or more target Bluetooth packets may be one or more auxiliary scan response (AUX_SCAN_RSP) packets, or may be a group of packets formed by one or more extended advertising indication (ADV_EXT_IND) packets, one or more auxiliary advertising indication (AUX_ADV_IND) packets, and one or more auxiliary scan response (AUX_SCAN_RSP) packets. For another example, the aforementioned one or more target Bluetooth packets may be a group of packets formed by one or more auxiliary scan response (AUX_SCAN_RSP) packets, and one or more auxiliary chain indication (AUX_CHAIN_IND) packets. For another example, the aforementioned one or more target Bluetooth packets may be a group of packets formed by one or more extended advertising indication (ADV_EXT_IND) packets, one or more auxiliary advertising indication (AUX_ADV_IND) packets, one or more auxiliary scan response (AUX_SCAN_RSP) packets, and one or more auxiliary chain indication (AUX_CHAIN_IND) packets. For another example, the aforementioned one or more target Bluetooth packets may be one or more auxiliary synchronous indication (AUX_SYNC_IND) packets, or may be a group of packets formed by one or more extended advertising indication (ADV_EXT_IND) packets, one or more auxiliary advertising indication (AUX_ADV_IND) packets, and one or more auxiliary synchronous indication (AUX_SYNC_IND) packets. For another example, the aforementioned one or more target Bluetooth packets may be one or more advertising indication (ADV_IND) packets, one or more non-connectable advertising indication (ADV_NONCONN_IND) packets, or one or more discoverable advertisement indication (ADV_DISCOVER_IND) packets. For another example, the aforementioned one or more target Bluetooth packets may be a group of packets formed by one or more advertising indication (ADV_IND) packets, and one or more non-connectable advertising indication (ADV_NONCONN_IND) packets. For another example, the aforementioned one or more target Bluetooth packets may be a group of packets formed by one or more advertising indication (ADV_IND) packets, and one or more discoverable advertisement indication (ADV_DISCOVER_IND) packets. For another example, the aforementioned one or more target Bluetooth packets may be a group of packets formed by one or more advertising indication (ADV_IND) packets, one or more non-connectable advertising indication (ADV_NONCONN_IND) packets, and one or more discoverable advertisement indication (ADV_DISCOVER_IND) packets. In the operation204, the first control circuit125may utilize the first communication circuit121to transmit the aforementioned one or more target Bluetooth packets to the Bluetooth host device110. In the operation206, the host-side communication circuit111of the Bluetooth host device110may receive the one or more target Bluetooth packets. In the operation208, the processing circuit117of the Bluetooth host device110may parse the one or more target Bluetooth packets to acquire the auto-pair request and the device information of the first member device120transmitted from the first member device120. 
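To make the operation202through the operation208more concrete, the following sketch illustrates one plausible way to pack an auto-pair request into a single advertising AD structure on the member side and recover it on the host side. Only the generic length-type-value AD framing is standard Bluetooth; the company identifier, the auto-pair tag, and the model/firmware fields are hypothetical stand-ins, since the disclosure leaves the content and format of the auto-pair request to the first control circuit125.

import struct

AD_TYPE_MFG_DATA = 0xFF     # standard "Manufacturer Specific Data" AD type
AUTO_PAIR_TAG = 0x01        # hypothetical tag marking an auto-pair request
COMPANY_ID = 0xFFFF         # hypothetical vendor identifier

def build_auto_pair_ad(model: int, fw_version: int) -> bytes:
    # Member side (operation 202): pack the request as [length][type][data].
    data = struct.pack("<HBBH", COMPANY_ID, AUTO_PAIR_TAG, model, fw_version)
    return bytes([len(data) + 1, AD_TYPE_MFG_DATA]) + data

def parse_auto_pair_ad(ad: bytes):
    # Host side (operation 208): recover the request, or None if absent.
    if len(ad) < 2 or ad[1] != AD_TYPE_MFG_DATA:
        return None
    data = ad[2:1 + ad[0]]                   # the length byte covers type + data
    if len(data) != struct.calcsize("<HBBH"):
        return None                          # format feature does not match
    company, tag, model, fw_version = struct.unpack("<HBBH", data)
    if company != COMPANY_ID or tag != AUTO_PAIR_TAG:
        return None                          # predetermined content missing
    return {"model": model, "fw_version": fw_version}

request = parse_auto_pair_ad(build_auto_pair_ad(model=0x21, fw_version=0x0103))

A host following this sketch would treat the sender as a privileged device only when parsing succeeds, which corresponds to the format-and-content inspection described next.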
Then, the processing circuit117may inspect the format and content of the auto-pair request to determine whether the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125matches a predetermined condition (e.g., whether it or they correspond to the brand, the vendor, the circuit model, and/or the firmware version of the Bluetooth host device110and/or the processing circuit117). For example, the processing circuit117may inspect whether the format of the auto-pair request matches a predetermined feature or not, or whether the auto-pair request contains a predetermined content or not. In one embodiment, if the format of the auto-pair request matches the predetermined feature, and/or the auto-pair request contains the predetermined content, then the processing circuit117may determine that the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125matches the predetermined condition. In this situation, the processing circuit117may identify the first member device120as a first privileged device according to the aforementioned auto-pair request, and then perform the operation210. In this embodiment, when the first member device120is identified as a privileged device by the processing circuit117, it means that when the Bluetooth host device110and the first member device120conduct a Bluetooth pairing procedure, the Bluetooth host device110and the first member device120can skip many traditional key parameter negotiation steps, and are permitted to directly adopt a pre-defined simplified method to generate the cypher keys. Relevant operations will be further described in the operation210through the operation216. On the contrary, if the format of the auto-pair request does not match the predetermined feature, and the auto-pair request does not contain predetermined contents, then the processing circuit117may determine that the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125does not match the predetermined condition. In this situation, the processing circuit117may identify the first member device120as an ordinary Bluetooth device, and then adopt various existing approaches to conduct Bluetooth pairing with the first member device120so as to generate related cypher keys. In the operation210, the processing circuit117may establish a connection with the first member device120through the host-side communication circuit111, and may decide a first parameter P1and generate a first privileged pairing notice. In one embodiment, the processing circuit117may generate a first predetermined value, a first random value, a first predetermined address, a first random address, a first predetermined string, a first random string, a first predetermined token, a first random token, or a first access address corresponding to the first member device120to be the first parameter P1. In another embodiment, the processing circuit117may opt to use the content of a predetermined field in a certain Bluetooth packet transmitted from the first member device120to the Bluetooth host device110to be the first parameter P1, or may instead opt to use the content of a predetermined field in a certain Bluetooth packet transmitted from the Bluetooth host device110to the first member device120to be the first parameter P1. 
For example, the processing circuit117may opt to use an initial value of a cyclic redundancy check (CRCInit), a window size (WinSize), a window offset (WinOffset), a connection event interval (Connection Interval), a slave latency, a timeout value, a channel map, a hop, or a sleep clock accuracy (SCA) in a connection indication (Connect_IND) packet or in an auxiliary connection request (AUX_Connect_REQ) packet generated by the processing circuit117to be the first parameter P1. For another example, the processing circuit117may opt to use the value of the cyclic redundancy check (CRC) in the aforementioned connection indication (Connect_IND) packet or auxiliary connection request (AUX_Connect_REQ) packet to be the first parameter P1. For another example, the processing circuit117may opt to use the value of the cyclic redundancy check (CRC) in an auxiliary connection response (AUX_Connect_RSP) packet or in a specific Bluetooth advertising packet generated by the first member device120to be the first parameter P1. The processing circuit117may also transmit the first privileged pairing notice to the first member device120through the host-side communication circuit111in the operation210. Additionally, in the operation210, the processing circuit117may also transmit the first parameter P1or a first field indication to the first member device120through the host-side communication circuit111, wherein the first field indication is utilized for indicating a specific packet field whose content is to be utilized as the first parameter P1. In this situation, the first communication circuit121of the first member device120may perform the operation212to receive the first privileged pairing notice transmitted from the Bluetooth host device110. In addition, the first communication circuit121may also receive the first parameter P1or a related first field indication transmitted from the Bluetooth host device110in the operation212, so that the first control circuit125is enabled to learn the first parameter P1decided by the Bluetooth host device110accordingly. In the operation214, the processing circuit117of the Bluetooth host device110may generate a first cypher key Key-1required for conducting subsequent Bluetooth data transmissions with the first member device120according to the first parameter P1. For example, the processing circuit117may execute a predetermined cypher key algorithm to generate the first cypher key Key-1according to the first parameter P1and the device information of the Bluetooth host device110. For another example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm to generate the first cypher key Key-1according to the first parameter P1, the device information of the Bluetooth host device110, and the device information of the first member device120. In the operation216, the first control circuit125of the first member device120may generate a second cypher key Key-2required for conducting subsequent Bluetooth data transmissions with the Bluetooth host device110according to the first parameter P1. In other words, the second cypher key Key-2generated by the first control circuit125and the first cypher key Key-1generated by the processing circuit117will correspond to each other. For example, the first control circuit125may execute the aforementioned predetermined cypher key algorithm to generate the second cypher key Key-2according to the first parameter P1and the device information of the first member device120. 
For another example, the first control circuit125may execute the aforementioned predetermined cypher key algorithm to generate the second cypher key Key-2according to the first parameter P1, the device information of the first member device120, and the device information of the Bluetooth host device110. In other words, after the first member device120is identified as the first privileged device by the Bluetooth host device110, the Bluetooth host device110and the first member device120can omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. That is, the Bluetooth host device110can directly generate the first cypher key Key-1based on the first parameter P1decided by the Bluetooth host device110while the first member device120can directly generate the second cypher key Key-2based on the first parameter P1decided by the Bluetooth host device110. As a result, it can significantly reduce the required time for generating the first cypher key Key-1and the second cypher key Key-2. In the operation218, the processing circuit117of the Bluetooth host device110may use the first cypher key Key-1to conduct Bluetooth data transmissions with the first member device120through the host-side communication circuit111. In the operation220, the first control circuit125of the first member device120may use the second cypher key Key-2to conduct Bluetooth data transmissions with the Bluetooth host device110through the first communication circuit121. For example, in the embodiments where both the Bluetooth host device110and the first member device120support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the first member device120, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110and the first member device120to thereby extend the serving time of the Bluetooth host device110and the first member device120, but also effectively improves the overall quality of the audio playback operations. As shown inFIG.3, after the second cypher key Key-2is generated by the first control circuit125, the first control circuit125may further perform the operation302to utilize the first communication circuit121to transmit a device set identification information Set-ID corresponding to the Bluetooth device set102. For example, the first control circuit125may utilize a Set Identity Resolving Key (SIRK) of the Bluetooth device set102to be the device set identification information Set-ID of the Bluetooth device set102. In this situation, the host-side communication circuit111of the Bluetooth host device110may perform the operation304to receive the device set identification information Set-ID transmitted from the first member device120. In operations, the first control circuit125of the first member device120may generate a resolvable set identifier (RSI) corresponding to the first member device120at an appropriate time point (e.g., at any time point between the operation202and the operation220, or at a certain time point before the operation202). 
For example, the first control circuit125may perform a predetermined target algorithm according to the device set identification information Set-ID of the Bluetooth device set102to generate a random address, and utilize the random address to be a resolvable set identifier RSI-1corresponding to the first member device120. In practice, the first control circuit125may utilize the first communication circuit121to transmit the resolvable set identifier RSI-1corresponding to the first member device120to the Bluetooth host device110at any time point after the operation202. Alternatively, the first control circuit125may also insert the resolvable set identifier RSI-1corresponding to the first member device120into the one or more target Bluetooth packets to be transmitted to the Bluetooth host device110in the operation202. As a result, the Bluetooth host device110is enabled to receive the resolvable set identifier RSI-1corresponding to the first member device120in the operation206. Similarly, the second control circuit135of the second member device130may perform the operation306ofFIG.3at any appropriate time point to generate a resolvable set identifier RSI-2corresponding to the second member device130. For example, the second control circuit135may perform the aforementioned target algorithm according to the device set identification information Set-ID of the Bluetooth device set102to generate a random address, and utilize the random address to be the resolvable set identifier RSI-2corresponding to the second member device130. In practice, the second control circuit135may perform the operation306at any time point between the operation202and the operation220, or at a certain time point before the operation202. As described previously, all member devices in the Bluetooth device set102may operate in a predetermined transmitting mode. The second member device130may perform the operation308ofFIG.3during a time period while the second member device130operates in the predetermined transmitting mode. In the operation308, the second control circuit135may utilize the second communication circuit131to transmit a device information of the second member device130(e.g., a Bluetooth device address of the second member device130) and the resolvable set identifier RSI-2to the Bluetooth host device110. In operations, the second control circuit135may generate one or more target Bluetooth packets containing the device information of the second member device130and the resolvable set identifier RSI-2by adopting the approach described in the aforementioned operation202. For example, the second control circuit135may insert the resolvable set identifier RSI-2and the device information of the second member device130into one or more specific fields of a single target Bluetooth packet, or may insert them into specific fields of multiple target Bluetooth packets in a distributed manner. Then, the second control circuit135may utilize the second communication circuit131to transmit the one or more target Bluetooth packets to the Bluetooth host device110. The type of the target Bluetooth packets referred to in the operation308may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here. In this situation, the host-side communication circuit111of the Bluetooth host device110may perform the operation310to receive the one or more target Bluetooth packets transmitted from the second member device130. 
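The disclosure does not spell out the predetermined target algorithm. For orientation only, the sketch below mirrors the general shape used by resolvable Bluetooth addresses, where a short random part is combined with a keyed hash of itself; HMAC-SHA256 truncated to 24 bits is used here purely as a stand-in for the actual hash, and the example Set-ID value is invented. The matching host-side check of the operation312is sketched further below.

import hashlib, hmac, os

def rsi_hash(set_id: bytes, prand: bytes) -> bytes:
    # 24-bit keyed hash; a stand-in for the unspecified target algorithm.
    return hmac.new(set_id, prand, hashlib.sha256).digest()[:3]

def generate_rsi(set_id: bytes) -> bytes:
    prand = os.urandom(3)                   # 24-bit random part
    return prand + rsi_hash(set_id, prand)  # 48 bits, shaped like an address

set_id = bytes.fromhex("00112233445566778899aabbccddeeff")  # example Set-ID
rsi_2 = generate_rsi(set_id)  # e.g., the RSI-2 of the second member device130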
The processing circuit117may parse the one or more target Bluetooth packets to acquire the device information of the second member device130and the resolvable set identifier RSI-2. Then, in the operation312, the processing circuit117may inspect the resolvable set identifier RSI-2of the second member device130according to the device set identification information Set-ID transmitted from the first member device120, so as to determine whether the second member device130belongs to the Bluetooth device set102or not. For example, in this embodiment, the processing circuit117may inspect whether the resolvable set identifier RSI-2is a random address calculated based on the device set identification information Set-ID or not. If the processing circuit117determines that the resolvable set identifier RSI-2is a random address generated based on the device set identification information Set-ID, then the processing circuit117may determine that the second member device130belongs to the Bluetooth device set102. In this situation, the processing circuit117may identify the second member device130as a member device of the Bluetooth device set102in the operation312according to the device set identification information Set-ID and the resolvable set identifier RSI-2, and then perform the operation314. In this embodiment, when the second member device130is identified as a member device of the Bluetooth device set102by the processing circuit117, it means that when the Bluetooth host device110and the second member device130conduct a Bluetooth pairing procedure, the Bluetooth host device110and the second member device130can skip many traditional key parameter negotiation steps, and are permitted to directly adopt a pre-defined simplified method to generate the cypher keys. Relevant operations will be further described in the operation314through the operation320. On the contrary, if the processing circuit117determines that the resolvable set identifier RSI-2is not a random address generated based on the device set identification information Set-ID, then the processing circuit117may determine that the second member device130does not belong to the Bluetooth device set102. In this situation, the processing circuit117may identify the second member device130as an ordinary Bluetooth device, and then adopt various existing approaches to conduct Bluetooth pairing with the second member device130so as to generate related cypher keys. In the operation314, the processing circuit117may establish a connection with the second member device130through the host-side communication circuit111, and may decide a second parameter P2and generate a second privileged pairing notice. In one embodiment, the processing circuit117may generate a second predetermined value, a second random value, a second predetermined address, a second random address, a second predetermined string, a second random string, a second predetermined token, a second random token, or a second access address corresponding to the second member device130to be the second parameter P2. In another embodiment, the processing circuit117may opt to use the content of a predetermined field in a certain Bluetooth packet transmitted from the second member device130to the Bluetooth host device110to be the second parameter P2, or may instead opt to use the content of a predetermined field in a certain Bluetooth packet transmitted from the Bluetooth host device110to the second member device130to be the second parameter P2. 
For example, the processing circuit117may opt to use an initial value of a cyclic redundancy check (CRCInit), a window size (WinSize), a window offset (WinOffset), a connection event interval (Connection Interval), a slave latency, a timeout value, a channel map, a hop, or a sleep clock accuracy (SCA) in a connection indication (Connect_IND) packet or in an auxiliary connection request (AUX_Connect_REQ) packet generated by the processing circuit117to be the second parameter P2. For another example, the processing circuit117may opt to use the value of the cyclic redundancy check (CRC) in the aforementioned connection indication (Connect_IND) packet or auxiliary connection request (AUX_Connect_REQ) packet to be the second parameter P2. For another example, the processing circuit117may opt to use the value of the cyclic redundancy check (CRC) in an auxiliary connection response (AUX_Connect_RSP) packet or in a specific Bluetooth advertising packet generated by the second member device130to be the second parameter P2. The processing circuit117may also transmit the second privileged pairing notice to the second member device130through the host-side communication circuit111in the operation314. Additionally, in the operation314, the processing circuit117may also transmit the second parameter P2or a second field indication to the second member device130through the host-side communication circuit111, wherein the second field indication is utilized for indicating a specific packet field whose content is to be utilized as the second parameter P2. In practice, the second parameter P2may be identical to the first parameter P1, or may be different from the first parameter P1. In this situation, the second communication circuit131of the second member device130may perform the operation316to receive the second privileged pairing notice transmitted from the Bluetooth host device110. In addition, the second communication circuit131may also receive the second parameter P2or a related second field indication transmitted from the Bluetooth host device110in the operation316, so that the second control circuit135is enabled to learn the second parameter P2decided by the Bluetooth host device110accordingly. In the operation318, the processing circuit117of the Bluetooth host device110may generate a third cypher key Key-3required for conducting subsequent Bluetooth data transmissions with the second member device130according to the second parameter P2. For example, the processing circuit117may execute a predetermined cypher key algorithm according to the second parameter P2and the device information of the Bluetooth host device110to generate the third cypher key Key-3. For another example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm according to the second parameter P2, the device information of the second member device130, and the device information of the Bluetooth host device110to generate the third cypher key Key-3. In the operation320, the second control circuit135of the second member device130may generate a fourth cypher key Key-4required for conducting subsequent Bluetooth data transmissions with the Bluetooth host device110according to the second parameter P2. In other words, the fourth cypher key Key-4generated by the second control circuit135and the third cypher key Key-3generated by the processing circuit117will correspond to each other. 
For example, the second control circuit135may execute the aforementioned predetermined cypher key algorithm according to the second parameter P2and the device information of the second member device130to generate the fourth cypher key Key-4. For another example, the second control circuit135may execute the aforementioned predetermined cypher key algorithm according to the second parameter P2, the device information of the second member device130, and the device information of the Bluetooth host device110to generate the fourth cypher key Key-4. In other words, after the second member device130is identified as a member device of the Bluetooth device set102by the Bluetooth host device110, the Bluetooth host device110and the second member device130may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. That is, the Bluetooth host device110can directly generate the third cypher key Key-3based on the second parameter P2decided by the Bluetooth host device110while the second member device130can directly generate the fourth cypher key Key-4based on the second parameter P2decided by the Bluetooth host device110. As a result, it can significantly reduce the required time for generating the third cypher key Key-3and the fourth cypher key Key-4. In the operation322, the processing circuit117of the Bluetooth host device110may use the third cypher key Key-3to conduct Bluetooth data transmissions with the second member device130through the host-side communication circuit111. In the operation324, the second control circuit135of the second member device130may use the fourth cypher key Key-4to conduct Bluetooth data transmissions with the Bluetooth host device110through the second communication circuit131. In practice, the Bluetooth host device110and other member devices in the Bluetooth device set102(e.g., the third member device140) may establish connections according to the aforementioned interaction between the Bluetooth host device110and the second member device130to respectively generate required cypher keys for conducting subsequent Bluetooth data transmissions between both parties. In the embodiments where both the Bluetooth host device110and the second member device130support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the second member device130, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110and the second member device130to thereby extend the serving time of the Bluetooth host device110and the second member device130, but also effectively improves the overall quality of the audio playback operations. In another embodiment, each of the other member devices in the Bluetooth device set102(e.g., the second member device130and the third member device140) may generate one or more target Bluetooth packets containing the aforementioned auto-pair request, the device information of respective member device, and the resolvable set identifiers corresponding to respective member device, and transmit the one or more target Bluetooth packets to the Bluetooth host device110according to the approach adopted by the first member device120in the operation202.
In other words, all member devices in the Bluetooth device set102may conduct the same operations in the operation202. In this situation, the Bluetooth host device110may identify a member device that transmits the auto-pair request first as the first privileged device, and then conduct the simplified pairing procedure with the first privileged device first. Afterwards, the Bluetooth host device110may identify other member devices as member devices of the Bluetooth device set102according to the device set identification information Set-ID transmitted from the first privileged device and the resolvable set identifiers transmitted from other member devices, and then conduct the simplified pairing procedure with other member devices. It can be appreciated from the foregoing descriptions ofFIG.2throughFIG.3that the Bluetooth host device110is enabled to determine whether the first member device120is a privileged device or not according to the auto-pair request transmitted from the first member device120. After the first member device120is identified as a privileged device by the Bluetooth host device110, the Bluetooth host device110and the first member device120can omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. That is, the Bluetooth host device110is enabled to generate the first cypher key Key-1by simply performing the aforementioned operation210and operation214while the first member device120is enabled to generate the second cypher key Key-2by simply performing the aforementioned operation212and operation216. As a result, it can significantly reduce the required time for generating the first cypher key Key-1and the second cypher key Key-2. On the other hand, after the second member device130is identified as a member device of the Bluetooth device set102by the Bluetooth host device110, the Bluetooth host device110and the second member device130may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. That is, the Bluetooth host device110is enabled to generate the third cypher key Key-3by simply performing the aforementioned operation314and operation318while the second member device130is enabled to generate the fourth cypher key Key-4by simply performing the aforementioned operation316and operation320. As a result, it can significantly reduce the required time for generating the third cypher key Key-3and the fourth cypher key Key-4. Apparently, the method of the aboveFIG.2throughFIG.3can effectively simplify the Bluetooth pairing procedure between the Bluetooth host device110and respective member device of the Bluetooth device set102, thereby significantly reducing the required time for completing the pairing procedure between the Bluetooth host device110and the Bluetooth device set102. According to the method described inFIG.2throughFIG.3, the Bluetooth host device110and respective member devices of the Bluetooth device set102do not need to use any display device. Therefore, the display device150may be omitted, and the hardware structure, the weight, and the volume of respective member devices of the Bluetooth device set102can be greatly simplified.
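As a recap of the simplified procedure summarized above, the sketch below chains a field indication to a key derivation, so that both sides compute corresponding keys from the same parameter without any negotiation. The packet fields, the device addresses, and the HMAC-SHA256 derivation are all illustrative assumptions; the disclosure only specifies that some predetermined cypher key algorithm is executed over the parameter and the device information.

import hashlib, hmac

parsed_connect_ind = {"CRCInit": 0x5A3C91, "WinSize": 2, "WinOffset": 5}

def parameter_from_field(fields: dict, field_indication: str) -> bytes:
    # Both sides read the indicated field's content as the parameter.
    return fields[field_indication].to_bytes(4, "little")

def derive_cypher_key(p1: bytes, *device_infos: bytes) -> bytes:
    material = b"".join(sorted(device_infos))  # order-independent inputs
    return hmac.new(p1, material, hashlib.sha256).digest()[:16]  # 128 bits

host_addr = bytes.fromhex("c0ffee010203")      # hypothetical device addresses
member_addr = bytes.fromhex("c0ffee040506")

p1 = parameter_from_field(parsed_connect_ind, "CRCInit")
key_1 = derive_cypher_key(p1, host_addr, member_addr)    # host, operation 214
key_2 = derive_cypher_key(p1, member_addr, host_addr)    # member, operation 216
assert key_1 == key_2   # the two keys correspond without any negotiation

The third cypher key Key-3and the fourth cypher key Key-4follow the same shape with the second parameter P2.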
Additionally, in the embodiments where the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the member devices of the Bluetooth device set102, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110and the member devices of the Bluetooth device set102to thereby extend the serving time of the Bluetooth host device110and the member devices of the Bluetooth device set102, but also effectively improves the overall quality of the audio playback operations. Please refer toFIG.4andFIG.5, which collectively show a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a second embodiment of the present disclosure. As described previously, when the user wants to utilize the Bluetooth device set102to playback audio data transmitted from the Bluetooth host device110by adopting the BLE Audio technology, the Bluetooth host device110should be paired with respective member devices in the Bluetooth device set102in advance. In this situation, as described above, the processing circuit117may generate a Bluetooth inquiry request containing the device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110), and may utilize the host-side communication circuit111to transmit the Bluetooth inquiry request to other nearby Bluetooth devices, and then wait for responses from the member devices of the Bluetooth device set102. Alternatively, the processing circuit117may control the host-side communication circuit111to operate in the aforementioned predetermined receiving mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs. On the other hand, all member devices in the Bluetooth device set102may enter a predetermined transmitting mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs, or may operate in the predetermined transmitting mode after receiving the Bluetooth inquiry request generated by the Bluetooth host device110. The first member device120may perform the operation402ofFIG.4after entering the predetermined transmitting mode. In the operation402, the first control circuit125may generate one or more target Bluetooth packets, wherein the one or more target Bluetooth packets contain a resolvable set identifier RSI-1corresponding to the first member device120and a device information of the first member device120(e.g., a Bluetooth device address of the first member device120). In operations, the first control circuit125of the first member device120may generate a resolvable set identifier RSI-1corresponding to the first member device120in the operation402or at a certain time point before the operation402. For example, the first control circuit125may perform a predetermined target algorithm according to the device set identification information Set-ID of the Bluetooth device set102to generate a random address, and utilize the random address as a resolvable set identifier RSI-1corresponding to the first member device120.
The first control circuit125may insert the resolvable set identifier RSI-1and the device information of the first member device120into one or more specific fields of a single target Bluetooth packet, or may insert them into specific fields of multiple target Bluetooth packets in a distributed manner. In practice, the first control circuit125may also insert the device set identification information Set-ID of the Bluetooth device set102, and/or the device information of other member devices in the Bluetooth device set102(e.g., the second member device130or the third member device140) into the aforementioned one or more target Bluetooth packets. The type of the target Bluetooth packets referred to in the operation402may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here. In some embodiments, each of the other member devices in the Bluetooth device set102(e.g., the second member device130and the third member device140) may generate one or more target Bluetooth packets containing the device information of respective member device and the resolvable set identifiers corresponding to respective member device, and transmit the one or more target Bluetooth packets to the Bluetooth host device110according to the approach adopted by the first member device120in the operation402. Similarly, each of the other member devices in the Bluetooth device set102(e.g., the second member device130and the third member device140) may also insert the device set identification information Set-ID of the Bluetooth device set102, and/or the device information of other member devices of the Bluetooth device set102into the one or more target Bluetooth packets to be transmitted to the Bluetooth host device110. In other words, all member devices in the Bluetooth device set102may conduct the same operations in the operation402. In the operation404, the first control circuit125may utilize the first communication circuit121to transmit the aforementioned one or more target Bluetooth packets to the Bluetooth host device110. In the operation406, the host-side communication circuit111of the Bluetooth host device110may receive the one or more target Bluetooth packets. In the operation408, the processing circuit117of the Bluetooth host device110may parse the one or more target Bluetooth packets to acquire the resolvable set identifier RSI-1and the device information of the first member device120transmitted from the first member device120. Then, the processing circuit117may inspect the position of the resolvable set identifier RSI-1in the one or more target Bluetooth packets, to determine whether the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125matches a predetermined condition (e.g., whether it or they correspond to the brand, the vendor, the circuit model, and/or the firmware version of the Bluetooth host device110and/or the processing circuit117). For example, the processing circuit117may inspect whether the position of the resolvable set identifier RSI-1in the one or more target Bluetooth packets matches a predetermined rule or not.
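One plausible reading of this position inspection is an index check over the payload's AD structures, as sketched below; the AD type assumed to carry the resolvable set identifier and the expected index are illustrative choices, not values taken from the disclosure.

AD_TYPE_RSI = 0x2E       # assumed AD type carrying the resolvable set identifier
EXPECTED_INDEX = 1       # assumed rule: the RSI is the second AD structure

def rsi_position(payload: bytes) -> int:
    # Walk the [length][type][data] structures and report the RSI's index.
    index, offset = 0, 0
    while offset + 1 < len(payload):
        length, ad_type = payload[offset], payload[offset + 1]
        if length == 0:
            break                    # malformed structure; stop scanning
        if ad_type == AD_TYPE_RSI:
            return index
        offset += 1 + length         # skip the length byte plus [type + data]
        index += 1
    return -1                        # no RSI found in this payload

def matches_predetermined_rule(payload: bytes) -> bool:
    return rsi_position(payload) == EXPECTED_INDEX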
In one embodiment, if the position of the resolvable set identifier RSI-1in the one or more target Bluetooth packets matches the predetermined rule, then the processing circuit117may determine that the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125matches the predetermined condition. In this situation, the processing circuit117may identify the first member device120as a first privileged device according to the position of the resolvable set identifier RSI-1, and then perform the operation410ofFIG.4. In this embodiment, when the first member device120is identified as a privileged device by the processing circuit117, it means that when the Bluetooth host device110and the first member device120conduct a Bluetooth pairing procedure, the Bluetooth host device110and the first member device120can skip many traditional key parameter negotiation steps, and may directly adopt a pre-defined simplified method to generate the cypher keys. The operations of this portion are substantially the same as those in the operation210through the operation216described previously. On the contrary, if the position of the resolvable set identifier RSI-1in the one or more target Bluetooth packets does not match the predetermined rule, then the processing circuit117may determine that the brand, the vendor, the circuit model, and/or the firmware version of the first member device120or the first control circuit125does not match the predetermined condition. In this situation, the processing circuit117may identify the first member device120as an ordinary Bluetooth device, and then adopt various existing approaches to conduct Bluetooth pairing with the first member device120so as to generate related cypher keys. In the operation410, the processing circuit117may generate a corresponding candidate device list according to messages transmitted from multiple nearby Bluetooth devices (e.g., responses to the Bluetooth inquiry request sent by the Bluetooth host device110), and control the display device150to display the candidate device list. The processing circuit117may also conduct filtering on the device items to be displayed in the candidate device list in the operation410, and control the display device150to display a single device item for representing the entire Bluetooth device set102in the candidate device list, but not to simultaneously display a plurality of device items for respectively representing a plurality of member devices of the Bluetooth device set102in the candidate device list, so as to simplify the complexity of the user's manipulations during the Bluetooth pairing procedure. As described previously, all member devices in the Bluetooth device set102may conduct the same operations in the operation402, that is, transmitting one or more target Bluetooth packets containing the device set identification information Set-ID of the Bluetooth device set102, their own device information, their own resolvable set identifier, and the device information of other member devices to the Bluetooth host device110. In the aforementioned operation410, the processing circuit117may determine which member devices, just like the first member device120, belong to the Bluetooth device set102according to the contents of the target Bluetooth packets transmitted from different member devices.
For example, the processing circuit117may inspect the resolvable set identifier RSI-2provided by the second member device130according to the device set identification information Set-ID transmitted from the first member device120to determine whether the second member device130belongs to the Bluetooth device set102or not. In this embodiment, the processing circuit117may inspect whether the resolvable set identifier RSI-2is a random address generated based on the device set identification information Set-ID. If the processing circuit117determines that the resolvable set identifier RSI-2is a random address generated based on the device set identification information Set-ID, then the processing circuit117may determine that both the first member device120and the second member device130belong to the Bluetooth device set102. For another example, the processing circuit117may compare the device information of the second member device130provided by the first member device120with the device information of the second member device130provided by the second member device130itself, so as to determine whether the second member device130belongs to the Bluetooth device set102or not. In this embodiment, if the device information of the second member device130provided by the first member device120is identical to the device information of the second member device130provided by the second member device130itself, then the processing circuit117may determine that both the first member device120and the second member device130belong to the Bluetooth device set102. The user can know which Bluetooth devices can be paired with the Bluetooth host device110from the candidate device list displayed on the display device150. If the processing circuit117does not conduct filtering on the device items to be displayed in the candidate device list in the operation410, multiple device items respectively representing multiple member devices of the Bluetooth device set102may be shown in the candidate device list. Such a Bluetooth pairing method is likely to be too complicated (because the user has to select multiple member devices to be paired with the Bluetooth host device110one by one), and even makes it difficult for the user to find the correct pairing object. From another aspect, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation410can simplify the complexity of the user's manipulation during the Bluetooth pairing procedure, and can reduce the possibility of the user's erroneous manipulation. The user may manipulate the input circuit113to select the Bluetooth device set102as the object to be paired with the Bluetooth host device110. In this situation, the input circuit113may perform the operation412to receive a selection command issued by the user, and transmit the selection command to the processing circuit117. Then, the operations of the Bluetooth host device110in the following operation210and operation214ofFIG.4are the same as in the corresponding operations inFIG.2, while the operations of the first member device120in the following operation212and operation216ofFIG.4are the same as in the corresponding operations inFIG.2. For the sake of brevity, the descriptions will not be repeated here.
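Combining the membership inspection above with the list filtering of the operation410, a sketch might look as follows; the hash is the same HMAC-SHA256 stand-in used in the earlier RSI-generation sketch, and the device records and the displayed label are invented for illustration.

import hashlib, hmac
from dataclasses import dataclass
from typing import Optional

def rsi_hash(set_id: bytes, prand: bytes) -> bytes:
    return hmac.new(set_id, prand, hashlib.sha256).digest()[:3]

def belongs_to_set(rsi: bytes, set_id: bytes) -> bool:
    # Operation 312: does the received RSI resolve against the Set-ID?
    prand, received = rsi[:3], rsi[3:]
    return hmac.compare_digest(rsi_hash(set_id, prand), received)

@dataclass
class Device:
    name: str
    rsi: Optional[bytes]             # None for devices advertising no RSI

def filter_candidate_list(devices: list[Device], set_id: bytes) -> list[str]:
    # Operation 410: collapse all resolving devices into one displayed item.
    items, set_shown = [], False
    for dev in devices:
        if dev.rsi is not None and belongs_to_set(dev.rsi, set_id):
            if not set_shown:
                items.append("Bluetooth device set")
                set_shown = True
        else:
            items.append(dev.name)
    return items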
In other words, after the first member device120is identified as the first privileged device by the Bluetooth host device110, the Bluetooth host device110and the first member device120can omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. That is, the Bluetooth host device110is enabled to generate the first cypher key Key-1by simply performing the aforementioned operation210and operation214, while the first member device120is enabled to generate the second cypher key Key-2by simply performing the aforementioned operation212and operation216. As a result, the required time for generating the first cypher key Key-1and the second cypher key Key-2can be significantly reduced. As shown inFIG.5, the Bluetooth host device110may perform the operation218ofFIG.5and subsequent operations after generating the first cypher key Key-1, and the first member device120may perform the operation220ofFIG.5and subsequent operations after generating the second cypher key Key-2. Similarly, the second control circuit135of the second member device130may perform the operation306ofFIG.5at an appropriate time to generate a resolvable set identifier RSI-2corresponding to the second member device130. For example, the second control circuit135may perform the aforementioned target algorithm according to the device set identification information Set-ID of the Bluetooth device set102to generate a random address, and utilize the random address as the resolvable set identifier RSI-2corresponding to the second member device130. In practice, the second control circuit135may perform the operation306ofFIG.5at any time point between the operation402ofFIG.4and the operation220ofFIG.5, or at a certain time point before the operation402ofFIG.4. The operations of the Bluetooth communication system100in the respective operations ofFIG.5are the same as in the corresponding operations of the aforementionedFIG.2andFIG.3. For the sake of brevity, the descriptions will not be repeated here. In other words, in the embodiment ofFIG.5, after the second member device130is identified as a member device of the Bluetooth device set102by the Bluetooth host device110, the Bluetooth host device110and the second member device130may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. That is, the Bluetooth host device110is enabled to generate the third cypher key Key-3by simply performing the aforementioned operation210and operation214, while the second member device130is enabled to generate the fourth cypher key Key-4by simply performing the aforementioned operation212and operation216. As a result, the required time for generating the third cypher key Key-3and the fourth cypher key Key-4can be significantly reduced. In practice, the Bluetooth host device110and other member devices in the Bluetooth device set102(e.g., the third member device140) may establish connections according to the aforementioned interaction between the Bluetooth host device110and the second member device130to respectively generate the required cypher keys for conducting subsequent Bluetooth data transmissions between both parties.
Similarly, in the embodiments where the Bluetooth host device110, the first member device120, and the second member device130support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the first member device120and the second member device130, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, this not only reduces the power consumption of the Bluetooth host device110, the first member device120, and the second member device130to thereby extend the serving time of these devices, but also effectively improves the overall quality of the audio playback operations. In another embodiment, other member devices in the Bluetooth device set102(e.g., the second member device130and the third member device140) may generate one or more target Bluetooth packets containing the device information of the respective member device and the resolvable set identifier corresponding to the respective member device, and transmit the one or more target Bluetooth packets to the Bluetooth host device110according to the approach adopted by the first member device120in the operation402ofFIG.4. In other words, all member devices in the Bluetooth device set102may conduct the same operations in the operation402. In this situation, the Bluetooth host device110may identify the member device that transmits the auto-pair request first as the first privileged device, and then conduct the simplified pairing procedure with the first privileged device first. Afterwards, the Bluetooth host device110may identify other member devices as member devices of the Bluetooth device set102according to the device set identification information Set-ID transmitted from the first privileged device and the resolvable set identifiers transmitted from the other member devices, and then conduct the simplified pairing procedure with the other member devices. It can be appreciated from the foregoing descriptions ofFIG.2throughFIG.3that the Bluetooth host device110is enabled to determine whether the first member device120is a privileged device or not according to the position of the resolvable set identifier RSI-1in the one or more target Bluetooth packets transmitted from the first member device120. After the first member device120is identified as a privileged device by the Bluetooth host device110, the Bluetooth host device110and the first member device120can omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. That is, the Bluetooth host device110is enabled to generate the first cypher key Key-1by simply performing the aforementioned operation210and operation214, while the first member device120is enabled to generate the second cypher key Key-2by simply performing the aforementioned operation212and operation216. As a result, the required time for generating the first cypher key Key-1and the second cypher key Key-2can be significantly reduced.
On the other hand, after the second member device130is identified as a member device of the Bluetooth device set102by the Bluetooth host device110, the Bluetooth host device110and the second member device130may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. That is, the Bluetooth host device110is enabled to generate the third cypher key Key-3by simply performing the aforementioned operation210and operation214, while the second member device130is enabled to generate the fourth cypher key Key-4by simply performing the aforementioned operation212and operation216. As a result, the required time for generating the third cypher key Key-3and the fourth cypher key Key-4can be significantly reduced. Clearly, the above method ofFIG.4throughFIG.5can also effectively simplify the Bluetooth pairing procedure between the Bluetooth host device110and the respective member devices of the Bluetooth device set102, thereby significantly reducing the required time for completing the pairing procedure between the Bluetooth host device110and the Bluetooth device set102. Furthermore, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation410can simplify the user's manipulation during the Bluetooth pairing procedure, and also reduce the possibility of the user's erroneous manipulation. Additionally, in the embodiments where the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the member devices of the Bluetooth device set102, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, this not only reduces the power consumption of the Bluetooth host device110and the member devices of the Bluetooth device set102to thereby extend their serving time, but also effectively improves the overall quality of the audio playback operations. Please refer toFIG.6, which shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a third embodiment of the present disclosure. As described previously, when the Bluetooth host device110wants to conduct pairing with the respective member devices of the Bluetooth device set102, the processing circuit117may generate a Bluetooth inquiry request containing the device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110), and may utilize the host-side communication circuit111to transmit the Bluetooth inquiry request to other nearby Bluetooth devices. Alternatively, the processing circuit117may control the host-side communication circuit111to operate in the aforementioned predetermined receiving mode. On the other hand, all member devices in the Bluetooth device set102may enter a predetermined transmitting mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of their internal programs, or may operate in the predetermined transmitting mode after receiving the Bluetooth inquiry request generated by the Bluetooth host device110.
The first member device120may perform the operation602ofFIG.6after entering the predetermined transmitting mode. In the operation602, the first control circuit125may utilize the first communication circuit121to transmit a device information of the first member device120(e.g., a Bluetooth device address of the first member device120) and a device information of the second member device130(e.g., a Bluetooth device address of the second member device130) to the Bluetooth host device110. For example, the first control circuit125may generate one or more target Bluetooth packets containing the device information of the first member device120and the device information of the second member device130, and utilize the first communication circuit121to transmit the one or more target Bluetooth packets to the Bluetooth host device110. The type of the target Bluetooth packets referred to in the operation602may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here. In the operation604, the host-side communication circuit111of the Bluetooth host device110may receive the device information of the first member device120and the device information of the second member device130transmitted from the first member device120. In practice, other member devices in the Bluetooth device set102(e.g., the second member device130and the third member device140) may transmit their own device information and the device information of other member devices to the Bluetooth host device110according to the approach adopted by the first member device120in the operation602. In other words, all member devices in the Bluetooth device set102may conduct the same operations in the operation602. In this situation, the host-side communication circuit111may receive the device information of multiple member devices transmitted from different member devices in the operation604. In the operation606, the processing circuit117may generate a corresponding candidate device list according to messages transmitted from multiple nearby Bluetooth devices (e.g., responses to the Bluetooth inquiry request sent by the Bluetooth host device110), and control the display device150to display the candidate device list. The processing circuit117may also filter the device items to be displayed in the candidate device list in the operation606, and control the display device150to display a single device item representing the entire Bluetooth device set102in the candidate device list, rather than simultaneously displaying a plurality of device items respectively representing a plurality of member devices of the Bluetooth device set102, so as to reduce the complexity of the user's manipulations during the Bluetooth pairing procedure. As described previously, all member devices in the Bluetooth device set102may conduct the same operations in the operation602, that is, transmitting their own device information and the device information of other member devices to the Bluetooth host device110. The processing circuit117may determine which member devices belong to the Bluetooth device set102just like the first member device120according to the device information of multiple member devices transmitted from different member devices in the operation604.
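One plausible way to realize the filtering of the operation606(and the earlier operation410) is to group the responding devices by the set they are determined to belong to and emit a single list entry per set. The grouping key and the display strings in this Python sketch are illustrative assumptions only.

    # Hypothetical candidate-list filtering: collapse all member devices that
    # belong to one device set into a single display item.
    from collections import OrderedDict

    def build_candidate_list(responses):
        """responses: iterable of (device_name, set_id) pairs, where set_id is
        None for an ordinary stand-alone Bluetooth device."""
        items = OrderedDict()
        for name, set_id in responses:
            key = ("set", set_id) if set_id is not None else ("dev", name)
            # The first member seen for a set names the whole set (an assumption).
            items.setdefault(key, f"{name} (set)" if set_id is not None else name)
        return list(items.values())

    # Two earbuds of one set plus an unrelated speaker yield only two items:
    print(build_candidate_list([("Earbud-L", b"\x01"), ("Earbud-R", b"\x01"),
                                ("Speaker", None)]))
    # -> ['Earbud-L (set)', 'Speaker']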
For example, the processing circuit117may compare the device information of the second member device130provided by the first member device120with the device information of the second member device130provided by the second member device130itself, to determine whether the second member device130belongs to the Bluetooth device set102or not. In this embodiment, if the device information of the second member device130provided by the first member device120is identical to the device information of the second member device130provided by the second member device130itself, then the processing circuit117may determine that both the first member device120and the second member device130belong to the Bluetooth device set102. The user can know which Bluetooth devices can be paired with the Bluetooth host device110from the candidate device list displayed on the display device150. If the processing circuit117does not conduct filtering on the device items to be displayed in the candidate device list in the operation606, multiple device items respectively representing multiple member devices of the Bluetooth device set102may be shown in the candidate device list. Such a Bluetooth pairing method is likely to be too complicated (because the user has to select multiple member devices to be paired with the Bluetooth host device110one by one), and may even make it difficult for the user to find the correct pairing object. From another aspect, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation606can simplify the user's manipulation during the Bluetooth pairing procedure, and can reduce the possibility of the user's erroneous manipulation. The user may manipulate the input circuit113to select the Bluetooth device set102as the object to be paired with the Bluetooth host device110. In this situation, the input circuit113may perform the operation608to receive a selection command issued by the user, and transmit the selection command to the processing circuit117. In the operation610, the processing circuit117may establish a connection with the first member device120through the host-side communication circuit111according to the selection command, and conduct a pairing procedure to generate a first cypher key Key-1. In this situation, the first control circuit125may perform the operation612to establish a connection with the Bluetooth host device110through the first communication circuit121, and conduct the pairing procedure to generate a second cypher key Key-2corresponding to the first cypher key Key-1. Please note that in the aforementioned operation610and operation612, the Bluetooth host device110and the first member device120may adopt various appropriate approaches to conduct the Bluetooth pairing procedure, and are not restricted to follow the pairing approach adopted in the aforementioned embodiments ofFIG.2andFIG.4. In addition, the Bluetooth host device110and the first member device120may adopt various appropriate approaches to negotiate the parameters of key generation to respectively generate the first cypher key Key-1and the second cypher key Key-2, and are not restricted to follow the key generation mechanism adopted in the aforementioned embodiments ofFIG.2andFIG.4. As shown inFIG.6, the processing circuit117of this embodiment further performs the operation614after generating the first cypher key Key-1to create a correlation between the second member device130and the first cypher key Key-1.
On the other hand, the first control circuit125may further perform the operation616after generating the second cypher key Key-2to utilize the first communication circuit121to transmit a device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110) and the second cypher key Key-2to the second member device130. In this situation, the second communication circuit131of the second member device130may perform the operation618to receive the second cypher key Key-2and the device information of the Bluetooth host device110transmitted from the first member device120. Then, the processing circuit117may perform the operation620to establish a connection with the second member device130through the host-side communication circuit111, and directly use the first cypher key Key-1to conduct Bluetooth data transmissions with the second member device130. The second control circuit135may perform the operation622to establish a connection with the Bluetooth host device110through the second communication circuit131according to the device information of the Bluetooth host device110, and directly use the second cypher key Key-2to conduct Bluetooth data transmissions with the Bluetooth host device110. In practice, the first control circuit125may adopt the aforementioned approach to transmit the aforementioned second cypher key Key-2to other member devices in the Bluetooth device set102(e.g., the third member device140), so that other member devices in the Bluetooth device set102can directly use the second cypher key Key-2generated by the first member device120to conduct Bluetooth data transmissions with the Bluetooth host device110. In the embodiments where the Bluetooth host device110, the first member device120, and the second member device130support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the first member device120and the second member device130, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, this not only reduces the power consumption of the Bluetooth host device110, the first member device120, and the second member device130to thereby extend the serving time of these devices, but also effectively improves the overall quality of the audio playback operations. In the above embodiment ofFIG.6, the first member device120transmits the device information of the first member device120and the device information of the second member device130to the Bluetooth host device110in the operation602. But this is merely an exemplary embodiment, rather than a restriction to practical implementations. In practice, the first member device120may instead transmit the device information of the second member device130to the Bluetooth host device110at a different time point. For example,FIG.7shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a fourth embodiment of the present disclosure. The method ofFIG.7is similar to the method of the aforementionedFIG.6, but in the embodiment ofFIG.7, the first member device120performs the operation702instead of the operation602.
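The bookkeeping of the operation614and the relay of the operations616through622might be pictured as follows; the message layout and the table structure in this Python sketch are assumptions made purely for illustration.

    # Hypothetical sketch of operations 614-622: the host correlates the key it
    # already holds with the not-yet-connected second member, while the first
    # member relays the counterpart key and the host's address.
    from dataclasses import dataclass

    @dataclass
    class KeyRelayMessage:        # sent by the first member in operation 616
        host_address: bytes       # Bluetooth device address of the host
        cypher_key: bytes         # the second cypher key Key-2

    class HostKeyTable:
        """Operation 614: remember which cypher key to use per member address."""
        def __init__(self):
            self._keys = {}

        def correlate(self, member_address: bytes, key: bytes):
            self._keys[member_address] = key    # e.g. second member <-> Key-1

        def key_for(self, member_address: bytes) -> bytes:
            return self._keys[member_address]   # looked up in operation 620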
In the operation702, the first control circuit125utilizes the first communication circuit121to transmit a device information of the first member device120to the Bluetooth host device110, but does not transmit the device information of other member devices (e.g., the second member device130) to the Bluetooth host device110. For example, the first control circuit125may generate one or more target Bluetooth packets containing the device information of the first member device120but not containing the device information of the second member device130, and utilize the first communication circuit121to transmit the one or more target Bluetooth packets to the Bluetooth host device110. The type of the target Bluetooth packets referred to in the operation702may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here. In the operation704, the host-side communication circuit111of the Bluetooth host device110may receive the device information of the first member device120transmitted from the first member device120. In the embodiment ofFIG.7, the first control circuit125performs the operation708to utilize the first communication circuit121to transmit the device information of the second member device130(e.g., a Bluetooth device address of the second member device130) to the Bluetooth host device110after generating the second cypher key Key-2. In this situation, the host-side communication circuit111may perform the operation710to receive the device information of the second member device130transmitted from the first member device120. Then, the processing circuit117may perform the operation614ofFIG.7to create a correlation between the second member device130and the first cypher key Key-1. The operations of the Bluetooth communication system100in the other operations ofFIG.7are the same as in the corresponding operations of the aforementioned embodiment ofFIG.6. Accordingly, the aforementioned descriptions regarding the corresponding operations inFIG.6and related advantages are also applicable to the embodiment ofFIG.7. For the sake of brevity, the descriptions will not be repeated here. According to the foregoing descriptions ofFIG.6andFIG.7, it can be appreciated that only the Bluetooth host device110and the first member device120are required to respectively generate the corresponding first cypher key Key-1and second cypher key Key-2in this embodiment. Other member devices (e.g., the second member device130and the third member device140) would directly use the second cypher key Key-2generated by the first member device120to conduct subsequent Bluetooth data transmissions with the Bluetooth host device110, without generating related cypher keys by themselves. Accordingly, adopting the method ofFIG.6orFIG.7can significantly reduce the time and computing load of other member devices of the Bluetooth device set102(e.g., the second member device130and the third member device140) required for negotiating the key parameters with the Bluetooth host device110, and can also save the time and computing load they would require for generating the cypher keys.
Additionally, in the embodiments ofFIG.6andFIG.7, the Bluetooth host device110only needs to negotiate the parameters of key generation with a single member device in the Bluetooth device set102(i.e., the first member device120), and does not need to negotiate the parameters of key generation with the other member devices of the Bluetooth device set102(e.g., the second member device130and the third member device140). In other words, adopting the method ofFIG.6orFIG.7can also greatly reduce the time and computing load of the Bluetooth host device110required for negotiating the key parameters with other member devices and for generating cypher keys. Clearly, the above methods ofFIG.6andFIG.7can effectively simplify the Bluetooth pairing procedure between the Bluetooth host device110and the respective member devices of the Bluetooth device set102, thereby significantly reducing the required time for completing the pairing procedure between the Bluetooth host device110and the Bluetooth device set102. Furthermore, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation606can simplify the user's manipulation during the Bluetooth pairing procedure, and also reduce the possibility of the user's erroneous manipulation. Additionally, in the embodiments where the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the member devices of the Bluetooth device set102, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, this not only reduces the power consumption of the Bluetooth host device110and the member devices of the Bluetooth device set102to thereby extend their serving time, but also effectively improves the overall quality of the audio playback operations. Please refer toFIG.8, which shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a fifth embodiment of the present disclosure. As described previously, when the Bluetooth host device110wants to conduct pairing with the respective member devices of the Bluetooth device set102, the processing circuit117may generate a Bluetooth inquiry request containing the device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110), and may utilize the host-side communication circuit111to transmit the Bluetooth inquiry request to other nearby Bluetooth devices. Similarly, the processing circuit117may control the host-side communication circuit111to operate in the aforementioned predetermined receiving mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs. On the other hand, all member devices in the Bluetooth device set102may enter a predetermined transmitting mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of their internal programs, or may operate in the predetermined transmitting mode after receiving the Bluetooth inquiry request generated by the Bluetooth host device110. The first member device120may perform the operation602ofFIG.8after entering the predetermined transmitting mode.
The operations of the Bluetooth communication system100in the operation602through the operation612ofFIG.8are the same as in the corresponding operations of the aforementioned embodiment ofFIG.6. Accordingly, the foregoing descriptions regarding the corresponding operations inFIG.6and related advantages are also applicable to the embodiment ofFIG.8. For the sake of brevity, the descriptions will not be repeated here. As shown inFIG.8, after generating the second cypher key Key-2in the operation612, the first control circuit125of this embodiment may perform the operation802. In the operation802, the first control circuit125may execute a predetermined cypher key algorithm to generate a third cypher key Key-3and a corresponding fourth cypher key Key-4. Then, the first control circuit125may perform the operation804and the operation806. In the operation804, the first control circuit125may utilize the first communication circuit121to transmit the third cypher key Key-3to the Bluetooth host device110. In the operation806, the first control circuit125may utilize the first communication circuit121to transmit a device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110) and the fourth cypher key Key-4to the second member device130. In this situation, the Bluetooth host device110may perform the operation808and the operation810, and the second member device130may perform the operation812. In the operation808, the host-side communication circuit111may receive the third cypher key Key-3transmitted from the first member device120. In the operation810, the processing circuit117may create a correlation between the second member device130and the third cypher key Key-3. In the operation812, the second communication circuit131of the second member device130may receive the fourth cypher key Key-4and the device information of the Bluetooth host device110transmitted from the first member device120. Then, the processing circuit117may perform the operation814to establish a connection with the second member device130through the host-side communication circuit111, and directly use the third cypher key Key-3generated by the first member device120to conduct Bluetooth data transmissions with the second member device130. The second control circuit135may perform the operation816to establish a connection with the Bluetooth host device110through the second communication circuit131according to the device information of the Bluetooth host device110, and directly use the fourth cypher key Key-4generated by the first member device120to conduct Bluetooth data transmissions with the Bluetooth host device110. In practice, the first control circuit125may generate the required key pairs for conducting subsequent Bluetooth data transmissions for the Bluetooth host device110and other member devices by adopting the same approach described above. In the embodiments where the Bluetooth host device110, the first member device120, and the second member device130support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the first member device120and the second member device130, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data.
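As a hedged illustration of the operation802, the predetermined cypher key algorithm might simply draw a fresh random secret and hand one copy to each end of the link. Whether the corresponding keys Key-3and Key-4are literally identical or merely derivable from a common secret is not specified in the text, so the Python sketch below assumes the simplest reading.

    # Hypothetical version of operation 802: the first member alone produces a
    # corresponding key pair for the host <-> second-member link.
    import os

    def generate_link_key_pair():
        secret = os.urandom(16)   # stand-in for the predetermined algorithm
        key_3 = secret            # sent to the host in operation 804
        key_4 = secret            # sent to the second member in operation 806
        return key_3, key_4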
As a result, this not only reduces the power consumption of the Bluetooth host device110, the first member device120, and the second member device130to thereby extend the serving time of these devices, but also effectively improves the overall quality of the audio playback operations. In the above embodiment ofFIG.8, the first member device120transmits the device information of the first member device120and the device information of the second member device130to the Bluetooth host device110in the operation602. But this is merely an exemplary embodiment, rather than a restriction to practical implementations. In practice, the first member device120may instead transmit the device information of the second member device130to the Bluetooth host device110at a different time point. For example,FIG.9shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a sixth embodiment of the present disclosure. The method ofFIG.9is similar to the method of the aforementionedFIG.8, but in the embodiment ofFIG.9, the first member device120performs the operation702instead of the operation602. As described previously, in the operation702, the first control circuit125utilizes the first communication circuit121to transmit a device information of the first member device120to the Bluetooth host device110, but does not transmit the device information of other member devices (e.g., the second member device130) to the Bluetooth host device110. For example, the first control circuit125may generate one or more target Bluetooth packets containing the device information of the first member device120but not containing the device information of the second member device130, and utilize the first communication circuit121to transmit the one or more target Bluetooth packets to the Bluetooth host device110. The type of the target Bluetooth packets referred to in the operation702may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here. In the operation704, the host-side communication circuit111of the Bluetooth host device110may receive the device information of the first member device120transmitted from the first member device120. In the embodiment ofFIG.9, the first control circuit125performs the operation904to utilize the first communication circuit121to transmit the device information of the second member device130(e.g., a Bluetooth device address of the second member device130) and the third cypher key Key-3to the Bluetooth host device110after generating the second cypher key Key-2in the operation612. In this situation, the host-side communication circuit111may perform the operation908to receive the device information of the second member device130and the third cypher key Key-3transmitted from the first member device120. Then, the processing circuit117may perform the operation810ofFIG.9to create a correlation between the second member device130and the third cypher key Key-3. Then, the processing circuit117may perform the operation814ofFIG.9to establish a connection with the second member device130through the host-side communication circuit111, and directly use the third cypher key Key-3generated by the first member device120to conduct Bluetooth data transmissions with the second member device130.
The second control circuit135may perform the operation816ofFIG.9to establish a connection with the Bluetooth host device110through the second communication circuit131according to the device information of the Bluetooth host device110, and directly use the fourth cypher key Key-4generated by the first member device120to conduct Bluetooth data transmissions with the Bluetooth host device110. The operations of the Bluetooth communication system100in the other operations ofFIG.9are the same as in the corresponding operations of the aforementioned embodiments ofFIG.6,FIG.7, orFIG.8. Accordingly, the aforementioned descriptions regarding the corresponding operations inFIG.6,FIG.7, andFIG.8and related advantages are also applicable to the embodiment ofFIG.9. For the sake of brevity, the descriptions will not be repeated here. According to the foregoing descriptions ofFIG.8andFIG.9, it can be appreciated that only the Bluetooth host device110and the first member device120are required to respectively generate the corresponding first cypher key Key-1and second cypher key Key-2in this embodiment. However, the required cypher keys for conducting subsequent Bluetooth data transmissions between the Bluetooth host device110and other member devices in the Bluetooth device set102(e.g., the second member device130and the third member device140) are generated by the first member device120alone. Accordingly, adopting the method ofFIG.8orFIG.9can significantly reduce the time and computing load of other member devices of the Bluetooth device set102(e.g., the second member device130and the third member device140) required for negotiating the key parameters with the Bluetooth host device110, and can also save the time and computing load they would require for generating the cypher keys. Additionally, in the embodiments ofFIG.8andFIG.9, the Bluetooth host device110only needs to negotiate the parameters of key generation with a single member device in the Bluetooth device set102(i.e., the first member device120), and does not need to negotiate the parameters of key generation with the other member devices of the Bluetooth device set102(e.g., the second member device130and the third member device140). In other words, adopting the method ofFIG.8orFIG.9can also greatly reduce the time and computing load of the Bluetooth host device110required for negotiating the key parameters with other member devices and for generating cypher keys. Clearly, the above methods ofFIG.8andFIG.9can effectively simplify the Bluetooth pairing procedure between the Bluetooth host device110and the respective member devices of the Bluetooth device set102, thereby significantly reducing the required time for completing the pairing procedure between the Bluetooth host device110and the Bluetooth device set102. Furthermore, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation606can simplify the user's manipulation during the Bluetooth pairing procedure, and also reduce the possibility of the user's erroneous manipulation. Additionally, in the embodiments where the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the member devices of the Bluetooth device set102, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data.
As a result, this not only reduces the power consumption of the Bluetooth host device110and the member devices of the Bluetooth device set102to thereby extend their serving time, but also effectively improves the overall quality of the audio playback operations. Please refer toFIG.10, which shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a seventh embodiment of the present disclosure. As described previously, when the Bluetooth host device110wants to conduct pairing with the respective member devices of the Bluetooth device set102, the processing circuit117may generate a Bluetooth inquiry request containing the device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110), and may utilize the host-side communication circuit111to transmit the Bluetooth inquiry request to other nearby Bluetooth devices. Similarly, the processing circuit117may control the host-side communication circuit111to operate in the aforementioned predetermined receiving mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs. On the other hand, all member devices in the Bluetooth device set102may enter a predetermined transmitting mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of their internal programs, or may operate in the predetermined transmitting mode after receiving the Bluetooth inquiry request generated by the Bluetooth host device110. The first member device120may perform the operation602ofFIG.10after entering the predetermined transmitting mode. The operations of the Bluetooth communication system100in the operation602through the operation608ofFIG.10are the same as in the corresponding operations of the aforementioned embodiment ofFIG.6. Accordingly, the foregoing descriptions regarding the corresponding operations inFIG.6and related advantages are also applicable to the embodiment ofFIG.10. For the sake of brevity, the descriptions will not be repeated here. As shown inFIG.10, the Bluetooth host device110of this embodiment may perform the operation1010after receiving a selection command issued by the user in the operation608. In the operation1010, the processing circuit117may establish a connection with the first member device120through the host-side communication circuit111according to the selection command, and transmit a device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110) to the first member device120. In this situation, the first communication circuit121may perform the operation1012to receive the device information of the Bluetooth host device110, and may establish a connection with the Bluetooth host device110under control of the first control circuit125. In addition, the first control circuit125further generates an indication value required for conducting the Bluetooth pairing between the Bluetooth host device110and the first member device120in the operation1012. In one embodiment, the aforementioned indication value is a predetermined value, a random value, a predetermined address, a random address, a predetermined string, a random string, a predetermined token, a random token, or the like for use in a predetermined cypher key algorithm.
In another embodiment, the aforementioned indication value is an algorithm identifier corresponding to a predetermined cypher key algorithm. After generating the indication value, the first member device120may perform the operation1014, the operation1016, and the operation1018. In the operation1014, the first control circuit125may generate a second cypher key Key-2according to the indication value and a device information of the first member device120(e.g., a Bluetooth device address of the first member device120). For example, the first control circuit125may execute the aforementioned predetermined cypher key algorithm to generate the second cypher key Key-2according to the indication value and the device information of the first member device120. For another example, the first control circuit125may execute the aforementioned predetermined cypher key algorithm to generate the second cypher key Key-2according to the indication value, the device information of the first member device120, and the device information of the Bluetooth host device110. For another example, the first control circuit125may select a predetermined cypher key algorithm from a plurality of pre-agreed key algorithms according to the indication value, and execute the selected predetermined cypher key algorithm to generate the second cypher key Key-2. In the operation1016, the first control circuit125may utilize the first communication circuit121to transmit the indication value to the Bluetooth host device110. In the operation1018, the first control circuit125may utilize the first communication circuit121to transmit the device information of the Bluetooth host device110and the indication value to the second member device130. In this situation, the Bluetooth host device110may perform the operation1020and the operation1022ofFIG.10, and the second member device130may perform the operation1024ofFIG.10. In the operation1020, the host-side communication circuit111may receive the indication value. In the operation1022, the processing circuit117may generate a first cypher key Key-1according to the indication value and the device information of the first member device120. For example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm to generate the first cypher key Key-1according to the indication value and the device information of the first member device120. For another example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm to generate the first cypher key Key-1according to the indication value, the device information of the first member device120, and the device information of the Bluetooth host device110. For another example, the processing circuit117may select a predetermined cypher key algorithm from a plurality of pre-agreed key algorithms according to the indication value, and execute the selected predetermined cypher key algorithm to generate the first cypher key Key-1. In the operation1024, the second communication circuit131may receive the device information of the Bluetooth host device110and the indication value transmitted from the first member device120. 
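The derivation variants just described might look as follows in outline. In this Python sketch, SHA-256 and BLAKE2s serve only as stand-ins for the undisclosed predetermined cypher key algorithms, and the algorithm table is a hypothetical example of the pre-agreed key algorithms.

    # Hypothetical key derivation from the indication value.
    import hashlib

    def derive_key(indication: bytes, member_addr: bytes,
                   host_addr: bytes = b"") -> bytes:
        # Variants 1 and 2 in the text: with or without the host's address.
        return hashlib.sha256(indication + member_addr + host_addr).digest()[:16]

    # Variant 3: the indication value is an algorithm identifier selecting one
    # of several pre-agreed algorithms (this table is purely illustrative).
    PRE_AGREED_ALGORITHMS = {
        0x01: lambda *parts: hashlib.sha256(b"".join(parts)).digest()[:16],
        0x02: lambda *parts: hashlib.blake2s(b"".join(parts)).digest()[:16],
    }

    def derive_key_by_id(algorithm_id: int, *parts: bytes) -> bytes:
        return PRE_AGREED_ALGORITHMS[algorithm_id](*parts)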
In the operation1026, the processing circuit117may establish a connection with the second member device130through the host-side communication circuit111according to a device information of the second member device130(e.g., a Bluetooth device address of the second member device130) transmitted from the first member device120in the operation602, and generate a third cypher key Key-3according to the indication value and the device information of the second member device130. For example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm to generate the third cypher key Key-3according to the indication value and the device information of the second member device130. For another example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm to generate the third cypher key Key-3according to the indication value, the device information of the second member device130, and the device information of the Bluetooth host device110. For another example, the processing circuit117may select a predetermined cypher key algorithm from a plurality of pre-agreed key algorithms according to the indication value, and execute the selected predetermined cypher key algorithm to generate the third cypher key Key-3. In this situation, the second member device130may perform the operation1028. In the operation1028, the second control circuit135may establish a connection with the Bluetooth host device110through the second communication circuit131, and generate a fourth cypher key Key-4corresponding to the third cypher key Key-3according to the indication value and the device information of the second member device130. For example, the second control circuit135may execute the aforementioned predetermined cypher key algorithm to generate the fourth cypher key Key-4according to the indication value and the device information of the second member device130. For another example, the second control circuit135may execute the aforementioned predetermined cypher key algorithm to generate the fourth cypher key Key-4according to the indication value, the device information of the second member device130, and the device information of the Bluetooth host device110. For another example, the second control circuit135may select a predetermined cypher key algorithm from a plurality of pre-agreed key algorithms according to the indication value, and execute the selected predetermined cypher key algorithm to generate the fourth cypher key Key-4. In other words, after the indication value is generated by the first member device120, the Bluetooth host device110and the first member device120may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. Similarly, the Bluetooth host device110and the second member device130may also omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. As a result, the required time for generating the first cypher key Key-1, the second cypher key Key-2, the third cypher key Key-3, and the fourth cypher key Key-4can be significantly reduced. In the operation1030, the processing circuit117of the Bluetooth host device110may use the third cypher key Key-3to conduct Bluetooth data transmissions with the second member device130through the host-side communication circuit111.
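Because both ends feed identical inputs into the same predetermined algorithm, the keys they compute independently correspond without any negotiation. The short self-contained Python demo below repeats the hypothetical stand-in derivation from the earlier sketch to make this concrete; the addresses and indication value are fabricated for illustration.

    # Both ends compute over the same inputs, so the keys correspond without
    # negotiation (same hypothetical stand-in derivation as above).
    import hashlib

    def derive_key(indication, member_addr, host_addr=b""):
        return hashlib.sha256(indication + member_addr + host_addr).digest()[:16]

    indication, member2_addr, host_addr = b"\x07" * 4, b"\xaa" * 6, b"\xbb" * 6
    key_3 = derive_key(indication, member2_addr, host_addr)  # at the host (operation 1026)
    key_4 = derive_key(indication, member2_addr, host_addr)  # at member 2 (operation 1028)
    assert key_3 == key_4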
In the operation1032, the second control circuit135of the second member device130may use the fourth cypher key Key-4to conduct Bluetooth data transmissions with the Bluetooth host device110through the second communication circuit131. In practice, the Bluetooth host device110and other member devices in the Bluetooth device set102(e.g., the third member device140) may adopt the aforementioned approach to respectively generate the cypher keys required for conducting subsequent Bluetooth data transmissions according to the indication value generated by the first member device120. Similarly, in the embodiments where the Bluetooth host device110, the first member device120, and the second member device130support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the first member device120and the second member device130, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, this not only reduces the power consumption of the Bluetooth host device110, the first member device120, and the second member device130to thereby extend the serving time of these devices, but also effectively improves the overall quality of the audio playback operations. In the above embodiment ofFIG.10, the first member device120transmits the device information of the first member device120and the device information of the second member device130to the Bluetooth host device110in the operation602. But this is merely an exemplary embodiment, rather than a restriction to practical implementations. In practice, the first member device120may instead transmit the device information of the second member device130to the Bluetooth host device110at a different time point. For example,FIG.11shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to an eighth embodiment of the present disclosure. The method ofFIG.11is similar to the method of the aforementionedFIG.10, but in the embodiment ofFIG.11, the first member device120performs the operation702instead of the operation602. As described previously, in the operation702, the first control circuit125utilizes the first communication circuit121to transmit a device information of the first member device120to the Bluetooth host device110, but does not transmit the device information of other member devices (e.g., the second member device130) to the Bluetooth host device110. For example, the first control circuit125may generate one or more target Bluetooth packets containing the device information of the first member device120but not containing the device information of the second member device130, and utilize the first communication circuit121to transmit the one or more target Bluetooth packets to the Bluetooth host device110. The type of the target Bluetooth packets referred to in the operation702may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here. In the operation704, the host-side communication circuit111of the Bluetooth host device110may receive the device information of the first member device120transmitted from the first member device120.
In the embodiment ofFIG.11, the first control circuit125performs the operation1118to utilize the first communication circuit121to transmit the device information of the second member device130(e.g., a Bluetooth device address of the second member device130) to the Bluetooth host device110after generating the second cypher key Key-2in the operation1014. In this situation, the host-side communication circuit111may perform the operation1120to receive the device information of the second member device130transmitted from the first member device120. The operations of the Bluetooth communication system100in the other operations ofFIG.11are the same as in the corresponding operations of the aforementioned embodiments ofFIG.7andFIG.10. Accordingly, the aforementioned descriptions regarding the corresponding operations inFIG.7andFIG.10and related advantages are also applicable to the embodiment ofFIG.11. For the sake of brevity, the descriptions will not be repeated here. According to the foregoing descriptions ofFIG.10andFIG.11, it can be appreciated that after the aforementioned indication value is generated by the first member device120, the Bluetooth host device110and the first member device120may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. That is, the Bluetooth host device110is enabled to generate the first cypher key Key-1by simply performing the aforementioned operation1020and operation1022, while the first member device120is enabled to generate the second cypher key Key-2by simply performing the aforementioned operation1014. As a result, the required time for generating the first cypher key Key-1and the second cypher key Key-2can be significantly reduced. Similarly, the Bluetooth host device110and the second member device130can also omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. That is, the Bluetooth host device110is enabled to generate the third cypher key Key-3by simply performing the aforementioned operation1026, while the second member device130is enabled to generate the fourth cypher key Key-4by simply performing the aforementioned operation1028. As a result, the required time for generating the third cypher key Key-3and the fourth cypher key Key-4can be significantly reduced. Clearly, the above methods ofFIG.10andFIG.11can effectively simplify the Bluetooth pairing procedure between the Bluetooth host device110and the respective member devices of the Bluetooth device set102, thereby significantly reducing the required time for completing the pairing procedure between the Bluetooth host device110and the Bluetooth device set102. Furthermore, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation606can simplify the user's manipulation during the Bluetooth pairing procedure, and also reduce the possibility of the user's erroneous manipulation.
Additionally, in the embodiments where the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the member devices of the Bluetooth device set102, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110and the member devices of the Bluetooth device set102to thereby extend the serving time of the Bluetooth host device110and the member devices of the Bluetooth device set102, but also effectively improves the overall quality of the audio playback operations. Please refer toFIG.12, which shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a ninth embodiment of the present disclosure. As described previously, when the Bluetooth host device110wants to conduct pairing with respective member devices of the Bluetooth device set102, the processing circuit117may generate a Bluetooth inquiry request containing the device information of the Bluetooth host device110(e.g., a Bluetooth device address of the Bluetooth host device110), and may utilize the host-side communication circuit111to transmit the Bluetooth inquiry request to other nearby Bluetooth devices. Similarly, the processing circuit117may control the host-side communication circuit111to operate in the aforementioned predetermined receiving mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs. On the other hand, all member devices in the Bluetooth device set102may enter a predetermined transmitting mode at an appropriate time according to the user's manipulation, or based on the default operating instructions of the internal programs, or may operate in the predetermined transmitting mode after receiving the Bluetooth inquiry request generated by the Bluetooth host device110. The first member device120may perform the operation602ofFIG.12after entering the predetermined transmitting mode. The operations of the Bluetooth communication system100in the operation602through the operation608ofFIG.12are the same as in the corresponding operations of the aforementioned embodiment ofFIG.6. Accordingly, the foregoing descriptions regarding corresponding operations inFIG.6and related advantages are also applicable to the embodiment ofFIG.12. For the sake of brevity, the descriptions will not be repeated here. As shown inFIG.12, the Bluetooth host device110of this embodiment may perform the operation1210after receiving a selection command issued by the user in the operation608. In the operation1210, the processing circuit117may establish a connection with the first member device120through the host-side communication circuit111according to the selection command, and may decide a first parameter P1. The processing circuit117may adopt the same approach as employed in the aforementioned operation210to decide the first parameter P1. Accordingly, the foregoing descriptions regarding how to decide the first parameter P1in the operation210are also applicable to the operation1210, and will not be repeated here for the sake of brevity. 
The processing circuit117may also transmit the first parameter P1or a first field indication to the first member device120through the host-side communication circuit111in the operation1210, wherein the first field indication is utilized for indicating a specific packet field whose content is to be utilized as the first parameter P1. In this situation, the first communication circuit121of the first member device120may perform the operation1212to establish a connection with the Bluetooth host device110, and to receive the first parameter P1or a related first field indication transmitted from the Bluetooth host device110, so that the first control circuit125is enabled to learn the first parameter P1decided by the Bluetooth host device110accordingly. As shown inFIG.12, the processing circuit117then may perform the operation214to generate a first cypher key Key-1required for conducting subsequent Bluetooth data transmissions with the first member device120according to the first parameter P1. For example, the processing circuit117may execute a predetermined cypher key algorithm to generate the first cypher key Key-1according to the first parameter P1and the device information of the Bluetooth host device110. For another example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm to generate the first cypher key Key-1according to the first parameter P1, the device information of the Bluetooth host device110, and the device information of the first member device120. On the other hand, the first control circuit125may perform the operation216to generate a second cypher key Key-2required for conducting subsequent Bluetooth data transmissions with the Bluetooth host device110according to the first parameter P1. In other words, the second cypher key Key-2generated by the first control circuit125and the first cypher key Key-1generated by the processing circuit117will correspond to each other. For example, the first control circuit125may execute the aforementioned predetermined cypher key algorithm to generate the second cypher key Key-2according to the first parameter P1and the device information of the first member device120. For another example, the first control circuit125may execute the aforementioned predetermined cypher key algorithm to generate the second cypher key Key-2according to the first parameter P1, the device information of the first member device120, and the device information of the Bluetooth host device110. In other words, after the first parameter P1is decided by the Bluetooth host device110, the Bluetooth host device110and the first member device120can omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. That is, the Bluetooth host device110can directly generate the first cypher key Key-1based on the first parameter P1decided by the Bluetooth host device110, and the first member device120can directly generate the second cypher key Key-2based on the first parameter P1decided by the Bluetooth host device110. As a result, it can significantly reduce the required time for generating the first cypher key Key-1and the second cypher key Key-2. 
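The disclosure does not name the predetermined cypher key algorithm; the minimal sketch below assumes HMAC-SHA-256 and a canonical input ordering, which is enough to show why Key-1 (operation 214) and Key-2 (operation 216) correspond: both sides feed the same parameter P1 and the same device information into the same function.

    # Minimal sketch of the "predetermined cypher key algorithm". The disclosure
    # does not fix the primitive; HMAC-SHA-256 and a canonical input ordering
    # are assumptions made here purely for illustration.
    import hashlib
    import hmac

    def derive_key(parameter: bytes, host_info: bytes, member_info: bytes) -> bytes:
        return hmac.new(parameter, host_info + member_info, hashlib.sha256).digest()

    P1 = bytes.fromhex("0011223344556677")      # parameter decided by the host
    host_addr = bytes.fromhex("AABBCC001122")   # host device information
    mbr_addr = bytes.fromhex("AABBCC334455")    # first member device information

    key_1 = derive_key(P1, host_addr, mbr_addr) # host side, operation 214
    key_2 = derive_key(P1, host_addr, mbr_addr) # member side, operation 216
    assert key_1 == key_2                       # the keys correspond to each other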
Afterwards, the processing circuit117may use the first cypher key Key-1to conduct Bluetooth data transmissions with the first member device120through the host-side communication circuit111, and the first control circuit125may use the second cypher key Key-2to conduct Bluetooth data transmissions with the Bluetooth host device110through the first communication circuit121. As shown inFIG.12, the first control circuit125may further perform the operation1216to utilize the first communication circuit121to transmit the device information of the Bluetooth host device110to the second member device130. In this situation, the second communication circuit131may perform the operation1218ofFIG.12to receive the device information of the Bluetooth host device110transmitted from the first member device120. As shown inFIG.12, the Bluetooth host device110of this embodiment may further perform the operation1220. In the operation1220, the processing circuit117may establish a connection with the second member device130through the host-side communication circuit111according to a device information of the second member device130(e.g., a Bluetooth device address of the second member device130) transmitted from the first member device120in the operation602, and may decide a second parameter P2. The processing circuit117may adopt the same approach as employed in the aforementioned operation314to decide the second parameter P2. Accordingly, the foregoing descriptions regarding how to decide the second parameter P2in the operation314are also applicable to the operation1220, and will not be repeated here for the sake of brevity. The processing circuit117may also transmit the second parameter P2or a second field indication to the first member device120through the host-side communication circuit111in the operation1220, wherein the second field indication is utilized for indicating a specific packet field whose content is to be utilized as the second parameter P2. In this situation, the first communication circuit121of the first member device120may perform the operation1222to establish a connection with the Bluetooth host device110, and to receive the second parameter P2or a related second field indication transmitted from the Bluetooth host device110, so that the first control circuit125is enabled to learn the second parameter P2decided by the Bluetooth host device110accordingly. As shown inFIG.12, the processing circuit117then may perform the operation318to generate a third cypher key Key-3required for conducting subsequent Bluetooth data transmissions with the second member device130according to the second parameter P2. For example, the processing circuit117may execute a predetermined cypher key algorithm according to the second parameter P2and the device information of the Bluetooth host device110to generate the third cypher key Key-3. For another example, the processing circuit117may execute the aforementioned predetermined cypher key algorithm according to the second parameter P2, the device information of the second member device130, and the device information of the Bluetooth host device110to generate the third cypher key Key-3. On the other hand, the second control circuit135may perform the operation320to generate a fourth cypher key Key-4required for conducting subsequent Bluetooth data transmissions with the Bluetooth host device110according to the second parameter P2. 
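The first and second field indications described above point at a packet field whose content stands in for the parameter itself, which keeps the extra signaling small. The sketch below resolves such an indication against an invented packet layout; the field names and offsets are hypothetical, since the disclosure does not specify them.

    # Hypothetical resolution of a "field indication": the indication names a
    # packet field whose content is then used as the shared parameter (P1 or P2).
    # Field names and offsets are invented for illustration.
    PACKET_FIELDS = {
        "access_address": slice(0, 4),
        "crc_init": slice(4, 7),
        "channel_map": slice(7, 12),
    }

    def parameter_from_field(packet: bytes, field_indication: str) -> bytes:
        """Return the bytes of the indicated field for use as the parameter."""
        return packet[PACKET_FIELDS[field_indication]]

    packet = bytes(range(16))                       # stand-in connection packet
    p2 = parameter_from_field(packet, "crc_init")
    print(p2.hex())                                 # -> 040506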
In other words, the fourth cypher key Key-4generated by the second control circuit135and the third cypher key Key-3generated by the processing circuit117will correspond to each other. For example, the second control circuit135may execute the aforementioned predetermined cypher key algorithm according to the second parameter P2and the device information of the second member device130to generate the fourth cypher key Key-4. For another example, the second control circuit135may execute the aforementioned predetermined cypher key algorithm according to the second parameter P2, the device information of the second member device130, and the device information of the Bluetooth host device110to generate the fourth cypher key Key-4. In other words, after the second parameter P2is decided by the Bluetooth host device110, the Bluetooth host device110and the second member device130may omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. That is, the Bluetooth host device110can directly generate the third cypher key Key-3based on the second parameter P2decided by the Bluetooth host device110, while the second member device130can directly generate the fourth cypher key Key-4based on the second parameter P2decided by the Bluetooth host device110. As a result, it can significantly reduce the required time for generating the third cypher key Key-3and the fourth cypher key Key-4. Afterwards, the processing circuit117may perform the operation322ofFIG.12to use the third cypher key Key-3to conduct Bluetooth data transmissions with the second member device130through the host-side communication circuit111. On the other hand, the second control circuit135may perform the operation324ofFIG.12to use the fourth cypher key Key-4to conduct Bluetooth data transmissions with the Bluetooth host device110through the second communication circuit131. Similarly, in the embodiments where the Bluetooth host device110, the first member device120, and the second member device130support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the first member device120and the second member device130, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110, the first member device120, and the second member device130to thereby extend the serving time of the Bluetooth host device110, the first member device120, and the second member device130, but also effectively improves the overall quality of the audio playback operations. In the above embodiment ofFIG.12, the first member device120transmits the device information of the first member device120and the device information of the second member device130to the Bluetooth host device110in the operation602. But this is merely an exemplary embodiment, rather than a restriction to practical implementations. In practice, the first member device120may instead transmit the device information of the second member device130to the Bluetooth host device110at another different time point. For example,FIG.13shows a simplified flowchart of a method for generating cypher keys required for Bluetooth data transmission according to a tenth embodiment of the present disclosure.
The method ofFIG.13is similar to the aforementioned method ofFIG.12, but in the embodiment ofFIG.13, the first member device120performs the operation702instead of the operation602. As described previously, in the operation702, the first control circuit125utilizes the first communication circuit121to transmit a device information of the first member device120to the Bluetooth host device110, but does not transmit the device information of other member devices (e.g., the second member device130) to the Bluetooth host device110. For example, the first control circuit125may generate one or more target Bluetooth packets containing the device information of the first member device120but not containing the device information of the second member device130, and utilize the first communication circuit121to transmit the one or more target Bluetooth packets to the Bluetooth host device110. The type of the target Bluetooth packets referred to in the operation702may be the same as the type of the target Bluetooth packets referred to in the aforementioned operation202. For the sake of brevity, the descriptions will not be repeated here. In the operation704, the host-side communication circuit111of the Bluetooth host device110may receive the device information of the first member device120transmitted from the first member device120. In the embodiment ofFIG.13, the first control circuit125performs the operation1118to utilize the first communication circuit121to transmit the device information of the second member device130(e.g., a Bluetooth device address of the second member device130) to the Bluetooth host device110after generating the second cypher key Key-2in the operation216. In this situation, the host-side communication circuit111may perform the operation1120to receive the device information of the second member device130transmitted from the first member device120. The operations of the Bluetooth communication system100in the other operations ofFIG.13are the same as in the corresponding operations of the aforementioned embodiments ofFIG.2,FIG.3,FIG.6,FIG.7, orFIG.12. Accordingly, the aforementioned descriptions regarding corresponding operations inFIG.2,FIG.3,FIG.6,FIG.7,FIG.12, and related advantages are also applicable to the embodiment ofFIG.13. For the sake of brevity, the descriptions will not be repeated here. According to the foregoing descriptions ofFIG.12andFIG.13, it can be appreciated that after the first parameter P1is decided by the Bluetooth host device110, the Bluetooth host device110and the first member device120can omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding first cypher key Key-1and second cypher key Key-2. As a result, it can significantly reduce the required time for generating the first cypher key Key-1and the second cypher key Key-2. Similarly, after the second parameter P2is decided by the Bluetooth host device110, the Bluetooth host device110and the second member device130can also omit many traditional key parameter negotiation steps, and instead adopt a highly simplified approach to generate the corresponding third cypher key Key-3and fourth cypher key Key-4. As a result, it can significantly reduce the required time for generating the third cypher key Key-3and the fourth cypher key Key-4.
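Once corresponding keys are in place on both sides, either endpoint can recover data protected by the other. The toy round trip below illustrates this with a SHA-256-based XOR keystream solely to stay within the Python standard library; actual Bluetooth links protect traffic with AES-CCM, and this sketch is not a substitute for it.

    # Toy demonstration that corresponding keys let both ends exchange protected
    # data. Real Bluetooth links use AES-CCM; the SHA-256 XOR keystream below is
    # an illustration-only stand-in and must not be used in practice.
    import hashlib

    def xor_stream(key: bytes, nonce: bytes, data: bytes) -> bytes:
        stream = b""
        counter = 0
        while len(stream) < len(data):
            block = hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
            stream += block
            counter += 1
        return bytes(d ^ s for d, s in zip(data, stream))

    # Stand-in for Key-3/Key-4, which correspond after operations 318 and 320.
    key_3 = key_4 = hashlib.sha256(b"derived-from-P2").digest()
    nonce = bytes(4)
    ciphertext = xor_stream(key_3, nonce, b"audio frame")   # host protects data
    print(xor_stream(key_4, nonce, ciphertext))             # member recovers it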
Apparently, the above method ofFIG.12andFIG.13can effectively simplify the Bluetooth pairing procedure between the Bluetooth host device110and the respective member devices of the Bluetooth device set102, thereby significantly reducing the required time for completing the pairing procedure between the Bluetooth host device110and the Bluetooth device set102. Furthermore, the operation of filtering device items to be shown in the candidate device list conducted by the processing circuit117in the aforementioned operation606can reduce the complexity of the user's manipulation during the Bluetooth pairing procedure, and also reduce the possibility of the user's erroneous manipulation. Additionally, in the embodiments where the Bluetooth host device110and the member devices in the Bluetooth device set102support the BLE Audio technology, the Bluetooth host device110may adopt the BLE Audio technology to transmit audio data to the member devices of the Bluetooth device set102, and the Bluetooth host device110can utilize the Low Complexity Communication Codec (LC3) to encode the audio data. As a result, it not only reduces the power consumption of the Bluetooth host device110and the member devices of the Bluetooth device set102to thereby extend the serving time of the Bluetooth host device110and the member devices of the Bluetooth device set102, but also effectively improves the overall quality of the audio playback operations. Please note that the aforementioned execution order of the operations in each flowchart is merely an exemplary embodiment, rather than a restriction to the practical implementations of the present disclosure. For example, inFIG.2, the operation214may be performed at the same time as the operation210, or may be performed before transmitting the first privileged pairing notice, the first parameter P1, and/or a first field indication related to the first parameter P1. For another example, inFIG.3andFIG.5, the operation306and the operation308may be performed before the operation302, or may be performed at the same time as the operation302. For another example, inFIG.3andFIG.5, the operation310and the operation304may be performed in a reverse order, or may be performed at the same time. For another example, inFIG.3andFIG.5, the operation318may be performed at the same time as the operation314, or may be performed before transmitting the second privileged pairing notice, the second parameter P2, and/or a second field indication related to the second parameter P2. For another example, inFIG.4, the operation408may be performed at the same time as the operation410or the operation412, or may be performed between the operation410and the operation412, or may be performed between the operation412and the operation210. For another example, inFIG.7, the operation708and the operation616may be performed in a reverse order, or may be performed at the same time. For another example, inFIG.8, the operation806and the operation804may be performed in a reverse order, or may be performed at the same time. For another example, inFIG.9, the operation806and the operation904may be performed in a reverse order, or may be performed at the same time. For another example, inFIG.10andFIG.11, the operation1018and the operation1016may be performed in a reverse order, or may be performed at the same time.
For another example, inFIG.11, the operation1118may be performed at the same time as the operation1016or the operation1018, or may be performed between the operation1016and the operation1018, or may be performed between the operation1014and the operation1016. For another example, inFIG.12andFIG.13, the operation1216and the operation216may be performed in a reverse order, or may be performed at the same time. For another example, inFIG.12andFIG.13, the operation214may be performed at the same time as the operation1210, or may be performed before transmitting the first parameter P1or a first field indication related to the first parameter P1. For another example, inFIG.13, the operation1118and the operation1216may be performed in a reverse order, or may be performed at the same time. For another example, inFIG.12andFIG.13, the operation318may be performed at the same time as the operation1220, or may be performed before transmitting the second parameter P2or a second field indication related to the second parameter P2. In addition, the quantity of functional blocks in the Bluetooth communication system100and the connection among the functional blocks may be modified based on the actual circuit design requirements, and are not restricted to the cases illustrated in the aforementioned embodiments. For example, in some embodiments where the Bluetooth device set102does not need to receive the user's voice or ambient sounds, the first voice receiving circuit164, the second voice receiving circuit174, and/or the third voice receiving circuit184may be omitted. For another example, in some embodiments where the Bluetooth device set102does not need to play back audio data, the first audio playback circuit162, the second audio playback circuit172, and/or the third audio playback circuit182may be omitted. For another example, the number of member devices in the Bluetooth device set102may be expanded to a larger number, or the Bluetooth device set102may be simplified to contain only the first member device120and the second member device130. Certain terms are used throughout the description and the claims to refer to particular components. One skilled in the art appreciates that a component may be referred to by different names. This disclosure does not intend to distinguish between components that differ in name but not in function. In the description and in the claims, the term “comprise” is used in an open-ended fashion, and thus should be interpreted to mean “include, but not limited to.” The term “couple” is intended to encompass any indirect or direct connection. Accordingly, if this disclosure mentions that a first device is coupled with a second device, it means that the first device may be directly or indirectly connected to the second device through electrical connections, wireless communications, optical communications, or other signal connections with/without other intermediate devices or connection means. The term “and/or” may comprise any and all combinations of one or more of the associated listed items. In addition, the singular forms “a,” “an,” and “the” herein are intended to comprise the plural forms as well, unless the context clearly indicates otherwise. Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein.
It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention indicated by the following claims.
149,731
11943610
DETAILED DESCRIPTION Various aspects of the disclosure are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. One skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure disclosed herein, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. Several aspects of telecommunication systems will now be presented with reference to various apparatuses and techniques. These apparatuses and techniques will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, or the like (collectively referred to as “elements”). These elements may be implemented using hardware, software, or combinations thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. While aspects may be described herein using terminology commonly associated with a 5G or New Radio (NR) radio access technology (RAT), aspects of the present disclosure can be applied to other RATs, such as a 3G RAT, a 4G RAT, and/or a RAT subsequent to 5G (e.g., 6G). FIG.1is a diagram illustrating an example of a wireless network100, in accordance with the present disclosure. The wireless network100may be or may include elements of a 5G (e.g., NR) network and/or a 4G (e.g., Long Term Evolution (LTE)) network, among other examples. The wireless network100may include one or more base stations110(shown as a BS110a, a BS110b, a BS110c, and a BS110d), a user equipment (UE)120or multiple UEs120(shown as a UE120a, a UE120b, a UE120c, a UE120d, and a UE120e), and/or other network entities. A base station110is an entity that communicates with UEs120. A base station110(sometimes referred to as a BS) may include, for example, an NR base station, an LTE base station, a Node B, an eNB (e.g., in 4G), a gNB (e.g., in 5G), an access point, and/or a transmission reception point (TRP). Each base station110may provide communication coverage for a particular geographic area. In the Third Generation Partnership Project (3GPP), the term “cell” can refer to a coverage area of a base station110and/or a base station subsystem serving this coverage area, depending on the context in which the term is used. A base station110may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or another type of cell. 
A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs120with service subscriptions. A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs120with service subscriptions. A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs120having association with the femto cell (e.g., UEs120in a closed subscriber group (CSG)). A base station110for a macro cell may be referred to as a macro base station. A base station110for a pico cell may be referred to as a pico base station. A base station110for a femto cell may be referred to as a femto base station or an in-home base station. In the example shown inFIG.1, the BS110amay be a macro base station for a macro cell102a, the BS110bmay be a pico base station for a pico cell102b, and the BS110cmay be a femto base station for a femto cell102c. A base station may support one or multiple (e.g., three) cells. In some examples, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a base station110that is mobile (e.g., a mobile base station). In some examples, the base stations110may be interconnected to one another and/or to one or more other base stations110or network nodes (not shown) in the wireless network100through various types of backhaul interfaces, such as a direct physical connection or a virtual network, using any suitable transport network. The wireless network100may include one or more relay stations. A relay station is an entity that can receive a transmission of data from an upstream station (e.g., a base station110or a UE120) and send a transmission of the data to a downstream station (e.g., a UE120or a base station110). A relay station may be a UE120that can relay transmissions for other UEs120. In the example shown inFIG.1, the BS110d(e.g., a relay base station) may communicate with the BS110a(e.g., a macro base station) and the UE120din order to facilitate communication between the BS110aand the UE120d. A base station110that relays communications may be referred to as a relay station, a relay base station, a relay, or the like. The wireless network100may be a heterogeneous network that includes base stations110of different types, such as macro base stations, pico base stations, femto base stations, relay base stations, or the like. These different types of base stations110may have different transmit power levels, different coverage areas, and/or different impacts on interference in the wireless network100. For example, macro base stations may have a high transmit power level (e.g., 5 to 40 watts) whereas pico base stations, femto base stations, and relay base stations may have lower transmit power levels (e.g., 0.1 to 2 watts). A network controller130may couple to or communicate with a set of base stations110and may provide coordination and control for these base stations110. The network controller130may communicate with the base stations110via a backhaul communication link. The base stations110may communicate with one another directly or indirectly via a wireless or wireline backhaul communication link. The UEs120may be dispersed throughout the wireless network100, and each UE120may be stationary or mobile. A UE120may include, for example, an access terminal, a terminal, a mobile station, and/or a subscriber unit.
A UE120may be a cellular phone (e.g., a smart phone), a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet, a camera, a gaming device, a netbook, a smartbook, an ultrabook, a medical device, a biometric device, a wearable device (e.g., a smart watch, smart clothing, smart glasses, a smart wristband, smart jewelry (e.g., a smart ring or a smart bracelet)), an entertainment device (e.g., a music device, a video device, and/or a satellite radio), a vehicular component or sensor, a smart meter/sensor, industrial manufacturing equipment, a global positioning system device, and/or any other suitable device that is configured to communicate via a wireless medium. Some UEs120may be considered machine-type communication (MTC) or evolved or enhanced machine-type communication (eMTC) UEs. An MTC UE and/or an eMTC UE may include, for example, a robot, a drone, a remote device, a sensor, a meter, a monitor, and/or a location tag that may communicate with a base station, another device (e.g., a remote device), or some other entity. Some UEs120may be considered Internet-of-Things (IoT) devices, and/or may be implemented as NB-IoT (narrowband IoT) devices. Some UEs120may be considered Customer Premises Equipment. A UE120may be included inside a housing that houses components of the UE120, such as processor components and/or memory components. In some examples, the processor components and the memory components may be coupled together. For example, the processor components (e.g., one or more processors) and the memory components (e.g., a memory) may be operatively coupled, communicatively coupled, electronically coupled, and/or electrically coupled. In general, any number of wireless networks100may be deployed in a given geographic area. Each wireless network100may support a particular RAT and may operate on one or more frequencies. A RAT may be referred to as a radio technology, an air interface, or the like. A frequency may be referred to as a carrier, a frequency channel, or the like. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, NR or 5G RAT networks may be deployed. In some examples, two or more UEs120(e.g., shown as UE120aand UE120e) may communicate directly using one or more sidelink channels (e.g., without using a base station110as an intermediary to communicate with one another). For example, the UEs120may communicate using peer-to-peer (P2P) communications, device-to-device (D2D) communications, a vehicle-to-everything (V2X) protocol (e.g., which may include a vehicle-to-vehicle (V2V) protocol, a vehicle-to-infrastructure (V2I) protocol, or a vehicle-to-pedestrian (V2P) protocol), and/or a mesh network. In such examples, a UE120may perform scheduling operations, resource selection operations, and/or other operations described elsewhere herein as being performed by the base station110. Devices of the wireless network100may communicate using the electromagnetic spectrum, which may be subdivided by frequency or wavelength into various classes, bands, channels, or the like. For example, devices of the wireless network100may communicate using one or more operating bands. In 5G NR, two initial operating bands have been identified as frequency range designations FR1 (410 MHz-7.125 GHz) and FR2 (24.25 GHz-52.6 GHz).
It should be understood that although a portion of FR1 is greater than 6 GHz, FR1 is often referred to (interchangeably) as a “Sub-6 GHz” band in various documents and articles. A similar nomenclature issue sometimes occurs with regard to FR2, which is often referred to (interchangeably) as a “millimeter wave” band in documents and articles, despite being different from the extremely high frequency (EHF) band (30 GHz-300 GHz) which is identified by the International Telecommunications Union (ITU) as a “millimeter wave” band. The frequencies between FR1 and FR2 are often referred to as mid-band frequencies. Recent 5G NR studies have identified an operating band for these mid-band frequencies as frequency range designation FR3 (7.125 GHz-24.25 GHz). Frequency bands falling within FR3 may inherit FR1 characteristics and/or FR2 characteristics, and thus may effectively extend features of FR1 and/or FR2 into mid-band frequencies. In addition, higher frequency bands are currently being explored to extend 5G NR operation beyond 52.6 GHz. For example, three higher operating bands have been identified as frequency range designations FR4a or FR4-1 (52.6 GHz-71 GHz), FR4 (52.6 GHz-114.25 GHz), and FR5 (114.25 GHz-300 GHz). Each of these higher frequency bands falls within the EHF band. With the above examples in mind, unless specifically stated otherwise, it should be understood that the term “sub-6 GHz” or the like, if used herein, may broadly represent frequencies that may be less than 6 GHz, may be within FR1, or may include mid-band frequencies. Further, unless specifically stated otherwise, it should be understood that the term “millimeter wave” or the like, if used herein, may broadly represent frequencies that may include mid-band frequencies, may be within FR2, FR4, FR4-a or FR4-1, and/or FR5, or may be within the EHF band. It is contemplated that the frequencies included in these operating bands (e.g., FR1, FR2, FR3, FR4, FR4-a, FR4-1, and/or FR5) may be modified, and techniques described herein are applicable to those modified frequency ranges. In some aspects, the UE120may include a communication manager140. As described in more detail elsewhere herein, the communication manager140may receive a first signal associated with a downlink control channel, wherein the first signal uses a modulation signature that identifies a first security key associated with a reconfigurable intelligent surface (RIS)160, and wherein the first signal includes a second security key; and receive a second signal associated with a downlink shared channel, wherein the second signal includes a third security key, and wherein the second signal is authenticated by the UE based at least in part on the first security key, the second security key, and the third security key. Additionally, or alternatively, the communication manager140may perform one or more other operations described herein. In some aspects, the base station110may include a communication manager150. 
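The numbered frequency ranges above can be captured in a small lookup for sanity-checking which designation a carrier falls in, as in the Python sketch below; boundary handling at shared endpoints is simplified, and the deliberate overlap between FR4 and FR4-1 means a frequency can carry more than one designation.

    # The frequency-range designations quoted above, as a simple lookup table.
    # Boundary handling at shared endpoints is a simplification.
    BANDS_GHZ = [
        ("FR1", 0.410, 7.125),
        ("FR3", 7.125, 24.25),
        ("FR2", 24.25, 52.6),
        ("FR4-1", 52.6, 71.0),
        ("FR4", 52.6, 114.25),
        ("FR5", 114.25, 300.0),
    ]

    def designations(freq_ghz: float):
        """Return every designation whose range contains the given frequency."""
        return [name for name, lo, hi in BANDS_GHZ if lo <= freq_ghz < hi]

    print(designations(28.0))    # ['FR2']          - a millimeter wave carrier
    print(designations(60.0))    # ['FR4-1', 'FR4'] - overlapping designations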
As described in more detail elsewhere herein, the communication manager150may transmit, to the RIS160, an indication of a first security key associated with the RIS, wherein the first security key is to be added, using a modulation signature, to signals reflected by the RIS; transmit a first signal that is associated with a downlink control channel, wherein the first signal includes a second security key, and wherein the first signal is to be reflected by the RIS; and transmit a second signal that is associated with a downlink shared channel, wherein the second signal includes a third security key, and wherein the first security key, the second security key, and the third security key enable a UE120to authenticate the second signal. Additionally, or alternatively, the communication manager150may perform one or more other operations described herein. As shown inFIG.1, the wireless network100may include an RIS160. The RIS160may include one or more reconfigurable elements capable of redirecting or reflecting signals transmitted by a base station110or a UE120. In some aspects, the RIS160may include a communication manager170. As described in more detail elsewhere herein, the communication manager170may receive, from a base station, an indication of a first security key associated with the RIS, wherein the first security key is to be added, using a modulation signature, to signals reflected by the RIS; receive, from the base station, a first signal associated with a downlink control channel, wherein the first signal includes a second security key; redirect, to a UE120, the first signal by including the modulation signature that identifies the first security key in the first signal; and redirect, to the UE, a second signal, wherein the second signal includes a third security key, and wherein the first security key, the second security key, and the third security key enable the UE to authenticate the second signal. Additionally, or alternatively, the communication manager170may perform one or more other operations described herein. As indicated above,FIG.1is provided as an example. Other examples may differ from what is described with regard toFIG.1. FIG.2is a diagram illustrating an example200of a base station110in communication with a UE120in a wireless network100, in accordance with the present disclosure. The base station110may be equipped with a set of antennas234athrough234t, such as T antennas (T≥1). The UE120may be equipped with a set of antennas252athrough252r, such as R antennas (R≥1). At the base station110, a transmit processor220may receive data, from a data source212, intended for the UE120(or a set of UEs120). The transmit processor220may select one or more modulation and coding schemes (MCSs) for the UE120based at least in part on one or more channel quality indicators (CQIs) received from that UE120. The base station110may process (e.g., encode and modulate) the data for the UE120based at least in part on the MCS(s) selected for the UE120and may provide data symbols for the UE120. The transmit processor220may process system information (e.g., for semi-static resource partitioning information (SRPI)) and control information (e.g., CQI requests, grants, and/or upper layer signaling) and provide overhead symbols and control symbols.
The transmit processor220may generate reference symbols for reference signals (e.g., a cell-specific reference signal (CRS) or a demodulation reference signal (DMRS)) and synchronization signals (e.g., a primary synchronization signal (PSS) or a secondary synchronization signal (SSS)). A transmit (TX) multiple-input multiple-output (MIMO) processor230may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, the overhead symbols, and/or the reference symbols, if applicable, and may provide a set of output symbol streams (e.g., T output symbol streams) to a corresponding set of modems232(e.g., T modems), shown as modems232athrough232t. For example, each output symbol stream may be provided to a modulator component (shown as MOD) of a modem232. Each modem232may use a respective modulator component to process a respective output symbol stream (e.g., for OFDM) to obtain an output sample stream. Each modem232may further use a respective modulator component to process (e.g., convert to analog, amplify, filter, and/or upconvert) the output sample stream to obtain a downlink signal. The modems232athrough232tmay transmit a set of downlink signals (e.g., T downlink signals) via a corresponding set of antennas234(e.g., T antennas), shown as antennas234athrough234t. At the UE120, a set of antennas252(shown as antennas252athrough252r) may receive the downlink signals from the base station110and/or other base stations110and may provide a set of received signals (e.g., R received signals) to a set of modems254(e.g., R modems), shown as modems254athrough254r. For example, each received signal may be provided to a demodulator component (shown as DEMOD) of a modem254. Each modem254may use a respective demodulator component to condition (e.g., filter, amplify, downconvert, and/or digitize) a received signal to obtain input samples. Each modem254may use a demodulator component to further process the input samples (e.g., for OFDM) to obtain received symbols. A MIMO detector256may obtain received symbols from the modems254, may perform MIMO detection on the received symbols if applicable, and may provide detected symbols. A receive processor258may process (e.g., demodulate and decode) the detected symbols, may provide decoded data for the UE120to a data sink260, and may provide decoded control information and system information to a controller/processor280. The term “controller/processor” may refer to one or more controllers, one or more processors, or a combination thereof. A channel processor may determine a reference signal received power (RSRP) parameter, a received signal strength indicator (RSSI) parameter, a reference signal received quality (RSRQ) parameter, and/or a CQI parameter, among other examples. In some examples, one or more components of the UE120may be included in a housing284. The network controller130may include a communication unit294, a controller/processor290, and a memory292. The network controller130may include, for example, one or more devices in a core network. The network controller130may communicate with the base station110via the communication unit294. One or more antennas (e.g., antennas234athrough234tand/or antennas252athrough252r) may include, or may be included within, one or more antenna panels, one or more antenna groups, one or more sets of antenna elements, and/or one or more antenna arrays, among other examples. 
An antenna panel, an antenna group, a set of antenna elements, and/or an antenna array may include one or more antenna elements (within a single housing or multiple housings), a set of coplanar antenna elements, a set of non-coplanar antenna elements, and/or one or more antenna elements coupled to one or more transmission and/or reception components, such as one or more components ofFIG.2. On the uplink, at the UE120, a transmit processor264may receive and process data from a data source262and control information (e.g., for reports that include RSRP, RSSI, RSRQ, and/or CQI) from the controller/processor280. The transmit processor264may generate reference symbols for one or more reference signals. The symbols from the transmit processor264may be precoded by a TX MIMO processor266if applicable, further processed by the modems254(e.g., for DFT-s-OFDM or CP-OFDM), and transmitted to the base station110. In some examples, the modem254of the UE120may include a modulator and a demodulator. In some examples, the UE120includes a transceiver. The transceiver may include any combination of the antenna(s)252, the modem(s)254, the MIMO detector256, the receive processor258, the transmit processor264, and/or the TX MIMO processor266. The transceiver may be used by a processor (e.g., the controller/processor280) and the memory282to perform aspects of any of the methods described herein (e.g., with reference toFIGS.5-13). At the base station110, the uplink signals from UE120and/or other UEs may be received by the antennas234, processed by the modem232(e.g., a demodulator component, shown as DEMOD, of the modem232), detected by a MIMO detector236if applicable, and further processed by a receive processor238to obtain decoded data and control information sent by the UE120. The receive processor238may provide the decoded data to a data sink239and provide the decoded control information to the controller/processor240. The base station110may include a communication unit244and may communicate with the network controller130via the communication unit244. The base station110may include a scheduler246to schedule one or more UEs120for downlink and/or uplink communications. In some examples, the modem232of the base station110may include a modulator and a demodulator. In some examples, the base station110includes a transceiver. The transceiver may include any combination of the antenna(s)234, the modem(s)232, the MIMO detector236, the receive processor238, the transmit processor220, and/or the TX MIMO processor230. The transceiver may be used by a processor (e.g., the controller/processor240) and the memory242to perform aspects of any of the methods described herein (e.g., with reference toFIGS.5-13). The controller/processor240of the base station110, the controller/processor280of the UE120, and/or any other component(s) ofFIG.2may perform one or more techniques associated with security enhancements with an RIS, as described in more detail elsewhere herein. For example, the controller/processor240of the base station110, the controller/processor280of the UE120, and/or any other component(s) ofFIG.2may perform or direct operations of, for example, process700ofFIG.7, process800ofFIG.8, process900ofFIG.9, process1000ofFIG.10, and/or other processes as described herein. The memory242and the memory282may store data and program codes for the base station110and the UE120, respectively. 
In some examples, the memory242and/or the memory282may include a non-transitory computer-readable medium storing one or more instructions (e.g., code and/or program code) for wireless communication. For example, the one or more instructions, when executed (e.g., directly, or after compiling, converting, and/or interpreting) by one or more processors of the base station110and/or the UE120, may cause the one or more processors, the UE120, and/or the base station110to perform or direct operations of, for example, process700ofFIG.7, process800ofFIG.8, process900ofFIG.9, process1000ofFIG.10, and/or other processes as described herein. In some examples, executing instructions may include running the instructions, converting the instructions, compiling the instructions, and/or interpreting the instructions, among other examples. In some aspects, the UE120includes means for receiving a first signal associated with a downlink control channel, wherein the first signal uses a modulation signature that identifies a first security key associated with an RIS, and wherein the first signal includes a second security key; and/or means for receiving a second signal associated with a downlink shared channel, wherein the second signal includes a third security key, and wherein the second signal is authenticated by the UE based at least in part on the first security key, the second security key, and the third security key. The means for the UE120to perform operations described herein may include, for example, one or more of communication manager140, antenna252, modem254, MIMO detector256, receive processor258, transmit processor264, TX MIMO processor266, controller/processor280, or memory282. In some aspects, the base station110includes means for transmitting, to an RIS, an indication of a first security key associated with the RIS, wherein the first security key is to be added, using a modulation signature, to signals reflected by the RIS; means for transmitting a first signal that is associated with a downlink control channel, wherein the first signal includes a second security key, and wherein the first signal is to be reflected by the RIS; and/or means for transmitting a second signal that is associated with a downlink shared channel, wherein the second signal includes a third security key, and wherein the first security key, the second security key, and the third security key enable a UE to authenticate the second signal. The means for the base station110to perform operations described herein may include, for example, one or more of communication manager150, transmit processor220, TX MIMO processor230, modem232, antenna234, MIMO detector236, receive processor238, controller/processor240, memory242, or scheduler246. 
In some aspects, the RIS160includes means for receiving, from a base station, an indication of a first security key associated with the RIS, wherein the first security key is to be added, using a modulation signature, to signals reflected by the RIS; means for receiving, from the base station, a first signal associated with a downlink control channel, wherein the first signal includes a second security key; means for redirecting, to a UE, the first signal by including the modulation signature that identifies the first security key in the first signal; and/or means for redirecting, to the UE, a second signal, wherein the second signal includes a third security key, and wherein the first security key, the second security key, and the third security key enable the UE to authenticate the second signal. In some aspects, the means for the RIS160to perform operations described herein may include, for example, one or more of communication manager170, a transmit processor, a TX MIMO processor, a modem, an antenna, a MIMO detector, a receive processor, a controller/processor, and/or a memory. While blocks inFIG.2are illustrated as distinct components, the functions described above with respect to the blocks may be implemented in a single hardware, software, or combination component or in various combinations of components. For example, the functions described with respect to the transmit processor264, the receive processor258, and/or the TX MIMO processor266may be performed by or under the control of the controller/processor280. As indicated above,FIG.2is provided as an example. Other examples may differ from what is described with regard toFIG.2. FIG.3is a diagram illustrating an example300of communications using an RIS, in accordance with the present disclosure. As shown inFIG.3, a base station110may communicate with a UE120in a wireless network, such as the wireless network100. The base station110and the UE120may use an RIS305to communicate with one another. For example, the RIS305may reflect or redirect a signal to the base station110and/or the UE120. The RIS305may also be referred to as an intelligent reflecting surface. In some examples, the RIS305may be a repeater. The RIS305may be, or may include, a planar or two-dimensional structure or surface that is designed to have properties to enable a dynamic control of signals or electromagnetic waves reflected and/or redirected by the RIS305. The RIS305may include one or more reconfigurable elements. For example, the RIS305may include an array of reconfigurable elements (e.g., an array of uniformly distributed reconfigurable elements). The reconfigurable elements may be elements with a reconfigurable electromagnetic characteristic. For example, the electromagnetic characteristic may include a reflection characteristic (e.g., a reflection coefficient), a scattering characteristic, an absorption characteristic, and/or a diffraction characteristic. The electromagnetic characteristic(s) of each reconfigurable element may be independently controlled and changed over time. The electromagnetic characteristic(s) of each reconfigurable element may be independently configured such that the combination of configured states of the reconfigurable elements reflects an incident signal or waveform in a controlled manner. 
For example, the reconfigurable elements may be configured to reflect or redirect an impinging signal in a controlled manner, such as by reflecting the impinging signal in a desired direction, with a desired beam width, with a desired phase, with a desired amplitude, and/or with a desired polarization, among other examples. In other words, the RIS305may be capable of modifying one or more properties (e.g., direction, beam width, phase, amplitude, and/or polarization) of an impinging signal. The reconfigurable elements of the RIS305may be controlled and/or configured by an RIS controller310. The RIS controller310may be a control module (e.g., a controller and/or a processor) that is capable of configuring the electromagnetic characteristic(s) of each reconfigurable element of the RIS305. The RIS controller310may be, or may be included in, the communication manager170. Alternatively, the communication manager170may be included in the RIS controller310. The RIS controller310may receive control communications (e.g., from a base station110and/or a UE120) indicating one or more properties of reflected signals (e.g., indicating a desired direction, a desired beam width, a desired phase, a desired amplitude, and/or a desired polarization). Therefore, in some examples, the RIS305may be capable of receiving communications (e.g., via the RIS305and/or the RIS controller310). In some examples, the RIS305and/or the RIS controller310may not have transmit capabilities (e.g., the RIS305may be capable of reflecting and/or redirecting impinging signals via the reconfigurable elements, but may not be capable of generating and/or transmitting signals). Alternatively, in some examples, the RIS305and/or the RIS controller310may have transmit capabilities (e.g., the RIS305may be capable of reflecting and/or redirecting impinging signals via the reconfigurable elements and may be capable of generating and/or transmitting signals). For example, the RIS305and/or the RIS controller310may include one or more antennas and/or antenna elements for receiving and/or transmitting signals. For example, as shown inFIG.3, the base station110may transmit a signal315. The signal315may be transmitted in a spatial direction toward the RIS305. The RIS305may configure the reconfigurable elements of the RIS305to reflect and/or redirect the signal315in a desired spatial direction and/or with one or more desired signal characteristics (e.g., beam width, phase, amplitude, frequency, and/or polarization). For example, as shown by reference number320, the RIS305may be capable of reflecting the signal315in one or more spatial directions. Although multiple beams are shown inFIG.3representing different beam states or beam directions of the RIS305, the RIS305may be capable of reflecting a signal with one beam state or one beam direction at a time. For example, in one case, as shown by reference number325, the RIS305may be configured to reflect the signal315using a first beam state (e.g., beam state 1). “Beam state” may refer to a spatial direction and/or a beam of a reflected signal (e.g., a signal reflected by the RIS305). The first beam state may cause the signal315to be reflected in a spatial direction toward a first UE120(e.g., UE 1). As shown by reference number330, in another case, the RIS305may be configured to reflect the signal315using a second beam state (e.g., beam state 2). The second beam state may cause the signal315to be reflected in a spatial direction toward a second UE120(e.g., UE 2). 
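Conceptually, steering a reflected beam reduces to programming a phase gradient across the reconfigurable elements. The sketch below computes per-element reflection phases for a uniform linear array with half-wavelength spacing; the array geometry and the continuous-phase assumption are simplifications not taken from the disclosure.

    # Conceptual sketch: per-element reflection phases that make a uniform
    # linear array of reconfigurable elements redirect a plane wave arriving
    # from theta_in toward theta_out. Geometry and continuous phase control
    # are simplifying assumptions for illustration.
    import math

    def ris_phase_profile(n_elements, spacing_m, wavelength_m,
                          theta_in_deg, theta_out_deg):
        k = 2 * math.pi / wavelength_m          # free-space wavenumber
        step = k * spacing_m * (math.sin(math.radians(theta_out_deg))
                                - math.sin(math.radians(theta_in_deg)))
        # A linear phase gradient makes the element reflections add coherently
        # in the desired outgoing direction.
        return [(n * step) % (2 * math.pi) for n in range(n_elements)]

    wavelength = 3e8 / 28e9                     # 28 GHz carrier, ~10.7 mm
    phases = ris_phase_profile(16, wavelength / 2, wavelength, 0.0, 30.0)
    print([round(p, 2) for p in phases])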
The RIS305may be deployed in a wireless network (such as the wireless network100) to improve communication performance and efficiency. For example, the RIS305may enable a transmitter (e.g., a base station110or a UE120) to control the scattering, reflection, and refraction characteristics of signals transmitted by the transmitter, to overcome the negative effects of wireless propagation. For example, the RIS305may effectively control signal characteristics (e.g., spatial direction, beam width, phase, amplitude, frequency, and/or polarization) of an impinging signal without a need for complex decoding, encoding, and radio frequency processing operations. Therefore, the RIS305may provide increased channel diversity for propagation of signals in a wireless network. The increased channel diversity provides robustness to channel fading and/or blocking, such as when higher frequencies are used by the base station110and/or the UE120(e.g., millimeter wave frequencies and/or sub-terahertz frequencies). Moreover, as the RIS305does not need to perform complex decoding, encoding, and radio frequency processing operations, the RIS305may provide a more cost- and energy-efficient manner of reflecting and/or redirecting signals in a wireless network (e.g., as compared to other mechanisms for reflecting and/or redirecting signals, such as a relay device). As indicated above,FIG.3is provided as an example. Other examples may differ from what is described with respect toFIG.3. FIG.4is a diagram illustrating an example400of communication links in a wireless network that includes an RIS, in accordance with the present disclosure. As shown, example400includes a base station110, a UE120, and the RIS305. The RIS305may be controlled and/or configured by the RIS controller310. As shown inFIG.4, the UE120may receive a communication (e.g., data and/or control information) directly from the base station110as a downlink communication. Additionally, or alternatively, the UE120may receive a communication (e.g., data and/or control information) indirectly from the base station110via the RIS305. For example, the base station110may transmit the communication in a spatial direction toward the RIS305, and the RIS305may redirect or reflect the communication to the UE120. In some examples, the UE120may communicate directly with the base station110via a direct link405. For example, a communication may be transmitted via the direct link405. A communication transmitted via the direct link405between the UE120and the base station110does not pass through and is not reflected or redirected by the RIS305. In some examples, the UE120may communicate indirectly with the base station110via an indirect link410. For example, a communication may be transmitted via different segments of the indirect link410. A communication transmitted via the indirect link410between the UE120and the base station110is reflected and/or redirected by the RIS305. As shown inFIG.4and by reference number415, the base station110may communicate with the RIS305(e.g., with the RIS controller310) via a control channel. For example, the base station110may indicate, in an RIS control message, spatial direction(s) and/or signal characteristics for signals reflected by the RIS305. The RIS controller310may configure reconfigurable elements of the RIS305in accordance with the RIS control message.
In some examples, the RIS control message may indicate information associated with the wireless network, such as a frame structure, time synchronization information, and/or slot boundaries, among other examples. Using the communication scheme shown inFIG.4may improve network performance and increase reliability by providing the UE120with link diversity for communicating with the base station110. In some cases, the UE120may receive a communication (e.g., the same communication) from the base station110via both the direct link405and the indirect link410. In other cases, the base station110may select one of the links (e.g., either the direct link405or the indirect link410), and may transmit a communication to the UE120using only the selected link. Alternatively, the base station110may receive an indication of one of the links (e.g., either the direct link405or the indirect link410), and may transmit a communication to the UE120using only the indicated link. The indication may be transmitted by the UE120and/or the RIS305. In some examples, such selection and/or indication may be based at least in part on channel conditions and/or link reliability. However, channel characteristics of the direct link405and the indirect link410may be different. For example, the direct link405and the indirect link410may be distinguishable in the spatial domain and/or the time domain. Additionally, or alternatively, the direct link405and the indirect link410may be associated with different Doppler characteristics (e.g., Doppler spread and/or Doppler shift). Therefore, the direct link405and the indirect link410may need to be separately maintained. For example, separate beam management (e.g., separate beam acquisition and/or beam tracking) may need to be performed for the direct link405and the indirect link410. As another example, transmit and/or receive processing of signals associated with the direct link405and the indirect link410may be different due to different path delays and/or Doppler characteristics, and/or due to separate time and/or frequency synchronizations of the direct link405and the indirect link410. Moreover, transmit power allocation for the direct link405and the indirect link410may be different due to different fading conditions of the direct link405and the indirect link410. As a result, the direct link405and the indirect link410may be maintained simultaneously, but may need to be treated separately (e.g., by the base station110and/or the UE120). As indicated above,FIG.4is provided as an example. Other examples may differ from what is described with respect toFIG.4. Wireless communication systems may use a variety of RATs, such as the Global System for Mobile Communications (GSM), UMTS, LTE, and NR. Typically, RATs can be configured to provide security functionality such as ciphering and integrity protection, which may be applied to both a control plane (e.g., radio resource control (RRC) signaling through a signaling radio bearer) and a user plane (e.g., a data radio bearer) in a packet data convergence protocol (PDCP) layer. Various radio access technologies may also provide access control through authentication (e.g., via Access Security Management Entity keys or another suitable system).
However, some scheduled communications are not typically protected, such as medium access control (MAC) signaling (e.g., MAC control element (MAC-CE) signaling), broadcast information (e.g., system information block (SIB) signals), and paging information, and/or downlink communication channels (e.g., the physical downlink control channel (PDCCH) and the physical downlink shared channel (PDSCH)), among other examples. MAC signaling, broadcast information (e.g., SIB signals), and paging information are typically not protected by security functionalities because speed of communication (e.g., minimizing transfer delay) is judged more important than security for these signals. However, the signals provided in the PDCCH and PDSCH may include control information and content data (e.g., voice, and/or content for user services), and thus security may be more important for these signals. Malicious intruders or jammers may hinder or hijack the unprotected signals by fabricating a transmission with the same format (e.g., an appropriate PDCCH or PDSCH format). Without security protection, a wireless device intended to receive the PDCCH or PDSCH signals may be unable to distinguish between true and fabricated transmissions. Some techniques and apparatuses described herein enable security enhancements using an RIS. For example, an RIS may use a modulation signature (e.g., watermarking) to insert a signature or a security key that can be used (e.g., by a UE) to authenticate a message that has been reflected or redirected by the RIS. For example, the RIS may be configured (e.g., by a base station) with a first security key. The RIS may insert the first security key into a signal using modulation (e.g., phase modulation, amplitude modulation, and/or other types of modulation). A UE may receive a signal redirected by the RIS (e.g., that has been modulated with a modulation signature that identifies the first security key). The UE may decode the signal to obtain the first security key. The UE may receive a signal transmitted on a downlink control channel (e.g., a PDCCH). The signal associated with the PDCCH may indicate a second security key. In some aspects, a PDCCH signal may include the second security key in a payload of the PDCCH signal. The PDCCH signal may be modulated by an RIS using a modulation signature, such that the UE may obtain the first security key based on the modulation signature and may obtain the second security key based on the payload of the PDCCH signal. The UE may receive a signal transmitted on a downlink shared channel (e.g., a PDSCH). The PDSCH signal may include a third security key (e.g., in a payload of the PDSCH signal). The PDSCH signal may be redirected to the UE by the RIS. The UE may authenticate the PDSCH signal based on the first security key, the second security key, and the third security key. For example, the UE may use an authentication function that uses the first security key and the second security key as inputs. The UE may compare an output of the authentication function to the third security key. If the output of the authentication function and the third security key match (e.g., are the same), then the UE120may determine that the PDSCH signal is authentic. If the output of the authentication function and the third security key do not match (e.g., are not the same), then the UE120may determine that the PDSCH signal is not authentic (e.g., and may block, or not permit further communication with, a device associated with the PDSCH signal). 
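The three-key check just described can be summarized in a short sketch. The disclosure leaves the authentication function open; an HMAC-SHA-256 construction is assumed here purely for illustration, and all names are hypothetical:

```python
import hmac
import hashlib

def fa(s1: bytes, s3: bytes) -> bytes:
    # Assumed instantiation of the authentication function fa(S1, S3):
    # HMAC keyed with the RIS-inserted key S3 over the PDCCH-carried key S1.
    return hmac.new(s3, s1, hashlib.sha256).digest()

def authenticate_pdsch(s1: bytes, s3: bytes, s2_received: bytes) -> bool:
    # UE-side check: the PDSCH signal is authentic iff S2 == fa(S1, S3).
    return hmac.compare_digest(fa(s1, s3), s2_received)

# Example flow: the base station derives S2 so that the UE's check succeeds.
s3 = b"key-inserted-by-ris"            # first key, via the modulation signature
s1 = b"key-in-pdcch-payload"           # second key, in the PDCCH payload
s2 = fa(s1, s3)                        # third key, carried in the PDSCH payload
assert authenticate_pdsch(s1, s3, s2)  # authentic; a fabricated S2 would fail
```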
As a result, a security associated with signals redirected by an RIS may be improved. For example, the UE may be enabled to identify fake and/or fabricated transmissions associated with the RIS and may be enabled to block or not receive further communications from the device that transmitted the fake and/or fabricated transmissions. Some techniques and apparatuses described herein enable improved security for MAC signaling, broadcast signaling, and/or paging signaling associated with an RIS. FIG.5is a diagram illustrating an example500associated with security enhancements associated with an RIS, in accordance with the present disclosure. As shown inFIG.5, a base station110and a UE120may communicate with one another in a wireless network, such as the wireless network100. As shown inFIG.5, in some aspects, the UE120and the base station110may communicate via an RIS502. The RIS502may be similar to the RIS305and/or the RIS160described elsewhere herein. As shown by reference number504, the base station110may transmit (e.g., using controller/processor240, transmit processor220, TX MIMO processor230, MOD232, antenna234, and/or memory242), and the UE120(e.g., using antenna252, DEMOD254, MIMO detector256, receive processor258, controller/processor280, and/or memory282) and/or the RIS502may receive, configuration information. In some aspects, the UE120may receive configuration information from another device (e.g., from another base station or another UE). In some aspects, the UE120may receive the configuration information via system information signaling, RRC signaling and/or MAC signaling (e.g., MAC-CEs). In some aspects, the configuration information may include an indication of one or more configuration parameters (e.g., already known to the UE120) for selection by the UE120and/or explicit configuration information for the UE120to use to configure itself. In some aspects, the configuration information may indicate that an indirect link between the base station110, the RIS502, and the UE120is to be established and/or maintained. In some aspects, the configuration information may indicate a security enhancement, using the RIS502, for signaling between the base station110and the UE120(e.g., using the security keys described herein). The security enhancement may be enabled and/or activated via the configuration information. In some aspects, the security enhancement may be enabled and/or activated in a different message (e.g., based at least in part on a report provided by the UE120, as described in more detail elsewhere herein). As used herein, “security enhancement” may refer to authenticating PDSCH messages using a security key that is added to a signal by an RIS using a modulation signature, as described in more detail elsewhere herein. In some aspects, the security enhancement may be a physical layer security enhancement. In some aspects, the configuration information may indicate that the UE120is to transmit an RIS report to the base station110(e.g., associated with the RIS502and/or other RISs deployed in the wireless network). The RIS report may be a report for a link associated with the RIS502(and/or other RISs deployed in the wireless network). The report may indicate an identifier associated with the RIS502and/or one or more measurements of one or more signals transmitted via the link with the RIS502, among other examples. 
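The contents of such an RIS report might be represented as follows; this is a sketch only, and the field names and units are assumptions rather than anything specified by the disclosure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RisReport:
    """Hypothetical RIS report; field names are illustrative only."""
    ris_identifier: int                          # identifier associated with the RIS
    rsrq_db: Optional[float] = None              # measured link quality, if available
    rsrp_dbm: Optional[float] = None             # measured signal strength, if available
    enhancement_suitable: Optional[bool] = None  # UE's own threshold verdict, if reported
```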
The RIS report may enable the base station110to determine whether the security enhancement described herein is to be activated for the UE120, as described in more detail elsewhere herein. In some aspects, the configuration information may indicate one or more threshold values and/or conditions associated with using the security enhancement described herein. For example, the configuration information may indicate one or more threshold values associated with a link quality of the RIS link associated with the RIS502. If a measured quality (e.g., a measured RSRQ or other link quality parameter) of the RIS link satisfies the one or more threshold values, then the UE120and/or the base station110may determine that the security enhancement may be used for the RIS link. In some aspects, the configuration information may indicate an authentication function associated with the security enhancement. The authentication function may be a function that enables the UE120to obtain an authentication key from one or more security keys. The authentication key may be compared to a security key included in a PDSCH message to authenticate the PDSCH message, as described in more detail elsewhere herein. In some other aspects, the authentication function may be pre-configured on the UE120(e.g., without receiving any signaling indicating the authentication function). In some aspects, the configuration information may indicate a modulation signature associated with the RIS502. “Modulation signature” may refer to a pattern or sequence of modulation added to a signal that is reflected or redirected by the RIS502. The modulation signature may also be referred to as an RIS watermark. For example, the modulation signature may be a phase modulation signature, a polarization modulation signature, and/or an amplitude modulation signature, among other examples. In some aspects, the configuration information may indicate a beam state or a beam direction of the RIS502that is associated with the modulation signature (e.g., multiple modulation signatures may be indicated for multiple beam states and/or beam directions of the RIS502). In some aspects, the configuration information may indicate a pattern or sequence associated with the modulation signature. In some aspects, the configuration information may indicate that the RIS502is to modulate a signal reflected by the RIS, in accordance with the modulation signature, at symbol boundaries and/or in symbols that contain a reference signal (e.g., a DMRS, a phase tracking reference signal (PTRS), and/or a polarization detection reference signal). In some aspects, the configuration information may configure the reference signal that is to be associated with the signal to be reflected by the RIS502. For example, if the modulation signature is a phase modulation signature, then the configuration information may configure DMRSs and/or PTRSs to be transmitted with the signal. Similarly, if the modulation signature is a polarization modulation signature, then the configuration information may configure polarization detection reference signals and/or other reference signals to be transmitted with the signal. The reference signals may enable the UE120to identify and/or detect the modulation of the signal. In some aspects, the modulation signature may identify a security key, as described in more detail elsewhere herein. Additionally, or alternatively, the modulation signature may identify an identifier of the RIS502. 
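Gathering the configuration items above, the configuration information might be modeled as follows. This is again a sketch under stated assumptions: the enum values, the per-beam-state signature map, and the reference-signal list are illustrative, not prescribed:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List

class SignatureType(Enum):
    PHASE = "phase"
    POLARIZATION = "polarization"
    AMPLITUDE = "amplitude"

@dataclass
class ModulationSignature:
    kind: SignatureType
    pattern: List[int]  # pattern/sequence applied at symbol boundaries

@dataclass
class SecurityEnhancementConfig:
    link_quality_threshold_db: float  # e.g., an RSRQ threshold for the RIS link
    # A signature may be configured per RIS beam state or beam direction.
    signatures_by_beam_state: Dict[int, ModulationSignature] = field(default_factory=dict)
    # Reference signals that let the UE detect the modulation (e.g., DMRS/PTRS).
    reference_signals: List[str] = field(default_factory=lambda: ["DMRS", "PTRS"])
```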
As shown by reference number506, the UE120may configure (e.g., using controller/processor280and/or memory282) the UE120for communicating with the base station110and/or with the RIS502. In some aspects, the UE120may configure the UE120based at least in part on the configuration information. In some aspects, the UE120may be configured to perform one or more operations described herein. As shown by reference number508, the RIS502(and/or an RIS controller of the RIS502) may configure the RIS502for communicating with the base station110and/or the UE120. In some aspects, the RIS502(and/or an RIS controller of the RIS502) may configure the RIS502based at least in part on the configuration information. In some aspects, the RIS502may be configured to perform one or more operations described herein. In some aspects, the UE120may transmit (e.g., using controller/processor280, transmit processor264, TX MIMO processor266, MOD254, antenna252, and/or memory282), and the base station110may receive (e.g., using antenna234, DEMOD232, MIMO detector236, receive processor238, controller/processor240, and/or memory242), a capability message indicating whether the UE120supports the security enhancement described herein. For example, the UE120may transmit, and the base station110may receive, a message indicating whether the UE120is capable of authenticating PDSCH messages using a security key that is indicated by a signal via a modulation signature added by an RIS. In some aspects, the configuration information may be based at least in part on the capability message (e.g., the base station110may configure the UE120to use the security enhancement only if the UE120indicates that the UE120supports the security enhancement). The UE120may transmit the capability message via RRC signaling and/or physical uplink control channel (PUCCH) signaling, among other examples. As shown by reference number510, the base station110may transmit (e.g., using controller/processor240, transmit processor220, TX MIMO processor230, MOD232, antenna234, and/or memory242) one or more signals. The one or more signals may be transmitted on a beam associated with, and/or in a spatial direction toward, the RIS502. The one or more signals may be reference signals. In some aspects, the one or more signals may be an RIS reference signal. The RIS reference signal may be associated with measuring link parameters (e.g., link quality, signal strength, and/or other parameters) of an RIS link (e.g., an indirect link associated with an RIS). The one or more signals may be used by the UE120to measure link parameters and/or to identify the RIS link. As shown by reference number512, the RIS502may reflect and/or redirect the one or more signals toward the UE120(e.g., using a beam associated with the UE120and/or in a spatial direction toward the UE120). The RIS502may modulate the signal (e.g., the impinging signal that arrives at the RIS502) using a modulation signature that identifies an identifier associated with the RIS502. For example, the RIS502may modulate the signal in phase (e.g., for a phase modulation signature), may modulate a polarization of the signal (e.g., for a polarization modulation signature), and/or may modulate an amplitude of the signal (e.g., for an amplitude modulation signature). For example, for a phase modulation signature and/or a polarization modulation signature, the RIS502may modulate the signal in symbols of the signal that include a reference signal (e.g., a DMRS, a PTRS, and/or a polarization detection reference signal). 
As another example, for a polarization modulation signature, the RIS502may modulate a polarization state of the signal from a first polarization state of the signal as transmitted by the base station110to a second polarization state of the signal. A polarization state may include an angle of polarization or a polarization mode. For an amplitude modulation signature, the RIS502may modulate the amplitude of the signal by attenuating the amplitude of the signal in accordance with a pattern (e.g., the amplitude modulation signature) that identifies the RIS502. The RIS502may modulate the amplitude of the signal by puncturing the signal at one or more symbols of the signal, and/or by modulating a spatial direction of the signal. As shown by reference number514, the UE120may receive (e.g., using antenna252, DEMOD254, MIMO detector256, receive processor258, controller/processor280, and/or memory282) a signal (e.g., a modulated signal) that is redirected or reflected by the RIS502. The signal may be modulated by the RIS502using the modulation signature, as described in more detail elsewhere herein. The UE120may demodulate and/or decode the signal (e.g., the modulated signal) to identify that the signal was transmitted via a link that includes the RIS502. For example, the UE120may detect phase changes, polarization changes, and/or amplitude changes in the signal. The UE120may detect that the phase changes, polarization changes, and/or amplitude changes vary in a pattern or sequence that corresponds to the modulation signature associated with the RIS502. Therefore, the UE120may identify that the signal was reflected and/or redirected by the RIS502. In some aspects, the UE120may decode the one or more signals based on decoding information provided by the base station110(e.g., via the configuration information). For example, a decoding method may be indicated to the UE120by the base station110. In some other aspects, the decoding method may be defined (e.g., such that no signaling is required). In some aspects, the UE120may measure (e.g., using antenna252, DEMOD254, MIMO detector256, receive processor258, controller/processor280, and/or memory282) the one or more signals (e.g., using an RIS reference signal). The UE120may measure one or more link parameters such as a link quality (e.g., an RSRQ), a signal strength (e.g., an RSRP), a signal-to-noise ratio (SNR), and/or other link parameters. In some other aspects, the UE120may not measure the one or more signals. The UE120may identify an RIS identifier (e.g., indicated by the modulation signature) associated with the one or more signals. As shown by reference number516, the UE120may transmit (e.g., using controller/processor280, transmit processor264, TX MIMO processor266, MOD254, antenna252, and/or memory282), and the base station110may receive (e.g., using antenna234, DEMOD232, MIMO detector236, receive processor238, controller/processor240, and/or memory242), a report. The report may be an RIS report. In some aspects, the report may indicate an identifier associated with the RIS502and/or one or more measurements of one or more signals transmitted via the link with the RIS502, among other examples. In some aspects, the report may indicate whether the security enhancement described herein is to be activated. 
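Returning to the detection step at reference number514: for a phase modulation signature, identifying the RIS reduces to estimating the per-symbol phase rotation at reference-signal positions and matching it against the configured pattern. A minimal numpy sketch, assuming a phase-only signature and ideal channel knowledge for simplicity:

```python
import numpy as np

def detect_phase_signature(rx_symbols, ref_symbols, signature_pattern, tol=0.1):
    """Return True if the per-symbol phase rotations match the RIS signature.

    rx_symbols: received complex symbols at reference-signal positions
    ref_symbols: known transmitted reference symbols (e.g., a DMRS)
    signature_pattern: expected phase shifts in radians (the RIS signature)
    """
    # Estimate the extra rotation applied to each reference symbol, then
    # compare it to the expected pattern modulo 2*pi.
    estimated = np.angle(np.asarray(rx_symbols) / np.asarray(ref_symbols))
    diff = np.angle(np.exp(1j * (estimated - np.asarray(signature_pattern))))
    return bool(np.all(np.abs(diff) < tol))

# Toy example: an RIS applies the pattern [0, pi/2, pi, pi/2] to DMRS symbols.
pattern = np.array([0.0, np.pi / 2, np.pi, np.pi / 2])
dmrs = np.ones(4, dtype=complex)
assert detect_phase_signature(dmrs * np.exp(1j * pattern), dmrs, pattern)
```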
In some aspects, the UE120may determine whether the measurement(s) of the one or more signals satisfy a threshold (e.g., a threshold indicated by the configuration information or a pre-defined threshold, such as a threshold defined, or otherwise fixed, by a wireless communication standard). If the measurement(s) satisfy the threshold, then the UE120may determine that the security enhancement described herein is to be activated. If the measurement(s) do not satisfy the threshold, then the UE120may determine that the security enhancement described herein is not to be activated. The report may enable the base station110to determine (e.g., using controller/processor240and/or memory242) whether the security enhancement described herein should be activated and/or applied. For example, the report may indicate whether the RIS link (e.g., associated with the RIS502) has a suitable link quality to support the security enhancement. For example, if the security enhancement were to be used when the link quality is poor (e.g., does not satisfy a threshold), then the UE120may be unable to obtain or receive one or more security keys for the security enhancement. As a result, the UE120may be unable to authenticate PDSCH messages and/or may incorrectly determine that a PDSCH message is not authenticated. Therefore, enabling the security enhancement based at least in part on the link quality of the RIS link may ensure that the UE120is able to properly apply the security enhancement and/or authenticate PDSCH messages, as described in more detail elsewhere herein. For example, in some cases, the UE120may receive reflected signals from multiple RISs (e.g., with different identifiers), which may accumulate at the UE120and may be indistinguishable from each other. As another example, when a line-of-sight (LoS) path is a dominant path (e.g., is associated with a highest link parameter), the UE120may be unable to identify an RIS signature and/or a modulation signature. As a result, the UE120may be unable to receive and/or decode a modulation signature applied by the RIS502in some scenarios. The report transmitted by the UE120may enable the base station110to identify whether one of the scenarios (e.g., that prevents or reduces the ability of the UE120to receive and/or decode a modulation signature applied by the RIS502) is currently present. As shown by reference number518, the base station110may determine (e.g., using controller/processor240and/or memory242) whether the security enhancement is to be used. The base station110may determine whether the security enhancement is to be used based at least in part on the report (e.g., the RIS report) transmitted by the UE120. For example, the report may indicate one or more RIS identifiers. The base station110may measure and/or identify a measurement of an RIS link associated with an RIS identifier indicated by the report. For example, the base station110may measure and/or identify a measurement of an RIS link associated with the RIS502. The base station110may determine whether the measurement of the RIS link (e.g., a measurement of a link quality of the RIS link) satisfies a threshold. If the measurement satisfies the threshold, then the base station110may determine that the security enhancement is to be used. If the measurement does not satisfy the threshold, then the base station110may determine that the security enhancement should not be used.
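The activation logic on either side thus reduces to a threshold comparison, qualified by the scenarios just noted (accumulated reflections from multiple RISs, or a dominant LoS path). A sketch with assumed names and units:

```python
def security_enhancement_usable(measured_rsrq_db: float,
                                threshold_db: float,
                                los_is_dominant: bool,
                                multiple_ris_overlap: bool) -> bool:
    # The enhancement is only useful when the RIS link is good enough for the
    # UE to recover the modulation signature, and when the signature is
    # actually distinguishable at the UE.
    if los_is_dominant or multiple_ris_overlap:
        return False
    return measured_rsrq_db >= threshold_db

# Example decision, as might be made from an RIS report.
print(security_enhancement_usable(-9.5, threshold_db=-12.0,
                                  los_is_dominant=False,
                                  multiple_ris_overlap=False))  # True
```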
In some aspects, as described above, the UE120may determine whether the RIS link is suitable for the security enhancement. The UE120may indicate, in the report, whether the RIS link is suitable for the security enhancement (e.g., as described in more detail elsewhere herein). In such examples, the base station110may determine whether the security enhancement is to be used based at least in part on the indication in the report from the UE120. As shown by reference number520, the base station110may transmit (e.g., using controller/processor240, transmit processor220, TX MIMO processor230, MOD232, antenna234, and/or memory242), and the RIS502may receive (e.g., the RIS controller of the RIS502may receive) an indication of a first security key associated with the RIS502. The first security key may be referred to herein as “S3.” The first security key may be a security key that is to be added, using a modulation signature, to signals reflected or redirected by the RIS502. As used herein, “security key” may refer to a unique key or code. For example, a security key may include a random sequence of numbers and/or letters, a hash key, an encryption key, an access security management entity (ASME) key, a sequence of bits, a specific waveform, and/or a PDCCH DMRS sequence, among other examples. The base station110may configure the RIS502with the first security key based at least in part on determining that the security enhancement is to be used, as described in more detail elsewhere herein. For example, the base station110may determine that the security enhancement is to be used based at least in part on the report transmitted by the UE120. The base station110may transmit, to the RIS502, an indication of the first security key to cause the RIS502to insert the first security key into one or more signals using a modulation signature. In some aspects, the first security key may be based at least in part on an identifier associated with the RIS502. For example, the base station110may determine and/or generate the first security key using the identifier associated with the RIS502. In some aspects, the base station110may indicate one or more beams and/or a spatial direction for which the RIS502is to insert the first security key. For example, the one or more beams and/or the spatial direction may be associated with (e.g., may be toward) the UE120. The base station110may indicate that the RIS502is to insert the first security key (e.g., using modulation) into signals reflected and/or redirected in the direction of the one or more beams and/or the spatial direction associated with the UE120. In other words, the RIS502may be configured to insert the first security key for signals associated with some beams and/or spatial directions and may be configured to not insert the first security key for signals associated with other beams and/or spatial directions. As shown by reference number522, the base station110may transmit (e.g., using controller/processor240, transmit processor220, TX MIMO processor230, MOD232, antenna234, and/or memory242) a control message (e.g., a signal associated with a downlink control channel), such as a PDCCH message (e.g., a message transmitted via the PDCCH). The base station110may transmit the control message using a beam associated with, or in a spatial direction toward, the RIS502. The control message may include a second security key. For example, the second security key may be included in a payload of the control message. 
The second security key may be referred to herein as “S1.” The base station110may determine the second security key. In some aspects, the base station110may transmit the control message to the UE120using a direct link (e.g., without using the RIS502). In some other aspects, the control message may be redirected and/or reflected toward the UE120by the RIS502. The control message may be a message that schedules one or more PDSCH messages. In some aspects, as shown by reference number524, the RIS502may modulate the signal (e.g., the signal associated with the control message) using a modulation signature to insert the first security key. For example, the RIS502may insert the first security key into a PDCCH signal. For example, the signal may be reflected or redirected by the RIS502to the UE120. The RIS502may modulate the signal (e.g., the impinging signal that arrives at the RIS502) using the modulation signature. The RIS502may modulate the signal based at least in part on the signal being associated with a beam and/or spatial direction toward the UE120. For example, the RIS502may be configured (e.g., by the base station110) to modulate signals (e.g., PDCCH signals and/or other signals) that are to be redirected and/or reflected toward the UE120(e.g., to insert the first security key and to enable the security enhancement described herein). For example, the RIS502may modulate a phase of the signal (e.g., for a phase modulation signature), may modulate a polarization of the signal (e.g., for a polarization modulation signature), and/or may modulate an amplitude of the signal (e.g., for an amplitude modulation signature). For example, for a phase modulation signature and/or a polarization modulation signature, the RIS502may modulate the signal in symbols of the signal that include a reference signal (e.g., a DMRS, a PTRS, and/or a polarization detection reference signal). For a polarization modulation signature, the RIS502may modulate a polarization state of the signal from a first polarization state of the signal as transmitted by the base station110to a second polarization state of the signal. For an amplitude modulation signature, the RIS502may modulate the amplitude of the signal by attenuating the amplitude of the signal in accordance with a pattern (e.g., the amplitude modulation signature) that identifies the first security key. The RIS502may modulate the amplitude of the signal by puncturing the signal at one or more symbols of the signal, and/or by modulating a spatial direction of the signal. In some aspects, the RIS502may use a modulation signature that identifies the first security key and identifies the RIS502. In some aspects, the signal modulated by the RIS502using the modulation signature may be a signal associated with the control message (e.g., that includes the second security key in the payload of the control message) and/or may be a signal associated with another PDCCH message. As shown by reference number526, the UE120may receive (e.g., using antenna252, DEMOD254, MIMO detector256, receive processor258, controller/processor280, and/or memory282) the control message that includes the second security key. In some aspects, the signal of the control message may be modulated using the modulation signature that identifies the first security key. In some aspects, the control message that includes the second security key may not be modulated by the RIS502and/or may be transmitted via a direct link between the UE120and the base station110. 
In such examples, the UE120may receive another signal (e.g., another PDCCH signal) that is modulated using the modulation signature. The UE120may receive the control message and/or the other signal (e.g., using antenna252, DEMOD254, MIMO detector256, receive processor258, controller/processor280, and/or memory282) to obtain the first security key and the second security key. For example, the UE120may decode the control message to identify the second security key in the payload of the control message. Similarly, the UE120may decode the signal modulated by the RIS502to identify the modulation signature. The UE120may determine the first security key based at least in part on the modulation signature (e.g., the modulation signature may include a pattern or sequence that identifies the first security key). The first security key and the second security key may be used when decoding data scheduled by the control message. For example, the data may be broadcast information or other signaling. The UE120may use the first security key and the second security key to decode and authenticate the data, as described in more detail elsewhere herein. As shown by reference number528, the base station110may transmit (e.g., using controller/processor240, transmit processor220, TX MIMO processor230, MOD232, antenna234, and/or memory242) another signal (e.g., a second signal) that includes a third security key. The third security key may be referred to herein as "S2." The signal that includes the third security key may be a data signal and/or a PDSCH signal. For example, the signal that includes the third security key may be associated with broadcast signaling, SIB signaling, MAC signaling, and/or paging signaling, among other examples. In some aspects, the base station110may determine the third security key. For example, the base station110may determine the third security key based at least in part on the first security key and the second security key. In some aspects, the base station110may determine the third security key to be the output of an authentication function when the first security key and the second security key are provided as inputs to the authentication function. In some aspects, the second security key and/or the third security key may be based at least in part on a fourth security key. The fourth security key may be referred to herein as "S0." The fourth security key may be a security key associated with another type of signaling. For example, the fourth security key may be an access security key. The fourth security key may be associated with UE-specific data security (e.g., ciphering and integrity protection) and/or MAC signaling security, among other examples. For example, the fourth security key may be established and/or generated as part of a connection establishment procedure between the UE120and the base station110. For example, the base station110may transmit, and the UE120may receive, an indication of the fourth security key (e.g., as part of the connection establishment procedure). In some aspects, the second security key (e.g., the security key included in the PDCCH signal or the control signal) may be derived from S0 and a random security key (e.g., may be derived from the key established between the UE120and the base station110and a random security key). In some other aspects, the second security key may be a random security key and may not be based at least in part on S0.
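Where the second security key is tied to S0, its derivation might look as follows. The HMAC-based construction and labels are assumptions for illustration; the disclosure does not specify a particular derivation:

```python
import hmac
import hashlib
import secrets

def derive_second_key(s0: bytes) -> bytes:
    # S1 derived from the access key S0 (established at connection setup)
    # combined with a fresh random value, so S1 varies across configurations.
    random_part = secrets.token_bytes(16)
    return hmac.new(s0, b"second-security-key" + random_part, hashlib.sha256).digest()

# Alternatively, per the last sentence above, S1 may simply be random:
s1_random = secrets.token_bytes(32)
```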
In some aspects, the third security key (e.g., the security key included in the PDSCH signal) may be derived as a function of S0 and/or S1. In some aspects, at least one of the second security key or the third security key may be based at least in part on S0 (e.g., at least one of the second security key or the third security key may be a function of S0). In some aspects, as shown by reference number530, the RIS502may reflect and/or redirect the signal (e.g., the PDSCH signal and/or the signal that includes the third security key in its payload) toward the UE120. For example, the RIS502may redirect the signal using a beam and/or spatial direction associated with the UE120. In some aspects, the RIS502may modulate the signal to insert the modulation signature (e.g., in a similar manner as described above). In some other aspects, the RIS502may not modulate the signal (e.g., the PDSCH signal and/or the signal that includes the third security key in its payload). In some aspects, the signal may be transmitted via a direct link between the UE120and the base station110(e.g., via a link that does not include the RIS502). As shown by reference number532, the UE120may authenticate the signal based at least in part on the first security key, the second security key, and the third security key (e.g., using controller/processor280and/or memory282). For example, the UE120may receive and decode the signal to obtain the third security key. The UE120may generate, using an authentication function, an authentication key based at least in part on the first security key and the second security key. In some aspects, the authentication function may be or may include a one-way function, a key derivation function, a secure hash function, or another suitable function. In some aspects, the authentication function may include a process, algorithm, mathematical transform, or another operation or series of operations, which may be provisioned in the UE120(e.g., by the base station110). In some aspects, the base station110may statically provision the UE120with the authentication function. In some other aspects, the base station110may provision the UE120with the authentication function dynamically. For example, the base station110may change or provide a new authentication function to the UE120from time to time. The UE120may authenticate the signal based at least in part on whether the authentication key matches the third security key. For example, if the authentication key matches the third security key, then the UE120may determine that the signal (e.g., that includes the third security key in the payload) is authentic. If the authentication key does not match the third security key, then the UE120may determine that the signal (e.g., that includes the third security key in the payload) is fake and/or fabricated (e.g., not authentic). For example, the UE120may perform a matching operation to determine whether the authentication key matches the third security key. For example, the matching operation may be expressed as S2=ƒa(S1, S3), where ƒa( ) is the authentication function. In some aspects, such as where at least one of the second security key or the third security key is based at least in part on S0, the matching operation may be expressed as S2=ƒa(S0, S1, S3). If the UE120determines that the signal is authentic (e.g., based at least in part on the matching operation), then the UE120may enable communications with the base station110.
If the UE120determines that the signal is not authentic (e.g., based at least in part on the matching operation), then the UE120may disable or block communications with the base station110. This may improve a security associated with the PDSCH signaling because the UE120is enabled to determine when signals transmitted via the PDSCH are authentic. As a result, a security associated with signals redirected by the RIS502may be improved. For example, the UE120may be enabled to identify fake and/or fabricated transmissions associated with the RIS502and may be enabled to block or not receive further communications from the device that transmitted the fake and/or fabricated transmissions. Some techniques and apparatus described herein enable improved security for MAC signaling, broadcast signaling, and/or paging signaling associated with an RIS502. As indicated above,FIG.5is provided as an example. Other examples may differ from what is described with respect toFIG.5. FIG.6is a diagram illustrating an example600associated with security enhancements with an RIS, in accordance with the present disclosure. As shown inFIG.6, a base station110may communicate with one or more UEs120(e.g., UE1 and UE2) in a wireless network, such as the wireless network100. The base station110and the UEs120may use an RIS605to communicate with one another. For example, the RIS605may reflect or redirect a signal to the base station110and/or the UEs120. The RIS605may be the same as, or similar to, the RIS502described in connection withFIG.5. The reconfigurable elements of the RIS605may be controlled and/or configured by an RIS controller610. The RIS controller610may be a control module (e.g., a controller and/or a processor) that is capable of configuring the electromagnetic characteristic(s) of each reconfigurable element of the RIS605(e.g., in a similar manner as described in connection withFIG.3). As shown by reference number615, the base station110may transmit, and the RIS605and/or the RIS controller610may receive, an indication of the first security key (e.g., S3), in a similar manner as described in connection withFIG.5. For example, the base station110and/or the UE 1 may determine that the security enhancement described herein is to be used. Therefore, the base station110may configure the RIS605to insert the first security key into signals reflected and/or redirected toward the UE 1. The base station110may transmit a signal620. The signal620may be transmitted in a spatial direction toward the RIS605. The RIS605may configure the reconfigurable elements of the RIS605to reflect and/or redirect the signal620in a desired spatial direction and/or with one or more desired signal characteristics (e.g., beam width, phase, amplitude, frequency, and/or polarization). Additionally, as shown by reference number625, the RIS605may modulate the signal620(e.g., in phase, polarization state, and/or amplitude). For example, the RIS605may modulate the signal620using a modulation signature to insert the first security key (e.g., S3) into a signal reflected and/or redirected by the RIS605. The RIS605may modulate the signal620using the modulation signature in a similar manner as described in connection withFIG.5. Although multiple beams are shown inFIG.6representing different beam states or beam directions of the RIS605, the RIS605may be capable of reflecting a signal with one beam state or one beam direction at a time. 
For example, in one case, as shown by reference number630, the RIS605may be configured to reflect the signal620using a first beam state (e.g., beam state 1). The first beam state may cause the signal620to be reflected in a spatial direction toward a first UE120(e.g., UE 1). The reflected signal may be a modulated signal (e.g., modulated in accordance with the modulation signature) to identify the first security key. For example, the security enhancements described herein may be enabled or activated for the UE 1. Therefore, the RIS605may modulate the reflected signal to insert the first security key into the reflected signal. This may enable the UE 1 to receive and decode the reflected signal to obtain the first security key. The UE 1 may use the first security key (and/or additional security keys, as described in more detail elsewhere herein) to authenticate future messages transmitted by the base station110, as described in more detail in connection withFIG.5. As shown by reference number635, in another case, the RIS605may be configured to reflect the signal620using a second beam state (e.g., beam state 2). The second beam state may cause the signal620to be reflected in a spatial direction toward a second UE120(e.g., UE 2). In some aspects, the security enhancements described herein may not be enabled and/or may not be activated for the UE 2. Therefore, as shown inFIG.6, the RIS605may reflect the signal620toward the UE 2 without modulating the signal620. In this way, the UE 2 may be enabled to decode the reflected signal. This provides additional flexibility for the base station110to enable the security enhancement for some UEs and to disable the security enhancement for other UEs (e.g., based at least in part on RIS link qualities associated with the different UEs). As indicated above,FIG.6is provided as an example. Other examples may differ from what is described with respect toFIG.6. FIG.7is a diagram illustrating an example process700performed, for example, by a UE, in accordance with the present disclosure. Example process700is an example where the UE (e.g., UE120) performs operations associated with security enhancements with an RIS. As shown inFIG.7, in some aspects, process700may include receiving a first signal associated with a downlink control channel, wherein the first signal uses a modulation signature that identifies a first security key associated with an RIS, and wherein the first signal includes a second security key (block710). For example, the UE (e.g., using communication manager140and/or reception component1102, depicted inFIG.11) may receive a first signal associated with a downlink control channel, wherein the first signal uses a modulation signature that identifies a first security key associated with an RIS, and wherein the first signal includes a second security key, as described above. As further shown inFIG.7, in some aspects, process700may include receiving a second signal associated with a downlink shared channel, wherein the second signal includes a third security key, and wherein the second signal is authenticated by the UE based at least in part on the first security key, the second security key, and the third security key (block720).
For example, the UE (e.g., using communication manager140and/or reception component1102, depicted inFIG.11) may receive a second signal associated with a downlink shared channel, wherein the second signal includes a third security key, and wherein the second signal is authenticated by the UE based at least in part on the first security key, the second security key, and the third security key, as described above. Process700may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, process700includes decoding the second signal to authenticate the second signal using the first security key, the second security key, and the third security key. In a second aspect, alone or in combination with the first aspect, decoding the second signal to authenticate the second signal includes generating, using an authentication function, an authentication key based at least in part on the first security key and the second security key, and authenticating the second signal based at least in part on whether the authentication key matches the third security key. In a third aspect, alone or in combination with one or more of the first and second aspects, process700includes transmitting, to a base station, a report for a link associated with the RIS, wherein the report indicates at least one of an identifier associated with the RIS and one or more measurements of one or more signals transmitted via the link, and wherein the first signal using the modulation signature that identifies the first security key is based at least in part on the report. In a fourth aspect, alone or in combination with one or more of the first through third aspects, process700includes measuring the one or more signals transmitted via the link using an RIS reference signal associated with the one or more signals to obtain the one or more measurements. In a fifth aspect, alone or in combination with one or more of the first through fourth aspects, the first signal using the modulation signature that identifies the first security key is based at least in part on the one or more measurements satisfying a threshold. In a sixth aspect, alone or in combination with one or more of the first through fifth aspects, the report includes an indication of whether the one or more measurements satisfy a threshold, and the first signal using the modulation signature that identifies the first security key is based at least in part on whether the one or more measurements satisfy the threshold. In a seventh aspect, alone or in combination with one or more of the first through sixth aspects, the modulation signature is at least one of a phase modulation signature, a polarization modulation signature, or an amplitude modulation signature. In an eighth aspect, alone or in combination with one or more of the first through seventh aspects, the first security key is based at least in part on an identifier associated with the RIS. In a ninth aspect, alone or in combination with one or more of the first through eighth aspects, the modulation signature further identifies an identifier associated with the RIS. In a tenth aspect, alone or in combination with one or more of the first through ninth aspects, process700includes receiving, from a base station, an indication of a fourth security key, wherein at least one of the second security key or the third security key is based at least in part on the fourth security key. 
In an eleventh aspect, alone or in combination with one or more of the first through tenth aspects, receiving the indication of the fourth security key includes receiving, from the base station, the indication of the fourth security key as part of a connection establishment procedure with the base station. In a twelfth aspect, alone or in combination with one or more of the first through eleventh aspects, the fourth security key is associated with at least one of UE-specific data security or medium access control signaling security. In a thirteenth aspect, alone or in combination with one or more of the first through twelfth aspects, the first signal using the modulation signature that identifies the first security key is based at least in part on the first signal being associated with a beam state or a spatial direction that is associated with the UE. AlthoughFIG.7shows example blocks of process700, in some aspects, process700may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.7. Additionally, or alternatively, two or more of the blocks of process700may be performed in parallel. FIG.8is a diagram illustrating an example process800performed, for example, by a base station, in accordance with the present disclosure. Example process800is an example where the base station (e.g., base station110) performs operations associated with security enhancements with an RIS. As shown inFIG.8, in some aspects, process800may include transmitting, to an RIS, an indication of a first security key associated with the RIS, wherein the first security key is to be added, using a modulation signature, to signals reflected by the RIS (block810). For example, the base station (e.g., using communication manager150and/or transmission component1204, depicted inFIG.12) may transmit, to an RIS, an indication of a first security key associated with the RIS, wherein the first security key is to be added, using a modulation signature, to signals reflected by the RIS, as described above. As further shown inFIG.8, in some aspects, process800may include transmitting a first signal that is associated with a downlink control channel, wherein the first signal includes a second security key, and wherein the first signal is to be reflected by the RIS (block820). For example, the base station (e.g., using communication manager150and/or transmission component1204, depicted inFIG.12) may transmit a first signal that is associated with a downlink control channel, wherein the first signal includes a second security key, and wherein the first signal is to be reflected by the RIS, as described above. As further shown inFIG.8, in some aspects, process800may include transmitting a second signal that is associated with a downlink shared channel, wherein the second signal includes a third security key, and wherein the first security key, the second security key, and the third security key enable a user equipment (UE) to authenticate the second signal (block830). For example, the base station (e.g., using communication manager150and/or transmission component1204, depicted inFIG.12) may transmit a second signal that is associated with a downlink shared channel, wherein the second signal includes a third security key, and wherein the first security key, the second security key, and the third security key enable a UE to authenticate the second signal, as described above. 
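As a base-station-side counterpart to the earlier UE-side sketch, the three transmissions of process800can be summarized as follows. The HMAC instantiation of ƒa and the callable interfaces are assumptions; this merely orders the steps of blocks810through830:

```python
import hmac
import hashlib
import secrets

def fa(s1: bytes, s3: bytes) -> bytes:
    # Illustrative instantiation of the authentication function.
    return hmac.new(s3, s1, hashlib.sha256).digest()

def base_station_process_800(configure_ris, send_pdcch, send_pdsch) -> None:
    s3 = secrets.token_bytes(16)  # first key, configured on the RIS (block 810)
    s1 = secrets.token_bytes(16)  # second key, in the PDCCH payload (block 820)
    s2 = fa(s1, s3)               # third key, in the PDSCH payload (block 830)
    configure_ris(s3)             # the RIS will insert S3 via its signature
    send_pdcch(s1)
    send_pdsch(s2)

# Usage with stand-in transports:
base_station_process_800(print, print, print)
```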
Process800may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, process800includes receiving, from the UE, a report for a link associated with the RIS, wherein the report indicates at least one of an identifier associated with the RIS and one or more measurements of one or more signals transmitted via the link, and wherein transmitting, to the RIS, the indication of the first security key is based at least in part on the report. In a second aspect, alone or in combination with the first aspect, process800includes identifying a link quality associated with the link based at least in part on the one or more measurements. In a third aspect, alone or in combination with one or more of the first and second aspects, transmitting, to the RIS, the indication of the first security key is based at least in part on the one or more measurements satisfying a threshold. In a fourth aspect, alone or in combination with one or more of the first through third aspects, the report includes an indication of whether the one or more measurements satisfy a threshold, and transmitting, to the RIS, the indication of the first security key is based at least in part on whether the one or more measurements satisfy the threshold. In a fifth aspect, alone or in combination with one or more of the first through fourth aspects, the modulation signature is at least one of a phase modulation signature, a polarization modulation signature, or an amplitude modulation signature. In a sixth aspect, alone or in combination with one or more of the first through fifth aspects, the first security key is based at least in part on an identifier associated with the RIS. In a seventh aspect, alone or in combination with one or more of the first through sixth aspects, process800includes transmitting, to the UE, an indication of a fourth security key, wherein at least one of the second security key or the third security key is based at least in part on the fourth security key. In an eighth aspect, alone or in combination with one or more of the first through seventh aspects, transmitting the indication of the fourth security key includes transmitting, to the UE, the indication of the fourth security key as part of a connection establishment procedure with the UE. In a ninth aspect, alone or in combination with one or more of the first through eighth aspects, the fourth security key is associated with at least one of UE-specific data security or medium access control signaling security. In a tenth aspect, alone or in combination with one or more of the first through ninth aspects, transmitting, to the RIS, the indication of the first security key includes transmitting an indication of a beam state or a spatial direction to which the first security key is to be added by the RIS. AlthoughFIG.8shows example blocks of process800, in some aspects, process800may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.8. Additionally, or alternatively, two or more of the blocks of process800may be performed in parallel. FIG.9is a diagram illustrating an example process900performed, for example, by an RIS, in accordance with the present disclosure. Example process900is an example where the RIS (e.g., RIS160, RIS502, and/or RIS605) performs operations associated with security enhancements with an RIS. 
As shown inFIG.9, in some aspects, process900may include receiving, from a base station, an indication of a first security key associated with the RIS, wherein the first security key is to be added, using a modulation signature, to signals reflected by the RIS (block910). For example, the RIS (e.g., using communication manager170and/or reception component1302, depicted inFIG.13) may receive, from a base station, an indication of a first security key associated with the RIS, wherein the first security key is to be added, using a modulation signature, to signals reflected by the RIS, as described above. As further shown inFIG.9, in some aspects, process900may include receiving, from the base station, a first signal associated with a downlink control channel, wherein the first signal includes a second security key (block920). For example, the RIS (e.g., using communication manager170and/or reception component1302, depicted inFIG.13) may receive, from the base station, a first signal associated with a downlink control channel, wherein the first signal includes a second security key, as described above. As further shown inFIG.9, in some aspects, process900may include redirecting, to a UE, the first signal by including the modulation signature that identifies the first security key in the first signal (block930). For example, the RIS (e.g., using communication manager170and/or redirection component1308, depicted inFIG.13) may redirect, to a UE, the first signal by including the modulation signature that identifies the first security key in the first signal, as described above. As further shown inFIG.9, in some aspects, process900may include redirecting, to the UE, a second signal, wherein the second signal includes a third security key, and wherein the first security key, the second security key, and the third security key enable the UE to authenticate the second signal (block940). For example, the RIS (e.g., using communication manager170and/or redirection component1308, depicted inFIG.13) may redirect, to the UE, a second signal, wherein the second signal includes a third security key, and wherein the first security key, the second security key, and the third security key enable the UE to authenticate the second signal, as described above. Process900may include additional aspects, such as any single aspect or any combination of aspects described below and/or in connection with one or more other processes described elsewhere herein. In a first aspect, the modulation signature is at least one of a phase modulation signature, a polarization modulation signature, or an amplitude modulation signature. In a second aspect, alone or in combination with the first aspect, the modulation signature is a polarization modulation signature, and redirecting the first signal includes modulating a polarization state of the first signal from a first polarization state of the first signal as transmitted by the base station to a second polarization state of the first signal, wherein the polarization state includes an angle of polarization or a polarization mode. In a third aspect, alone or in combination with one or more of the first and second aspects, the modulation signature identifies the first security key and an identifier associated with the RIS. In a fourth aspect, alone or in combination with one or more of the first through third aspects, the first security key is based at least in part on an identifier associated with the RIS. 
In a fifth aspect, alone or in combination with one or more of the first through fourth aspects, at least one of the second security key or the third security key is based at least in part on a fourth security key. In a sixth aspect, alone or in combination with one or more of the first through fifth aspects, the fourth security key is established as part of a connection establishment procedure between the base station and the UE. In a seventh aspect, alone or in combination with one or more of the first through sixth aspects, redirecting the first signal including the modulation signature is based at least in part on a quality of a link between the RIS and the UE satisfying a threshold. In an eighth aspect, alone or in combination with one or more of the first through seventh aspects, receiving the indication of the first security key includes receiving an indication of a beam state or a spatial direction to which the first security key is to be added by the RIS, and redirecting the first signal by including the modulation signature that identifies the first security key in the first signal is based at least in part on the first signal being redirected using the beam state or the spatial direction. AlthoughFIG.9shows example blocks of process900, in some aspects, process900may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.9. Additionally, or alternatively, two or more of the blocks of process900may be performed in parallel. FIG.10is a diagram illustrating an example process1000performed, for example, by a UE, in accordance with the present disclosure. Example process1000is an example where the UE (e.g., UE120) performs operations associated with security enhancements with an RIS. As shown inFIG.10, in some aspects, process1000may include receiving a first signal that is modulated by an RIS using a modulation signature, wherein the modulation signature identifies a first security key (e.g., S3) (block1010). For example, the UE (e.g., using communication manager140and/or reception component1102, depicted inFIG.11) may receive a first signal that is modulated by an RIS using a modulation signature, wherein the modulation signature identifies a first security key, as described above. The modulation signature may be a phase modulation signature, a polarization modulation signature, and/or an amplitude modulation signature, among other examples. As shown inFIG.10, in some aspects, process1000may include receiving a PDCCH signal indicating a second security key (e.g., S1) (block1020). For example, the UE (e.g., using communication manager140and/or reception component1102, depicted inFIG.11) may receive a PDCCH signal indicating a second security key (e.g., S1), as described above. In some aspects, the PDCCH signal may be the first signal. In some aspects, the PDCCH signal may indicate the second security key (S1) in a payload of the PDCCH signal. In some aspects, the PDCCH signal may schedule a PDSCH signal. As shown inFIG.10, in some aspects, process1000may include receiving a PDSCH signal indicating a third security key (e.g., S2) (block1030). For example, the UE (e.g., using communication manager140and/or reception component1102, depicted inFIG.11) may receive a PDSCH signal indicating a third security key (e.g., S2), as described above. The PDSCH signal may indicate the third security key in a payload of the PDSCH signal. In some aspects, the PDSCH signal may be scheduled by the PDCCH signal.
As shown inFIG.10, in some aspects, process1000may include determining whether the PDSCH signal is authenticated (block1040). For example, the UE (e.g., using communication manager140and/or signal authentication component1108, depicted inFIG.11) may determine whether the PDSCH signal is authenticated, as described above. For example, the UE may authenticate the PDSCH signal based at least in part on the first security key, the second security key, and the third security key. In some aspects, the UE may generate, using an authentication function, an authentication key (e.g., ƒa( )) based at least in part on the first security key and the second security key. The UE120may determine whether the PDSCH signal is authenticated based at least in part on whether the authentication key matches the third security key (e.g., whether S2=ƒa(S1, S3)). As shown inFIG.10, in some aspects, if the UE determines that the PDSCH signal is authenticated, then process1000may include enabling communications with a device that transmitted the PDSCH signal (block1050). For example, the UE (e.g., using communication manager140and/or signal authentication component1108, depicted inFIG.11) may enable communications with a device that transmitted the PDSCH signal, as described above. As shown inFIG.10, in some aspects, if the UE determines that the PDSCH signal is not authenticated, then process1000may include blocking communications with a device that transmitted the PDSCH signal (block1060). For example, the UE (e.g., using communication manager140and/or signal authentication component1108, depicted inFIG.11) may block communications with a device that transmitted the PDSCH signal, as described above. For example, the UE may prevent or refrain from receiving future communications from the device if the PDSCH signal is not authenticated. AlthoughFIG.10shows example blocks of process1000, in some aspects, process1000may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted inFIG.10. Additionally, or alternatively, two or more of the blocks of process1000may be performed in parallel. FIG.11is a diagram of an example apparatus1100for wireless communication. The apparatus1100may be a UE, or a UE may include the apparatus1100. In some aspects, the apparatus1100includes a reception component1102and a transmission component1104, which may be in communication with one another (for example, via one or more buses and/or one or more other components). As shown, the apparatus1100may communicate with another apparatus1106(such as a UE, a base station, or another wireless communication device) using the reception component1102and the transmission component1104. As further shown, the apparatus1100may include the communication manager140. The communication manager140may include one or more of a signal authentication component1108, and/or a measurement component1110, among other examples. In some aspects, the apparatus1100may be configured to perform one or more operations described herein in connection withFIGS.5-6. Additionally, or alternatively, the apparatus1100may be configured to perform one or more processes described herein, such as process700ofFIG.7and/or process1000ofFIG.10, or a combination thereof. In some aspects, the apparatus1100and/or one or more components shown inFIG.11may include one or more components of the UE described in connection withFIG.2. 
Additionally, or alternatively, one or more components shown inFIG.11may be implemented within one or more components described in connection withFIG.2. Additionally, or alternatively, one or more components of the set of components may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. The reception component1102may receive communications, such as reference signals, control information, data communications, or a combination thereof, from the apparatus1106. The reception component1102may provide received communications to one or more other components of the apparatus1100. In some aspects, the reception component1102may perform signal processing on the received communications (such as filtering, amplification, demodulation, analog-to-digital conversion, demultiplexing, deinterleaving, de-mapping, equalization, interference cancellation, or decoding, among other examples), and may provide the processed signals to the one or more other components of the apparatus1100. In some aspects, the reception component1102may include one or more antennas, a modem, a demodulator, a MIMO detector, a receive processor, a controller/processor, a memory, or a combination thereof, of the UE described in connection withFIG.2. The transmission component1104may transmit communications, such as reference signals, control information, data communications, or a combination thereof, to the apparatus1106. In some aspects, one or more other components of the apparatus1100may generate communications and may provide the generated communications to the transmission component1104for transmission to the apparatus1106. In some aspects, the transmission component1104may perform signal processing on the generated communications (such as filtering, amplification, modulation, digital-to-analog conversion, multiplexing, interleaving, mapping, or encoding, among other examples), and may transmit the processed signals to the apparatus1106. In some aspects, the transmission component1104may include one or more antennas, a modem, a modulator, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the UE described in connection withFIG.2. In some aspects, the transmission component1104may be co-located with the reception component1102in a transceiver. The reception component1102may receive a first signal associated with a downlink control channel, wherein the first signal uses a modulation signature that identifies a first security key associated with an RIS, and wherein the first signal includes a second security key. The reception component1102may receive a second signal associated with a downlink shared channel, wherein the second signal includes a third security key, and wherein the second signal is authenticated by the UE based at least in part on the first security key, the second security key, and the third security key. The signal authentication component1108may decode the second signal to authenticate the second signal using the first security key, the second security key, and the third security key. The signal authentication component1108may generate, using an authentication function, an authentication key based at least in part on the first security key and the second security key. 
The signal authentication component1108may authenticate the second signal based at least in part on whether the authentication key matches the third security key. The transmission component1104may transmit, to a base station, a report for a link associated with the RIS, wherein the report indicates at least one of an identifier associated with the RIS and one or more measurements of one or more signals transmitted via the link, wherein the first signal using the modulation signature that identifies the first security key is based at least in part on the report. The measurement component1110may measure the one or more signals transmitted via the link using an RIS reference signal associated with the one or more signals to obtain the one or more measurements. The reception component1102may receive, from a base station, an indication of a fourth security key, wherein at least one of the second security key or the third security key is based at least in part on the fourth security key. The number and arrangement of components shown inFIG.11are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.11. Furthermore, two or more components shown inFIG.11may be implemented within a single component, or a single component shown inFIG.11may be implemented as multiple, distributed components. Additionally, or alternatively, a set of (one or more) components shown inFIG.11may perform one or more functions described as being performed by another set of components shown inFIG.11. FIG.12is a diagram of an example apparatus1200for wireless communication. The apparatus1200may be a base station, or a base station may include the apparatus1200. In some aspects, the apparatus1200includes a reception component1202and a transmission component1204, which may be in communication with one another (for example, via one or more buses and/or one or more other components). As shown, the apparatus1200may communicate with another apparatus1206(such as a UE, a base station, or another wireless communication device) using the reception component1202and the transmission component1204. As further shown, the apparatus1200may include the communication manager150. The communication manager150may include a determination component1208, among other examples. In some aspects, the apparatus1200may be configured to perform one or more operations described herein in connection withFIGS.5-6. Additionally, or alternatively, the apparatus1200may be configured to perform one or more processes described herein, such as process800ofFIG.8, or a combination thereof. In some aspects, the apparatus1200and/or one or more components shown inFIG.12may include one or more components of the base station described in connection withFIG.2. Additionally, or alternatively, one or more components shown inFIG.12may be implemented within one or more components described in connection withFIG.2. Additionally, or alternatively, one or more components of the set of components may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. 
The reception component1202may receive communications, such as reference signals, control information, data communications, or a combination thereof, from the apparatus1206. The reception component1202may provide received communications to one or more other components of the apparatus1200. In some aspects, the reception component1202may perform signal processing on the received communications (such as filtering, amplification, demodulation, analog-to-digital conversion, demultiplexing, deinterleaving, de-mapping, equalization, interference cancellation, or decoding, among other examples), and may provide the processed signals to the one or more other components of the apparatus1200. In some aspects, the reception component1202may include one or more antennas, a modem, a demodulator, a MIMO detector, a receive processor, a controller/processor, a memory, or a combination thereof, of the base station described in connection withFIG.2. The transmission component1204may transmit communications, such as reference signals, control information, data communications, or a combination thereof, to the apparatus1206. In some aspects, one or more other components of the apparatus1200may generate communications and may provide the generated communications to the transmission component1204for transmission to the apparatus1206. In some aspects, the transmission component1204may perform signal processing on the generated communications (such as filtering, amplification, modulation, digital-to-analog conversion, multiplexing, interleaving, mapping, or encoding, among other examples), and may transmit the processed signals to the apparatus1206. In some aspects, the transmission component1204may include one or more antennas, a modem, a modulator, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the base station described in connection withFIG.2. In some aspects, the transmission component1204may be co-located with the reception component1202in a transceiver. The transmission component1204may transmit, to an RIS, an indication of a first security key associated with the RIS, wherein the first security key is to be added, using a modulation signature, to signals reflected by the RIS. The transmission component1204may transmit a first signal that is associated with a downlink control channel, wherein the first signal includes a second security key, and wherein the first signal is to be reflected by the RIS. The transmission component1204may transmit a second signal that is associated with a downlink shared channel, wherein the second signal includes a third security key, and wherein the first security key, the second security key, and the third security key enable a UE to authenticate the second signal. The reception component1202may receive, from the UE, a report for a link associated with the RIS, wherein the report indicates at least one of an identifier associated with the RIS and one or more measurements of one or more signals transmitted via the link, and wherein transmitting, to the RIS, the indication of the first security key is based at least in part on the report. The determination component1208may identify a link quality associated with the link based at least in part on the one or more measurements. The determination component1208may determine whether the RIS is to use the first security key when redirecting or reflecting signals to the UE. 
The transmission component1204may transmit, to the UE, an indication of a fourth security key, wherein at least one of the second security key or the third security key is based at least in part on the fourth security key. The number and arrangement of components shown inFIG.12are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.12. Furthermore, two or more components shown inFIG.12may be implemented within a single component, or a single component shown inFIG.12may be implemented as multiple, distributed components. Additionally, or alternatively, a set of (one or more) components shown inFIG.12may perform one or more functions described as being performed by another set of components shown inFIG.12. FIG.13is a diagram of an example apparatus1300for wireless communication. The apparatus1300may be an RIS, or an RIS may include the apparatus1300. In some aspects, the apparatus1300includes a reception component1302and a transmission component1304, which may be in communication with one another (for example, via one or more buses and/or one or more other components). As shown, the apparatus1300may communicate with another apparatus1306(such as a UE, a base station, or another wireless communication device) using the reception component1302and the transmission component1304. As further shown, the apparatus1300may include the communication manager170. The communication manager170may include one or more of a redirection component1308, and/or a modulation component1310, among other examples. In some aspects, the apparatus1300may be configured to perform one or more operations described herein in connection withFIGS.5-6. Additionally, or alternatively, the apparatus1300may be configured to perform one or more processes described herein, such as process900ofFIG.9, or a combination thereof. In some aspects, the apparatus1300and/or one or more components shown inFIG.13may include one or more components of the RIS described in connection withFIG.2. Additionally, or alternatively, one or more components shown inFIG.13may be implemented within one or more components described in connection withFIG.2. Additionally, or alternatively, one or more components of the set of components may be implemented at least in part as software stored in a memory. For example, a component (or a portion of a component) may be implemented as instructions or code stored in a non-transitory computer-readable medium and executable by a controller or a processor to perform the functions or operations of the component. The reception component1302may receive communications, such as reference signals, control information, data communications, or a combination thereof, from the apparatus1306. The reception component1302may provide received communications to one or more other components of the apparatus1300. In some aspects, the reception component1302may perform signal processing on the received communications (such as filtering, amplification, demodulation, analog-to-digital conversion, demultiplexing, deinterleaving, de-mapping, equalization, interference cancellation, or decoding, among other examples), and may provide the processed signals to the one or more other components of the apparatus1300. 
In some aspects, the reception component1302may include one or more antennas, a modem, a demodulator, a MIMO detector, a receive processor, a controller/processor, a memory, or a combination thereof, of the RIS described in connection withFIG.2. The transmission component1304may transmit communications, such as reference signals, control information, data communications, or a combination thereof, to the apparatus1306. In some aspects, one or more other components of the apparatus1300may generate communications and may provide the generated communications to the transmission component1304for transmission to the apparatus1306. In some aspects, the transmission component1304may perform signal processing on the generated communications (such as filtering, amplification, modulation, digital-to-analog conversion, multiplexing, interleaving, mapping, or encoding, among other examples), and may transmit the processed signals to the apparatus1306. In some aspects, the transmission component1304may include one or more antennas, a modem, a modulator, a transmit MIMO processor, a transmit processor, a controller/processor, a memory, or a combination thereof, of the RIS described in connection withFIG.2. In some aspects, the transmission component1304may be co-located with the reception component1302in a transceiver. The reception component1302may receive, from a base station, an indication of a first security key associated with the RIS, wherein the first security key is to be added, using a modulation signature, to signals reflected by the RIS. The reception component1302may receive, from the base station, a first signal associated with a downlink control channel, wherein the first signal includes a second security key. The redirection component1308may redirect, to a UE, the first signal by including the modulation signature that identifies the first security key in the first signal. The redirection component1308may redirect, to the UE, a second signal, wherein the second signal includes a third security key, and wherein the first security key, the second security key, and the third security key enable the UE to authenticate the second signal. The modulation component1310may modulate the first signal in phase, polarization state, and/or amplitude in accordance with the modulation signature to insert the first security key into the first signal. The number and arrangement of components shown inFIG.13are provided as an example. In practice, there may be additional components, fewer components, different components, or differently arranged components than those shown inFIG.13. Furthermore, two or more components shown inFIG.13may be implemented within a single component, or a single component shown inFIG.13may be implemented as multiple, distributed components. Additionally, or alternatively, a set of (one or more) components shown inFIG.13may perform one or more functions described as being performed by another set of components shown inFIG.13. 
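As a concrete illustration of what the modulation component 1310 might do for a phase modulation signature, the sketch below superimposes a key-dependent per-symbol phase rotation on the reflected symbols, which a receiver that knows the unmodulated reference can detect. The bit-to-phase mapping, the phase depth, and all function names are assumptions for illustration; the disclosure covers phase, polarization, and amplitude signatures generically and does not prescribe this particular scheme.

```python
import numpy as np

def key_to_bits(key: bytes) -> np.ndarray:
    """Unpack a security key into an array of bits (MSB first)."""
    return np.unpackbits(np.frombuffer(key, dtype=np.uint8))

def apply_phase_signature(symbols: np.ndarray, key: bytes,
                          depth: float = np.pi / 8) -> np.ndarray:
    """Reflect `symbols` while superimposing a phase-modulation signature.

    Each key bit adds +depth or -depth of extra phase rotation to one
    reflected symbol; the key repeats if the burst outlasts the key.
    """
    bits = key_to_bits(key)
    idx = np.arange(len(symbols)) % len(bits)
    extra = np.where(bits[idx] == 1, depth, -depth)
    return symbols * np.exp(1j * extra)

def recover_signature_bits(rx: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """UE side: detect the signature from the sign of the residual phase."""
    return (np.angle(rx * np.conj(ref)) > 0).astype(np.uint8)

# Round trip over an ideal (noiseless) channel as a sanity check.
rng = np.random.default_rng(0)
ref = np.exp(1j * 2 * np.pi * rng.random(64))   # unit-power reference symbols
rx = apply_phase_signature(ref, key=b"S3")
assert np.array_equal(recover_signature_bits(rx, ref)[:16], key_to_bits(b"S3"))
```

A polarization signature (Aspect 28) would follow the same pattern with the key bits selecting between two polarization states rather than two phase offsets.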
The following provides an overview of some Aspects of the present disclosure: Aspect 1: A method of wireless communication performed by a user equipment (UE), comprising: receiving a first signal associated with a downlink control channel, wherein the first signal uses a modulation signature that identifies a first security key associated with a reconfigurable intelligent surface (RIS), and wherein the first signal includes a second security key; and receiving a second signal associated with a downlink shared channel, wherein the second signal includes a third security key, and wherein the second signal is authenticated by the UE based at least in part on the first security key, the second security key, and the third security key. Aspect 2: The method of Aspect 1, further comprising: decoding the second signal to authenticate the second signal using the first security key, the second security key, and the third security key. Aspect 3: The method of any of Aspects 1-2, wherein decoding the second signal to authenticate the second signal comprises: generating, using an authentication function, an authentication key based at least in part on the first security key and the second security key; and authenticating the second signal based at least in part on whether the authentication key matches the third security key. Aspect 4: The method of any of Aspects 1-3, further comprising: transmitting, to a base station, a report for a link associated with the RIS, wherein the report indicates at least one of an identifier associated with the RIS and one or more measurements of one or more signals transmitted via the link, wherein the first signal using the modulation signature that identifies the first security key is based at least in part on the report. Aspect 5: The method of Aspect 4, further comprising: measuring the one or more signals transmitted via the link using an RIS reference signal associated with the one or more signals to obtain the one or more measurements. Aspect 6: The method of any of Aspects 4-5, wherein the first signal using the modulation signature that identifies the first security key is based at least in part on the one or more measurements satisfying a threshold. Aspect 7: The method of any of Aspects 4-6, wherein the report includes an indication of whether the one or more measurements satisfy a threshold, and wherein the first signal using the modulation signature that identifies the first security key is based at least in part on whether the one or more measurements satisfy the threshold. Aspect 8: The method of any of Aspects 1-7, wherein the modulation signature is at least one of a phase modulation signature, a polarization modulation signature, or an amplitude modulation signature. Aspect 9: The method of any of Aspects 1-8, wherein the first security key is based at least in part on an identifier associated with the RIS. Aspect 10: The method of any of Aspects 1-9, wherein the modulation signature further identifies an identifier associated with the RIS. Aspect 11: The method of any of Aspects 1-10, further comprising: receiving, from a base station, an indication of a fourth security key, wherein at least one of the second security key or the third security key is based at least in part on the fourth security key. Aspect 12: The method of Aspect 11, wherein receiving the indication of the fourth security key comprises: receiving, from the base station, the indication of the fourth security key as part of a connection establishment procedure with the base station. 
Aspect 13: The method of any of Aspects 11-12, wherein the fourth security key is associated with at least one of UE-specific data security or medium access control signaling security. Aspect 14: The method of any of Aspects 1-13, wherein the first signal using the modulation signature that identifies the first security key is based at least in part on the first signal being associated with a beam state or a spatial direction that is associated with the UE. Aspect 15: A method of wireless communication performed by a base station, comprising: transmitting, to a reconfigurable intelligent surface (RIS), an indication of a first security key associated with the RIS, wherein the first security key is to be added, using a modulation signature, to signals reflected by the RIS; transmitting a first signal that is associated with a downlink control channel, wherein the first signal includes a second security key, and wherein the first signal is to be reflected by the RIS; and transmitting a second signal that is associated with a downlink shared channel, wherein the second signal includes a third security key, and wherein the first security key, the second security key, and the third security key enable a user equipment (UE) to authenticate the second signal. Aspect 16: The method of Aspect 15, further comprising: receiving, from the UE, a report for a link associated with the RIS, wherein the report indicates at least one of an identifier associated with the RIS and one or more measurements of one or more signals transmitted via the link, and wherein transmitting, to the RIS, the indication of the first security key is based at least in part on the report. Aspect 17: The method of Aspect 16, further comprising: identifying a link quality associated with the link based at least in part on the one or more measurements. Aspect 18: The method of any of Aspects 16-17, wherein transmitting, to the RIS, the indication of the first security key is based at least in part on the one or more measurements satisfying a threshold. Aspect 19: The method of any of Aspects 16-18, wherein the report includes an indication of whether the one or more measurements satisfy a threshold, and wherein transmitting, to the RIS, the indication of the first security key is based at least in part on whether the one or more measurements satisfy the threshold. Aspect 20: The method of any of Aspects 15-19, wherein the modulation signature is at least one of a phase modulation signature, a polarization modulation signature, or an amplitude modulation signature. Aspect 21: The method of any of Aspects 15-20, wherein the first security key is based at least in part on an identifier associated with the RIS. Aspect 22: The method of any of Aspects 15-21, further comprising: transmitting, to the UE, an indication of a fourth security key, wherein at least one of the second security key or the third security key is based at least in part on the fourth security key. Aspect 23: The method of Aspect 22, wherein transmitting the indication of the fourth security key comprises: transmitting, to the UE, the indication of the fourth security key as part of a connection establishment procedure with the UE. Aspect 24: The method of any of Aspects 22-23, wherein the fourth security key is associated with at least one of UE-specific data security or medium access control signaling security. 
Aspect 25: The method of any of Aspects 15-24, wherein transmitting, to the RIS, the indication of the first security key comprises: transmitting an indication of a beam state or a spatial direction to which the first security key is to be added by the RIS.

Aspect 26: A method of wireless communication performed by a reconfigurable intelligent surface (RIS), comprising: receiving, from a base station, an indication of a first security key associated with the RIS, wherein the first security key is to be added, using a modulation signature, to signals reflected by the RIS; receiving, from the base station, a first signal associated with a downlink control channel, wherein the first signal includes a second security key; redirecting, to a user equipment (UE), the first signal by including the modulation signature that identifies the first security key in the first signal; and redirecting, to the UE, a second signal, wherein the second signal includes a third security key, and wherein the first security key, the second security key, and the third security key enable the UE to authenticate the second signal.

Aspect 27: The method of Aspect 26, wherein the modulation signature is at least one of a phase modulation signature, a polarization modulation signature, or an amplitude modulation signature.

Aspect 28: The method of any of Aspects 26-27, wherein the modulation signature is a polarization modulation signature, and wherein redirecting the first signal comprises: modulating a polarization state of the first signal from a first polarization state of the first signal as transmitted by the base station to a second polarization state of the first signal, wherein the polarization state includes an angle of polarization or a polarization mode.

Aspect 29: The method of any of Aspects 26-28, wherein the modulation signature identifies the first security key and an identifier associated with the RIS.

Aspect 30: The method of any of Aspects 26-29, wherein the first security key is based at least in part on an identifier associated with the RIS.

Aspect 31: The method of any of Aspects 26-30, wherein at least one of the second security key or the third security key is based at least in part on a fourth security key.

Aspect 32: The method of Aspect 31, wherein the fourth security key is established as part of a connection establishment procedure between the base station and the UE.

Aspect 33: The method of any of Aspects 26-32, wherein redirecting the first signal including the modulation signature is based at least in part on a quality of a link between the RIS and the UE satisfying a threshold.

Aspect 34: The method of any of Aspects 26-33, wherein receiving the indication of the first security key comprises: receiving an indication of a beam state or a spatial direction to which the first security key is to be added by the RIS; and wherein redirecting the first signal by including the modulation signature that identifies the first security key in the first signal is based at least in part on the first signal being redirected using the beam state or the spatial direction.

Aspect 35: An apparatus for wireless communication at a device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform the method of one or more of Aspects 1-14.
Aspect 36: A device for wireless communication, comprising a memory and one or more processors coupled to the memory, the one or more processors configured to perform the method of one or more of Aspects 1-14. Aspect 37: An apparatus for wireless communication, comprising at least one means for performing the method of one or more of Aspects 1-14. Aspect 38: A non-transitory computer-readable medium storing code for wireless communication, the code comprising instructions executable by a processor to perform the method of one or more of Aspects 1-14. Aspect 39: A non-transitory computer-readable medium storing a set of instructions for wireless communication, the set of instructions comprising one or more instructions that, when executed by one or more processors of a device, cause the device to perform the method of one or more of Aspects 1-14. Aspect 40: An apparatus for wireless communication at a device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform the method of one or more of Aspects 15-25. Aspect 41: A device for wireless communication, comprising a memory and one or more processors coupled to the memory, the one or more processors configured to perform the method of one or more of Aspects 15-25. Aspect 42: An apparatus for wireless communication, comprising at least one means for performing the method of one or more of Aspects 15-25. Aspect 43: A non-transitory computer-readable medium storing code for wireless communication, the code comprising instructions executable by a processor to perform the method of one or more of Aspects 15-25. Aspect 44: A non-transitory computer-readable medium storing a set of instructions for wireless communication, the set of instructions comprising one or more instructions that, when executed by one or more processors of a device, cause the device to perform the method of one or more of Aspects 15-25. Aspect 45: An apparatus for wireless communication at a device, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform the method of one or more of Aspects 26-34. Aspect 46: A device for wireless communication, comprising a memory and one or more processors coupled to the memory, the one or more processors configured to perform the method of one or more of Aspects 26-34. Aspect 47: An apparatus for wireless communication, comprising at least one means for performing the method of one or more of Aspects 26-34. Aspect 48: A non-transitory computer-readable medium storing code for wireless communication, the code comprising instructions executable by a processor to perform the method of one or more of Aspects 26-34. Aspect 49: A non-transitory computer-readable medium storing a set of instructions for wireless communication, the set of instructions comprising one or more instructions that, when executed by one or more processors of a device, cause the device to perform the method of one or more of Aspects 26-34. The foregoing disclosure provides illustration and description but is not intended to be exhaustive or to limit the aspects to the precise forms disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the aspects. As used herein, the term “component” is intended to be broadly construed as hardware and/or a combination of hardware and software. 
“Software” shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, and/or functions, among other examples, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. As used herein, a “processor” is implemented in hardware and/or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the aspects. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code, since those skilled in the art will understand that software and hardware can be designed to implement the systems and/or methods based, at least in part, on the description herein. As used herein, “satisfying a threshold” may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various aspects. Many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. The disclosure of various aspects includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a+b, a+c, b+c, and a+b+c, as well as any combination with multiples of the same element (e.g., a+a, a+a+a, a+a+b, a+a+c, a+b+b, a+c+c, b+b, b+b+b, b+b+c, c+c, and c+c+c, or any other ordering of a, b, and c). No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the terms “set” and “group” are intended to include one or more items and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms that do not limit an element that they modify (e.g., an element “having” A may also have B). Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).
11943611
DETAILED DESCRIPTION

FIGS. 1A through 2I, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device.

The operation principle of the disclosure is described in detail with reference to the accompanying drawings. Detailed descriptions of well-known functions and structures incorporated herein may be omitted to avoid obscuring the subject matter of the disclosure. Further, the following terms are defined in consideration of their functionality in the disclosure, and they may vary according to the intention of a user or an operator, usage, etc. Therefore, each definition should be made on the basis of the overall content of the present specification.

Exemplary embodiments of the disclosure are described hereinafter in detail with reference to the accompanying drawings. The terms used in the following description for indicating access nodes, network entities, messages, interfaces between network entities, and diverse identity information are provided for convenience of explanation. Accordingly, the terms used in the following description are not limited to specific meanings and may be replaced by other terms equivalent in technical meaning.

In the following description, the terms and definitions given in the 3rd Generation Partnership Project Long Term Evolution (3GPP LTE) standard are used. However, the disclosure is not limited by these terms and definitions and can be applied to other standard communication systems. In the following description, the terms “eNB” and “gNB” are used interchangeably for convenience of explanation; that is, a base station referred to as an eNB may also be referred to as a gNB. The term “terminal” may refer to any of hand-held phones, NB-IoT devices, sensors, and other wireless communication devices.

Embodiment A

In a next generation mobile communication system, terminals and base stations cipher and decipher the data they transmit and receive. Typically, a ciphering and deciphering algorithm ciphers and deciphers data using a ciphering key (or security key). The ciphering inputs include ciphering keys (e.g., KgNB and K_RRCenc) agreed between a terminal and a base station and security values (e.g., COUNT values) that vary with the data. Because the COUNT value consists of a PDCP sequence number and a hyper frame number (HFN), the PDCP sequence number must be synchronized between the transmitting PDCP entity and the receiving PDCP entity. The PDCP sequence number increases from 0 to 2^(PDCP sequence number length) − 1 and, on reaching its maximum value, wraps back to 0 while the HFN increases by 1. If the PDCP sequence number restarts from 0 and the two entities do not track the restart consistently, the COUNT value in use for data ciphering by the transmitting PDCP entity and the COUNT value in use for data deciphering by the receiving PDCP entity may differ from each other, which leads to a decoding failure and HFN desynchronization problem.
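To make the wraparound hazard concrete, the following is a minimal sketch of how a COUNT value is formed from the HFN and the PDCP sequence number, and of how a loss burst spanning a full sequence-number cycle leaves a naive receiver with a stale HFN. The Receiver class and its names are illustrative only and are not part of the disclosure.

```python
SN_LEN = 12              # PDCP sequence number length in bits (12 or 18)
SN_MOD = 1 << SN_LEN     # the SN wraps from SN_MOD - 1 back to 0

def count(hfn: int, sn: int) -> int:
    """COUNT = [HFN | SN]: HFN in the upper bits, SN in the lower bits."""
    return (hfn << SN_LEN) | sn

class Receiver:
    """Naive receiver that infers HFN wraparounds from received SNs alone."""
    def __init__(self) -> None:
        self.hfn, self.last_sn = 0, -1

    def infer_count(self, sn: int) -> int:
        if sn < self.last_sn:    # a smaller SN looks like one wraparound
            self.hfn += 1
        self.last_sn = sn
        return count(self.hfn, sn)

rx = Receiver()
# A loss burst spanning a full SN cycle: the transmitter wrapped twice, but
# the surviving SNs look to the receiver like a single wraparound.
tx_counts = [count(0, SN_MOD - 2), count(2, 5)]              # transmitter view
rx_counts = [rx.infer_count(SN_MOD - 2), rx.infer_count(5)]  # receiver view
assert rx_counts[1] == count(1, 5) != tx_counts[1]           # HFN desynchronized
```

From that point on, the receiver deciphers with a COUNT that differs from the transmitter's, so subsequent PDUs fail to decipher correctly until the HFNs are realigned; this is the desynchronization that the verification procedure of the disclosure is meant to detect.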
Such a decoding failure and HFN desynchronization problem may be caused by a large amount of data loss or by unexpected data intrusion by a hacker. Accordingly, when necessary (e.g., when a decoding failure, an HFN desynchronization problem, or data intrusion by a hacker is suspected), the base station needs to verify whether the COUNT value is well synchronized between the transmitting PDCP entity and the receiving PDCP entity.

The disclosure provides a method and apparatus for solving the decoding failure and HFN desynchronization problem that may arise during data communication by allowing a transmitter (e.g., a base station) to verify, when necessary (e.g., when a decoding failure, an HFN desynchronization problem, or data intrusion by a hacker is suspected), whether the COUNT value is well synchronized between the transmitting PDCP entity and the receiving PDCP entity.

FIG. 1A illustrates a diagram of the architecture of an LTE system to which the disclosure is applied.

In reference to FIG. 1A, the radio access network of the LTE system includes evolved Node Bs (hereinafter, interchangeably referred to as eNB, node B, and base station) 1a-05, 1a-10, 1a-15, and 1a-20; a mobility management entity (MME) 1a-25; and a serving gateway (S-GW) 1a-30. A user terminal (hereinafter, interchangeably referred to as user equipment (UE) and terminal) 1a-35 connects to an external network via the eNBs 1a-05, 1a-10, 1a-15, and 1a-20 and the S-GW 1a-30.

The eNBs 1a-05, 1a-10, 1a-15, and 1a-20 correspond to the legacy node Bs of the universal mobile telecommunications system (UMTS). The UE 1a-35 connects to one of the eNBs via a radio channel, and the eNB has more complex functions than the legacy node B. In the LTE system, where all user traffic, including real-time services such as Voice over IP (VoIP), is served through shared channels, there is a need for an entity that collects UE-specific status information (such as buffer status, power headroom status, and channel status) and schedules the UEs based on the collected information, and the eNB takes charge of such functions. Typically, one eNB hosts multiple cells. For example, the LTE system adopts Orthogonal Frequency Division Multiplexing (OFDM) as a radio access technology to secure a data rate of up to 100 Mbps in a bandwidth of 20 MHz. The LTE system also adopts Adaptive Modulation and Coding (AMC) to determine the modulation scheme and channel coding rate in adaptation to the channel condition of the UE. The S-GW 1a-30 handles data bearer functions to establish and release data bearers under the control of the MME 1a-25. The MME 1a-25 handles various control functions for the UE as well as the mobility management function and has connections with the eNBs 1a-05, 1a-10, 1a-15, and 1a-20.

FIG. 1B illustrates a diagram of a protocol stack of an LTE system to which the disclosure is applied.

As shown in FIG. 1B, the protocol stack of the interface between the UE 1b-50 and the eNB 1b-60 in the LTE system includes Packet Data Convergence Protocol (PDCP) 1b-05 and 1b-40, Radio Link Control (RLC) 1b-10 and 1b-35, and Medium Access Control (MAC) 1b-15 and 1b-30. The PDCP 1b-05 and 1b-40 takes charge of compressing/decompressing an IP header.
The main functions of the PDCP 1b-05 and 1b-40 can be summarized as follows:
- Header compression and decompression: ROHC only
- Transfer of user data
- In-sequence delivery of upper layer PDUs at PDCP re-establishment procedure for RLC
- For split bearers in DC (only support for RLC AM): PDCP PDU routing for transmission and PDCP PDU reordering for reception
- Duplicate detection of lower layer SDUs at PDCP re-establishment procedure for RLC AM
- Retransmission of PDCP SDUs at handover and, for split bearers in DC, of PDCP PDUs at PDCP data-recovery procedure, for RLC AM
- Ciphering and deciphering
- Timer-based SDU discard in uplink

The RLC 1b-10 and 1b-35 takes charge of reformatting PDCP PDUs in order to fit them into the size for ARQ operation. The main functions of the RLC layer can be summarized as follows:
- Transfer of upper layer PDUs
- Error correction through ARQ (only for AM data transfer)
- Concatenation, segmentation and reassembly of RLC SDUs (only for UM and AM data transfer)
- Re-segmentation of RLC data PDUs (only for AM data transfer)
- Reordering of RLC data PDUs (only for UM and AM data transfer)
- Duplicate detection (only for UM and AM data transfer)
- Protocol error detection (only for AM data transfer)
- RLC SDU discard (only for UM and AM data transfer)
- RLC re-establishment

The MAC 1b-15 and 1b-30 allows for connection of multiple RLC entities established for one UE and takes charge of multiplexing RLC PDUs from the RLC layer into a MAC PDU and demultiplexing a MAC PDU into RLC PDUs. The main functions of the MAC layer can be summarized as follows:
- Mapping between logical channels and transport channels
- Multiplexing/demultiplexing of MAC SDUs belonging to one or different logical channels into/from transport blocks (TB) delivered to/from the physical layer on transport channels
- Scheduling information reporting
- Error correction through HARQ
- Priority handling between logical channels of one UE
- Priority handling between UEs by means of dynamic scheduling
- MBMS service identification
- Transport format selection
- Padding

The PHY 1b-20 and 1b-25 takes charge of channel-coding and modulation on higher layer data to generate and transmit OFDM symbols over a radio channel, and demodulating and channel-decoding on OFDM symbols received over the radio channel to deliver the decoded data to the higher layers.

FIG. 1C illustrates a diagram of the architecture of a next generation mobile communication system to which the disclosure is applied.

As shown in FIG. 1C, the next generation mobile communication system includes a radio access network with a next generation base station (New Radio Node B (NR gNB)) 1c-10 and a new radio core network (NR CN) 1c-05. A new radio user equipment (NR UE) 1c-15 connects to an external network via the NR gNB 1c-10 and the NR CN 1c-05.

In FIG. 1C, the NR gNB 1c-10 corresponds to an evolved Node B (eNB) of the legacy LTE. The NR gNB 1c-10, to which the NR UE 1c-15 connects through a radio channel, is capable of providing superior services in comparison with the legacy eNB. In the next generation mobile communication system, where all user traffic is served through shared channels, it is necessary to schedule the NR UEs based on scheduling information such as buffer status, power headroom status, and channel status collected by the NR UEs, and the NR gNB 1c-10 takes charge of this function. Typically, one NR gNB hosts multiple cells.
In order to achieve a data rate higher than the peak data rate of legacy LTE systems, the next generation mobile communication system may adopt a beamforming technique along with orthogonal frequency division multiple access (OFDMA) as a radio access technology. The next generation mobile communication system may also adopt adaptive modulation and coding (AMC) to determine the modulation scheme and channel coding rate in adaptation to the channel condition of the NR UE. The NR CN 1c-05 takes charge of mobility support, bearer setup, and QoS configuration. The NR CN 1c-05 may take charge of an NR UE mobility management function, and a plurality of NR gNBs may connect to the NR CN 1c-05. The next generation mobile communication system may also interoperate with a legacy LTE system; in this case, the NR CN 1c-05 connects to an MME 1c-25 through a network interface. The MME 1c-25 communicates with the eNB 1c-40 as a legacy base station.

FIG. 1D illustrates a diagram of a protocol stack of a next generation mobile communication system to which the disclosure is applied.

As shown in FIG. 1D, the protocol stack of the interface between a UE 1d-50 and an NR gNB 1d-60 in a next generation mobile communication system includes NR service data adaptation protocol (NR SDAP) 1d-01 and 1d-45, NR PDCP 1d-05 and 1d-40, NR RLC 1d-10 and 1d-35, and NR MAC 1d-15 and 1d-30. The main functions of the NR SDAP 1d-01 and 1d-45 may include some of the following functions:
- Transfer of user plane data
- Mapping between a QoS flow and a DRB for both DL and UL
- Marking QoS flow ID in both DL and UL packets
- Reflective QoS flow to DRB mapping for the UL SDAP PDUs

The UE 1d-50 may receive an RRC message for configuring an SDAP layer entity 1d-01 so as to determine whether to use a PDCP entity-specific, bearer-specific, or logical channel-specific SDAP header and whether to use the SDAP layer function. If configured to use a specific SDAP header, the UE 1d-50 may receive a 1-bit NAS reflective QoS indicator and an AS reflective QoS indicator in the SDAP header instructing the UE 1d-50 to update or reconfigure the uplink and downlink QoS flow-to-data bearer mappings. The SDAP header may include a QoS flow ID indicating a QoS. The QoS information may be used as data processing priority and scheduling information for guaranteeing service reliability.

The main functions of the NR PDCP 1d-05 and 1d-40 may include some of the following functions:
- Header compression and decompression: ROHC only
- Transfer of user data
- In-sequence delivery of upper layer PDUs
- Out-of-sequence delivery of upper layer PDUs
- PDCP PDU reordering for reception
- Duplicate detection of lower layer SDUs
- Retransmission of PDCP SDUs
- Ciphering and deciphering
- Timer-based SDU discard in uplink

The PDCP PDU reordering function of an NR PDCP entity 1d-05 and 1d-40 is to reorder the PDCP PDUs delivered from a lower layer based on the PDCP sequence number (PDCP SN) and may include delivering the reordered data to an upper layer, recording the missing PDCP PDUs among the reordered PDCP PDUs, transmitting a status report indicating the missing PDCP PDUs to the sender, and requesting retransmission of the missing PDCP PDUs.

The main functions of the NR RLC 1d-10 and 1d-35 may include some of the following functions:
- Transfer of upper layer PDUs
- In-sequence delivery of upper layer PDUs
- Out-of-sequence delivery of upper layer PDUs
- Error correction through ARQ
- Concatenation, segmentation and reassembly of RLC SDUs
- Re-segmentation of RLC data PDUs
- Reordering of RLC data PDUs
- Duplicate detection
- Protocol error detection
- RLC SDU discard
- RLC re-establishment

The in-sequence delivery function of an NR RLC entity 1d-10 and 1d-35 is to deliver the RLC SDUs received from the lower layer to the upper layer in sequence, and may include: reassembling, when multiple segmented RLC SDUs constituting an original RLC SDU are received, the RLC SDUs and delivering the reassembled RLC SDU to the upper layer; reordering the received RLC PDUs based on the RLC sequence number (SN) or PDCP SN; recording the missing RLC PDUs among the reordered RLC PDUs; transmitting a status report indicating the missing RLC PDUs to the sender; requesting retransmission of the missing RLC PDUs; and delivering, when there is a missing RLC PDU, only the RLC SDUs preceding the missing RLC PDU to the upper layer in sequence, delivering, if a predetermined timer has expired even though there is a missing RLC SDU, all RLC SDUs received before the start of the timer to the upper layer in sequence, or delivering, if a predetermined timer has expired even though there is a missing RLC SDU, all RLC SDUs received up to that point to the upper layer in sequence.

It may also be possible to process the RLC PDUs in the order of arrival (regardless of sequence number) and deliver them to the PDCP entity out of order (out-of-sequence delivery) and, if an RLC PDU is received in the form of segments, to store the received segments, wait until all segments constituting the RLC PDU have been received, and reassemble the segments into the original RLC PDU, which is then delivered to the PDCP entity. The NR RLC layer 1d-10 and 1d-35 may have no concatenation function; in this case, the concatenation function may be performed in the NR MAC layer 1d-15 and 1d-30 or replaced by the multiplexing function of the NR MAC layer 1d-15 and 1d-30.

The out-of-sequence delivery function of an NR RLC entity 1d-10 and 1d-35 is to deliver the RLC SDUs received from the lower layer to the upper layer out of order, and may include reassembling, when multiple segmented RLC SDUs constituting an original RLC SDU are received, the segmented RLC SDUs and delivering the reassembled RLC SDUs to the upper layer, arranging the received RLC PDUs based on the RLC SN or PDCP SN, and recording the SNs of the missing RLC PDUs.

The NR MAC 1d-15 and 1d-30 may be connected to multiple NR RLC entities, and the main functions of the NR MAC 1d-15 and 1d-30 may include some of the following functions:
- Mapping between logical channels and transport channels
- Multiplexing/demultiplexing of MAC SDUs
- Scheduling information reporting
- Error correction through HARQ
- Priority handling between logical channels of one UE
- Priority handling between UEs by means of dynamic scheduling
- MBMS service identification
- Transport format selection
- Padding

The NR PHY layer 1d-20 and 1d-25 takes charge of channel-coding and modulation on upper layer data to generate and transmit OFDM symbols over a radio channel, and demodulating and channel-decoding on OFDM symbols received over the radio channel to deliver the decoded data to the upper layers.

FIG. 1E illustrates a signal flow diagram of an RRC connection configuration procedure between a UE and a base station for establishing a connection to a network in a next generation mobile communication system according to some embodiments of the disclosure.
In reference to FIG. 1E, if there is no data transmission/reception to/from the UE 1e-90 in an RRC connected mode for any reason or during a predetermined period, the base station 1e-91 may transmit an RRCConnectionRelease message to the UE 1e-90 at step 1e-01 to transition the UE 1e-90 to an RRC idle mode. If data to be transmitted are produced at a UE 1e-90 with no currently established connection (hereinafter, referred to as an idle mode UE), the UE 1e-90 performs an RRC connection establishment procedure with the base station 1e-91.

The UE 1e-90 acquires uplink transmission synchronization with the base station 1e-91 through a random access procedure and transmits an RRCConnectionRequest message to the base station 1e-91 at step 1e-05. The RRCConnectionRequest message may include an identifier of the UE 1e-90 and a connection establishment cause (establishmentCause). The base station 1e-91 transmits an RRCConnectionSetup message to the UE 1e-90 at step 1e-10 for RRC connection setup.

The RRCConnectionSetup message may include at least one of per-logical channel configuration information, per-bearer configuration information, PDCP entity configuration information, RLC entity configuration information, or MAC entity configuration information. The RRCConnectionSetup message may be used to assign per-bearer identifiers (e.g., signaling radio bearer (SRB) identifiers and data radio bearer (DRB) identifiers) and configure per-bearer PDCP, RLC, MAC, and PHY entities. The RRCConnectionSetup message may also be used to configure the per-bearer PDCP sequence number lengths for use by the PDCP entities (e.g., 12 bits and 18 bits) and the RLC sequence number lengths for use by the RLC entities (e.g., 6 bits, 12 bits, and 18 bits). The RRCConnectionSetup message may also be used to indicate whether a header compression/decompression protocol is used and whether an integrity protection or verification procedure is used in uplink or downlink for the PDCP entities of each bearer. The RRCConnectionSetup message may also be used to indicate to the UE 1e-90 whether out-of-order delivery is performed at the PDCP entities.

After completing RRC connection setup, the UE 1e-90 may transmit an RRCConnectionSetupComplete message to the base station 1e-91 at step 1e-15. The RRCConnectionSetupComplete message may include a control message called SERVICE REQUEST, which requests that an AMF 1e-92 or an MME 1e-92 establish a bearer for a certain service. At step 1e-20, the base station 1e-91 transmits the SERVICE REQUEST message included in the RRCConnectionSetupComplete message to the AMF 1e-92 or the MME 1e-92. The AMF 1e-92 or the MME 1e-92 may determine whether to provide the service requested by the UE 1e-90. If it is determined to provide the service requested by the UE 1e-90, the AMF/MME 1e-92 transmits an INITIAL CONTEXT SETUP REQUEST message to the base station 1e-91 at step 1e-25. The INITIAL CONTEXT SETUP REQUEST message may include quality of service (QoS) information to be applied in configuring a DRB and security information (e.g., Security Key and Security Algorithm) to be applied to the DRB.

For security configuration, the base station 1e-91 transmits a SecurityModeCommand message to the UE 1e-90 at step 1e-30, and the UE 1e-90 transmits a SecurityModeComplete message to the base station 1e-91 at step 1e-35. After completing security configuration, the base station 1e-91 transmits an RRCConnectionReconfiguration message to the UE 1e-90 at step 1e-40.
The RRCConnectionReconfiguration message may be used to assign per-bearer identifiers (e.g., an SRB identifier or a DRB identifier) and configure per-bearer PDCP, RLC, MAC, and PHY entities. The RRCConnectionReconfiguration message may also be used to configure per-bearer PDCP sequence number lengths for use by the PDCP entities (e.g., 12 bits and 18 bits) and RLC sequence number lengths for use by the RLC entities (e.g., 6 bits, 12 bits, and 18 bits). The RRCConnectionReconfiguration message may also be used to indicate whether a header compression/decompression protocol is used and whether an integrity protection or verification procedure is used in uplink or downlink for the PDCP entities per bearer. The RRCConnectionReconfiguration message may also be used to indicate to the UE1e-90whether out-of-order delivery is performed at the PDCP entities. The RRCConnectionReconfiguration message may include configuration information of a DRB on which user data are processed, and the UE1e-90configures a DRB based on the configuration information and transmits an RRCConnectionReconfigurationComplete message to the base station1e-91at step1e-45. After configuring the DRB bearer with the UE1e-90, the base station1e-91may transmit an INITIAL CONTEXT SETUP COMPLETE message to the AMF/MME1e-92at step1e-50to complete establishment of a connection. After completing the above procedure, the UE1e-90and the base station1e-91may communicate data via a core network1e-93at steps1e-55and1e-60.

According to some embodiments, the data transfer procedure may consist of three phases: RRC connection configuration, security configuration, and DRB configuration. The base station1e-91may transmit an RRCConnectionReconfiguration message to the UE1e-90at step1e-65for updating, adding, or modifying the configuration. This RRCConnectionReconfiguration message may likewise be used to assign per-bearer identifiers (e.g., an SRB identifier or a DRB identifier) and configure per-bearer PDCP, RLC, MAC, and PHY entities; to configure per-bearer PDCP sequence number lengths for use by the PDCP entities (e.g., 12 bits and 18 bits) and RLC sequence number lengths for use by the RLC entities (e.g., 6 bits, 12 bits, and 18 bits); to indicate whether a header compression/decompression protocol is used and whether an integrity protection or verification procedure is used in uplink or downlink for the PDCP entities per bearer; and to indicate to the UE1e-90whether out-of-order delivery is performed at the PDCP entities.

The above-described connection configuration procedure between the UE1e-90and the base station1e-91may be applied either between a UE and an LTE eNB or between a UE and an NR gNB. In the disclosed embodiments, the term “bearer” may be used to include an SRB (signaling radio bearer) and a DRB (data radio bearer). Meanwhile, a UM DRB denotes a DRB in use by an RLC entity operating in an unacknowledged mode (UM), and an AM DRB denotes a DRB in use by an RLC entity operating in an acknowledged mode (AM).

The following disclosure describes a method and apparatus for solving the decoding failure and HFN desynchronization problems that may arise during data communication by allowing a transmitter (e.g., a base station) to verify whether the COUNT value is well synchronized between a transmitting PDCP entity and a receiving PDCP entity in case of necessity, e.g., when a decoding failure, an HFN desynchronization problem, or data intrusion by a hacker is suspected.
Before undertaking the detailed description of the method and apparatus for checking the COUNT value, a brief description is made of the operations of the transmitting and receiving PDCP entities. In the disclosed embodiments, a transmitting PDCP entity operates as follows. The transmitting PDCP entity uses a first COUNT variable, called TX_NEXT, for maintaining the COUNT value to be allocated for transmitting the next data in processing data. The operations of the transmitting PDCP entity that are proposed in the disclosed embodiments are as follows (an illustrative sketch follows this list).
- If data (e.g., a PDCP SDU) arrives from the upper layer, the transmitting PDCP entity starts a PDCP data discard timer and, if the timer expires, discards the data.
- Next, the transmitting PDCP entity assigns (allocates) the COUNT value corresponding to TX_NEXT to the data that arrived from the upper layer. TX_NEXT may be set to an initial value of 0 and maintains the COUNT value of the next data (PDCP SDU) to be transmitted.
- If a header compression protocol is configured for the transmitting PDCP entity, the transmitting PDCP entity performs header compression of the data.
- If integrity protection is configured for the transmitting PDCP entity, the transmitting PDCP entity generates a PDCP header and performs integrity protection of the PDCP header and the data using a security key and the COUNT value of TX_NEXT assigned to the data.
- Next, the transmitting PDCP entity performs a ciphering procedure on the data using the security key and the COUNT value of TX_NEXT assigned to the data.
- Next, the transmitting PDCP entity sets, as the PDCP sequence number, the least significant bits (LSBs), equal in length to the PDCP sequence number, of the COUNT value of the TX_NEXT variable.
- Next, the transmitting PDCP entity increments the COUNT value of the TX_NEXT variable by 1 and sends the processed data, prefixed with a PDCP header, to the lower layers.
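As an illustration of the transmit-side processing just listed, the following minimal Python sketch tracks TX_NEXT and derives the PDCP SN from the assigned COUNT value. The class and method names are illustrative only, and header compression, integrity protection, and ciphering are elided.

    class TxPdcpEntity:
        # Illustrative transmit-side COUNT handling; not the 3GPP code itself.
        def __init__(self, sn_length_bits=12):
            self.sn_length = sn_length_bits
            self.tx_next = 0  # first COUNT variable: COUNT of the next SDU to send

        def submit(self, sdu):
            count = self.tx_next                      # assign COUNT = TX_NEXT
            sn = count & ((1 << self.sn_length) - 1)  # PDCP SN = LSBs of the COUNT
            # header compression, integrity protection, and ciphering, all taking
            # `count` as an input, would be performed here
            self.tx_next += 1                         # increment TX_NEXT by 1
            return sn, sdu                            # the PDU carries only the SN

Note that the PDU header carries only the PDCP SN; the receiving entity reconstructs the full COUNT from its own window state, as described next.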
FIG.1Fillustrates a diagram for explaining an operation of a receiving PDCP entity according to a proposed embodiment. In a disclosed embodiment, the receiving PDCP entity operates as follows. The receiving PDCP entity maintains and manages 4 COUNT variables in processing received data. When processing the received data, the receiving PDCP entity uses a second COUNT variable maintaining the COUNT value of the next data (e.g., PDCP SDU) expected to be received, the second COUNT variable being referred to as RX_NEXT. When processing the received data, the receiving PDCP entity uses a third COUNT variable maintaining the COUNT value of the first data (e.g., PDCP SDU) that has not been delivered to the upper layers, the third COUNT variable being referred to as RX_DELIV. When processing the received data, the receiving PDCP entity uses a fourth COUNT variable maintaining the COUNT value of the data (e.g., PDCP SDU) that has triggered the PDCP reordering timer (t-Reordering), the fourth COUNT variable being referred to as RX_REORD. When processing the received data, the receiving PDCP entity may use a fifth COUNT variable maintaining the COUNT value of the currently received data (e.g., PDCP SDU), the fifth COUNT variable being referred to as RCVD_COUNT. The PDCP reordering timer may be set to a timer value or period configured via an RRC message in the upper layer (RRC layer) as described inFIG.1E; it is used for detecting lost PDCP PDUs, and only one such timer can run at a time.

The following variables are defined and used for the operations of the receiving PDCP entity of a UE.
- HFN: the Hyper Frame Number (HFN) part of the window state variable
- SN: the Sequence Number (SN) part of the window state variable
- RCVD_SN: the PDCP SN of the received PDCP PDU, included in the PDU header
- RCVD_HFN: the HFN value of the received PDCP PDU, calculated by the receiving PDCP entity

In a disclosed embodiment, the receiving PDCP entity operates as follows. When a PDCP PDU is received from the lower layers, the receiving PDCP entity determines the COUNT value of the received PDCP PDU (summarized in the sketch following this description):
- if RCVD_SN <= SN(RX_DELIV) − Window_Size, the entity updates RCVD_HFN to RCVD_HFN = HFN(RX_DELIV) + 1;
- else, if RCVD_SN > SN(RX_DELIV) + Window_Size, the entity updates RCVD_HFN to RCVD_HFN = HFN(RX_DELIV) − 1;
- else, the entity updates RCVD_HFN to RCVD_HFN = HFN(RX_DELIV).
The RCVD_COUNT is then determined as RCVD_COUNT = [RCVD_HFN, RCVD_SN].

After determining the COUNT value of the received PDCP PDU, the receiving PDCP entity updates the window state variables and processes the PDCP PDU as follows.
- The receiving PDCP entity performs deciphering and integrity verification of the PDCP PDU using the RCVD_COUNT value.
- If integrity verification fails, the receiving PDCP entity indicates the integrity verification failure to the upper layers and discards the received PDCP PDU (the data part of the PDCP PDU).
- If RCVD_COUNT < RX_DELIV, or if a PDCP PDU with the RCVD_COUNT value has been received before (an expired, outdated, out-of-window, or duplicated packet), the receiving PDCP entity discards the received PDCP Data PDU (the data part of the PDCP PDU).

If the received PDCP PDU is not discarded, the receiving PDCP entity operates as follows.
- The receiving PDCP entity stores the processed PDCP SDU in the reception buffer.
- If RCVD_COUNT >= RX_NEXT, the receiving PDCP entity updates RX_NEXT to RCVD_COUNT + 1.
- If the out-of-order delivery indicator (outOfOrderDelivery) is configured (i.e., out-of-order delivery is indicated), the receiving PDCP entity delivers the PDCP SDU to the upper layers.
- If RCVD_COUNT = RX_DELIV, the receiving PDCP entity performs header decompression, if not decompressed before, and delivers PDCP SDUs to the upper layers in order of the COUNT value. The receiving PDCP entity delivers all stored PDCP SDUs with consecutive COUNT values starting from COUNT = RX_DELIV to the upper layer and updates the RX_DELIV value to the COUNT value of the first PDCP SDU that is equal to or greater than the current RX_DELIV and has not been delivered to the upper layers.
- If the timer t-Reordering is running and the RX_DELIV value is equal to or greater than RX_REORD, the receiving PDCP entity stops and resets the timer t-Reordering.
- If the timer t-Reordering is not running (including the case where t-Reordering is stopped under the above condition) and RX_DELIV is less than RX_NEXT, the receiving PDCP entity updates the RX_REORD value to RX_NEXT and starts the timer t-Reordering.
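The window logic above can be condensed into a short sketch. The following Python function, whose name is illustrative, reconstructs RCVD_COUNT from the received PDCP SN and the RX_DELIV state; Window_Size is assumed to be half the SN space, and the comparison directions follow the text above.

    def determine_rcvd_count(rcvd_sn, rx_deliv, sn_length):
        window_size = 1 << (sn_length - 1)        # assumed: half the SN space
        sn_mask = (1 << sn_length) - 1
        hfn, sn = rx_deliv >> sn_length, rx_deliv & sn_mask
        if rcvd_sn <= sn - window_size:           # SN wrapped past the window
            rcvd_hfn = hfn + 1
        elif rcvd_sn > sn + window_size:          # SN belongs to the previous cycle
            rcvd_hfn = hfn - 1
        else:
            rcvd_hfn = hfn
        return (rcvd_hfn << sn_length) | rcvd_sn  # RCVD_COUNT = [RCVD_HFN, RCVD_SN]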
When the PDCP reordering timer (t-Reordering) expires, the receiving PDCP entity may operate as follows.
- The receiving PDCP entity performs header decompression, if not decompressed before, and delivers PDCP SDUs to the upper layers in order of the COUNT values.
- The receiving PDCP entity delivers all PDCP SDUs with COUNT values greater than the RX_REORD value.
- The receiving PDCP entity delivers all PDCP SDUs with consecutive COUNT values starting from the RX_REORD value.
- The receiving PDCP entity updates the RX_DELIV value to the COUNT value of the first PDCP SDU which has not been delivered to the upper layers and whose COUNT value is equal to or greater than RX_REORD.
- If the RX_DELIV value is less than the RX_NEXT value, the receiving PDCP entity updates the RX_REORD value to the RX_NEXT value and starts the timer t-Reordering.

FIG.1Gillustrates a diagram of a format of a COUNT value for use in a next generation mobile communication system according to an embodiment of the disclosure. A PDCP entity maintains the COUNT value for ciphering and integrity protection between the UE and the base station and uses the COUNT value as a parameter of a preconfigured ciphering and integrity protection algorithm in performing ciphering and integrity protection on a PDCP packet. The detailed description thereof is made with reference toFIG.1G.

All PDCP packets (data packets and control message packets) have a PDCP sequence number (SN), which increments by 1 for every packet. When the SN reaches the preconfigured maximum PDCP SN, the SN restarts from 0 and the HFN increases by 1. In this case, an SN that has been used before may be assigned to the current PDCP packet. If a hacker intercepts a PDCP SN value and attempts hacking into the communication between a UE and a base station using this previously used PDCP SN value, the PDCP data transmitted by the hacker may continuously increase the PDCP SN, resulting in an HFN desynchronization problem between the transmitter and the receiver. Even if there is no hacking intrusion, if a large amount of data is lost, this is likely to cause the HFN desynchronization problem and decoding failure of the received data.

The COUNT value has a length of 32 bits and consists of the HFN1g-05and the PDCP SN1g-10. A UE and a base station maintain the COUNT value for use in ciphering and integrity protection. In actual data transmission, the PDCP packet (PDCP PDU) includes only the PDCP SN. Accordingly, it is difficult for a hacker to acquire an accurate COUNT value from only the PDCP SN communicated over a radio channel. The base station transmits to the UE an RRC message including PDCP configuration information indicating a PDCP SN length set to 12 or 18 bits, and the PDCP SN length determines the HFN length in the COUNT value (32 bits minus the PDCP SN length).
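Since the COUNT value is simply the concatenation [HFN | PDCP SN], the split can be expressed directly; the function names below are illustrative.

    def pack_count(hfn, sn, sn_length):
        # 32-bit COUNT = [HFN | PDCP SN]; the HFN occupies (32 - sn_length) bits
        assert 0 <= sn < (1 << sn_length) and 0 <= hfn < (1 << (32 - sn_length))
        return (hfn << sn_length) | sn

    def unpack_count(count, sn_length):
        return count >> sn_length, count & ((1 << sn_length) - 1)

    # with a 12-bit PDCP SN the HFN is 20 bits; with an 18-bit SN it is 14 bits
    assert unpack_count(pack_count(3, 7, 18), 18) == (3, 7)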
FIG.1Hillustrates a diagram for explaining a ciphering procedure of a PDCP entity, using a COUNT value, according to an embodiment of the disclosure. In reference toFIG.1H, the transmitting PDCP entity performs ciphering on the data received from the upper layers, and the receiving PDCP entity performs deciphering on the data. In the next generation mobile communication system, all packets are transmitted/received without being ciphered before the AS security is activated; once the AS security is activated, all traffic (control plane (CP) and user plane (UP) data) is ciphered before being transmitted. That is, as described with reference toFIG.1E, after the security configuration has been completed between the base station and the UE by exchanging the SecurityModeCommand and SecurityModeComplete messages, the RRC messages exchanged between the base station and the terminal are ciphered and integrity-protected, and the corresponding IP packets are secured.

After AS security setup, if data arrive from the upper layers at step1h-05, the transmitting PDCP entity performs, at step1h-20, an exclusive OR (XOR) operation on a key stream block, acquired through a key generation algorithm (e.g., an EPS Encryption Algorithm (EEA)) for ciphering at the UE, and the plain data block to generate a ciphered user packet. Here, the key stream block for use in ciphering may be acquired through the key generation algorithm with the input of parameters such as a key for user plane ciphering (K_UPenc1h-10) that is obtained from K_gNB, COUNT (the 32-bit COUNT value), Bearer ID, Direction (message transmission direction: 0 for uplink and 1 for downlink), and Length (key stream block length). If the user data packet encrypted by the transmitting PDCP entity is received, the receiving PDCP entity generates the same key stream block used for encryption by applying the same key generation algorithm applied by the UE and performs the XOR operation thereon at step1h-35. As in the terminal, the key stream block for use in deciphering at the base station may be acquired through the key generation algorithm with the input of parameters such as the key for user plane ciphering (K_UPenc1h-10) obtained from K_gNB, COUNT (the 32-bit COUNT value), Bearer ID, Direction (message transmission direction: 0 for uplink and 1 for downlink), and Length (key stream block length). The receiver may perform the deciphering procedure in the reverse order of the ciphering procedure of the transmitter.
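The XOR structure of the ciphering procedure can be sketched as follows. The keystream derivation below is a stand-in (a hash over the inputs), not EEA or any 3GPP algorithm; it only illustrates that both ends derive the keystream from KEY, COUNT, BEARER, DIRECTION, and LENGTH, so deciphering succeeds only when the COUNT values match.

    import hashlib

    def keystream(key, count, bearer, direction, length):
        # stand-in keystream generator (NOT EEA); same inputs as described above
        out = b""
        block = 0
        while len(out) < length:
            out += hashlib.sha256(
                key + count.to_bytes(4, "big") + bytes([bearer, direction, block])
            ).digest()
            block += 1
        return out[:length]

    def cipher(key, count, bearer, direction, data):
        ks = keystream(key, count, bearer, direction, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

    # deciphering is the same XOR with the same keystream, so the round trip
    # succeeds only if both ends use the same COUNT value:
    key = bytes(16)
    assert cipher(key, 42, 1, 0, cipher(key, 42, 1, 0, b"payload")) == b"payload"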
In order to perform the ciphering and deciphering procedures precisely, the COUNT values stored in the UE and the base station should be accurate. That is, it is necessary to check whether the COUNT value is accurate so as to apply an accurate ciphering key to the PDCP packet to be ciphered. To accomplish this, a disclosed embodiment proposes a method for a base station to instruct a UE to perform a COUNT CHECK operation in a next generation mobile communication system. That is, the UE verifies the validity of the COUNT value in response to a request from the base station and transmits, if the validity is verified, the current COUNT value to the base station.

FIG.1Iillustrates a signal flow diagram of a COUNT CHECK procedure proposed in an embodiment of the disclosure. FIG.1Ishows the entire operation for a gNB1i-02to check the COUNT value of the UE1i-01, and the gNB1i-02may verify whether the COUNT value configured per bearer is valid and whether the COUNT value is synchronized through the proposed procedure. In reference toFIG.1I, the UE1i-01may establish an RRC connection with the gNB1i-02at step1i-05. If necessary, e.g., when a decoding failure, an HFN desynchronization problem, or data intrusion by a hacker is suspected, the gNB1i-02transmits a CounterCheck RRC message to the UE1i-01, at step1i-10, to request a per-bearer COUNT check and report so as to determine whether the COUNT value is well synchronized between the transmitting PDCP entity and the receiving PDCP entity. This message may be carried by an RRCConnectionReconfiguration or RRCConnectionReestablishment message transmitted on a dedicated control channel (DCCH).

The CounterCheck message may include a list of bearers (e.g., DRBs or SRBs) for the per-bearer COUNT check, drb-CountMSB-InfoList, which contains a DRB identifier, countMSB-Uplink (25 bits), and countMSB-Downlink (25 bits). That is, the list includes the identifiers of the bearers for which the COUNT check is required and the 25 most significant bits (MSBs) of each of the uplink and downlink COUNT values of the gNB1i-02on the corresponding bearers. Upon receipt of this message, the UE1i-01may compare the 25 MSBs of the uplink COUNT value (countMSB-Uplink) for the bearer identified by the bearer identifier included in the drb-CountMSB-InfoList of the message with the 25 MSBs of the uplink COUNT value stored in the UE1i-01. The UE1i-01may also compare the 25 MSBs of the downlink COUNT value (countMSB-Downlink) for the bearer identified by the bearer identifier included in the drb-CountMSB-InfoList of the message with the 25 MSBs of the downlink COUNT value stored in the UE1i-01.

If it is determined that the two values, in downlink or uplink, are different from each other, the UE1i-01configures, at step1i-15, a drb-CountInfoList for reporting the 32-bit full COUNT values for the corresponding bearers and generates a Counter Check Response message including the identifiers of the bearers for which the compared values differ and the 32-bit full COUNT values. If it is determined that the 25 MSBs of the COUNT values (countMSB-Uplink and countMSB-Downlink) indicated by the gNB1i-02are identical with those of the COUNT values stored in the UE1i-01in both the uplink and downlink, the corresponding values are not included in the reporting list. At step1i-15, the UE1i-01also configures the reporting list (drb-CountInfoList) to report the 32-bit full COUNT values for the bearers that are not indicated by the bearer identifiers in the drb-CountMSB-InfoList of the Counter Check message received from the gNB1i-02and generates the Counter Check Response message including the 32-bit full COUNT values along with the identifiers of the non-indicated bearers. After the per-bearer COUNT value comparison and report determination, the UE1i-01transmits, at step1i-20, a CounterCheckResponse message including the reporting list configured in the previous step to the gNB1i-02.
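The per-direction comparison performed at step1i-15reduces to comparing bit fields; a minimal sketch, with illustrative names:

    def count_msbs(count, num_msbs=25):
        # the most significant bits of a 32-bit COUNT value
        return count >> (32 - num_msbs)

    def count_check_mismatch(count_msb_indicated, local_count, num_msbs=25):
        # the full 32-bit COUNT is reported when the indicated MSBs differ
        # from the MSBs of the locally stored COUNT value
        return count_msb_indicated != count_msbs(local_count, num_msbs)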
Hereinafter, a description is made of the operation of the UE1i-01for determining the COUNT values to compare and apply when comparing the 25 MSBs of the COUNT values (countMSB-Uplink or countMSB-Downlink) indicated per bearer by the gNB1i-02in the COUNT CHECK procedure proposed in the embodiment ofFIG.1I. According to a first embodiment, when the gNB1i-02transmits the Counter Check message indicating use of the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink), the UE1i-01determines the COUNT values to be compared and reported as follows.

In the first embodiment, for uplink, when comparing with the 25 MSBs of the countMSB-Uplink, the UE1i-01uses the TX_NEXT value, i.e., the first COUNT variable maintaining the COUNT value of the next data to be transmitted by the transmitting PDCP entity; for downlink, when comparing with the 25 MSBs of the countMSB-Downlink, the UE1i-01uses the second COUNT variable (RX_NEXT) maintaining the COUNT value of the next data (e.g., PDCP SDU) expected to be received, among the 4 COUNT variables (i.e., the second COUNT variable (RX_NEXT), the third COUNT variable (RX_DELIV), the fourth COUNT variable (RX_REORD), and the fifth COUNT variable (RCVD_COUNT)) in use by the receiving PDCP entity; and the UE1i-01reports these COUNT values via the Counter Check Response message.

In the first embodiment, when the Counter Check message is received, the UE1i-01operates as follows (see the sketch after this list).
- For each established bearer (e.g., DRB), if there is no COUNT value for a given direction (uplink or downlink) because the bearer is a unidirectional bearer configured only in the opposite direction, the UE1i-01assumes that the COUNT value is 0 for the direction not in use.
- If the drb-CountMSB-InfoList does not include the bearer identifier of a bearer of the UE, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT and RX_NEXT.
- If the drb-CountMSB-InfoList includes the bearer identifier of a bearer of the UE and the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink) indicated in the drb-CountMSB-InfoList differ, for at least one direction, from the corresponding COUNT value (TX_NEXT or RX_NEXT) for the bearer, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT and RX_NEXT.
- For each bearer (e.g., DRB) indicated in the drb-CountMSB-InfoList of the Counter Check message but not established, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values whose 25 MSBs are set equal to the countMSB-Uplink and countMSB-Downlink indicated in the drb-CountMSB-InfoList and whose 7 least significant bits (LSBs) are set to 0.
- The Counter Check Response message configured as above is sent to the lower layers for transmission.
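The following sketch illustrates this report rule, assuming simple dictionaries in place of the ASN.1 structures; the function and variable names are illustrative. The later embodiments differ only in which COUNT variables (e.g., TX_NEXT-1, RX_DELIV, RCVD_COUNT) feed the per-bearer values.

    def build_drb_count_info_list(established, indicated):
        # established: {drb_id: (uplink COUNT, downlink COUNT)}, here TX_NEXT/RX_NEXT
        # indicated: {drb_id: (countMSB-Uplink, countMSB-Downlink)} from the message
        report = []
        for drb_id, (ul, dl) in established.items():
            if drb_id not in indicated:
                report.append((drb_id, ul, dl))             # bearer not in the list
            else:
                msb_ul, msb_dl = indicated[drb_id]
                if msb_ul != ul >> 7 or msb_dl != dl >> 7:  # 25-MSB mismatch
                    report.append((drb_id, ul, dl))
        for drb_id, (msb_ul, msb_dl) in indicated.items():
            if drb_id not in established:                   # indicated but not established
                report.append((drb_id, msb_ul << 7, msb_dl << 7))  # 7 LSBs set to 0
        return report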
According to a second embodiment, when the gNB1i-02transmits the Counter Check message indicating use of the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink), the UE1i-01determines the COUNT values to be compared and reported as follows.

In the second embodiment, for uplink, when comparing with the 25 MSBs of the countMSB-Uplink, the UE1i-01uses a value obtained by subtracting 1 from the TX_NEXT value, i.e., the first COUNT variable maintaining the COUNT value of the next data to be transmitted by the transmitting PDCP entity; for downlink, when comparing with the 25 MSBs of the countMSB-Downlink, the UE1i-01uses a value obtained by subtracting 1 from the second COUNT variable (RX_NEXT) maintaining the COUNT value of the next data (e.g., PDCP SDU) expected to be received, among the 4 COUNT variables (i.e., the second COUNT variable (RX_NEXT), the third COUNT variable (RX_DELIV), the fourth COUNT variable (RX_REORD), and the fifth COUNT variable (RCVD_COUNT)) in use by the receiving PDCP entity; and the UE1i-01reports these COUNT values via the Counter Check Response message. Because the COUNT variables (TX_NEXT and RX_NEXT) indicate the COUNT values of the next data to be transmitted or received, they should be decremented by 1 to indicate the current COUNT values.

In the second embodiment, when the Counter Check message is received, the UE1i-01operates as follows.
- For each established bearer (e.g., DRB), if there is no COUNT value for a given direction (uplink or downlink) because the bearer is a unidirectional bearer configured only in the opposite direction, the UE1i-01assumes that the COUNT value is 0 for the direction not in use.
- If the drb-CountMSB-InfoList does not include the bearer identifier of a bearer of the UE, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT-1 and RX_NEXT-1.
- If the drb-CountMSB-InfoList includes the bearer identifier of a bearer of the UE and the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink) indicated in the drb-CountMSB-InfoList differ, for at least one direction, from the corresponding COUNT value (TX_NEXT-1 or RX_NEXT-1) for the bearer, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT-1 and RX_NEXT-1.
- For each bearer (e.g., DRB) indicated in the drb-CountMSB-InfoList of the Counter Check message but not established, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values whose 25 MSBs are set equal to the countMSB-Uplink and countMSB-Downlink indicated in the drb-CountMSB-InfoList and whose 7 least significant bits (LSBs) are set to 0.
- The Counter Check Response message configured as above is sent to the lower layers for transmission.

According to a third embodiment, when the gNB1i-02transmits the Counter Check message indicating use of the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink), the UE1i-01determines the COUNT values to be compared and reported as follows.
In the third embodiment, for uplink, when comparing with the 25 MSBs of the countMSB-Uplink, the UE1i-01uses the TX_NEXT value, i.e., the first COUNT variable maintaining the COUNT value of the next data to be transmitted by the transmitting PDCP entity; for downlink, when comparing with the 25 MSBs of the countMSB-Downlink, the UE1i-01uses the third COUNT variable (RX_DELIV) maintaining the COUNT value of the first data (e.g., PDCP SDU) that has not been delivered to the upper layers, among the 4 COUNT variables (i.e., the second COUNT variable (RX_NEXT), the third COUNT variable (RX_DELIV), the fourth COUNT variable (RX_REORD), and the fifth COUNT variable (RCVD_COUNT)) in use by the receiving PDCP entity; and the UE1i-01reports these COUNT values via the Counter Check Response message.

In the third embodiment, when the Counter Check message is received, the UE1i-01operates as follows.
- For each established bearer (e.g., DRB), if there is no COUNT value for a given direction (uplink or downlink) because the bearer is a unidirectional bearer configured only in the opposite direction, the UE1i-01assumes that the COUNT value is 0 for the direction not in use.
- If the drb-CountMSB-InfoList does not include the bearer identifier of a bearer of the UE, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT and RX_DELIV.
- If the drb-CountMSB-InfoList includes the bearer identifier of a bearer of the UE and the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink) indicated in the drb-CountMSB-InfoList differ, for at least one direction, from the corresponding COUNT value (TX_NEXT or RX_DELIV) for the bearer, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT and RX_DELIV.
- For each bearer (e.g., DRB) indicated in the drb-CountMSB-InfoList of the Counter Check message but not established, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values whose 25 MSBs are set equal to the countMSB-Uplink and countMSB-Downlink indicated in the drb-CountMSB-InfoList and whose 7 least significant bits (LSBs) are set to 0.
- The Counter Check Response message configured as above is sent to the lower layers for transmission.

According to a fourth embodiment, when the gNB1i-02transmits the Counter Check message indicating use of the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink), the UE1i-01determines the COUNT values to be compared and reported as follows.
In the fourth embodiment, for uplink, when comparing with the 25 MSBs of the countMSB-Uplink, the UE1i-01uses a value obtained by subtracting 1 from the TX_NEXT value, i.e., the first COUNT variable maintaining the COUNT value of the next data to be transmitted by the transmitting PDCP entity; for downlink, when comparing with the 25 MSBs of the countMSB-Downlink, the UE1i-01uses a value obtained by subtracting 1 from the third COUNT variable (RX_DELIV) maintaining the COUNT value of the first data (e.g., PDCP SDU) that has not been delivered to the upper layers, among the 4 COUNT variables (i.e., the second COUNT variable (RX_NEXT), the third COUNT variable (RX_DELIV), the fourth COUNT variable (RX_REORD), and the fifth COUNT variable (RCVD_COUNT)) in use by the receiving PDCP entity; and the UE1i-01reports these COUNT values via the Counter Check Response message. Because the COUNT variables (TX_NEXT and RX_DELIV) indicate the COUNT values of the next data to be transmitted or delivered, they should be decremented by 1 to indicate the current COUNT values.

In the fourth embodiment, when the Counter Check message is received, the UE1i-01operates as follows.
- For each established bearer (e.g., DRB), if there is no COUNT value for a given direction (uplink or downlink) because the bearer is a unidirectional bearer configured only in the opposite direction, the UE1i-01assumes that the COUNT value is 0 for the direction not in use.
- If the drb-CountMSB-InfoList does not include the bearer identifier of a bearer of the UE, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT-1 and RX_DELIV-1.
- If the drb-CountMSB-InfoList includes the bearer identifier of a bearer of the UE and the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink) indicated in the drb-CountMSB-InfoList differ, for at least one direction, from the corresponding COUNT value (TX_NEXT-1 or RX_DELIV-1) for the bearer, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT-1 and RX_DELIV-1.
- For each bearer (e.g., DRB) indicated in the drb-CountMSB-InfoList of the Counter Check message but not established, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values whose 25 MSBs are set equal to the countMSB-Uplink and countMSB-Downlink indicated in the drb-CountMSB-InfoList and whose 7 least significant bits (LSBs) are set to 0.
- The Counter Check Response message configured as above is sent to the lower layers for transmission.

According to a fifth embodiment, when the gNB1i-02transmits the Counter Check message indicating use of the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink), the UE1i-01determines the COUNT values to be compared and reported as follows.
In the fifth embodiment, for uplink, when comparing with the 25 MSBs of the countMSB-Uplink, the UE1i-01uses the TX_NEXT value, i.e., the first COUNT variable maintaining the COUNT value of the next data to be transmitted by the transmitting PDCP entity; for downlink, when comparing with the 25 MSBs of the countMSB-Downlink, the UE1i-01uses the fifth COUNT variable (RCVD_COUNT) maintaining the COUNT value of the currently received data, among the 4 COUNT variables (i.e., the second COUNT variable (RX_NEXT), the third COUNT variable (RX_DELIV), the fourth COUNT variable (RX_REORD), and the fifth COUNT variable (RCVD_COUNT)) in use by the receiving PDCP entity; and the UE1i-01reports these COUNT values via the Counter Check Response message.

In the fifth embodiment, when the Counter Check message is received, the UE1i-01operates as follows.
- For each established bearer (e.g., DRB), if there is no COUNT value for a given direction (uplink or downlink) because the bearer is a unidirectional bearer configured only in the opposite direction, the UE1i-01assumes that the COUNT value is 0 for the direction not in use.
- If the drb-CountMSB-InfoList does not include the bearer identifier of a bearer of the UE, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT and RCVD_COUNT.
- If the drb-CountMSB-InfoList includes the bearer identifier of a bearer of the UE and the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink) indicated in the drb-CountMSB-InfoList differ, for at least one direction, from the corresponding COUNT value (TX_NEXT or RCVD_COUNT) for the bearer, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT and RCVD_COUNT.
- For each bearer (e.g., DRB) indicated in the drb-CountMSB-InfoList of the Counter Check message but not established, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values whose 25 MSBs are set equal to the countMSB-Uplink and countMSB-Downlink indicated in the drb-CountMSB-InfoList and whose 7 least significant bits (LSBs) are set to 0.
- The Counter Check Response message configured as above is sent to the lower layers for transmission.

According to a sixth embodiment, when the gNB1i-02transmits the Counter Check message indicating use of the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink), the UE1i-01determines the COUNT values to be compared and reported as follows. In the sixth embodiment, for uplink, when comparing with the 25 MSBs of the countMSB-Uplink, the UE1i-01uses a value acquired by subtracting 1 from the TX_NEXT value, i.e., the first COUNT variable maintaining the COUNT value of the next data to be transmitted by the transmitting PDCP entity; for downlink, when comparing with the 25 MSBs of the countMSB-Downlink, the UE1i-01uses a value acquired by subtracting 1 from the fifth COUNT variable (RCVD_COUNT) maintaining the COUNT value of the currently received data, among the 4 COUNT variables (i.e., the second COUNT variable (RX_NEXT), the third COUNT variable (RX_DELIV), the fourth COUNT variable (RX_REORD), and the fifth COUNT variable (RCVD_COUNT)) in use by the receiving PDCP entity; and the UE1i-01reports these COUNT values via the Counter Check Response message.
Because the COUNT variables (TX_NEXT and RCVD_COUNT) indicate the COUNT values of the next data to be transmitted or the currently received data, they should be decremented by 1 to indicate the current COUNT value. In the sixth embodiment, when the Counter Check message is received, the UE1i-01operates as follows.
- For each established bearer (e.g., DRB), if there is no COUNT value for a given direction (uplink or downlink) because the bearer is a unidirectional bearer configured only in the opposite direction, the UE1i-01assumes that the COUNT value is 0 for the direction not in use.
- If the drb-CountMSB-InfoList does not include the bearer identifier of a bearer of the UE, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT-1 and RCVD_COUNT-1.
- If the drb-CountMSB-InfoList includes the bearer identifier of a bearer of the UE and the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink) indicated in the drb-CountMSB-InfoList differ, for at least one direction, from the corresponding COUNT value (TX_NEXT-1 or RCVD_COUNT-1) for the bearer, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT-1 and RCVD_COUNT-1.
- For each bearer (e.g., DRB) indicated in the drb-CountMSB-InfoList of the Counter Check message but not established, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values whose 25 MSBs are set equal to the countMSB-Uplink and countMSB-Downlink indicated in the drb-CountMSB-InfoList and whose 7 least significant bits (LSBs) are set to 0.
- The Counter Check Response message configured as above is sent to the lower layers for transmission.

The first to sixth embodiments of the disclosure are directed to the COUNT check procedures in which an NR PDCP entity is established per data bearer in the UE1i-01. However, a UE1i-01capable of accessing both an LTE eNB and an NR gNB may be configured, per data bearer, with either an NR PDCP entity or an LTE PDCP entity as the PDCP entity version changes. In this case, the method for selecting the COUNT values to be compared and reported in the Counter check procedure may vary according to whether the PDCP entity configured for each bearer (DRB) is an NR PDCP entity or an LTE PDCP entity. Hereinafter, a description is made of the method for selecting the COUNT values to be compared and reported differently according to whether the PDCP entity configured for each bearer (e.g., DRB) is an NR PDCP entity or an LTE PDCP entity. According to a seventh embodiment, when the eNB1i-02transmits the Counter Check message indicating use of the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink), the UE1i-01determines the COUNT values to be compared and reported according to whether the PDCP entity configured per bearer is an LTE PDCP entity or an NR PDCP entity, as follows.
In the seventh embodiment, if an NR PDCP entity is configured for a bearer, the UE1i-01uses, for uplink, a value acquired by subtracting 1 from the TX_NEXT value, i.e., the first COUNT variable maintaining the COUNT value of the next data to be transmitted by the transmitting PDCP entity, when comparing with the 25 MSBs of the countMSB-Uplink; for downlink, the UE1i-01uses a value acquired by subtracting 1 from the second COUNT variable (RX_NEXT) maintaining the COUNT value of the next data (e.g., PDCP SDU) expected to be received, among the 4 COUNT variables (i.e., the second COUNT variable (RX_NEXT), the third COUNT variable (RX_DELIV), the fourth COUNT variable (RX_REORD), and the fifth COUNT variable (RCVD_COUNT)) in use by the receiving PDCP entity, when comparing with the 25 MSBs of the countMSB-Downlink; and the UE1i-01reports these COUNT values via the Counter Check Response message. Because the COUNT variables (TX_NEXT and RX_NEXT) indicate the COUNT values of the next data to be transmitted or received, they should be decremented by 1 to indicate the current COUNT values.

In the seventh embodiment, if an LTE PDCP entity is configured for a bearer, the UE1i-01uses, for uplink, a COUNT value generated based on the TX_HFN variable, which maintains the HFN value of the transmitter, and a value acquired by subtracting 1 from the Next_PDCP_TX_SN value, i.e., the first window variable maintaining the PDCP SN of the next data to be transmitted by the transmitting PDCP entity, when comparing with the 25 MSBs of the countMSB-Uplink; for downlink, the UE1i-01uses a COUNT value generated based on the RX_HFN variable, which maintains the HFN value of the receiver, and a value acquired by subtracting 1 from the second window variable (Next_PDCP_RX_SN) maintaining the PDCP SN value of the next data (e.g., PDCP SDU) expected to be received, among the 3 window variables (i.e., the second window variable (Next_PDCP_RX_SN, indicating the PDCP SN of the next data expected to be received), the third window variable (Last_Submitted_PDCP_RX_SN, indicating the PDCP SN of the last data delivered to the upper layers), and the fourth window variable (Reordering_PDCP_RX_COUNT, indicating the COUNT value triggering a timer)) in use by the receiving PDCP entity, when comparing with the 25 MSBs of the countMSB-Downlink; and the UE1i-01reports these COUNT values via the Counter Check Response message. Because the window variables (Next_PDCP_TX_SN and Next_PDCP_RX_SN) indicate the PDCP SN values of the next data to be transmitted or received, they should be decremented by 1 to indicate the current PDCP SN value.
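For an LTE PDCP entity, the current COUNT is thus assembled from the HFN variable and the decremented window variable. A sketch follows; the handling of the wrap-around case (Next_PDCP_SN equal to 0, where the previous SN belongs to the prior HFN) is an assumption, as the text does not spell it out.

    def lte_current_count(hfn, next_pdcp_sn, sn_length):
        # hfn: TX_HFN or RX_HFN; next_pdcp_sn: Next_PDCP_TX_SN or Next_PDCP_RX_SN
        current_sn = next_pdcp_sn - 1
        if current_sn < 0:               # assumed wrap-around handling
            current_sn += 1 << sn_length
            hfn -= 1
        return (hfn << sn_length) | current_sn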
In the seventh embodiment, when the Counter Check message is received, the UE1i-01operates as follows.
- For each established bearer (e.g., DRB), if there is no COUNT value for a given direction (uplink or downlink) because the bearer is a unidirectional bearer configured only in the opposite direction, the UE1i-01assumes that the COUNT value is 0 for the direction not in use.
- If the drb-CountMSB-InfoList does not include the bearer identifier of a bearer of the UE and an LTE PDCP entity is configured for that bearer, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to the COUNT value based on TX_HFN and Next_PDCP_TX_SN−1 and the COUNT value based on RX_HFN and Next_PDCP_RX_SN−1.
- If the drb-CountMSB-InfoList includes the bearer identifier of a bearer of the UE, an LTE PDCP entity is configured for the bearer, and the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink) indicated in the drb-CountMSB-InfoList differ, for at least one direction, from the corresponding COUNT value (based on TX_HFN and Next_PDCP_TX_SN−1 or based on RX_HFN and Next_PDCP_RX_SN−1) for the bearer, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to the COUNT value based on TX_HFN and Next_PDCP_TX_SN−1 and the COUNT value based on RX_HFN and Next_PDCP_RX_SN−1.
- If the drb-CountMSB-InfoList does not include the bearer identifier of a bearer of the UE and an NR PDCP entity is configured for that bearer, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT-1 and RX_NEXT-1.
- If the drb-CountMSB-InfoList includes the bearer identifier of a bearer of the UE, an NR PDCP entity is configured for the bearer, and the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink) indicated in the drb-CountMSB-InfoList differ, for at least one direction, from the corresponding COUNT value (TX_NEXT-1 or RX_NEXT-1) for the bearer, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to TX_NEXT-1 and RX_NEXT-1.
- For each bearer (e.g., DRB) indicated in the drb-CountMSB-InfoList of the Counter Check message but not established, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values whose 25 MSBs are set equal to the countMSB-Uplink and countMSB-Downlink indicated in the drb-CountMSB-InfoList and whose 7 least significant bits (LSBs) are set to 0.
- The Counter Check Response message configured as above is sent to the lower layers for transmission.

Hereinafter, a description is made of the method for selecting the COUNT values to be compared and reported regardless of whether the PDCP entity configured per bearer (e.g., DRB) is an NR PDCP entity or an LTE PDCP entity. According to an eighth embodiment, when the eNB1i-02transmits the Counter Check message indicating use of the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink), the UE1i-01determines the COUNT values to be compared and reported, regardless of whether the PDCP entity configured per bearer is an LTE PDCP entity or an NR PDCP entity, as follows.
In the eighth embodiment, for uplink, when comparing with the 25 MSBs of the countMSB-Uplink, the UE1i-01uses the COUNT value corresponding to the data with the highest (or largest) PDCP SN among the data (e.g., PDCP SDUs or PDCP PDUs) transmitted by the transmitting PDCP entity until then; for downlink, when comparing with the 25 MSBs of the countMSB-Downlink, the UE1i-01uses the COUNT value corresponding to the data with the highest (or largest) PDCP SN among the data (e.g., PDCP SDUs or PDCP PDUs) received by the receiving PDCP entity until then; and the UE1i-01reports these COUNT values via the Counter Check Response message.

In the eighth embodiment, when the Counter Check message is received, the UE1i-01operates as follows.
- For each established bearer (e.g., DRB), if there is no COUNT value for a given direction (uplink or downlink) because the bearer is a unidirectional bearer configured only in the opposite direction, the UE1i-01assumes that the COUNT value is 0 for the direction not in use.
- If the drb-CountMSB-InfoList does not include the bearer identifier of a bearer of the UE and an LTE PDCP entity is configured for that bearer, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to the COUNT value corresponding to the data with the highest (or largest) PDCP SN among the data (e.g., PDCP SDUs or PDCP PDUs) transmitted until then and the COUNT value corresponding to the data with the highest (or largest) PDCP SN among the data (e.g., PDCP SDUs or PDCP PDUs) received until then.
- If the drb-CountMSB-InfoList includes the bearer identifier of a bearer of the UE, an LTE PDCP entity is configured for the bearer, and the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink) indicated in the drb-CountMSB-InfoList differ, for at least one direction, from the corresponding COUNT value (the COUNT value corresponding to the data with the highest PDCP SN transmitted or received until then) for the bearer, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to the COUNT value corresponding to the data with the highest (or largest) PDCP SN among the data transmitted until then and the COUNT value corresponding to the data with the highest (or largest) PDCP SN among the data received until then.
- For each bearer (e.g., DRB) indicated in the drb-CountMSB-InfoList of the Counter Check message but not established, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values whose 25 MSBs are set equal to the countMSB-Uplink and countMSB-Downlink indicated in the drb-CountMSB-InfoList and whose 7 least significant bits (LSBs) are set to 0.
- The Counter Check Response message configured as above is sent to the lower layers for transmission.

Hereinafter, a description is made of the Counter check procedure for a terminal1i-01accessible only to an LTE eNB, that is, a method for the UE1i-01, which is allowed to establish only per-bearer (e.g., DRB) LTE PDCP entities, to select and report COUNT values.
According to a ninth embodiment, when the eNB1i-02transmits the Counter Check message indicating use of the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink), the UE1i-01configured with an LTE PDCP entity per bearer determines the COUNT values to be compared and reported as follows.

In the ninth embodiment, because an LTE PDCP entity is configured per bearer, the UE1i-01uses, for uplink, a COUNT value generated based on the TX_HFN variable, which maintains the HFN value of the transmitter, and a value acquired by subtracting 1 from the Next_PDCP_TX_SN value, i.e., the first window variable maintaining the PDCP SN of the next data to be transmitted by the transmitting PDCP entity, when comparing with the 25 MSBs of the countMSB-Uplink; for downlink, the UE1i-01uses a COUNT value generated based on the RX_HFN variable, which maintains the HFN value of the receiver, and a value acquired by subtracting 1 from the second window variable (Next_PDCP_RX_SN) maintaining the PDCP SN value of the next data (e.g., PDCP SDU) expected to be received, among the 3 window variables (i.e., the second window variable (Next_PDCP_RX_SN, indicating the PDCP SN of the next data expected to be received), the third window variable (Last_Submitted_PDCP_RX_SN, indicating the PDCP SN of the last data delivered to the upper layers), and the fourth window variable (Reordering_PDCP_RX_COUNT, indicating the COUNT value triggering a timer)) in use by the receiving PDCP entity, when comparing with the 25 MSBs of the countMSB-Downlink; and the UE1i-01reports these COUNT values via the Counter Check Response message. Because the window variables (Next_PDCP_TX_SN and Next_PDCP_RX_SN) indicate the PDCP SN values of the next data to be transmitted or received, they should be decremented by 1 to indicate the current PDCP SN value.
In the ninth embodiment, when the Counter Check message is received, the UE1i-01operates as follows.
- For each established bearer (e.g., DRB), if there is no COUNT value for a given direction (uplink or downlink) because the bearer is a unidirectional bearer configured only in the opposite direction, the UE1i-01assumes that the COUNT value is 0 for the direction not in use.
- If the drb-CountMSB-InfoList does not include the bearer identifier of a bearer of the UE, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to the COUNT value based on TX_HFN and Next_PDCP_TX_SN−1 and the COUNT value based on RX_HFN and Next_PDCP_RX_SN−1.
- If the drb-CountMSB-InfoList includes the bearer identifier of a bearer of the UE and the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink) indicated in the drb-CountMSB-InfoList differ, for at least one direction, from the corresponding COUNT value (based on TX_HFN and Next_PDCP_TX_SN−1 or based on RX_HFN and Next_PDCP_RX_SN−1) for the bearer, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values set respectively to the COUNT value based on TX_HFN and Next_PDCP_TX_SN−1 and the COUNT value based on RX_HFN and Next_PDCP_RX_SN−1.
- For each bearer (e.g., DRB) indicated in the drb-CountMSB-InfoList of the Counter Check message but not established, the UE includes, in the drb-CountInfoList of the Counter Check Response message, the bearer identifier of the bearer and the count-Uplink and count-Downlink values whose 25 MSBs are set equal to the countMSB-Uplink and countMSB-Downlink indicated in the drb-CountMSB-InfoList and whose 7 least significant bits (LSBs) are set to 0.
- The Counter Check Response message configured as above is sent to the lower layers for transmission.

FIG.1Jillustrates a diagram for explaining a method for reducing the size of the MSBs of a COUNT value indicated in the proposed Counter check procedure according to an embodiment of the disclosure. The proposed Counter check procedure aims to check the HFN part of the per-bearer COUNT value. Accordingly, the size of the MSBs of the COUNT value indicated by the gNB1i-02may be dramatically reduced according to the length of the configurable PDCP SN. As aforementioned, in the next generation mobile communication system, the length of the PDCP SN may be set to 12 bits, as denoted by reference number1j-05, or 18 bits, as denoted by reference number1j-10. For all bearers, 20 MSBs of the COUNT value are enough even when comparing HFN values, as denoted by reference number1j-15. If 20 MSBs of the COUNT value are used instead of the 25 MSBs described in the above Counter check procedures, it is possible to reduce the overhead by 5 bits per bearer. It may also be possible to further reduce the header overhead by setting the number of MSBs to (32 bits minus the per-bearer PDCP SN length). Given that an RLC entity supports a 6-bit RLC SN, if a new PDCP SN length of 6 bits is introduced to reduce the header overhead, the HFN occupying the MSBs of the COUNT may be extended to 26 bits in the above Counter check procedure for comparison accuracy.
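The overhead arithmetic is straightforward; a short check under the PDCP SN lengths named above:

    def required_msb_length(pdcp_sn_length):
        # comparing HFNs only requires the bits above the PDCP SN,
        # i.e. (32 - PDCP SN length) MSBs per bearer
        return 32 - pdcp_sn_length

    assert required_msb_length(12) == 20  # 12-bit SN: 20 MSBs suffice
    assert required_msb_length(18) == 14  # 18-bit SN: 14 MSBs suffice
    assert required_msb_length(6) == 26   # a 6-bit SN extends the HFN field to 26 bits
    # versus a fixed 25 MSBs, a 12-bit-SN bearer thus saves 5 bits per bearer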
The proposed Counter check procedure may be applied when a Counter check message is transmitted on SRB1in use by the gNB1i-02corresponding to a master cell group (MCG). Hereinafter, a description is made of the operation of a UE when the UE receives a Counter check message transmitted on SRB3in use by the gNB1i-02corresponding to a secondary cell group (SCG) rather than an MCG. The description is made with reference toFIG.1I. The UE operations of determining the COUNT values to be compared and reported upon receipt of the Counter Check message indicating use of the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink) from the gNB1i-02according to the first to sixth embodiments are applicable to the Counter check procedure in which the Counter check message is transmitted on SRB3in use by an SCG. The method for reducing the overhead of the MSBs of the COUNT value, as described with reference toFIG.1J, may also be applicable to the Counter check procedure in which the Counter check message is transmitted on SRB3in use by an SCG.

In the case where an RRC connection is established between the UE1i-01and the gNB1i-02, the gNB1i-02transmits a CounterCheck RRC message to the UE1i-01at step1i-10to request a per-DRB COUNT check and report. This message may be carried by an RRCConnectionReconfiguration or RRCConnectionReestablishment message transmitted on a dedicated control channel (DCCH). The CounterCheck message may include a list of bearers (e.g., DRBs or SRBs) for the per-bearer COUNT check, drb-CountMSB-InfoList, which contains a DRB identifier, countMSB-Uplink (25 bits), and countMSB-Downlink (25 bits). That is, the list includes the identifiers of the bearers for which the COUNT check is required and the 25 most significant bits (MSBs) of each of the uplink and downlink COUNT values of the gNB1i-02on the corresponding bearers.

Meanwhile, the gNB1i-02may transmit the CounterCheck message on SRB1or SRB3. That is, the COUNT CHECK request may be made through an MCG SRB (e.g., SRB1) for the case where the UE1i-01is connected to the MCG or through an SCG SRB (e.g., SRB3) for the case where the UE1i-01is connected to the SCG. The COUNT CHECK request may also be made simultaneously through both SRB1and SRB3. The UE1i-01determines whether the CounterCheck message has been received on SRB1or SRB3and performs a subsequent operation at step1i-15as follows.
- Receipt on SRB1(first operation): the UE generates a COUNT CHECK RESPONSE message including the full COUNT values of the first and third DRB groups.
- Receipt on SRB3(second operation): the UE generates a COUNT CHECK RESPONSE message including the full COUNT values of the second and third DRB groups.

The DRB groups for use in the first and second operations are defined as follows (see the sketch after this list).
- First DRB group: the set of DRBs that are not included in the drb-CountMSB-InfoList among the MCG bearers (or MCG terminated bearers, i.e., bearers for which the PDCP entity resides in the MCG) and MCG split bearers.
- Second DRB group: the set of DRBs that are not included in the drb-CountMSB-InfoList among the SCG bearers (or SCG terminated bearers, i.e., bearers for which the PDCP entity resides in the SCG) and SCG split bearers.
- Third DRB group: the set of DRBs, among the DRBs included in the drb-CountMSB-InfoList, whose 25 MSBs do not match.
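As a sketch of this group selection, assuming the DRB identifiers are held in Python sets (all names illustrative):

    def counts_to_report(received_on, mcg_drbs, scg_drbs, indicated, mismatched):
        # first operation (SRB1): first and third DRB groups;
        # second operation (SRB3): second and third DRB groups
        first_group = mcg_drbs - indicated    # MCG (split) bearers not in the list
        second_group = scg_drbs - indicated   # SCG (split) bearers not in the list
        third_group = mismatched              # listed DRBs whose 25 MSBs differ
        if received_on == "SRB1":
            return first_group | third_group
        if received_on == "SRB3":
            return second_group | third_group
        raise ValueError("CounterCheck is carried on SRB1 or SRB3")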
For example, if the CounterCheck message is received through SRB1, the UE1i-01includes the full COUNT values for the DRBs that are not included in the configured DRB list among the MCG bearers and MCG split bearers, and for the DRBs for which the 25 MSBs configured in the received CounterCheck message (countMSB-Uplink for uplink and countMSB-Downlink for downlink) do not match the 25 MSBs stored in the UE1i-01. If the COUNT value transmitted by the gNB1i-02matches the COUNT value calculated by the UE1i-01, the corresponding COUNT value is not included in the reporting list. Here, it is necessary to select the PDCP SDU whose COUNT value is to be compared with the value (countMSB-Uplink (25 bits) or countMSB-Downlink (25 bits)) configured in the CounterCheck message. The UE1i-01may use one of two methods: selecting the PDCP SDU with the highest COUNT (NEXT_RX_COUNT-1) among the PDCP SDUs received until then; or selecting the PDCP SDU with the highest COUNT among the PDCP SDUs reordered until then. It is also necessary to select the PDCP SDU whose COUNT value is reported. The UE1i-01may use one of three methods: selecting the PDCP SDU with a matching result of the COUNT value comparison; selecting the PDCP SDU with the highest COUNT value at the reporting time point; or selecting the PDCP SDU with the highest COUNT value among the PDCP SDUs reordered at the reporting time point. After generating the Counter Check result information as above, the UE1i-01transmits, at step1i-20, an RRC message (CounterCheckResponse) including the corresponding information to the gNB1i-02.

FIG.1Killustrates a flowchart of an operation of a UE in the proposed Counter check procedure according to an embodiment of the disclosure. InFIG.1K, the UE receives an RRC message, i.e., a Counter check message, at step1k-05; upon receipt of this message, the UE determines, at step1k-10, whether the drb-CountMSB-InfoList includes the bearer identifier of each bearer (e.g., DRB) established for the UE. If it is determined that the drb-CountMSB-InfoList does not include the bearer identifier of a bearer of the UE, the UE selects the count-Uplink and count-Downlink values at step1k-15according to one of the first to sixth embodiments and includes the bearer identifier of the corresponding bearer and the selected uplink and downlink COUNT values in the drb-CountInfoList of a Counter Check Response message at step1k-20. If it is determined that the drb-CountMSB-InfoList includes the bearer identifier of a bearer of the UE, the UE selects the uplink and downlink COUNT values at step1k-25according to one of the first to sixth embodiments and compares, at step1k-30, the selected COUNT values with the MSBs of the COUNT values indicated by the base station. If the 25 MSBs of the COUNT value (countMSB-Uplink or countMSB-Downlink) differ, in at least one direction, from the COUNT value (uplink or downlink COUNT value) selected by the UE for the bearer, the UE includes the bearer identifier of the corresponding bearer and the selected uplink and downlink COUNT values in the drb-CountInfoList of the Counter Check Response message at step1k-35.

FIG.1Lillustrates a block diagram of a configuration of a UE according to an embodiment of the disclosure. In reference toFIG.1L, the UE includes a radio frequency (RF) processor1l-10, a baseband processor1l-20, a storage unit1l-30, and a controller1l-40.
The RF processor1l-10has a function for transmitting/receiving a signal over a radio channel, such as band conversion and amplification of the signal. That is, the RF processing unit1l-10up-converts a baseband signal from the baseband processor1l-20to an RF band signal and transmits the RF signal via an antenna and down-converts the RF signal received via the antenna to a baseband signal. For example, the RF processor1l-10may include a transmission filter, a reception filter, an amplifier, a mixer, an oscillator, a digital-to-analog converter (DAC), and an analog-to-digital converter (ADC). Although one antenna is depicted in the drawing, the UE may be provided with a plurality of antennas. The RF processor1l-10may also include a plurality of RF chains. The RF processor1l-10may perform beamforming. For beamforming, the RF processor1l-10may adjust the phase and size of a signal to be transmitted/received by means of the antennas or antenna elements. The RF processor1l-10may be configured to support a MIMO scheme with which the UE can receive multiple layers simultaneously. The RF processor1l-10may configure the plurality of antennas or antenna elements appropriately, under the control of the controller1l-40, to perform beam sweeping and adjust the beam direction and beam width to achieve an alignment of the reception and transmission beams. The baseband processor1l-20has a baseband signal-bit string conversion function according to a physical layer standard of the system. For example, in a data transmission mode, the baseband processor1l-20performs encoding and modulation on the transmission bit string to generate complex symbols. In a data reception mode, the baseband processor1l-20performs demodulation and decoding on the baseband signal from the RF processor1l-10to recover the transmitted bit string. In the case of using an OFDM scheme for data transmission, the baseband processor1l-20performs encoding and modulation on the transmission bit string to generate complex symbols, maps the complex symbols to subcarriers, performs inverse fast Fourier transform (IFFT) on the symbols, and inserts a cyclic prefix (CP) into the symbols to generate OFDM symbols. In the data reception mode, the baseband processor1l-20splits the baseband signal from the RF processor1l-10into OFDM symbols, performs fast Fourier transform (FFT) on the OFDM symbols to recover the signals mapped to the subcarriers, and performs demodulation and decoding on the signals to recover the transmitted bit string. The baseband processor1l-20and the RF processor1l-10process the transmission and reception signals as described above. Accordingly, the baseband processor1l-20and the RF processor1l-10may be referred to as a transmitter, a receiver, a transceiver, or a communication unit/circuit. At least one of the baseband processor1l-20and the RF processor1l-10may include a plurality of communication modules for supporting different radio access technologies. At least one of the baseband processor1l-20and the RF processor1l-10may also include multiple communication modules for processing the signals in different frequency bands. For example, the different radio access technologies may include a wireless local area network (WLAN) (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11) and a cellular network (e.g., LTE). The different frequency bands may include a super high frequency (SHF) band (e.g., 2.2 GHz and 2 GHz bands) and an mmWave band (e.g., 60 GHz).
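The OFDM transmit and receive processing just described (subcarrier mapping, IFFT, cyclic prefix insertion, and the inverse chain) can be sketched in a few lines. The following Python fragment is a minimal illustration, not an implementation of any particular standard; the FFT size and cyclic prefix length are assumed values.

```python
# Minimal sketch of the OFDM chain described above: map complex symbols
# to subcarriers, apply the IFFT, and prepend a cyclic prefix; the
# receiver strips the CP and applies the FFT. Sizes are illustrative.
import numpy as np

N_FFT = 64        # assumed FFT size (number of subcarriers)
CP_LEN = 16       # assumed cyclic-prefix length in samples

def ofdm_modulate(symbols: np.ndarray) -> np.ndarray:
    """symbols: N_FFT complex constellation symbols, one per subcarrier."""
    assert symbols.shape == (N_FFT,)
    time_domain = np.fft.ifft(symbols)           # subcarriers -> time samples
    cp = time_domain[-CP_LEN:]                   # copy of the last samples
    return np.concatenate([cp, time_domain])     # CP + OFDM symbol

def ofdm_demodulate(rx: np.ndarray) -> np.ndarray:
    """Inverse operation: strip the CP and recover the subcarrier symbols."""
    return np.fft.fft(rx[CP_LEN:CP_LEN + N_FFT])
```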
The storage unit1l-30stores data such as basic programs for operation of the UE, application programs, and setting information. The storage unit1l-30provides the stored information in response to a request from the controller1l-40. The controller1l-40controls overall operations of the UE. For example, the controller1l-40controls the baseband processor1l-20and the RF processor1l-10for transmitting and receiving signals. The controller1l-40writes and reads data to and from the storage unit1l-30. For this purpose, the controller1l-40may include at least one processor. For example, the controller1l-40may include a communication processor (CP) for controlling communications and an application processor (AP) for controlling higher layer programs such as applications. The controller1l-40may be electrically connected to the transceiver. FIG.1Millustrates a block diagram of a configuration of a base station in a wireless communication system according to an embodiment of the disclosure. In reference toFIG.1M, the base station includes an RF processor1m-10, a baseband processor1m-20, a backhaul communication unit1m-30, a storage unit1m-40, and a controller1m-50. The RF processor1m-10has a function for transmitting/receiving a signal over a radio channel, such as band conversion and amplification of the signal. That is, the RF processing unit1m-10up-converts a baseband signal from the baseband processor1m-20to an RF band signal and transmits the RF signal via an antenna and down-converts the RF signal received via the antenna to a baseband signal. For example, the RF processor1m-10may include a transmission filter, a reception filter, an amplifier, a mixer, an oscillator, a DAC, and an ADC. Although one antenna is depicted in the drawing, the base station may be provided with a plurality of antennas. The RF processor1m-10may also include a plurality of RF chains. The RF processor1m-10may perform beamforming. For beamforming, the RF processor1m-10may adjust the phase and size of a signal to be transmitted/received by means of the antennas or antenna elements. The RF processor1m-10may be configured to transmit one or more layers for a downlink MIMO operation. The baseband processor1m-20has a baseband signal-bit string conversion function according to a physical layer standard of the system. For example, in a data transmission mode, the baseband processor1m-20performs encoding and modulation on the transmission bit string to generate complex symbols. In a data reception mode, the baseband processor1m-20performs demodulation and decoding on the baseband signal from the RF processor1m-10to recover the transmitted bit string. In the case of using an OFDM scheme for data transmission, the baseband processor1m-20performs encoding and modulation on the transmission bit string to generate complex symbols, maps the complex symbols to subcarriers, performs inverse fast Fourier transform (IFFT) on the symbols, and inserts a cyclic prefix (CP) into the symbols to generate OFDM symbols. In the data reception mode, the baseband processor1m-20splits the baseband signal from the RF processor1m-10into OFDM symbols, performs fast Fourier transform (FFT) on the OFDM symbols to recover the signals mapped to the subcarriers, and performs demodulation and decoding on the signals to recover the transmitted bit string. The baseband processor1m-20and the RF processor1m-10process the transmission and reception signals as described above.
Accordingly, the baseband processor1m-20and the RF processor1m-10may be referred to as a transmitter, a receiver, a transceiver, or a communication unit. The backhaul communication unit1m-30provides an interface for communication with other nodes in the network. The storage unit1m-40stores data such as basic programs for operation of the base station, application programs, and setting information. The storage unit1m-40may also store the information on the bearers established for UEs and measurement results reported by the connected UEs. The storage unit1m-40may also store the information for use by a UE in determining whether to enable or disable multi-connectivity. The storage unit1m-40may provide the stored data in response to a request from the controller1m-50. The controller1m-50controls overall operations of the base station. For example, the controller1m-50controls the baseband processor1m-20, the RF processor1m-10, and the backhaul communication unit1m-30for transmitting and receiving signals. The controller1m-50writes and reads data to and from the storage unit1m-40. For this purpose, the controller1m-50may include at least one processor. The controller may be electrically connected to the transceiver.
Embodiment B
The disclosure proposes a bearer management and data processing method of wireless nodes in a next generation mobile communication system supporting wireless backhaul and a method for recovering lost data caused by radio link breakage or congestion at the wireless nodes. In detail, the disclosure proposes a PDCP status report-based lost data retransmission method and procedure between PDCP entities of two wireless end nodes in a wireless backhaul network. Detailed descriptions of the proposed methods are made hereinafter in various disclosed embodiments. FIG.2Aillustrates a diagram of architecture of an LTE system to which the disclosure is applied. In reference toFIG.2A, a radio access network of the LTE system includes evolved Node Bs (hereinafter, interchangeably referred to as eNB, node B, and base station)2a-05,2a-10,2a-15, and2a-20; a mobility management entity (MME)2a-25; and a serving gateway (S-GW)2a-30. A user terminal (hereinafter, interchangeably referred to as user equipment (UE) and terminal)2a-35connects to an external network via the eNBs2a-05,2a-10,2a-15, and2a-20and the S-GW2a-30. The eNBs2a-05,2a-10,2a-15, and2a-20correspond to the legacy node Bs of the universal mobile telecommunications system (UMTS). The UE2a-35connects to one of the eNBs via a radio channel, and the eNB has more complex functions than the legacy node B. In the LTE system where all user traffic including real time services such as Voice over IP (VoIP) is served through shared channels, there is a need for an entity for collecting UE-specific status information (such as buffer status, power headroom status, and channel status) and scheduling the UEs based on the collected information, and the eNB takes charge of such functions. Typically, one eNB hosts multiple cells. For example, the LTE system adopts Orthogonal Frequency Division Multiplexing (OFDM) as a radio access technology to secure a data rate of up to 100 Mbps in a bandwidth of 20 MHz. The LTE system also adopts Adaptive Modulation and Coding (AMC) to determine the modulation scheme and channel coding rate in adaptation to the channel condition of the UE. The S-GW2a-30handles data bearer functions to establish and release data bearers under the control of the MME2a-25.
The MME2a-25handles various control functions for the UE as well as the mobility management function and has connections with the eNBs2a-05,2a-10,2a-15, and2a-20. FIG.2Billustrates a diagram of a protocol stack of an LTE system to which the disclosure is applied. As shown inFIG.2B, the protocol stack of the interface between the UE2b-50and the eNB2b-60in the LTE system includes Packet Data Convergence Protocol (PDCP)2b-05and2b-40, Radio Link Control (RLC)2b-10and2b-35, and Medium Access Control (MAC)2b-15and2b-30. The PDCP2b-05and2b-40takes charge of compressing/decompressing an IP header. The main functions of the PDCP2b-05and2b-40can be summarized as follows:
Header compression and decompression: ROHC only
Transfer of user data
In-sequence delivery of upper layer PDUs at PDCP re-establishment procedure for RLC AM
For split bearers in DC (only support for RLC AM): PDCP PDU routing for transmission and PDCP PDU reordering for reception
Duplicate detection of lower layer SDUs at PDCP re-establishment procedure for RLC AM
Retransmission of PDCP SDUs at handover and, for split bearers in DC, of PDCP PDUs at PDCP data-recovery procedure, for RLC AM
Ciphering and deciphering
Timer-based SDU discard in uplink
The RLC2b-10and2b-35takes charge of reformatting PDCP PDUs in order to fit them into the size for ARQ operation. The main functions of the RLC layer can be summarized as follows:
Transfer of upper layer PDUs
Error Correction through ARQ (only for AM data transfer)
Concatenation, segmentation and reassembly of RLC SDUs (only for UM and AM data transfer)
Re-segmentation of RLC data PDUs (only for AM data transfer)
Reordering of RLC data PDUs (only for UM and AM data transfer)
Duplicate detection (only for UM and AM data transfer)
Protocol error detection (only for AM data transfer)
RLC SDU discard (only for UM and AM data transfer)
RLC re-establishment
The MAC2b-15and2b-30allows for connection of multiple RLC entities established for one UE and takes charge of multiplexing RLC PDUs from the RLC layer into a MAC PDU and demultiplexing a MAC PDU into RLC PDUs. The main functions of the MAC layer can be summarized as follows:
Mapping between logical channels and transport channels
Multiplexing/demultiplexing of MAC SDUs belonging to one or different logical channels into/from transport blocks (TB) delivered to/from the physical layer on transport channels
Scheduling information reporting
Error correction through HARQ
Priority handling between logical channels of one UE
Priority handling between UEs by means of dynamic scheduling
MBMS service identification
Transport format selection
Padding
The PHY layer2b-20and2b-25takes charge of channel-coding and modulating higher layer data to generate and transmit OFDM symbols over a radio channel, and of demodulating and channel-decoding OFDM symbols received over the radio channel to deliver the decoded data to the higher layers. FIG.2Cillustrates a diagram of architecture of a next generation mobile communication system to which the disclosure is applied. As shown inFIG.2C, the next generation mobile communication system includes a radio access network with a next generation base station (New Radio Node B (NR gNB))2c-10and a new radio core network (NR CN)2c-05. A new radio user equipment (NR UE)2c-15connects to an external network via the NR gNB2c-10and the NR CN2c-05. InFIG.2C, the NR gNB2c-10corresponds to an evolved Node B (eNB) of the legacy LTE.
The NR gNB2c-10to which the NR UE2c-15connects through a radio channel is capable of providing superior services in comparison with the legacy eNB. In the next generation mobile communication system where all user traffic is served through shared channels, it is necessary to schedule the NR UEs based on scheduling information such as buffer status, power headroom status, and channel status collected by the NR UEs, and the NR gNB2c-10takes charge of this function. Typically, one NR gNB hosts multiple cells. In order to achieve a data rate higher than the peak data rate of legacy LTE systems, the next generation mobile communication system may adopt a beamforming technique along with orthogonal frequency division multiple access (OFDMA) as a radio access technology. The next generation mobile communication system may also adopt adaptive modulation and coding (AMC) to determine the modulation scheme and channel coding rate in adaptation to the channel condition of the NR UE. The NR CN2c-05takes charge of mobility support, bearer setup, and QoS configuration. The NR CN2c-05may take charge of a NR UE mobility management function, and a plurality of NR gNBs may connect to the NR CN2c-05. The next generation mobile communication system may also interoperate with a legacy LTE system and, in this case, the NR CN2c-05connects to an MME2c-25through a network interface. The MME2c-25communicates with the eNB2c-40as a legacy base station. FIG.2Dillustrates a diagram of a protocol stack of a next generation mobile communication system to which the disclosure is applied. As shown inFIG.2D, the protocol stack of the interface between an NR UE2d-50and an NR gNB2d-60in a next generation mobile communication system includes NR service data adaptation protocol (NR SDAP)2d-01and2d-45, NR PDCP2d-05and2d-40, NR RLC2d-10and2d-35, and NR MAC2d-15and2d-30. The main functions of the NR SDAP2d-01and2d-45may include some of the following functions:
transfer of user plane data
mapping between a QoS flow and a DRB for both DL and UL
marking QoS flow ID in both DL and UL packets
reflective QoS flow to DRB mapping for the UL SDAP PDUs
The UE2d-50may receive an RRC message for configuring an SDAP entity2d-01, the RRC message indicating whether to use a PDCP entity-specific, bearer-specific, or logical channel-specific SDAP header and whether to use the SDAP layer function. If configured to use an SDAP header, the UE2d-50may receive a 1-bit NAS reflective QoS indicator and an AS reflective QoS indicator in the SDAP header instructing the UE2d-50to update or reconfigure the uplink and downlink QoS flow-to-data bearer mappings. The SDAP header may include a QoS flow ID indicating a QoS. The QoS information may be used as data processing priority and scheduling information for guaranteeing service reliability.
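As an illustration of the reflective QoS flow-to-DRB mapping just described, the following Python sketch shows how a receiver might update its uplink mapping from a downlink SDAP header. The one-byte header layout used here (one reflective-QoS bit and a 6-bit QoS flow ID) is an assumption made for the sketch, not the standardized format.

```python
# Illustrative sketch of reflective QoS flow-to-DRB mapping at the UE.
# Hypothetical one-byte SDAP header: bit 6 carries a reflective-QoS
# indication (RQI) and bits 0-5 carry the QoS flow ID (QFI).

ul_qfi_to_drb: dict = {}   # uplink mapping: QoS flow ID -> DRB identifier

def handle_dl_sdap_pdu(drb_id: int, header_byte: int) -> None:
    """Update the UL mapping from a downlink SDAP PDU received on drb_id."""
    rqi = (header_byte >> 6) & 0x1   # reflective-QoS indicator (assumed bit)
    qfi = header_byte & 0x3F         # 6-bit QoS flow ID (assumed field)
    if rqi:
        # Reflective mapping: uplink packets of this QoS flow now use the
        # DRB on which the downlink packet arrived.
        ul_qfi_to_drb[qfi] = drb_id
```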
The main functions of the NR PDCP2d-05and2d-40may include some of the following functions:
Header compression and decompression: ROHC only
Transfer of user data
In-sequence delivery of upper layer PDUs
Out-of-sequence delivery of upper layer PDUs
PDCP PDU reordering for reception
Duplicate detection of lower layer SDUs
Retransmission of PDCP SDUs
Ciphering and deciphering
Timer-based SDU discard in uplink
The PDCP PDU reordering function of an NR PDCP entity2d-05and2d-40is to reorder the PDCP PDUs delivered from a lower layer based on the PDCP sequence number (PDCP SN) and may include delivering the reordered data to an upper layer, recording the missing PDCP PDUs among the reordered PDCP PDUs, transmitting a status report indicating the missing PDCP PDUs to the sender, and requesting retransmission of the missing PDCP PDUs. The main functions of the NR RLC2d-10and2d-35may include some of the following functions:
Transfer of upper layer PDUs
In-sequence delivery of upper layer PDUs
Out-of-sequence delivery of upper layer PDUs
Error Correction through ARQ
Concatenation, segmentation and reassembly of RLC SDUs
Re-segmentation of RLC data PDUs
Reordering of RLC data PDUs
Duplicate detection
Protocol error detection
RLC SDU discard
RLC re-establishment
The in-sequence delivery function of an NR RLC entity2d-10and2d-35is to deliver the RLC SDUs received from the lower layer to the upper layer in sequence and may include reassembling, when multiple segmented RLC SDUs constituting an original RLC SDU are received, the RLC SDUs and delivering the reassembled RLC SDU to the upper layer; reordering the received RLC PDUs based on the RLC sequence number (SN) or PDCP SN; recording the missing RLC PDUs among the reordered RLC PDUs; transmitting a status report indicating the missing RLC PDUs to the sender; requesting retransmission of the missing RLC PDUs; and delivering, when there is a missing RLC PDU, only the RLC PDUs before the missing RLC PDU in sequence, delivering, if a predetermined timer has expired even when there is a missing RLC SDU, all RLC SDUs received before the start of the timer to the upper layer in sequence, or delivering, if a predetermined timer has expired even when there is a missing RLC SDU, all RLC SDUs received until then to the upper layer in sequence. It may also be possible to process the RLC PDUs in the receiving sequence (in the order of arrival regardless of sequence number) and deliver the RLC PDUs to the PDCP entity out of order (out-of-sequence delivery) and, if an RLC PDU is transmitted in the form of segments, to store the received segments, or wait until all segments constituting the RLC PDU are received and reassemble the segments into the original RLC PDU, which is delivered to the PDCP entity. The NR RLC layer2d-10and2d-35may have no concatenation function and, in this case, the concatenation function may be performed in the NR MAC layer2d-15and2d-30or replaced by the multiplexing function of the NR MAC layer2d-15and2d-30. The out-of-sequence delivery function of an NR RLC entity2d-10and2d-35is to deliver the RLC SDUs received from the lower layer to the upper layer out of order and may include reassembling, when multiple segmented RLC SDUs constituting an original RLC SDU are received, the segmented RLC SDUs, delivering the reassembled RLC SDUs to the upper layer, arranging the received RLC PDUs based on the RLC SN or PDCP SN, and recording the SN of the missing RLC PDUs.
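The reordering and gap-recording behaviour described above for the NR PDCP entity can be sketched as follows. This is a simplified illustration: COUNT values are used directly as keys, the t-Reordering timer is omitted, and all names are hypothetical.

```python
# Simplified sketch of PDCP reception with reordering: buffer
# out-of-order PDUs by COUNT, deliver in-sequence data to the upper
# layer, and record the gaps a status report would flag as missing.

class PdcpReorderingRx:
    def __init__(self):
        self.next_expected = 0   # COUNT of the next SDU to deliver
        self.buffer = {}         # COUNT -> SDU, held out of order

    def receive(self, count: int, sdu: bytes) -> list:
        """Returns the SDUs that can now be delivered in sequence."""
        if count < self.next_expected or count in self.buffer:
            return []            # duplicate detection: discard
        self.buffer[count] = sdu
        delivered = []
        while self.next_expected in self.buffer:
            delivered.append(self.buffer.pop(self.next_expected))
            self.next_expected += 1
        return delivered

    def missing_counts(self) -> list:
        """COUNT values below the highest received that are still missing
        (the gaps a PDCP status report would ask the sender to retransmit)."""
        if not self.buffer:
            return []
        highest = max(self.buffer)
        return [c for c in range(self.next_expected, highest)
                if c not in self.buffer]
```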
The NR MAC2d-15and2d-30may be connected to multiple NR RLC entities, and the main functions of the NR MAC2d-15and2d-30may include some of the following functions:
Mapping between logical channels and transport channels
Multiplexing/demultiplexing of MAC SDUs
Scheduling information reporting
Error correction through HARQ
Priority handling between logical channels of one UE
Priority handling between UEs by means of dynamic scheduling
MBMS service identification
Transport format selection
Padding
The NR PHY layer2d-20and2d-25takes charge of channel-coding and modulating upper layer data to generate and transmit OFDM symbols over a radio channel and of demodulating and channel-decoding OFDM symbols received over the radio channel to deliver the decoded data to the upper layers. FIG.2Eillustrates a signal flow diagram of a procedure for transitioning a UE from an RRC connected mode to an RRC idle mode based on connection release triggered by a base station and transitioning the UE from the RRC idle mode to the RRC connected mode based on connection establishment triggered by the UE according to an embodiment of the disclosure. In reference toFIG.2E, if there is no data transmission/reception to/from the UE2e-90in the RRC connected mode for any reason or during a predetermined period, the base station2e-91may transmit an RRCConnectionRelease message to the UE2e-90at step2e-01to transition the UE2e-90to an RRC idle mode. If data to be transmitted are produced at the UE2e-90with no currently established connection (hereinafter, referred to as an idle mode UE), the UE2e-90performs an RRC connection establishment procedure with the base station2e-91. The UE2e-90acquires uplink transmission synchronization with the base station2e-91through a random access procedure and transmits an RRCConnectionRequest message to the base station2e-91at step2e-05. The RRCConnectionRequest message may include an identifier of the UE2e-90and a connection establishment cause (establishmentCause). The base station2e-91transmits an RRCConnectionSetup message to the UE2e-90at step2e-10for RRC connection setup. The RRCConnectionSetup message includes RRC connection configuration information. The RRCConnectionSetup message may also include a UE identifier for use in identifying the UE2e-90connected to the base station2e-91. The RRCConnectionSetup message may also include a list of identifiers of other UEs that are currently connected to the base station2e-91. The list of the identifiers of other UEs that are currently connected to the base station2e-91may be periodically updated based on the system information broadcast by the base station2e-91in order for the UEs located within coverage of the base station2e-91to identify other UEs available for communication. When wireless devices are installed in a factory, the identifiers of other wireless devices available for communication may be preset. The UE identifier may be a cell radio network temporary identifier (C-RNTI), part of the C-RNTI, or part of a NAS layer identifier (e.g., globally unique temporary identifier (GUTI)). An RRC connection may be referred to as a signaling radio bearer and used for communicating RRC messages as control messages between the UE2e-90and the base station2e-91. After establishing the RRC connection, the UE2e-90transmits an RRCConnectionSetupComplete message to the base station2e-91at step2e-15. The RRCConnectionSetupComplete message includes a control message called SERVICE REQUEST for requesting the MME2e-92to establish a bearer for a certain service.
At step2e-20, the base station2e-91transmits the SERVICE REQUEST message included in the RRCConnectionSetupComplete message to the MME2e-92, and the MME2e-92determines whether to provide the service requested by the UE2e-90. If it is determined to provide the service requested by the UE2e-90, the MME2e-92transmits an INITIAL CONTEXT SETUP REQUEST message to the base station2e-91at step2e-25. This message includes quality of service (QoS) information to be applied in configuring a DRB and security information (e.g., Security Key and Security Algorithm) to be applied to the DRB. For security configuration, the base station2e-91transmits a SecurityModeCommand message to the UE2e-90at step2e-30, and the UE2e-90transmits a SecurityModeComplete message to the base station2e-91at step2e-35. After completing security configuration, the base station2e-91transmits an RRCConnectionReconfiguration message to the UE2e-90at step2e-40. The RRCConnectionReconfiguration message may include a UE identifier for use in identifying the UE2e-90within coverage of the base station2e-91. This message may also include a list of identifiers of other UEs that are currently connected to the base station2e-91. The list of the identifiers of other UEs that are currently connected to the base station2e-91may be periodically updated based on the system information broadcast by the base station2e-91in order for the UEs located within coverage of the base station2e-91to identify other UEs available for communication. When wireless devices are installed in a factory, the identifiers of other wireless devices available for communication may be preset. The UE identifier may be a cell radio network temporary identifier (C-RNTI), part of the C-RNTI, or part of a NAS layer identifier (e.g., globally unique temporary identifier (GUTI)). The RRCConnectionReconfiguration message includes DRB configuration information for processing user data, and the UE2e-90configures a DRB based on this configuration information and transmits an RRCConnectionReconfigurationComplete message to the base station2e-91at step2e-45. After completing DRB configuration with the UE2e-90, the base station2e-91transmits an INITIAL CONTEXT SETUP COMPLETE message to the MME2e-92at step2e-50; upon receipt of the INITIAL CONTEXT SETUP COMPLETE message, the MME2e-92configures an S1 bearer with an S-GW2e-93by transmitting an S1 BEARER SETUP message to the S-GW2e-93at step2e-55and receiving an S1 BEARER SETUP RESPONSE message from the S-GW2e-93at step2e-60. The S1 bearer is a connection for data transfer between the S-GW2e-93and the base station2e-91and is mapped to the DRB one to one. After completing the above procedure, the UE2e-90performs data communication via the base station2e-91and the S-GW2e-93at steps2e-65and2e-70. This typical data communication procedure consists of three phases: RRC connection configuration, security configuration, and DRB configuration. The base station2e-91may transmit an RRCConnectionReconfiguration message to the UE2e-90at step2e-75for updating, adding, or modifying the configuration. Hereinafter, a description is made of the low latency data communication procedure between wireless devices. FIG.2Fillustrates a signal flow diagram of a procedure for establishing a point-to-point link between wireless devices for data communication according to an embodiment of the disclosure. The point-to-point link denotes a link established for direct data communication between the wireless devices without intervention of the base station.
The proposed procedure of configuring a point-to-point communication link between wireless devices may be divided into four steps: wireless device discovery, inter-device point-to-point wireless link or direct link assessment and measurement, inter-device direct wireless link establishment, and data communication through the inter-device direct link; the procedure is characterized by one or more of the following:
1. A gNB2f-03may share and manage UE identifiers of wireless devices within its coverage for supporting wireless data communication.
2. The gNB2f-03may configure the wireless devices within its coverage for supporting wireless data communication to always remain in an RRC connected mode or an RRC deactivated mode.
3. A wireless device transmits a transmission resource request message including an identifier of a destination device or a source device to the gNB2f-03to request allocation of transmission resources for point-to-point communication.
4. The gNB2f-03, upon being requested by the wireless device to allocate transmission resources for point-to-point communication, performs a procedure for discovering the destination wireless device (e.g., transmitting a paging message) using the identifier of the destination wireless device. If the gNB2f-03fails to discover the destination wireless device or if the destination wireless device is not located within the coverage of the gNB2f-03, the gNB2f-03allocates uplink transmission resources to the source wireless device and relays data between the source and destination wireless devices.
5. The gNB2f-03may allocate part of normal uplink transmission resources of the UE as transmission resources for point-to-point communication.
6. The gNB2f-03, when allocating transmission resources for point-to-point communication to wireless devices, may inform the wireless devices of a source wireless device identifier or a destination wireless device identifier, send frequency configuration information for the point-to-point wireless link to the wireless devices, and instruct the wireless devices to perform frequency measurement or transmit a reference signal.
7. The wireless devices allocated the transmission resources for point-to-point communication perform frequency measurement on point-to-point wireless links for point-to-point communication and report frequency measurement results to the gNB2f-03.
8. The gNB2f-03may receive the frequency measurement results from the source and destination wireless devices and instruct the source wireless device to perform data transmission based on the frequency measurement results using a newly defined L1 signal (e.g., DCI) or L2 signal (e.g., MAC CE) and the destination wireless device to perform data transmission based on the frequency measurement results using the newly defined L1 signal (e.g., DCI) or L2 signal (e.g., MAC CE).
9. The wireless device that is instructed to perform data transmission through the newly defined L1 signal (e.g., DCI) or L2 signal (e.g., MAC CE) starts data transmission.
The proposed procedure for establishing a point-to-point wireless link between wireless devices is described in more detail hereinafter. The gNB2f-03may share and manage UE identifiers of wireless devices within its coverage for supporting wireless data communication. The gNB2f-03may configure the wireless devices2f-01and2f-02within its coverage for supporting wireless data communication to always remain in an RRC connected mode or an RRC deactivated mode to maintain low transmission latency.
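A compact sketch of the gNB-side handling in steps 3 and 4 of the list above may help: on receiving a resource request, the gNB pages the destination device and, if discovery fails, falls back to allocating uplink resources and relaying. All object and method names in this Python fragment are illustrative.

```python
# Illustrative gNB-side handling of a point-to-point resource request,
# following steps 3 and 4 above; the gnb object and its methods are
# hypothetical placeholders for the corresponding network procedures.

def handle_p2p_resource_request(gnb, src_id: str, dst_id: str) -> None:
    if gnb.page(dst_id):                            # destination discovered
        grant = gnb.allocate_p2p_resources(src_id, dst_id)
        gnb.send_p2p_response(src_id, grant)        # as at step 2f-20
        gnb.send_p2p_configuration(dst_id, grant)   # as at step 2f-25
        # Both devices then measure the direct link and report results
        # before the gNB activates the transmission (steps 2f-30..2f-50).
    else:
        # Discovery failed or destination out of coverage: allocate uplink
        # resources to the source and relay the data through the network.
        grant = gnb.allocate_uplink_resources(src_id)
        gnb.enable_relaying(src_id, dst_id, grant)
```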
At step2f-05, the wireless device2f-01may transmit a transmission resource request message including an identifier of a destination device2f-02or the source device2f-01to the gNB2f-03to request allocation of transmission resources for point-to-point communication. The point-to-point transmission resource request message may include QoS requirements. For example, the resource request message being transmitted from the wireless device2f-01to the gNB2f-03may include an average packet size, a transmission bit rate, a transmission delay requirement, a reliability, and an error rate. Upon being requested by the (source) wireless device2f-01to allocate transmission resources, the gNB2f-03may perform, at step2f-10, a procedure for discovering the destination wireless device2f-02(e.g., transmitting a paging message) using the identifier of the destination wireless device2f-02. If the gNB2f-03fails to discover the destination wireless device2f-02or if the destination wireless device2f-02is not located within the coverage of the gNB2f-03, the gNB2f-03allocates uplink transmission resources to the source wireless device2f-01and relays data in such a way as to receive the data from the source wireless device2f-01and transmit the data to a network. The paging message may include the identifier of the source wireless device2f-01or the destination wireless device2f-02. If the destination wireless device2f-02receives the paging message, it establishes a connection with the gNB2f-03at step2f-15. Then, the gNB2f-03may transmit a point-to-point response message to the source wireless device2f-01at step2f-20, in response to the request for allocation of transmission resources for point-to-point communication, and a point-to-point configuration message to the destination wireless device2f-02at step2f-25. The gNB2f-03may allocate part of normal uplink transmission resources of the UE as transmission resources for point-to-point communication. The allocated transmission resources may be transmission resources being allocated repetitively at a predetermined interval. In this case, once the transmission resources are configured, the wireless devices2f-01and2f-02may perform point-to-point communication continuously with the transmission resources without intervention of the gNB2f-03. Such transmission resources may be allocated via system information broadcast by the gNB2f-03rather than via dedicated signaling, and the gNB2f-03may inform the wireless devices2f-01and2f-02of the resources for use in point-to-point communication among the transmission resources indicated in the system information. If a wireless device2f-01or2f-02is allocated transmission resources by the gNB2f-03via both the system information and dedicated signaling, it may prioritize the transmission resources allocated by the gNB2f-03via the dedicated signaling. When allocating transmission resources for point-to-point communication to wireless devices, the gNB2f-03may inform the wireless devices2f-01and2f-02of the identifiers of the source and destination wireless devices2f-01and2f-02, send frequency configuration information for the point-to-point wireless link to the wireless devices, and instruct the wireless devices to perform frequency measurement or transmit a reference signal.
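The prioritization rule just described, dedicated signaling over broadcast system information, reduces to a one-line choice; the following fragment is a trivial illustration with a hypothetical configuration structure.

```python
# Illustrative selection of point-to-point transmission resources when
# both a broadcast (system information) configuration and a dedicated
# configuration exist; the config objects are hypothetical.

def select_p2p_resources(dedicated_cfg, broadcast_cfg):
    """Each argument is a resource configuration (or None if absent)."""
    if dedicated_cfg is not None:
        return dedicated_cfg     # dedicated signaling takes precedence
    return broadcast_cfg         # otherwise fall back to system information
```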
The transmission resources for point-to-point communication may include time resources, frequency resources, code resources, the identifier of the source wireless device2f-01or the identifier of the destination wireless device2f-02, modulation and coding scheme (MCS) information, a transport block (TB) size, and an identifier for activating the transmission resources (e.g., an RNTI). The source and destination wireless devices2f-01and2f-02that have been allocated the transmission resources for point-to-point communication may transmit a reference signal on the transmission resources, perform frequency measurement on the point-to-point wireless link, at step2f-30, for point-to-point communication, and report frequency measurement results to the gNB2f-03at steps2f-35and2f-40. Upon receipt of the frequency measurement results from the source and destination wireless devices2f-01and2f-02, the gNB2f-03may instruct, at step2f-45, the source wireless device2f-01to perform data transmission based on the frequency measurement results using a newly defined L1 signal (e.g., DCI) or L2 signal (e.g., MAC CE) and, at step2f-50, the destination wireless device2f-02to perform data transmission based on the frequency measurement results using the newly defined L1 signal (e.g., DCI) or L2 signal (e.g., MAC CE). Upon being instructed by the gNB2f-03, via the newly defined L1 signal (e.g., DCI) or L2 signal (e.g., MAC CE), to perform data transmission, the source wireless device2f-01or the destination wireless device2f-02may perform data transmission, at step2f-55, on the transmission resources allocated for point-to-point communication. In the above procedure, the source and destination wireless devices2f-01and2f-02may perform reliability measurement on the data being transmitted/received or frequency measurement on the reference signal on the direct wireless link periodically and request the gNB2f-03, if the measurement result is equal to or less than a level predetermined or preconfigured for the configured point-to-point link, to update the point-to-point link or configure a new point-to-point link as at step2f-05. When performing reliability measurement on the data being transmitted/received, the source and destination wireless devices2f-01and2f-02may check sequence numbers of the data to identify a number, size, or amount of lost data to assess the quality of the direct wireless link. When requesting the gNB2f-03to update the point-to-point wireless link, the source wireless device2f-01or the destination wireless device2f-02may report the reliability, transmission latency, or error assessed or experienced on the currently configured direct wireless link. FIG.2Gillustrates a flowchart of an operation of a wireless device for configuring a point-to-point direct wireless link according to an embodiment of the disclosure. At step2g-05, the source wireless device may transmit a direct wireless link resource request message including an identifier of a destination wireless device or an identifier of the source wireless device to a base station to request allocation of transmission resources for point-to-point communication. Upon being requested by the source wireless device to allocate transmission resources for point-to-point communication, the base station may perform a procedure for discovering the destination wireless device (e.g., transmitting a paging message) with the identifier of the destination wireless device.
If the destination wireless device receives the paging message, it establishes a connection with the base station. Then the base station may transmit a response message indicating the transmission resources for point-to-point communication to the source wireless device in reply to the direct wireless link resource request message and a point-to-point configuration message to the destination wireless device to allocate transmission resources; the wireless devices receive the response message at step2g-10. When allocating the transmission resources for point-to-point communication to the wireless devices, the base station may notify the wireless devices of the identifier of the source or destination wireless device, configure a frequency of the point-to-point link for the wireless devices, instruct the wireless devices to perform frequency measurement, and/or configure and instruct the wireless devices to transmit reference signals. After being allocated the transmission resources for point-to-point communication, the source and destination wireless devices may transmit reference signals on the transmission resources, perform frequency measurement on a point-to-point link for point-to-point communication, and report frequency measurement results to the base station at step2g-15. Upon receipt of the frequency measurement results from the source and destination wireless devices, the base station may instruct the source wireless device to perform data transmission based on the frequency measurement results using a newly defined L1 signal (e.g., DCI with an identifier) or L2 signal (e.g., MAC CE) and the destination wireless device to perform data transmission based on the frequency measurement results using the newly defined L1 signal (e.g., DCI) or L2 signal (e.g., MAC CE); the wireless devices activate a direct link and receive the data transmission instruction at step2g-20. Upon receipt of the instruction to perform data transmission through the newly defined L1 signal (e.g., DCI with an identifier) or L2 signal (e.g., MAC CE), the source or destination wireless device may perform data transmission on the transmission resources at step2g-25. FIG.2Hillustrates a diagram of a configuration of a UE or a wireless node according to an embodiment of the disclosure. In reference toFIG.2H, the UE includes a radio frequency (RF) processor2h-10, a baseband processor2h-20, a storage unit2h-30, and a controller2h-40. The RF processor2h-10has a function for transmitting/receiving a signal over a radio channel, such as band conversion and amplification of the signal. That is, the RF processing unit2h-10up-converts a baseband signal from the baseband processor2h-20to an RF band signal and transmits the RF signal via an antenna and down-converts the RF signal received via the antenna to a baseband signal. For example, the RF processor2h-10may include a transmission filter, a reception filter, an amplifier, a mixer, an oscillator, a digital-to-analog converter (DAC), and an analog-to-digital converter (ADC). Although one antenna is depicted in the drawing, the UE may be provided with a plurality of antennas. The RF processor2h-10may also include a plurality of RF chains. The RF processor2h-10may perform beamforming. For beamforming, the RF processor2h-10may adjust the phase and size of a signal to be transmitted/received by means of the antennas or antenna elements. The RF processor2h-10may be configured to support a MIMO scheme with which the UE can receive multiple layers simultaneously.
The RF processor2h-10may configure the plurality of antennas or antenna elements appropriately, under the control of the controller2h-40, to perform beam sweeping and adjust the beam direction and beam width to achieve an alignment of the reception and transmission beams. The baseband processor2h-20has a baseband signal-bit string conversion function according to a physical layer standard of the system. For example, in a data transmission mode, the baseband processor2h-20performs encoding and modulation on the transmission bit string to generate complex symbols. In a data reception mode, the baseband processor2h-20performs demodulation and decoding on the baseband signal from the RF processor2h-10to recover the transmitted bit string. In the case of using an OFDM scheme for data transmission, the baseband processor2h-20performs encoding and modulation on the transmission bit string to generate complex symbols, maps the complex symbols to subcarriers, performs inverse fast Fourier transform (IFFT) on the symbols, and inserts a cyclic prefix (CP) into the symbols to generate OFDM symbols. In the data reception mode, the baseband processor2h-20splits the baseband signal from the RF processor2h-10into OFDM symbols, performs fast Fourier transform (FFT) on the OFDM symbols to recover the signals mapped to the subcarriers, and performs demodulation and decoding on the signals to recover the transmitted bit string. The baseband processor2h-20and the RF processor2h-10process the transmission and reception signals as described above. Accordingly, the baseband processor2h-20and the RF processor2h-10may be referred to as a transmitter, a receiver, a transceiver, or a communication unit/circuit. At least one of the baseband processor2h-20and the RF processor2h-10may include a plurality of communication modules for supporting different radio access technologies. At least one of the baseband processor2h-20and the RF processor2h-10may also include multiple communication modules for processing the signals in different frequency bands. For example, the different radio access technologies may include a wireless local area network (WLAN) (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11) and a cellular network (e.g., LTE). The different frequency bands may include a super high frequency (SHF) band (e.g., 2.2 GHz and 2 GHz bands) and an mmWave band (e.g., 60 GHz). The storage unit2h-30stores data such as basic programs for operation of the UE, application programs, and setting information. The storage unit2h-30provides the stored information in response to a request from the controller2h-40. The controller2h-40controls overall operations of the UE. For example, the controller2h-40controls the baseband processor2h-20and the RF processor2h-10for transmitting and receiving signals. The controller2h-40writes and reads data to and from the storage unit2h-30. For this purpose, the controller2h-40may include at least one processor. For example, the controller2h-40may include a communication processor (CP) for controlling communications and an application processor (AP) for controlling higher layer programs such as applications. The controller2h-40may be electrically connected to the transceiver. FIG.2Iillustrates a block diagram of a configuration of a base station or a wireless node in a wireless communication system according to an embodiment of the disclosure.
In reference toFIG.2I, the base station includes an RF processor2i-10, a baseband processor2i-20, a backhaul communication unit2i-30, a storage unit2i-40, and a controller2i-50. The RF processor2i-10has a function for transmitting/receiving a signal over a radio channel, such as band conversion and amplification of the signal. That is, the RF processing unit2i-10up-converts a baseband signal from the baseband processor2i-20to an RF band signal and transmits the RF signal via an antenna and down-converts the RF signal received via the antenna to a baseband signal. For example, the RF processor2i-10may include a transmission filter, a reception filter, an amplifier, a mixer, an oscillator, a DAC, and an ADC. Although one antenna is depicted in the drawing, the base station may be provided with a plurality of antennas. The RF processor2i-10may also include a plurality of RF chains. The RF processor2i-10may perform beamforming. For beamforming, the RF processor2i-10may adjust the phase and size of a signal to be transmitted/received by means of the antennas or antenna elements. The RF processor2i-10may be configured to transmit one or more layers for a downlink MIMO operation. The baseband processor2i-20has a baseband signal-bit string conversion function according to a physical layer standard of the system. For example, in a data transmission mode, the baseband processor2i-20performs encoding and modulation on the transmission bit string to generate complex symbols. In a data reception mode, the baseband processor2i-20performs demodulation and decoding on the baseband signal from the RF processor2i-10to recover the transmitted bit string. In the case of using an OFDM scheme for data transmission, the baseband processor2i-20performs encoding and modulation on the transmission bit string to generate complex symbols, maps the complex symbols to subcarriers, performs inverse fast Fourier transform (IFFT) on the symbols, and inserts a cyclic prefix (CP) into the symbols to generate OFDM symbols. In the data reception mode, the baseband processor2i-20splits the baseband signal from the RF processor2i-10into OFDM symbols, performs fast Fourier transform (FFT) on the OFDM symbols to recover the signals mapped to the subcarriers, and performs demodulation and decoding on the signals to recover the transmitted bit string. The baseband processor2i-20and the RF processor2i-10process the transmission and reception signals as described above. Accordingly, the baseband processor2i-20and the RF processor2i-10may be referred to as a transmitter, a receiver, a transceiver, or a communication unit. The backhaul communication unit2i-30provides an interface for communication with other nodes in the network. The storage unit2i-40stores data such as basic programs for operation of the base station, application programs, and setting information. The storage unit2i-40may also store the information on the bearers established for UEs and measurement results reported by the connected UEs. The storage unit2i-40may also store the information for use by a UE in determining whether to enable or disable multi-connectivity. The storage unit2i-40may provide the stored data in response to a request from the controller2i-50. The controller2i-50controls overall operations of the base station. For example, the controller2i-50controls the baseband processor2i-20, the RF processor2i-10, and the backhaul communication unit2i-30for transmitting and receiving signals. The controller2i-50writes and reads data to and from the storage unit2i-40.
For this purpose, the controller2i-50may include at least one processor. The controller may be electrically connected to the transceiver. Although the description has been made with reference to particular embodiments, the disclosure can be implemented with various modifications without departing from the scope of the disclosure. Thus, the disclosure is not limited to the particular embodiments disclosed but will include the following claims and their equivalents. Although the present disclosure has been described with various embodiments, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims.
128,328
11943612
DETAILED DESCRIPTION
FIG.1is a simplified representation of a communication system allowing authentication of a communication apparatus operated by an identified user. The system comprises a communication apparatus100, that is to say a piece of equipment with communication capabilities and, when needed, capacity of data capture, sensing, data storage and/or data processing. The communication apparatus is for example a smartphone, a tablet computer or an IoT device. In this description, the expression IoT device refers to a piece of equipment with communication capabilities and optionally capacity of data capture, sensing, data storage and/or data processing. An IoT device comprises for example a wireless communication module also called Machine Type Communication (MTC) module allowing transmission of data from one IoT device to another or exchange of data between machines through UMTS/HSDPA, CDMA/EVDO, LTE, 5G, LoRa or other networks. The communication apparatus100embeds a subscription module110. A subscription module is an entity implemented in software and/or hardware and comprising at least means for authenticating a subscriber in a communication network. The subscription module can be for example a Universal Integrated Circuit Card (UICC) comprising a SIM and a USIM application, an eUICC adapted to be provisioned with one or several subscription profiles or a software SIM. The skilled person will understand that this list is non-limitative and that other types of subscription modules can also be advantageously used in conjunction with the invention. The subscription module110memorizes at least one identifier of the subscriber, for example an International Mobile Subscriber Identity (IMSI). The subscription module110also memorizes one or several credentials needed for authenticating the subscriber by the network, such as a secret key Ki which is used in standard authentication algorithms such as AKA Milenage or COMP-128. The communication apparatus100embeds at least an electronic circuit111capable of memorizing an identifier which can be used as an identifier of the communication apparatus100. This identifier is preferably unique. According to an embodiment, this identifier is an International Mobile Equipment Identity (IMEI). Other types of identifiers can also be used, such as a unique device identifier (UDI) used to identify medical devices in the United States. A mobile equipment identifier (MEID) as defined in the 3GPP2 report S.R0048 can also be used, which can be seen as an IMEI but with hexadecimal digits. The communication apparatus100also memorizes a hardware diversification key which is one important element for authenticating the apparatus. This hardware diversification key stored in the communication apparatus100is a secret key and is designated in the sequel as the first hardware diversification key F_HWDK. According to a preferred embodiment, the first hardware diversification key F_HWDK is stored in a tamper resistant area of the communication apparatus100. According to another embodiment, the first hardware diversification key F_HWDK is not memorized permanently but is derived from a secret stored in the communication apparatus100. This secret is for example a master key KMF memorized in a tamper resistant area of the electronic circuit111. In that case, the first hardware diversification key F_HWDK can be generated by the communication apparatus100when required.
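Where the first hardware diversification key F_HWDK is generated on demand from the master key KMF, a key derivation function keyed with KMF can be used. The text does not specify the derivation algorithm; the following Python sketch assumes HMAC-SHA-256 over the device identifier purely for illustration.

```python
# Illustrative derivation of F_HWDK from the master key KMF when the key
# is not stored permanently. The derivation algorithm is not specified by
# the text; HMAC-SHA-256 keyed with KMF over the device identifier is an
# assumption made for this sketch.
import hashlib
import hmac

def derive_f_hwdk(kmf: bytes, device_id: str) -> bytes:
    """Derive F_HWDK from KMF and a device identifier such as an IMEI."""
    return hmac.new(kmf, device_id.encode("ascii"), hashlib.sha256).digest()
```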
The system as illustrated inFIG.1also comprises network elements101,102compliant with a given radio communication technology. As an example, this radio communication technology is one of those defined by the Third Generation Partnership Project (3GPP) and can support 2G, 3G, 4G and 5G. The skilled person will understand that other alternative technologies can also be considered in the context of this invention such as WiFi or LoRa technologies as well as other technologies provided by other standards setting organizations (SSO) or non-standardized. In the sequel, an LTE (Long Term Evolution) wireless network is taken as an example and the network comprises at least an eNode B101and a mobility management entity (MME)102. The system also comprises a device authentication server103also designated with the acronym DAS. This server103is able to access a database enabling association of subscription identifiers with device identifiers. This database can be implemented in the DAS server or alternatively in another server allowing secure data exchanges with the DAS server. More precisely, the database allows the device authentication server103to unambiguously retrieve a device identifier that is associated to a given subscription identifier. For example, if this authentication server103is provided with a subscription identifier of IMSI type, it will be capable of identifying an identifier of the communication apparatus embedding the subscription module provisioned with the received IMSI. The identifier of the communication apparatus is for example an IMEI or any other type of identifier allowing identification of a communication apparatus. The device authentication server103is able to communicate through the wireless network120with the communication apparatus100embedding the subscription module110and the electronic circuit111. The device authentication server103is further configured to access a secure distributed ledger130. A secure distributed ledger SDL is a database which is consensually replicated, shared, and synchronized geographically across multiple sites, countries, or institutions. In addition, it is said to be secure because it is considered impossible to modify data once memorized in the ledger. Therefore, data obtained by the device authentication server103by accessing the secure distributed ledger130can be trusted. The secure distributed ledger comprises a continuously growing list of assertions or records and is often designated as immutable. An example of an immutable distributed database is Blockchain. The secure distributed ledger130is made of a plurality of nodes131-133which are configured to share and/or publish records in the database. When a manufacturer produces an electronic circuit111, it is able to publish in the secure distributed ledger130a record, also called an assertion, comprising one or several data elements. According to a preferred embodiment of the invention, this record comprises a hardware diversification key designated in this description as the second hardware diversification key S_HWDK and an identifier of the communication apparatus or of a circuit that is embedded in the communication apparatus. These data elements contained in the secure distributed ledger130can be later used by the device authentication server to authenticate a given communication apparatus embedding the electronic circuit111. FIG.2provides an example of a method allowing a wireless network operator to authenticate a communication apparatus.
According to this example, a given communication apparatus associated to a given subscription sends an attachment request. This attachment request is received by the MME102of a wireless network operator, which is preferably the home operator of the subscriber. The attachment request comprises an identifier of the subscriber. According to an embodiment, the attachment request is the standardized message as described in 3GPP specifications and allows the communication apparatus to send its IMSI as a subscription identifier. For Universal Mobile Telecommunications System (UMTS) and Long Term Evolution (LTE) networks, this message is detailed in the 3GPP technical specifications 3GPP TS 24.008 "Mobile radio interface Layer3specification" and 3GPP TS 24.301 "Non-Access-Stratum (NAS) protocol for Evolved Packet System (EPS)". The attachment message is processed by the wireless network120for authenticating200the subscriber. Authentication of the subscriber S_AUTH can be carried out using well-known mechanisms which are not described here. According to an aspect of the invention, the wireless network120can trigger202a second authentication procedure once the subscriber has been successfully authenticated201. The second authentication procedure can be triggered202for example when there is a suspicion that the correctly authenticated subscriber is in fact a fraudster using for example a cloned version of a soft SIM. This suspicion may arise when other authentication attempts which have been requested for the same subscriber failed, showing that there might be several devices trying to connect to the network using the same subscriber identifier, some of them being operated by a malevolent person. The second authentication procedure is an authentication of the device D_AUTH. Alternatively, the device authentication procedure D_AUTH can be triggered after failure of the subscriber authentication S_AUTH. According to another example, the device authentication can be triggered when it is detected that another communication apparatus is already attached to the network. The skilled person will understand that the device authentication procedure can be triggered when the occurrence of a given event is detected, or on demand by the home mobile network operator MNO. The subscriber authentication can fail for example if a soft SIM has been copied and installed in another communication apparatus. In case the subscriber authentication is based on a single-use secret updated by the network and the soft SIM after each successful authentication as described in the European patent application number 17306942.8, only one communication apparatus among the one embedding the genuine soft SIM and the one embedding the cloned soft SIM is able to be correctly authenticated, as the secret will then be updated by this communication apparatus and the network. Therefore, the other communication apparatus, which can be either the one with the genuine soft SIM or the cloned one, is no longer synchronized with the network as their secrets are no longer synchronized. The device authentication D_AUTH aims at verifying that the communication apparatus sending the attachment request is correctly associated to the subscriber. Once the subscription identifier is received by the network in the attachment request, it is then transmitted by the MME102of the wireless network operator to the device authentication server103allowing the retrieval201of the device identifier to which it is associated.
The device authentication server103then retrieves204in a database an identifier of a communication apparatus associated with the subscription identifier. For that purpose, the device authentication server103comprises a database or has access to a database allowing it to link a subscription identifier, for example an IMSI, with a device identifier, for example an IMEI. It is underlined once more that various types of subscription identifiers and device identifiers can be used in the context of this invention. Further, the method comprises the step of acquiring205by the device authentication server103from the secure distributed ledger130a hardware diversification key called the second hardware diversification key S_HWDK, with the purpose of authenticating the communication apparatus identified by the device identifier, or a chip or circuit embedded in the communication apparatus. According to an aspect of the invention, the second hardware diversification key S_HWDK is published in the secure distributed ledger130by the manufacturer of the electronic device. According to an example, each chip maker, device maker or integrator can write in the distributed ledger an information element in order to identify itself. This information element is labelled MANUF in this description. According to an alternative embodiment, the manufacturer can write in the secure distributed ledger130a master key KMF instead of the second hardware diversification key S_HWDK, said second hardware diversification key S_HWDK being derived from the master key KMF using a predetermined algorithm. Further, the manufacturer can also write in the secure distributed ledger an information element designating which algorithm ALGO1 is to be used for determining the second hardware diversification key S_HWDK knowing a device serial number, for example its IMEI, and/or a secret such as the aforementioned master key KMF. According to another example, the device manufacturer can also write in the secure distributed ledger130an information element indicating which algorithm ALGO2 is used by the communication apparatus in case the first hardware diversification key F_HWDK needs to be derived from a secret and/or a device identifier or other data elements or parameters. This information can later be communicated to the communication apparatus by the authentication server so that it can determine locally the hardware diversification key to be used to enable the device authentication. The authentication scheme implemented in this system aims at verifying that the communication apparatus used by the subscriber is authorized by its wireless network operator. For that purpose, it is verified that the communication apparatus is provisioned with a secret that is the same as the one published by the device manufacturer in the secure distributed ledger. As underlined in the different embodiments presented above, this secret can be a hardware diversification key. In that case, the aim of the device authentication process is to verify that the first hardware diversification key F_HWDK is identical to the second hardware diversification key S_HWDK. In other words, it is verified during the device authentication process that the hardware diversification key memorized in the communication apparatus is the same as the one published by the device manufacturer in the secure distributed ledger for this device. Alternatively, the secret can be a master key KMF memorized in the communication apparatus and also published in the secure distributed ledger.
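As an illustration of such a derivation, the sketch below uses HMAC-SHA256 as a stand-in for the designated algorithm; the actual algorithm is whichever one the manufacturer records in the ledger (ALGO1 on the server side, ALGO2 on the device side), and the choice of inputs shown here is an assumption.

```python
import hashlib
import hmac

def derive_hwdk(kmf: bytes, device_id: str) -> bytes:
    """Derive a hardware diversification key from the master key KMF and a
    device identifier such as an IMEI. HMAC-SHA256 stands in for ALGO1/ALGO2."""
    return hmac.new(kmf, device_id.encode(), hashlib.sha256).digest()

# The device authentication server derives S_HWDK from the KMF published in
# the ledger; the communication apparatus derives F_HWDK from its own copy of
# KMF. For a genuine device the two derivations yield the same key.
```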
When the secret is a master key KMF, the master key is used to derive a first hardware diversification key by the communication apparatus and a second hardware diversification key by the authentication server. The authentication of the communication apparatus can then be carried out using an authentication mechanism based on the use of symmetric keys. A well-known example is the AKA Milenage algorithm which is used for authenticating the subscriber in 3GPP-like wireless networks. According to the example presented inFIG.2, an algorithm based on the use of symmetric keys is applied. The device authentication206can be carried out as follows by the device authentication server103. A challenge message is generated by the device authentication server103and then transmitted to the communication apparatus100. The challenge message comprises for example a random number RAND generated by the device authentication server103. Then, the device authentication process can be applied206. For that purpose, the communication apparatus100is able to calculate a first result F_HWRES by applying an algorithm ALGO2 memorized for example in a tamper resistant area of the communication apparatus. The random number RAND and the first hardware diversification key F_HWDK, memorized securely in the communication apparatus or derived from a master key KMF memorized securely in the communication apparatus, can be used as inputs:

F_HWRES=ALGO2(F_HWDK,RAND)

This result F_HWRES is then transmitted by the communication apparatus to the device authentication server103. It is then compared with a result S_HWRES determined by the device authentication server103. The result S_HWRES can be determined as follows:

S_HWRES=ALGO2(S_HWDK,RAND)

If207the result F_HWRES received from the communication apparatus100matches the result S_HWRES determined by the device authentication server103, then the communication apparatus100is authenticated. If not, this means that the communication apparatus100is not the one known to be associated with the subscriber by his home network operator. This situation can be encountered for example when a soft-SIM has been installed in a first communication apparatus, duplicated by a malevolent person and installed in a second communication apparatus. When the second device tries to establish a wireless communication, the home wireless network operator is able to verify whether the second communication apparatus embeds a cloned version of the soft-SIM. Depending on the success or failure207of the authentication, the wireless network can behave differently. Only a legitimate communication apparatus provisioned with the correct hardware diversification key is able to respond correctly. If the authentication is successful, the communication apparatus is identified as genuine and therefore it is granted access to the wireless network. On the contrary, if the authentication process fails, the wireless network operator can then take appropriate actions208to remove the cloned device from its network. In case of a failed authentication with a genuine device, once the genuine device is identified, the AuC authentication key can be swapped to the current value of this genuine device, in the case of AKA tokenization. Another appropriate action is to force the communication apparatus which is detected as a cloned device to detach from the network, using for example the standardized 3GPP detach procedure.
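A minimal sketch of this challenge-response exchange is given below. HMAC-SHA256 again stands in for ALGO2, and the key sizes, challenge length and in-process message passing are assumptions; the description above only fixes the structure F_HWRES=ALGO2(F_HWDK,RAND) compared against S_HWRES=ALGO2(S_HWDK,RAND).

```python
import hashlib
import hmac
import secrets

def algo2(hwdk: bytes, rand: bytes) -> bytes:
    """Stand-in for ALGO2: a keyed digest of the challenge."""
    return hmac.new(hwdk, rand, hashlib.sha256).digest()

# For a genuine device, the key provisioned in the apparatus (F_HWDK) equals
# the key the server obtained from the secure distributed ledger (S_HWDK).
f_hwdk = s_hwdk = secrets.token_bytes(32)

# Device authentication server 103: generate and send the challenge RAND.
rand = secrets.token_bytes(16)

# Communication apparatus 100: compute the response with the key held in its
# tamper-resistant area and return it to the server.
f_hwres = algo2(f_hwdk, rand)

# Server side: compute the expected result and compare in constant time.
s_hwres = algo2(s_hwdk, rand)
print("device authenticated:", hmac.compare_digest(f_hwres, s_hwres))
```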
11943613
DETAILED DESCRIPTION Reference will now be made in greater detail to embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numerals will be used throughout the drawings and the description to refer to the same or like parts. As used herein, the terminology “computer” or “computing device” includes any unit, or combination of units, capable of performing any method, or any portion or portions thereof, disclosed herein. For example, the “computer” or “computing device” may include at least one processor. As used herein, the terminology “processor” indicates one or more processors, such as one or more special purpose processors, one or more microprocessors, one or more controllers, one or more microcontrollers, one or more application processors, one or more central processing units (CPU)s, one or more graphics processing units (GPU)s, one or more digital signal processors (DSP)s, one or more application specific integrated circuits (ASIC)s, one or more application specific standard products, one or more field programmable gate arrays, any other type or combination of integrated circuits, one or more state machines, or any combination thereof. As used herein, the terminology “memory” indicates any computer-usable or computer-readable medium or device that can tangibly contain, store, communicate, or transport any signal or information that may be used by or in connection with any processor. For example, a memory may be one or more read-only memories (ROM), one or more random access memories (RAM), one or more registers, one or more low power double data rate (LPDDR) memories, one or more cache memories, one or more semiconductor memory devices, one or more magnetic media, one or more optical media, one or more magneto-optical media, or any combination thereof. As used herein, the terminology “instructions” may include directions or expressions for performing any method, or any portion or portions thereof, disclosed herein, and may be realized in hardware, software, or any combination thereof. For example, instructions may be implemented as information, such as a computer program, stored in memory that may be executed by a processor to perform any of the respective methods, algorithms, aspects, or combinations thereof, as described herein. Instructions, or a portion thereof, may be implemented as a special purpose processor, or circuitry, that may include specialized hardware for carrying out any of the methods, algorithms, aspects, or combinations thereof, as described herein. In some implementations, portions of the instructions may be distributed across multiple processors on a single device or on multiple devices, which may communicate directly or across a network such as a local area network, a wide area network, the Internet, or a combination thereof. As used herein, the term “application” refers generally to a unit of executable software that implements or performs one or more functions, tasks or activities. For example, applications may perform one or more functions including, but not limited to, telephony, web browsers, e-commerce transactions, media players, travel scheduling and management, smart home management, entertainment, and the like. The unit of executable software generally runs in a predetermined environment and/or a processor.
As used herein, the terminology “determine” and “identify,” or any variations thereof, includes selecting, ascertaining, computing, looking up, receiving, determining, establishing, obtaining, or otherwise identifying or determining in any manner whatsoever using one or more of the devices and methods shown and described herein. As used herein, the terminology “example,” “embodiment,” “implementation,” “aspect,” “feature,” or “element” indicates serving as an example, instance, or illustration. Unless expressly indicated, any example, embodiment, implementation, aspect, feature, or element is independent of each other example, embodiment, implementation, aspect, feature, or element and may be used in combination with any other example, embodiment, implementation, aspect, feature, or element. As used herein, the terminology “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to indicate any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from the context to be directed to a singular form. Further, for simplicity of explanation, although the figures and descriptions herein may include sequences or series of steps or stages, elements of the methods disclosed herein may occur in various orders or concurrently. Additionally, elements of the methods disclosed herein may occur with other elements not explicitly presented and described herein. Furthermore, not all elements of the methods described herein may be required to implement a method in accordance with this disclosure. Although aspects, features, and elements are described herein in particular combinations, each aspect, feature, or element may be used independently or in various combinations with or without other aspects, features, and elements. Further, the figures and descriptions provided herein may be simplified to illustrate aspects of the described embodiments that are relevant for a clear understanding of the herein disclosed processes, machines, manufactures, and/or compositions of matter, while eliminating for the purpose of clarity other aspects that may be found in typical similar devices, systems, compositions and methods. Those of ordinary skill may thus recognize that other elements and/or steps may be desirable or necessary to implement the devices, systems, compositions and methods described herein. However, because such elements and steps are well known in the art, and because they do not facilitate a better understanding of the disclosed embodiments, a discussion of such elements and steps may not be provided herein. However, the present disclosure is deemed to inherently include all such elements, variations, and modifications to the described aspects that would be known to those of ordinary skill in the pertinent art in light of the discussion herein. Disclosed herein are methods and systems for network device and local area network recovery and management using mobile devices.
In an implementation, a router is provided with a redundant router controller which connects with a mobile device to relay control, command, and diagnostic data and information to and from a service provider system in the event of a network connection failure between the router and the service provider system. Diagnostic data collected by the router is sent to the service provider system, which in turn can send configuration commands to the router based on the diagnostic data. Router reconfiguration can be quickly and efficiently processed, resulting in reconnection of the network connection between the router and the service provider system. In an implementation, a service provider application on a mobile device and the redundant router controller provide a means for establishing a secure wireless communication channel between the router and the service provider system in case of an outage. In an implementation, the wireless communication channel can use a Wi-Fi® interface. In an implementation, the wireless communication channel can use a Bluetooth® interface. In an implementation, authentication certifications would be exchanged between the mobile device, the service provider system, and the router. Upon an outage condition, secure connections are established between the mobile device and the service provider system, and between the mobile device and the router. The router would start sending diagnostic logs to the mobile device, which would act or be seen as a configuration controller. In an implementation, the mobile device sends the diagnostic logs to the service provider system. The service provider system determines a resolution and sends configuration commands to the mobile device, which in turn sends the configuration commands to the router and is seen by the service provider system as a configuration client. In an implementation, the mobile device determines a resolution and sends configuration commands to the router. In an implementation, the mobile device updates the service provider system with the new configuration. In an implementation, the redundant or back-up communication channel architecture complements existing capabilities. A user would be able to request a change of service set identifier (SSID) even in the event of network connection outages. The user would send the request to the service provider system as normal. The service provider system would send the new configuration data to the mobile device, which in turn loads the configuration update to the router. In illustrative examples, even in the event of an outage, configuration instructions can be sent to disable a rogue device, update a policy, update router configuration based on received diagnostic logs, update LAN-based rules, update network blocking rules, update firewall rules, update device kicking rules, apply firmware updates, apply software updates, and the like and/or combinations thereof. FIG.1is a diagram of an example of a network architecture1000in accordance with embodiments of this disclosure. In implementations, the architecture1000can include a service provider system1100which provides cable, television, Internet, voice, and like services to premises, residences, offices, and the like (collectively “premises”) such as, for example, premises1200. The service provider system1100can include a cable modem termination system1110and a configuration management server1120. The service provider system1100is connected to or in communication with (collectively “in communication with”) the premises1200.
The premises1200can include a modem1300which is connected to the cable modem termination system1110and to a router1400. The router1400can establish a local area network (LAN) for the premises1200, where connections to the LAN can be wired, wireless, or combinations thereof. The router1400can include radios such as, for example, a Wi-Fi® radio1410, a Bluetooth® radio1420, and the like for wireless connectivity and Ethernet ports1430, for example, for wired connectivity. For example, a connected device1500can be connected to the Ethernet port1430and a mobile device1600can be connected via the Wi-Fi® radio1410. The router1400also includes a router controller1440and a redundant router controller1450which may also be referred to as a cellular backhaul manager or controller or a back-up communication channel manager or controller. The mobile device1600can also be connected to a wireless network1700, which provides wireless coverage using one or more base stations1710,1720, and1730. The number of base stations is illustrative and the wireless network1700may include more or fewer base stations. The communications between elements or components in the architecture1000may include wired communications, wireless communications, or a combination thereof, as appropriate. In implementations, the architecture1000can execute the techniques described inFIGS.3-13individually or in combinations thereof. The architecture1000and each element or component in the architecture1000is illustrative and can include additional, fewer or different devices, entities, elements, components, and the like which can be similarly or differently architected without departing from the scope of the specification and claims herein. Moreover, the illustrated devices, entities, elements, and components can perform other functions without departing from the scope of the specification and claims herein. The cable modem termination system1110can provide high speed data services, cable Internet, Voice over Internet Protocol, and like services to service provider subscribers located, for example, at the premises1200. In implementations, the connection between the cable modem termination system1110and the modem1300is wired. The configuration management server1120can manage networks and network devices such as routers, switches, and the like. The configuration management server1120can analyze diagnostic data from network devices, determine a resolution, and send commands to the network devices to correct or repair the network device configuration. The configuration management server1120can determine the status of the connection with the network device. In the event of a network connection failure or outage, the configuration management server1120can send commands to the network device via a secure and authenticated mobile device connection. The modem1300converts data for transmission between computing devices over a transmission medium such as a fiber optic cable, coaxial cable, and the like. The modem1300encodes and decodes digital information for transmission and reception between the computing devices. The modem1300is connected to the router1400. The router1400can determine the most inexpensive, fastest, least-busy, best quality, or other criteria-based routes for delivering or forwarding packets between source and destination devices. Configuration of the router1400is nominally done by the configuration management server1120.
The router controller1440controls connection configurations and other router control functionality based on commands received from the configuration management server1120via a wired connection such as between the modem1300and the cable modem termination system1110. The redundant router controller1450controls connection configurations and other router control functionality based on commands received from the configuration management server1120via a back-up or redundant communication channel which is formed from two connections including a first connection between the mobile device1600and the router1400, and a second connection between the mobile device1600and the service provider system1100and/or configuration management server1120. The router1400is an illustrative access point device and other network devices can be used. In implementations, the modem1300and the router1400can be an integrated access point device such as a gateway. The connected device1500can be, but is not limited to, end user devices, set-top boxes, personal computers (PCs), cellular telephones, Internet Protocol (IP) devices, computers, desktop computers, laptops, mobile devices, handheld computers, PDAs, personal media devices, smartphones, notebooks, notepads, phablets and the like which can be connected to the Ethernet port1430. The mobile device1600can be, but is not limited to, end user devices, cellular telephones, Internet Protocol (IP) devices, laptops, mobile devices, handheld computers, PDAs, personal media devices, smartphones, notebooks, notepads, phablets and the like. For example, in an implementation, the mobile device1600can include applications such as, but not limited to, a mail application1610, a web browser application1620, a service provider application1630, and the like. The service provider application1630enables the mobile device1600to perform as a relay between the service provider system1100and the router1400in the event of an outage. The mobile device1600and/or the service provider application1630can store and use a public and private key to establish secure and authenticated connections with the router1400. In implementations, the mobile device1600and/or the service provider application1630can establish secure MQ Telemetry Transport (MQTT) or like messaging protocol connections with the service provider system1100or exchange secure messages using the MQTT or like messaging protocol connections. The wireless network1700and the one or more base stations1710,1720, and1730can use any cellular, mobile, or like standard for wireless communications including, for example, but not limited to, 3G, 4G, 5G, Global System for Mobile Communications (GSM), Long-Term Evolution (LTE), Code-division multiple access (CDMA), and the like. FIG.2is a block diagram of an example of a device2000in accordance with embodiments of this disclosure. The device2000may include, but is not limited to, a processor2100, a communication interface2200, a memory/storage2300, and applications2400. The device2000may include or implement, for example, the service provider system1100, the cable modem termination system1110, the configuration management system1120, the modem1300, the router1400, the router controller1440, the redundant router controller1450, the connected device1500, the mobile device1600, the wireless network1700, and the base stations1710,1720, and1730.
The applicable or appropriate techniques or methods as described with respect toFIGS.3-13may be stored in the memory/storage2300and executed by the processor2100in cooperation with the memory/storage2300, the communications interface2200, and the applications2400, as appropriate. The applicable or appropriate techniques or methods as described with respect toFIGS.3-13can be executed individually or in various combinations thereof. The device2000may include other elements which may be desirable or necessary to implement the devices, systems, compositions and methods described herein. However, because such elements and steps are well known in the art, and because they do not facilitate a better understanding of the disclosed embodiments, a discussion of such elements and steps may not be provided herein. Operationally with respect toFIGS.1-2, and as described in detail herein below, a LAN can be established using the router1400, which is connected to the service provider system1100via the modem1300and the cable modem termination system1110. The router1400and/or router controller1440can send diagnostic information to the service provider system1100and/or configuration management server1120and the service provider system1100and/or configuration management server1120can send control and configuration commands to the router1400via the modem1300and the cable modem termination system1110connection. The connected device1500and the mobile device1600can connect to and use the LAN as appropriate. In implementations, the mobile device1600can exchange authentication information or credentials with the router1400via the service provider system1100. This authentication information or these credentials can then be used in the event of an outage between the service provider system1100and the router1400. In the event of a network connection failure, the router1400and/or the redundant router controller1450can advertise the need for the back-up communication channel. The network connection failure can be due to a variety of reasons, some of which are illustrated herein. In a non-limiting example, the failure can be between the modem1300and the cable modem termination system1110connection. The mobile device1600can authenticate with the router1400and/or the redundant router controller1450to establish a secure and authenticated wireless connection. The secure and authenticated wireless connection can be, for example, a Wi-Fi®, Bluetooth®, or other wireless communication based connection. The redundant router controller1450can take control of the router1400from the router controller1440. The mobile device1600can then relay diagnostic data received from the router1400and/or the redundant router controller1450to the service provider system1100and/or configuration management server1120. The configuration management server1120can then send configuration commands to the router1400and/or the redundant router controller1450via the mobile device1600. The redundant router controller1450can then reconfigure the router1400in accordance with the received configuration commands. If the network connection is re-established, then the mobile device1600connection can be disconnected and the router controller1440can take control of the router1400from the redundant router controller1450. FIG.3is a flow diagram3000of an example of a method for network device and local area network recovery and management using mobile devices in accordance with embodiments of this disclosure.
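Before walking through the flow ofFIG.3, the sketch below illustrates the relay role the mobile device1600plays during an outage: diagnostic data flows from the router to the service provider system through the mobile device, and configuration commands flow back the same way. The Channel abstraction and the message ordering are assumptions; in the disclosure the router-side link is the secure Wi-Fi® or Bluetooth® connection and the provider-side link is the cellular connection, for example carrying MQTT messages.

```python
from typing import Protocol

class Channel(Protocol):
    """Hypothetical secure, already-authenticated message channel."""
    def send(self, payload: bytes) -> None: ...
    def receive(self) -> bytes: ...

def relay_outage_session(router: Channel, provider: Channel) -> None:
    # Forward the diagnostic logs collected by the router to the provider.
    diagnostics = router.receive()
    provider.send(diagnostics)
    # Return the configuration commands the provider derives from the logs.
    commands = provider.receive()
    router.send(commands)
    # Relay the router's confirmation of the new configuration back upstream.
    provider.send(router.receive())
```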
The flow diagram3000describes communications and events with respect to a service provider system3100, a router3200, and a mobile device3400associated with a user3300or subscriber to the services provided by the service provider system3100. Initially there is a full or complete network connection between the service provider system3100and the router3200(3500). In implementations, the network connection is an Internet connection. Upon initial connection to the router3200or associated LAN, the mobile device3400can publish authentication credentials to the service provider system3100(3505). In implementations, the mobile device3400can have public and private keys and the public key can be published to the service provider system3100. The service provider system3100can then send a media access control (MAC) address to the router3200and load the public key on the router3200(3510). An outage occurs (3515). The service provider system3100can send a message to the mobile device3400that a back-up communication channel or backhaul connection is needed to communicate with the router3200(3520). The mobile device3400can then send a request or notification to the user3300for approval (3525). The user3300can then send or provide approval (3530), upon which the mobile device3400can then send a request for certifications for an authenticated connection to the service provider system3100(3535). The service provider system3100can send the certifications (3540) and the mobile device3400can request diagnostic logs on the back-up communication channel or backhaul connection (3545). The back-up communication channel or backhaul connection can be a Wi-Fi® connection, a Bluetooth® connection, or any wireless connection. At this point, a backhaul manager or controller (e.g., the redundant router controller1450ofFIG.1) can assume control of the router3200and can redirect the control traffic including the diagnostic data to the mobile device3400(3550), which in turn can relay the control traffic including the diagnostic data to the service provider system3100. The service provider system3100can then send router configuration data, commands, or instructions to the mobile device3400(3555), which in turn can update the router3200with the router configuration data, commands, or instructions (3560). FIG.4is a flow diagram4000of an example of a method for network device and local area network recovery and management using mobile devices in accordance with embodiments of this disclosure. The flow diagram4000describes communications and events with respect to a service provider system4100, a router4200which includes a router cellular backhaul manager (CBM)4300, and a mobile device4400. In the event of an outage (4500), the router4200can detect the network connection failure and initiate diagnostics (4505). The router CBM4300can assume control of the router4200and begin advertising to the mobile device4400that a cellular backhaul connection is needed (4510). The mobile device4400and the router CBM4300can then establish an authenticated or secure cellular backhaul connection (4515). The cellular backhaul connection can be a Wi-Fi® connection, a Bluetooth® connection, or any wireless connection. The router CBM4300can then change the router configuration to route control traffic through the cellular backhaul connection (4520). The router4200can then send diagnostic logs and information to the mobile device4400(4525).
The mobile device4400can then send router configuration data, commands, or instructions to the router CBM4300(4530), which in turn can update the router4200with the router configuration data, commands, or instructions (4535). The router CBM4300can then verify reestablishment of the network connection with the service provider4100(4540), restore the network connection between the router4200and the service provider4100, and remove the cellular backhaul connection (4545). The router4200and the service provider4100can now communicate over the restored network connection (4550). FIG.5is a flow diagram5000of an example of a method for network device and local area network recovery and management using mobile devices in accordance with embodiments of this disclosure. The flow diagram5000describes communications and events with respect to a service provider system5100, a router5200, and a mobile device5400associated with a user5300or subscriber to the services provided by the service provider system5100. In the event of an outage (5500), the router5200can detect the network connection failure, initiate diagnostics (5505) and begin advertising to the mobile device5400that a backhaul connection is needed (5510). The backhaul connection can be a Wi-Fi® connection, a Bluetooth® connection, or any wireless connection. In addition, the service provider system5100can send a message or notification of a network connection failure to the mobile device5400(5515). The mobile device5400can then send a request or notification to the user5300for approval (5520). The user5300can then send or provide approval (5525), upon which the mobile device5400can then send a request to the router5200to accept the backhaul connection and verify the certifications (5530). The router5200can verify the certifications and establish the backhaul connection (5535). The mobile device5400can confirm connection with the service provider system5100(5540). In an illustrative example, the outage may have been due to a bad or corrupted domain name system (DNS) server configuration. In this outage event, the service provider system5100can send to the mobile device5400configuration instructions to connect to a different DNS server (5545). The mobile device5400can send the configuration instructions to the router5200, which in turn can attempt to resolve the different DNS server (5550). The router5200can send the results from resolving the different DNS server to the mobile device5400(5555), which in turn can instruct the router5200to update to the different DNS server (5560). FIG.6is a flow diagram6000of an example of a method for network device and local area network recovery and management using mobile devices in accordance with embodiments of this disclosure. The flow diagram6000describes communications and events with respect to a service provider system6100, a router6200, and a mobile device6400associated with a user6300or subscriber to the services provided by the service provider system6100. In the event of an outage (6500), the router6200can detect the network connection failure, initiate diagnostics (6505) and begin advertising to the mobile device6400that a backhaul connection is needed (6510). In addition, the service provider system6100can send a message or notification of a network connection failure to the mobile device6400(6515). The mobile device6400can then send a request or notification to the user6300for approval (6520).
The user6300can then send or provide approval (6525), upon which the mobile device6400can then send a request to the router6200to accept the backhaul connection and verify the certifications (6530). The router6200can verify the certifications and establish the backhaul connection (6535). The backhaul connection can be a Wi-Fi® connection, a Bluetooth® connection, or any wireless connection. The mobile device6400can confirm the connection with the service provider system6100(6540). In an illustrative example, the outage may have been due to a rogue Internet of Things (IoT) device. In this outage event, the service provider system6100can send instructions to the mobile device6400to disable the rogue IoT device on the LAN or the router6200(6545). The mobile device6400can send the instructions to the router6200, which in turn can disable or disconnect the rogue IoT device. FIG.7is a flow diagram7000of an example of a method for network device and local area network recovery and management using mobile devices in accordance with embodiments of this disclosure. The flow diagram7000describes communications and events with respect to a service provider system7100, a router7200, and a mobile device7300including a service provider application. In the event of an outage (7400), the service provider system7100can send instructions to the mobile device7300to gather diagnostics from the router7200(7410), which in turn can send instructions, over a secure wireless connection, to the router7200to pull the diagnostic logs (7420). The secure wireless connection can be a Wi-Fi® connection, a Bluetooth® connection, or any wireless connection. The router7200sends the pulled diagnostic logs to the mobile device7300(7430), which in turn relays or forwards the diagnostic logs to the service provider system7100(7440). The service provider system7100can send configuration correction updates to the mobile device7300. The configuration correction updates can include Dynamic Host Configuration Protocol (DHCP) client updates for the router7200. The mobile device7300can send the DHCP client updates to the router7200(7450), which in turn can perform the update (7460). The network connection is restored upon successful repair (7470). FIG.8is a flow diagram8000of an example of a method for network device and local area network recovery and management using mobile devices in accordance with embodiments of this disclosure. The flow diagram8000describes communications and events with respect to a service provider system8100, a router8200, and a mobile device8300including a service provider application. In the event of an outage (8400), the service provider system8100can send an alert notification and instructions to the mobile device8300(8410). In an illustrative example, the alert notification can be due to elapsing or timing out of a time-sensitive policy and the instructions can be configuration details to address the alert notification. The mobile device8300can establish a secure wireless connection with the router8200(8420). The secure wireless connection can be a Wi-Fi® connection, a Bluetooth® connection, or any wireless connection. The mobile device8300can send the alert notification and instructions to the router8200(8430). In an illustrative example, the instructions can be to modify a firewall. The router8200can send operational confirmation of the instructions (8440). In an illustrative example, the operational confirmation can be success, failure, and the like.
The mobile device8300can send the operational confirmation to the service provider system8100(8450). FIG.9is a flow diagram9000of an example of a method for network device and local area network recovery and management using mobile devices in accordance with embodiments of this disclosure. The flow diagram9000describes communications and events with respect to a service provider system9100, a router9200, and a mobile device9300including a service provider application. During an outage (9400), the mobile device9300can send a request for a configuration change or update to the service provider system9100(9410). In an illustrative example, the configuration change or update request can be to change the Wi-Fi® SSID. The service provider system9100can send the configuration update for the router9200to the mobile device9300in view of the outage (9420). The mobile device9300can perform a secure handshake with the router9200to establish a secure connection (9430). The secure connection can be a Wi-Fi® connection, a Bluetooth® connection, or any wireless connection. The mobile device9300can send the configuration update to the router9200over the secure connection (9440). Upon completing the configuration update, the router9200can send a transaction success message to the mobile device9300(9450). FIG.10is a flowchart of an example method10000for network device and local area network recovery and management using mobile devices in accordance with embodiments of this disclosure. The method10000includes: providing10100an access point device with authentication details of a mobile device; establishing10200a secure connection with the access point device upon failure of a network connection between a service provider system and the access point device; obtaining10300diagnostic information from the access point device; sending10400the diagnostic information to the service provider system; receiving10500a configuration update from the service provider system; sending10600the configuration update to the access point device; receiving10700confirmation of the configuration update; and disconnecting10800the secure connection upon restoration of the network connection between the service provider system and the access point device. For example, the technique10000may be implemented, as applicable and appropriate, by the service provider system1100, the cable modem termination system1110, the configuration management system1120, the modem1300, the router1400, the router controller1440, the redundant router controller1450, the connected device1500, the mobile device1600, the wireless network1700, the base stations1710,1720, and1730, the device2000, the processor2100, the communication interface2200, the memory/storage2300, and the applications2400. The method10000includes providing10100an access point device with authentication details of a mobile device. In implementations, a service provider can provide services to a premises by providing and connecting an access point device to a service provider system via a network connection. Configuration and maintenance of the access point device can be performed over the network connection. A secondary connection for configuration and maintenance can be provided by enabling the access point device and a mobile device associated with the premises to be connected in case the network connection fails.
The mobile device can provide authentication credentials, such as public and private keys, to the access point device when the mobile device initially connects to the access point device. In implementations, this can be processed via the service provider system. The access point device can have one or more controllers to handle the network connection and the secondary connection. The method10000includes establishing10200a secure connection with the access point device upon failure of a network connection between a service provider system and the access point device. The mobile device and the access point device can perform an authentication or secure handshake to establish a secure connection upon receipt of notification of a failure. In implementations, notification can be provided by the service provider system, the access point device, and/or combinations thereof. In implementations, the access point device can advertise the need for the secondary connection to the mobile device. In implementations, receipt of the notification can generate an alert to a user of the mobile device for permission to use the mobile device in the secondary connection. In this instance, the authentication or secure handshake process can take place upon user approval. The method10000includes obtaining10300diagnostic information from the access point device. In implementations, the access point device can initiate gathering of diagnostic data in the event of a network connection failure. In implementations, the mobile device can instruct the access point device to gather the diagnostic data. The method10000includes sending10400the diagnostic information to the service provider system. The diagnostic data provided by the access point device is sent by the mobile device to the service provider system. The method10000includes receiving10500a configuration update from the service provider system. The service provider system can review the diagnostic data received from the mobile device and generate a configuration update in view of the diagnostic data. The method10000includes sending10600the configuration update to the access point device. The mobile device can relay or forward the configuration update to the access point device. The method10000includes receiving10700confirmation of the configuration update. The access point device can apply the configuration update and send results to the mobile device. In implementations, the mobile device can confirm application of the configuration update. The method10000includes disconnecting10800the secure connection upon restoration of the network connection between the service provider system and the access point device. The secondary connection can be disconnected upon successful application of the configuration update and restoration of the network connection. FIG.11is a flowchart of an example method11000for network device and local area network recovery and management using mobile devices in accordance with embodiments of this disclosure.
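Before detailing method11000, the sequence of method10000can be summarized in code form from the mobile device's perspective. Every helper name below is a hypothetical stand-in, as the disclosure does not prescribe concrete APIs for these steps.

```python
def method_10000(mobile, access_point, provider) -> None:
    """Hypothetical walk-through of steps 10100 to 10800 of method 10000."""
    provider.load_credentials(access_point, mobile.public_key)  # 10100: provide
    link = mobile.secure_connect(access_point)                  # 10200: establish
    diagnostics = link.get_diagnostics()                        # 10300: obtain
    provider.send(diagnostics)                                  # 10400: send
    update = provider.receive_configuration_update()            # 10500: receive
    link.apply(update)                                          # 10600: send update
    assert link.confirm()                                       # 10700: confirmation
    link.disconnect()  # 10800: once the network connection is restored
```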
The method11000includes: providing11100an access point device with authentication details of a mobile device; establishing11200a secure connection with the access point device upon failure of a network connection between a service provider system and the access point device; obtaining11300diagnostic information from the access point device; sending11400a configuration update to the access point device; receiving11500confirmation of the configuration update; and disconnecting11600the secure connection upon restoration of the network connection between the service provider system and the access point device. For example, the technique11000may be implemented, as applicable and appropriate, by the service provider system1100, the cable modem termination system1110, the configuration management system1120, the modem1300, the router1400, the router controller1440, the redundant router controller1450, the connected device1500, the mobile device1600, the wireless network1700, the base stations1710,1720, and1730, the device2000, the processor2100, the communication interface2200, the memory/storage2300, and the applications2400. The method11000includes providing11100an access point device with authentication details of a mobile device. In implementations, a service provider can provide services to a premises by providing and connecting an access point device to a service provider system via a network connection. Configuration and maintenance of the access point device can be performed over the network connection. A secondary connection for configuration and maintenance can be provided by enabling the access point device and a mobile device associated with the premises to be connected in case the network connection fails. The mobile device can provide authentication credentials, such as public and private keys, to the access point device when the mobile device initially connects to the access point device. In implementations, this can be processed via the service provider system. The access point device can have one or more controllers to handle the network connection and the secondary connection. The method11000includes establishing11200a secure connection with the access point device upon failure of a network connection between a service provider system and the access point device. The mobile device and the access point device can perform an authentication or secure handshake to establish a secure connection upon receipt of notification of a failure. In implementations, the notification can be provided by the service provider system, the access point device, and/or combinations thereof. In implementations, the access point device can advertise the need for the secondary connection to the mobile device. In implementations, receipt of the notification can generate an alert to a user of the mobile device for permission to use the mobile device in the secondary connection. In this instance, the authentication or secure handshake process can take place upon user approval. The method11000includes obtaining11300diagnostic information from the access point device. In implementations, the access point device can initiate gathering of diagnostic data in the event of a network connection failure. In implementations, the mobile device can instruct the access point device to gather the diagnostic data. The method11000includes sending11400a configuration update to the access point device.
The mobile device can review the diagnostic data received from the access point device and generate a configuration update in view of the diagnostic data. The method11000includes receiving11500confirmation of the configuration update. The access point device can apply the configuration update and send results to the mobile device. In implementations, the mobile device can confirm application of the configuration update. The method11000includes disconnecting11600the secure connection upon restoration of the network connection between the service provider system and the access point device. FIG.12is a flowchart of an example method12000for network device and local area network recovery and management using mobile devices in accordance with embodiments of this disclosure. The method12000includes: receiving12100configuration instructions as an indication of an outage between an access point device and a service provider system; establishing12200a secure connection with the access point device; sending12300the configuration instructions to the access point device; receiving12400confirmation of application of the configuration instructions; and sending12500results to the service provider. For example, the technique12000may be implemented, as applicable and appropriate, by the service provider system1100, the cable modem termination system1110, the configuration management system1120, the modem1300, the router1400, the router controller1440, the redundant router controller1450, the connected device1500, the mobile device1600, the wireless network1700, the base stations1710,1720, and1730, the device2000, the processor2100, the communication interface2200, the memory/storage2300, and the applications2400. The method12000includes receiving12100configuration instructions as an indication of an outage between an access point device and a service provider system. In implementations, a service provider can provide services to a premises by providing and connecting an access point device to a service provider system via a network connection. Configuration and maintenance of the access point device can be performed over the network connection. A secondary connection for configuration and maintenance can be provided by enabling the access point device and a mobile device associated with the premises to be connected in case the network connection fails. The mobile device can provide authentication credentials, such as public and private keys, to the access point device when the mobile device initially connects to the access point device. In implementations, this can be processed via the service provider system. The access point device can have one or more controllers to handle the network connection and the secondary connection. The mobile device can receive configuration instructions from the service provider system which indicate that an outage has occurred and which contain instructions for repairing the outage, for applying a configuration change, for performing an access point device policy update, and/or combinations thereof. The method12000includes establishing12200a secure connection with the access point device. The mobile device and the access point device can perform an authentication or secure handshake to establish a secure connection. In implementations, the access point device can advertise the need for the secondary connection to the mobile device.
In implementations, receipt of the notification can generate an alert to a user of the mobile device for permission to use the mobile device in the secondary connection. In this instance, the authentication or secure handshake process can take place upon user approval. The method12000includes sending12300the configuration instructions to the access point device. The mobile device can relay or forward the configuration instructions to the access point device. The method12000includes receiving12400confirmation of application of the configuration instructions. The access point device can apply the configuration instructions and send results to the mobile device. In illustrative examples, the configuration instructions can be to disable a rogue device, update a policy, update router configuration based on received diagnostic logs, update LAN-based rules, update network blocking rules, update firewall rules, update device kicking rules, apply firmware updates, apply software updates, and the like and/or combinations thereof. In implementations, the mobile device can confirm application of the configuration instructions. The method12000includes sending12500results to the service provider. The mobile device can send the results from the application of the configuration instructions to the service provider system. In implementations, the mobile device can disconnect the secondary connection upon sending the results. FIG.13is a flowchart of an example method13000for network device and local area network recovery and management using mobile devices in accordance with embodiments of this disclosure. The method13000includes: receiving13100an indication of an outage between an access point device and a service provider system; establishing13200a secure connection with the access point device; receiving13300configuration instructions from the service provider system; sending13400the configuration instructions to the access point device; receiving13500configuration results from the access point device; and sending13600the configuration results to the service provider. For example, the technique13000may be implemented, as applicable and appropriate, by the service provider system1100, the cable modem termination system1110, the configuration management system1120, the modem1300, the router1400, the router controller1440, the redundant router controller1450, the connected device1500, the mobile device1600, the wireless network1700, the base stations1710,1720, and1730, the device2000, the processor2100, the communication interface2200, the memory/storage2300, and the applications2400. The method13000includes receiving13100an indication of an outage between an access point device and a service provider system. In implementations, a service provider can provide services to a premises by providing and connecting an access point device to a service provider system via a network connection. Configuration and maintenance of the access point device can be performed over the network connection. A secondary connection for configuration and maintenance can be provided by enabling the access point device and a mobile device associated with the premises to be connected in case the network connection fails. The mobile device can provide authentication credentials, such as public and private keys, to the access point device when the mobile device initially connects to the access point device. In implementations, this can be processed via the service provider system.
The access point device can have one or more controllers to handle the network connection and the secondary connection. The mobile device can receive an indication that an outage has occurred. In implementations, the indication can be provided by the service provider system, the access point device, and/or combinations thereof. In implementations, receipt of the indication can generate an alert to a user of the mobile device for permission to use the mobile device in the secondary connection. The method13000includes establishing13200a secure connection with the access point device. The mobile device and the access point device can perform an authentication or secure handshake to establish a secure connection upon receipt of the indication. The mobile device can send confirmation to the service provider system upon establishment of the secure connection. The method13000includes receiving13300configuration instructions from the service provider system. The mobile device can receive configuration instructions from the service provider system after confirming establishment of the secure connection. The method13000includes sending13400the configuration instructions to the access point device. The mobile device can relay or forward the configuration instructions to the access point device. The method13000includes receiving13500configuration results from the access point device. The access point device can apply the configuration instructions and send results to the mobile device. In illustrative examples, the configuration instructions can be to disable a rogue device, update a policy, update router configuration based on received diagnostic logs, update LAN-based rules, update network blocking rules, update firewall rules, update device kicking rules, and the like and/or combinations thereof. In implementations, the mobile device can confirm application of the configuration instructions. The method13000includes sending13600the configuration results to the service provider. The mobile device can send the configuration results from the application of the configuration instructions to the service provider system. In implementations, the mobile device can disconnect the secondary connection upon sending the configuration results. In general, a method for access point device recovery and management using mobile devices includes providing, by a mobile device to an access point device via a service provider system, authentication details of the mobile device, establishing a secure wireless connection using the authentication details between the access point device and the mobile device upon receiving, by the mobile device, an indication of failure of a network connection between the service provider system and the access point device, sending, by the mobile device to the access point device over the secure wireless connection, a configuration instruction, receiving, by the mobile device from the access point device over the secure wireless connection, confirmation of the configuration instruction, and disconnecting the secure connection upon successful application of the configuration instruction.
In implementations, the method includes obtaining, by the mobile device from the access point device over the secure wireless connection, diagnostic information for the access point device, sending, by the mobile device to the service provider system, the diagnostic information, receiving, by the mobile device from the service provider system, the configuration instruction based on the diagnostic information, and restoring the network connection between the service provider system and the access point device. In implementations, the method includes obtaining, by the mobile device from the access point device over the secure wireless connection, diagnostic information, sending, by the mobile device to the access point device over the secure wireless connection, the configuration instruction based on the diagnostic information, and restoring the network connection between the service provider system and the access point device. In implementations, the indication is the configuration instruction. In implementations, the configuration instruction is at least one of: disable a rogue device, update a policy, update access point device configuration based on received diagnostic logs, update local area network based rules, update network blocking rules, update firewall rules, update device kicking rules, apply firmware updates, or apply software updates. In implementations, the method further includes switching from a primary controller to a secondary controller in the access point device, wherein the primary controller handles access point device processes with respect to the network connection and the secondary controller handles access point device processes with respect to the secure wireless connection. In implementations, the secure wireless connection is one of a Wi-Fi® connection or a Bluetooth® connection.

In general, a method for router recovery and management using a wireless device includes providing a router with a router controller and a redundant router controller, wherein the router controller handles router management with respect to a wired connection between the router and an Internet service provider (ISP) and the redundant router controller handles router management with respect to a wireless connection, exchanging authentication credentials to establish the wireless connection between the router and the wireless device in the event of an outage of the wired connection, switching from the router controller to the redundant router controller, sending, by the mobile device to the redundant router controller, a management command, receiving, by the mobile device from the redundant router controller, results after application of the management command, and disconnecting the wireless connection upon successful results. In implementations, the method further includes receiving, by the mobile device from the redundant router controller, failure data for the router, sending, by the mobile device to the ISP, the failure data, receiving, by the mobile device from the ISP, the management command based on the failure data, and restoring the wired connection between the ISP and the router. In implementations, the indication is the management command. In implementations, the management command is at least one of: disable a rogue device, update a policy, update access point device configuration based on received diagnostic logs, update local area network based rules, update network blocking rules, update firewall rules, update device kicking rules, apply firmware updates, or apply software updates.
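The claims above rely on the router verifying the mobile (or wireless) device with previously exchanged credentials before accepting management commands. Below is a minimal sketch of one way such a handshake could work, assuming the Python cryptography package and Ed25519 signatures; the disclosure does not name a signature algorithm, so this choice is purely illustrative.

```python
# Illustrative challenge-response handshake; the algorithm choice is assumed.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Provisioning time: the mobile device generates a key pair and registers
# the public half with the router (relayed via the service provider/ISP).
device_private = Ed25519PrivateKey.generate()
router_stored_public = device_private.public_key()

# Outage time: the redundant router controller issues a random challenge,
# and the mobile device signs it to prove possession of the private key.
challenge = os.urandom(32)
signature = device_private.sign(challenge)

try:
    router_stored_public.verify(signature, challenge)
    print("handshake ok: accept management commands over the wireless link")
except InvalidSignature:
    print("handshake failed: refuse the secondary connection")
```

Note that only the public key ever leaves the mobile device in this sketch, which matches the claim language in which the router stores a public key while the private key remains on the device.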
In implementations, the method further includes providing, by a mobile device to the router via the ISP, the authentication credentials of the mobile device. In implementations, the method further includes receiving, by the mobile device from the redundant router controller, failure data for the router, sending, by the mobile device to the redundant router controller, the management command based on the failure data, and restoring the wired connection between the ISP and the router.

In general, a service provider network includes a service provider system including a configuration management server, and a router including a main controller, a secondary controller, and at least one radio managed by the secondary controller, wherein the router is configured to communicate command and control information with the configuration management server using the main controller, receive, upon initial connection to the router, a public key from a mobile device, switch from the main controller to the secondary controller upon a communication outage between the main controller and the configuration management server, and communicate command and control information with the mobile device using the secondary controller and the at least one radio. In implementations, the router is further configured to notify the mobile device of the communication outage between the main controller and the configuration management server. In implementations, the router is further configured to establish an authenticated connection by handshaking with the mobile device using the public key and a private key stored by the mobile device. In implementations, the router is further configured to apply commands received from the mobile device, switch from the secondary controller to the main controller upon restoration of the communication between the main controller and the configuration management server, and disconnect communication of the command and control information with the mobile device using the secondary controller and the at least one radio. In implementations, the commands are at least one of: disable a rogue device, update a policy, update access point device configuration based on received diagnostic logs, update local area network based rules, update network blocking rules, update firewall rules, update device kicking rules, apply firmware updates, or apply software updates. In implementations, the secondary controller and the main controller are an integrated controller. In implementations, the command and control information received from the mobile device is relayed by the mobile device from the configuration management server.

Although some embodiments herein refer to methods, it will be appreciated by one skilled in the art that they may also be embodied as a system or computer program product. Accordingly, aspects may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “processor,” “device,” or “system.” Furthermore, aspects may take the form of a computer program product embodied in one or more computer readable mediums having the computer readable program code embodied thereon. Any combination of one or more computer readable mediums may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium.
A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to CDs, DVDs, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).

Aspects are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures.

While the disclosure has been described in connection with certain embodiments, it is to be understood that the disclosure is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications, combinations, and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.
63,389
11943614
DETAILED DESCRIPTION

Exemplary embodiments briefly summarized above will now be described more fully with reference to the accompanying drawings. These descriptions are provided by way of example to explain the subject matter to those skilled in the art, and should not be construed as limiting the scope of the subject matter to only the embodiments described herein. More specifically, examples are provided below that illustrate the operation of various embodiments according to the advantages discussed above. Furthermore, the following terms are used throughout the description given below:

Radio Node: As used herein, a “radio node” can be either a “radio access node” or a “wireless device.”

Radio Access Node: As used herein, a “radio access node” (or “radio network node”) can be any node in a radio access network (RAN) of a cellular communications network that operates to wirelessly transmit and/or receive signals. Some examples of a radio access node include, but are not limited to, a base station (e.g., a New Radio (NR) base station (gNB) in a 3GPP Fifth Generation (5G) NR network or an enhanced or evolved Node B (eNB) in a 3GPP LTE network), a high-power or macro base station, a low-power base station (e.g., a micro base station, a pico base station, a home eNB, or the like), and a relay node.

Core Network Node: As used herein, a “core network node” is any type of node in a core network. Some examples of a core network node include, e.g., a Mobility Management Entity (MME), a Packet Data Network Gateway (P-GW), a Service Capability Exposure Function (SCEF), or the like.

Wireless Device: As used herein, a “wireless device” is any type of device that has access to (i.e., is served by) a cellular communications network by wirelessly transmitting and/or receiving signals to a radio access node(s). Some examples of a wireless device include, but are not limited to, a UE in a 3GPP network and a Machine Type Communication (MTC) device.

Network Node: As used herein, a “node” or “network node” is any node that is part of the radio access network or the core network of a cellular communications network/system.

Note that the description given herein focuses on a 3GPP cellular communications system and, as such, 3GPP terminology or terminology similar to 3GPP terminology is oftentimes used. However, the concepts disclosed herein are not limited to a 3GPP system.

As discussed above, in order to support smooth migration between EPC and 5GC, it is assumed that both EPC and 5GC have access to a common subscriber database that acts as the master database for a given user (as defined in 3GPP TS 23.002). In other words, there should be a common subscriber database that encompasses both HSS/EPC-UDR functionality and UDM/UDR functionality. As noted above, however, access to encrypted data stored in EPC-UDR is limited by a proprietary data model involving proprietary authentication and/or encryption protocols. These are discussed below in more detail.

In 4G networks, the interface between HSS(AuC) and EPC-UDR is Ud, in which the data model is not standardized (i.e., it is vendor-specific). So long as both HSS(AuC) and EPC-UDR in any deployment are provided by the same vendor, having proprietary data encrypted by AuC and stored in EPC-UDR is not a problem. In 5G networks, however, the data is stored in the newly-defined (5G) UDR, which is accessible by the standardized Nudr interface. As such, UDM and UDR may be from different vendors.
Currently the credentials for 5G subscribers are unencrypted, but if the standards are updated to include credentials encryption, then Nudr should also be updated to include a data model that considers the authentication data required for that encryption. For example, the encryption mechanism will need to be standardized. This can create various problems for 4G network users who are also enabled as 5G network users, whereby user credentials must be accessed from both network types. When the user credentials are accessed from 4G, they will be read by HSS(AuC) from EPC-UDR (via Ud) and then the known, proprietary, decryption will be applied. On the other hand, when the user credentials are accessed from 5G, they will be read by UDM from UDR and the standards-based decryption will be applied.

For unified 4G-5G data storage, one approach would be to have two separate user credentials, each accessible by a different network type. However, this can be undesirable from a security standpoint. One of the encryption secret key parameters is SQN, a four-octet sequence number that is refreshed each time a network tries to authenticate the user. Duplication of security keys can lead to SQN reuse and, consequently, synchronization errors. As such, it is desirable that any duplication of secret keys be avoided except for backup purposes. But if only a single set of credentials is to be kept for 4G users enabled as 5G users, that single set currently cannot be accessed from both 4G and 5G networks, which have different reference points (Ud vs. Nudr), different protocols (LDAP vs. HTTP/REST), different data models (proprietary vs. new Nudr to be defined), and different encryption mechanisms (proprietary vs. new Nudr to be defined).

Exemplary embodiments of the present disclosure address these and other problems, challenges, and/or issues by providing a mechanism that can perform a per-traffic update of encryption mechanisms of authentication credentials. For example, the first time authentication credentials are read by a 5G consumer (e.g., a UDM), the 5G consumer executes a service (e.g., “4G to 5G encrypted credentials translator”) that is able to decrypt vendor-proprietary 4G credentials and then re-encrypt (as needed) the decrypted credentials using another encryption mechanism (e.g., a new 5G standard). The newly-created 5G encrypted credentials can then be stored in UDR, so that the next time a 5G consumer requires authentication credentials for the same subscriber, the stored credentials are available. The existing 4G-encrypted credentials can remain stored in EPC-UDR until there are no more 4G consumers (e.g., HSS) that require 4G credentials for that user (e.g., when the user no longer accesses 4G networks). Although the translator service and the 4G HSS/EPC-UDR should be provided by the same vendor, the remainder of services that utilize the 5G user credentials can be provided by different vendors.

The enhancements and/or operational improvements provided by exemplary embodiments promote coexistence and compatibility of 4G and 5G networks by facilitating coexistence of 4G and 5G user credentials that use different encryption mechanisms. Exemplary embodiments also facilitate the migration and/or upgrade of vendor-specific 4G encryption of user credentials to a standardized 5G encryption mechanism. Embodiments also minimize and/or reduce processing and signalling resource requirements by making 5G user credentials available to subsequent consumers after initial creation.
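As a rough illustration of the translator concept, the sketch below uses Fernet keys as stand-ins for both the vendor-proprietary 4G mechanism and a hypothetical standardized 5G mechanism; neither choice reflects the actual algorithms, which are vendor-specific or not yet standardized.

```python
# Stand-in translator: decrypt with the "4G" key, re-encrypt with the "5G" key.
from cryptography.fernet import Fernet

vendor_4g_key = Fernet.generate_key()    # known only to the 4G vendor's AuC
standard_5g_key = Fernet.generate_key()  # a future standardized mechanism

def translate_credentials(encrypted_4g: bytes) -> bytes:
    """Decrypt vendor-encrypted 4G credentials and re-encrypt them for 5G."""
    plaintext = Fernet(vendor_4g_key).decrypt(encrypted_4g)
    return Fernet(standard_5g_key).encrypt(plaintext)

# The first 5G consumer triggers the translation once; the 5G form is then
# stored in UDR so later consumers can read it directly.
stored_4g = Fernet(vendor_4g_key).encrypt(b"K=...;OPc=...;SQN=0001")
stored_5g = translate_credentials(stored_4g)
assert Fernet(standard_5g_key).decrypt(stored_5g) == b"K=...;OPc=...;SQN=0001"
```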
Embodiments also facilitate deployments of 5G UDR and 4G HSS/EPC-UDR from different vendors, while keeping subscriber encryption parameters under control of a single vendor, the one that originally encrypted the user credentials. In addition, embodiments can keep encrypted user credentials available for both 4G and 5G accesses, while using only a single set of corresponding encryption parameters (e.g., SQN). This avoids synchronization errors that can occur when two credential sets, including secret key parameters, are kept in different places. Furthermore, embodiments facilitate the inclusion of 5G non-standardized (e.g., operator-specific) encryption mechanisms or variants with future 5G standardized encryption mechanisms. For example, operators are able to use other encryption mechanisms as required and/or desired, while avoiding any modification of credentials.

The following description of exemplary embodiments is based on the typical current deployment situation where HSS and EPC-UDR are provided by the same vendor and credentials for 4G network users are stored in EPC-UDR. Moreover, the description focuses on certain ones of these 4G network users who are also enabled as 5G network users, e.g., via subscribed capability. The description focuses on two aspects: the first 5G network access by the 5G-enabled user, and subsequent 4G network accesses by the same user.

FIG.6, which consists ofFIGS.6A and6B, shows an exemplary signalling diagram among various core network entities in relation to an initial 5G network access for a 5G-enabled user, according to various exemplary embodiments of the present disclosure. Although the various operations inFIG.6are labelled with numbers, these numbers are merely for facilitating explanation and neither require nor suggest the operations to be performed in a particular order, unless specifically stated in the following explanation. Furthermore, the operations shown can be combined and/or divided into operations having different functionality than shown inFIG.6. In addition, the exemplary method and/or procedure shown inFIG.6can be complementary to other exemplary methods and/or procedures disclosed herein, such that they are capable of being used cooperatively to provide the benefits, advantages, and/or solutions to problems described hereinabove.

Although not shown, in response to a particular user's (also referred to as a “subscriber”) request to access the 5G network, the Access and Mobility Management Function (AMF) in the 5G network can select a particular Authentication Server Function (AUSF) to perform authentication between the UE and 5GC. In operation 1, the selected AUSF requests the UDM function to execute the authentication operation for the user. In operation 2, upon receiving the request, the UDM determines that it needs to fetch authentication information for that subscriber, and sends a request (e.g., query) to the 5GC UDR via the Nudr interface. In this example, the subscriber is assumed to be 5G-enabled but the authentication credentials remain as in 4G, i.e., encrypted by a 4G core network vendor. However, this vendor is not required to also be the vendor of the 5G entities such as UDR, UDM, AMF, or AUSF. Since only 4G encrypted credentials are available, the Nudr query in operation 2 cannot return the 4G credentials since they are stored using Ud with a proprietary data model.
Accordingly, in operation 3, the Nudr response will return either an error or some information identifying that only 4G encrypted credentials are available. In operation 4, the UDM determines, based on the response received in operation 3, whether 5G encrypted credentials are available for the subscriber. If available, the signalling flow proceeds to operation 10. Otherwise, UDM needs to execute operation 5, shown as “Upgrade to 5G encrypted credentials”. This action is shown inFIG.6as a message sent by UDM to a “4G to 5G encrypted credentials translator” procedure, the message including at least the subscriber identification. This procedure could be implemented as a new service, module, or network function (NF), such as described above.

In general, the UDM may be from a different vendor than the HSS and EPC-UDR, so long as Nudr is standardized to accommodate this multi-vendor compatibility. As such, the “4G to 5G encrypted credentials translator” function can be implemented as a service that is independent from UDM, and it may be discovered using NRF (or by configuration), similar to other SBA services. As briefly mentioned above, in this multi-vendor arrangement, the “4G to 5G encrypted credentials translator” function can be provided by the same vendor as the HSS/EPC-UDR, thereby facilitating decryption of the vendor-proprietary 4G encryption. In the case of a single vendor for both HSS/EPC-UDR and UDM, the “4G to 5G encrypted credentials translator” can be implemented in the Authentication credential Repository and Processing Function (ARPF), which can be part of or associated with the UDM. In this case, encryption (see operation 8, described below) and decryption (operation 10) are not required, so long as the information is not exchanged between two vendors by a network connection.

In operation 6, the “4G-5G encrypted credential translator” service requests from EPC-UDR the required subscriber authentication data, using the proprietary data model over the Ud interface. A response is received in operation 7, including 4G encrypted credentials and any other information required to be able to decrypt the credentials. Based on this information, in operation 8, the “4G-5G encrypted credential translator” service decrypts the credentials and then re-encrypts them using a standardized 5G encryption mechanism. However, this re-encryption operation is optional depending, e.g., on whether a standardized 5G credentials encryption mechanism is available.

In some embodiments, instead of the 5G standard encryption mechanism, an operator can define other proprietary 5G encryption mechanisms that the UDM can be instructed to select and apply in operation 8. This could be the case when UDM and “4G-5G encrypted credential translator” are provided by the same vendor, or in a multi-vendor arrangement where the operator has requested custom implementation for both vendors. In these embodiments, in operation 8, the “4G-5G encrypted credential translator” can generate the “operator proprietary encrypted credential.” In operation 9, the “4G-5G encrypted credential translator” sends to the UDM the 5G encrypted credentials along with any other information that is required for decryption, such as an identification of the encryption mechanism used (e.g., “operator proprietary encryption”). Non-standard encryption mechanisms (and related required data) can be conveyed in a proprietary extension field of the service operation.
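The response in operation 9 therefore carries more than the ciphertext. A sketch of the kind of payload it might carry is below; the field names are assumptions for illustration, not a standardized Nudr or SBA schema.

```python
# Hypothetical shape of the operation-9 response from the translator.
from dataclasses import dataclass, field

@dataclass
class TranslatorResponse:
    encrypted_credentials: bytes        # the (re-)encrypted 5G credentials
    encryption_mechanism: str           # e.g., "5g-standard" or
                                        # "operator-proprietary"
    decryption_info: dict = field(default_factory=dict)  # key ids, IVs, ...
    proprietary_extension: bytes = b""  # opaque vendor-specific data

response = TranslatorResponse(
    encrypted_credentials=b"...",
    encryption_mechanism="operator-proprietary",
    decryption_info={"key_id": "k1"},
)
```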
In operation 10, the UDM proceeds similarly as if it had received 5G encrypted credentials (and any other required information) in operation 3. In other words, it decrypts the 5G encrypted credentials and generates the required Authentication Vectors as requested in operation 1. In operation 11, the UDM responds to the AUSF indicating that the user has been authenticated. In operations 12-13, the UDM re-encrypts the credentials using either a 5G standardized or operator-proprietary encryption (according to the embodiment) and stores the 5G encrypted credentials in UDR via the Nudr interface. In operation 14, the UDM receives a UDR response indicating successful credential storage. In this manner, former 4G encrypted credentials can be upgraded to 5G encrypted credentials and can be used directly for user 5G network accesses, while 4G encrypted credentials can remain untouched and available for use when the 5G-enabled user accesses a 4G network (described below in relation toFIG.7).

In operation 15, once 5G encrypted credentials have been successfully stored in UDR, UDM sends a request to the “4G-5G encrypted credential translator” to confirm that the migration of credentials for that particular user was successful. After receiving this confirmation, in operation 16, the “4G-5G encrypted credential translator” updates the authentication proprietary information in the EPC-UDR, thereby allowing subsequent EPC-UDR reads to identify that the authentication data has already been “migrated” to 5G UDR. This vendor proprietary data can be included in a proprietary, non-standardized extension. For example, this information can be stored in a binary format to avoid identification and interpretation of this proprietary information by other vendors (e.g., competitors). In other embodiments of operation 16, the “4G-5G encrypted credential translator” can delete the 4G credentials in the EPC-UDR. This is explained in more detail below. In operation 17, the “4G-5G encrypted credential translator” responds to UDM with an indication that the 4G encrypted credentials have been successfully marked as “migrated” (or alternatively deleted).

In operation 18, the Authentication Server Function (AUSF) in the 5GC requests the UDM function to execute the authentication operation for the same user. In operation 19, upon receiving the request, the UDM determines that it needs to fetch authentication information for that subscriber, and sends a request (e.g., query) to the 5GC UDR via the Nudr interface. For this user, however, 5G encrypted credentials exist and are successfully returned in operation 20. Operations 21-22 are similar to operations 10-11, described above.

FIG.7shows an exemplary signalling diagram among various core network entities in relation to subsequent 4G network access for a 5G-enabled user, according to various exemplary embodiments of the present disclosure. Although the various operations inFIG.7are labelled with numbers, these numbers are merely for facilitating explanation and neither require nor suggest the operations to be performed in a particular order, unless specifically stated in the following explanation. Furthermore, the operations shown can be combined and/or divided into operations having different functionality than shown inFIG.7.
In addition, the exemplary method and/or procedure shown inFIG.7can be complementary to other exemplary methods and/or procedures disclosed herein, such that they are capable of being used cooperatively to provide the benefits, advantages, and/or solutions to problems described hereinabove.

In operation 1, an MME in the EPC sends an authentication request for a particular user (who is accessing the 4G network) to the HSS. In operation 2, the HSS requests the authentication information from EPC-UDR using the Ud interface and the proprietary data model. In operation 3, the EPC-UDR response indicates that the authentication information for that subscriber has been migrated to 5G network credentials, or alternatively that the authentication information for that subscriber is empty (e.g., it was previously deleted in accordance with the above description). In operation 4, the HSS checks this received indication. If the 4G encrypted credentials are not available (either deleted or marked as “migrated”), then in operation 5, the HSS executes a service to get vendor proprietary authentication data from UDM. In the figure, this service is shown as part of the existing Nudm UEAuthenticate service of the 5GC SBA, but in other embodiments it can be a new service. The request for vendor proprietary authentication data can include a vendor identification.

In operations 6-8, the UDM queries UDR for the requested vendor-proprietary information, the UDR returns the information, and the UDM sends the received information to the requesting HSS. In operation 9, the HSS (e.g., the AuC functionality) uses the received proprietary information to decrypt the 4G credentials and generate Authentication Vectors. In operation 10, the HSS sends a request for the UDM to store the updated authentication information including, e.g., the SQN. In operations 11-12, the UDM successfully stores this updated information in the UDR. In this manner, even though two credential sets are stored and encrypted using different algorithms, the sequence and/or timing is maintained in a single repository, i.e., the 5G UDR. In operation 13, the UDM sends a response to the HSS indicating successful update of the user's authentication information. In operation 14, the HSS provides the requested user authentication information to the MME.

Note that in the arrangement shown inFIG.7, the signalling flows between UDM and UDR in operations 6-7 and 11-12 are optional. For example, the UDR may be unnecessary if the UDM can store data locally. In other embodiments, the HSS can interact directly with the UDR, simplifying the signalling flow shown inFIG.7.FIG.8shows an exemplary signalling diagram among various core network entities, according to these exemplary embodiments. In such case, the HSS can communicate directly with the UDR without the UDM as an intermediary.

The embodiments described above relate to an existing 4G user that is 5G-enabled. In other embodiments, a new user could be directly provisioned as a 5G user but also include a 4G profile. As such, this user could access 4G networks, but the user's 4G profile does not exist in the EPC-UDR. Even so, the signalling flow shown inFIG.8can be used for this scenario, provided that both 4G and 5G encrypted credentials are provisioned in UDR.

The embodiments described above relate to scenarios in which there are two independent repositories: EPC-UDR and UDR. This will normally be the case when a Vendor1 HSS/EPC-UDR is deployed but the operator selects a Vendor2 UDR for deployment.
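Before turning to the single-vendor alternative, theFIG.7fallback above can be summarized in code. This is a hedged sketch only: epc_udr, udm, and auc are placeholder objects standing in for the Ud interface, the Nudm service, and the AuC functionality, and none of the method names come from an actual API.

```python
# Hypothetical HSS fallback corresponding to FIG. 7; all interfaces are stubs.

def authenticate_4g_user(imsi, epc_udr, udm, auc):
    record = epc_udr.read_auth_data(imsi)          # op 2: Ud query
    if record.migrated or record.empty:            # ops 3-4: check indication
        # ops 5-8: fetch vendor-proprietary auth data relayed via UDM/UDR.
        proprietary = udm.get_vendor_auth_data(imsi, vendor_id="vendor1")
        credentials = auc.decrypt(proprietary)     # op 9: proprietary decrypt
    else:
        credentials = auc.decrypt(record.data)

    vectors = auc.generate_authentication_vectors(credentials)

    # ops 10-13: write the refreshed SQN back through the UDM so 4G and 5G
    # accesses share a single sequence-number state in the 5G UDR.
    udm.store_auth_info(imsi, sqn=credentials.sqn + 1)
    return vectors                                 # op 14: returned to the MME
```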
However, in contrast to the two-repository scenario just described, it can be desirable to deploy a Vendor1 UDR together with a deployed Vendor1 HSS/EPC-UDR. In this single-vendor case, it is possible to provide a Common Repository for consolidated 4G/5G subscriber information that is accessible by both Ud and Nudr.FIG.9shows an exemplary signalling diagram among various core network entities in relation to subsequent 4G network access for a 5G-enabled user, according to exemplary embodiments of the present disclosure that utilize a Common Repository for consolidated 4G/5G subscriber information. Although the various operations inFIG.9are labelled with numbers, these numbers are merely for facilitating explanation and neither require nor suggest the operations to be performed in a particular order, unless specifically stated in the following explanation. Furthermore, the operations shown can be combined and/or divided into operations having different functionality than shown inFIG.9. In addition, the exemplary method and/or procedure shown inFIG.9can be complementary to other exemplary methods and/or procedures disclosed herein, such that they are capable of being used cooperatively to provide the benefits, advantages, and/or solutions to problems described hereinabove.

Operations 1-6 show an exemplary authentication flow for a user accessing a 4G network. Operations 7-13 show an exemplary authentication flow for the user accessing a 5G network. In this scenario, the Common Repository includes both EPC-UDR and (5GC) UDR, with both 4G and 5G encrypted credentials being stored in the Common Repository. As discussed above, the SQN of the encryption parameters must be unique in order to avoid synchronization problems between 4G and 5G accesses.

As briefly mentioned, embodiments described above can be based on existing, updated, and/or newly-created services associated with the 3GPP service-based architecture (SBA). As such, the various embodiments can be implemented efficiently in a cloud-based network architecture.

FIG.10illustrates an exemplary method and/or procedure for managing user authentication credentials in relation to different types of core networks (CNs), according to various exemplary embodiments of the present disclosure. The exemplary method and/or procedure shown inFIG.10can be performed by a data management node (e.g., UDM) in a first CN (e.g., 5GC), such as shown in and/or described in relation to other figures herein. Although the exemplary method and/or procedure is illustrated inFIG.10by blocks in a particular order, this order is exemplary and the operations corresponding to the blocks can be performed in different orders, and can be combined and/or divided into blocks and/or operations having different functionality than shown inFIG.10. Furthermore, the exemplary method and/or procedure shown inFIG.10can be complementary to other exemplary methods and/or procedures disclosed herein, such that they are capable of being used cooperatively to provide the benefits, advantages, and/or solutions to problems described hereinabove. For example, the exemplary method and/or procedure shown inFIG.10can be used with one or more of the exemplary methods and/or procedures shown inFIG.11. Optional blocks and/or operations are indicated by dashed lines.

The exemplary method and/or procedure can include the operations of block1010, where the node can receive a request to authenticate a user for access via a first CN. The request can be received, for example, from an Authentication Server Function (AUSF) in a 5GC network.
The exemplary method and/or procedure can also include the operations of block1020, where the node can determine that user authentication credentials are unavailable in relation to the first CN. In some embodiments, the determining operations in block1020can include the operations of sub-block1022, where the node can send, to a first data repository associated with the first CN, a request for the user authentication credentials. For example, the first repository can be a unified data repository (UDR) in a 5GC network. In such embodiments, the determining operations in block1020can include the operations of sub-block1024, where the node can receive, from the first data repository, a response indicating at least one of the following: an error; and an indication that user authentication credentials are available in relation to the second CN.

The exemplary method and/or procedure can also include the operations of block1030, where the node can send, to a translator function associated with a second CN that is different than the first CN, a request to provide user authentication credentials associated with the first CN. The exemplary method and/or procedure can also include the operations of block1040, where the node can receive user authentication credentials associated with the first CN, e.g., from the translator function. In some embodiments, the received user authentication credentials are encrypted using an encryption mechanism associated with the first CN.

The exemplary method and/or procedure can also include the operations of block1050, where the node can, based on the received user authentication credentials, authenticate the user for access via the first CN. In embodiments where the received user authentication credentials are encrypted, the authentication operations of block1050can include the operations of sub-blocks1052-1054, where the node can decrypt the received user authentication credentials and generate authentication vectors. In such embodiments, the node can then authenticate the user based on the generated authentication vectors.

In some embodiments, the exemplary method and/or procedure can also include the operations of block1060, where the node can re-encrypt the decrypted user authentication credentials using the encryption mechanism associated with the first CN. In such embodiments, the exemplary method and/or procedure can also include the operations of block1070, where the node can store the re-encrypted user authentication credentials in a first data repository associated with the first CN. Furthermore, in some embodiments, the exemplary method and/or procedure can also include the operations of block1080, where the node can, in response to receiving a further request to authenticate the user for access via the first CN, determine that the user authentication credentials are available in relation to the first CN. This determination can be based, for example, on the user credentials stored in the first data repository in an earlier operation (e.g., operations of block1070).

FIG.11illustrates an exemplary method and/or procedure for providing user authentication credentials for a second core network, CN, that is different from a first CN, that is, for managing user authentication credentials in relation to different types of core networks, CNs, according to various exemplary embodiments of the present disclosure.
The exemplary method and/or procedure shown inFIG.11can be performed by an encrypted credentials translator function, node, and/or service (e.g., a 4G-5G encrypted credentials translator) associated with the second CN (e.g., 4G EPC), such as shown in and/or described in relation to other figures herein. Although the exemplary method and/or procedure is illustrated inFIG.11by blocks in a particular order, this order is exemplary and the operations corresponding to the blocks can be performed in different orders, and can be combined and/or divided into blocks having different functionality than shown inFIG.11. Furthermore, the exemplary method and/or procedure shown inFIG.11can be complementary to other exemplary methods and/or procedures disclosed herein, such that they are capable of being used cooperatively to provide the benefits, advantages, and/or solutions to problems described hereinabove. For example, the exemplary method and/or procedure shown inFIG.11can be used with one or more of the exemplary methods and/or procedures shown inFIG.10. Optional blocks and/or operations are indicated by dashed lines.

The exemplary method and/or procedure can include the operations of block1110, where the translator node can receive, from a data management node associated with the first CN, a request to provide user authentication credentials associated with the first CN. For example, the data management node can be a user data management (UDM) function and/or node in a 5GC network.

The exemplary method and/or procedure can also include the operations of block1120, where the translator node can retrieve user authentication credentials associated with the second CN. In some embodiments, the retrieving operations of block1120can include the operations of sub-block1122, where the translator node can send, to a subscriber data repository associated with the second CN, a request for the user authentication credentials. In such embodiments, the retrieving operations of block1120can also include the operations of sub-block1124, where the translator node can receive, from the subscriber data repository, the user authentication credentials encrypted based on an encryption mechanism associated with the second CN.

The exemplary method and/or procedure can also include the operations of block1130, where the translator node can translate the retrieved user authentication credentials into user authentication credentials associated with the first CN. In some embodiments, the translating operations of block1130can include the operations of sub-block1132, where the translator node can decrypt the encrypted user authentication credentials, and sub-block1134, where the translator node can re-encrypt the decrypted user authentication credentials based on an encryption mechanism associated with the first CN.

The exemplary method and/or procedure can also include the operations of block1140, where the translator node can provide the translated user authentication credentials to the data management node. In some embodiments, the translated user authentication credentials provided to the data management node can include the re-encrypted user authentication credentials and at least one of the following: an indication of the particular encryption mechanism used for the re-encryption; and information needed to decrypt the re-encrypted user authentication credentials.
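A compact sketch of the translator-node method ofFIG.11(the counterpart to the data-management-node flow ofFIG.10) follows; epc_udr is a placeholder for the Ud-accessible repository, and translate_credentials is the kind of decrypt-and-re-encrypt helper sketched earlier. None of these names come from a defined interface.

```python
# Hypothetical translator-node handler for the FIG. 11 method.

def handle_translation_request(user_id, epc_udr, translate_credentials):
    # Blocks 1120-1124: retrieve the 4G-encrypted credentials over Ud.
    encrypted_4g = epc_udr.read_credentials(user_id)

    # Blocks 1130-1134: decrypt and re-encrypt for the first CN.
    encrypted_5g = translate_credentials(encrypted_4g)

    # Block 1140: return the translated credentials together with what the
    # data management node needs in order to decrypt them.
    return {
        "credentials": encrypted_5g,
        "mechanism": "5g-standard",
        "decryption_info": {},
    }
```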
FIG.12illustrates an exemplary method and/or procedure for managing user authentication credentials in relation to different types of core networks (CNs), according to various exemplary embodiments of the present disclosure. The exemplary method and/or procedure shown inFIG.12can be performed by a data management node or subscriber server (e.g., a Home Subscriber Server, HSS) associated with the second CN (e.g., 4G EPC), such as shown in and/or described in relation to other figures herein. Although the exemplary method and/or procedure is illustrated inFIG.12by blocks in a particular order, this order is exemplary and the operations corresponding to the blocks can be performed in different orders, and can be combined and/or divided into blocks having different functionality than shown inFIG.12. Furthermore, the exemplary method and/or procedure shown inFIG.12can be complementary to other exemplary methods and/or procedures disclosed herein, such that they are capable of being used cooperatively to provide the benefits, advantages, and/or solutions to problems described hereinabove. For example, the exemplary method and/or procedure shown inFIG.12can be used with one or more of the exemplary methods and/or procedures shown inFIGS.10-11. Optional blocks and/or operations are indicated by dashed lines.

The exemplary method and/or procedure can include the operations of block1210, where the node can receive a request to authenticate a user for access via a second CN. For example, the node can receive the request from a mobility management entity (MME) in a 4G EPC. The exemplary method and/or procedure can also include the operations of block1220, where the node can determine that user authentication credentials are unavailable in relation to the second CN. In some embodiments, this determining operation can include sending a request for the credentials to a data repository associated with the second CN (e.g., an EPC-UDR), and receiving an indication, in response, that the credentials have been deleted or converted to credentials in relation to a first CN (e.g., 5GC) that is different than the second CN.

The exemplary method and/or procedure can also include the operations of block1230, where the node can send, to a data management node associated with the first CN that is different than the second CN, a request to provide user authentication credentials associated with the second CN. The exemplary method and/or procedure can also include the operations of block1240, where the node can receive, from the data management node, user authentication credentials associated with the second CN. The exemplary method and/or procedure can also include the operations of block1250, where the node can, based on the received user authentication credentials, authenticate the user for access via the second CN. In some embodiments, the received (e.g., in block1240) user authentication credentials can be encrypted using an encryption mechanism associated with the second CN. In such embodiments, the authentication operations in block1250can also include the operations of sub-blocks1252and1254, where the node can decrypt the received user authentication credentials and generate authentication vectors. In such case, the node can authenticate the user based on the generated authentication vectors. In some embodiments, the decryption operation in sub-block1252can include incrementing an encryption sequence number (e.g., SQN); an illustrative sketch of this increment follows.
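The snippet below advances a four-octet SQN (matching the earlier description of SQN as a four-octet sequence number) and builds the kind of update request described next for block1260; the wraparound rule and the request shape are assumptions made for the sketch.

```python
# Illustrative four-octet SQN increment (sub-block 1252) and update
# request (block 1260); field names are assumptions.
SQN_MODULUS = 2 ** 32  # four octets

def increment_sqn(sqn: int) -> int:
    """Advance the sequence number, wrapping at the four-octet limit."""
    return (sqn + 1) % SQN_MODULUS

def build_update_request(user_id: str, sqn: int) -> dict:
    return {"user_id": user_id, "sqn": increment_sqn(sqn)}

request = build_update_request("imsi-001", 0xFFFFFFFF)
assert request["sqn"] == 0  # wraps at the four-octet boundary
```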
In such embodiments, the exemplary method and/or procedure can also include the operations of block1260, where the node can send, to the data management node, a request to update stored user authentication credentials with the incremented sequence number. In such embodiments, the exemplary method and/or procedure can also include the operations of block1270, where the node can receive, from the data management node, a response indicating a successful update of the stored user authentication credentials.

Although the subject matter described herein can be implemented in any appropriate type of system using any suitable components, the embodiments disclosed herein are described in relation to a wireless network, such as the example wireless network illustrated inFIG.13. For simplicity, the wireless network ofFIG.13only depicts network1306, network nodes1360and1360b, and WDs1310,1310b, and1310c. In practice, a wireless network can further include any additional elements suitable to support communication between wireless devices or between a wireless device and another communication device, such as a landline telephone, a service provider, or any other network node or end device. Of the illustrated components, network node1360and wireless device (WD)1310are depicted with additional detail. The wireless network can provide communication and other types of services to one or more wireless devices to facilitate the wireless devices' access to and/or use of the services provided by, or via, the wireless network.

The wireless network can comprise and/or interface with any type of communication, telecommunication, data, cellular, and/or radio network or other similar type of system. In some embodiments, the wireless network can be configured to operate according to specific standards or other types of predefined rules or procedures. Thus, particular embodiments of the wireless network can implement communication standards, such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, or 5G standards; wireless local area network (WLAN) standards, such as the IEEE 802.11 standards; and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave and/or ZigBee standards.

Network1306can comprise one or more backhaul networks, core networks, IP networks, public switched telephone networks (PSTNs), packet data networks, optical networks, wide-area networks (WANs), local area networks (LANs), wireless local area networks (WLANs), wired networks, wireless networks, metropolitan area networks, and other networks to enable communication between devices. Network node1360and WD1310comprise various components described in more detail below. These components work together in order to provide network node and/or wireless device functionality, such as providing wireless connections in a wireless network. In different embodiments, the wireless network can comprise any number of wired or wireless networks, network nodes, base stations, controllers, wireless devices, relay stations, and/or any other components or systems that can facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
As used herein, network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a wireless device and/or with other network nodes or equipment in the wireless network to enable and/or provide wireless access to the wireless device and/or to perform other functions (e.g., administration) in the wireless network. Examples of network nodes include, but are not limited to, access points (APs) (e.g., radio access points) and base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)). Base stations can be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and can then also be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station can be a relay node or a relay donor node controlling a relay.

A network node can also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station can also be referred to as nodes in a distributed antenna system (DAS).

Further examples of network nodes include multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), core network nodes (e.g., MSCs, MMEs), O&M nodes, OSS nodes, SON nodes, positioning nodes (e.g., E-SMLCs), and/or MDTs. As another example, a network node can be a virtual network node as described in more detail below. More generally, however, network nodes can represent any suitable device (or group of devices) capable, configured, arranged, and/or operable to enable and/or provide a wireless device with access to the wireless network or to provide some service to a wireless device that has accessed the wireless network.

InFIG.13, network node1360includes processing circuitry1370, device readable medium1380, interface1390, auxiliary equipment1384, power source1386, power circuitry1387, and antenna1362. Although network node1360illustrated in the example wireless network ofFIG.13can represent a device that includes the illustrated combination of hardware components, other embodiments can comprise network nodes with different combinations of components. It is to be understood that a network node comprises any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods and/or procedures disclosed herein. Moreover, while the components of network node1360are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, a network node can comprise multiple different physical components that make up a single illustrated component (e.g., device readable medium1380can comprise multiple separate hard drives as well as multiple RAM modules). Similarly, network node1360can be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which can each have their own respective components.
In certain scenarios in which network node1360comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components can be shared among several network nodes. For example, a single RNC can control multiple NodeBs. In such a scenario, each unique NodeB and RNC pair can, in some instances, be considered a single separate network node. In some embodiments, network node1360can be configured to support multiple radio access technologies (RATs). In such embodiments, some components can be duplicated (e.g., separate device readable medium1380for the different RATs) and some components can be reused (e.g., the same antenna1362can be shared by the RATs). Network node1360can also include multiple sets of the various illustrated components for different wireless technologies integrated into network node1360, such as, for example, GSM, WCDMA, LTE, NR, WiFi, or Bluetooth wireless technologies. These wireless technologies can be integrated into the same or different chip or set of chips and other components within network node1360.

Processing circuitry1370can be configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being provided by a network node. These operations performed by processing circuitry1370can include processing information obtained by processing circuitry1370by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Processing circuitry1370can comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node1360components, such as device readable medium1380, network node1360functionality. For example, processing circuitry1370can execute instructions stored in device readable medium1380or in memory within processing circuitry1370. Such functionality can include providing any of the various wireless features, functions, or benefits discussed herein. In some embodiments, processing circuitry1370can include a system on a chip (SOC).

In some embodiments, processing circuitry1370can include one or more of radio frequency (RF) transceiver circuitry1372and baseband processing circuitry1374. In some embodiments, RF transceiver circuitry1372and baseband processing circuitry1374can be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry1372and baseband processing circuitry1374can be on the same chip or set of chips, boards, or units.

In certain embodiments, some or all of the functionality described herein as being provided by a network node, base station, eNB or other such network device can be performed by processing circuitry1370executing instructions stored on device readable medium1380or memory within processing circuitry1370.
In alternative embodiments, some or all of the functionality can be provided by processing circuitry1370without executing instructions stored on a separate or discrete device readable medium, such as in a hard-wired manner. In any of those embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry1370can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry1370alone or to other components of network node1360, but are enjoyed by network node1360as a whole, and/or by end users and the wireless network generally. Device readable medium1380can comprise any form of volatile or non-volatile computer readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer-executable memory devices that store information, data, and/or instructions that can be used by processing circuitry1370. Device readable medium1380can store any suitable instructions, data or information, including a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry1370and utilized by network node1360. Device readable medium1380can be used to store any calculations made by processing circuitry1370and/or any data received via interface1390. In some embodiments, processing circuitry1370and device readable medium1380can be considered to be integrated. Interface1390is used in the wired or wireless communication of signalling and/or data between network node1360, network1306, and/or WDs1310. As illustrated, interface1390comprises port(s)/terminal(s)1394to send and receive data, for example to and from network1306over a wired connection. Interface1390also includes radio front end circuitry1392that can be coupled to, or in certain embodiments a part of, antenna1362. Radio front end circuitry1392comprises filters1398and amplifiers1396. Radio front end circuitry1392can be connected to antenna1362and processing circuitry1370, and can be configured to condition signals communicated between antenna1362and processing circuitry1370. Radio front end circuitry1392can receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry1392can convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters1398and/or amplifiers1396. The radio signal can then be transmitted via antenna1362. Similarly, when receiving data, antenna1362can collect radio signals which are then converted into digital data by radio front end circuitry1392. The digital data can be passed to processing circuitry1370. In other embodiments, the interface can comprise different components and/or different combinations of components. In certain alternative embodiments, network node1360may not include separate radio front end circuitry1392; instead, processing circuitry1370can comprise radio front end circuitry and can be connected to antenna1362without separate radio front end circuitry1392.
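To make the conditioning role of the front end more concrete, the following numerical sketch shows, in purely illustrative terms, how digital baseband samples could be band-limited and amplified before transmission. The sample rate, tone, filter taps, and gain are assumptions chosen for the example; they do not describe the actual filters1398or amplifiers1396.

    import numpy as np

    # Toy baseband signal: a 1 kHz tone sampled at 48 kHz (illustrative values).
    fs = 48_000
    t = np.arange(0, 0.01, 1 / fs)
    samples = np.sin(2 * np.pi * 1_000 * t)

    # "Filter": a simple moving-average low-pass FIR, a stand-in for filtering.
    taps = np.ones(8) / 8
    filtered = np.convolve(samples, taps, mode="same")

    # "Amplifier": a fixed linear gain, a stand-in for amplification.
    gain = 4.0
    conditioned = gain * filtered

    print(f"peak before: {samples.max():.2f}, after: {conditioned.max():.2f}")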
Similarly, in some embodiments, all or some of RF transceiver circuitry1372can be considered a part of interface1390. In still other embodiments, interface1390can include one or more ports or terminals1394, radio front end circuitry1392, and RF transceiver circuitry1372, as part of a radio unit (not shown), and interface1390can communicate with baseband processing circuitry1374, which is part of a digital unit (not shown). Antenna1362can include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. Antenna1362can be coupled to radio front end circuitry1392and can be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In some embodiments, antenna1362can comprise one or more omni-directional, sector or panel antennas operable to transmit/receive radio signals between, for example, 2 GHz and 66 GHz. An omni-directional antenna can be used to transmit/receive radio signals in any direction, a sector antenna can be used to transmit/receive radio signals from devices within a particular area, and a panel antenna can be a line of sight antenna used to transmit/receive radio signals in a relatively straight line. In some instances, the use of more than one antenna can be referred to as MIMO. In certain embodiments, antenna1362can be separate from network node1360and can be connectable to network node1360through an interface or port. Antenna1362, interface1390, and/or processing circuitry1370can be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by a network node. Any information, data and/or signals can be received from a wireless device, another network node and/or any other network equipment. Similarly, antenna1362, interface1390, and/or processing circuitry1370can be configured to perform any transmitting operations described herein as being performed by a network node. Any information, data and/or signals can be transmitted to a wireless device, another network node and/or any other network equipment. Power circuitry1387can comprise, or be coupled to, power management circuitry and can be configured to supply the components of network node1360with power for performing the functionality described herein. Power circuitry1387can receive power from power source1386. Power source1386and/or power circuitry1387can be configured to provide power to the various components of network node1360in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). Power source1386can either be included in, or external to, power circuitry1387and/or network node1360. For example, network node1360can be connectable to an external power source (e.g., an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry1387. As a further example, power source1386can comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry1387. The battery can provide backup power should the external power source fail. Other types of power sources, such as photovoltaic devices, can also be used.
Alternative embodiments of network node1360can include additional components beyond those shown inFIG.13that can be responsible for providing certain aspects of the network node's functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, network node1360can include user interface equipment to allow and/or facilitate input of information into network node1360and to allow and/or facilitate output of information from network node1360. This can allow and/or facilitate a user to perform diagnostic, maintenance, repair, and other administrative functions for network node1360. As used herein, wireless device (WD) refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other wireless devices. Unless otherwise noted, the term WD can be used interchangeably herein with user equipment (UE). Communicating wirelessly can involve transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information through air. In some embodiments, a WD can be configured to transmit and/or receive information without direct human interaction. For instance, a WD can be designed to transmit information to a network on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the network. Examples of a WD include, but are not limited to, a smart phone, a mobile phone, a cell phone, a voice over IP (VoIP) phone, a wireless local loop phone, a desktop computer, a personal digital assistant (PDA), a wireless camera, a gaming console or device, a music storage device, a playback appliance, a wearable terminal device, a wireless endpoint, a mobile station, a tablet, a laptop, a laptop-embedded equipment (LEE), a laptop-mounted equipment (LME), a smart device, a wireless customer-premise equipment (CPE), a vehicle-mounted wireless terminal device, etc. A WD can support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X), and can in this case be referred to as a D2D communication device. As yet another specific example, in an Internet of Things (IoT) scenario, a WD can represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another WD and/or a network node. The WD can in this case be a machine-to-machine (M2M) device, which can in a 3GPP context be referred to as an MTC device. As one particular example, the WD can be a UE implementing the 3GPP narrow band internet of things (NB-IoT) standard. Particular examples of such machines or devices are sensors, metering devices such as power meters, industrial machinery, or home or personal appliances (e.g., refrigerators, televisions, etc.), or personal wearables (e.g., watches, fitness trackers, etc.). In other scenarios, a WD can represent a vehicle or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation. A WD as described above can represent the endpoint of a wireless connection, in which case the device can be referred to as a wireless terminal.
Furthermore, a WD as described above can be mobile, in which case it can also be referred to as a mobile device or a mobile terminal. As illustrated, wireless device1310includes antenna1311, interface1314, processing circuitry1320, device readable medium1330, user interface equipment1332, auxiliary equipment1334, power source1336and power circuitry1337. WD1310can include multiple sets of one or more of the illustrated components for different wireless technologies supported by WD1310, such as, for example, GSM, WCDMA, LTE, NR, WiFi, WiMAX, or Bluetooth wireless technologies, just to mention a few. These wireless technologies can be integrated into the same or different chips or set of chips as other components within WD1310. Antenna1311can include one or more antennas or antenna arrays, configured to send and/or receive wireless signals, and is connected to interface1314. In certain alternative embodiments, antenna1311can be separate from WD1310and be connectable to WD1310through an interface or port. Antenna1311, interface1314, and/or processing circuitry1320can be configured to perform any receiving or transmitting operations described herein as being performed by a WD. Any information, data and/or signals can be received from a network node and/or another WD. In some embodiments, radio front end circuitry and/or antenna1311can be considered an interface. As illustrated, interface1314comprises radio front end circuitry1312and antenna1311. Radio front end circuitry1312comprises one or more filters1318and amplifiers1316. Radio front end circuitry1312is connected to antenna1311and processing circuitry1320, and can be configured to condition signals communicated between antenna1311and processing circuitry1320. Radio front end circuitry1312can be coupled to or a part of antenna1311. In some embodiments, WD1310may not include separate radio front end circuitry1312; rather, processing circuitry1320can comprise radio front end circuitry and can be connected to antenna1311. Similarly, in some embodiments, some or all of RF transceiver circuitry1322can be considered a part of interface1314. Radio front end circuitry1312can receive digital data that is to be sent out to other network nodes or WDs via a wireless connection. Radio front end circuitry1312can convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters1318and/or amplifiers1316. The radio signal can then be transmitted via antenna1311. Similarly, when receiving data, antenna1311can collect radio signals which are then converted into digital data by radio front end circuitry1312. The digital data can be passed to processing circuitry1320. In other embodiments, the interface can comprise different components and/or different combinations of components. Processing circuitry1320can comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic operable to provide, either alone or in conjunction with other WD1310components, such as device readable medium1330, WD1310functionality. Such functionality can include providing any of the various wireless features or benefits discussed herein.
For example, processing circuitry1320can execute instructions stored in device readable medium1330or in memory within processing circuitry1320to provide the functionality disclosed herein. As illustrated, processing circuitry1320includes one or more of RF transceiver circuitry1322, baseband processing circuitry1324, and application processing circuitry1326. In other embodiments, the processing circuitry can comprise different components and/or different combinations of components. In certain embodiments processing circuitry1320of WD1310can comprise a SOC. In some embodiments, RF transceiver circuitry1322, baseband processing circuitry1324, and application processing circuitry1326can be on separate chips or sets of chips. In alternative embodiments, part or all of baseband processing circuitry1324and application processing circuitry1326can be combined into one chip or set of chips, and RF transceiver circuitry1322can be on a separate chip or set of chips. In still alternative embodiments, part or all of RF transceiver circuitry1322and baseband processing circuitry1324can be on the same chip or set of chips, and application processing circuitry1326can be on a separate chip or set of chips. In yet other alternative embodiments, part or all of RF transceiver circuitry1322, baseband processing circuitry1324, and application processing circuitry1326can be combined in the same chip or set of chips. In some embodiments, RF transceiver circuitry1322can be a part of interface1314. RF transceiver circuitry1322can condition RF signals for processing circuitry1320. In certain embodiments, some or all of the functionality described herein as being performed by a WD can be provided by processing circuitry1320executing instructions stored on device readable medium1330, which in certain embodiments can be a computer-readable storage medium. In alternative embodiments, some or all of the functionality can be provided by processing circuitry1320without executing instructions stored on a separate or discrete device readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a device readable storage medium or not, processing circuitry1320can be configured to perform the described functionality. The benefits provided by such functionality are not limited to processing circuitry1320alone or to other components of WD1310, but are enjoyed by WD1310as a whole, and/or by end users and the wireless network generally. Processing circuitry1320can be configured to perform any determining, calculating, or similar operations (e.g., certain obtaining operations) described herein as being performed by a WD. These operations, as performed by processing circuitry1320, can include processing information obtained by processing circuitry1320by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored by WD1310, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Device readable medium1330can be operable to store a computer program, software, an application including one or more of logic, rules, code, tables, etc. and/or other instructions capable of being executed by processing circuitry1320. 
Device readable medium1330can include computer memory (e.g., Random Access Memory (RAM) or Read Only Memory (ROM)), mass storage media (e.g., a hard disk), removable storage media (e.g., a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device readable and/or computer executable memory devices that store information, data, and/or instructions that can be used by processing circuitry1320. In some embodiments, processing circuitry1320and device readable medium1330can be considered to be integrated. User interface equipment1332can include components that allow and/or facilitate a human user to interact with WD1310. Such interaction can be of many forms, such as visual, audial, tactile, etc. User interface equipment1332can be operable to produce output to the user and to allow and/or facilitate the user to provide input to WD1310. The type of interaction can vary depending on the type of user interface equipment1332installed in WD1310. For example, if WD1310is a smart phone, the interaction can be via a touch screen; if WD1310is a smart meter, the interaction can be through a screen that provides usage (e.g., the number of gallons used) or a speaker that provides an audible alert (e.g., if smoke is detected). User interface equipment1332can include input interfaces, devices and circuits, and output interfaces, devices and circuits. User interface equipment1332can be configured to allow and/or facilitate input of information into WD1310, and is connected to processing circuitry1320to allow and/or facilitate processing circuitry1320to process the input information. User interface equipment1332can include, for example, a microphone, a proximity or other sensor, keys/buttons, a touch display, one or more cameras, a USB port, or other input circuitry. User interface equipment1332is also configured to allow and/or facilitate output of information from WD1310, and to allow and/or facilitate processing circuitry1320to output information from WD1310. User interface equipment1332can include, for example, a speaker, a display, vibrating circuitry, a USB port, a headphone interface, or other output circuitry. Using one or more input and output interfaces, devices, and circuits, of user interface equipment1332, WD1310can communicate with end users and/or the wireless network, and allow and/or facilitate them to benefit from the functionality described herein. Auxiliary equipment1334is operable to provide more specific functionality which may not be generally performed by WDs. This can comprise specialized sensors for doing measurements for various purposes, interfaces for additional types of communication such as wired communications etc. The inclusion and type of components of auxiliary equipment1334can vary depending on the embodiment and/or scenario. Power source1336can, in some embodiments, be in the form of a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic devices or power cells, can also be used. WD1310can further comprise power circuitry1337for delivering power from power source1336to the various parts of WD1310which need power from power source1336to carry out any functionality described or indicated herein. Power circuitry1337can in certain embodiments comprise power management circuitry. 
Power circuitry1337can additionally or alternatively be operable to receive power from an external power source, in which case WD1310can be connectable to the external power source (such as an electricity outlet) via input circuitry or an interface such as an electrical power cable. Power circuitry1337can also in certain embodiments be operable to deliver power from an external power source to power source1336. This can be, for example, for the charging of power source1336. Power circuitry1337can perform any converting or other modification to the power from power source1336to make it suitable for supply to the respective components of WD1310. FIG.14illustrates one embodiment of a UE in accordance with various aspects described herein. As used herein, a user equipment or UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Instead, a UE can represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller). Alternatively, a UE can represent a device that is not intended for sale to, or operation by, an end user but which can be associated with or operated for the benefit of a user (e.g., a smart power meter). UE1400can be any UE identified by the 3rdGeneration Partnership Project (3GPP), including an NB-IoT UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE. UE1400, as illustrated inFIG.14, is one example of a WD configured for communication in accordance with one or more communication standards promulgated by the 3rdGeneration Partnership Project (3GPP), such as 3GPP's GSM, UMTS, LTE, and/or 5G standards. As mentioned previously, the terms WD and UE can be used interchangeably. Accordingly, althoughFIG.14illustrates a UE, the components discussed herein are equally applicable to a WD, and vice-versa. InFIG.14, UE1400includes processing circuitry1401that is operatively coupled to input/output interface1405, radio frequency (RF) interface1409, network connection interface1411, memory1415including random access memory (RAM)1417, read-only memory (ROM)1419, and storage medium1421or the like, communication subsystem1431, power source1413, and/or any other component, or any combination thereof. Storage medium1421includes operating system1423, application program1425, and data1427. In other embodiments, storage medium1421can include other similar types of information. Certain UEs can utilize all of the components shown inFIG.14, or only a subset of the components. The level of integration between the components can vary from one UE to another UE. Further, certain UEs can contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc. InFIG.14, processing circuitry1401can be configured to process computer instructions and data. Processing circuitry1401can be configured to implement any sequential state machine operative to execute machine instructions stored as machine-readable computer programs in the memory, such as one or more hardware-implemented state machines (e.g., in discrete logic, FPGA, ASIC, etc.); programmable logic together with appropriate firmware; one or more stored program, general-purpose processors, such as a microprocessor or Digital Signal Processor (DSP), together with appropriate software; or any combination of the above.
For example, the processing circuitry1401can include two central processing units (CPUs). Data can be information in a form suitable for use by a computer. In the depicted embodiment, input/output interface1405can be configured to provide a communication interface to an input device, output device, or input and output device. UE1400can be configured to use an output device via input/output interface1405. An output device can use the same type of interface port as an input device. For example, a USB port can be used to provide input to and output from UE1400. The output device can be a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof. UE1400can be configured to use an input device via input/output interface1405to allow and/or facilitate a user to capture information into UE1400. The input device can include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like. The presence-sensitive display can include a capacitive or resistive touch sensor to sense input from a user. A sensor can be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, another like sensor, or any combination thereof. For example, the input device can be an accelerometer, a magnetometer, a digital camera, a microphone, and an optical sensor. InFIG.14, RF interface1409can be configured to provide a communication interface to RF components such as a transmitter, a receiver, and an antenna. Network connection interface1411can be configured to provide a communication interface to network1443a. Network1443acan encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof. For example, network1443acan comprise a Wi-Fi network. Network connection interface1411can be configured to include a receiver and a transmitter interface used to communicate with one or more other devices over a communication network according to one or more communication protocols, such as Ethernet, TCP/IP, SONET, ATM, or the like. Network connection interface1411can implement receiver and transmitter functionality appropriate to the communication network links (e.g., optical, electrical, and the like). The transmitter and receiver functions can share circuit components, software or firmware, or alternatively can be implemented separately. RAM1417can be configured to interface via bus1402to processing circuitry1401to provide storage or caching of data or computer instructions during the execution of software programs such as the operating system, application programs, and device drivers. ROM1419can be configured to provide computer instructions or data to processing circuitry1401. For example, ROM1419can be configured to store invariant low-level system code or data for basic system functions such as basic input and output (I/O), startup, or reception of keystrokes from a keyboard that are stored in a non-volatile memory. 
Storage medium1421can be configured to include memory such as RAM, ROM, programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, floppy disks, hard disks, removable cartridges, or flash drives. In one example, storage medium1421can be configured to include operating system1423, application program1425such as a web browser application, a widget or gadget engine or another application, and data file1427. Storage medium1421can store, for use by UE1400, any of a variety of operating systems or combinations of operating systems. Storage medium1421can be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), floppy disk drive, flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as a subscriber identity module or a removable user identity (SIM/RUIM) module, other memory, or any combination thereof. Storage medium1421can allow and/or facilitate UE1400to access computer-executable instructions, application programs or the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data. An article of manufacture, such as one utilizing a communication system, can be tangibly embodied in storage medium1421, which can comprise a device readable medium. InFIG.14, processing circuitry1401can be configured to communicate with network1443busing communication subsystem1431. Network1443aand network1443bcan be the same network or different networks. Communication subsystem1431can be configured to include one or more transceivers used to communicate with network1443b. For example, communication subsystem1431can be configured to include one or more transceivers used to communicate with one or more remote transceivers of another device capable of wireless communication such as another WD, UE, or base station of a radio access network (RAN) according to one or more communication protocols, such as IEEE 802.11, CDMA, WCDMA, GSM, LTE, UTRAN, WiMax, or the like. Each transceiver can include transmitter1433and/or receiver1435to implement transmitter or receiver functionality, respectively, appropriate to the RAN links (e.g., frequency allocations and the like). Further, transmitter1433and receiver1435of each transceiver can share circuit components, software or firmware, or alternatively can be implemented separately. In the illustrated embodiment, the communication functions of communication subsystem1431can include data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof. For example, communication subsystem1431can include cellular communication, Wi-Fi communication, Bluetooth communication, and GPS communication.
Network1443bcan encompass wired and/or wireless networks such as a local-area network (LAN), a wide-area network (WAN), a computer network, a wireless network, a telecommunications network, another like network or any combination thereof. For example, network1443bcan be a cellular network, a Wi-Fi network, and/or a near-field network. Power source1413can be configured to provide alternating current (AC) or direct current (DC) power to components of UE1400. The features, benefits and/or functions described herein can be implemented in one of the components of UE1400or partitioned across multiple components of UE1400. Further, the features, benefits, and/or functions described herein can be implemented in any combination of hardware, software or firmware. In one example, communication subsystem1431can be configured to include any of the components described herein. Further, processing circuitry1401can be configured to communicate with any of such components over bus1402. In another example, any of such components can be represented by program instructions stored in memory that when executed by processing circuitry1401perform the corresponding functions described herein. In another example, the functionality of any of such components can be partitioned between processing circuitry1401and communication subsystem1431. In another example, the non-computationally intensive functions of any of such components can be implemented in software or firmware and the computationally intensive functions can be implemented in hardware. FIG.15is a schematic block diagram illustrating a virtualization environment1500in which functions implemented by some embodiments can be virtualized. In the present context, virtualizing means creating virtual versions of apparatuses or devices which can include virtualizing hardware platforms, storage devices and networking resources. As used herein, virtualization can be applied to a node (e.g., a virtualized base station or a virtualized radio access node) or to a device (e.g., a UE, a wireless device or any other type of communication device) or components thereof and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components (e.g., via one or more applications, components, functions, virtual machines or containers executing on one or more physical processing nodes in one or more networks). In some embodiments, some or all of the functions described herein can be implemented as virtual components executed by one or more virtual machines implemented in one or more virtual environments1500hosted by one or more of hardware nodes1530. Further, in embodiments in which the virtual node is not a radio access node or does not require radio connectivity (e.g., a core network node), then the network node can be entirely virtualized. The functions can be implemented by one or more applications1520(which can alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) operative to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein. Applications1520are run in virtualization environment1500which provides hardware1530comprising processing circuitry1560and memory1590. Memory1590contains instructions1595executable by processing circuitry1560whereby application1520is operative to provide one or more of the features, benefits, and/or functions disclosed herein. 
Virtualization environment1500comprises general-purpose or special-purpose network hardware devices1530comprising a set of one or more processors or processing circuitry1560, which can be commercial off-the-shelf (COTS) processors, dedicated Application Specific Integrated Circuits (ASICs), or any other type of processing circuitry including digital or analog hardware components or special purpose processors. Each hardware device can comprise memory1590-1which can be non-persistent memory for temporarily storing instructions1595or software executed by processing circuitry1560. Each hardware device can comprise one or more network interface controllers (NICs)1570, also known as network interface cards, which include physical network interface1580. Each hardware device can also include non-transitory, persistent, machine-readable storage media1590-2having stored therein software1595and/or instructions executable by processing circuitry1560. Software1595can include any type of software, including software for instantiating one or more virtualization layers1550(also referred to as hypervisors), software to execute virtual machines1540, as well as software allowing it to execute functions, features and/or benefits described in relation to some embodiments described herein. Virtual machines1540comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and can be run by a corresponding virtualization layer1550or hypervisor. Different embodiments of the instance of virtual appliance1520can be implemented on one or more of virtual machines1540, and the implementations can be made in different ways. During operation, processing circuitry1560executes software1595to instantiate the hypervisor or virtualization layer1550, which can sometimes be referred to as a virtual machine monitor (VMM). Virtualization layer1550can present a virtual operating platform that appears like networking hardware to virtual machine1540. As shown inFIG.15, hardware1530can be a standalone network node with generic or specific components. Hardware1530can comprise antenna15225and can implement some functions via virtualization. Alternatively, hardware1530can be part of a larger cluster of hardware (e.g., such as in a data center or customer premise equipment (CPE)) where many hardware nodes work together and are managed via management and orchestration (MANO)15100, which, among other things, oversees lifecycle management of applications1520. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV can be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers and customer premise equipment. In the context of NFV, virtual machine1540can be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of virtual machines1540, and that part of hardware1530that executes that virtual machine, be it hardware dedicated to that virtual machine and/or hardware shared by that virtual machine with others of the virtual machines1540, forms a separate virtual network element (VNE). Still in the context of NFV, a Virtual Network Function (VNF) is responsible for handling specific network functions that run in one or more virtual machines1540on top of hardware networking infrastructure1530and corresponds to application1520inFIG.15.
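As a loose illustration of the NFV mapping just described (application to virtual machine to hardware node), the following toy model shows one way the relationships could be expressed. The class names, attributes, and VNF labels are hypothetical and purely illustrative; this is a sketch, not the disclosed virtualization environment.

    from dataclasses import dataclass, field

    @dataclass
    class VirtualMachine:           # loose stand-in for a virtual machine
        name: str
        vnf: str                    # the virtual network function it hosts

    @dataclass
    class HardwareNode:             # loose stand-in for a hardware node
        name: str
        vms: list = field(default_factory=list)

        def instantiate(self, vm: VirtualMachine) -> None:
            # The virtualization layer (hypervisor) would place the VM here.
            self.vms.append(vm)

    node = HardwareNode("cots-server-1")
    node.instantiate(VirtualMachine("vm-0", vnf="virtual-base-station"))
    node.instantiate(VirtualMachine("vm-1", vnf="virtual-core-node"))
    for vm in node.vms:
        print(vm.name, "->", vm.vnf)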
In some embodiments, one or more radio units15200that each include one or more transmitters15220and one or more receivers15210can be coupled to one or more antennas15225. Radio units15200can communicate directly with hardware nodes1530via one or more appropriate network interfaces and can be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signalling can be effected with the use of control system15230which can alternatively be used for communication between the hardware nodes1530and radio units15200. With reference toFIG.16, in accordance with an embodiment, a communication system includes telecommunication network1610, such as a 3GPP-type cellular network, which comprises access network1611, such as a radio access network, and core network1614. Access network1611comprises a plurality of base stations1612a,1612b,1612c, such as NBs, eNBs, gNBs or other types of wireless access points, each defining a corresponding coverage area1613a,1613b,1613c. Each base station1612a,1612b,1612cis connectable to core network1614over a wired or wireless connection1615. A first UE1691located in coverage area1613ccan be configured to wirelessly connect to, or be paged by, the corresponding base station1612c. A second UE1692in coverage area1613ais wirelessly connectable to the corresponding base station1612a. While a plurality of UEs1691,1692are illustrated in this example, the disclosed embodiments are equally applicable to a situation where a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station. Telecommunication network1610is itself connected to host computer1630, which can be embodied in the hardware and/or software of a standalone server, a cloud-implemented server, a distributed server or as processing resources in a server farm. Host computer1630can be under the ownership or control of a service provider, or can be operated by the service provider or on behalf of the service provider. Connections1621and1622between telecommunication network1610and host computer1630can extend directly from core network1614to host computer1630or can go via an optional intermediate network1620. Intermediate network1620can be one of, or a combination of more than one of, a public, private or hosted network; intermediate network1620, if any, can be a backbone network or the Internet; in particular, intermediate network1620can comprise two or more sub-networks (not shown). The communication system ofFIG.16as a whole enables connectivity between the connected UEs1691,1692and host computer1630. The connectivity can be described as an over-the-top (OTT) connection1650. Host computer1630and the connected UEs1691,1692are configured to communicate data and/or signaling via OTT connection1650, using access network1611, core network1614, any intermediate network1620and possible further infrastructure (not shown) as intermediaries. OTT connection1650can be transparent in the sense that the participating communication devices through which OTT connection1650passes are unaware of routing of uplink and downlink communications. For example, base station1612may not or need not be informed about the past routing of an incoming downlink communication with data originating from host computer1630to be forwarded (e.g., handed over) to a connected UE1691.
Similarly, base station1612need not be aware of the future routing of an outgoing uplink communication originating from the UE1691towards the host computer1630. Example implementations, in accordance with an embodiment, of the UE, base station and host computer discussed in the preceding paragraphs will now be described with reference toFIG.17. In communication system1700, host computer1710comprises hardware1715including communication interface1716configured to set up and maintain a wired or wireless connection with an interface of a different communication device of communication system1700. Host computer1710further comprises processing circuitry1718, which can have storage and/or processing capabilities. In particular, processing circuitry1718can comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. Host computer1710further comprises software1711, which is stored in or accessible by host computer1710and executable by processing circuitry1718. Software1711includes host application1712. Host application1712can be operable to provide a service to a remote user, such as UE1730connecting via OTT connection1750terminating at UE1730and host computer1710. In providing the service to the remote user, host application1712can provide user data which is transmitted using OTT connection1750. Communication system1700can also include base station1720provided in a telecommunication system and comprising hardware1725enabling it to communicate with host computer1710and with UE1730. Hardware1725can include communication interface1726for setting up and maintaining a wired or wireless connection with an interface of a different communication device of communication system1700, as well as radio interface1727for setting up and maintaining at least wireless connection1770with UE1730located in a coverage area (not shown inFIG.17) served by base station1720. Communication interface1726can be configured to facilitate connection1760to host computer1710. Connection1760can be direct or it can pass through a core network (not shown inFIG.17) of the telecommunication system and/or through one or more intermediate networks outside the telecommunication system. In the embodiment shown, hardware1725of base station1720can also include processing circuitry1728, which can comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. Base station1720further has software1721stored internally or accessible via an external connection. Communication system1700can also include UE1730already referred to. Its hardware1735can include radio interface1737configured to set up and maintain wireless connection1770with a base station serving a coverage area in which UE1730is currently located. Hardware1735of UE1730can also include processing circuitry1738, which can comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. UE1730further comprises software1731, which is stored in or accessible by UE1730and executable by processing circuitry1738. Software1731includes client application1732. Client application1732can be operable to provide a service to a human or non-human user via UE1730, with the support of host computer1710. 
In host computer1710, an executing host application1712can communicate with the executing client application1732via OTT connection1750terminating at UE1730and host computer1710. In providing the service to the user, client application1732can receive request data from host application1712and provide user data in response to the request data. OTT connection1750can transfer both the request data and the user data. Client application1732can interact with the user to generate the user data that it provides. It is noted that host computer1710, base station1720and UE1730illustrated inFIG.17can be similar or identical to host computer1630, one of base stations1612a,1612b,1612cand one of UEs1691,1692ofFIG.16, respectively. That is to say, the inner workings of these entities can be as shown inFIG.17and, independently, the surrounding network topology can be that ofFIG.16. InFIG.17, OTT connection1750has been drawn abstractly to illustrate the communication between host computer1710and UE1730via base station1720, without explicit reference to any intermediary devices and the precise routing of messages via these devices. Network infrastructure can determine the routing, which it can be configured to hide from UE1730or from the service provider operating host computer1710, or both. While OTT connection1750is active, the network infrastructure can further take decisions by which it dynamically changes the routing (e.g., on the basis of load balancing considerations or reconfiguration of the network). Wireless connection1770between UE1730and base station1720is in accordance with the teachings of the embodiments described throughout this disclosure. One or more of the various embodiments improve the performance of OTT services provided to UE1730using OTT connection1750, in which wireless connection1770forms the last segment. More precisely, the exemplary embodiments disclosed herein can improve flexibility for the network to monitor end-to-end quality-of-service (QoS) of data flows, including their corresponding radio bearers, associated with data sessions between a user equipment (UE) and another entity, such as an OTT data application or service external to the 5G network. These and other advantages can facilitate more timely design, implementation, and deployment of 5G/NR solutions. Furthermore, such embodiments can facilitate flexible and timely control of data session QoS, which can lead to improvements in capacity, throughput, latency, etc. that are envisioned by 5G/NR and important for the growth of OTT services. A measurement procedure can be provided for the purpose of monitoring data rate, latency and other network operational aspects on which the one or more embodiments improve. There can further be an optional network functionality for reconfiguring OTT connection1750between host computer1710and UE1730, in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring OTT connection1750can be implemented in software1711and hardware1715of host computer1710or in software1731and hardware1735of UE1730, or both. In embodiments, sensors (not shown) can be deployed in or in association with communication devices through which OTT connection1750passes; the sensors can participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software1711,1731can compute or estimate the monitored quantities.
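One way to picture such a measurement procedure is a simple round-trip probe: software timestamps a dummy message when it is sent over the OTT connection and again when the echo returns. The sketch below is illustrative only and assumes a hypothetical send_and_wait_for_echo() transport helper; it does not describe the actual measurement functionality of software1711or1731.

    import time

    def send_and_wait_for_echo(payload: bytes) -> None:
        """Hypothetical stand-in for sending a dummy message over the OTT
        connection and blocking until the peer echoes it back."""
        time.sleep(0.02)  # simulated network round trip

    def measure_rtt(samples: int = 5) -> float:
        """Estimate mean round-trip time by timing dummy messages."""
        rtts = []
        for _ in range(samples):
            start = time.monotonic()
            send_and_wait_for_echo(b"")   # empty 'dummy' message
            rtts.append(time.monotonic() - start)
        return sum(rtts) / len(rtts)

    print(f"mean RTT: {measure_rtt() * 1000:.1f} ms")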
The reconfiguring of OTT connection1750can include changes to message format, retransmission settings, preferred routing, etc.; the reconfiguring need not affect base station1720, and it can be unknown or imperceptible to base station1720. Such procedures and functionalities can be known and practiced in the art. In certain embodiments, measurements can involve proprietary UE signaling facilitating host computer1710's measurements of throughput, propagation times, latency and the like. The measurements can be implemented by software1711and1731causing messages, in particular empty or 'dummy' messages, to be transmitted using OTT connection1750while monitoring propagation times, errors, etc. FIG.18is a flowchart illustrating an exemplary method and/or procedure implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which, in some exemplary embodiments, can be those described with reference toFIGS.16and17. For simplicity of the present disclosure, only drawing references toFIG.18will be included in this section. In step1810, the host computer provides user data. In substep1811(which can be optional) of step1810, the host computer provides the user data by executing a host application. In step1820, the host computer initiates a transmission carrying the user data to the UE. In step1830(which can be optional), the base station transmits to the UE the user data which was carried in the transmission that the host computer initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step1840(which can also be optional), the UE executes a client application associated with the host application executed by the host computer. FIG.19is a flowchart illustrating an exemplary method and/or procedure implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which can be those described with reference toFIGS.16and17. For simplicity of the present disclosure, only drawing references toFIG.19will be included in this section. In step1910of the method, the host computer provides user data. In an optional substep (not shown) the host computer provides the user data by executing a host application. In step1920, the host computer initiates a transmission carrying the user data to the UE. The transmission can pass via the base station, in accordance with the teachings of the embodiments described throughout this disclosure. In step1930(which can be optional), the UE receives the user data carried in the transmission. FIG.20is a flowchart illustrating an exemplary method and/or procedure implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which can be those described with reference toFIGS.16and17. For simplicity of the present disclosure, only drawing references toFIG.20will be included in this section. In step2010(which can be optional), the UE receives input data provided by the host computer. Additionally or alternatively, in step2020, the UE provides user data. In substep2021(which can be optional) of step2020, the UE provides the user data by executing a client application. In substep2011(which can be optional) of step2010, the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer.
In providing the user data, the executed client application can further consider user input received from the user. Regardless of the specific manner in which the user data was provided, the UE initiates, in substep2030(which can be optional), transmission of the user data to the host computer. In step2040of the method, the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure. FIG.21is a flowchart illustrating an exemplary method and/or procedure implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station and a UE which can be those described with reference toFIGS.16and17. For simplicity of the present disclosure, only drawing references toFIG.21will be included in this section. In step2110(which can be optional), in accordance with the teachings of the embodiments described throughout this disclosure, the base station receives user data from the UE. In step2120(which can be optional), the base station initiates transmission of the received user data to the host computer. In step2130(which can be optional), the host computer receives the user data carried in the transmission initiated by the base station. The term unit, as used herein, can have conventional meaning in the field of electronics, electrical devices and/or electronic devices and can include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those that are described herein. Exemplary embodiments include the following numbered embodiments.
1. A method for managing user authentication credentials in relation to different types of core networks, CNs, the method comprising:
    receiving (1010) a request to authenticate a user for access via a first CN;
    determining (1020) that user authentication credentials are unavailable in relation to the first CN;
    sending (1030), to a translator function associated with a second CN that is different than the first CN, a request to provide user authentication credentials associated with the first CN;
    receiving (1040) user authentication credentials associated with the first CN; and
    based on the received user authentication credentials, authenticating (1050) the user for access via the first CN.
2. The method of embodiment 1, wherein determining (1020) that user authentication credentials are unavailable in relation to the first CN comprises:
    sending (1022), to a first data repository associated with the first CN, a request for the user authentication credentials; and
    receiving (1024), from the first data repository, a response indicating at least one of the following: an error, and an indication that user authentication credentials are available in relation to the second CN.
3. The method of any of embodiments 1-2, wherein:
    the received user authentication credentials are encrypted using an encryption mechanism associated with the first CN; and
    authenticating (1050) the user for access via the first CN comprises:
        decrypting (1052) the received user authentication credentials;
        generating (1054) authentication vectors; and
        authenticating the user based on the generated authentication vectors.
4.
The method of embodiment 3, further comprising:
    re-encrypting (1060) the decrypted user authentication credentials using the encryption mechanism associated with the first CN; and
    storing (1070) the re-encrypted user authentication credentials in a first data repository associated with the first CN.
5. The method of any of embodiments 1-4, further comprising, in response to a further request to authenticate the user for access via the first CN, determining (1080) that the user authentication credentials are available in relation to the first CN.
6. The method of any of embodiments 1-5, wherein:
    the first CN is a 5G CN and the second CN is a 4G CN;
    the method is performed by a user data management, UDM, node of the 5G CN; and
    the translator function is a 4G-5G encrypted credentials translator.
7. A method for providing user authentication credentials for a second core network, CN, that is different from a first CN, the method comprising:
    receiving (1110), from a data management node associated with the first CN, a request to provide user authentication credentials associated with the first CN;
    retrieving (1120) user authentication credentials associated with the second CN;
    translating (1130) the retrieved user authentication credentials into user authentication credentials associated with the first CN; and
    providing (1140) the translated user authentication credentials to the data management node.
8. The method of embodiment 7, wherein retrieving (1120) user authentication credentials associated with the second CN comprises:
    sending (1122), to a subscriber data repository associated with the second CN, a request for the user authentication credentials; and
    receiving (1124), from the subscriber data repository, the user authentication credentials encrypted based on an encryption mechanism associated with the second CN.
9. The method of embodiment 8, wherein translating (1130) the retrieved user authentication credentials comprises:
    decrypting (1132) the encrypted user authentication credentials; and
    re-encrypting (1134) the decrypted user authentication credentials based on an encryption mechanism associated with the first CN.
10. The method of embodiment 9, wherein the translated user authentication credentials provided to the data management node include the re-encrypted user authentication credentials and at least one of the following:
    an indication of the particular encryption mechanism used for the re-encryption; and
    information needed to decrypt the re-encrypted user authentication credentials.
11. The method of any of embodiments 7-10, wherein:
    the first CN is a 5G CN and the second CN is a 4G CN; and
    the method is performed by a 4G-5G encrypted credentials translator.
12. A method for managing user authentication credentials in relation to different types of core networks, CNs, the method comprising:
    receiving (1210) a request to authenticate a user for access via a second CN;
    determining (1220) that user authentication credentials are unavailable in relation to the second CN;
    sending (1230), to a data management node associated with a first CN that is different than the second CN, a request to provide user authentication credentials associated with the second CN;
    receiving (1240), from the data management node, user authentication credentials associated with the second CN; and
    based on the received user authentication credentials, authenticating (1250) the user for access via the second CN.
13.
13. The method of embodiment 12, wherein: the received user authentication credentials are encrypted using an encryption mechanism associated with the second CN; and authenticating (1250) the user for access via the second CN comprises: decrypting (1252) the received user authentication credentials; generating (1254) authentication vectors; and authenticating the user based on the generated authentication vectors.
14. The method of embodiment 13, wherein: decrypting (1252) the received user authentication credentials comprises incrementing an encryption sequence number; and the method further comprises: sending (1260), to the data management node, a request to update stored user authentication credentials with the incremented sequence number; and receiving (1270), from the data management node, a response indicating a successful update of the stored user authentication credentials.
15. The method of any of embodiments 12-14, wherein: the first CN is a 5G CN and the second CN is a 4G CN; and the method is performed by a home subscriber server, HSS, of the 4G CN.
16. A first core network, CN, comprising: a first access management node (610); a first data management node (630,740,940) coupled to a first subscriber data repository (640,750,850,950); and an encrypted credentials translator function (620) configured to communicate with the first access management node and a second subscriber data repository (650,730,830,950) associated with a second CN, wherein the first data management node is configured to perform operations corresponding to any of the methods of embodiments 1-6.
17. The first core network of embodiment 16, wherein the encrypted credentials translator function (620) is configured to perform operations corresponding to any of the methods of embodiments 7-11.
18. The first core network of any of embodiments 16-17, wherein: the first CN is a 5G CN and the second CN is a 4G CN; the first access management node (610) comprises at least one of an Authentication Server Function (AUSF) and an Access and Mobility Management Function (AMF); the first data management node (630,740,940) comprises a user data management (UDM) function; and the first subscriber data repository comprises a Unified Data Repository (UDR).
19. The first core network of any of embodiments 16-18, wherein the first and second subscriber data repositories are part of a unified data repository (950).
20. A data management node (630,740,940) in a first core network, CN, the data management node comprising: a network interface configured to communicate with a first subscriber data repository (640,750,850,950) and an encrypted credentials translator function (620); processing circuitry operably coupled to the network interface and configured to perform operations corresponding to any of the methods of embodiments 1-6; and power supply circuitry configured to supply power to the data management node.
21. A data management node (630,740,940) in a first core network, CN, the data management node being arranged to perform operations corresponding to any of the methods of embodiments 1-6.
22. A non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry of a data management node in a core network, configure the node to perform operations corresponding to any of the methods of embodiments 1-6.
23. A computer program product comprising computer-executable instructions that, when executed by processing circuitry of a data management node in a core network, configure the node to perform operations corresponding to any of the methods of embodiments 1-6.
24. An encrypted credentials translator node (620) associated with a second core network, CN, the encrypted credentials translator node comprising: a network interface configured to communicate with: a second subscriber data repository (650,730,830,950) associated with the second CN; and either a first data management node (630,740,940) or a first subscriber data repository (650,730,830,950) associated with a first CN that is different than the second CN; processing circuitry operably coupled to the network interface and configured to perform operations corresponding to any of the methods of embodiments 7-10; and power supply circuitry configured to supply power to the encrypted credentials translator node.
25. An encrypted credentials translator node (620) associated with a second core network, the encrypted credentials translator node being arranged to perform operations corresponding to any of the methods of embodiments 7-10.
26. A non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry of an encrypted credentials translator node in a core network, configure the node to perform operations corresponding to any of the methods of embodiments 7-10.
27. A computer program product comprising computer-executable instructions that, when executed by processing circuitry of an encrypted credentials translator node in a core network, configure the node to perform operations corresponding to any of the methods of embodiments 7-10.
28. A data management node (720,820,920) in a second core network, CN, the data management node comprising: a network interface configured to communicate with: a second subscriber data repository (650,730,830,950) associated with the second CN; and either a first data management node (630,740,940) or a first subscriber data repository (650,730,830,950) associated with a first CN that is different than the second CN; processing circuitry operably coupled to the network interface and configured to perform operations corresponding to any of the methods of embodiments 11-15; and power supply circuitry configured to supply power to the data management node.
29. A data management node (720,820,920) in a second core network, CN, the data management node being arranged to perform operations corresponding to any of the methods of embodiments 11-15.
30. A non-transitory, computer-readable medium storing computer-executable instructions that, when executed by processing circuitry of a data management node in a core network, configure the node to perform operations corresponding to any of the methods of embodiments 11-15.
31. A computer program product comprising computer-executable instructions that, when executed by processing circuitry of a data management node in a core network, configure the node to perform operations corresponding to any of the methods of embodiments 11-15.
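Embodiments 7-11 describe the translator's retrieve, decrypt, and re-encrypt flow in prose. The following Python sketch illustrates that flow under stated assumptions: Fernet (from the third-party cryptography package) merely stands in for the CN-specific encryption mechanisms of embodiments 8-9, and the repository contents, identifiers, and function names are hypothetical illustrations, not 3GPP-defined APIs.

    # Illustrative sketch only: models the translator flow of embodiments 7-11
    # (retrieve 4G-encrypted credentials, decrypt, re-encrypt for 5G, return
    # them with an indication of the mechanism used, per embodiment 10).
    from cryptography.fernet import Fernet

    # Hypothetical per-CN encryption mechanisms (embodiment 9).
    key_4g = Fernet.generate_key()   # mechanism associated with the 4G CN
    key_5g = Fernet.generate_key()   # mechanism associated with the 5G CN

    # Stand-in for the 4G subscriber data repository (embodiment 8):
    # credentials are stored encrypted under the 4G mechanism.
    repository_4g = {
        "imsi-001010123456789": Fernet(key_4g).encrypt(b"K=000102;OPc=0f1e2d"),
    }

    def translate_credentials(user_id: str) -> dict:
        """4G-to-5G credentials translation (sketch of steps 1110-1140)."""
        # Step 1120: retrieve credentials encrypted under the 4G mechanism.
        encrypted_4g = repository_4g[user_id]
        # Step 1132: decrypt using the 4G mechanism.
        plaintext = Fernet(key_4g).decrypt(encrypted_4g)
        # Step 1134: re-encrypt using the 5G mechanism.
        encrypted_5g = Fernet(key_5g).encrypt(plaintext)
        # Step 1140: return with a mechanism indication (embodiment 10).
        return {"credentials": encrypted_5g, "mechanism": "fernet-5g-demo"}

    if __name__ == "__main__":
        result = translate_credentials("imsi-001010123456789")
        # The receiving UDM (embodiment 3) decrypts with the 5G mechanism.
        print(Fernet(key_5g).decrypt(result["credentials"]))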
109,879
11943615
DETAILED DESCRIPTION Hereinafter, embodiments of the disclosure will be described in detail with reference to the accompanying drawings. In describing the exemplary embodiments of the disclosure, descriptions related to technical contents which are well-known in the art to which the disclosure pertains, and are not directly associated with the disclosure, will be omitted. Such an omission of unnecessary descriptions is intended to avoid obscuring the main idea of the disclosure and to convey the main idea more clearly. For the same reason, in the accompanying drawings, some elements may be exaggerated, omitted, or schematically illustrated. Further, the size of each element does not entirely reflect the actual size. In the drawings, identical or corresponding elements are provided with identical reference numerals. The specific terms used herein are provided for ease of understanding the disclosure, and such specific terms may be changed into other forms without departing from the spirit and scope of the disclosure. The advantages and features of the disclosure and ways to achieve them will be apparent by making reference to embodiments as described below in detail in conjunction with the accompanying drawings. However, the disclosure is not limited to the embodiments set forth below, but may be implemented in various different forms. The following embodiments are provided only to completely disclose the disclosure and inform those skilled in the art of the scope of the disclosure, and the disclosure is defined only by the scope of the appended claims. Here, it will be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer usable or computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer usable or computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks. Each block of the flowchart illustrations may also represent a module, segment, or portion of code, which includes one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted.
For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. As used herein, the “unit” refers to a software element or a hardware element, such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), which performs a predetermined function. However, the “unit” does not always have a meaning limited to software or hardware. The “unit” may be constructed either to be stored in an addressable storage medium or to execute one or more processors. Therefore, the “unit” includes, for example, software elements, object-oriented software elements, class elements or task elements, processes, functions, properties, procedures, sub-routines, segments of program code, drivers, firmware, micro-codes, circuits, data, databases, data structures, tables, arrays, and parameters. The elements and functions provided by the “unit” may be either combined into a smaller number of elements and “units” or divided into a larger number of elements and “units”. Moreover, the elements and “units” may be implemented to reproduce one or more CPUs within a device or a security multimedia card. Also, in an embodiment, the “unit” may include one or more processors. First, terms used in this specification will be defined. In this specification, a UICC is a smart card inserted and used in a mobile terminal and denotes a chip that stores personal information, such as network access authentication information of a mobile communication subscriber, phone books, and SMS, and performs subscriber authentication and traffic security key generation when accessing a mobile communication network, such as GSM, WCDMA, LTE, or the like, thereby enabling secure mobile communication usage. The UICC may be equipped with communication applications, such as a subscriber identification module (SIM), a universal SIM (USIM), an IP multimedia SIM (ISIM), or the like, according to the type of mobile communication network accessed by the subscriber, and may provide a higher-level security function for employing various applications such as an electronic wallet, ticketing, an electronic passport, and the like. In this specification, an embedded UICC (eUICC) is a security module in the form of a chip embedded in a terminal, which cannot be inserted into and cannot be removed from the terminal. The eUICC may download and install a profile using over-the-air (OTA) technology. The eUICC may be referred to as a “UICC capable of downloading and installing a profile”. In this specification, a method of downloading and installing a profile in the eUICC using the OTA technology may also be applied to a detachable UICC that can be inserted into and removed from the terminal. That is, the embodiments of the disclosure may be applied to a UICC capable of downloading and installing a profile using the OTA technology. In this specification, the term “UICC” may be used interchangeably with “SIM”, and the term “eUICC” may be used interchangeably with “eSIM”. In this specification, a profile may denote a package of an application, a file system, an authentication key value, and the like, which are stored in the UICC, in a software form. In this specification, a USIM Profile may have the same meaning as the profile, or may denote a package of information included in the USIM application in the profile in a software form.
In this specification, a profile provision server may have a function of producing a profile, encrypting the produced profile, producing a remote profile management instruction, or encrypting the produced remote profile management instruction, and may be referred to as a “subscription manager data preparation (SM-DP)”, a “subscription manager data preparation plus (SM-DP+)”, an “off-card entity of profile domain”, a “profile encryption server”, a “profile producing server”, a “profile provisioner (PP)”, a “profile provider”, or a “profile provisioning credentials (PPC) holder”. In this specification, a profile management server may be referred to as a “subscription manager secure routing (SM-SR)”, a “subscription manager secure routing plus (SM-SR+)”, an “off-card entity of eUICC profile manager”, a “profile management credentials (PMC) holder”, or an “eUICC manager (EM)”. In this specification, the profile provision server may encompass the functions of the profile management server. Therefore, in various embodiments of the disclosure, that is, in the following description, the operation of the profile provision server may also be performed by the profile management server. Likewise, the operation of the profile management server or the SM-SR may also be performed by the profile provision server. The term “terminal” as used herein may be referred to as a “mobile station (MS)”, “user equipment (UE)”, a “user terminal (UT)”, a “wireless terminal”, an “access terminal (AT)”, a “terminal”, a “subscriber unit”, a “subscriber station (SS)”, a “wireless device”, a “wireless communication device”, a “wireless transmit/receive unit (WTRU)”, a “mobile node”, “mobile”, or other terms. Various embodiments of the terminal may include a cellular phone, a smartphone having a wireless communication function, a personal digital assistant (PDA) having a wireless communication function, a wireless modem, a portable computer having a wireless communication function, a photographing device, such as a digital camera, having a wireless communication function, a gaming device having a wireless communication function, home appliances for music storage and playback having a wireless communication function, Internet home appliances capable of wireless Internet access and browsing, or a portable unit or terminal employing combinations of the above functions. In addition, the terminal may include a machine-to-machine (M2M) terminal or a machine type communication (MTC) terminal/device, but is not limited thereto. In this specification, the terminal may be referred to as an “electronic device”. In this specification, the electronic device may include an embedded UICC capable of downloading and installing a profile. In the case where the UICC is not embedded in the electronic device, a UICC physically separated from the electronic device may be inserted into the electronic device, thereby connecting to the electronic device. For example, the UICC in the form of a card may be inserted into the electronic device. The electronic device may include the terminal, and in this case, the terminal may be a terminal including a UICC capable of downloading and installing a profile. The UICC may be embedded in the terminal, and in the case where the terminal and the UICC are separate from each other, the UICC may be inserted into the terminal to then be connected to the terminal. The UICC capable of downloading and installing a profile may be called, for example, an “eUICC”.
In this specification, the terminal or the electronic device may include software or an application installed in the terminal or the electronic device so as to control the UICC or the eUICC. The software or the application may, for example, be referred to as a “local profile assistant (LPA)”. In this specification, a profile identifier may be referred to as a “profile ID”, an “integrated circuit card ID (ICCID)”, a “matching ID”, an “event identifier (Event ID)”, an “activation code”, an “activation code token”, or a “factor for matching an ISD-P or a profile domain (PD)”. The profile ID may indicate a unique identifier of each profile. The profile identifier may include an address of a profile provision server (SM-DP+) capable of indexing profiles. In this specification, an eUICC identifier (eUICC ID) may be a unique identifier of the eUICC embedded in the terminal, and may be referred to as an “EID”. In addition, in the case where the eUICC is equipped with a provisioning profile, the eUICC identifier may be a profile ID of the corresponding provisioning profile. In addition, in the case where the terminal and the eUICC chip are not separate as in the embodiment of the disclosure, the eUICC identifier may be a terminal ID. In addition, the eUICC identifier may denote a specific secure domain of the eUICC chip. In this specification, a profile container may be referred to as a “profile domain”. The profile container may be a security domain. In this specification, an application protocol data unit (APDU) may be a message by which the terminal interworks with the eUICC. In addition, the APDU may be a message by which a PP or a PM interworks with the eUICC. In this specification, profile provisioning credentials (PPC) may be a means used in mutual authentication, profile encryption, and signature between the profile provision server and the eUICC. The PPC may include at least one of a symmetric key, a Rivest Shamir Adleman (RSA) certificate and a private key, an elliptic curve cryptography (ECC) certificate and a private key, a root certification authority (CA), and a certificate chain. In addition, if there is a plurality of profile provision servers, different PPCs may be stored in the eUICC or used for the respective profile provision servers. In this specification, profile management credentials (PMC) may be a means used in mutual authentication, transmission data encryption, and signature between the profile management server and the eUICC. The PMC may include at least one of a symmetric key, an RSA certificate and a private key, an ECC certificate and a private key, a root CA, and a certificate chain. In addition, if there is a plurality of profile management servers, different PMCs may be stored in the eUICC or used for the respective profile management servers. In this specification, an AID may be an application identifier. This value may be an identifier for distinguishing between different applications in the eUICC. In this specification, an event may denote profile download, remote profile management, or management/processing instructions of other profiles or the eUICC.
“Profile download” may be used interchangeably with “profile installation”. In addition, an event type may be used to indicate whether a specific event is profile download, remote profile management, or a management/processing command of other profiles or the eUICC, and may be referred to as an “operation type (or operationtype)”, an “operation class (or operationclass)”, an “event request type”, an “event class”, an “event request class”, or the like. In this specification, a “profile package” may be used interchangeably with a “profile”, or may be used as a term indicating a data object of a specific profile, and may be referred to as a “profile TLV” or a “profile package TLV”. In the case where a profile package is encrypted using encryption parameters, the profile package may be referred to as a “protected profile package (PPP)” or a “protected profile package TLV (PPP TLV)”. In the case where a profile package is encrypted using encryption parameters that can be decrypted only by a specific eUICC, the profile package may be referred to as a “bound profile package (BPP)” or a “bound profile package TLV (BPP TLV)”. The profile package TLV may be a data set representing information constituting the profile in a TLV (tag, length, and value) format. In this specification, remote profile management (RPM) may be referred to as “remote profile management”, “remote management”, a “remote management command”, a “remote command”, a “remote profile management (RPM) package”, a “remote profile management package”, a “remote management package”, a “remote management command package”, or a “remote command package”. The RPM may be used to change the status of a specific profile (enabled, disabled, or deleted) or update the content of a specific profile (e.g., a profile nickname, profile metadata, or the like). The RPM may include one or more remote management commands, and in this case, the profiles, which are targets of the respective remote management commands, may be the same or different depending on the remote management commands. In this specification, a certificate or a digital certificate may denote a digital certificate used for mutual authentication based on an asymmetric key including a pair of a public key (PK) and a secret key (SK). Each certificate may include one or more public keys (PKs), public key identifiers (PK IDs) corresponding to the respective public keys, an identifier of a certificate issuer (CI) who issued the certificate (a certificate issuer ID), and a digital signature thereof. In addition, the certificate issuer may be referred to as a “certification issuer”, a “certificate authority (CA)”, a “certification authority”, or the like. In this specification, the public key (PK) and the public key ID (PK ID) may denote a specific public key or a certificate containing the corresponding public key, a portion of a specific public key or a portion of a certificate containing the corresponding public key, an operation result value of a specific public key (e.g., a hash) or an operation result value of a certificate containing the corresponding public key (e.g., a hash), an operation result value of a portion of a specific public key (e.g., a hash) or an operation result value of a portion of a certificate containing the corresponding public key (e.g., a hash), or a storage space storing the above data, and may be used interchangeably therewith.
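As a concrete illustration of the tag-length-value encoding described above, the following Python sketch parses a flat TLV byte string. It is a deliberately simplified, hypothetical example: real profile package TLVs follow ASN.1/BER rules with multi-byte tags and lengths, which this toy parser does not handle.

    # Minimal sketch of a tag-length-value (TLV) format: assumes single-byte
    # tags and single-byte lengths only, unlike real profile package TLVs.
    def parse_tlv(data: bytes) -> list[tuple[int, bytes]]:
        """Parse a flat sequence of (tag, length, value) objects."""
        objects, i = [], 0
        while i < len(data):
            tag = data[i]                       # one-byte tag
            length = data[i + 1]                # one-byte length
            value = data[i + 2 : i + 2 + length]
            objects.append((tag, value))
            i += 2 + length                     # advance past this object
        return objects

    # Two illustrative TLV objects: tag 0x5A carrying an ICCID-like value,
    # and tag 0x90 carrying a profile nickname (both values hypothetical).
    encoded = bytes([0x5A, 0x04]) + b"8944" + bytes([0x90, 0x05]) + b"work5"
    for tag, value in parse_tlv(encoded):
        print(f"tag=0x{tag:02X} length={len(value)} value={value!r}")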
In this specification, if certificates issued by a single certificate issuer (primary certificates) are used to issue other certificates (secondary certificates), or if the secondary certificates are successively used to issue third or more certificates, the correlation of the certificates may be referred to as a “certificate chain” or a “certificate hierarchy”. In this case, the CI certificate used in initial certificate issuance may be referred to as a “root of certificate”, a “highest certificate”, a “root CI”, a “root CI certificate”, a “root CA”, a “root CA certificate”, or the like. In this specification, AKA may indicate authentication and key agreement, and may represent an authentication algorithm for accessing 3GPP and 3GPP2 networks. In this specification, “K” is an encryption key value stored in the eUICC, which is used in the AKA authentication algorithm. In this specification, “OPc” is a parameter value that may be stored in the eUICC, which is used in the AKA authentication algorithm. In this specification, an “NAA” is a network access application, and may be an application program, such as a USIM or an ISIM, stored in the UICC and accessing a network. The NAA may be a network access module. In describing the disclosure, a detailed description of related known functions or configurations, which may unnecessarily obscure the subject matter of the disclosure, will be omitted. FIG.1is a diagram100illustrating a method of connecting a terminal to a mobile communication network using a UICC equipped with a profile fixed to the terminal. Referring toFIG.1, a UICC120may be inserted into a terminal110. In this case, the UICC may be detachable, or may be embedded in the terminal. The fixed profile of the UICC equipped with the fixed profile denotes that “access information” for accessing a specific communication provider is fixed. The access information may be, for example, an IMSI, which is a subscriber identifier, and a value “K” or “Ki” necessary for authentication in access to the network together with the subscriber identifier. Then, the terminal may perform authentication with an authentication processing system (e.g., a home location register (HLR) or an AuC) of a mobile communication provider using the UICC. The authentication process may be an authentication and key agreement (AKA) process. If authentication is successful, the terminal may use mobile communication services, such as a phone call or usage of mobile data, using the mobile communication network130of the mobile communication system. FIG.2is a diagram200illustrating an example of a certificate hierarchy (or certificate chain) issued by a certificate issuer (CI) and an example of configuration of a public key and a digital signature of a certificate issuer (CI), which are included in each certificate. Referring toFIG.2, the certificate issuer (CI) may produce a public key and a secret key to be used by the certificate issuer, may produce a certificate issuer (CI) certificate211by including the public key213, among the above keys, in its own certificate, and may attach, to the certificate, a digital signature215produced using its own secret key with respect to the certificate. In addition, referring toFIG.2, the CI certificate211may be used to issue a certificate231of Object1(see291). Object1may be, for example, a profile management server (SM-DP+).
Object1may produce a public key and a secret key to be used by itself, may produce a certificate231of Object1by including the public key233, among the above keys, in its own certificate, and may make a request to the certificate issuer, thereby receiving a certificate issuer (CI) digital signature235produced using the certificate issuer (CI) secret key. In this case, the certificate231of Object1may include a certificate issuer (CI) public key identifier (ID) (CI PKID)237corresponding to the certificate issuer (CI) public key213, which is to be used when checking the certificate issuer signature235contained in the corresponding certificate. In addition, referring toFIG.2, the CI certificate211may be used to issue a certificate251of Object2(see293). Object2may be, for example, an eUICC manufacturer (EUM). Object2may produce a public key and a secret key to be used by itself, may produce a certificate251of Object2by including the public key253, among the above keys, in its own certificate, and may make a request to the certificate issuer, thereby receiving a certificate issuer (CI) digital signature255produced using the certificate issuer (CI) secret key. In this case, the certificate251of Object2may include a certificate issuer (CI) public key identifier (ID) (CI PKID)237corresponding to the certificate issuer (CI) public key213, which is to be used when checking the certificate issuer signature255included in the corresponding certificate. The certificate issuer signatures235and255contained in the certificate231of Object1and the certificate251of Object2may have different values from each other, but the certificate issuer public key identifiers (CI PKIDs)237have the same value. In addition, referring toFIG.2, the certificate251of Object2may be used to issue a certificate271of Object3(see295). Object3may be, for example, an eUICC manufactured by an eUICC manufacturer (EUM). Object3may produce a public key and a secret key to be used by itself, may produce a certificate271of Object3by including the public key273, among the above keys, in its own certificate, and may make a request to Object2, thereby receiving a digital signature275of Object2produced using the secret key of Object2. In this case, the certificate271of Object3may include a public key identifier (PK ID)277corresponding to the public key253of Object2, which is to be used when checking the signature275of Object2contained in the corresponding certificate. The certificate231of Object1, the certificate251of Object2, and the certificate271of Object3illustrated in the example ofFIG.2all have the same CI certificate211as the highest certificate or the root of certificate. Therefore, Object1, Object2, and Object3require the CI certificate211or the CI public key213contained therein in order to authenticate each other. More specifically, in the example ofFIG.2, in order for Object1and Object2to authenticate each other using digital certificates and signatures, Object1requires the signature of Object2, the certificate251of Object2, and the CI public key213, and Object2requires the signature of Object1, the certificate231of Object1, and the CI public key213. In addition, in the example ofFIG.2, in order for Object1and Object3to authenticate each other using digital certificates and signatures, Object1requires the signature of Object3, the certificate271of Object3, the certificate251of Object2, and the CI public key213, and Object3requires the signature of Object1, the certificate231of Object1, and the CI public key213.
In this case, the certificate251of Object2with respect to the certificate271of Object3may be referred to as a “sub-certificate issuer (sub CI) certificate” or a “sub-certificate authority (sub CA) certificate”. FIG.3is a diagram300illustrating a mutual authentication procedure between a server310and a terminal350. InFIG.3, the server310may be, for example, a profile management server (SM-DP+) or a service discovery server (SM-DS). In addition, inFIG.3, the terminal350may include software for controlling an eUICC (a local profile assistant (LPA))320and an eUICC330. In addition, inFIG.3, each of the server310, the LPA320, and the eUICC330may store one or more digital certificates. Referring toFIG.3, the LPA320may check a list of all CI public key identifiers (CI PKIDs) supported by the eUICC330in step3003. More specifically, in step3003, the LPA320and the eUICC330may identify eUICC information using an eUICC information request (Get eUICC info request) message and an eUICC information response (Get eUICC info response) message. The eUICC information response message may include eUICC information, which is referred to as “euiccInfo1”, “euiccInfo”, or the like. The eUICC information may include a list of all CI PKIDs supported by the eUICC330. In step3005, the LPA320and the server310may establish a TLS connection. The TLS connection in step3005may be performed using a server authentication method, among TLS connection methods, in which the LPA320verifies the identity of the server310. When the LPA320identifies the identity of the server310during the TLS connection in step3005, the server310may submit a TLS certificate to the LPA320. The LPA320or the terminal350may store one or more CI PKIDs for validating the TLS certificate. If one or more sub-CI certificates are required for validating the TLS certificate of the server310using the CI PKID, the server310may submit one or more sub-CI certificates to the LPA320together with the TLS certificate in step3005. After the TLS connection is established, all messages between the LPA320and the server310may be protected by the TLS security procedure. In operation3007, the LPA320may make a request to the server310for initiating mutual authentication. The initiation of mutual authentication may be performed using an initiate authentication request message. The initiate authentication request message in step3007may include all CI PKIDs supported by the eUICC330, based on the information (euiccInfo1) of the eUICC330identified by the LPA320in step3003. In operation3009, the server310may respond to the LPA320with initiation of the mutual authentication. The mutual authentication response may use an initiate authentication response message. The initiate authentication response message in step3009may include one CI PKID selected from the list of CI PKIDs included in the information (euiccInfo1) of the eUICC330received by the server310in step3007, a server certificate capable of verifying the validity using the corresponding CI PKID, and a digital signature of the server310capable of verifying the validity using the corresponding server certificate. In this case, the CI PKID selected by the server310may be referred to as an “eUICC CI PKID to be used by the eUICC”. In addition, if one or more sub-CI certificates are required to determine validity of the server310using the selected CI PKID, the initiate authentication response message in step3009may include one or more sub-CI certificates together with the server certificate.
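To make the chain-verification idea ofFIG.2and of the certificates delivered in step3009concrete, the following Python sketch (using the third-party cryptography package) builds a toy CI → EUM → eUICC chain and verifies it against a trusted root CI public key identified by a hash-based PK ID. The dict-based “certificates” and helper names are illustrative assumptions; real implementations use X.509 certificates and the GSMA-defined formats, not this structure.

    # Toy certificate chain walk: root CI signs the EUM (sub CA) certificate,
    # which signs the eUICC (leaf) certificate; the verifier trusts only the
    # root CI public key, identified by its PK ID (a hash, per the definition
    # of PK ID given above).
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def raw(pub):                       # raw public-key bytes
        return pub.public_bytes(Encoding.Raw, PublicFormat.Raw)

    def pkid(pub):                      # PK ID as a hash of the public key
        return hashlib.sha256(raw(pub)).digest()[:8]

    def make_cert(subject, subject_pub, issuer_priv, issuer_pub):
        """A toy 'certificate': subject name + key, signed by the issuer."""
        body = subject.encode() + raw(subject_pub)
        return {"subject": subject, "pub": subject_pub,
                "issuer_pkid": pkid(issuer_pub), "body": body,
                "sig": issuer_priv.sign(body)}

    ci_priv = Ed25519PrivateKey.generate(); ci_pub = ci_priv.public_key()
    eum_priv = Ed25519PrivateKey.generate()
    euicc_priv = Ed25519PrivateKey.generate()

    eum_cert = make_cert("EUM", eum_priv.public_key(), ci_priv, ci_pub)
    euicc_cert = make_cert("eUICC", euicc_priv.public_key(),
                           eum_priv, eum_priv.public_key())

    def verify_chain(leaf, sub_ca, trusted_ci_pub):
        # Anchor check: the sub CA must chain to the trusted CI PKID.
        assert sub_ca["issuer_pkid"] == pkid(trusted_ci_pub), "unknown root CI"
        trusted_ci_pub.verify(sub_ca["sig"], sub_ca["body"])   # raises if bad
        sub_ca["pub"].verify(leaf["sig"], leaf["body"])        # raises if bad

    verify_chain(euicc_cert, eum_cert, ci_pub)
    print("chain verifies up to the trusted CI")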
The certificate of the server310transmitted in step3009may be different from the TLS certificate of the server310transmitted in step3005. In addition, the CI that issues the certificate of the server310transmitted in step3009and the CI that issues the TLS certificate of the server310transmitted in step3005may be the same or different. In operation3011, the LPA320may make a request to the eUICC330for authentication of the server. The authentication request may be performed using an authenticate server request message. The authenticate server request message in step3011, like the message received by the LPA320in step3009, may include a CI PKID that the server selects and transmits, a server certificate capable of verifying the validity using the corresponding CI PKID, one or more sub-CI certificates necessary for the verification of the validity, and a digital signature of the server310capable of verifying the validity using the server certificate. In addition, the authenticate server request message in step3011may include information additionally produced by the LPA320, namely information about the operation type that the terminal intends to perform. In step3013, the eUICC330may transmit a server authentication result to the LPA320in reply. The authentication result may be transmitted using an authenticate server response message. The authenticate server response message in step3013may include a validity verification result with respect to the digital signature of the server310received by the eUICC330in step3011, a CI PKID that the server310selects and transmits, an eUICC certificate capable of verifying the validity using the corresponding CI PKID, one or more sub-CI certificates necessary for the verification of the validity, a digital signature of the eUICC330capable of verifying the validity using the eUICC certificate, and information about the operation type that the terminal intends to perform. In operation3015, the LPA320may make a request to the server310for authentication of the terminal. The authentication request of the terminal may be performed using an authenticate client request message. The authenticate client request message in step3015may include information received by the LPA320from the eUICC330in step3013. In operation3017, the server310may transmit an authentication result of the terminal in reply. The authentication result may be transmitted using an authenticate client response message. The authenticate client response message in step3017may include a validity verification result with respect to the digital signature of the eUICC330received by the server310in step3015and the information on an event or event summary corresponding to the operation type to be performed by the terminal350. In step3019, the terminal350may install a profile or perform remote management of a profile according to the content of the event received in step3017. According to the mutual authentication procedure between the server310and the terminal350shown inFIG.3, since the terminal350verifies the server certificate using all CI PKIDs pre-stored in the eUICC330, it is difficult to limit a connection to the server310having a server certificate belonging to a certificate hierarchy of a specific CI certificate. FIG.4is a diagram400illustrating a procedure of identifying a server310having a server certificate belonging to a certificate hierarchy of a specific CI certificate when performing mutual authentication between the server310and a terminal350according to an embodiment of the disclosure.
InFIG.4, a description of the server310, the LPA320, the eUICC330, and the terminal350refers to the description made with reference toFIG.3. Referring toFIG.4, as a method for restricting the certificate of the server310to a certificate belonging to a certificate hierarchy of a specific CI certificate when performing mutual authentication later, the LPA320may acquire public key identifier (CI PKID) information of the corresponding CI certificate in step4001. The LPA320may acquire the corresponding CI PKID information by the following methods, but the method is not necessarily limited thereto:
- Direct user input to the terminal
- Retrieval of some data of the eUICC storage space
- Retrieval of a profile installed in the eUICC
- Retrieval of an activation code used in installation of a profile
- Transfer by 3rd-party software to the LPA in command code form
- Transfer by a specific server relaying profile installation or remote management to the LPA
- Transfer by a server managing the terminal, the LPA, or the eUICC to the LPA
In step4003, the LPA320may check whether or not the eUICC330is able to support the corresponding CI PKID in relation to the CI PKID information acquired in step4001. The operation of the terminal350in step4003will be described in more detail with reference toFIG.5. In operation4005, the LPA320and the server310may perform a TLS connection. The TLS connection in step4005may be performed using a server authentication method in which the LPA320identifies the identity of the server310, among TLS connection methods. When the LPA320identifies the identity of the server310in the TLS connection process in step4005, the server310may submit a TLS certificate to the LPA320. The LPA320or the terminal350may store one or more CI PKIDs for verifying the validity of the TLS certificate. If one or more sub-CI certificates are required for verifying the validity of the TLS certificate of the server310using the CI PKID, the server310may submit one or more sub-CI certificates to the LPA320together with the TLS certificate in step4005. In step4005, compared with step3005described inFIG.3, the terminal350may further check whether or not it is possible to verify the validity of the TLS certificate and the sub-CI certificates, which are submitted by the server, using the CI PKID identified in step4001. After the TLS connection is established, all messages between the LPA320and the server310may be protected by the TLS security procedure. In operation4007, the LPA320may make a request to the server310for initiating mutual authentication. The initiation of mutual authentication may be performed using an initiate authentication request message. The initiate authentication request message in step4007, compared with step3007described inFIG.3, may include the CI PKID that the LPA320acquired in step4001. In addition, the initiate authentication request message in step4007, compared with step3007described inFIG.3, may include a CI PKID that is identified to be supported by the eUICC through the eUICC in step4003. In operation4009, the server310may respond to the LPA320with initiation of mutual authentication. The mutual authentication response may be performed using an initiate authentication response message. The initiate authentication response message in step4009may include the CI PKID received by the server310in step4007, a server certificate capable of verifying the validity using the corresponding CI PKID, and a digital signature of the server310capable of verifying the validity using the corresponding server certificate.
In this case, the transmitted CI PKID may be referred to as an “eUICC CI PKID to be used by the eUICC”. In addition, if one or more sub-CI certificates are required to determine the validity of the server310using the corresponding CI PKID, the initiate authentication response message in step4009may include one or more sub-CI certificates together with the server certificate. The certificate of the server310transmitted in step4009may be different from the TLS certificate of the server310transmitted in step4005. In addition, the CI that issues the certificate of the server310transmitted in step4009and the CI that issues the TLS certificate of the server310transmitted in step4005may be the same or different. In addition, the LPA320may compare the CI PKID transmitted by the server310in step4009with the CI PKID transmitted by the LPA320in step4007. If the CI PKID transmitted by the server310is different from the CI PKID transmitted by the LPA320in step4007, the LPA320may terminate the communication. In operation4011, the LPA320may make a request to the eUICC330for authentication of the server. The authentication request may be performed using an authenticate server request message. The authenticate server request message in step4011, like the message received by the LPA320in step4009, may include a CI PKID transmitted by the server310, a server certificate capable of verifying the validity using the corresponding CI PKID, one or more sub-CI certificates necessary for the verification of the validity, and a digital signature of the server310capable of verifying the validity using the server certificate. In addition, the authenticate server request message in step4011may include information additionally produced by the LPA320, namely information about the operation type that the terminal intends to perform. The eUICC330that has received the message in step4011may verify the validity of the certificates included in the message in step4011, and may verify the digital signature of the server310using the corresponding certificates. In this case, the eUICC330may further check whether or not the eUICC330is able to support the CI PKID included in the message in step4011and/or whether or not the CI PKID is available. If the CI PKID transmitted by the server310cannot be supported or if the CI PKID is not available, the eUICC330may terminate the communication. In step4013, the eUICC330may transmit a server authentication result to the LPA320in reply. The authentication result may be transmitted using an authenticate server response message. The authenticate server response message in step4013may include the validity verification result with respect to the digital signature of the server310received by the eUICC330in step4011, a CI PKID transmitted by the server310, an eUICC certificate capable of verifying the validity using the corresponding CI PKID, one or more sub-CI certificates necessary for the verification of the validity, a digital signature of the eUICC330capable of verifying the validity using the eUICC certificate, and information about the operation type that the terminal intends to perform. A description of the subsequent operations in steps4015to4019refers to the description of the operations in steps3015to3019inFIG.3.
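A minimal Python sketch of the CI PKID restriction logic ofFIG.4follows: the LPA sends the CI PKID acquired in step4001, compares it with the server's selection from step4009, and the eUICC support check of step4011is modeled as a simple set-membership test. The message shapes and function names are hypothetical stand-ins, not the actual ES9+/ES10b structures.

    # Sketch of the FIG.4 restriction flow: terminate unless the server
    # selects exactly the requested CI PKID and the eUICC supports it.
    EUICC_SUPPORTED_PKIDS = {"pkid-ci-A", "pkid-ci-B"}   # from euiccInfo1

    def server_initiate_authentication(requested_pkid, server_pkids):
        # Step 4009: the server answers with the CI PKID it will use.
        return requested_pkid if requested_pkid in server_pkids else None

    def lpa_authenticate(requested_pkid, server_pkids):
        selected = server_initiate_authentication(requested_pkid, server_pkids)
        if selected != requested_pkid:       # step 4009 comparison at the LPA
            raise ConnectionError("server did not select the requested CI PKID")
        if selected not in EUICC_SUPPORTED_PKIDS:   # step 4011 eUICC check
            raise ConnectionError("eUICC does not support the selected CI PKID")
        return f"proceed with mutual authentication under {selected}"

    print(lpa_authenticate("pkid-ci-A", server_pkids={"pkid-ci-A", "pkid-ci-C"}))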
According to the mutual authentication procedure between the server310and the terminal350shown inFIG.4, since the terminal350verifies the server certificate using a specific CI PKID input in step4001, it is possible to limit a connection to the server310having a server certificate belonging to a certificate hierarchy of a specific CI certificate. FIG.5is a diagram500illustrating the operations of the LPA320and the eUICC330in detail in relation to the operation in step4003described inFIG.4. InFIG.5, a description of the LPA320, the eUICC330, and the terminal350refers to the description made with reference toFIG.3. Referring toFIG.5, the LPA320may acquire CI PKID information as in step4001ofFIG.4. A detailed description of step4001refers to the description made with reference toFIG.4. Thereafter, the terminal350may check whether or not the eUICC330is able to support the corresponding CI PKID in the following manner. As an example5100, the LPA320may compare the CI PKID information acquired in step4001with eUICC information (euiccInfo1or euiccInfo) temporarily cached in the LPA320in step5003. More specifically, in the case where the LPA320caches the eUICC information (euiccInfo1or euiccInfo) identified by the method as described in step3003ofFIG.3in a temporary storage before step4001, the LPA320may compare a list of all CI PKIDs supported by the eUICC, which is included in the temporary storage, with the CI PKID information acquired in step4001. As a result of the comparison, if it is determined that the eUICC330does not support the CI PKID acquired in step4001, the LPA320may terminate communication. As another example5200, in steps5005to5009, the LPA320may transmit a message to the eUICC330to identify a list of CI PKIDs supported by the eUICC330, and may compare the same with the CI PKID acquired in step4001. More specifically, the LPA320may transmit an information request message to the eUICC330in step5005after step4001. The message in step5005may be referred to as a “Get eUICC info request message”, a “Get eUICC challenge request message”, an “authenticate server request message”, or the like. In addition, the message in step5005may include an identifier of the profile (a profile ID, an ICCID, or an AID) to be the target of the remote profile management in the future. In step5007, the eUICC330may retrieve CI PKID information from the profile corresponding to the profile identifier received in step5005, or may acquire all CI PKID information supported by the eUICC330, thereby transmitting, to the LPA320, the corresponding CI PKID information together with the eUICC information (euiccInfo1or euiccInfo). The message in step5007may be referred to as a “Get eUICC info response message”, a “Get eUICC challenge response message”, or an “authenticate server response message”. Thereafter, in step5009, the LPA320may compare the response of the eUICC330received in step5007with the CI PKID acquired in step4001. As a result of the comparison, if it is determined that the eUICC330does not support the CI PKID acquired in step4001, the LPA320may terminate communication. As another example5300, in steps5011to5015, the LPA320may transmit a message to the eUICC330, thereby making a request for checking whether or not the eUICC330supports the CI PKID acquired in step4001. More specifically, the LPA320may transmit a checking request message to the eUICC330in step5011after step4001.
The message in step5011may be referred to as a “Get eUICC info request message”, a “Get eUICC challenge request message”, or an “authenticate server request message”. In addition, the message in step5011may include the CI PKID information acquired in step4001, for which a check of eUICC support is requested, and may further include an identifier of the profile (a profile ID, an ICCID, or an AID) to be the target of the remote profile management in the future. In this regard, in step5013, the eUICC330may compare the CI PKID information of step4001, which is received in step5011, with a list of CI PKIDs supported by the eUICC330, or may compare CI PKID information retrieved from the profile corresponding to the profile identifier received in step5011with a list of CI PKIDs supported by the eUICC330, thereby identifying whether or not the eUICC330supports the corresponding CI PKID. If the eUICC does not support the corresponding CI PKID, the eUICC330may transmit a specific error code in reply, and may terminate communication. In step5015, the eUICC330may transmit, to the LPA320, a checking response message including at least one of information on whether or not the corresponding CI PKID is supported, the corresponding CI PKID information, eUICC information (euiccInfo1or euiccInfo), and an eUICC random challenge. The message in step5015may be referred to as a “Get eUICC info response message”, a “Get eUICC challenge response message”, or an “authenticate server response message”. If the eUICC330transmits an error code indicating that the CI PKID is not supported in reply, the LPA320may terminate the communication. FIG.6is a diagram illustrating the structure600of a terminal according to an embodiment of the disclosure. Referring toFIG.6, the terminal may include a transceiver610, a controller620, and a storage unit630. In the disclosure, the controller may be defined as a circuit, an application-specific integrated circuit, or at least one processor. In addition, although not shown explicitly inFIG.6, the terminal may further include an eUICC, and the eUICC may be inserted into or embedded in the terminal. The transceiver610may transmit and receive signals to and from other network entities. For example, the transceiver610may transmit, to the profile management server, digital certificate issuer information trusted by the terminal and a random character string (a nonce or random challenge) that the profile management server (SM-DP+) uses when producing a signature for self-authentication, or may receive, from the profile management server, a signature of the profile management server, one or more digital certificates to be used to verify the signature of the profile management server, and a random character string that the eUICC in the terminal uses when producing a signature for self-authentication. In addition, the transceiver610may transmit a signature of the eUICC and one or more digital certificates to be used to verify the signature of the eUICC. In addition, the transceiver610may further transmit information on the operation type that the terminal intends to perform to the profile management server, or may receive some or all of the information on an operation to be performed by the terminal from the profile management server. However, the transceiver610may selectively transmit information on the operation type that the terminal intends to perform. The controller620may control the overall operation of the terminal according to the embodiment proposed by the disclosure.
For example, the controller620may control signal flow between blocks so as to perform the operations according to the flowcharts described above. More specifically, the controller620may identify digital certificate issuer information to be trusted by the terminal with reference to the eUICC in the terminal, may verify the validity of a digital certificate and digital certificate issuer information transmitted by the profile management server, may identify a signature of the profile management server, and may produce a signature of the eUICC. In addition, the controller620may perform an operation of installing or managing a profile according to the information received from the profile management server. In addition, the controller620may control the operation of the transceiver or the storage unit. The storage unit630may store at least one piece of information transmitted and received through the transceiver610and information produced through the controller620. In addition, the terminal of the disclosure may further include an input unit for receiving digital certificate issuer information to be trusted by the terminal. However, in the case where the input unit is not provided, the terminal may receive the digital certificate issuer information from a server or a network, may refer to information pre-stored in the terminal, or may receive the digital certificate issuer information from third-party software in the terminal. In the case of receiving the digital certificate issuer information from the third-party software, the third-party software may pre-store digital certificate issuer information to be trusted by the terminal, or may receive the same from a server or a network. FIG.7is a diagram illustrating the structure700of a server according to an embodiment of the disclosure. Referring toFIG.7, the server may include a transceiver710, a controller720, and a storage unit730. In the disclosure, the controller may be defined as a circuit, an application-specific integrated circuit, or at least one processor. The transceiver710may transmit and receive signals to and from other network entities. For example, the transceiver710may receive, from a terminal, digital certificate issuer information trusted by the terminal and a random character string (a nonce or random challenge) that the profile management server uses when producing a signature for self-authentication. In addition, the transceiver710may transmit, to the terminal, a signature of the profile management server, one or more digital certificates to be used to verify the signature of the profile management server, and a random character string that the eUICC in the terminal uses when producing a signature for self-authentication, and may receive a signature of the eUICC and one or more digital certificates to be used to verify the signature of the eUICC. In addition, the transceiver710may further receive information on the operation type that the terminal intends to perform from the terminal. However, the information on the operation type that the terminal intends to perform may be selectively transmitted. In addition, the transceiver710may transmit, to the terminal, some or all of the information on the operation to be performed by the terminal. The controller720may control the overall operation of the server according to the embodiment proposed by the disclosure. For example, the controller720may control a signal flow between blocks so as to perform the operations according to the flowcharts described above.
More specifically, the controller720may verify whether or not digital certificate issuer information trusted by the terminal is valid, may determine whether or not the server is also able to trust the digital certificate issuer trusted by the terminal, may select a digital certificate corresponding to digital certificate issuer information trusted by the terminal, and may produce a signature of the profile management server. In addition, the controller720may verify the validity of a digital certificate transmitted by the terminal, may identify a signature of the eUICC, and may determine the operation type to be performed by the terminal. In addition, the controller720may control the operation of the transceiver or the storage unit. The storage unit730may store at least one piece of information transmitted and received through the transceiver710and information produced through the controller720. In the above-described detailed embodiments of the disclosure, a component included in the disclosure is expressed in the singular or the plural according to a presented detailed embodiment. However, the singular form or plural form is selected for convenience of description suitable for the presented situation, and various embodiments of the disclosure are not limited to a single element or multiple elements thereof. Further, multiple elements expressed in the description may be configured into a single element, or a single element in the description may be configured into multiple elements. Although the embodiments have been described in the detailed description of the disclosure, the disclosure may be modified in various forms without departing from the scope of the disclosure. Therefore, the scope of the disclosure should not be defined as being limited to the embodiments, but should be defined by the appended claims and equivalents thereof.
50,046
11943616
DETAILED DESCRIPTION Reference will now be made in detail to various embodiments of the subject matter described herein, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. FIG.1is a block diagram illustrating an example 5G system network architecture10, e.g., a home 5G core (5GC) network. The architecture10inFIG.1includes a network function repository function (NRF)100and SCP101, which may be located in the same home public land mobile network (PLMN). As described above, NRF100may maintain profiles of available producer network function (NF) service instances and their supported services and allow consumer NFs or SCPs to subscribe to and be notified of the registration of new/updated producer NF service instances. SCP101may also support service discovery and selection of producer NF instances. SCP101may perform load balancing of connections between consumer and producer NFs. In addition, using the methodologies described herein, SCP101may perform preferred NF location based selection and routing. NRF100is a repository for NF or service profiles of producer NF instances. In order to communicate with a producer NF instance, a consumer NF or an SCP must obtain the NF or service profile of the producer NF instance from NRF100. The NF or service profile is a JavaScript object notation (JSON) data structure defined in Third Generation Partnership Project (3GPP) Technical Specification (TS) 29.510. The NF or service profile definition includes at least one of a fully qualified domain name (FQDN), an Internet protocol (IP) version 4 (IPv4) address, or an IP version 6 (IPv6) address. InFIG.1, any of the nodes (other than NRF100) can be either consumer NFs or producer NFs, depending on whether they are requesting or providing services. In the illustrated example, the nodes include a policy control function (PCF)102that performs policy related operations in a network, a unified data management (UDM) function104that manages user data, and an application function (AF)106that provides application services. The nodes illustrated inFIG.1further include a session management function (SMF)108that manages sessions between access and mobility management function (AMF)110and PCF102. AMF110performs mobility management operations similar to those performed by a mobility management entity (MME) in 4G networks. An authentication server function (AUSF)112performs authentication services for user equipment (UEs), such as user equipment (UE)114, seeking access to the network. A network slice selection function (NSSF)116provides network slicing services for devices seeking to access specific network capabilities and characteristics associated with a network slice. A network exposure function (NEF)118provides application programming interfaces (APIs) for application functions seeking to obtain information about Internet of things (IoT) devices and other UEs attached to the network. NEF118performs similar functions to the service capability exposure function (SCEF) in 4G networks. A radio access network (RAN)120connects user equipment (UE)114to the network via a wireless link. Radio access network120may be accessed using a g-Node B (gNB) (not shown inFIG.1) or other wireless access point. A user plane function (UPF)122can support various proxy functionality for user plane services.
One example of such proxy functionality provided by UPF122is multipath transmission control protocol (MPTCP) proxy functionality. UPF122may also support performance measurement functionality, which may be used by UE114to obtain network performance measurements. Also illustrated inFIG.1is a data network (DN)124through which UEs access data network services, such as Internet services. Security edge protection proxy (SEPP)126filters incoming traffic from another PLMN and performs topology hiding for traffic exiting the home PLMN. SEPP126may communicate with an SEPP in a foreign PLMN which manages security for the foreign PLMN. Thus, traffic between NFs in different PLMNs may traverse two SEPP functions, one for the home PLMN and the other for the foreign PLMN. SEPP126may utilize an N32-c interface and an N32-f interface. An N32-c interface is a control plane interface between two SEPPs usable for performing an initial handshake (e.g., a TLS handshake) and negotiating various parameters for an N32-f interface connection and related message forwarding. An N32-f interface is a forwarding interface between two SEPPs usable for forwarding various communications (e.g., 5GC service requests and responses) between a consumer NF and a producer NF after applying application level security protection. One potential issue with the existing 5G architecture is that a consumer NF can trigger a signaling storm by sending a significant number of service request messages to a producer NF, SEPP, or SCP in a home PLMN. While the receiving producer NF, SEPP, or SCP in the home network can initiate a global message rate limiting process to reduce or mitigate consequences of the signaling storm from the culprit consumer NF, global message rate limiting can similarly discard messages, at an equal rate, from legitimate consumer NFs and SEPPs that are not responsible for or associated with the signaling storm. FIG.2is a diagram depicting the signaling connections existing between each of a plurality of service consumer network functions200-204and a service producer network function126. In some embodiments, service producer network function126requires some manner of ingress rate limiting in order to protect itself against excessive 5GC signaling from subscribed consumer network functions. For example, service producer network function126may be configured with a global rate limiting functionality that serves to throttle or limit the number of ingress messages received from the consumer network functions. As shown inFIG.2, producer network function126may be configured to receive signaling messages from multiple consumer network functions200-204. Out of the multiple consumer network functions that are sending messages, one or more consumer functions may be sending an excess number of signaling messages, which causes the producer network function126to initiate a message throttling mechanism that manages the ingress rate of messages received at the producer network function126. While global message rate limiting measures can mitigate the negative effects of a signaling storm from a particular consumer network function (e.g., consumer network function200), such rate limiting may also unfairly discard or throttle traffic associated with legitimate consumer network functions (e.g., consumer network functions202-204) that are not responsible for or associated with the signaling storm.
As shown inFIG.2, the global rate limiting mechanism executed by the producer network function will throttle all incoming messages and improperly throttle consumer network functions (e.g., consumer network functions202-204) that are sending messages in accordance with a permissible limit or threshold. As such, consumer network functions202-204are throttled to the same degree as the culprit consumer network function (e.g., consumer network function200). In some embodiments of the disclosed subject matter, a network node can be provisioned with a message rate limiting engine (as discussed in further detail below). Notably, a message rate limiting engine can be configured to monitor a current messaging rate originating from a particular consumer network function and determine if that rate exceeds a permissible threshold. To accomplish this, the message rate limiting engine at a network node can be configured to recognize an access token (and an included consumer network function instance identifier) that may be stored in an HTTP and/or JSON message header of a service access message sent by a consumer network function. In some embodiments, the access token may be an OAuth2 access token that is requested from an authorization server, such as an NRF. FIG.3is a message flow diagram illustrating an access token request procedure that is conducted by a consumer network function. Referring toFIG.3, a service consumer network function302may send an access token request message311to an authorization server304(e.g., an NRF). In particular, request message311comprises an Nnrf_AccessToken_Get Request message that specifies an expected NF service name and NF type, the service consumer network function type, a client identifier, and the like. Upon receiving request message311, authorization server304is configured to authorize the requesting client (i.e., service consumer network function302) and generate a unique encoded access token (e.g., OAuth2 access token) for that client. After generating the encoded access token, authorization server304generates and sends a response message313that is directed to service consumer network function302. In particular, response message313may include an Nnrf_AccessToken_Get Response message that includes the encoded access token generated by the authorization server and its corresponding expiration time. Once service consumer network function302obtains the necessary service access authorization by successfully fetching the access token, service consumer network function302can be configured to include the acquired access token in a network function service request message (e.g., an SBI service request message) to the service producer network function. Specifically, the service consumer network function can embed an encoded access token in the network function service request message that is sent to the service producer network function. In response to receiving the network function service request message, the service producer network function is configured to extract the encoded access token from the service request message. In particular, the service producer network function can be adapted to verify the integrity of, and the claims contained in, the access token. If the claims and integrity of the access token are successfully verified, the service producer network function is configured to permit the service consumer network function to access the requested service.
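A minimal sketch of the token fetch ofFIG.3, assuming an HTTP POST to the NRF and the third-party Python requests package; the URL is a placeholder, and the parameter names mirror the OAuth2-style access token request of 3GPP TS 29.510, though the exact encoding is simplified here:

import requests  # assumes the third-party 'requests' package is available

NRF_TOKEN_URL = "https://nrf.example.invalid/oauth2/token"  # hypothetical endpoint

def fetch_access_token(consumer_instance_id: str,
                       target_nf_type: str,
                       scope: str) -> dict:
    """Request an access token from the NRF acting as authorization server.
    Returns the token response, e.g. {'access_token': ..., 'expires_in': ...}."""
    response = requests.post(
        NRF_TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "nfInstanceId": consumer_instance_id,  # identifies the requesting client
            "targetNfType": target_nf_type,        # expected producer NF type
            "scope": scope,                        # expected NF service name
        },
        timeout=5,
    )
    response.raise_for_status()
    return response.json()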
To grant such access, the service producer network function may be configured to send a network function service response message to the service consumer network function that indicates that the requested service is authorized and will be accessible. FIG.4depicts an exemplary encoded access token402and decoded access token404. Notably, encoded access token402is received in this form by the consumer network function from an authorization server or NRF. Encoded access token402is further used in an HTTP header of service request messages generated and sent by consumer network functions. The encoded access token is ultimately decoded by a receiving producer network function and/or its message rate limiting engine as discussed further below. FIG.5depicts a message signaling diagram that illustrates an exemplary rate limiting technique that is performed by a message rate limiting engine514. As shown inFIG.5, message rate limiting engine514is hosted by service producer network function512. In alternate embodiments, message rate limiting engine514is hosted by a SEPP or SCP node.FIG.5further illustrates a pair of consumer network functions521-522. As shown inFIG.5, service producer network function512can be configured with a record database (e.g., see record database700as discussed below and shown inFIG.7) that may contain a plurality of record entries that respectively correspond to service consumer network functions that have communicated with the service producer network function (or host of the message rate limiting engine514). As shown inFIG.5, service producer network function512receives an NF service request message502from service consumer network function521. Notably, service request message502includes an encoded access token that was previously obtained by service consumer network function521(e.g., from an NRF as described above with regard toFIG.3). Moreover, the access token includes a plurality of claims, any of which can be accessed by the message rate limiting engine514. For example, one claim in the access token is a subject claim that contains a consumer network function instance identifier that identifies the sending consumer network function521. Another accessible claim in the access token includes a consumer PLMN identifier. Although the following description primarily describes the access and extraction of identifier data from the subject claim and the consumer PLMN claim, any claim included in the access token may be accessed by the message rate limiting engine for identification information that can be used for rate-limiting purposes without departing from the scope of the disclosed subject matter. After receiving service request message502and the access token, service producer network function512and/or the message rate limiting engine514is configured to decode the encoded access token and initiate an access token verification and service authorization procedure (see block503). For example, message rate limiting engine514may be configured to verify the integrity of the claims included in the access token. Notably, message rate limiting engine514is configured to obtain the consumer network function instance identifier that uniquely identifies the consumer network function521from the subject claim of the decoded access token (and/or obtain a consumer PLMN identifier that uniquely identifies a consumer PLMN from the consumer PLMN claim of the decoded access token).
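Since the encoded access token is a JWT-style structure (header, payload, signature), its claims can be read by base64url-decoding the payload segment. A minimal sketch using only the Python standard library, with an invented token body; the claim names shown ('sub' for the subject claim, 'consumerPlmnId') are illustrative assumptions:

import base64
import json

def decode_token_claims(encoded_token: str) -> dict:
    """Decode the payload segment of a JWT-style access token without
    verifying its signature (integrity verification is a separate step)."""
    payload_b64 = encoded_token.split(".")[1]
    # Restore the base64url padding stripped during encoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Example with a locally built token body.
claims = {"sub": "6fa85f64-1111-4562-b3fc-2c963f66afa7",
          "consumerPlmnId": {"mcc": "001", "mnc": "01"}}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"header.{body}.signature"
print(decode_token_claims(token)["sub"])  # the consumer NF instance identifier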
Once the consumer network function instance identifier is obtained, message rate limiting engine514is configured to utilize the consumer network function instance identifier to cross-reference the entries of the record database. In particular, the record database may include network function identifiers (and/or consumer PLMN identifiers, consumer NF group identifiers, or the like) and associated message rate limiting parameters (e.g., as shown inFIG.7). By comparing the consumer network function instance identifier with one or more of the consumer network function identifiers included in the entries of the record database, message rate limiting engine514is able to determine any existing messaging restrictions placed on the particular service consumer network function521. For example, message rate limiting engine514may access the record database and determine various messaging information pertaining to the service consumer network function, such as the current messaging rate of network function521, a predefined allowed message rate for network function521, and a message throttle rate that is currently applied (if applicable) to the sending service consumer network function. In the event that message rate limiting engine514verifies the integrity of the access token and further determines that service consumer network function521is communicating in a manner that adheres to an acceptable ingress message rate for the producer network function512, message rate limiting engine514will send a service response message to the consumer network function521that indicates that access to the requested service has been granted. Further, message rate limiting engine514will continue to permit consumer network function521to communicate with producer network function512without executing any message rate limiting or throttling actions. In a second scenario illustrated inFIG.5, service consumer network function522sends its own network function service request message to service producer network function512. Similar to message502indicated above, service request message505includes an encoded access token that was previously obtained by service consumer network function522(e.g., from an NRF). Further, the encoded access token also includes a plurality of accessible claims, one of which is a subject claim that contains a consumer network function instance identifier that uniquely identifies the sending consumer network function522. Another accessible claim is a consumer PLMN claim that contains a consumer PLMN identifier that uniquely identifies a sending consumer PLMN. After receiving service request message505and the access token, service producer network function512and/or the message rate limiting engine514is configured to decode the access token and initiate the access token verification and service authorization procedure (similar to block503). For example, message rate limiting engine514may be configured to verify the integrity of the claims in the received access token. Notably, message rate limiting engine514is configured to obtain the consumer network function instance identifier that uniquely identifies the consumer network function522from the subject claim of the access token (and/or the consumer PLMN identifier from the consumer PLMN claim). Once the consumer network function instance identifier is obtained, message rate limiting engine514is configured to utilize the network function instance identifier to cross-reference the entries of the record database.
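A sketch of the cross-referencing step, assuming the record database is an in-memory mapping keyed by consumer NF instance identifier; the identifiers and rate values below are invented for the example:

from dataclasses import dataclass

@dataclass
class RateRecord:
    current_tps: float   # measured ingress rate for this consumer
    allowed_tps: float   # provisioned allowed rate
    throttle_tps: float  # portion currently being discarded, if any

# Illustrative record database keyed by consumer NF instance identifier.
record_db: dict[str, RateRecord] = {
    "6fa85f64-1111-4562-b3fc-2c963f66afa7": RateRecord(50.0, 40.0, 10.0),
    "6fa85f64-2222-4562-b3fc-2c963f66afa8": RateRecord(12.0, 40.0, 0.0),
}

def lookup_restrictions(nf_instance_id: str) -> RateRecord | None:
    """Cross-reference the record database with the identifier taken from
    the token's subject claim; None means no record is known."""
    return record_db.get(nf_instance_id)

record = lookup_restrictions("6fa85f64-1111-4562-b3fc-2c963f66afa7")
if record and record.current_tps >= record.allowed_tps:
    print("rate limiting action required")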
By comparing the consumer network function instance identifier with one or more of the network function identifiers included in the entries of the record database, message rate limiting engine514is able to determine any messaging restrictions placed on the particular service consumer network function522. For example, message rate limiting engine514may access the record database and determine that a message throttle rate is currently being applied to the sending service consumer network function. In the event that message rate limiting engine514determines that service consumer network function522is subjected to a throttling rate for ingress messages to producer network function512, message rate limiting engine514will execute a message rate limiting or throttling action. For example, message rate limiting engine514may be configured to discard a number of messages sent by consumer network function522based on an established rate limit that is predefined in the record database. More specifically, message rate limiting engine514can restrict ingress messaging to the producer network function512from service consumer network function522to a particular messaging throttle rate (e.g., 10 TPS) as defined in the record database (see, e.g., database700inFIG.7). It will be appreciated thatFIG.5is for illustrative purposes and that different and/or additional messages and/or actions may be used. It will also be appreciated that various messages and/or actions described herein may occur in a different order or sequence. FIG.6is a diagram illustrating an example network node600configured for utilizing network function identifiers to implement ingress message rate limiting. Network node600may represent any suitable entity or entities for performing aspects of ingress message rate limiting. In some embodiments, node600may represent or include one or more 5GC network functions, e.g., a service producer network function, a SEPP, an SCP, or the like. In some embodiments, network node600may represent or include a network gateway, a network proxy, an edge security device, or any related computing device that is configured to host a NF, SEPP, and/or SCP node or functionality. In some embodiments, network node600may include any producer network function, such as an NRF, PCF, BSF, NSSF, NEF, UDM/AUSF, UDR, UDSF, and the like. In some embodiments, network node600or a related module may be configured (e.g., via programming logic) to perform ingress message rate limiting on 5GC service request messages based on a consumer network function instance identifier that corresponds with the originating service consumer network function. By performing ingress message rate limiting in this manner, network node600(e.g., a service producer network function) is able to reduce or mitigate the impact of incoming 5GC request signaling storms on the network node or other downstream network functions in the home network. For example, network node600or a related module may be configured to identify a consumer network function instance identifier included in an access token (e.g., OAuth2 access token). More specifically, the consumer network function instance identifier is included in a subject claim that is contained within the access token. In some embodiments, the network node, message rate limiting engine, or related module is further configured to extract a consumer PLMN identifier from a consumer PLMN claim in the access token.
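One plausible realization of the per-consumer discard behavior is a fixed-window counter, as sketched below. Production limiters would more likely use token buckets or sliding windows, so this is an assumption for illustration rather than the patent's prescribed mechanism:

import time

class IngressThrottle:
    """Minimal per-consumer throttle: accept at most allowed_tps messages
    per one-second window and discard the rest."""

    def __init__(self, allowed_tps: int):
        self.allowed_tps = allowed_tps
        self.window_start = time.monotonic()
        self.count = 0

    def admit(self) -> bool:
        now = time.monotonic()
        if now - self.window_start >= 1.0:  # start a new one-second window
            self.window_start = now
            self.count = 0
        self.count += 1
        return self.count <= self.allowed_tps

# Example: with 40 TPS allowed, messages beyond the 40th in a window are discarded.
throttle = IngressThrottle(allowed_tps=40)
results = [throttle.admit() for _ in range(50)]
print(results.count(False), "messages discarded")  # 10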
As described below, the extracted consumer PLMN identifier can be used by the network node and/or message rate limiting engine to execute a rate limiting procedure on the sending consumer PLMN. In some embodiments, the network node and/or message rate limiting engine is configured to group multiple service consumer network functions for rate limiting purposes. In such scenarios, the network node and/or message rate limiting engine will require some configuration conducted by a network operator or administrator for enabling the grouping of the consumer NFs. Referring toFIG.6, network node600may include one or more communications interface(s)602for communicating messages via a communications environment, e.g., a home 5GC network. In some embodiments, communications interface(s)602may include a first communication interface for communicating with one or more service consumer network functions and/or SEPPs in a first network, a second communications interface for communicating with one or more service consumer network functions and/or SEPPs in a second network, and a third communications interface for communicating with one or more service consumer network functions and/or SEPPs in a home network, e.g., a home 5GC network. Network node600may include a message rate limiting (MRL) engine604. Message rate limiting engine604may be any suitable entity (e.g., software executing on at least one processor) for performing one or more aspects of disclosed ingress message rate limiting. In some embodiments, message rate limiting engine604may include functionality for obtaining, from a service request message sent from a service consumer network function, a consumer network function instance identifier identifying the originating service consumer network function and using the network function instance identifier to perform ingress message rate limiting functions at the network node600. For example, obtaining a consumer network function instance identifier from a 5GC signaling message may include obtaining the instance identifier from an access token contained in an HTTP header of the 5GC-based network function service request message. In this example, for each 5GC service access request message received by network node600, message rate limiting engine604may determine, using the consumer network function instance identifier, whether an allowed ingress message rate associated with the sending consumer network function instance identifier has reached or exceeded a predefined threshold value. In response to determining that the allowed ingress message rate associated with the network function instance identifier has reached or exceeded the threshold value, message rate limiting engine604may perform a message rate limiting action. Examples of rate limiting actions may include discarding a received request message, generating or modifying a throttle rate for discarding a portion of ingress messages sent by a particular consumer service network function, and/or notifying a network operator or a management system regarding an ingress message rate or related event. In some embodiments, message rate limiting engine604may be configured for determining whether to perform ingress message rate limiting by obtaining an allowed ingress message rate associated with a consumer service network function, obtaining a current ingress message rate associated with the consumer service network function, and comparing the current ingress message rate and the allowed ingress message rate.
If the current ingress message rate meets or exceeds the allowed ingress message rate, then a message rate limiting action may be performed. If the current ingress message rate does not meet or exceed the allowed ingress message rate, then message rate limiting engine604may allow the message to be handled or processed, e.g., without ingress message rate limiting. In some embodiments, network node600may access (e.g., read from and/or write information to) data storage606. Data storage606may be any suitable entity (e.g., a computer readable medium or memory) for storing various data. In some embodiments, data storage606may include logic for obtaining identifiers from access tokens, logic for checking whether to perform ingress message rate limiting, logic for implementing or triggering a message rate limiting action, and logic for tracking current ingress message rates associated with various originating entities (e.g., consumer service network function instance identifiers, PLMN IDs, etc.). In some embodiments, data storage606may include message rate limiting data. For example, data storage606may include information for identifying a current message rate, an allowed message rate, and/or a message throttle rate for various consumer network functions or network nodes therein. In this example, related message rates and throttle rates may be indexed or otherwise identified using an identifier obtained from a 5GC service access request message or an access token therein. Data storage606may further be configured to store a record database, such as record database700shown inFIG.7. FIG.7is a diagram that depicts example message rate related data stored in a record database700. Record database700may include information for identifying a current message rate, an allowed message rate, and/or a message throttle rate for various network functions and/or network nodes therein. For example, each rate in record database700may represent a number of messages, requests, or transactions per a time period, e.g., transactions per second (TPS). Referring toFIG.7, a table representing record database700comprises columns and/or fields for network and/or network function instance IDs, current message rates, allowed message rates, and message throttle rates. A network function identifier field may store information for representing a network function or an associated host network node. In some embodiments, record database700may include a consumer PLMN identifier field that can be used to conduct message rate limiting on a particular consumer PLMN. Similarly, in some embodiments, record database700may include a consumer NF group identifier field that can be used to conduct message rate limiting on a particular grouping of service consumer network functions. A current message rate field may store information for representing a measured or tracked message rate associated with one or more messages, types of messages, or transactions. For example, a current message rate (e.g., 50 TPS) may indicate a measured rate of 5GC service request messages or transactions received from a particular consumer network function. An allowed message rate field may store information for representing a predetermined allowed message rate associated with one or more messages, types of messages, or transactions.
For example, an allowed message rate (e.g., 40 TPS) may indicate a rate of 5GC service request messages or transactions received from a particular consumer network function that a network node (e.g., a producer network node, SCP, or SEPP) is configured to allow, e.g., without performing a message rate limiting action. A message throttle rate field may store information for representing a message throttle rate associated with one or more messages, types of messages, or transactions. For example, a message throttle rate may indicate a rate of inter-5GC service request messages or transactions received from a particular consumer network function that a network node (e.g., a producer network node, SCP, or SEPP) is to throttle or discard. In this example, a throttle rate may be based on the difference between a current message rate and an allowed message rate, e.g., 50 TPS − 40 TPS = 10 TPS. It will also be appreciated that record database700is for illustrative purposes and that different and/or additional data than the data depicted inFIG.7may be usable for indicating default values for particular data portions or other information. Further, record database700may be stored (e.g., in a database record in data storage606as shown inFIG.6) or managed using various data structures and/or computer readable media. FIG.8is a diagram illustrating an example process800for ingress message rate limiting. In some embodiments, example process800described herein, or portions thereof, may be performed at or performed by network node600, message rate limiting engine604, and/or another module or node. In step802, a 5GC service access request message is received from a service consumer network function. In some embodiments, the request message is received by a network node, such as an SEPP, SCP, a producer NF, or any other node comprising message rate limiting engine604in a home 5GC network. In step804, an access token that includes a consumer network function instance identifier is extracted from the received 5GC service request message. In some embodiments, the message rate limiting engine obtains the consumer network function instance identifier contained in a claim of the access token. Notably, the consumer network function instance identifier uniquely identifies the sending service consumer network function. In some embodiments, the network node and/or message rate limiting engine extracts a consumer PLMN identifier from a consumer PLMN claim in the access token. In step806, it may be determined, using the consumer network function instance identifier, that an allowed ingress message rate associated with the sending service consumer network function has been reached or exceeded. For example, a producer network function may utilize a consumer network function instance identifier obtained from the access token (see step804) associated with an originating service consumer network function to determine whether the messages sent by a particular service consumer network function are reaching or exceeding an allowed ingress message rate. In this example, the producer network function may query a data store or database that contains current ingress message rates and allowed message rates indexed by or associated with relevant identifiers (e.g., a consumer network function instance identifier).
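Returning to the throttle-rate arithmetic described for record database700above, it reduces to a simple non-negative difference, sketched here with the 50 TPS/40 TPS example from the text:

def derive_throttle_rate(current_tps: float, allowed_tps: float) -> float:
    """Throttle rate as described for record database700: the excess of the
    current ingress rate over the allowed rate, never negative."""
    return max(current_tps - allowed_tps, 0.0)

# The example from the text: 50 TPS measured, 40 TPS allowed -> 10 TPS discarded.
print(derive_throttle_rate(50.0, 40.0))  # 10.0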
In some embodiments, an extracted consumer PLMN identifier can be used by the network node and/or message rate limiting engine to determine if an allowed ingress message rate associated with the sending consumer PLMN has been reached or exceeded. In some embodiments, determining that an allowed ingress message rate associated with a particular sending service consumer network function has been reached or exceeded may comprise i) obtaining the allowed ingress message rate associated with the service consumer network function, ii) obtaining a current ingress message rate associated with the service consumer network function, and iii) comparing the current ingress message rate and the allowed ingress message rate for determining that the current ingress message rate meets or exceeds the allowed ingress message rate. In step808, in response to determining that the allowed ingress message rate associated with the service consumer network function has been reached or exceeded, a message rate limiting action may be performed. In some embodiments, a message rate limiting action performed by the producer network function and/or the message rate limiting engine may include discarding a request message, generating or modifying a throttle rate for discarding a portion of messages, or notifying a network operator or a management system. In some embodiments, a message rate limiting action may be performed by the network node and/or the message rate limiting engine in response to determining that the allowed ingress message rate associated with the sending consumer PLMN has been reached or exceeded. It will be appreciated that process800is for illustrative purposes and that different and/or additional actions may be used. It will also be appreciated that various actions described herein may occur in a different order or sequence. It will be appreciated that while some aspects of the subject matter described herein have been discussed with reference to 5G networks, various other networks may utilize some aspects of the subject matter described herein. For example, any network that utilizes certificates that identify senders or related networks may use features, mechanisms, and techniques described herein to perform more selective message rate limiting. It should be noted that network node600, message rate limiting engine604, and/or functionality described herein (e.g., as shown inFIG.6) may constitute a special purpose computing device. Further, node600, message rate limiting engine604, and/or functionality described herein can improve the technological field of network security and/or message rate limiting at a producer network function, SEPP, SCP, or other network node. For example, by performing ingress message rate limiting based on a consumer NF identifier, malicious activities (e.g., signaling traffic storms) and their negative consequences (e.g., network congestion, service failures, and/or poor user experience) can be mitigated and/or prevented. The disclosure of each of the following references is incorporated herein by reference in its entirety to the extent not inconsistent herewith and to the extent that it supplements, explains, provides a background for, or teaches methods, techniques, and/or systems employed herein. REFERENCES 1. 3GPP TS 33.501; 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Security Architecture and Procedures for the 5G System; (Release 16), V16.3.0 (2020-07). 2.
3GPP TS 29.510; 3rd Generation Partnership Project; Technical Specification Group Core Network and Terminals; 5G System; Network Function Repository Services; Stage 3 (Release 16), V16.4.0 (2020-07). It will be understood that various details of the presently disclosed subject matter may be changed without departing from the scope of the presently disclosed subject matter. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation.
34,842
11943617
DETAILED DESCRIPTION The method disclosed herein is typically used in the field of home networks. Such networks can comprise several terminals such as e.g., TV sets, mobile phones, smartphones, tablets or computers. In such a network, the different devices or terminals often have some of the same rights and some different rights. Thus, it is possible that e.g., a user is authorized to view a movie on his/her tablet as often as he/she wishes during a three day period, but only once, at any time, on his/her TV set. In such a context, it is important that the rights for an authorized user are managed in an efficient and correct way and that a non-authorized user cannot benefit illegally from these rights. In the framework for describing the embodiments of the invention, a user has at least one first terminal that is in charge of acquiring an access right and a second terminal that is used to access the content. With reference to the system illustrated inFIG.1, the method disclosed herein uses a first device or a first terminal T1comprising both a remote communication mechanism and a local communication mechanism. Such a terminal T1can be e.g., a mobile phone or a tablet among others. The local communication mechanism uses NFC technology and enables a local communication at short distance, typically in the range of a few centimeters. The remote communication mechanism can be a conventional mechanism using for example the GSM network. The method disclosed herein also uses a second terminal T2on which the content is used, as will be explained below in more detail. The method further requires the presence of an access right provider ARP and a content provider CP. FIG.2illustrates a first embodiment of the method disclosed herein using the elements illustrated inFIG.1. In a first step, a user goes to an access right provider ARP with the first terminal T1. This access right provider ARP can typically be a shop, cinema, theater, etc. having a terminal using NFC technology. The user can receive an access right e.g., as a commercial offer, after having purchased goods for a given amount, after having attended a show or a movie, or after having used a given service. In order to acquire said right, the user places his/her first terminal T1close to a terminal of the access right provider ARP. The right for said user is then transmitted from the provider's terminal to the user's first terminal T1, using NFC technology. The concerned right is then stored in a memory of the first terminal T1. According to a first embodiment, the implementation of the method requires the prior registration of the user at a management center MC. This registration enables the management center to acquire data used for sending management messages to the concerned user. This registration further enables sharing keys with the registered receiver devices. These keys can be common to several terminals of a single user or they can be individual and different for each terminal. These keys can be the same for the management center MC and the receiver or, conversely, they can be different, the key in the management center MC and the key at the user's side then being the two keys of a private-public key pair. The registration of the user's terminals at the management center MC enables a targeted transmission of the management messages containing the rights to the concerned receiver device. These rights are usually encrypted with a key enabling the receiver to decrypt the messages received.
When a user is registered, an account is normally created; this account groups all of the terminals of the user's home network. Thus, the management center MC is able to manage all of the user's terminals and associate their usage rights, which can vary individually for each terminal. According to an alternative embodiment, the user's terminals are not registered in advance. When a right is received by the first terminal T1, a request Rq is sent to the management center MC. This request contains all of the elements that are required for sending the content to a given terminal of the concerned user. In the embodiment illustrated inFIGS.1and2, when the right is stored in the user's first terminal T1, a request Rq is sent to the management center MC by this first terminal T1. If a prior registration of the user has been made, the request can contain a right and a mechanism for verifying its authenticity. The content provider CP has the mechanism for determining which account the user who sent the right is associated with. It is thus not necessary that this information be in the request. It could, however, be introduced in the request to enable a verification process if desired. If, on the other hand, no previous registration has been made, the request Rq must contain information concerning the user's terminal to which the content provider CP must send the content. At this stage, the content provider CP has the information concerning the concerned user (independent of whether a previous registration was made or not). In particular, the content provider has, for each user, an account enabling it to identify the different terminals associated with that user. As mentioned previously, the request contains, among others, the right and a mechanism for verifying its authenticity. The request can further contain a mechanism for identifying the author of the right, possibly validity conditions such as a date, and possibly information related to the user's terminal on which the content will be used. In this embodiment, the request Rq is sent by a remote communication channel. According to a desired embodiment, the first terminal T1is a mobile phone such as a smartphone and the rights are transmitted to the content provider CP by GSM. When the management center MC receives the request containing the right, the center determines the origin of the request and associates this request with the account of the user. The determination of the user's account also enables determining keys associated with the account, which enables verifying the authenticity of the right received. Several well-known methods exist for verifying the authenticity of the rights. One method, which can for example be used in the present embodiment, comprises integrating with the right a verification code that can be, e.g., the result of a one-way function using a key, said function being applied to the right. When the request is received, the management center MC can apply the same one-way function with the same key, to determine if the right contained in the request is authentic. When this verification step has successfully been performed, the management center MC determines which content Ct corresponds to the right it received. In the illustrated example, the management center MC also contains content that can be sent to the users. Therefore, the management center also plays the role of content provider CP. According to the first embodiment, each right corresponds to specific content.
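One way to realize the keyed one-way function described above is an HMAC over the serialized right; this is a sketch under that assumption (the key and right values are invented), not necessarily the exact construction the patent intends:

import hashlib
import hmac

SHARED_KEY = b"key-associated-with-user-account"  # hypothetical registration key

def sign_right(right: bytes) -> bytes:
    """Verification code integrated with the right by the access right provider."""
    return hmac.new(SHARED_KEY, right, hashlib.sha256).digest()

def verify_right(right: bytes, verification_code: bytes) -> bool:
    """Management center MC recomputes the same keyed function and compares."""
    return hmac.compare_digest(sign_right(right), verification_code)

right = b"one-free-movie-download;valid-until=2025-12-31"
code = sign_right(right)
print(verify_right(right, code))         # True: authentic
print(verify_right(right + b"x", code))  # False: tampered right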
According to a specific example, the access right provider ARP can be a movie theater. The right can enable accessing, for a reduced price, a movie from the same distributor which distributed the movie the user has viewed in this theater. According to another example, the user obtains the right to download the music of the movie he/she viewed. The disclosed embodiment can be used, e.g., in any commercial shop for various purposes (e.g., discount coupons, gifts, etc.). According to a second embodiment disclosed herein, a right does not have a one-to-one correspondence to specific content (e.g., an event, a service or a discount). In this embodiment, it is necessary to acquire several rights before being authorized to access the content. As an example, it may be necessary to receive five rights from a movie theater to obtain the possibility of downloading one movie for free. In this case, the rights can be collected and stored in the user's first terminal T1and sent when all of the collected rights enable access to a good or service. The rights can also be collected and stored by the content provider CP or the management center MC, for example, and linked with the user's account. In this case, each right is sent to the content provider. The content provider suggests a product when the stored rights enable access to this product. According to an alternative embodiment, the products proposed vary depending on the number and/or the value of the rights accumulated. In other words, the goods do not “cost” the same number of rights. For example, a content provider can propose the downloading of the music of a movie for a “value” of one right, the viewing of an already seen movie for three rights, the viewing without storage for four rights and the viewing of the same movie with storage for six rights. The user will thus be able to choose different goods depending on the number of rights accumulated. The number of rights deducted from the user's account depends on the goods chosen by said user. When the user has chosen the content he wishes to access, after the appropriate verifications, such as, e.g., verifying the authenticity of the right and verifying that the rights available to this user match those required for the concerned content, the content Ct can be transmitted to the user. This transmission is made to one of the user's devices, referred to as the second terminal T2. The transmission is accompanied by conditions of use. In particular, the conditions of use indicate which operations can be made with the content sent to the second terminal T2. These operations are, for example, viewing only, without the right to store the content, or alternatively, the right to store the content. These operations can also concern the quality of the images (resolution), or temporal constraints such as e.g., viewing during one week. The operations can also be linked to a number of viewings (single or multiple viewings). When the rights are valid, and when the provider has determined which terminal the content must be sent to, the provider sends the content to the concerned terminal together with the conditions of use. This terminal thus uses the content according to the associated conditions of use. In the embodiment illustrated inFIGS.3and4, the user's first terminal T1receives the rights through NFC in a way that is similar to what has been described above with reference to the embodiment illustrated inFIGS.1and2.
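Returning to the variable “cost” of goods described above, the pricing and deduction logic can be illustrated with a small catalogue and an affordability check; the goods, costs, and function names below are invented for the example:

# Illustrative catalogue of goods priced in accumulated rights, following
# the example in the text.
CATALOGUE = {
    "movie_soundtrack_download": 1,
    "repeat_viewing": 3,
    "viewing_without_storage": 4,
    "viewing_with_storage": 6,
}

def affordable_goods(accumulated_rights: int) -> list[str]:
    """Goods the content provider can propose for the user's current balance."""
    return [good for good, cost in CATALOGUE.items()
            if cost <= accumulated_rights]

def redeem(accumulated_rights: int, good: str) -> int:
    """Deduct the chosen good's cost and return the remaining balance."""
    cost = CATALOGUE[good]
    if cost > accumulated_rights:
        raise ValueError("not enough rights accumulated")
    return accumulated_rights - cost

print(affordable_goods(4))          # three goods cost 4 rights or fewer
print(redeem(4, "repeat_viewing"))  # 1 right remains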
As in the first embodiment, the rights are stored in the first terminal T1. In the present illustrated embodiment, the rights are then transmitted from the user's first terminal T1to the second terminal T2, which, in the example illustrated, is a multimedia unit. This transmission can be made by a short distance communication channel (e.g., through NFC) if the second terminal has a communication mechanism for this technology. Other communication mechanisms can also be used, such as for example Wi-Fi or Bluetooth. When the second terminal T2has received the rights, the terminal prepares a request similar to the one sent by the first terminal in the embodiment illustrated inFIGS.1and2. This request Rq is sent to the content provider CP, or the management center MC, which processes this request and proceeds with the required verifications and authentications. The content provider then determines which terminals associated with the user's account the content must be sent to. This determination can be e.g., made from the content of the request. The content provider further adds the conditions of use and transmits the content and the conditions of use to the concerned terminal. In the embodiment illustrated inFIGS.3and4, the concerned terminal can be the multimedia unit that sent the request, another multimedia unit, a tablet or any similar terminal. It should be noted that the conditions of use could be different in different terminals. For example, the embodiment can limit the use of the content to a single viewing on the multimedia unit and/or allow an unlimited number of viewings of the same content on the tablet during one week. In the embodiment ofFIGS.5and6, the first terminal T1comprises a remote communication mechanism as well as a local communication mechanism. Such a terminal can be a mobile phone (smartphone) or a tablet. The local communication mechanism uses NFC technology and enables a local communication at a very short distance, typically in the range of a few centimeters. The remote communication mechanism can be a conventional communication mechanism using e.g., the GSM network. In the method illustrated inFIG.5, the access right provider ARP sends rights to a user, more specifically to the user's first terminal, through a remote communication mechanism. This transmission can typically use the GSM network. These rights can be sent encrypted or unencrypted as they are addressed individually to each concerned user. When the message containing the rights is received by the first terminal T1, the rights are extracted from the message before being stored in the first terminal. According to the desired security level, the right can be encrypted or, on the contrary, stored in unencrypted form. The encryption key used for the storage of the rights is advantageously a key that is common to all of the devices belonging to the user (for example, a key associated with the user's account). Thus, when a right is transmitted from the first terminal to a user's second terminal, this right can be read by all of the user's terminals. In the following steps of the disclosed method, the rights are transmitted from the first terminal T1to a second terminal T2using local communications, and more specifically, a near field communication (NFC) mechanism. In order to transfer a right by using this communication mechanism, the receiver device must be placed at a short distance from a reader integrated with the user device. The rights received by a second terminal must be validated prior to being usable.
In order to perform this validation, a request is transmitted to the content provider or management center, similar to the method discussed above with respect to the embodiment ofFIGS.3and4. The content provider determines which terminal the content must be sent to and which conditions of use are associated with the content and/or the terminal. The content is then sent to the concerned terminal in a conventional way. The method disclosed herein forces a near field communication, which requires proximity between the user's terminals and, consequently, a voluntary step by the user, which lowers the risk of fraud. According to a desired embodiment, once a right is used (i.e., transferred from a reception device to a device in which it is used), said right is deleted from the reception device or marked as non-usable. This prevents the same right from being used several times, for example, on several different devices. According to another embodiment, the right can be used several times. This could be done e.g., for a limited number of times, for an unlimited number of times, or during a limited time period. The choice of the implementation is free and can be determined e.g., by the right's provider.
15,340
11943618
DETAILED DESCRIPTION In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain embodiments. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs. This disclosure describes techniques that may be used to force authentication, or re-authentication, of a user of an online service. When accessing a service (e.g., an online service), users are sometimes able to skip authentication via the use of a session token that stores an authentication status of the user. In some embodiments, a status may be determined with respect to the user's access to the service. Upon a determination that the user does not currently qualify to access the service, the system may cause one or more session tokens to be removed from, or updated on, the user device so that future attempts to access the service will force reauthentication of the user. A number of third-party service providers often provide their services to consumers via the users' mobile devices, and often via a communication channel established over a wireless carrier network. Examples of such services may include video streaming services, image storing and sharing services, social networking services, or any other suitable service. A number of those third-party service providers may provide access to a Software as a Service (SaaS) platform which enables such services. Each third-party service provider may maintain information related to an account for a user. In order to access that information, and/or functionality associated with the service, the user may be required to log into that account via an authentication process. Some third-party service providers use session tokens in order to provide convenience to customers. A session token is any piece of data that is stored on a user device and used in network communications to identify a series of related message exchanges (i.e., a session). In some embodiments, a session token for a service stores an authentication status so that a user who is authenticated once will not need to be re-authenticated each time that she or he uses the service. However, when an account no longer qualifies for the service (e.g., when a user cancels the service or moves away from a plan that has the service), the session token continues to enable the user to use the service without reauthentication. Since session tokens can be active for a number of years (they typically expire every two years), users may be able to access services that they do not qualify for long after they have stopped qualifying for those services. In some embodiments, an application programming interface (API) is provided that, when called, checks a current status of a user with respect to a particular service. Such a call to the API may be made by software executing on a mobile device. One or more policies stored in relation to the service may be retrieved in order to assess the current status of an account associated with the user device.
If the current status is one that does not qualify for access to the service, instructions are provided to the user device to cause it to delete or overwrite the session token. Absent a current session token, the service may require authentication the next time that the service is requested. In some embodiments, the API may be called each time that the service is executed and may run in the background in parallel to an initiation of the service. Embodiments of the disclosure provide for a number of advantages over conventional systems. For example, embodiments of the disclosure enable online service providers to continue to provide convenience to their users via the use of a session token, while also enabling that service to be restricted to users that qualify for the service. Embodiments of the disclosure may be configured to run in the background so that the providing of the online service is not disrupted. FIG.1illustrates an example architecture of a wireless carrier network for implementing techniques for forcing re-authentication of users in accordance with embodiments of the disclosure. The architecture100may include a wireless carrier network102that serves multiple user devices, such as user devices104. The user device104may include any of a feature phone, a smartphone, a tablet computer, a phablet, an embedded computer system, a personal computing device, or any other device that is capable of using the wireless communication services that are provided by the wireless carrier network102to communicate with other electronic devices. The wireless carrier network102may include multiple base stations, such as the base station106, as well as a core network108. The wireless carrier network102may provide telecommunication and data communication in accordance with one or more technical standards, such as Enhanced Data Rates for GSM Evolution (EDGE), Wideband Code Division Multiple Access (W-CDMA), High Speed Packet Access (HSPA), Long-term Evolution (LTE), CDMA-2000 (Code Division Multiple Access 2000), 4th Generation (4G), 5th Generation (5G), and/or so forth. The base stations are responsible for handling data traffic between user devices and the core network108. In some embodiments, the base stations may be in the form of eNodeB nodes. Each eNodeB node may include a base transceiver system (BTS) that communicates via an antenna system over an air-link with one or more user devices that are within range. The antenna system of an eNodeB node may include multiple antennae that are mounted on a radio tower to provide a coverage area that is referred to as a “cell.” The BTS may send RF signals to user devices and receive radio signals from user devices. The core network108may provide telecommunication and data communication services to multiple user devices104. For example, the core network108may connect the user devices104to other telecommunication and data communication networks, such as a network112(e.g., the Internet) and a public switched telephone network (PSTN). Accordingly, data packet communications via the core network108and the Internet may support a variety of services provided by third-party provider servers114. Additionally, each of the multiple user devices104may be further configured to interact with a network112via an alternative communication channel.
For example, one or more of the user devices104may establish communication with the network112via short-range wireless communication with a local network, such as a wireless local area network (WLAN) maintained by a router or a mobile phone in communication with the device (e.g., via Bluetooth®, Wi-Fi, or other suitable short-range wireless means). In some embodiments, a user device104may be any suitable electronic device capable of interacting with at least one other electronic device (e.g., third-party servers114) in order to consume online services. In some embodiments, the user device104may include one or more outputs (e.g., a display, speakers, etc.) via which multimedia content may be presented to a user. Additionally, the user device104may include one or more input devices configured to collect input from a user of the user device. The user device104may include a memory that stores computer executable instructions. In some embodiments, that memory may also store one or more session tokens associated with various online services. In some embodiments, the user device104may have installed upon it a mobile application118configured to provide an online service associated with a third-party server114. In some cases, the mobile application118may be a special-purpose software application that is configured to provide access to functionality associated with the online service. In other cases, the mobile application118may be a browser application that enables interaction with the third-party servers114via a network112. The core network108may include at least a policy engine120and a gateway122. The policy engine120may be a software component that determines policy, enforces policy rules, and serves to establish communication sessions and allocate bandwidth to devices associated with those communication sessions. In various embodiments, the policy engine120may include a Policy and Charging Rules Function (PCRF) or another equivalent core network component of the wireless carrier network102. Accordingly, the policy engine120may interface with the gateway122to handle incoming and outgoing communications. The gateway122may include one or more servers and related components that are tasked with providing connectivity between the core network108, various user devices (e.g., the user devices104), and one or more third-party servers114by acting as a point of entry and exit for data traffic. In turn, the core network108may provide the user devices with data access to external packet data networks112, such as the networks of other wireless carrier networks or the Internet. Accordingly, the gateway122may perform functions such as policy enforcement, packet filtering, packet screening, and/or charging support. In various embodiments, the gateway122may be a Packet Data Network Gateway (PGW) or another equivalent core network component of the wireless carrier network102. In some embodiments, the wireless carrier network102may include a session management platform124for determining a current status of access rights with respect to a particular online service and forcing an authentication reset upon determining that access to an online service is unavailable. Additionally, the wireless carrier network102may include one or more data repositories126that store information associated with online services, user accounts, or any other suitable information.
For example, such a data repository126may include at least a data store housing eligibility data128that includes rules for accessing online services, as well as account data130that includes information pertaining to individual user accounts. By way of illustrating various interactions between the components depicted inFIG.1, consider the following example. In this example, a user of a user device104attempts to access an online service associated with a third-party server114via a connection using the wireless carrier network102. Initially, the user of the user device104may be asked to provide login credentials in order to access an account associated with that user. At that time, the third-party server114may perform an initial check to determine whether the user is authorized to access the online services. Upon determining that the user is authorized to access the online services, the third-party server114may provide a session token132to the user device104to be placed in memory. The session token132includes an authentication status as determined by the third-party server during its initial assessment with respect to access rights. On future attempts to access the online service, the user device104provides the third-party server114with the session token132instead of requiring authentication once more from the user. In some embodiments, the initial authorization determination described above is made by the wireless carrier network102based on information associated with an account maintained by that wireless carrier network102. For example, a user may have access to the online service provided by the third-party server114by virtue of his or her association with the wireless carrier network102. In these cases, the authentication of the user and determination of current access rights may be performed by the wireless carrier network102instead of by the third-party server. In this case, the session token132may also be provided to the user device104by the wireless carrier network102. In this situation, when a user attempts to access the online service, the third-party server114may not have direct access to access right eligibility and may rely upon the session token132as an indication of authorization. In embodiments, the core network108may receive a request from a user device104to determine a current status of access rights for the user device104with respect to the online service. This request may be transmitted in parallel to a request to access the online service. For example, a user may execute a mobile application118on his or her user device104. Upon execution, the mobile application118may provide a session token132to the third-party server114in order to gain access to the online service. The user device104or the third-party server114may, at that time, transmit the request to the core network108. Such a request is then routed to the session management platform124. In some embodiments, the session management platform124may determine a current status of access rights for a particular online service in response to a received request. Such a request may be received from a software application associated with that online service (e.g., mobile application118). To do this, the session management platform124may, upon receiving a request in relation to an online service, retrieve one or more eligibility requirements for the online service from the eligibility data128. 
Additionally, the session management platform124may retrieve information related to the user device104and/or an account associated with the user device104from the account data130. The retrieved data may then be compared in order to determine a current state of eligibility for the account with respect to the online service. Upon determining that the user is currently eligible to access the online services, the session management platform124may take no further action. However, upon determining that the user is not currently eligible to access the online service, the session management platform124may initiate a reset of the session token132. In some embodiments, the session management platform124provides instructions to the user device104to cause the user device to delete or otherwise remove the session token132from memory. In some embodiments, the user's current attempt to access the online services may be successful because of the presence of the session token132at the time of the attempt. In this way, session tokens132can be updated so that a user is forced to re-authenticate himself/herself on the next attempt to access a service. This prevents disruption of the online service to the user and allows the user to provide different login credentials in the event that the user has access to more than one account. FIG.2is a block diagram showing various components of a system architecture that supports removal of a session token in order to force reauthentication of a user in accordance with embodiments. The system architecture may include one or more computing devices200on which a session management platform124is implemented. The computing devices200may include a communication interface202, one or more processors204, memory206, and hardware208. The communication interface202may include wireless and/or wired communication components that enable the computing devices200to transmit data to, and receive data from, other networked devices. The hardware208may include additional user interface, data communication, or data storage hardware. For example, the user interfaces may include a data output device (e.g., visual display, audio speakers), and one or more data input devices. The data input devices may include, but are not limited to, combinations of one or more of keypads, keyboards, mouse devices, touch screens that accept gestures, microphones, voice or speech recognition devices, and any other suitable devices. The memory206may be implemented using computer-readable media, such as computer storage media. Computer-readable media includes, at least, two types of computer-readable media, namely computer storage media and communications media. Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, DRAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device. In contrast, communication media may embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanisms. 
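By way of a non-limiting illustration, the following Python sketch models the session-token lifecycle described above: a pseudorandom token is issued after a successful authentication, stands in for login credentials on later attempts, and, once removed, forces the user to re-authenticate. The in-memory table and the function and variable names are assumptions made for this example only and are not part of the disclosed architecture.

import secrets

# In-memory stand-in for the server-side table mapping tokens to sessions.
session_table: dict = {}

def issue_session_token(user_id: str, device_id: str) -> str:
    # A pseudorandom, URL-safe string mapped to the communication session.
    token = secrets.token_urlsafe(32)
    session_table[token] = {"user": user_id, "device": device_id}
    return token

def authenticate_with_token(token: str) -> bool:
    # Later attempts present the token instead of login credentials.
    return token in session_table

def force_reauthentication(token: str) -> None:
    # Server-side counterpart of deleting the token from device memory.
    session_table.pop(token, None)

token = issue_session_token("user-1", "device-A")
print(authenticate_with_token(token))   # True: token accepted without credentials
force_reauthentication(token)
print(authenticate_with_token(token))   # False: the next attempt must re-authenticate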
The one or more processors204and the memory206of the computing devices200may implement functionality from one or more software modules and data stores. Such software modules may include routines, program instructions, objects, and/or data structures that are executed by the processors204to perform particular tasks or implement particular abstract data types. The one or more software modules may include at least a module for determining a current status of access rights with respect to a user and an online service (e.g., access determination engine212) as well as a module for resetting or removing a session token on one or more user devices (e.g., session reset engine214). The memory206may include an application programming interface (API)216that enables interaction between a user device or third-party server and the session management platform124. For example, a user device or third-party server platform may submit a request to the session management platform by calling one or more methods available via the API216. Additionally, the memory206may include one or more data stores to include at least a database that includes requirements for obtaining access to online services (e.g., eligibility data218) and information associated with one or more user accounts (e.g., account data220). The access determination engine212may be configured to determine a current status of access rights for a user with respect to a particular online service. In some embodiments, the session management platform124receives a request from an online service provider or a user device. In some cases, the request may be provided to the session management platform upon an attempt by a user of a user device to access an online service. In some embodiments, the request may be received on a periodic basis (e.g., monthly, weekly, daily, hourly, etc.). Such a request may be received via the API216of the session management platform124. Upon receiving the request, the session management platform124may forward the request to the access determination engine212to be processed. Upon receiving a request to determine access rights for a user with respect to an online service from the session management platform, the access determination engine212may retrieve information associated with access rights requirements from the eligibility data218as well as information associated with an account of the user from the account data220. The access determination engine212then identifies one or more conditions to be met within the access rights requirements (e.g., account type, payment status, usage limits, etc.). The access determination engine212then compares relevant portions of the retrieved account information to those conditions to determine to what extent the conditions are satisfied by the account information. In some embodiments, a set of conditions may be retrieved in association with the online service and each condition in the set of conditions must be met in order for the user account to have access rights for the online service. In some embodiments, one or more of the conditions in the set of conditions may be optional rather than required. In some embodiments, the access determination engine212may be configured to determine a level of access rights currently associated with a particular user and online service. For example, the access determination engine212may determine, based on user account information, what particular services and/or content are available to the user via the online service. 
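As a hedged illustration of the condition-matching step performed by the access determination engine212, the Python sketch below evaluates a set of required and optional conditions against account data. The data shapes, rule names, and threshold values are hypothetical and chosen only for the example; the disclosure does not prescribe a particular rule format.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Condition:
    name: str
    required: bool                 # optional conditions may fail without revoking access
    check: Callable[[dict], bool]  # predicate evaluated against account data

def determine_access(account: dict, conditions: list) -> bool:
    # Access rights exist only if every *required* condition is satisfied.
    return all(c.check(account) for c in conditions if c.required)

# Hypothetical eligibility rules for one online service.
rules = [
    Condition("account_type", True,  lambda a: a.get("plan") in {"premium", "family"}),
    Condition("payment_ok",   True,  lambda a: a.get("balance_due", 0) == 0),
    Condition("under_limit",  False, lambda a: a.get("monthly_usage_gb", 0) < 100),
]

account = {"plan": "premium", "balance_due": 0, "monthly_usage_gb": 140}
print(determine_access(account, rules))  # True: only the optional condition fails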
In some embodiments, the access determination engine212may identify a number of user devices associated with the same account and may determine what online services are available for each of the user devices. The session reset engine214may be configured to reset or remove a session token included in a memory of one or more user devices. In some embodiments, the session reset engine214may be called by the access determination engine212upon determining that a current user of a user device does not have access rights to an online service. In some embodiments, the session reset engine214may be called with respect to multiple user devices associated with a user. In some cases, the session reset engine214may determine a type (e.g., brand and model) of the user device and/or an operating system running on the device from which a session token is to be removed. In some embodiments, the session reset engine214may determine token storage procedures associated with the online service. For example, the session reset engine214may identify naming conventions and storage locations for the session token associated with a particular online service. In some embodiments, the session token may be a random (or pseudorandom) string of characters that is mapped to a user's communication session. The session reset engine214may then generate instructions that are compatible with the user device to cause that user device to remove the session token having the determined naming from the determined location in memory. These instructions may then be provided to the user device. In some embodiments, the generated instructions are pushed to the user device by the wireless carrier network via a push message within a software update. The user device104may be any electronic device capable of interacting with the computing devices200as described herein. The user device104may include a processor222and a computer readable memory224as well as a communication interface226. The memory224may include at least one mobile application226. Such a mobile application may be configured to enable a user of the user device to access one or more online services provided by a third-party service provider. In some embodiments, the memory224may also store a number of session tokens228, with each session token associated with a particular online service. Upon interaction with a third-party server, the user device may be configured to provide one or more of the session tokens228to the third-party server in response to receiving a request for authentication from the third-party server if such a session token is available. If no session token is available to be provided in response to a request for authentication, then a mobile application226, upon attempting to access the online service, may cause a user of the user device to be prompted for authentication credentials. The provided credentials may then be checked to determine current access rights for the user. FIG.3depicts a flow diagram showing an example process flow for forcing reauthentication of a user of online services upon detecting a loss of access rights in accordance with embodiments. The process300comprises interactions between various components of the architecture100described with respect toFIG.1. More particularly, the process300comprises interactions between a user device104, a core network108of a wireless carrier network, a session management platform124of the wireless carrier network, and at least one third-party server114. 
Additionally, at least a portion of the process300may comprise one or more interactions between a user and the user device104. At302of the process300, a user may initially attempt to access an online service. In some embodiments, this may comprise the execution of a mobile application installed upon the user device by the user. At304, the user device initiates access to the online service by establishing a connection with a third-party server that provides that online service. This connection may use a communication channel established via the core network of a wireless carrier network. Upon receiving a request to access the online service, the third-party server may attempt to retrieve a session token from the user device. If such a session token is unavailable (e.g., because the online service has not previously been accessed via the user device), then the third-party server will be unable to retrieve such a session token. Upon determining that such a session token is not available on the user device, the third-party server may establish a new communication session with the user device. To do this, the third-party server may request login credentials (e.g., a username and password) from the user device. In some embodiments, this may comprise presenting a login screen to the user via the user device. Upon receiving login credentials from the user, the third-party server may authenticate the user based on the login credentials at306. In some embodiments, this may comprise comparing the received login credentials to account data stored by the third-party server. In some embodiments, this may comprise providing the login credentials to the session management platform via the core network in order to receive an indication of a current status of access rights for the online service based on the received login credentials. Once the user has been authenticated, the third-party server may generate a session token to be associated with further requests received from the user device. In some embodiments, a session token may include a random, or pseudorandom, string of characters that is mapped to a set of communications (e.g., communications between the user device and the third-party server) within a database. The third-party server may then provide the session token to the user device at308(A). In some cases, the third-party server may also provide the session token to the session management platform at308(B), which may then store that session token in association with the user's account. At310, the user device may store the received session token in its local memory. In some embodiments, the session token may be encrypted and/or stored in a secure element of the memory. In some embodiments, the user device may also store an association between the session token and the third-party server. Once the user has been authenticated at306, the user device may be provided with access to the online service at312. Note that this may occur in parallel to (e.g., at substantially the same time as) the portions of the process described at308and310. The third-party server may provide an online service in a variety of ways. For example, in some embodiments, the third-party server provides an address to the user device at which the online service can be accessed. In another example, the third-party server provides a cryptographic key that can be used to access media content (e.g., streaming video) via the online service. 
In this example, the media content may be hosted on one or more edge servers in an encrypted format. The cryptographic key may be used by the user device to decrypt, and hence access, the media content hosted on the one or more edge servers. At a subsequent time, the user may again attempt to access the online service at314. In this scenario, assume that the user has, since a time associated with block302, lost his or her access rights with respect to the online service (e.g., the user no longer qualifies to access the online service). The user may attempt to access the online service in a manner similar to that described with respect to302above. Upon receiving the second request to access the online service, the third-party server may attempt to retrieve a session token from the user device at316. In some embodiments, the user device may retrieve the session token from its memory based on a mapping between the session token and the online service/third-party server. Upon receiving the session token from the user device, the third-party server may proceed to provide the user device with access to the online service at318in a manner similar to that described above with respect to312. In addition, the third-party server may initiate a request for a redetermination of access rights at320. In some embodiments, such a request may be initiated upon determining that some predetermined amount of time has passed since the user's status with respect to access rights for the online service was last verified. For example, such a request may only be initiated upon determining that 30 days have passed since the last time that the user's access rights were verified with respect to the online service. While the request is described here as being initiated by the third-party server, in some embodiments the request may be initiated by the user device. For example, a mobile application associated with the online service that is installed upon the user device may initiate such a request. The request may be provided to the core network, which then routes the request to the session management platform at322. Upon receiving the request, the session management platform may determine a current status of access rights for a user with respect to the online service. To do this, the session management platform may retrieve information associated with access rights requirements as well as information associated with an account of the user. The session management platform (e.g., via an access determination engine212) then identifies one or more conditions to be met within the access rights requirements (e.g., account type, payment status, usage limits, etc.). Relevant portions of the retrieved account information are then compared against those conditions to determine to what extent the conditions are satisfied by the account information. The session management platform then makes a determination as to a current status of access rights for the user with respect to the online service. In some embodiments, such a determination is made on a device-by-device basis for each of a number of user devices determined to be associated with the user. For example, a user may have access rights to an online service for only a subset of a set of devices associated with that user. If a determination is made that the user no longer has a right to access the online service, the session management platform may update the session token at326. 
In some cases, this may comprise generating a set of instructions that, when executed by the user device, will cause the session token to be removed from the memory of that user device. In some embodiments, the session management platform may provide the instructions to the core network, which may then forward the instructions to the user device. In some cases, the instructions may be provided to the user device within an update (e.g., a software update) provided in a push notification. The user device, upon receiving the instructions, may delete the session token from its local memory. At a subsequent time, the user may once more attempt to access the online service at328. Such an attempt may be made in a manner similar to that described above with respect to302and314. At330, the user device once more initiates access to the online service by establishing a connection with the third-party server. Once more, the third-party server attempts to retrieve the session token from the user device. However, if the session token has been removed from the memory of the user device at326, then no such session token would be available and the third-party server would then proceed to authenticate the user once more at332in a manner similar to that described above with respect to306. In this way, the user is no longer able to use an outdated session token to continue to use an online service for which he or she no longer qualifies. Instead, the user is forced to provide login credentials associated with an account that qualifies to use the online service. FIG.4depicts a flow diagram showing an example process flow for forcing reauthentication of a user of online services upon detecting a change in account status in accordance with embodiments. The process400comprises interactions between various components of the architecture100described with respect toFIG.1. More particularly, the process400comprises interactions between a user device104, a core network108of a wireless carrier network, and a session management platform124of the wireless carrier network. At402of the process400, the core network of the wireless carrier may detect one or more changes made to an account of a user. For example, a user may cancel some set of services associated with his or her account, change a level of service or plan, add or remove a user device, or make any other suitable change. Upon detecting such a change, the core network of the wireless carrier may initiate a request to the session management platform at404. It should be noted that such a request, in this scenario, may be initiated independent of any attempt to access an online service by the user. At406of the process400, the session management platform may identify a number of online services currently associated with an account of the user. In some embodiments, this number of online services may be compared to a previously generated list of online services for the user to identify online services that have been added or canceled since the previously generated list was generated. In some embodiments, the session management platform may further identify a number of third-party service providers associated with the number of online services. At408of the process400, the session management platform determines, for each service associated with the user, a level of access rights for that service. To do this, the session management platform may retrieve a set of conditions associated with access rights for the respective online service. 
The session management platform may then compare information associated with the user's account to the one or more conditions to determine whether that set of conditions has been satisfied. For example, a set of conditions may dictate that in order to have access rights to a particular online service, a user must maintain a minimum status with the wireless carrier network, live within a particular region, and use a particular model (or set of models) of user device. In this example, the session management platform may determine that the user's address has changed, and the user no longer lives within the particular region required by the set of conditions. Accordingly, the session management platform may determine that the user does not have access rights to the online service in this example. Upon identifying services that have been canceled at406, the session management platform may determine that the user has no access rights with respect to those services. At410of the process400, the session management platform may identify a number of user devices associated with the user. For example, the user may have a service plan with the wireless carrier network that is associated with four different mobile phones. In another example, the user may have previously accessed at least one online service using his or her account from both a mobile phone and a tablet computer. In this example, both the mobile phone and tablet computer may be identified. It should be noted that a number of user devices associated with a user may be identified prior to determining access rights. In some embodiments, access rights may be determined on a device-by-device basis for each of the identified devices. At412of the process400, the session management platform may generate instructions that will cause a session token to be removed from one or more of the identified user devices. In some embodiments, this may comprise identifying a location in memory and/or a naming convention used to store session tokens associated with a particular online service as well as instructions associated with a particular type (e.g., model) of user device/operating system. In some embodiments, such instructions may include methods or functions within a library of such methods that are compatible with a particular type of user device. In some embodiments, the session management platform may maintain a library of information that pertains to session token storage for a number of online services. In some embodiments, the session management platform may request a location/name of a session token for a particular user device from a provider of the online service (e.g., a third-party server). At414of the process400, the session management platform transmits the generated instructions to the core network108to be provided to at least one of the identified user devices. The core network then relays the generated instructions to the user device at416. In some embodiments, the instructions may be relayed to the user device via a mobile data channel, such as a mobile communication channel that uses a long-term evolution (LTE) standard. In some embodiments, the generated instructions may be provided to the user device via a push message. In some cases, the generated instructions may be provided as a software update to be automatically executed by the user device. At418of the process400, the user device, upon receiving the generated instructions, is caused to locate the session token in its memory (if present) and delete it. 
In some embodiments, multiple session tokens may be removed from the memory of the user device via a single set of generated instructions. FIG.5depicts a flow diagram showing an example process flow for forcing authentication, or re-authentication, of a user of an online service in accordance with embodiments. The process500may be performed by a computing device within a wireless carrier network. For example, the process500may be performed by a server that performs the functionality attributed to the session management platform124as described herein. The process500is illustrated as a collection of blocks in a logical flow chart, which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions may include routines, code segments, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement the process. At502, the process500comprises receiving a request to determine a current status of access rights with respect to a user and at least one online service. In some embodiments, the request is received from a mobile application installed upon the at least one user device. In some embodiments, the request is received from a third-party provider of the online service. In some cases, the request is received from the third-party provider of the online service upon a determination that some predetermined amount of time has passed since a latest request to determine access rights was submitted. In some embodiments, the request is received from a core network upon detecting a change in an account associated with the user. At504, the process500comprises determining a current status of access rights for the user with respect to the online service. In some embodiments, determining a current status of access rights for the user with respect to the online service comprises retrieving one or more conditions associated with the access rights and determining an extent to which information associated with the user satisfies the one or more conditions. In some cases, the information associated with the user comprises information retrieved from an account maintained by the user with the wireless carrier network. At506, the process500comprises identifying at least one user device associated with the user. In some embodiments, the at least one user device comprises multiple user devices associated with the user. The multiple user devices may be identified by virtue of having been used to access the online service in the past. At508, the process500comprises generating programmatic instructions to cause the identified user device to delete a session token. In some embodiments, the session token is associated with a set of communications between the user device and the third-party provider of the online service. The session token may be a string of random characters that is mapped to the set of communications. The session token may be associated with a previous authentication by a user such that the session token enables access to the online service without authentication. 
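As one possible rendering of the instruction-generation step at508(and at412above), the Python sketch below assembles a device-specific token-removal payload from a per-service registry of naming conventions and storage locations. The registry contents, field names, and instruction format are assumptions for illustration; actual conventions would come from the online service provider as described above.

def build_reset_instructions(device: dict, service: str, token_registry: dict) -> dict:
    # Look up the naming convention and storage location recorded for this
    # online service, then shape a payload for the given device type and OS.
    convention = token_registry[service]
    return {
        "device_type": device["model"],
        "os": device["os"],
        "action": "delete_token",
        "token_name": convention["name_pattern"].format(service=service),
        "token_location": convention["storage_path"],
    }

# Hypothetical registry entry and device record.
token_registry = {"video-stream": {"name_pattern": "{service}_session",
                                   "storage_path": "/secure/tokens"}}
device = {"model": "PhoneX", "os": "ExampleOS 13"}
print(build_reset_instructions(device, "video-stream", token_registry))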
At510, the process500comprises providing the generated instructions to the user device, such that the user device is caused to execute those instructions. In some embodiments, the instructions are provided to the at least one user device via a mobile data channel. For example, the instructions may be provided via a mobile communication channel that uses a long-term evolution (LTE) standard. In some embodiments, the instructions are provided to the at least one user device within a push message as a software update. Each of the followingFIGS.6-9is provided by way of illustrating types of account changes that may trigger a request to the session management platform as described herein. Particularly, each ofFIGS.6-9depicts scenarios in which an account maintained by a wireless carrier network is altered in some fashion. More particularly, each of the examples depicts a case in which an account with the wireless carrier includes four “lines,” each of which represents a separate right to access certain telecommunications functions of the wireless carrier network. Each of the examples depicts a change made to a first account having four lines as well as access to an online service for each of its four lines. In each of the examples, each of the four lines may be associated with a separate login for the online service at the outset of the respective example. FIG.6depicts a first illustrative example of an account change that may result in initiation of a request to the session management platform as described elsewhere. In this example, a first account is depicted as including four lines (Line 1, Line 2, Line 3, and Line 4). The first account is also enrolled with an online service with respect to each of its four lines. Each of the four lines includes a separate login to the online service (Login 1, Login 2, Login 3, and Login 4). Each of the four lines may be assigned to a different user device. Each of the four lines is additionally accompanied by a status indicator602that indicates whether a session token is currently included within a memory of the user device associated with that line. In the illustrated example, a check mark indicates that a session token is included in a memory of the user device such that a user is not required to re-authenticate upon an attempt to access the online service using the respective login for that line. In contrast, an “X” indicates that a session token is not included in a memory of the user device such that a user would be required to re-authenticate upon an attempt to access the online service using the respective login for that line. In this first illustrative example, an owner of the first account may remove one of the lines included on the account (e.g., Line 3 in this example). In some cases, removal of a line may be the result of the line being cancelled. In some cases, removal of a line may be the result of the line being moved from the first account to a different account. In this case, upon determining that Line 3 has been removed from the first account, a request is initiated with the session management platform as described elsewhere. The session management platform, upon receiving that request, may first identify each of the online services in which the first account has rights. Upon determining that the first account is enrolled with the online service, the session management platform may identify the login associated with the removed Line 3 (e.g., Login 3). 
The session management platform may then determine that Login 3 no longer has access rights to the online service. The session management platform may then identify any user devices which have been used to access the online service using Login 3. Once the user device(s) has been identified, the session management platform would then cause the session token associated with the online service on that user device to be deleted. In this current example, where each operator of a line has used its corresponding login credentials to access the online service, this would result in only the user device associated with Line 3 having its session token removed as indicated via status indicator604. FIG.7depicts a second illustrative example of an account change that may result in initiation of a request to the session management platform as described elsewhere. Similar to the first example, a first account is depicted as including four lines (Line 1, Line 2, Line 3, and Line 4). The first account is also enrolled with an online service with respect to each of its four lines. Each of the four lines includes a separate login to the online service (Login 1, Login 2, Login 3, and Login 4). Each of the four lines may be assigned to a different user device. Each of the four lines is additionally accompanied by a status indicator702that indicates whether a session token is currently included within a memory of the user device associated with that line. In this second illustrative example, an owner of the first account may remove one of the lines included on the account (e.g., Line 3 in this example). However, unlike the first example, users for each of the lines in the account in this example may have used login credentials associated with Line 3 (e.g., Login 3) to access the online service. In this case, upon determining that Line 3 has been removed from the first account, a request is initiated with the session management platform as described elsewhere. The session management platform, upon receiving that request, may first identify each of the online services in which the first account has rights. Upon determining that the first account is enrolled with the online service, the session management platform may identify the login associated with the removed Line 3 (e.g., Login 3). The session management platform may then determine that Login 3 no longer has access rights to the online service. The session management platform may then identify any user devices which have been used to access the online service using Login 3. In the present scenario, each of the user devices on the first account has used Login 3 to access the online service. Accordingly, the session management platform would then cause the session token associated with the online service to be deleted from each of the user devices associated with the account. In this current example, the session tokens are removed from each of the user devices of the account, as indicated by status indicator704. This results in forcing each of the users of Line 1, Line 2, and Line 4 to re-authenticate upon next attempting to access the online service, which they would be able to do by providing a valid login (e.g., Login 1, Login 2, or Login 4). FIG.8depicts a third illustrative example of an account change that may result in initiation of a request to the session management platform as described elsewhere. Similar to the first example, a first account is depicted as including four lines (Line 1.1, Line 1.2, Line 1.3, and Line 1.4). 
The first account is also enrolled with an online service with respect to each of its four lines. Each of the four lines includes a separate login to the online service (Login 1.1, Login 1.2, Login 1.3, and Login 1.4). Each of the four lines may be assigned to a different user device. Each of the four lines is additionally accompanied by a status indicator802that indicates whether a session token is currently included within a memory of the user device associated with that line. In this third illustrative example, Line 1.3 may be moved from the first account to a second account (Account 2). In this illustrative example, Account 1 has access rights to the online service and Account 2 does not. In a manner similar to the example ofFIG.6, the session management platform may determine that Login 1.3 no longer has access rights to the online service and may cause a session token in a user device associated with Line 1.3 to be deleted. As indicated by the status indicator804, such a user device is no longer able to access the online service without re-authentication. FIG.9depicts a fourth illustrative example of an account change that may result in initiation of a request to the session management platform as described elsewhere. Similar to the first example, a first account is depicted as including four lines (Line 1.1, Line 1.2, Line 1.3, and Line 1.4). The first account is also enrolled with an online service with respect to each of its four lines. Each of the four lines includes a separate login to the online service (Login 1.1, Login 1.2, Login 1.3, and Login 1.4). Each of the four lines may be assigned to a different user device. Each of the four lines is additionally accompanied by a status indicator902that indicates whether a session token is currently included within a memory of the user device associated with that line. In this fourth illustrative example, Line 1.3 may be moved from the first account to a second account (Account 2). In this illustrative example, Account 1 has a first level of access rights to the online service (Package A) and Account 2 has a different level of access rights to the online service (Package B). In a manner similar to the example ofFIG.8, the session management platform may determine that Login 1.3 no longer has access rights to the online service and may cause a session token in a user device associated with Line 1.3 to be deleted. This would then cause the user of Line 1.3 (now Line 2.3) to be re-authenticated upon the next attempted login to the online service. However, the user is then able to provide his or her new login credentials (Login 2.3) which then grants access to the online service. A new session token can then be stored in the memory of the user device for Line 2.3 that allows access to the online service without re-authentication. As indicated by the status indicator904, such a user device is once more able to access the online service without re-authentication. It should be noted that the examples provided above are merely illustrative in nature and are not intended to be limiting. One skilled in the art would recognize that the scenarios presented in the above examples are not the only scenarios that could be implemented in accordance with embodiments as described herein.
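To make the contrast between the FIG.6 and FIG.7 scenarios concrete, the short Python sketch below computes which devices must have their session tokens cleared when a line's login is revoked. The mapping of devices to the logins they used is a hypothetical stand-in for the account data described above.

def devices_to_reset(revoked_login: str, usage_map: dict) -> list:
    # A device needs its token cleared if it accessed the service with the revoked login.
    return [dev for dev, login in usage_map.items() if login == revoked_login]

# FIG. 6: each device used its own line's login, so only Line 3's device resets.
fig6 = {"dev1": "Login 1", "dev2": "Login 2", "dev3": "Login 3", "dev4": "Login 4"}
print(devices_to_reset("Login 3", fig6))   # ['dev3']

# FIG. 7: every device used Login 3, so all four devices must re-authenticate.
fig7 = {"dev1": "Login 3", "dev2": "Login 3", "dev3": "Login 3", "dev4": "Login 3"}
print(devices_to_reset("Login 3", fig7))   # ['dev1', 'dev2', 'dev3', 'dev4']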
DESCRIPTION OF EXAMPLE EMBODIMENTS
Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure can be references to the same embodiment or any embodiment; and, such references mean at least one of the embodiments. Reference to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any example term. Likewise, the disclosure is not limited to various embodiments given in this specification. Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions, will control. Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein. 
Overview
The presently claimed disclosure is directed to methods that may be implemented at a computer. Methods and systems consistent with the present disclosure may include extending protocols associated with authenticating client (i.e. supplicant) devices and with authorizing those supplicant devices to access a wireless network. These methods may include sending data relating to the failure of an authentication and/or an authorization process to a supplicant device attempting to access a wireless network. Methods discussed within may include securely sending failure codes or reasons to a supplicant device that identify why an authentication or authorization process failed. These methods may include sending messages between a supplicant device, an authenticator device, and an authentication and authorization server. Functions of the authenticator device may be implemented at an access point of a wireless network. After a first failure, the supplicant device may be able to access the wireless network after a reason or code of that failure has been reported to the supplicant device. The presently claimed technology may include a computer implemented method, a system, or a non-transitory computer readable storage medium that improves the efficiency of a wireless network. Techniques presented herein define new reasons for an Identity Provider (IdP) rejecting a supplicant in the context of a Federation. Further defined herein are three methods for the rejection reason to be transmitted to a user equipment (UE), thus preventing the UE from retrying profiles that would always fail, and thus allowing the UE to better arbitrate between retry, wait, switch profile, or switch Service Set Identifier (SSID). Techniques discussed herein may be inserted into Wi-Fi Alliance Passpoint and other applicable 802.11 standards. The supplicant may be a computing device that attempts to join a wireless network. In a first embodiment, a method of the present technology includes receiving at a computer an Extensible Authentication Protocol (EAP) request that was generated at a supplicant device when that supplicant device is attempting to access network services via a network environment. This method may also include the computer identifying that the supplicant has failed either or both authentication for the network environment and authorization for the network environment that were initiated based on receipt of the EAP request by the computer. After either or both the authentication or authorization failure is identified, the computer may facilitate transmission of a message to the supplicant. This message may include a failure indication that may include one or more reasons why the either or both of the authentication or authorization failure occurred. In a second embodiment, a system of the present disclosure includes one or more processors and at least one computer-readable storage medium that stores instructions executable by the one or more processors to perform operations that include receiving an EAP request that was generated at a supplicant device when that supplicant device is attempting to access network services via a network environment. The execution of the instructions by the one or more processors may also result in identifying that the supplicant has failed either or both authentication for the network environment and authorization for the network environment that were initiated based on receipt of the EAP request. 
After either or both the authentication or authorization failure is identified, the transmission of a message to the supplicant may be facilitated. This message may include a failure indication that may identify one or more reasons why the either or both of the authentication or authorization failure occurred. In a third embodiment, a method consistent with the present disclosure may be implemented by a non-transitory computer readable storage medium that stores instructions that when executed by a processor cause the processor to perform operations that include receiving an EAP request that was generated at a supplicant device when that supplicant device is attempting to access network services via a network environment. The execution of the instructions by the processor may also result in identifying that the supplicant has failed either or both authentication for the network environment and authorization for the network environment that were initiated based on receipt of the EAP request. After either or both the authentication or authorization failure is identified, the transmission of a message to the supplicant may be facilitated. This message may include a failure indication that may identify one or more reasons why the either or both of the authentication or authorization failure occurred.
Description
In a standard Wi-Fi 802.1X network and in a standard offload to a Passpoint network, the local venue (e.g., an Access Network Provider (ANP)) connects to a single Identity Provider (IdP) source, typically allowing a single authentication and authorization method. Users attempting to connect with a pre-set profile are expected to succeed unless the connection is a first attempt and the exchange encounters a configuration issue (e.g., wrong password, etc.). As such, authentication and authorization failures have limited perimeters in current deployments. As the exchange between a device and an Authentication server partially occurs over the air, before encryption keys have been agreed upon, the result of the failure is the authenticator (e.g., an access point (AP)) sending an EAP Failure message. To avoid providing too much usable information to a potential attacker, the EAP message (as per Request For Comments (RFC) 3748) does not contain any reason code. The result of such current operations is that an end device does not receive through the EAP message detailed information about the reason for the failure. This structure collapses in the context of a federation (e.g. OpenRoaming™ (OR)) in which a device may have dozens of possible profiles for a given network. Yet, upon receiving an EAP Failure, the device is unable to determine if the rejection comes from a timeout, another temporary failure, an unsuitability of that profile (yet valid in other networks) with that ANP, a lack of service subscription, insufficient credit, e.g. if the service is chargeable at that time, or other applicable authorization or accounting related issues. The result is that the device has no choice but to treat the failure the same way it treats timeouts or other failures: by immediately retrying. The outcome is that the device endlessly retries profiles for which credentials are invalid, or for a service not authorized, and never switches to other (possible but lower in the device priority list) profiles, thereby consuming airtime and wireless network bandwidth with no favorable outcome. 
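As a hedged sketch of the improved client behavior this disclosure argues for, the Python fragment below shows how a device could arbitrate between retrying, waiting, and switching profiles once a failure reason is available. The cause codes anticipate those defined in Part A below, and the grouping into permanent and temporary causes is an illustrative assumption rather than a normative mapping.

from typing import Optional

# Assumed grouping of Part A cause codes: "03"/"04" mean the profile cannot
# succeed at this ANP, while "06"/"07" may clear up over time.
PERMANENT = {"03", "04"}
TEMPORARY = {"06", "07"}

def arbitrate(cause_code: Optional[str]) -> str:
    if cause_code is None:
        return "retry"            # legacy EAP-Failure with no reason: blind retry
    if cause_code in PERMANENT:
        return "switch_profile"   # stop consuming airtime on a profile that cannot succeed
    if cause_code in TEMPORARY:
        return "wait"             # back off, then try the same profile again
    return "retry"                # e.g. "02": failure before authentication

print(arbitrate(None), arbitrate("04"), arbitrate("07"))  # retry switch_profile wait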
Another issue is that the reasons for failures (in the context of a federation) may largely span beyond the simple ‘wrong password’ failures. The relations between an IdP, an ANP and a user may be complex, and failures may occur because of lack of mutual approval between the ANP and the IdP, lack of an acceptable Service Level Agreement (SLA) in this ANP (from the IdP perspective, possibly paying for the Wi-Fi offload) and much more. Thus, there is a need for a method that can expand the causes for EAP failures to accommodate federation use cases, but also that can securely provide the failure reasons to the end device, so that the device can decide on a better course of action (e.g., wait, switch to another profile, request a different SLA, etc.). FIG.1is a message sequence diagram for a call flow illustrating example details that may be associated with authentication and authorization failures in an EAP framework. Authentication, as used herein, includes verifying an identity of a client attempting to access a network. Authorization, as used herein, includes granting the client access to one or more network services through the network. In the context of a Wi-Fi network, several protocols can be encapsulated into the EAP exchange framework (e.g., EAP-TLS [EAP-Transport Layer Security], EAP-TTLS [EAP-Tunneled TLS], EAP-SIM/AKA/AKA′ [EAP-Subscriber Identity Module]/[EAP-Authentication and Key Agreement]/[Improved EAP-AKA]). At the end of the authentication phase (that may be successful), the Authentication, Authorization and Accounting (AAA) server proceeds through the authorization examination. FIG.1illustrates a series of communications that are sent between a supplicant computing device110, an authenticator device150, and an AAA server190. Here the supplicant device110may be a computing device of any sort (e.g. a desktop computer, a notebook computer, a tablet, or a cell phone) that sends an EAP message120to authenticator device150. This message may be an EAP tunneled transport layer security (EAP-TTLS) message. Authenticator150may then send a message130(i.e. an EAP-TTLS message) to AAA server190after which the AAA server190performs an authentication function. Upon a failure, the AAA server returns a Remote Authentication Dial-In User Service (RADIUS) Access-Reject message140, accompanied with an EAP-Failure message (protocol-specific variations are examined below). The authenticator150then sends (over the air, thus over an unprotected medium) the EAP-Failure message160to the supplicant so as to close the connection. Conventionally, EAP failure message160includes no reason that identifies why the authentication process failed. The supplicant110thus does not learn through the EAP process the real reason for the failure (and, in current usage, ends up simply retrying ad infinitum). Techniques presented herein define new reasons for an IdP rejecting a supplicant in the context of a Federation. Further defined herein are three methods for the rejection reason to be transmitted to a UE, thus preventing the UE from retrying profiles that would always fail, and thus allowing the UE to better arbitrate between retry, wait, switch profile, or switch SSID. Consider various example details associated with the methods presented herein, which may be discussed as follows: Part A. 
Augment the failure reasons that can cause a failure with a set of new reasons that are applicable to authentication and/or authorization. Part B. Presents three methods to provide the failure reason to the supplicant, so the supplicant can decide whether the best course of action is to switch to the next profile, wait, retry, or take another action. Part A. In a federation environment, with multiple possible “Identity Providers” (IdPs), there may be loose coupling between a Wi-Fi venue owner and an IdP. Thus, choreography is more complex than with a single direct Hotspot-SP relationship. As such, there is a need for an augmented set of novel reasons for failures that can be passed from the IdP/AAA to the ANP/authenticator and to the supplicant (as discussed in Part B, below). Thus, the following additional failure reasons can be defined:

notification-request-data = [displayable-string] %x00 [network-info]
displayable-string = *CHAR
network-info = "Failure-Cause=" cause-code
cause-code = "01" ; authentication failed
cause-code =/ "02" ; failure before authentication
cause-code =/ "03" ; authorization rejected - no service subscription
cause-code =/ "04" ; authorization rejected - roaming not allowed in this network
cause-code =/ "05" ; authorization rejected - QoS (quality of service) not acceptable
cause-code =/ "06" ; authorization rejected - no credit
cause-code =/ "07" ; authorization rejected - temporary denial

The format can naturally be adapted to the structure of the proposal below (e.g., integrated into Transport Layer Security (TLS) Alert messages having a format following the current Alert message structure). Further, other cause codes can be envisioned. Part B, Method 1. EAP Failures do not include reason codes, as an eavesdropper/attacker can use this information to collect information about the client. In a network implementing Pre-Association Security Negotiation (PASN) or another form of pre-association security (e.g. a form of non-robust association security), Method 1 defines a mechanism that allows the EAP Authenticator to provide an EAP peer with a rejection cause hint, independently of EAP method. Opportunistic Wireless Encryption (OWE) allows encrypted communications to be set up between devices without using an authentication process, for example, by use of a Diffie-Hellman key exchange. This information is sent to the peer in an EAP-Notification/Request message by appending it after the displayable message. The message can include a NUL character and/or other applicable character(s) after which the information can be appended. The EAP authenticator may send a failure cause hint to the peer in an EAP-Notification/Request message. The reception of a failure cause hint is not to be used by an EAP peer to infer that an EAP authentication has failed. If an EAP peer receives an EAP Success code after having received a failure cause hint, then it is to ignore the received failure cause information. The failure cause hint information is placed after the displayable string and a NUL character in the EAP-Notification/Request. The above (in Part A) Augmented Backus-Naur Form (ABNF) [RFC4234] defines the Failure-Cause attribute for presenting the failure cause hint information. The attribute's value consists of a cause-code. The notification hint is sent before the EAP Failure, over the protected link to the UE. Because the link is protected (PASN/OWE), an eavesdropper cannot use the rejection information to glean information about the supplicant. Various example details associated with Method 1 are illustrated with reference to the call flow ofFIG.2. 
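A minimal Python sketch of the hint layout defined by the ABNF above follows: a displayable string, a NUL separator, then the Failure-Cause attribute. The byte-level interpretation is an assumption drawn from the grammar, not a normative wire format.

from typing import Optional, Tuple

PREFIX = b"Failure-Cause="

def encode_hint(displayable: str, cause_code: str) -> bytes:
    # [displayable-string] %x00 [network-info] per the ABNF above
    return displayable.encode() + b"\x00" + PREFIX + cause_code.encode()

def decode_hint(data: bytes) -> Tuple[str, Optional[str]]:
    # Split an EAP-Notification payload into (displayable string, cause code).
    displayable, _, info = data.partition(b"\x00")
    cause = info[len(PREFIX):].decode() if info.startswith(PREFIX) else None
    return displayable.decode(), cause

msg = encode_hint("Rejected by identity provider", "03")
print(decode_hint(msg))  # ('Rejected by identity provider', '03')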
FIG.2is a message sequence diagram for a call flow illustrating example details that may be associated with an augmentation method for EAP failures, according to an example embodiment. LikeFIG.1,FIG.2also includes a supplicant device210, an authenticator250, and an AAA server290. Here inFIG.2, however, the messages passed between supplicant210, authenticator250, and AAA server290are slightly different from the messages discussed with respect toFIG.1. A first set of messages220establishes opportunistic wireless encrypted communications between supplicant210and authenticator device250. Next, EAP-TTLS message230may be sent to authenticator250and EAP-TTLS message240may be sent to AAA server290. After the AAA server attempts to authenticate supplicant210, message260may be sent to authenticator250.FIG.2illustrates that this message is a (RADIUS) access-reject message that may be accompanied by text and an EAP failure message. After authenticator250receives message260, authenticator250may send message270that includes an EAP notification message and the text of message260. Message270includes an authentication failure “hint” (i.e. information that the supplicant can use to potentially identify a reason why an authentication failed). Since message270is encrypted, an eavesdropper is unable to see and interpret the information included in message270. After message270is sent, authenticator250sends EAP failure message280to supplicant210. By receiving the failure “hint,” the supplicant210may attempt to access the wireless network again. At this time supplicant210may provide information in subsequent messages that is different from information associated with the previously failed authentication process. Differences in this second attempt to access a wireless network may be based on the information that the failure “hint” provided to the supplicant210. Part B, Method 2. In cases where Method 1 is not suitable (e.g., no OWE/PASN), Method 2 can include augmenting the Passpoint specification, individual RFCs: https://tools.ietf.org/html/rfc3748; https://tools.ietf.org/html/rfc8110; https://tools.ietf.org/html/rfc5216; https://tools.ietf.org/html/rfc5246; https://tools.ietf.org/html/rfc5281; https://tools.ietf.org/html/draft-rfced-info-zorn-01; https://tools.ietf.org/html/rfc2548; and https://tools.ietf.org/html/rfc4186, as well as 802.11 (9.4.1.7), namely EAP, OWE, EAP-TLS, EAP-TTLS and EAP-SIM/AKA, to insert a failure reason into the existing EAP choreography. In particular, RFC 5246 Appendix A.3 can be augmented to increase the Alert message perimeter and insert the reason codes of Part A. As per RFC 5246, the AAA server can now send an EAP-TLS Request message to the supplicant (via a protected tunnel) of the Alert type. The alert indicates one or more of the augmented failure reasons. This allows the supplicant to learn not only new reasons caused by authentication issues, but also failures due to authorization issues between the IdP and the ANP. The supplicant replies with an empty EAP-TLS response to acknowledge the failure and its reason. The AAA server then sends the Access-Reject+EAP Failure to the authenticator, which relays the EAP-Failure message to the supplicant as per current usage. Various example details associated with such augmenting for Method 2 involving RFC 5246 are illustrated with reference to the call flow ofFIG.3. 
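The Python sketch below walks through the Method 2 server-side sequence just described: an Alert carrying an augmented reason travels through the protected tunnel, the supplicant acknowledges with an empty response, and only then does the AAA emit the Access-Reject plus an uninformative EAP-Failure. The FakeTunnel and FakeRadius classes are stand-ins for illustration; no real RADIUS or TLS library API is implied.

class FakeTunnel:
    def __init__(self):
        self.sent = []
    def send_alert(self, level: str, description: str) -> None:
        self.sent.append((level, description))  # carried inside the protected tunnel
    def receive(self) -> bytes:
        return b""                              # supplicant acks with an empty response

class FakeRadius:
    def send_access_reject(self, with_eap_failure: bool) -> None:
        self.rejected = with_eap_failure        # outer EAP-Failure still carries no reason

def reject_with_reason(tunnel, radius, cause_code: str) -> None:
    tunnel.send_alert(level="fatal", description=cause_code)
    if tunnel.receive() == b"":                 # empty reply acknowledges the alert
        radius.send_access_reject(with_eap_failure=True)

tunnel, radius = FakeTunnel(), FakeRadius()
reject_with_reason(tunnel, radius, "04")        # roaming not allowed in this network
print(tunnel.sent, radius.rejected)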
FIG.3is another message sequence diagram for a call flow illustrating example details that may be associated with an augmentation method for EAP failures, according to an example embodiment.FIG.3includes supplicant310, authenticator350, and authentication and authorization (AAA) server390that use yet another sequence of messages when supplicant310attempts to access a wireless network. Initially, message320is communicated between supplicant310and authenticator350and then authenticator350sends message330to authentication server390such that authentication server390can perform functions associated with authenticating supplicant310and potentially authorizing supplicant310to access a wireless network. Messages320and330may be EAP-TTLS messages that initiate an authentication process at the AAA server. A secure tunnel may then be created that may be used to send secure messages between AAA server390and supplicant310. Message340may be sent via this secure tunnel to supplicant310and message340may include an alert that identifies a failure code or reason why an authentication of supplicant310failed. As mentioned above, supplicant310may send a reply (or response) message360that acts as an acknowledgement of the receipt of message340. Reply message360may not include data. Message370may then be sent to authenticator350and authenticator350may then send message380to supplicant310. Message370may be a (RADIUS) access-reject message and message380may be an EAP failure message sent to supplicant310that includes no failure code/reason. Supplicant310may then evaluate the failure code or reason included in message340and then attempt to connect to the wireless network again. Here again, supplicant310may modify data sent to AAA server390in a second attempt to access the wireless network. Exemplary data/codes that may be included in alert message340may include (following the RFC 5246 Alert message structure):

    enum { warning(1), fatal(2), (255) } AlertLevel;
    enum {
        close_notify(0), unexpected_message(10), bad_record_mac(20),
        decryption_failed_RESERVED(21), record_overflow(22),
        decompression_failure(30), handshake_failure(40),
        no_certificate_RESERVED(41), bad_certificate(42),
        unsupported_certificate(43), certificate_revoked(44),
        certificate_expired(45), certificate_unknown(46),
        illegal_parameter(47), unknown_ca(48), access_denied(49),
        decode_error(50), decrypt_error(51),
        export_restriction_RESERVED(60), protocol_version(70),
        insufficient_security(71), internal_error(80), user_canceled(90),
        no_renegotiation(100), unsupported_extension(110),
        /* new */ (255)
    } AlertDescription;

Additionally, RFC 5281 and draft-rfced-info-zorn-01 can be augmented with new MS-CHAP error attributes. Thus, using EAP-TTLS, upon encountering one of the errors listed in Part A above, the AAA returns an EAP-Request (via a protected tunnel) of type access-challenge, that includes one of the errors in Part A as the error attribute Attribute-Value-Pair (AVP). The supplicant is now informed of the reasons of the failure (authentication or authorization). The AAA then sends the Access-Reject+EAP Failure to the authenticator, which relays the EAP-Failure message to the supplicant as per current usage. Various example details associated with such augmenting for Method 2 involving RFC 5281 and draft-rfced-info-zorn-01 are illustrated with reference to the call flow ofFIG.4.
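To make the TLS Alert variant of Method 2 concrete, the sketch below packs a two-byte TLS Alert body (level, description) as defined by RFC 5246 and maps the Part A cause codes onto new AlertDescription values. The numeric values chosen here are assumptions for illustration only; an actual deployment would require the values to be standardized within the AlertDescription space.

    # Sketch: carrying the Part A cause codes as new (hypothetical)
    # TLS AlertDescription values in an EAP-TLS Request of the Alert type.
    ALERT_LEVEL_FATAL = 2

    NEW_ALERT_DESCRIPTIONS = {
        "authentication_failed": 200,               # hypothetical value
        "authorization_no_subscription": 201,       # hypothetical value
        "authorization_roaming_not_allowed": 202,   # hypothetical value
        "authorization_qos_not_acceptable": 203,    # hypothetical value
        "authorization_no_credit": 204,             # hypothetical value
        "authorization_temporary_denial": 205,      # hypothetical value
    }

    def build_tls_alert(description: int) -> bytes:
        """Build a two-byte TLS Alert body (level, description) per RFC 5246."""
        return bytes([ALERT_LEVEL_FATAL, description])

    alert = build_tls_alert(NEW_ALERT_DESCRIPTIONS["authorization_no_credit"])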
FIG.4is another message sequence diagram for a call flow illustrating example details that may be associated with an augmentation method for EAP failures, according to an example embodiment.FIG.4includes supplicant410, authenticator450, and authentication and authorization (AAA) server490that use yet another sequence of messages when supplicant410attempts to access a wireless network. Initially, message420is communicated between supplicant410and authenticator450and then authenticator450sends message430to authentication server490such that authentication server490can perform functions associated with authenticating supplicant410and potentially authorizing supplicant410to access a wireless network. Messages420and430may be EAP-TTLS messages that initiate an authentication process at the AAA server. A secure tunnel may then be created that may be used to send secure messages between AAA server490and supplicant410. Message440may be sent via this secure tunnel to supplicant410and message440may include an EAP-TTLS request that includes an access challenge, error data, and potentially an attribute value pair mentioned above. Message440may inform supplicant410of a reason why an authentication or authorization process failed at AAA server490. Here again, data that identifies the failure reason may be or include a code. AAA server490may then send message460that may be a (RADIUS) access-reject message and message470may be an EAP failure message sent to supplicant410that includes no failure code/reason. Supplicant410may then evaluate the failure code or reason included in message440and then attempt to connect to the wireless network again. Here again, supplicant410may modify data sent to AAA server490in a second attempt to access the wireless network. The attribute data sent in message440may include exemplary error attribute data of: 646 Error_Restricted_Logon_Hours; 647 Error_Acct_Disabled; 648 Error_Password_Expired; 649 Error_No_Dialin_Permission; 691 Error_Authentication_Failure; 709 Error_Changing_Password, or some new or other error code, description, or classification. It is noted that EAP-SIM/AKA/AKA′ do not include reason code mechanisms or exchanges relevant to authorization, and that an authentication failure only results in an AT_NOTIFICATION failure. As such, RFC 4186/RFC 4187/RFC 5448 can be augmented to insert the failure reason within the format of the Access-Challenge message, here as well containing the Error attribute. Further, the usage of the EAP Request structure is allowed with the Notification payload that contains the reject reason. The supplicant is now informed of the reasons of the failure (authentication or authorization). The AAA then sends the Access-Reject+EAP Failure to the authenticator, which relays the EAP-Failure message to the supplicant as per current usage. Various example details associated with such augmenting for Method 2 involving RFC 4186 are illustrated with reference to the call flow ofFIG.5.
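As a sketch of the EAP-SIM/AKA direction described above, the following packs an EAP-Request/SIM/Notification carrying an AT_NOTIFICATION attribute using the RFC 4186 layout (attribute type 12, length in 4-octet units, 2-octet notification code). The augmented Part A reasons would be carried as additional notification codes; the code below reuses an existing RFC 4186 code purely for illustration.

    import struct

    EAP_CODE_REQUEST = 1
    EAP_TYPE_SIM = 18
    SUBTYPE_NOTIFICATION = 12
    AT_NOTIFICATION = 12

    def build_sim_notification(identifier: int, notification_code: int) -> bytes:
        # AT_NOTIFICATION attribute: type, length (1 = 4 octets), 2-octet code
        attr = struct.pack("!BBH", AT_NOTIFICATION, 1, notification_code)
        # EAP-SIM body: Type, Subtype, 2 reserved octets, then attributes
        body = struct.pack("!BBH", EAP_TYPE_SIM, SUBTYPE_NOTIFICATION, 0) + attr
        # EAP header: code, identifier, total length
        header = struct.pack("!BBH", EAP_CODE_REQUEST, identifier, 4 + len(body))
        return header + body

    # RFC 4186 already defines 1031: "User has not subscribed to the requested service"
    pkt = build_sim_notification(identifier=7, notification_code=1031)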
FIG.5is another message sequence diagram for a call flow illustrating example details that may be associated with an augmentation method for EAP failures, according to an example embodiment.FIG.5includes supplicant510, authenticator550, and authentication and authorization (AAA) server590that use yet another sequence of messages when supplicant510attempts to access a wireless network.FIG.5includes a sequence of messages that are very similar to the message sequence ofFIG.4; here, however, the EAP messages may be according to the EAP-SIM/AKA/AKA′ model discussed above. FIG.5includes message520that is communicated between supplicant510and authenticator550and then authenticator550sends message530to authentication server590such that authentication server590can perform functions associated with authenticating supplicant510and potentially authorizing supplicant510to access a wireless network. Messages520and530may be SIM/AKA/AKA′ messages that initiate an authentication process at the AAA server. A secure tunnel may then be created that may be used to send secure messages between AAA server590and supplicant510. Message540may be sent via this secure tunnel to supplicant510and message540may include an EAP request, sent via the tunnel, that includes an access challenge or notification/notice and error notification/notice data. Message540may inform supplicant510of a reason why an authentication or authorization process failed at AAA server590. Here again, data that identifies the failure reason may be or include a code. AAA server590may then send message560that may be a (RADIUS) access-reject message and message570may be an EAP failure message sent to supplicant510that includes no failure code/reason. Supplicant510may then evaluate the failure code or reason included in message540and then attempt to connect to the wireless network again. Here again, supplicant510may modify data sent to AAA server590in a second attempt to access the wireless network.

Part B, Method 3. For Method 3, 802.11 networks are allowed to carry the rejection information. In this embodiment, the EAP exchange concludes as per current usage with an EAP Failure message from the authenticator (without a provided reason). Then the AP, which is also the authenticator (along with the associated wireless LAN controller (WLC)), is allowed to close the connection with a deauthentication or a disassociation message that contains new reason codes (augmenting 802.11-2016, Section 9.4.1.7, Table 9-45, with the new reason codes defined in Part A). Various example details associated with such augmenting for Method 3 are illustrated with reference to the call flow ofFIG.6. FIG.6is another message sequence diagram for a call flow illustrating example details that may be associated with an augmentation method for EAP failures, according to an example embodiment.FIG.6includes supplicant610, authenticator650, and authentication and authorization (AAA) server690that use yet another sequence of messages when supplicant610attempts to access a wireless network.FIG.6includes message620that is communicated between supplicant610and authenticator650, and authenticator650then sends message630to authentication server690such that authentication server690can perform functions associated with authenticating supplicant610and potentially authorizing supplicant610to access a wireless network. Messages620and630may be EAP-TTLS messages that initiate an authentication process at the AAA server. Message640may then be sent from AAA server690to authenticator650.
Message640may be a (RADIUS) access-reject message that identifies that an authentication or authorization process has been rejected. Message640may include a failure reason and EAP failure data. Next, message660may be sent from authenticator650to supplicant610. After message660is sent to supplicant610, message670may be sent to supplicant610from authenticator650. Message670may be a message that closes a communication connection with the supplicant610. As discussed above, this closing of the connection may include a deauthentication or a disassociation message that contains new reason codes (augmenting 802.11-2016, Section 9.4.1.7, Table 9-45) with the new reason codes defined in Part A above, for example. In certain instances, error codes provided to a supplicant computing device may result in messages being provided to a user of that supplicant computing device via a user interface. This may instruct the user to seek out help from internet support staff at a company to help resolve an issue with their computer. For example, an error code indicating that a password of the user had expired may be displayed on a display of the computer along with a message to call the helpdesk at their company. This could then allow an internet support person to update the password of the user. In all the cases discussed with reference to Part B, it is envisioned that the failure may be accompanied by a timer ('do not retry for N seconds', e.g., 3600). In some embodiments, this additional information may be used by the UE to better refine its behavior (e.g., possibly retrying after the timer value, with the preferred profile). As referred to herein, an access point may include any combination of hardware (e.g., communications units, receiver(s), transmitter(s), antenna(s) and/or antenna array(s), processor(s), memory element(s), baseband processor(s) (modems), etc.), controllers (e.g., wireless local area network controllers, etc.), software, logic, and/or any other elements/entities that may facilitate over-the-air RF connections for one or more elements of a system. In various embodiments, a UE may be associated with any user, subscriber, employee, client, customer, electronic device, etc. wishing to initiate a flow in the system and may be inclusive of any device that initiates a communication in the system, such as a computer, an electronic device such as an industrial device, automation device, enterprise device, appliance, Internet of Things (IoT) device, etc., a laptop or electronic notebook, a cellular/Wi-Fi enabled telephone/smart phone, tablet, etc. and/or any other device, component, element, or object capable of initiating voice, audio, video, media, or data exchanges within the system. A UE may include any combination of hardware (e.g., communications units, receiver(s), transmitter(s), antenna(s) and/or antenna array(s), processor(s), memory element(s), baseband processor(s) (modems), etc.), controllers (e.g., wireless local area network controllers, etc.), software, logic, and/or any other elements/entities that may facilitate over-the-air RF connections with one or more access networks/access points. In summary, techniques herein define new reasons for an IdP rejecting a supplicant in the context of a Federation. Further defined are three methods for the rejection reason to be transmitted to a UE, thus avoiding the UE retrying profiles that would always fail, and thus allowing the UE to better arbitrate between retry, wait, switch profile, or switch SSID.
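The arbitration logic summarized above can be sketched as follows, assuming the Part A cause codes and an optional 'do not retry for N seconds' timer. The policy table is illustrative; an actual UE would apply its own profile-selection rules.

    # Sketch: arbitrate between retry, wait, switch profile, or switch SSID
    # based on a received cause code and an optional retry timer.
    RETRY, WAIT, SWITCH_PROFILE, SWITCH_SSID = "retry", "wait", "switch-profile", "switch-ssid"

    POLICY = {
        "01": RETRY,           # authentication failed: credentials may be fixable
        "02": RETRY,           # failure before authentication: may be transient
        "03": SWITCH_PROFILE,  # no service subscription: this profile will always fail
        "04": SWITCH_SSID,     # roaming not allowed in this network
        "05": SWITCH_PROFILE,  # QoS not acceptable
        "06": SWITCH_PROFILE,  # no credit
        "07": WAIT,            # temporary denial: honor the retry timer
    }

    def next_action(cause_code: str, retry_after_s=None):
        """Return (action, seconds-to-wait) for a received cause code."""
        action = POLICY.get(cause_code, RETRY)
        if action == WAIT and retry_after_s is not None:
            return (WAIT, retry_after_s)  # retry only after the timer expires
        return (action, 0)

    assert next_action("03") == (SWITCH_PROFILE, 0)
    assert next_action("07", retry_after_s=3600) == (WAIT, 3600)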
In various embodiments, the techniques presented herein may be inserted into Wi-Fi Alliance Passpoint certification and applicable 802.11 standards. Referring toFIG.7,FIG.7illustrates a hardware block diagram of a computing device700that may perform functions associated with operations discussed herein. In various embodiments, a computing device, such as computing device700or any combination of computing devices700, may be configured as any entity/entities as discussed for the techniques discussed herein in order to perform operations of the various techniques discussed herein. In at least one embodiment, computing device700may include one or more processor(s)702, one or more memory element(s)704, storage706, a bus708, one or more network processor unit(s)710interconnected with one or more network input/output (I/O) interface(s)712, one or more I/O interface(s)714, and control logic720. In various embodiments, instructions associated with logic for computing device700can overlap in any manner and are not limited to the specific allocation of instructions and/or operations described herein. In at least one embodiment, processor(s)702is/are at least one hardware processor configured to execute various tasks, operations, and/or functions for computing device700as described herein according to software and/or instructions configured for computing device700. Processor(s)702(e.g., hardware processor(s)) can execute any type of instructions associated with data to achieve the operations detailed herein. In one example, processor(s)702can transform an element or an article (e.g., data, information) from one state or thing to another state or thing. Any of the potential processing elements, microprocessors, digital signal processors, baseband signal processors, modems, PHYs, controllers, systems, managers, logic, and/or machines described herein can be construed as being encompassed within the broad term 'processor'. In at least one embodiment, memory element(s)704and/or storage706is/are configured to store data, information, software, and/or instructions associated with computing device700, and/or logic configured for memory element(s)704and/or storage706. For example, any logic described herein (e.g., control logic720) can, in various embodiments, be stored for computing device700using any combination of memory element(s)704and/or storage706. Note that in some embodiments, storage706can be consolidated with memory element(s)704(or vice versa), or can overlap/exist in any other suitable manner. In at least one embodiment, bus708can be configured as an interface that enables one or more elements of computing device700to communicate in order to exchange information and/or data. Bus708can be implemented with any architecture designed for passing control, data and/or information between processors, memory elements/storage, peripheral devices, and/or any other hardware and/or software components that may be configured for computing device700. In at least one embodiment, bus708may be implemented as a fast kernel-hosted interconnect, potentially using shared memory between processes (e.g., logic), which can enable efficient communication paths between the processes. In various embodiments, network processor unit(s)710may enable communication between computing device700and other systems, entities, etc., via network I/O interface(s)712to facilitate operations discussed for various embodiments described herein.
In various embodiments, network processor unit(s)710can be configured as a combination of hardware and/or software, such as one or more Ethernet driver(s) and/or controller(s) or interface cards, Fibre Channel (e.g., optical) driver(s) and/or controller(s), and/or other similar network interface driver(s) and/or controller(s) now known or hereafter developed to enable communications between computing device700and other systems, entities, etc. to facilitate operations for various embodiments described herein. In various embodiments, network I/O interface(s)712can be configured as one or more Ethernet port(s), Fibre Channel ports, and/or any other I/O port(s) now known or hereafter developed. Thus, the network processor unit(s)710and/or network I/O interface(s)712may include suitable interfaces for receiving, transmitting, and/or otherwise communicating data and/or information in a network environment. I/O interface(s)714allow for input and output of data and/or information with other entities that may be connected to computing device700. For example, I/O interface(s)714may provide a connection to external devices such as a keyboard, keypad, a touch screen, and/or any other suitable input device now known or hereafter developed. In some instances, external devices can also include portable computer readable (non-transitory) storage media such as database systems, thumb drives, portable optical or magnetic disks, and memory cards. In still other instances, external devices can be a mechanism to display data to a user, such as, for example, a computer monitor, a display screen, or the like. In various embodiments, control logic720can include instructions that, when executed, cause processor(s)702to perform operations, which can include, but not be limited to, providing overall control operations of computing device700; interacting with other entities, systems, etc. described herein; maintaining and/or interacting with stored data, information, parameters, etc. (e.g., memory element(s), storage, data structures, databases, tables, etc.); combinations thereof; and/or the like to facilitate various operations for embodiments described herein. The programs described herein (e.g., control logic720) may be identified based upon application(s) for which they are implemented in a specific embodiment. However, it should be appreciated that any particular program nomenclature herein is used merely for convenience; thus, embodiments herein should not be limited to use(s) solely described in any specific application(s) identified and/or implied by such nomenclature. FIG.8illustrates an example wireless communication network800in which some aspects of the technology can be implemented. The wireless communication network800can form an enterprise wireless network. In turn, the systems and techniques described herein can be utilized in controlling link selection and aggregation across the wireless communication network800and another network. The wireless communication network800includes an Access Point (AP), configured for wireless communication with multiple receivers or client devices (e.g., STA1, STA2, and STA3). It is understood that additional (or fewer) STAs and/or APs could be implemented in network800, without departing from the scope of the technology. The STAs and AP shown inFIG.8can be configured to form a WiFi network. A WiFi network, as used herein, is a network that is formed and maintained in accordance with the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards.
Specifically, the AP and the STAs can wirelessly communicate with each other according to the IEEE 802.11 family of standards to form a WiFi network. The AP ofFIG.8may perform the functions of the authenticator computing devices (e.g.,150,250,350,450,550, and650)ofFIGS.1-6. Each of the client devices (e.g., STA1, STA2, and STA3)ofFIG.8may perform the function of supplicant computing devices (e.g.,110,210,310,410,510, and610)ofFIGS.1-6. In various embodiments, entities as described herein may store data/information in any suitable volatile and/or non-volatile memory item (e.g., magnetic hard disk drive, solid state hard drive, semiconductor storage device, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM), application specific integrated circuit (ASIC), etc.), software, logic (fixed logic, hardware logic, programmable logic, analog logic, digital logic), hardware, and/or in any other suitable component, device, element, and/or object as may be appropriate. Any of the memory items discussed herein should be construed as being encompassed within the broad term 'memory element'. Data/information being tracked and/or sent to one or more entities as discussed herein could be provided in any database, table, register, list, cache, storage, and/or storage structure, all of which can be referenced at any suitable timeframe. Any such storage options may also be included within the broad term 'memory element' as used herein. Note that in certain example implementations, operations as set forth herein may be implemented by logic encoded in one or more tangible media that is capable of storing instructions and/or digital information and may be inclusive of non-transitory tangible media and/or non-transitory computer readable storage media (e.g., embedded logic provided in: an ASIC, digital signal processing (DSP) instructions, software [potentially inclusive of object code and source code], etc.) for execution by one or more processor(s), and/or other similar machine, etc. Generally, memory element(s)704and/or storage706can store data, software, code, instructions (e.g., processor instructions), logic, parameters, combinations thereof, and/or the like used for operations described herein. This includes memory element(s)704and/or storage706being able to store data, software, code, instructions (e.g., processor instructions), logic, parameters, combinations thereof, or the like that are executed to carry out operations in accordance with teachings of the present disclosure. In some instances, software of the present embodiments may be available via a non-transitory computer useable medium (e.g., magnetic or optical mediums, magneto-optic mediums, CD-ROM, DVD, memory devices, etc.) of a stationary or portable program product apparatus, downloadable file(s), file wrapper(s), object(s), package(s), container(s), and/or the like. In some instances, non-transitory computer readable storage media may also be removable. For example, a removable hard drive may be used for memory/storage in some implementations. Other examples may include optical and magnetic disks, thumb drives, and smart cards that can be inserted and/or otherwise connected to a computing device for transfer onto another computer readable storage medium.
Variations and Implementations

Embodiments described herein may include one or more networks, which can represent a series of points and/or network elements of interconnected communication paths for receiving and/or transmitting messages (e.g., packets of information) that propagate through the one or more networks. These network elements offer communicative interfaces that facilitate communications between the network elements. A network can include any number of hardware and/or software elements coupled to (and in communication with) each other through a communication medium. Such networks can include, but are not limited to, any local area network (LAN), virtual LAN (VLAN), wide area network (WAN) (e.g., the Internet), software defined WAN (SD-WAN), wireless local area network (WLAN), wireless wide area (WWA) access network, metropolitan area network (MAN), Intranet, Extranet, virtual private network (VPN), Low Power Network (LPN), Low Power Wide Area Network (LPWAN), Machine to Machine (M2M) network, Internet of Things (IoT) network, Ethernet network/switching system, any other appropriate architecture and/or system that facilitates communications in a network environment, and/or any suitable combination thereof. Networks through which communications propagate can use any suitable technologies for communications including wireless communications (e.g., 4G/5G/nG, IEEE 802.11 (e.g., Wi-Fi®/Wi-Fi6®), IEEE 802.16 (e.g., Worldwide Interoperability for Microwave Access (WiMAX)), Radio-Frequency Identification (RFID), Near Field Communication (NFC), Bluetooth™, mmWave, Ultra-Wideband (UWB), etc.), and/or wired communications (e.g., T1 lines, T3 lines, digital subscriber lines (DSL), Ethernet, Fibre Channel, etc.). Generally, any suitable means of communications may be used such as electric, sound, light, infrared, and/or radio to facilitate communications through one or more networks in accordance with embodiments herein. Communications, interactions, operations, etc. as discussed for various embodiments described herein may be performed among entities that may be directly or indirectly connected utilizing any algorithms, communication protocols, interfaces, etc. (proprietary and/or non-proprietary) that allow for the exchange of data and/or information. In various example implementations, entities for various embodiments described herein can encompass network elements (which can include virtualized network elements, functions, etc.) such as, for example, network appliances, forwarders, routers, servers, switches, gateways, bridges, load balancers, firewalls, processors, modules, radio receivers/transmitters, and/or any other suitable device, component, element, or object operable to exchange information that facilitates or otherwise helps to facilitate various operations in a network environment as described for various embodiments herein. Note that with the examples provided herein, interaction may be described in terms of one, two, three, or four entities. However, this has been done for purposes of clarity, simplicity and example only. The examples provided should not limit the scope or inhibit the broad teachings of systems, networks, etc. described herein as potentially applied to a myriad of other architectures. Communications in a network environment can be referred to herein as 'messages', 'messaging', 'signaling', 'data', 'content', 'objects', 'requests', 'queries', 'responses', 'replies', etc. which may be inclusive of packets.
As referred to herein and in the claims, the term ‘packet’ may be used in a generic sense to include packets, frames, segments, datagrams, and/or any other generic units that may be used to transmit communications in a network environment. Generally, a packet is a formatted unit of data that can contain control or routing information (e.g., source and destination address, source and destination port, etc.) and data, which is also sometimes referred to as a ‘payload’, ‘data payload’, and variations thereof. In some embodiments, control or routing information, management information, or the like can be included in packet fields, such as within header(s) and/or trailer(s) of packets. Internet Protocol (IP) addresses discussed herein and in the claims can include any IP version 4 (IPv4) and/or IP version 6 (IPv6) addresses. To the extent that embodiments presented herein relate to the storage of data, the embodiments may employ any number of any conventional or other databases, data stores or storage structures (e.g., files, databases, data structures, data or other repositories, etc.) to store information. Note that in this Specification, references to various features (e.g., elements, structures, nodes, modules, components, engines, logic, steps, operations, functions, characteristics, etc.) included in ‘one embodiment’, ‘example embodiment’, ‘an embodiment’, ‘another embodiment’, ‘certain embodiments’, ‘some embodiments’, ‘various embodiments’, ‘other embodiments’, ‘alternative embodiment’, and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments. Note also that a module, engine, client, controller, function, logic or the like as used herein in this Specification, can be inclusive of an executable file comprising instructions that can be understood and processed on a server, computer, processor, machine, compute node, combinations thereof, or the like and may further include library modules loaded during execution, object files, system files, hardware logic, software logic, or any other executable modules. It is also noted that the operations and steps described with reference to the preceding figures illustrate only some of the possible scenarios that may be executed by one or more entities discussed herein. Some of these operations may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of the presented concepts. In addition, the timing and sequence of these operations may be altered considerably and still achieve the results taught in this disclosure. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by the embodiments in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the discussed concepts. As used herein, unless expressly stated to the contrary, use of the phrase ‘at least one of’, ‘one or more of’, ‘and/or’, variations thereof, or the like are open-ended expressions that are both conjunctive and disjunctive in operation for any and all possible combination of the associated listed items. 
For example, each of the expressions ‘at least one of X, Y and Z’, ‘at least one of X, Y or Z’, ‘one or more of X, Y and Z’, ‘one or more of X, Y or Z’ and ‘X, Y and/or Z’ can mean any of the following: 1) X, but not Y and not Z; 2) Y, but not X and not Z; 3) Z, but not X and not Y; 4) X and Y, but not Z; 5) X and Z, but not Y; 6) Y and Z, but not X; or 7) X, Y, and Z. Additionally, unless expressly stated to the contrary, the terms ‘first’, ‘second’, ‘third’, etc., are intended to distinguish the particular nouns they modify (e.g., element, condition, node, module, activity, operation, etc.). Unless expressly stated to the contrary, the use of these terms is not intended to indicate any type of order, rank, importance, temporal sequence, or hierarchy of the modified noun. For example, ‘first X’ and ‘second X’ are intended to designate two ‘X’ elements that are not necessarily limited by any order, rank, importance, temporal sequence, or hierarchy of the two elements. Further as referred to herein, ‘at least one of’ and ‘one or more of’ can be represented using the ‘(s)’ nomenclature (e.g., one or more element(s)). One or more advantages described herein are not meant to suggest that any one of the embodiments described herein necessarily provides all of the described advantages or that all the embodiments of the present disclosure necessarily provide any one of the described advantages. Numerous other changes, substitutions, variations, alterations, and/or modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and/or modifications as falling within the scope of the appended claims.
DETAILED DESCRIPTION The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions. A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured. A firewall generally protects networks from unauthorized access while permitting authorized communications to pass through the firewall. A firewall is typically a device, a set of devices, or software executed on a device that provides a firewall function for network access. For example, a firewall can be integrated into operating systems of devices (e.g., computers, smart phones, or other types of network communication capable devices). A firewall can also be integrated into or executed as software applications on various types of devices or security devices, such as computer servers, gateways, network/routing devices (e.g., network routers), or data appliances (e.g., security appliances or other types of special purpose devices). Firewalls typically deny or permit network transmission based on a set of rules. These sets of rules are often referred to as policies (e.g., network policies or network security policies). For example, a firewall can filter inbound traffic by applying a set of rules or policies to prevent unwanted outside traffic from reaching protected devices. A firewall can also filter outbound traffic by applying a set of rules or policies (e.g., allow, block, monitor, notify or log, and/or other actions can be specified in firewall/security rules or firewall/security policies, which can be triggered based on various criteria, such as described herein). A firewall may also apply anti-virus protection, malware detection/prevention, or intrusion protection by applying a set of rules or policies. 
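As a simple illustration of the rule-based filtering described above, the sketch below evaluates traffic against an ordered rule list with a default-deny fallback. The rule fields and actions are generic examples, not any particular vendor's policy schema.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        action: str        # e.g., "allow", "block", "log"
        src: str = "any"
        dst: str = "any"
        protocol: str = "any"

    def evaluate(rules, src, dst, protocol):
        """First-match evaluation over an ordered rule list."""
        for r in rules:
            if (r.src in ("any", src) and r.dst in ("any", dst)
                    and r.protocol in ("any", protocol)):
                return r.action
        return "block"  # default-deny when no rule matches

    policy = [Rule("allow", protocol="https"), Rule("block", protocol="telnet")]
    assert evaluate(policy, "10.0.0.5", "203.0.113.9", "telnet") == "block"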
Security devices (e.g., security appliances, security gateways, security services, and/or other security devices) can include various security functions (e.g., firewall, anti-malware, intrusion prevention/detection, proxy, and/or other security functions), networking functions (e.g., routing, Quality of Service (QoS), workload balancing of network related resources, and/or other networking functions), and/or other functions. For example, routing functions can be based on source information (e.g., source IP address and port), destination information (e.g., destination IP address and port), and protocol information. A basic packet filtering firewall filters network communication traffic by inspecting individual packets transmitted over a network (e.g., packet filtering firewalls or first generation firewalls, which are stateless packet filtering firewalls). Stateless packet filtering firewalls typically inspect the individual packets themselves and apply rules based on the inspected packets (e.g., using a combination of a packet's source and destination address information, protocol information, and a port number). Application firewalls can also perform application layer filtering (e.g., using application layer filtering firewalls or second generation firewalls, which work on the application level of the TCP/IP stack). Application layer filtering firewalls or application firewalls can generally identify certain applications and protocols (e.g., web browsing using HyperText Transfer Protocol (HTTP), a Domain Name System (DNS) request, a file transfer using File Transfer Protocol (FTP), and various other types of applications and other protocols, such as Telnet, DHCP, TCP, UDP, and TFTP (GSS)). For example, application firewalls can block unauthorized protocols that attempt to communicate over a standard port (e.g., an unauthorized/out of policy protocol attempting to sneak through by using a non-standard port for that protocol can generally be identified using application firewalls). Stateful firewalls can also perform stateful-based packet inspection in which each packet is examined within the context of a series of packets associated with that network transmission's flow of packets/packet flow (e.g., stateful firewalls or third generation firewalls). This firewall technique is generally referred to as a stateful packet inspection as it maintains records of all connections passing through the firewall and is able to determine whether a packet is the start of a new connection, a part of an existing connection, or is an invalid packet. For example, the state of a connection can itself be one of the criteria that triggers a rule within a policy. Advanced or next generation firewalls can perform stateless and stateful packet filtering and application layer filtering as discussed above. Next generation firewalls can also perform additional firewall techniques. For example, certain newer firewalls sometimes referred to as advanced or next generation firewalls can also identify users and content. In particular, certain next generation firewalls are expanding the list of applications that these firewalls can automatically identify to thousands of applications. Examples of such next generation firewalls are commercially available from Palo Alto Networks, Inc. (e.g., Palo Alto Networks' PA Series next generation firewalls, Palo Alto Networks' VM Series virtualized next generation firewalls, and CN Series container next generation firewalls). 
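The stateful-inspection idea described above can be sketched as a flow table keyed by the 5-tuple, so that each packet is classified as the start of a new connection, part of an existing connection, or invalid. This is a minimal sketch; a real implementation would also track TCP state transitions and time out idle flows.

    # Minimal sketch of stateful packet inspection: maintain a table of
    # known flows so each packet can be classified against connection state.
    known_flows = set()

    def classify(src_ip, src_port, dst_ip, dst_port, proto, syn):
        flow = (src_ip, src_port, dst_ip, dst_port, proto)
        reverse = (dst_ip, dst_port, src_ip, src_port, proto)
        if flow in known_flows or reverse in known_flows:
            return "existing-connection"
        if syn:  # e.g., a TCP SYN legitimately opens a new flow
            known_flows.add(flow)
            return "new-connection"
        return "invalid"  # mid-stream packet for an unknown flow

    assert classify("10.0.0.5", 40000, "203.0.113.9", 443, "tcp", syn=True) == "new-connection"
    assert classify("203.0.113.9", 443, "10.0.0.5", 40000, "tcp", syn=False) == "existing-connection"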
For example, Palo Alto Networks' next generation firewalls enable enterprises and service providers to identify and control applications, users, and content—not just ports, IP addresses, and packets—using various identification technologies, such as the following: App-ID™ (e.g., App ID) for accurate application identification, User-ID™ (e.g., User ID) for user identification (e.g., by user or user group), and Content-ID™ (e.g., Content ID) for real-time content scanning (e.g., controls web surfing and limits data and file transfers). These identification technologies allow enterprises to securely enable application usage using business-relevant concepts, instead of following the traditional approach offered by traditional port-blocking firewalls. Also, special purpose hardware for next generation firewalls implemented, for example, as dedicated appliances generally provides higher performance levels for application inspection than software executed on general purpose hardware (e.g., such as security appliances provided by Palo Alto Networks, Inc., which utilize dedicated, function specific processing that is tightly integrated with a single-pass software engine to maximize network throughput while minimizing latency for Palo Alto Networks' PA Series next generation firewalls).

Overview of Techniques for Applying Context-Based Security Over Interfaces in NG-RAN Environments and/or O-RAN Environments in Mobile Networks

Next Generation RAN (NG-RAN) architecture is a newly defined radio access network for 5G mobile networks. However, this new radio access network is vulnerable to new threat vectors. As an example, the NG-RAN architecture in mobile networks opens up new security threats over the Xn-U interface. Self-driving cars and industrial Internet of Things (IoT) applications will generate significant traffic over Xn interfaces (e.g., including Xn-C and Xn-U interfaces). As a result, self-driving cars that are compromised/infected with malware could potentially attack or infect other cars over these interfaces or other interfaces in mobile networks (e.g., 4G, 5G, 6G, or later mobile networks). In addition, Open Radio Access Network (O-RAN) is an evolution of the Next Generation RAN (NG-RAN) architecture. However, this new architecture similarly opens up new threat vectors. As an example, the new O-RAN architecture in, for example, 5G mobile networks opens up new security threats over the F1-U interface. Self-driving cars and industrial Internet of Things (IoT) applications will generate significant traffic over F1 interfaces (e.g., including F1-C and F1-U interfaces). As a result, self-driving cars that are compromised/infected with malware could potentially attack or infect other cars over these interfaces or other interfaces in mobile networks (e.g., 4G, 5G, 6G, or later mobile networks). Thus, technical and security challenges with service provider networks exist for devices in mobile networks. As such, what are needed are new and improved security techniques for devices in such service provider network environments (e.g., mobile networks).
Specifically, what are needed are new and improved solutions for monitoring such network traffic and applying context-based security policies (e.g., firewall policies) for devices communicating on service provider networks, including over various NG-RAN related interfaces (e.g., Xn interfaces, including Xn-C and Xn-U interfaces) in NG-RAN environments in mobile networks as well as over various O-RAN related interfaces (e.g., F1 interfaces, including F1-C and F1-U interfaces) in O-RAN environments in mobile networks. In an example implementation, the security platform is configured to inspect XnAP traffic over an Xn-C interface between a source NG-RAN node and a target NG-RAN node to extract contextual information in NG-RAN environments in mobile networks. The security platform can also inspect GTP-U traffic over an Xn-U interface between the source NG-RAN node and the target NG-RAN node to apply layer-7 security on user plane traffic. The security platform can correlate the context information with the user plane traffic to deliver the context-based security capabilities for inter-node traffic in NG-RAN environments in mobile networks (e.g., 4G, 5G, 6G, or later mobile networks). In some embodiments, the security platform is configured to provide the following DPI capabilities: stateful inspection of XnAP traffic over Xn-C interfaces; and stateful inspection of GTP-U traffic over Xn-U interfaces and to apply context-based security in NG-RAN environments in mobile networks as will be further described below. In another example implementation, the security platform is configured to inspect F1AP traffic over an F1-C interface between an O-RAN Distributed Unit (O-DU) and an O-RAN Centralized Unit Control Plane (O-CU-CP) to extract contextual information in O-RAN environments in mobile networks. The security platform can also inspect GTP-U traffic over an F1-U interface between the O-DU and the O-RAN Centralized Unit User Plane (O-CU-UP) to apply layer-7 security on User Plane (UP) traffic. The security platform can correlate the context information with the user plane traffic to deliver the context-based security capabilities in O-RAN environments in mobile networks (e.g., 4G, 5G, 6G, or later mobile networks). In some embodiments, the security platform is configured to provide the following DPI capabilities: stateful inspection of F1AP traffic over F1-C interfaces between O-DU and O-CU-CP; and stateful inspection of GTP-U traffic over F1-U interfaces to extract contextual information and to apply context-based security in O-RAN environments in mobile networks as will be further described below. As such, the disclosed techniques facilitate enhanced security for NG-RAN environments in mobile networks and also for O-RAN environments in mobile networks. For example, security functions (e.g., security platforms) can be located closer to the user/device (e.g., UE) for performing security policy analysis and enforcement. As another example, security functions can be implemented to facilitate security for selective industry verticals. As yet another example, security can be implemented in highly sensitive locations, such as government network environments, military network environments, and power plant or other key infrastructure network environments.
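A minimal sketch of the correlation step described above: context extracted from control-plane inspection (e.g., XnAP on Xn-C, or F1AP on F1-C) is keyed by GTP-U tunnel endpoint identifier (TEID), so that user-plane packets can be matched back to a session context before a policy decision is made. The structures and field names here are illustrative, not any product's internal representation.

    from dataclasses import dataclass

    @dataclass
    class SessionContext:
        imeisv_tac: str    # device make/model indicator
        s_nssai: str       # network slice
        subscriber_id: str

    context_by_teid = {}

    def on_control_plane_context(teid, ctx):
        """Learn session context from control-plane (e.g., XnAP) inspection."""
        context_by_teid[teid] = ctx

    def on_gtpu_packet(teid, inner_payload):
        """Correlate a user-plane GTP-U packet with its context, then decide."""
        ctx = context_by_teid.get(teid)
        if ctx is None:
            return "drop"  # no known session for this tunnel
        # ... layer-7 inspection of inner_payload would happen here ...
        return "allow"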
Accordingly, new and improved security solutions that facilitate applying security (e.g., network-based security) using a security platform (e.g., a firewall (FW)/Next Generation Firewall (NGFW), a network sensor acting on behalf of the firewall, or another (virtual) device/component that can implement security policies using the disclosed techniques, including, for example, Palo Alto Networks' PA Series next generation firewalls, Palo Alto Networks' VM Series virtualized next generation firewalls, and CN Series container next generation firewalls, and/or other commercially available virtual-based or container-based firewalls can similarly be implemented and configured to perform the disclosed techniques) in a mobile network (e.g., a 4G/5G/6G/later versions of mobile networks) on various interfaces and protocols in NG-RAN environments and/or O-RAN environments are disclosed in accordance with some embodiments. These and other embodiments and examples for applying context-based security over interfaces in NG-RAN environments and/or O-RAN environments in mobile networks will be further described below.

Example System Architectures for Applying Context-Based Security Over Interfaces in NG-RAN Environments in Mobile Networks

Accordingly, in some embodiments, the disclosed techniques include providing a security platform (e.g., the security function(s)/platform(s) can be implemented using a firewall (FW)/Next Generation Firewall (NGFW), a network sensor acting on behalf of the firewall, or another (virtual) device/component that can implement security policies using the disclosed techniques, such as PANOS executing on a virtual/physical NGFW solution commercially available from Palo Alto Networks, Inc. or another security platform/NGFW, including, for example, Palo Alto Networks' PA Series next generation firewalls, Palo Alto Networks' VM Series virtualized next generation firewalls, and CN Series container next generation firewalls, and/or other commercially available virtual-based or container-based firewalls can similarly be implemented and configured to perform the disclosed techniques) configured to provide DPI capabilities (e.g., including stateful inspection) of, for example, GTP-U sessions (e.g., GTP-U traffic) over Xn-U interfaces between a source NG-RAN node and a target NG-RAN node to apply security on user plane traffic based on a policy (e.g., layer-7 security and/or other security policy enforcement) as further described below. As another example, the security platform can be configured to correlate the context information with the user plane traffic to deliver the context-based security capabilities for inter-node traffic in NG-RAN based 5G networks. FIG.1is a block diagram of an architecture of a 5G wireless network with a security platform for applying context-based security over interfaces in an NG-RAN environment in mobile networks in accordance with some embodiments.
Specifically,FIG.1is an example 5G mobile network environment that includes a Security Platform102(e.g., the security function(s)/platform(s) can be implemented using a firewall (FW)/Next Generation Firewall (NGFW), a network sensor acting on behalf of the firewall, or another (virtual) device/component that can implement security policies using the disclosed techniques, including, for example, Palo Alto Networks' PA Series next generation firewalls, Palo Alto Networks' VM Series virtualized next generation firewalls, and CN Series container next generation firewalls, and/or other commercially available virtual-based or container-based firewalls can similarly be implemented and configured to perform the disclosed techniques) for applying context-based security over interfaces in NG-RAN environments in mobile networks (e.g., 5G or later mobile networks) as further described below. As shown, the 5G mobile network environment can also include 5G Radio Access Network (RAN) access (e.g., gNB) as shown at104A and104B, and/or other networks (not shown inFIG.1) to facilitate data communications for subscribers (e.g., using User Equipment (UE), such as smart phones, laptops, computers (which may be in a fixed location), and/or other cellular enabled computing devices/equipment, such as IoT devices as shown at106, or other network communication enabled devices) including over a Central Data Network (e.g., the Internet)120to access various applications, web services, content hosts, etc. and/or other networks. As shown inFIG.1, each of the 5G network access mechanisms104A and104B is in communication (e.g., via an N3 interface) with 5G Mobile Core User Plane (UP) Function114and is in communication (e.g., via an N2 interface) with 5G Mobile Core Control Plane (CP) Function112, which is in communication with a Central Data Network120. Referring toFIG.1, network traffic communications are monitored using Security Platform102. As shown, network traffic communications are monitored/filtered in the 5G network using Security Platform102(e.g., (virtual) devices/appliances that each include a firewall (FW), a network sensor acting on behalf of the firewall, or another device/component that can implement security policies using the disclosed techniques) configured to perform the disclosed techniques for applying context-based security over interfaces in an NG-RAN environment in mobile networks as similarly described above and as further described below. Specifically, security platform102monitors Xn-C and Xn-U interfaces. In some embodiments, a security platform is configured to provide the following DPI capabilities: stateful inspection of XnAP traffic over such Xn-C interfaces and GTP-U traffic over such Xn-U interfaces. In an example implementation, the security platform is configured to provide DPI capabilities (e.g., including stateful inspection) of, for example, XnAP sessions (e.g., XnAP traffic) over Xn-C interfaces and GTP-U sessions (e.g., GTP-U traffic) over Xn-U interfaces between a source NG-RAN node and a target NG-RAN node to apply security on user plane traffic based on a policy (e.g., layer-7 security and/or other security policy enforcement) as further described below.
As another example, the security platform can be configured to correlate the context information with the user plane traffic to deliver the context-based security capabilities for inter-node traffic in NG-RAN environments in 5G networks (see, e.g., 3GPP TS 38.423-v16.6.0 5G; NG-RAN; Xn Application Protocol (XnAP), which is available at https://www.etsi.org/deliver/etsi_ts/138400_138499/138423/16.06.00_60/ts_138423v160600p.pdf). In an example implementation, the security platform is configured to inspect XnAP traffic over an Xn-C interface between a source NG-RAN node and a target NG-RAN node to extract contextual information. The security platform can also inspect GTP-U traffic over an Xn-U interface between the source NG-RAN node and the target NG-RAN node to apply layer-7 security on user plane traffic. The security platform can correlate the context information with the user plane traffic to deliver the context-based security capabilities for inter-node traffic in NG-RAN environments in 5G networks. In some embodiments, the security platform is configured to provide the following DPI capabilities: stateful inspection of XnAP traffic over Xn-C interfaces and GTP-U traffic over Xn-U interfaces and to apply context-based security as described herein. In addition, Security Platform102can also be in network communication with a Cloud Security Service122(e.g., a commercially available cloud-based security service, such as the WildFire™ cloud-based malware analysis environment that is a commercially available cloud security service provided by Palo Alto Networks, Inc., which includes automated security analysis of malware samples as well as security expert analysis, or a similar solution provided by another vendor can be utilized), such as via the Internet. For example, Cloud Security Service122can be utilized to provide the Security Platforms with dynamic prevention signatures for malware, DNS, URLs, CNC malware, and/or other malware as well as to receive malware samples for further security analysis. Referring toFIG.1, Security Platform102performs XnAP and GTP-U stateful inspection in this example 5G mobile network environment by parsing XnAP session traffic on the Xn-C interfaces and parsing GTP-U session traffic on the Xn-U interfaces, respectively, to extract certain information as will be further described below with respect toFIGS.2A and2B. As will now be apparent, network traffic communications can be monitored/filtered using one or more security platforms for network traffic communications in various locations within the 5G network (e.g., 5G network or converged 5G network) to facilitate applying context-based security over interfaces in an NG-RAN environment in mobile networks. FIGS.2A and2Bare tables of the parameters that are extracted by the security platform from a handover request during setup of a GTP-U tunnel session in accordance with some embodiments. This message is sent by a source NG-RAN node to the target NG-RAN node to request the preparation of resources for a handover (e.g., direction: source NG-RAN node to target NG-RAN node). In some embodiments, the security platform is configured to use the User Plane (UP) transport layer information extracted from the "HANDOVER REQUEST" message to set up a GTP-U tunnel session. As shown inFIG.2A, the "PDU Session Resources To Be Setup List" IE contains PDU session resource related information used at UE context transfer between NG-RAN nodes. It contains UpLink (UL) tunnel information per the PDU Session Resource.
As also shown inFIG.2A, "Masked IMEISV" information is also provided, and the security platform can extract the Type Allocation Code (TAC) to obtain a make and model of a 5G device. As shown inFIG.2B, other information that can be extracted from the "PDU Session Resources To Be Setup List" IE includes "S-NSSAI" which indicates the S-NSSAI as defined in 3GPP TS 23.003 version 16.3.0 Release 16 (e.g., available at https://www.etsi.org/deliver/etsi_ts/123000_123099/123003/16.03.00_60/ts_123003v160300p.pdf), "UL NG-U UP TNL Information at UPF" which indicates the UPF endpoint of the NG-U transport bearer for delivery of UL PDUs, and "Source DL NG-U TNL Information" which indicates the possibility to keep the NG-U GTP-U tunnel termination point at the target NG-RAN node. FIG.2Cis a handover protocol sequence diagram for an NG-RAN environment in mobile networks. As shown, a source NG-RAN node210sends a HANDOVER REQUEST message to a target NG-RAN node220. The target NG-RAN node sends a response with a HANDOVER REQUEST ACKNOWLEDGE message as shown inFIG.2C.

Example System Architectures for Applying Context-Based Security Over Interfaces in O-RAN Environments in Mobile Networks

Generally, 5G is the 5th generation of the mobile communications system. The 3rd Generation Partnership Project (3GPP) includes seven telecommunications standard development organizations (i.e., ARIB, ATIS, CCSA, ETSI, TSDSI, TTA, and TTC). The project covers cellular telecommunications network technologies, including radio access, the core transport network, and service capabilities. The specifications also provide hooks for non-radio access to the core network, and for interworking with Wi-Fi networks and other organizations including ITU, IETF, and ETSI that are developing 5G standards. Some of the improvements of the new 5G network standards include, for example, an evolution of the Next Generation RAN (NG-RAN) architecture that is generally referred to as the Open Radio Access Network (O-RAN). O-RAN (Open RAN) is a term used for industry-wide standards for RAN (Radio Access Network) interfaces that support interoperation between vendors' equipment and offer network flexibility at a lower cost. The main purpose of Open RAN is to have an interoperability standard for RAN elements including non-proprietary white box hardware and software from different vendors. Network operators that opt for RAN elements with standard interfaces can avoid being locked into one vendor's proprietary hardware and software. O-RAN ALLIANCE's mission is to re-shape the RAN industry towards more intelligent, open, virtualized and fully interoperable mobile networks. The new O-RAN standards will enable a more competitive and vibrant RAN supplier ecosystem with faster innovation to improve user experience. O-RAN based mobile networks will at the same time improve the efficiency of RAN deployments as well as operations by the mobile operators. O-RAN architecture is based on standards defined by O-RAN ALLIANCE, which are fully supporting and complementary to standards promoted by 3GPP and other industry standards organizations. It includes interfaces defined and maintained by O-RAN including A1, O1, O2, E2 and Open Fronthaul interface. Moreover, this architecture also includes 3GPP interfaces including E1, F1-c, F1-u, NG-c, NG-u, X2-c, X2-u, Xn-c, Xn-u and Uu. However, as similarly discussed above, this new O-RAN architecture opens up new threat vectors.
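Referring back to the Masked IMEISV discussion above, extracting the TAC is straightforward: an IMEISV is 16 digits, of which the first 8 are the TAC, followed by a 6-digit serial number (masked in the Masked IMEISV) and a 2-digit software version. A sketch follows, with an illustrative one-entry TAC lookup table; the table contents are assumptions for illustration only.

    # Sketch: extract the TAC from a Masked IMEISV and look up the device model.
    TAC_DB = {"35692005": "Example Vendor Phone X"}  # illustrative entry

    def tac_from_masked_imeisv(masked_imeisv):
        if len(masked_imeisv) != 16 or not masked_imeisv.isdigit():
            raise ValueError("Masked IMEISV must be 16 decimal digits")
        return masked_imeisv[:8]  # first 8 digits are the TAC

    tac = tac_from_masked_imeisv("3569200511111100")
    model = TAC_DB.get(tac, "unknown device")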
As an example, the new O-RAN architecture in 5G networks opens up new security threats over the Xn-U interface. As such, various techniques for securing this new O-RAN environment in 5G networks are disclosed and will now be described further with respect toFIGS.3-4B. FIG.3is a block diagram of an architecture of a 5G wireless network with a security platform for applying context-based security over interfaces in an O-RAN environment in mobile networks in accordance with some embodiments. Specifically,FIG.3is an example 5G mobile network environment that includes a Security Platform102(e.g., the security function(s)/platform(s) can be implemented using a firewall (FW)/Next Generation Firewall (NGFW), a network sensor acting on behalf of the firewall, or another (virtual) device/component that can implement security policies using the disclosed techniques, including, for example, Palo Alto Networks' PA Series next generation firewalls, Palo Alto Networks' VM Series virtualized next generation firewalls, and CN Series container next generation firewalls, and/or other commercially available virtual-based or container-based firewalls can similarly be implemented and configured to perform the disclosed techniques) for applying context-based security over interfaces in O-RAN environments in mobile networks (e.g., 5G or later mobile networks) as further described below. As shown, the 5G mobile network environment can also include a 5G Radio Unit (RU) (e.g., a radio hardware unit that converts radio signals received from the antenna into digital signals, and vice versa) as shown at304, and/or other networks (not shown inFIG.3) to facilitate data communications for subscribers (e.g., using User Equipment (UE), such as smart phones, laptops, computers (which may be in a fixed location), and/or other cellular enabled computing devices/equipment, such as IoT devices as shown at106, or other network communication enabled devices) including over a Central Data Network (e.g., the Internet)120to access various applications, web services, content hosts, etc. and/or other networks. As shown inFIG.3, the 5G network access mechanism, RU304, is in communication with an O-RAN Distributed Unit (O-DU)306to facilitate network communications for UE, such as customer device302(e.g., a device capable of network/radio communications, such as a smart phone). O-DU306is in communication (e.g., via an F1-C interface) with O-RAN Centralized Unit Control Plane (O-CU-CP)308and is also in communication (e.g., via an F1-U interface) with O-RAN Centralized Unit User Plane (O-CU-UP)310. O-CU-CP308is in communication (e.g., via an N2 interface) with 5G Mobile Core Control Plane (CP) Function112. O-CU-UP310is in communication (e.g., via an N3 interface) with 5G Mobile Core User Plane (UP) Function114, which is in communication with Internet120. Referring toFIG.3, network traffic communications are monitored using Security Platform102.
As shown, network traffic communications are monitored/filtered in the 5G network using Security Platform102(e.g., (virtual) devices/appliances that each include a firewall (FW), a network sensor acting on behalf of the firewall, or another device/component that can implement security policies using the disclosed techniques, including, for example, Palo Alto Networks' PA Series next generation firewalls, Palo Alto Networks' VM Series virtualized next generation firewalls, and CN Series container next generation firewalls, and/or other commercially available virtual-based or container-based firewalls can similarly be implemented and configured to perform the disclosed techniques) configured to perform the disclosed techniques for applying context-based security over interfaces in an O-RAN environment in mobile networks as similarly described above and as further described below. Specifically, security platform102monitors F1-C and F1-U interfaces. In some embodiments, a security platform is configured to provide the following DPI capabilities: stateful inspection of F1AP traffic over such F1-C interfaces and GTP-U traffic over such F1-U interfaces. In an example implementation, the security platform is configured to provide DPI capabilities (e.g., including stateful inspection) of, for example, F1AP sessions (e.g., F1AP traffic) over F1-C interfaces between O-DU and O-CU-CP and GTP-U sessions (e.g., GTP-U traffic) over F1-U interfaces between O-DU and O-CU-UP to apply security on user plane traffic based on a policy (e.g., layer-7 security and/or other security policy enforcement) as further described below. As another example, the security platform can be configured to correlate the context information with the user plane traffic to deliver the context-based security capabilities for inter-node traffic in O-RAN environments in 5G networks. In an example implementation, the security platform is configured to use UP transport layer information extracted from the "UE CONTEXT SETUP REQUEST" and "UE CONTEXT SETUP RESPONSE" messages exchanged between gNB-DU and gNB-CU during the "UE Context Setup Procedure" to set up a GTP-U tunnel session. The UP Transport Layer Information IE identifies an F1 transport bearer associated to a DRB. It contains a Transport Layer Address and a GTP Tunnel Endpoint Identifier. The Transport Layer Address is an IP address to be used for the F1 user plane transport. The GTP Tunnel Endpoint Identifier is to be used for the user plane transport between gNB-CU and gNB-DU. In addition, in this example implementation, the security platform is configured to inspect F1AP traffic over an F1-C interface between O-DU and O-CU-CP to extract contextual information. The security platform can also inspect GTP-U traffic over an F1-U interface between O-DU and O-CU-UP to apply layer-7 security on User Plane (UP) traffic (e.g., see 3GPP TS 38.473-V16.6.0, 5G; NG-RAN; F1 Application Protocol (F1AP) (3GPP TS 38.473 version 16.6.0 Release 16, which is available at https://www.etsi.org/deliver/etsi_ts/138400_138499/138473/16.06.00_60/ts_138473v160600p.pdf); and see also 3GPP TS 38.470-V16.5.0, 5G; NG-RAN; F1 general aspects and principles (3GPP TS 38.470 version 16.5.0, which is available at https://www.etsi.org/deliver/etsi_ts/138400_138499/138470/16.05.00_60/ts_138470v160500p.pdf)). The security platform can correlate the context information with the user plane traffic to deliver the context-based security capabilities in O-RAN based mobile networks (e.g., 5G networks).
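As an informal illustration of the UE Context Setup handling described above, the sketch below pairs the UL tunnel endpoint carried in the "UE CONTEXT SETUP REQUEST" with the DL tunnel endpoint returned in the "UE CONTEXT SETUP RESPONSE" (described further with respect to FIGS. 4A and 4B) to form a per-DRB F1-U bearer record. The dictionary field names are hypothetical assumptions; the actual IEs are ASN.1-encoded per 3GPP TS 38.473, and this is a sketch rather than the disclosed implementation.

```python
# Sketch: derive per-DRB F1-U GTP-U bearer records from the UE Context
# Setup exchange. "request"/"response" are hypothetical pre-parsed messages.

def f1_bearers_from_ue_context_setup(request: dict, response: dict) -> dict:
    bearers = {}
    for drb in request["drbs_to_be_setup"]:
        bearers[drb["drb_id"]] = {
            # UL UP TNL Information: endpoint for delivery of UL PDUs
            "ul_addr": drb["ul_up_tnl"]["transport_layer_address"],
            "ul_teid": drb["ul_up_tnl"]["gtp_teid"],
        }
    for drb in response["drbs_setup"]:
        # DL UP TNL Information: gNB-DU endpoint for delivery of DL PDUs
        bearers[drb["drb_id"]]["dl_addr"] = drb["dl_up_tnl"]["transport_layer_address"]
        bearers[drb["drb_id"]]["dl_teid"] = drb["dl_up_tnl"]["gtp_teid"]
    return bearers
```

Each record gives the security platform both endpoints of the F1 transport bearer, so that GTP-U packets in either direction on the F1-U interface can be matched back to the UE context.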
In some embodiments, the security platform is configured to provide the following DPI capabilities: stateful inspection of F1AP traffic over F1-C interfaces between O-DU and O-CU-CP to extract contextual information and to apply context-based security as described herein. In addition, Security Platform102can also be in network communication with a Cloud Security Service122(e.g., a commercially available cloud-based security service, such as the WildFire™ cloud-based malware analysis environment provided by Palo Alto Networks, Inc., which includes automated security analysis of malware samples as well as security expert analysis, or a similar solution provided by another vendor can be utilized), such as via the Internet. For example, Cloud Security Service122can be utilized to provide the Security Platforms with dynamic prevention signatures for malware, DNS, URLs, CNC malware, and/or other malware as well as to receive malware samples for further security analysis. Referring toFIG.3, Security Platform102performs F1AP and GTP-U stateful inspection in this example 5G mobile network environment by parsing F1AP session traffic on the F1-C interfaces and parsing GTP-U session traffic on the F1-U interfaces, respectively, to extract certain information as will be further described below with respect toFIGS.4A and4B. As will now be apparent, network traffic communications can be monitored/filtered using one or more security platforms for network traffic communications in various locations within the mobile network (e.g., 5G network or converged 5G network) to facilitate applying context-based security over interfaces in O-RAN (e.g., including distributed O-RAN environments) in mobile networks. FIGS.4A and4Bare tables of the parameters that are extracted by the security platform from the UE context setup messages during setup of a GTP-U tunnel session in accordance with some embodiments. In some embodiments, the security platform is configured to use the User Plane (UP) transport layer information extracted from the "UE CONTEXT SETUP REQUEST" and "UE CONTEXT SETUP RESPONSE" messages to set up the GTP-U tunnel session. The UP Transport Layer Information IE identifies an F1 transport bearer associated to a DRB. It contains a Transport Layer Address and a GTP Tunnel Endpoint Identifier. The Transport Layer Address is an IP address to be used for the F1 user plane transport. The GTP Tunnel Endpoint Identifier is to be used for the user plane transport between gNB-CU and gNB-DU. As also shown inFIG.4A, "Masked IMEISV" information is also provided, and the security platform can extract the Type Allocation Code (TAC) to obtain a make and model of a 5G device. As shown inFIG.4A, other information that can be extracted from the "UE CONTEXT SETUP REQUEST" includes "S-NSSAI" which indicates the S-NSSAI as defined in 3GPP TS 23.003 version 16.3.0 Release 16 (e.g., available at https://www.etsi.org/deliver/etsi_ts/123000_123099/123003/16.03.00_60/ts_123003v160300p.pdf), "UL NG-U UP TNL Information at UPF" which indicates the UPF endpoint of the NG-U transport bearer for delivery of UL PDUs, and "UL UP TNL Information" which indicates the gNB-CU endpoint of the F1 transport bearer for delivery of UL PDUs.
As shown inFIG.4B, other information that can be extracted from the "UE CONTEXT SETUP RESPONSE" (e.g., this message is sent by the gNB-DU to confirm the setup of a UE context, gNB-DU → gNB-CU) includes "DL UP TNL Information" which indicates the gNB-DU endpoint of the F1 transport bearer for delivery of DL PDUs. FIG.4Cis a UE context setup protocol sequence diagram for an O-RAN environment in mobile networks. As shown, a gNB-DU node410sends a UE CONTEXT SETUP REQUEST message to a gNB-CU node420. The gNB-CU node sends a response with a UE CONTEXT SETUP RESPONSE message as shown inFIG.4C. Example Use Cases of Enhanced Security for Applying Context-Based Security Over Interfaces in NG-RAN and O-RAN Environments in Mobile Networks The disclosed techniques for providing enhanced security for mobile/service provider networks using a security platform for security policy enforcement, including for applying context-based security over interfaces in NG-RAN and/or O-RAN environments in mobile networks (e.g., including distributed O-RAN environments), can be applied in a variety of additional example use case scenarios for facilitating enhanced security for NG-RAN and/or O-RAN environments in mobile networks (e.g., 4G/5G/6G and later mobile networks) as will now be described with respect to various example use cases. As a first example use case, the disclosed techniques can be used to facilitate context-based security over an interface in an NG-RAN environment in mobile networks (e.g., including context-based security over Xn-C and Xn-U interfaces in an NG-RAN environment in 5G networks) and/or in an O-RAN environment in mobile networks (e.g., including context-based security over F1-C and F1-U interfaces in an O-RAN environment in 5G networks). As a second example use case, the disclosed techniques can be used to facilitate known and unknown threat identification over an interface in an NG-RAN environment in mobile networks (e.g., including known and unknown threat identification over Xn-C and Xn-U interfaces in an NG-RAN environment in 5G networks) and/or in an O-RAN environment in mobile networks (e.g., including known and unknown threat identification over F1-C and F1-U interfaces in an O-RAN environment in 5G networks). As a third example use case, the disclosed techniques can be used to facilitate an investigation of a security event related to a user equipment (UE) (e.g., self-driving cars, industrial IoT, etc.) exchanging user traffic over an interface (e.g., Xn-C and/or Xn-U interfaces in an NG-RAN environment or F1-C and/or F1-U interfaces in an O-RAN environment). For example, SCADA systems infected with vulnerabilities related to Remote Code Execution (RCE) or remote information retrieval can be detected using the disclosed techniques. Example vulnerabilities applicable to, for example, the second and third example use cases described above are listed below. (1) Delta Industrial Automation DIAEnergie HandlerAlarmGroup.aspx SQL Injection Vulnerability CVE-2021-38393. (2) Delta Industrial Automation CNC Soft ScreenEditor DPB Element Section Stack Buffer Overflow Vulnerability CVE-2021-2267. (3) Advantech WebAccess SCADA bwrunmie.exe Policy Bypass Vulnerability CVE-2019-13552. (4) Advantech WebAccess/SCADA Memory Corruption Vulnerability CVE-2019-10991. (5) Advantech WebAccess SCADA bwrunrpt.exe Stack-based Buffer Overflow Vulnerability CVE-2019-13556. (6) GE Industrial Solutions Remote Command Execution Vulnerability CVE-2016-0861.
As a fourth example use case, the disclosed techniques can be used to facilitate advanced L7 security control for user traffic exchanged over an interface (e.g., Xn-C and/or Xn-U interfaces in an NG-RAN environment or F1-C and/or F1-U interfaces in an O-RAN environment). For example, detecting and blocking Command and Control (C&C) traffic (e.g., IoT spyware C&C traffic and/or IoT malware C&C traffic) between industrial machines when one machine is compromised/infected with C&C malware can be performed using the disclosed techniques. As a fifth example use case, the disclosed techniques can be used to facilitate application (e.g., application layer) control over an interface (e.g., Xn-C and/or Xn-U interfaces in an NG-RAN environment or F1-C and/or F1-U interfaces in an O-RAN environment). As examples, the following security solutions can be effectively and efficiently implemented using the disclosed techniques: (a) allow only trusted applications and protocols (e.g., modbus) for industrial robots/machines connected to a separate 5G base station (gNB) in a smart factory; (b) allow only selected functions (e.g., modbus read write register, modbus read coils, modbus input registers, etc.) over trusted protocols for industrial robots/machines; and (c) block untrusted applications for industrial robots/machines connected to a separate 5G base station (gNB) in a smart factory. As a sixth example use case, the disclosed techniques can be used to facilitate URL filtering over an interface in an NG-RAN and/or O-RAN environment (e.g., including URL filtering over Xn-C and Xn-U interfaces in an NG-RAN environment in 5G networks). As a seventh example use case, the security platform can be configured with a security policy to perform detection and prevention of Denial of Service (DoS) attacks for applying context-based security over interfaces in an O-RAN environment in mobile networks. As an eighth example use case, the disclosed techniques can be applied to improve energy efficiency in 5G networks (e.g., O-RAN environments and/or distributed O-RAN environments). Specifically, 5G devices' energy efficiency can be derailed by various types of malware attacks. For example, in a cryptocurrency mining attack, a device can be compromised and its processing power used for a cryptocurrency mining operation using distributed computing resources, including compromised 5G devices. As such, the disclosed techniques can facilitate detection and prevention of these malware attacks/threats that would otherwise derail energy efficiency of such 5G devices. As a ninth example use case, the disclosed techniques can be applied to improve the security of various new 5G sensors. For example, in a connected-cow deployment in a 5G network, dairy herds wear mobile connected sensors that collect biometric information on the cows' body temperature, pulse, and daily movements to let them graze further and to better manage milk production; however, such 5G sensors can also be compromised by malware. As such, the disclosed techniques can facilitate detection and prevention of these attacks/threats, including malware and remote code execution. Example vulnerabilities applicable to, for example, the third, fourth, eighth, and ninth example use cases described above are listed below. (1) Damstra Smart Asset SQL Injection Vulnerability CVE-2020-26525. (2) CHIYU IoT Devices XSS Vulnerability CVE-2021-31250. (3) InduSoft Web Studio and InTouch Machine Edition Remote Code Execution Vulnerability CVE-2018-10620.
(4) Ecava IntegraXor Human-Machine Interface Stack-based Buffer Overflow Vulnerability CVE-2010-4597. As will now be apparent to one of ordinary skill in the art, the disclosed techniques for applying context-based security over interfaces in NG-RAN and/or O-RAN environments in mobile networks using a security platform for security policy enforcement can be applied in a variety of additional example use case scenarios to detect/prevent these and other types of attacks for facilitating enhanced security for O-RAN/NG-RAN environments in mobile networks. Example Hardware Components of a Network Device for Applying Context-Based Security Over Interfaces in NG-RAN and/or O-RAN Environments in Mobile Networks FIG.5is a functional diagram of hardware components of a network device for applying context-based security over interfaces in NG-RAN environments and/or O-RAN environments in mobile networks in accordance with some embodiments. The example shown is a representation of physical/hardware components that can be included in network device500(e.g., an appliance, gateway, or server that can implement the security platform disclosed herein). Specifically, network device500includes a high performance multi-core CPU502and RAM504. Network device500also includes a storage510(e.g., one or more hard disks or solid state storage units), which can be used to store policy and other configuration information as well as signatures. In one embodiment, storage510stores certain information (e.g., XnAP traffic information and/or GTP-U traffic information as similarly described above) that is extracted from monitored traffic over various interfaces (e.g., XnAP traffic over Xn-C interfaces and GTP-U traffic over Xn-U interfaces) that are monitored for implementing the disclosed security policy enforcement techniques for applying context-based security over interfaces in NG-RAN and/or O-RAN environments in mobile networks using a security platform(s) as similarly described above with respect toFIGS.1-4B. Network device500can also include one or more optional hardware accelerators. For example, network device500can include a cryptographic engine506configured to perform encryption and decryption operations, and one or more FPGAs508configured to perform signature matching, act as network processors, and/or perform other tasks. Example Logical Components of a Network Device for Applying Context-Based Security Over Interfaces in NG-RAN and/or O-RAN Environments in Mobile Networks FIG.6is a functional diagram of logical components of a network device for applying context-based security over interfaces in NG-RAN environments and/or O-RAN environments in mobile networks in accordance with some embodiments. The example shown is a representation of logical components that can be included in network device600(e.g., a data appliance, which can implement the disclosed security function/platform and perform the disclosed techniques for applying context-based security over interfaces in NG-RAN and/or O-RAN environments in mobile networks). As shown, network device600includes a management plane602and a data plane604. In one embodiment, the management plane is responsible for managing user interactions, such as by providing a user interface for configuring policies and viewing log data. The data plane is responsible for managing data, such as by performing packet processing and session handling.
Suppose a mobile device attempts to access a resource (e.g., a remote web site/server, an MEC service, an IoT device, or another resource) using an encrypted session protocol, such as SSL. Network processor606is configured to monitor packets from the mobile device and provide the packets to data plane604for processing. Flow608identifies the packets as being part of a new session and creates a new session flow. Subsequent packets will be identified as belonging to the session based on a flow lookup. If applicable, SSL decryption is applied by SSL decryption engine610using various techniques as described herein. Otherwise, processing by SSL decryption engine610is omitted. Application identification (APP ID) module612is configured to determine what type of traffic the session involves (e.g., GTP-U over UDP traffic between various monitored interfaces as similarly described above with respect toFIGS.1-4B) and to identify a user associated with the traffic flow (e.g., to identify an Application-ID as described herein). For example, APP ID612can recognize a GET request in the received data and conclude that the session requires an HTTP decoder614. As another example, APP ID612can recognize GTP-U session establishment/modification/release messages (e.g., over Xn-C and Xn-U interfaces, such as similarly described above with respect toFIGS.1-4B) and conclude that the session requires a GTP-U decoder (e.g., to extract information exchanged in the GTP-U traffic session over Xn-C and Xn-U interfaces including various parameters, such as similarly described above with respect toFIGS.1-4B). For each type of protocol, there exists a corresponding decoder614. In one embodiment, the application identification is performed by an application identification module (e.g., APP ID component/engine), and a user identification is performed by another component/engine. Based on the determination made by APP ID612, the packets are sent to an appropriate decoder614. Decoder614is configured to assemble packets (e.g., which may be received out of order) into the correct order, perform tokenization, and extract information (e.g., such as various information exchanged in GTP-U traffic over Xn-C/Xn-U/other interfaces as similarly described above and further described below). Decoder614also performs signature matching to determine what should happen to the packet. SSL encryption engine616performs SSL encryption using various techniques as described herein and the packets are then forwarded using a forward component618as shown. As also shown, policies620are received and stored in the management plane602. In one embodiment, policy enforcement (e.g., policies can include one or more rules, which can be specified using domain and/or host/server names, and rules can apply one or more signatures or other matching criteria or heuristics, such as for security policy enforcement for subscriber/IP flows on service provider networks based on various extracted parameters/information from monitored GTP-U traffic and/or DPI of monitored GTP-U and/or other protocol(s) traffic, such as XnAP traffic over Xn-C interfaces, as disclosed herein) is applied as described herein with respect to various embodiments based on the monitored, decrypted, identified, and decoded session traffic flows. As also shown inFIG.6, an interface (I/F) communicator622is also provided for security platform manager communications (e.g., via (REST) APIs, messages, or network protocol communications or other communication mechanisms).
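The packet-processing path of FIG. 6 can be summarized in a short sketch. The following is a simplified, hypothetical rendering of that flow (flow lookup, optional SSL decryption, application identification, protocol decoding, then policy evaluation) and not the implementation of network device 600; each placeholder object stands in for the numbered component noted in the comments.

```python
# Simplified sketch of the FIG. 6 data-plane flow; every object passed in
# is a hypothetical placeholder for the corresponding component in the text.

def process_packet(pkt, flows, ssl, app_id, decoders, policies):
    session = flows.lookup_or_create(pkt)        # Flow (608): new or existing session
    if session.is_ssl:
        pkt = ssl.decrypt(session, pkt)          # SSL decryption engine (610)
    app = app_id.classify(session, pkt)          # APP ID module (612)
    decoder = decoders[app]                      # e.g., HTTP or GTP-U decoder (614)
    fields = decoder.decode(session, pkt)        # reassemble, tokenize, extract
    return policies.evaluate(app, fields)        # policy enforcement (620): allow/block
```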
In some cases, network communications of other network elements on the service provider network are monitored using network device600, and data plane604supports decoding of such communications (e.g., network device600, including I/F communicator622and decoder614, can be configured to monitor and/or communicate on, for example, reference point interfaces such as Xn-C, Xn-U, and/or other interfaces where wired and wireless network traffic flow exists). As such, network device600including I/F communicator622can be used to implement the disclosed techniques for security policy enforcement on mobile/service provider network environments, including MEC services security, as described above and as will be further described below. Additional example processes for the disclosed techniques for applying context-based security over interfaces in NG-RAN and/or O-RAN environments in mobile networks will now be described. Example Processes for Applying Context-Based Security Over Interfaces in NG-RAN Environments in Mobile Networks FIG.7is a flow diagram of a process for applying context-based security over interfaces in NG-RAN environments in mobile networks in accordance with some embodiments. In some embodiments, a process700as shown inFIG.7is performed by the security platform and techniques as similarly described above including the embodiments described above with respect toFIGS.1-2B,5, and6. In one embodiment, process700is performed by data appliance500as described above with respect toFIG.5, network device600as described above with respect toFIG.6, a virtual appliance (e.g., Palo Alto Networks' VM Series virtualized next generation firewalls, CN Series container next generation firewalls, and/or other commercially available virtual-based or container-based firewalls can similarly be implemented and configured to perform the disclosed techniques), an SDN security solution, a cloud security service, and/or combinations or hybrid implementations of the aforementioned as described herein. At702, monitoring network traffic on a mobile network at a security platform to identify a GTP-U tunnel session setup message associated with a new session is performed. For example, the security platform (e.g., a firewall, a network sensor acting on behalf of the firewall, or another device/component that can implement security policies) can monitor, in some cases, various protocols, such as GTP-U (e.g., over Xn-U interface), XnAP (e.g., over Xn-C interface), and/or other protocols, on the mobile network and, more specifically, by performing the disclosed techniques can monitor various interfaces, such as the Xn-C and Xn-U interfaces, as similarly described above. In some embodiments, the security platform inspects XnAP traffic over an Xn-C interface between a source NG-RAN node and a target NG-RAN node to extract contextual information (e.g., and can store the contextual information locally in the security platform or in a cloud-based storage). In some embodiments, the security platform inspects GTP-U traffic over an Xn-U interface between a source NG-RAN node and a target NG-RAN node to apply Layer-7 security on user plane traffic. At704, extracting a plurality of parameters from the GTP-U tunnel session setup message and from XnAP traffic to extract contextual information at the security platform is performed. For example, the parameters as similarly described above with respect toFIGS.2A and2Bcan be extracted. 
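Process 700 of FIG. 7 can likewise be outlined in a few lines. The sketch below is illustrative only, with all helper objects being assumptions rather than disclosed components; operations 702 and 704 appear as the monitoring and extraction steps, and the enforcement operation 706, described next, appears as the policy decision.

```python
# Illustrative outline of process 700 (FIG. 7); "packet_stream",
# "context_table", and "policy" are hypothetical stand-ins.

def process_700(packet_stream, context_table, policy):
    for pkt in packet_stream:                              # 702: monitor traffic
        if pkt.interface == "Xn-C" and pkt.protocol == "XnAP":
            context_table.on_handover_request(pkt.parsed)  # 704: extract context
        elif pkt.interface == "Xn-U" and pkt.protocol == "GTP-U":
            ctx = context_table.lookup(pkt.teid)           # correlate with context
            yield pkt, policy.decide(ctx, pkt)             # 706: enforce (see below)
```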
In some embodiments, the security platform correlates the context information with the user plane traffic to perform context-based security for inter-node traffic in the NG-RAN environment. At706, enforcing a security policy at the security platform on the new session based on one or more of the plurality of parameters to apply context-based security to the network traffic transported between NG-RAN nodes in an NG-RAN environment in the mobile network is performed. For example, security policy enforcement can include allowing or blocking the session. In some embodiments, the security platform performs context-based security over an Xn-U interface in the NG-RAN environment. Other examples of security policy enforcement include the following: (1) detection and prevention of known and unknown threats over an Xn-U interface in the NG-RAN environment; (2) application identification and control over an Xn-U interface in the NG-RAN environment; and (3) URL filtering over an Xn-U interface in the NG-RAN environment. Example Processes for Applying Context-Based Security Over Interfaces in O-RAN Environments in Mobile Networks FIG.8is a flow diagram of a process for applying context-based security over interfaces in O-RAN environments in mobile networks in accordance with some embodiments. In some embodiments, a process800as shown inFIG.8is performed by the security platform and techniques as similarly described above including the embodiments described above with respect toFIGS.3-4B,5, and6. In one embodiment, process800is performed by data appliance500as described above with respect toFIG.5, network device600as described above with respect toFIG.6, a virtual appliance (e.g., Palo Alto Networks' VM Series virtualized next generation firewalls, CN Series container next generation firewalls, and/or other commercially available virtual-based or container-based firewalls can similarly be implemented and configured to perform the disclosed techniques), an SDN security solution, a cloud security service, and/or combinations or hybrid implementations of the aforementioned as described herein. The process begins at802. At802, monitoring network traffic on a mobile network at a security platform to identify a GTP-U tunnel session setup message associated with a new session is performed. For example, the security platform (e.g., a firewall, a network sensor acting on behalf of the firewall, or another device/component that can implement security policies) can monitor, in some cases, various protocols, such as GTP-U (e.g., over F1-U interface), F1AP (e.g., over F1-C interface), and/or other protocols, on the mobile network and, more specifically, by performing the disclosed techniques can monitor various interfaces, such as the F1-C and F1-U interfaces, as similarly described above. In some embodiments, the security platform inspects F1AP traffic over an F1-C interface between an O-DU node and an O-CU-CP node to extract contextual information (e.g., and can store the contextual information locally in the security platform or in a cloud-based storage). In some embodiments, the security platform inspects GTP-U traffic over an F1-U interface between a gNB-DU node and a gNB-CU node to apply Layer-7 security on user plane traffic. At804, extracting a plurality of parameters from the GTP-U tunnel session setup message and from F1AP traffic at the security platform is performed. For example, the parameters as similarly described above with respect toFIGS.4A and4Bcan be extracted.
In some embodiments, the security platform correlates the context information with the user plane traffic to perform context-based security for inter-node traffic in the O-RAN environment. At806, enforcing a security policy at the security platform on the new session based on one or more of the plurality of parameters to apply context-based security to the network traffic transported between O-RAN Distributed Unit (O-DU) and O-RAN Centralized Unit Control Plane (O-CU-CP) nodes in an O-RAN environment in the mobile network is performed. For example, security policy enforcement can include allowing or blocking the session. In some embodiments, the security platform performs context-based security over an F1-U interface in the O-RAN environment. Other examples of security policy enforcement include the following: (1) detection and prevention of known and unknown threats over an F1-U interface in the O-RAN environment; (2) application identification and control over an F1-U interface in the O-RAN environment; and (3) URL filtering over an F1-U interface in the O-RAN environment. As will now be apparent in view of the disclosed embodiments, a network service provider/mobile operator (e.g., a cellular service provider entity), a device manufacturer (e.g., an automobile entity, IoT device entity, and/or other device manufacturer), and/or system integrators can specify such security policies that can be enforced by a security platform using the disclosed techniques to solve these and other technical network security challenges for applying context-based security in NG-RAN environments and O-RAN environments (e.g., including distributed O-RAN environments) in mobile networks, including 4G networks, 5G networks, 6G networks, and/or later generations of mobile networks. Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.
59,502
11943621
DETAILED DESCRIPTION Some communication protocols include data or processes that facilitate coarse or fine location determination, sometimes referred to as localization, of a device communicating according to the communication protocols. For example, some short-range communication protocols (such as BLUETOOTH®) include tones in at least some data transmissions to enable determination of a location of a device sending a transmission and/or a location of a device receiving a transmission. The tones are, in various examples, data symbols or any other known data without limitation on length. By processing the tones according to angle of departure (AoD) processing (for the device sending the transmission) and/or angle of arrival (AoA) processing (for the device receiving the transmission), a location of the device can be determined. At least some examples of the processing depend on comparing the transmitted tones or the received tones to a known value. For this reason, the tones are often included in a data transmission in plain text, in a manner that is outside of encryption of the data transmission and/or outside of data whitening of the data transmission. However, including the tones as plain text can create security problems in the communications. For example, when the tones are included as plain text, the location determination becomes susceptible to a man-in-the-middle attack in which a third-party device that is not an intended part of the transmission intercepts the data transmission and then imitates one of the devices that are a part of the transmission. However, encrypting the tones so that they do not appear as plain text can also create challenges. Particularly, the tones are conventionally appended to the end of a payload pursuant to standards requirements. Advanced Encryption Standard (AES), as one common encryption protocol, encrypts data in blocks of 16 bytes, using each encrypted block as an initialization vector for the next block to be encrypted. This creates an encryption output that is highly random. However, as discussed above, for AoA and/or AoD processing, the tones must be a known value. Therefore, placing the tones in the conventional location (such as at the end of the data in the payload when the payload data is not encrypted, or following a message authentication code field and/or cyclic redundancy check (CRC) field when the payload is encrypted) is, in at least some examples, incompatible with encrypting the payload. Additionally, as discussed above, the tones must be a known value when received in a transmission for AoA processing. However, the whitening process discussed above alters the data packet to, in some examples, scramble data of the data packet to eliminate or minimize highly redundant data and/or at least partially control an average frequency of the data packet. Accordingly, in some examples the whitening modifies the tones away from the known values and inhibits the ability to perform AoA and/or AoD processing. At least some aspects of the present disclosure provide for secure localization (e.g., location determination) in short-range communications. In at least some examples, the short-range communications extend to approximately 100 meters (m), 200 m, 300 m, and/or any other distances as specified by an applicable communication standard (e.g., such as a BLUETOOTH® 5.0 standard). To provide the secure localization, some examples incorporate localization tones preceding data in a payload of a data packet (e.g., at the beginning of the payload) of the short-range communications.
By including the localization tones at the beginning of the payload, in some examples, encryption of the tones can be predicted based on a known session key and a known initialization vector. Additionally, by performing further prediction and/or estimation, the effects of the whitening process on the tones is mitigated. By estimating an output of the whitening process and the encryption, in some examples the tones are modified according to the estimations prior to the encryption and the whitening such that the tones have the known value when received in a transmission, despite the encryption and the whitening. Turning now toFIG.1, a block diagram of an illustrative system100for localization is shown. The system100includes, in some examples, a first wireless device105and a second wireless device110. The first wireless device105includes at least a transceiver115, an antenna120, a processor125, and an angle estimator130. The second wireless device110includes at least a transceiver135, an antenna140, a processor145, and an angle estimator150. The first wireless device105and the second wireless device110are communicatively coupled via wireless communications, such as via a short-range communication standard (e.g., BLUETOOTH® as specified by the Bluetooth Special Interest Group). As discussed herein, the first wireless device105is the transmitting device and the second wireless device110is the receiving device. In at least some examples, the processor125constructs a data packet155for transmission to the second wireless device110. The data packet155includes, among other elements, a payload160containing data162and localization tone165, a CRC code170, and a header175(e.g., a media access control (MAC) header). In various examples, the data packet155includes further elements (not shown) such as a preamble and/or any other suitable data elements. The processor125constructs the data packet155, in at least some examples, by receiving the data162for communication to the second wireless device110and obtaining a value for the localization tone165. While described herein in the singular as a localization tone165, various examples of the localization tone include any amount of data, symbols, tones, or other suitable contents, the scope of which is not limited herein. The value for the localization tone165, in some examples, is a known value established by standard and/or communicated previously between the first wireless device105and the second wireless device110. The value for the localization tone165, in some examples, is the value that the second wireless device110expects to receive to perform the AoA processing (e.g., via angle estimator150). In other examples, the value for the localization tone165is the value that the first wireless device105expects to receive to perform the AoD processing (e.g., via the angle estimator130). In some examples, subsequent processing by the first wireless device105to prepare the data packet155for transmission will alter the contents of the data packet155prior to transmission. For example, whitening and/or encryption will modify the binary sequence of data of the data packet155and the second wireless device110will reverse the process of whitening and/or encryption upon receipt of the transmitted data packet to obtain the original data162.
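The layout of data packet 155 can be pictured with a small container type. The sketch below is an informal model introduced here for illustration, not a standards-accurate frame format; it emphasizes that the localization tone 165 occupies the leading bytes of payload 160.

```python
# Informal model of data packet 155: the localization tone leads the payload.
from dataclasses import dataclass

@dataclass
class DataPacket:
    header: bytes   # header 175 (e.g., a MAC header)
    tone: bytes     # localization tone 165: the leading bytes of payload 160
    data: bytes     # data 162: the remainder of payload 160
    crc: bytes      # CRC code 170

    @property
    def payload(self) -> bytes:
        return self.tone + self.data  # payload 160 = tone followed by data
```

Because the tone sits at a deterministic position (the start of the payload), the receiving device can always locate it without additional signaling.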
In an example, the whitening is performed by performing an exclusive-OR (XOR) logical operation between each bit of an output of a linear feedback shift register (LFSR) (not shown) and each bit of the data to be whitened, generating whitened data. In some examples, before each transmission from the first wireless device105(e.g., while forming the data packet155), the LFSR is initialized with a value. In at least some examples, the value is a portion of a clock signal (e.g., a master BLUETOOTH® clock). The whitening is applied, in some examples, to the payload160of the data packet155, scrambling the contents of the payload160in a pseudo-random manner defined by the initialization value of the LFSR. Data whitening, particularly in the context of BLUETOOTH® communication, is a standardized process governed by BLUETOOTH® standards to provide for interoperability among BLUETOOTH® capable devices, and therefore further detail regarding data whitening is not included herein for the sake of brevity and ease of understanding. To enable the first wireless device105to transmit the localization tone165to the second wireless device110while maintaining the value for the localization tone165, in some examples the processor125estimates an effect of the whitening and/or the encryption on the localization tone165. The first wireless device105performs the estimation, in at least one example, at an application layer of a communication protocol stack of the first wireless device105. The whitening is performed by the first wireless device105, in at least one example, at a physical layer of the communication protocol stack. For example, the processor125processes the localization tone165to estimate an output of the LFSR based on an initial seed value and a bit in the payload160at which the localization tone165begins. In at least some examples, it is desirable (e.g., necessary under some communication standards, preferred under some operating procedures, efficiently superior to at least some other methods, and/or an optional operational characteristic) for each bit of the localization tone165to be a logical “1” value when transmitted by the first wireless device105, and a sequence of logical “1” values is expected by the second wireless device110to perform the AoA processing. To estimate the effects of the whitening to provide for an output of the whitening to be a series of logical “1” values, in some examples the processor125performs an XOR operation between a set value of a logical “1” and an estimated output of the LFSR for a specific seed value. The seed value is, in some examples, dependent on an advertising channel (e.g., channel 37, 38, or 39) on which the first wireless device105will transmit the data packet155and is a defined, standardized value, the scope of which is not limited herein. By performing the XOR operation, the processor125determines a value for each bit of the localization tone165such that, after the whitening process, the localization tone165will retain the known value expected by the second wireless device110. Prior to performing the whitening process, the processor125replaces the known value in the localization tone165with the values determined according to the estimation such that, after the estimation and prior to the whitening, the localization tone165does not contain the known value. As discussed above, at least some aspects of the estimation process are channel dependent. 
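The whitening estimation can be illustrated concretely. In the sketch below, a 7-bit LFSR with the polynomial x^7 + x^4 + 1 (the polynomial used for BLUETOOTH® whitening) generates the whitening stream for a given advertising channel, and each bit of the desired (known) tone is XOR'd with that stream; because the whitening process applies the same XOR again, the whitened output returns to the known value regardless of the register details. The seeding and bit ordering shown reflect one common reading of the whitening scheme and should be verified against the applicable standard; this is a sketch, not the disclosed implementation.

```python
def whitening_stream(channel_index: int, nbits: int):
    """Yield whitening bits from a 7-bit LFSR, polynomial x^7 + x^4 + 1.

    Seeding (position 0 = 1, positions 1-6 = channel index, MSB first) and
    bit ordering reflect one common reading of the BLE whitening scheme;
    verify against the applicable standard before relying on them.
    """
    reg = [1] + [(channel_index >> (5 - i)) & 1 for i in range(6)]
    for _ in range(nbits):
        out = reg[6]
        yield out
        reg = [out] + reg[:6]  # shift; position 6 feeds back to position 0
        reg[4] ^= out          # x^4 feedback tap

def precompensate_whitening(tone_bits, channel_index: int):
    """XOR the desired tone with the whitening stream; the transmitter's
    whitening XOR then restores the known value on the air."""
    stream = whitening_stream(channel_index, len(tone_bits))
    return [b ^ w for b, w in zip(tone_bits, stream)]

# Example: pre-compensate an all-ones 16-bit tone for advertising channel 37.
modified_tone = precompensate_whitening([1] * 16, 37)
```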
However, the first wireless device105is capable in various examples of transmitting via a plurality of channels (e.g., channels 37, 38, and 39, spanning a range of 2.4 gigahertz (GHz) to 2.48 GHz). Accordingly, estimated values for the localization tone165that are estimated for one channel, in some examples, are not automatically usable with other channels, resulting in estimating the values of the localization tone165for each channel prior to each transmission. In some examples, to mitigate the need for estimating the localization tone165for each channel, the same localization tone165is transmitted on each channel. To enable use of the localization tone165on a channel for which it was not intended (e.g., a localization tone estimated for channel 37 but transmitted on channel 39), the first wireless device105, in some examples, notifies the second wireless device110, prior to transmission of the data packet155, that the localization tone165is estimated for a channel other than the channel on which the data packet155is being transmitted. In other examples, the second wireless device110determines that the localization tone165is estimated for a channel other than the channel on which the data packet155has been received, for example, based at least partially on a marker in the data packet155. In such examples, the first wireless device105inserts a marker in the data packet155(e.g., in the header175, immediately preceding the localization tone165, or any other suitable location in the data packet155) indicating that the localization tone165is estimated for a channel other than the channel on which the data packet155is being transmitted. The marker is subsequently subjected to whitening by the first wireless device105such that, after the second wireless device110receives the data packet155and performs de-whitening, the second wireless device110can determine for which channel the localization tone165was estimated by the first wireless device105. In yet another example, the first wireless device105estimates a localization tone for each channel and includes each estimated localization tone collectively in the data packet155as the localization tone165. In such examples, the first wireless device105further includes data in the data packet155indicating a length, beginning bit location, and/or corresponding channel for each individual localization tone of the localization tone165.
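Building on the whitening sketch above, estimating a tone per advertising channel reduces to a small per-channel table, and the marker can be as simple as one byte naming the channel for which the tone was estimated. The marker encoding below is purely hypothetical; the disclosure leaves the marker's format and location open.

```python
# Sketch: one pre-compensated tone per advertising channel (reusing
# precompensate_whitening from the earlier sketch), plus a hypothetical
# one-byte marker identifying the channel a tone was estimated for.
ADVERTISING_CHANNELS = (37, 38, 39)

tones_by_channel = {
    ch: precompensate_whitening([1] * 16, ch) for ch in ADVERTISING_CHANNELS
}

def build_marker(estimated_channel: int) -> bytes:
    """Hypothetical marker naming the channel a tone was estimated for."""
    return bytes([estimated_channel])
```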
As discussed above, AES begins with a session key, an initialization vector, and a first block of data to be encrypted. The encrypted first block of data is then used as the initialization vector for encrypting a second block of data, the encrypted second block of data is used as the initialization vector for encrypting a third block of data, and so forth until the data is fully encrypted. This provides for a highly random result for the encrypted data. However, to estimate or control the output of the encryption, in at least some examples each of the initialization vector, the session key, and the data to be encrypted must be known. For this reason, in at least some encryption examples, an encryption output estimation can only be determined for a block of data for which values of the block of data, desired values for the encrypted data, the session key, and the initialization vector are known. Accordingly, in at least some examples, the localization tone165is positioned in the payload160preceding the data162, when any data162is included in the payload160, to facilitate estimation of values of the encrypted localization tone165. For example, to determine a value to be used as the localization tone165, in some examples a calculation of Decrypt_AES(IV,SK,array(bit_i)) is used, where Decrypt_AES is an established algorithm according to the AES encryption protocol, IV is the initialization vector, SK is the session key, and bit_i is the data to be encrypted. In some examples, bit_i represents the output of the whitening estimation, discussed above. In other examples, bit_i represents the known value. In some examples, both the whitening estimation discussed above and the encryption estimation are performed to determine the localization tone165(e.g., whitening estimation followed by encryption estimation, or vice versa), while in other examples only one of the whitening estimation or the encryption estimation is performed. In at least some examples, to facilitate ease of processing by the second wireless device110, a length of the localization tone165is equal to a multiple of the block size of the encryption process used to encrypt the data packet155. In some examples, rather than positioning the localization tone165before the data162in the payload160, as shown inFIG.1, the localization tone is located in another portion of the data packet155, such as (although not shown) following the CRC block170. In such examples, the localization tone165is neither encrypted nor a part of an authenticated portion of the data packet155, again exposing the localization tone165to vulnerability from malicious actors. In these examples, there are limited options available for securing the localization tone165. An example of one option for securing the localization tone165includes incorporating additional processing logic into the first wireless device105and/or the second wireless device110to prevent data from repeat data packets from being used for localization. For example, if a first data packet155has been received by the second wireless device110and then the second wireless device110receives a repeat transmission of the data packet155, the second wireless device110will use only data from the first received data packet155for localization. Such a solution involves, in some examples, prior knowledge of the security scheme and software and/or hardware modifications to the second wireless device110.
Another example of an option for securing the localization tone165includes the first wireless device105negotiating and/or agreeing prior to transmission of the data packet155that authentication of the payload160(e.g., via the header175) also applies to the localization tone165, regardless of the location of the localization tone165in the data packet155. At least some of the above examples provide for securing localization tones used for localization of wireless devices. Such securing of the localization tone provides protection against actions of malicious actors, such as man-in-the-middle attacks, that is presently unavailable in the art, while conforming with at least some standards requirements to facilitate backwards compatibility (e.g., by continuing to provide a signal expected by a receiving device for localization despite processing performed on the signal pursuant to at least some of the examples disclosed herein). Referring now toFIG.2, a flowchart of an illustrative method200is shown. In at least some examples, the method200is a method for generating a data packet including a localization tone. The method200is implemented, in at least some examples, by a wireless device, such as the first wireless device105, discussed above with respect toFIG.1. For example, a processor, such as the processor125, discussed above with respect toFIG.1, implements and/or performs at least some operations of the method200. At operation205, a localization tone is obtained. The localization tone, in at least some examples, has the known value discussed above and expected by a receiving device to enable the receiving device to perform localization according to the localization tone. In at least one example, the localization tone comprises or includes a plurality of digital logic "1" values such that the localization tone can be said to be a signal of "all 1's." The localization tone may have any suitable length, the scope of which is not limited herein. In at least some examples, the localization tone has a length optimized for encryption, such as a length that is an integer multiple of a block size of the encryption, as discussed herein. A source of the localization tone is not limited herein. For example, the localization tone may be generated by the wireless device, retrieved from a storage device by the wireless device, or received by the wireless device from another device. At operation210, the localization tone is inserted into a data packet at an application layer of a communication protocol stack. In at least some examples, the localization tone is inserted into the data packet preceding payload data of the data packet. For example, the localization tone is inserted into the payload of the data packet, where a first bit of the localization tone is the first bit of the payload, the last bit of the localization tone immediately precedes the first bit of the payload data, and the last bit of the payload data is the last bit of the payload. In at least some examples, positioning the localization tone at the beginning of the payload and preceding the payload data provides an improvement over positioning the localization tone elsewhere in the payload or in the data packet. For example, positioning the localization tone at the beginning of the payload and preceding the payload data enables encryption of the localization tone that, in at least some examples, is not otherwise possible if the localization tone is located elsewhere in the data packet following preceding data.
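As noted above, placing the tone first is what makes its encryption predictable from the known session key and initialization vector. The Decrypt_AES estimation described earlier can be illustrated with a standard cryptography library: for chained (CBC-style) AES with a known initialization vector and session key, decrypting the desired on-air block yields exactly the plaintext block that encrypts back to it. The sketch below uses the Python cryptography package and treats the tone as the first 16-byte block, as suggested above; it is an illustration of the estimation under those assumptions, not the disclosed implementation, and deployed links may use other cipher modes.

```python
# Sketch: compute the tone value whose encryption equals the known value.
# For AES-CBC with known IV and session key, the pre-image of a desired
# first ciphertext block is simply its CBC decryption.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def precompensate_encryption(session_key: bytes, iv: bytes,
                             desired_block: bytes) -> bytes:
    """Return the 16-byte plaintext block that AES-CBC-encrypts, under the
    given session key and initialization vector, to desired_block."""
    dec = Cipher(algorithms.AES(session_key), modes.CBC(iv)).decryptor()
    return dec.update(desired_block) + dec.finalize()

def check(session_key: bytes, iv: bytes, desired_block: bytes) -> bool:
    """Sanity check: encrypting the pre-image reproduces the desired block."""
    pre = precompensate_encryption(session_key, iv, desired_block)
    enc = Cipher(algorithms.AES(session_key), modes.CBC(iv)).encryptor()
    return enc.update(pre) + enc.finalize() == desired_block
```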
However, positioning the localization tone at the beginning of the payload and preceding the payload data creates an additional technical problem in maintaining the localization tone as the known value when the data packet is transmitted, despite processing being performed on the payload (including the localization tone) that alters the processed data and that would not be performed on the localization tone if the localization tone were located outside of the payload (e.g., after a CRC code of the data packet). At operation215, a modified localization tone is generated at the application layer of a communication protocol stack. The modified localization tone is generated, in some examples, to compensate for at least some effects of the processing that the localization tone is subjected to because of its inclusion in the payload of the data packet. For example, when located after the CRC code of the data packet, the localization tone would not be subject to data whitening. However, when located in the payload of the data packet, the localization tone is subject to data whitening that, in at least some circumstances, will modify values of the localization tone away from the known value, thereby rendering the localization tone unusable for performing localization. Accordingly, to solve this technical problem created by repositioning the localization tone to secure it, the effects of the data whitening are estimated and the modified localization tone is generated, changing the contents of the modified localization tone away from the known value such that, after modification due to data whitening, the whitened modified localization tone returns to the known value. In at least some examples, the effects of the data whitening are estimated by performing an XOR operation between the data of the localization tone (e.g., the known value) and an output of a LFSR. The output of the LFSR, in some examples, is based on a seed value loaded into the LFSR, where the seed value is a defined value depending on an advertising channel on which the data packet will be transmitted. A result of the XOR operation, in at least some examples, is the modified localization tone. For example, a first bit of the localization tone XOR'd with a first output bit of the LFSR becomes a first bit of the modified localization tone, a second bit of the localization tone XOR'd with a second output bit of the LFSR becomes a second bit of the modified localization tone, and so forth. At operation220, the localization tone is replaced in the data packet by the modified localization tone. In at least some examples, an application or other service operating at the application layer of the communication protocol stack replaces the localization tone with the modified localization tone. At operation225, the data packet is transmitted to a receiving device. The data packet is transmitted, in some examples, via a transceiver of the wireless device. In at least some examples, the transmitted data packet includes the known value, for example, beginning at a first bit of the payload of the data packet. In at least some examples, the method200further includes operation230. Operation230is, in some examples, performed after operation220and prior to operation225. At operation230, data whitening is performed. The data whitening is performed, in at least some examples, according to standardized and/or defined procedures, the scope of which is not limited herein.
For example, the data whitening is performed according to a BLUETOOTH® standard such that the data packet having the data whitening is capable of being processed by devices that operate according to the BLUETOOTH® standard. In at least some examples, the BLUETOOTH® standard is the BLUETOOTH® 5.0 standard, as specified by the BLUETOOTH® Special Interest Group, or any subsequent BLUETOOTH® standard incorporated in, incorporating, or expanding upon the BLUETOOTH® 5.0 standard. In at least some examples, the data whitening alters a value of the modified localization tone from values generated according to operation215and inserted into the data packet at operation210to the known value expected by a device receiving the data packet (e.g., such as the value of the localization tone received at operation205). In at least some examples, the method200further includes operation235. Operation235is, in some examples, performed after operation230and prior to operation225. At operation235, a header is appended to the data packet. In some examples, the header is a MAC header. The header, in at least some examples, includes any one or more of an address field, a type field, one or more flag bits (e.g., flow, acknowledgement, and/or sequence flag bits), and/or a checksum field. In other examples, the header contains any suitable information as defined by an applicable communication standard or protocol, the scope of which is not limited herein. In at least some examples, contents of the header are defined according to the BLUETOOTH® standard. In at least some examples, the method200further includes operation240. Operation240is, in some examples, performed after operation220and prior to operation230. At operation240, a CRC code is generated. The CRC code is, in some examples, a numerical value determined as the remainder of a polynomial division of the data packet. For example, the CRC code is a remainder obtained by performing a polynomial division, or other mathematical manipulation, equation, or algorithm, on the data packet. In at least some examples, a device receiving the data packet performs the same polynomial division, or other mathematical manipulation, equation, or algorithm and compares the remainder obtained during that process to the CRC code included in the data packet to perform error detection on the data packet. Referring now toFIG.3, a flowchart of an illustrative method300is shown. In at least some examples, the method300is a method for generating a data packet including a localization tone. The method300is implemented, in at least some examples, by a wireless device, such as the first wireless device105, discussed above with respect toFIG.1. For example, a processor, such as the processor125, discussed above with respect toFIG.1, implements and/or performs at least some operations of the method300. Additionally, at least some aspects of the method300are suitable for combination with the method200, as discussed in greater detail below, for example, where redundant operations of the method200and the method300are omitted and unique operations from one of the method200or method300are added to the other of the method200or the method300. At operation305, a localization tone is obtained. The localization tone, in at least some examples, has the known value, such as previously described above with respect to operation205of method200ofFIG.2. At operation310, the localization tone is inserted into a data packet at an application layer of a communication protocol stack.
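The remainder check of operation 240 can be made concrete with a toy example. The 8-bit polynomial 0x07 below is chosen only for brevity; the width, polynomial, and initial value of a real link's CRC are set by the applicable standard and are not specified by this sketch.

    def crc8(data: bytes, poly: int = 0x07) -> int:
        """Remainder of a bitwise polynomial division (toy CRC-8)."""
        crc = 0
        for byte in data:
            crc ^= byte
            for _ in range(8):
                if crc & 0x80:
                    crc = ((crc << 1) ^ poly) & 0xFF
                else:
                    crc = (crc << 1) & 0xFF
        return crc

    body = b"\xff" * 16 + b"payload data"  # tone followed by payload data
    sent_crc = crc8(body)                  # operation 240: computed by the sender
    # The receiving device repeats the division and compares remainders:
    assert crc8(body) == sent_crc

Returning to the insertion of operation310of method300: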
In at least some examples, the localization tone is inserted into the data packet preceding payload data of the data packet. For example, the localization tone is inserted into the payload of the data packet, where a first bit of the localization tone is the first bit of the payload, the last bit of the localization tone immediately precedes the first bit of the payload data, and the last bit of the payload data is the last bit of the payload. In at least some examples, positioning the localization tone at the beginning of the payload and preceding the payload data facilitates encryption of the localization tone in a manner not otherwise possible if the localization tone is positioned elsewhere in the payload or the data packet. At operation315, a modified localization tone is generated at the application layer of a communication protocol stack. The modified localization tone is generated, in some examples, to compensate for at least some effects of the processing that the localization tone is subjected to during an encryption process. For example, when the localization tone is located outside of the payload, the localization tone is not subjected to encryption. However, when located in the payload of the data packet, the localization tone is subject to encryption that, in at least some circumstances, modifies values of the localization tone away from the known value, thereby rendering the localization tone inoperable for use in performing localization. In at least some examples, the effects of the encryption are estimated by calculating Decrypt_AES(IV,SK,array(bit_i)), where Decrypt_AES is an established algorithm according to the AES encryption protocol, IV is the initialization vector, SK is the session key, and bit_i is the data to be encrypted (e.g., the input to operation315). A result of the Decrypt_AES operation, in at least some examples, is the modified localization tone. For example, a first bit of the result of the Decrypt_AES operation becomes a first bit of the modified localization tone, a second bit of the result of the Decrypt_AES operation becomes a second bit of the modified localization tone, and so forth. In other examples, such as when AES encryption is not used, the effects of the encryption are estimated according to any other suitable means for the form of encryption employed. At operation320, the localization tone is replaced in the data packet by the modified localization tone. In at least some examples, an application or other service operating at the application layer of the communication protocol stack replaces the localization tone with the modified localization tone. In various examples, operation315and operation320are suitable for inclusion in method200ofFIG.2. For example, operation315and operation320are suitable for implementation following operation220and preceding the remaining operations of method200. In such an example, the modified localization tone that has been modified to compensate for effects of data whitening is used as an input to operation315(the “localization tone” of operation315) and the modified localization tone referenced in the remainder of method200is an output of operation320(the “modified localization tone” of operation320). In another example, operation315and operation320are suitable for implementation prior to operation215of method200.
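A minimal sketch of operations 315 and 320 follows, assuming AES in CBC mode purely for illustration (the Decrypt_AES notation above does not fix a mode) and using the third-party Python cryptography package; the key and IV are stand-in values.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    SK = os.urandom(16)  # session key (stand-in value)
    IV = os.urandom(16)  # initialization vector (stand-in value)
    tone = b"\xff" * 16  # known value; length matches the AES block size

    def decrypt_aes(iv: bytes, sk: bytes, data: bytes) -> bytes:
        d = Cipher(algorithms.AES(sk), modes.CBC(iv)).decryptor()
        return d.update(data) + d.finalize()

    def encrypt_aes(iv: bytes, sk: bytes, data: bytes) -> bytes:
        e = Cipher(algorithms.AES(sk), modes.CBC(iv)).encryptor()
        return e.update(data) + e.finalize()

    modified_tone = decrypt_aes(IV, SK, tone)          # operation 315
    assert encrypt_aes(IV, SK, modified_tone) == tone  # encryption restores it

Because encryption and decryption are inverses under the same SK and IV, the encryption of operation 330 maps the modified localization tone back to the known value; this holds for either of the orderings just described, including the example where operation315and operation320run prior to operation215of method200.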
In such an example, the localization tone of operation315is the localization tone inserted into the data packet at operation210of method200and the modified localization tone inserted into the data packet at operation320is the “localization tone” of operation215of method200. At operation325, the data packet is transmitted to a receiving device. The data packet is transmitted, in some examples, via a transceiver of the wireless device. In at least some examples, the transmitted data packet includes the known value, for example, beginning at a first bit of the payload of the data packet. In at least some examples, the method300further includes operation330. Operation330is, in some examples, performed after operation320and prior to operation325. At operation330, encryption is performed. The encryption is performed, in at least some examples, according to standardized and/or defined procedures, the scope of which is not limited herein. For example, the encryption is performed according to an AES standard such that the data packet having the encryption complies with AES security protocols or requirements and is suitable for decryption based on AES decryption schemes. In at least some examples, the encryption alters a value of the modified localization tone from values generated according to operation315and inserted into the data packet at operation310to the known value expected by a device receiving the data packet (e.g., such as the value of the localization tone received at operation305). In at least some examples, the method300further includes operation335. Operation335is, in some examples, performed after operation330and prior to operation325. At operation335, a header is appended to the data packet, for example, as described above with respect to operation235of method200ofFIG.2. In at least some examples, the method300further includes operation340. Operation340is, in some examples, performed after operation335and prior to operation325. At operation340, a CRC code is generated, for example, as described above with respect to operation240ofFIG.2. Referring now toFIG.4, a flowchart of an illustrative method400is shown. In at least some examples, the method400is a method of communicating a data packet having a localization tone located in a payload of the data packet and estimated to compensate for processing of the payload. The method400is implemented, in at least some examples, by a wireless device, such as the first wireless device105, discussed above with respect toFIG.1. For example, a processor, such as the processor125, discussed above with respect toFIG.1, implements and/or performs at least some operations of the method400. At operation405, the processor generates the data packet having the localization tone located in the payload of the data packet. The data packet is generated, in at least some examples, to compensate for effects of data whitening and/or encryption of the payload on the localization tone to maintain the localization tone, after data whitening and/or encryption, as the known value during transmission. For example, the processor generates the data packet according to the method200, described above with respect toFIG.2, and/or the method300, described above with respect toFIG.3. At operation410, the processor communicates with a receiving device to pre-arrange and/or agree on characteristics of transmission of the data packet.
For example, the processor communicates with the receiving device to inform the receiving device that the localization tone included in the data packet is estimated for a first communication channel and the data packet is being (or will be) transmitted on a second communication channel. In at least some examples, transmitting the data packet having the localization tone estimated for the first communication channel while the data packet is transmitted on the second communication channel improves functioning and processing of the wireless device including the processor by reducing processing performed by the processor when the data packet is transmitted on a communication channel other than the first communication channel. In other examples, the communication with the receiving device informs the receiving device that a header (e.g., a MAC header) of the data packet includes authentication that encompasses the localization tone (e.g., when the localization tone follows a CRC code of the data packet). At operation415, the processor instructs a transceiver to transmit the data packet, causing the transceiver to transmit the data packet according to criteria received from the processor. The criteria include, for example, a transmission strength, a communication channel, or any other suitable criteria for controlling transmission of a data packet. Referring now toFIG.5, a block diagram of an illustrative user equipment500is shown. In at least some examples, the user equipment500is any device suitable for implementation as the first wireless device105and/or the second wireless device110, each ofFIG.1, and suitable for performing at least some of the operations prescribed thereto. In at least some examples, the user equipment500is further suitable and/or configured for implementing at least some of the method200, method300, and/or method400, described above. User equipment500is a device (e.g., a computer system, a user equipment, a mobile phone, a beacon, a tablet device, a wearable device, etc.) that generates and transmits a data packet to another computing device, where the data packet includes a localization tone. For example, in at least some embodiments, the user equipment500is at least partially implemented as a wireless device configured to estimate and compensate for effects of data whitening and/or encryption on a localization tone included in a payload of a data packet, as described with respect to system100, method200, method300, and/or method400, for example, according to a computer program product executed on, or by, at least one processor. The user equipment500comprises one or more input devices510. Some of the input devices510can include microphones, keyboards, touchscreens, buttons, toggle switches, cameras, sensors, and/or other devices that allow a user to interact with, and provide input to, the user equipment500. Some other of the input devices510can include downstream ports coupled to a transceiver (Tx/Rx)520, which is a transmitter, a receiver, or a combination thereof. The Tx/Rx520transmits and/or receives data to and/or from other computing devices via at least some of the input devices510. Similarly, the user equipment500comprises a plurality of output devices540. Some of the output devices540can include speakers, a display screen (which may also be an input device such as a touchscreen), lights, or any other device that allows a user to interact with, and receive output from, the user equipment500.
At least some of the output devices540can include upstream ports coupled to another Tx/Rx, wherein the Tx/Rx520transmits and/or receives data from other nodes via the upstream ports. The downstream ports and/or the upstream ports can include electrical and/or optical transmitting and/or receiving components. In another embodiment, the user equipment500comprises one or more antennas (not shown) coupled to the Tx/Rx520. The Tx/Rx520transmits and/or receives data from other computing or storage devices wirelessly via the one or more antennas. A processor530is coupled to the Tx/Rx520and at least some of the input devices510and/or the output devices540and is configured to generate data packets including localization tones compensated for effects of processing that alter values of the localization tones prior to transmission. In an embodiment, the processor530comprises one or more multi-core processors and/or memory modules550, which function as data stores, buffers, etc. The processor530is implemented as a general processor or as part of one or more application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and/or digital signal processors (DSPs). Although illustrated as a single processor, the processor530is not so limited and alternatively comprises multiple processors. The processor530further comprises processing logic configured to execute a data packet generation computer program product560that is configured to generate a data packet as described with respect to system100, method200, method300, and/or method400, discussed above. In at least some examples, the data packet generation computer program product560generates the data packet by replacing a localization tone of the data packet with a modified localization tone not having the value expected by a receiving device, to compensate for effects of data whitening and/or encryption on the data packet. FIG.5also illustrates that a memory module550is coupled to the processor530and is a non-transitory medium configured to store various types of data. Memory module550comprises memory devices including secondary storage, read-only memory (ROM), and random access memory (RAM). The secondary storage is typically comprised of one or more disk drives, optical drives, solid-state drives (SSDs), and/or tape drives and is used for non-volatile storage of data and as an over-flow storage device if the RAM is not large enough to hold all working data. The secondary storage is used to store programs that are loaded into the RAM when such programs are selected for execution. The ROM is used to store instructions and perhaps data that are read during program execution. The ROM is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of the secondary storage. The RAM is used to store volatile data and perhaps to store instructions. Access to both the ROM and RAM is typically faster than to the secondary storage. The memory module550can also be used to house the instructions for carrying out the various embodiments described herein. For example, the memory module550may comprise the data packet generation computer program product560, which is executed by processor530.
It is understood that by programming and/or loading executable instructions onto the user equipment500, at least one of the processor530and/or the memory module550is changed, transforming the user equipment500in part into a particular machine or apparatus having the novel functionality taught by the present disclosure. It is fundamental to the electrical engineering and software engineering arts that functionality that can be implemented by loading executable software into a computer can be converted to a hardware implementation by design rules well-known in the art. Decisions between implementing a concept in software versus hardware typically hinge on considerations of stability of the design and number of units to be produced rather than any issues involved in translating from the software domain to the hardware domain. Generally, a design that is still subject to frequent change may be preferred to be implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design. Generally, a design that is stable and will be produced in large volume may be preferred to be implemented in hardware (e.g., in an ASIC) because for large production runs the hardware implementation may be less expensive than software implementations. Often a design may be developed and tested in a software form and then later transformed, by design rules well-known in the art, to an equivalent hardware implementation in an ASIC that hardwires the instructions of the software. In the same manner as a machine controlled by a new ASIC is a particular machine or apparatus, likewise a computer that has been programmed and/or loaded with executable instructions may be viewed as a particular machine or apparatus. In the foregoing discussion, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . .” Also, the term “couple” or “couples” is intended to mean either an indirect or direct wired or wireless connection. Thus, if a first device, element, or component couples to a second device, element, or component, that coupling may be through a direct coupling or through an indirect coupling via other devices, elements, or components and connections. Similarly, a device, element, or component that is coupled between a first component or location and a second component or location may be through a direct connection or through an indirect connection via other devices, elements, or components and/or couplings. A device that is “configured to” perform a task or function may be configured (e.g., programmed and/or hardwired) at a time of manufacturing by a manufacturer to perform the function and/or may be configurable (or re-configurable) by a user after manufacturing to perform the function and/or other additional or alternative functions. The configuring may be through firmware and/or software programming of the device, through a construction and/or layout of hardware components and interconnections of the device, or a combination thereof. Furthermore, a circuit or device that is said to include certain components may instead be configured to couple to those components to form the described circuitry or device.
For example, a structure described as including one or more semiconductor elements (such as transistors), one or more passive elements (such as resistors, capacitors, and/or inductors), and/or one or more sources (such as voltage and/or current sources) may instead include only the semiconductor elements within a single physical device (e.g., a semiconductor die and/or integrated circuit (IC) package) and may be configured to couple to at least some of the passive elements and/or the sources to form the described structure either at a time of manufacture or after a time of manufacture, for example, by an end-user and/or a third-party. While certain components are described herein as being of a particular process technology (e.g., field effect transistor (FET), metal oxide semiconductor FET (MOSFET), n-type, p-type, etc.), these components may be exchanged for components of other process technologies (e.g., replace FET and/or MOSFET with bi-polar junction transistor (BJT), replace n-type with p-type or vice versa, etc.), with circuits including the replaced components reconfigured to provide desired functionality at least partially similar to functionality available prior to the component replacement. Additionally, uses of the phrase “ground voltage potential” in the foregoing discussion are intended to include a chassis ground, an Earth ground, a floating ground, a virtual ground, a digital ground, a common ground, and/or any other form of ground connection applicable to, or suitable for, the teachings of the present disclosure. Unless otherwise stated, “about”, “approximately”, or “substantially” preceding a value means +/−10 percent of the stated value. The above discussion is meant to be illustrative of the principles and various examples of the present disclosure. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the present disclosure be interpreted to embrace all such variations and modifications.
11943622
DETAILED DESCRIPTION In the following description, certain specific details are set forth in order to provide a thorough understanding of various disclosed embodiments. However, one skilled in the relevant art will recognize that embodiments may be practiced without one or more of these specific details, or with other methods, components, materials, etc. Unless the context requires otherwise, throughout the specification and claims which follow, the word “comprise” and variations thereof, such as “comprises” and “comprising,” are to be construed in an open, inclusive sense, that is, as “including, but not limited to.” Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. The use of ordinals such as first, second and third does not necessarily imply a ranked sense of order, but rather may only distinguish between multiple instances of an act or structure. The headings and Abstract of the Disclosure provided herein are for convenience only and do not interpret the scope or meaning of the embodiments. FIG.1is a schematic view of a system100for managing remote control units and paired devices, according to various embodiments as disclosed herein. Specifically, remote control unit104is paired to device A102, according to one non-limiting illustrated embodiment. The device A102may be any electronic device that has a functionality that may be made available to a user. For example, the device A102may be one or any combination of: a media device (e.g., any electronic device that receives and/or stores and plays video and/or audio); a receiving device (e.g., cable and/or satellite set-top box or a radio); a television; a digital versatile disk (DVD) player and/or recorder; a digital video recorder (DVR); a music player; a desktop computer; a mainframe computer; a server; a notebook computer; a tablet device; a video game console; an electronic game; a gaming device; an electronic educational device; an electronic children's toy; an electronic book reader; an entertainment system and/or device; an electronic locking device; a remote control device; a network appliance; a home appliance; an office appliance; a home security system device; a watch; a vehicle head unit, deck, stereo, navigation system and/or other electronic media system of a vehicle; a mobile communications and/or processing device having a handheld form factor (e.g., cellular phones, personal digital assistants (PDAs), Blackberry® devices, iPhone® devices, Android® devices, smartphones, cellular enabled laptop computers, netbook computers and/or tablet devices); or the like. In various embodiments, the device A102is able to communicate with remote control unit104directly over wireless connection108aand wireless connection108b.
For example, in some embodiments, the type of wireless communication connection108amay be a non-line-of-sight connection (a connection that does not require a line of sight between the remote control unit104and the device A102to communicate with the device A102), such as a short-range radio wireless connection including, but not limited to, one or more of: a wireless point-to-point connection; a radio frequency identification (RFID) connection; a near field communication (NFC) connection; a Bluetooth® connection; a wireless universal serial bus (USB) connection; a Z-Wave connection according to the ITU-T G.9959 specification or applicable variations thereof; a ZigBee connection according to the IEEE 802.15 specification or applicable variations thereof; a wireless home area network (HAN) connection (e.g., such as that based on the IEEE 802.11 specification or other applicable wireless standards); a wireless body area network connection (WBAN); a wireless personal area network (WPAN) connection, such as that based on the standard IEEE 802.15 specification or variations thereof; a Wi-Fi connection such as that based on IEEE 802.11 specification or variations thereof; and/or variations of such connections and applicable wireless protocol standards thereof. In some embodiments, the type of wireless communication connection108bmay be a wireless line-of-sight connection, for example, an infrared connection, such as an Infrared Data Association (IrDA) connection according to the applicable IrDA specifications or applicable variations thereof. In such embodiments, the remote control unit104may send commands as infrared signals to device A102via the connection108bwithout being paired to device A102. In the present example embodiments, remote control unit104may send commands via both the non-line-of-sight connection108aand the line-of-sight connection108b. The device A102will then receive the commands via corresponding communications module(s) that include a corresponding receiver and/or transceiver and networking interface(s) configured to receive and process commands via the non-line-of-sight connection (e.g., Bluetooth®) and line-of-sight connection (e.g., infrared). FIG.2is a schematic view of a system200for managing remote control units and paired devices in a scenario where the remote control unit104is moved to a different room, according to various embodiments as disclosed herein. As shown inFIG.2, the remote control unit104has now been moved to a different room that has a different device B202. For example, the device A102may be separated by a wall or other barrier210from device B202, such that the barrier210now blocks or prevents the line-of-sight signals (e.g., infrared signals) sent via the line-of-sight connection108bfrom the remote control unit104from being received by device A102. However, the remote control unit104may still be able to communicate through the barrier210with device A102via non-line-of-sight connection108a(e.g., Bluetooth® connection), and thus, device A102may still be able to receive such commands sent via the non-line-of-sight connection108aand be controlled by remote control unit104. This may cause problems and be very confusing for the end user. However, having been paired with device A102, the remote control unit104also sends, for each command sent via the line-of-sight connection108b, a code unique to and associated with the current pairing of the remote control unit104to device A102.
This code is sent via the line-of-sight connection108b, which is different than the wireless medium (non-line-of-sight connection108a) over which the pairing occurred, but such commands sent via the line-of-sight connection108bare no longer being received by device A102due to device A102being in the other room behind the wall or other barrier210. Device B202then receives such commands sent from the remote control unit104via the line-of-sight connection108balong with the code unique to and associated with the current pairing of the remote control unit104to device A102. Device B202may extract the code from the command (or otherwise receive the code) and compare the code to one or more previously stored codes associated with one or more respective remote control units currently paired to device B202(if any). If device B202determines the extracted code does not match any previously stored code associated with a remote control unit currently paired to the device, then device B may determine that the command is not from a remote control unit currently paired with device B202based on that determination. If device B202determines the extracted code does match a previously stored code associated with a remote control unit currently paired to the device, then device B may determine that the command is from a remote control unit currently paired with device B202based on that determination and then immediately execute the command. If device B202determines that the command is not from a remote control unit currently paired with device B202, then device B202may initiate pairing of the remote control unit104to device B202in response to that determination. In some embodiments, the pairing process with device B202includes first causing the remote control unit104to unpair from device A102. Then the command may be executed by device B202after the pairing (or in some instances the command may first be executed by device B202and the pairing process will then be initiated). In some embodiments, the command will be executed only if received via the line-of-sight connection108band after pairing via the non-line-of-sight connection108a. In some embodiments, sending the command both via the line-of-sight connection108band the non-line-of-sight connection108aautomatically handles the scenario when connectivity of the non-line-of-sight connection108ahas issues because of low-battery or a faulty module (e.g., faulty Bluetooth® module), as the device B202may still execute the command received via the line-of-sight connection108b(e.g., infrared signal) even though the command may not have been received via the non-line-of-sight connection108a. Device B202may then (or as part of the pairing process) communicate a code to the remote control unit104that is unique to and is associated with the pairing of the remote control unit104to device B202. The remote control unit104will then send the code along with, embedded with, or otherwise associated with each remote control command sent going forward via a wireless medium different than the wireless medium over which the pairing occurred (e.g., via line-of-sight connection108b). Device B202may then store the code communicated to the remote control unit104for future comparison with codes of additional commands sent to device B202via a line-of-sight connection to determine whether the additional commands are from one or more remote control units currently paired with device B202. 
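A minimal Python sketch of the device-side behavior just described follows; all names are hypothetical, and the 4-byte code size and framing are arbitrary illustrative choices rather than values taken from this disclosure.

    import secrets

    stored_codes: set[bytes] = set()  # codes of remotes currently paired here

    def new_pairing_code() -> bytes:
        """Generate, and verify as unique, a code for one pairing."""
        while True:
            code = secrets.token_bytes(4)  # illustrative size
            if code not in stored_codes:
                return code

    def handle_line_of_sight_command(command: bytes, code: bytes) -> None:
        """Compare the extracted code against stored codes and act."""
        if code in stored_codes:
            execute(command)  # from a remote currently paired to this device
        else:
            # Not currently paired here: initiate pairing over the
            # non-line-of-sight link and issue a fresh pairing code.
            fresh = new_pairing_code()
            stored_codes.add(fresh)         # stored for future comparisons
            send_code_to_remote(fresh)      # hypothetical transport call
            execute(command)                # or execute first, then pair

    def execute(command: bytes) -> None:
        print("executing:", command)

    def send_code_to_remote(code: bytes) -> None:
        print("pairing initiated; issued code:", code.hex())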
In some embodiments, the remote control unit104and/or the corresponding device may generate the code such that it is unique to and is associated with the pairing of the specific remote control unit to the specific device. For example, both the remote control unit104and/or the corresponding device may verify that a code is unique to and is associated with the pairing of the specific remote control unit to the specific device. This same process may be implemented on multiple different devices and corresponding remote control units in various different rooms, such that when a remote control unit is moved to a new room it will seamlessly pair with the device in that new room to operate that device and cease controlling the device in the previous room until the remote control unit is moved back to the previous room. In various embodiments, the system200for managing remote control units and paired devices may include additional elements beyond those shown inFIG.2, such as in embodiments including multiple remote control units and additional devices in additional different rooms in a building. FIG.3is a schematic view of a controller306in a system for managing remote control units and paired devices, according to various embodiments as disclosed herein. In various embodiments, the controller306is an example of a controller that may be of a device300, such as device A102or device B202and/or a remote control unit, such as remote control unit104. Thus, in various example embodiments, device300may be an example of and/or represent device A102, device B202and/or remote control unit104. The controller306includes a microprocessor310, a communications module308, and a power interface manager320connected via one or more buses318(only one illustrated). The controller306performs, or causes to be performed, various operations of the system200described herein. For example, the controller306is configured to use the communications module308to wirelessly receive information directly from the external remote control unit104over the corresponding wireless connections108aand108b(shown inFIG.1andFIG.2) and make a determination whether a command received via connection108bis from a remote control unit currently paired with the device, and then determine whether to initiate pairing of the remote control unit to the device in response to the determination of whether the command is from a remote control unit currently paired with the device. In some embodiments, the microprocessor310of the controller306may also be that which controls other functions of device A102and/or device B202. Additionally, in some embodiments, an equivalent controller306, or applicable modules thereof, may also be present in remote control unit104to cause the functions described herein of the remote control unit104to be performed. The microprocessor310, for example, is a microprocessor, microcontroller, programmable logic controller (PLC), CPU programmable gate array (PGA), application specific integrated circuit (ASIC) or another controller capable of receiving signals from various sensors, performing logical operations, and sending signals to various components. Typically, the microprocessor310may take the form of a CPU microprocessor of the type used in a computer, such as those made by INTEL, AMD, and the like.
The controller306may also include one or more non-transitory processor- or computer-readable storage media, for example, read only memory (ROM)312, random access memory (RAM)314, and other storage316(e.g., solid-state storage media such as flash memory or EEPROM, spinning storage media such as hard disk). The non-transitory computer-readable storage media312,314,316may be in addition to any non-transitory storage medium (e.g., registers) which is part of the microprocessor310. The controller306may include one or more buses318(only one illustrated) coupling various components together, for example, one or more power buses, instruction buses, data buses, etc. As illustrated, the ROM312, or some other one of the non-transitory processor- or computer-readable storage media312,314,316, stores instructions and/or data or values for variables or parameters. The sets of data may take a variety of forms, for example, a lookup table, a set of records in a database, etc. The instructions and sets of data or values are executable by the microprocessor310. Execution of the instructions and sets of data or values causes the microprocessor310to perform specific acts to cause the controller306to generate control signals to, as applicable: use the communications module308to wirelessly receive information directly from the external remote control unit104and/or other remote control unit106over the corresponding wireless connections108aand108b(shown inFIG.1andFIG.2); determine whether the command is from a remote control unit currently paired with the device; determine whether to initiate pairing of the remote control unit to the device in response to the determination of whether the command is from a remote control unit currently paired with the device; execute the command; and other functionalities of the system200as described herein. Performance of specific operations caused by the controller306is described herein and also below with reference to various flow diagrams (shown inFIGS.4-7). The microprocessor310may use RAM314for volatile storage of instructions, data, etc. The microprocessor310may use other storage316to log or retain information, for example, information including, but not limited to: wirelessly received information from the remote control unit104and/or other remote control units106directly over the corresponding wireless connections108aand108b; codes unique to and associated with corresponding pairings of the device to specific remote control units; user credentials such as user name and passwords, other codes, a security key, an identification number, a time-based code, a combination, biometric data, an encryption key, an encrypted key, computer executable instructions; etc. The instructions are executable by the microprocessor310to control operation of the controller306in response to input from remote systems such as those of the remote control unit104. The controller306may also receive signals from various sensors, transmitters, transceivers, and/or components of the remote control unit104via the communications module308. This information may include information that characterizes or is indicative of the authenticity, authorization level, operation, status, and/or condition of such components, the remote control unit104and/or other remote control units.
The communications module308may include one or more communications modules or components which facilitate communications with the various components of the remote control unit104and other remote control units, such that data may be exchanged between the remote control unit104and the device300for authentication purposes. The communications module308may additionally provide wired communications, such as communications which may occur between the device300and other devices, such as receiving devices, network equipment and other media devices. The communications module308may include one or more ports, wireless receivers, wireless transmitters or wireless transceivers to provide wireless signal paths to the remote control unit104and/or various other remote components or systems. The communications module308may, for example, include components enabling communication over a short-range wireless connection including, but not limited to, one or more of: a wireless point-to-point connection; a radio frequency identification (RFID) connection; a near field communication (NFC) connection; a Bluetooth® connection; an Infrared Data Association (IrDA) connection according to the applicable IrDA specifications or applicable variations thereof; a wireless universal serial bus (USB) connection; a Z-Wave connection according to the ITU-T G.9959 specification or applicable variations thereof; a ZigBee connection according to the IEEE 802.15 specification or applicable variations thereof; a wireless home area network (HAN) connection (e.g., such as that based on the IEEE 802.11 specification or other applicable wireless standards); a wireless body area network connection (WBAN); a wireless personal area network (WPAN) connection, such as that based on the standard IEEE 802.15 specification or variations thereof; a Wi-Fi connection such as that based on IEEE 802.11 specification or variations thereof; and/or variations of such connections and applicable wireless protocol standards thereof. The communications module308may include one or more modems or one or more Ethernet or other types of communication cards or components for enabling network communications as applicable. The communications module308may include one or more modules suitable to handle network traffic including packet-switched communications protocols (TCP/IP), Ethernet or other networking protocols. In some embodiments, some or all of the components of the controller306may be located outside of the device300as a separate device that authenticates, verifies or otherwise controls other security functions of the device300. Also, the communications module308may be configured to provide encrypted communications over the connections108a,108b. In some embodiments, a separate communications module (not shown) of the device300is configured for and responsible for communications over the other networks (e.g., the Internet). The power interface manager320is controllable by the microprocessor310and is configured to provide power to the controller306from either a built-in battery (not shown) or an external power source. FIG.4is a flow diagram showing a method400in a system for managing remote control units and paired devices, according to various embodiments as disclosed herein. A process of the method400starts at402. At404, the system100receives a command from a remote control unit via a communications module. At406, the system100determines whether the command is from a remote control unit currently paired with a device of the system.
If the system100determines that the command is from a remote control unit currently paired with the device, then the process proceeds to410, where the command is executed. If the system100determines that the command is not from a remote control unit currently paired with the device, then the process proceeds to408. At408, the system100initiates pairing of the remote control unit to the device in response to the determination that the command is not from a remote control unit currently paired with the device. The process then proceeds to410, where the command is executed (or in some instances the command may first be executed and the pairing process will then be initiated). The process then ends at412. FIG.5is a flow diagram showing a method500for determining whether a command is from a currently paired remote control unit that is useful in the method ofFIG.4, according to various embodiments as disclosed herein. A process of the method500starts at502. At504, the system100extracts a code from the command. At506, the system100determines whether the extracted code matches one or more previously stored codes associated with one or more respective remote control units currently paired to a device of the system. If the system100determines that the extracted code does not match any previously stored codes associated with one or more respective remote control units currently paired to the device of the system, then the process proceeds to508. If the system100determines that the extracted code does match a previously stored code associated with one or more respective remote control units currently paired to the device of the system, then the process proceeds to510. At508, the system100determines that the command is not from a remote control unit currently paired with the device based on the determination that the extracted code does not match any previously stored code associated with a remote control unit currently paired to the device. At510, the system100determines that the command is from a remote control unit currently paired with the device based on the determination that the extracted code matches one or more previously stored codes associated with one or more respective remote control units currently paired to the device. The process then ends at512. FIG.6is a flow diagram showing a method600of a remote control unit in a system for managing remote control units and paired devices, according to various embodiments as disclosed herein. A process of method600starts at602. At604, the remote control unit104, for each command sent from the remote control unit via the communications module, sends a code unique to and associated with a pairing of the remote control unit to a first device (e.g., device A102) controlled by the remote control unit, the code sent via a wireless medium different than a wireless medium over which the pairing occurs. At606, the remote control unit104pairs with a second device (e.g., device B202) in response to the second device receiving the code unique to and associated with a pairing of the remote control unit to the first device. At608, the remote control unit104unpairs the remote control unit from the first device. At610, the remote control unit104, after unpairing the remote control unit from the first device, for each command sent from the remote control unit via the communications module, sends a code unique to and associated with the pairing of the remote control unit to the second device. The process then ends at612.
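A companion sketch to the device-side logic shown earlier illustrates the remote control unit's behavior per method 600; the class name, framing, and code values are illustrative assumptions.

    class RemoteControlUnit:
        def __init__(self, initial_code: bytes) -> None:
            self.pairing_code = initial_code  # code of the current pairing

        def send_command(self, command: bytes) -> bytes:
            """604/610: each line-of-sight command carries the code of
            the current pairing (illustrative framing: code, then command)."""
            return self.pairing_code + command

        def pair_with_new_device(self, new_code: bytes) -> None:
            """606/608: pair with the second device and drop the old
            pairing; subsequent commands carry the new code."""
            self.pairing_code = new_code

    remote = RemoteControlUnit(initial_code=b"\x01\x02\x03\x04")
    assert remote.send_command(b"VOL+").startswith(b"\x01\x02\x03\x04")
    remote.pair_with_new_device(new_code=b"\xaa\xbb\xcc\xdd")
    assert remote.send_command(b"VOL+").startswith(b"\xaa\xbb\xcc\xdd")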
FIG.7is a flow diagram showing a method700for pairing of a remote control unit in a system for managing remote control units and paired devices, according to various embodiments as disclosed herein. A process of method700starts at702. At704, the system100initiates the pairing of the remote control unit to a device in response to a determination that the command is not from a remote control unit currently paired with the device. At706, the system100communicates a code to the remote control unit that is unique to and is associated with the pairing of the remote control unit to the device to enable the remote control unit to send the code with commands via a wireless medium different than a wireless medium over which the pairing occurs. At708, the system100stores the code communicated to the remote control unit for future comparison with codes of additional commands sent to the device, via a wireless medium different than the wireless medium over which the pairing occurs, to determine whether the additional commands are from one or more remote control units currently paired with the device. The process then ends at710. The various methods described herein may include additional acts, omit some acts, and/or may perform the acts in a different order than set out in the various flow diagrams. The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, schematics, and examples. Insofar as such block diagrams, schematics, and examples contain one or more functions and/or operations, it will be understood by those skilled in the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, the present subject matter may be implemented via one or more microcontrollers. However, those skilled in the art will recognize that the embodiments disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits (e.g., Application Specific Integrated Circuits or ASICs), as one or more computer programs executed by one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs executed by one or more controllers (e.g., microcontrollers), as one or more programs executed by one or more processors (e.g., microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of ordinary skill in the art in light of the teachings of this disclosure. When logic is implemented as software and stored in memory, logic or information can be stored on any non-transitory computer-readable medium for use by or in connection with any processor-related system or method. In the context of this disclosure, a memory is a non-transitory computer- or processor-readable storage medium that is an electronic, magnetic, optical, or other physical device or means that non-transitorily contains or stores a computer and/or processor program.
Logic and/or the information can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions associated with logic and/or information. In the context of this disclosure, a “computer-readable medium” can be any physical element that can store the program associated with logic and/or information for use by or in connection with the instruction execution system, apparatus, and/or device. The computer-readable medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device. More specific examples (a non-exhaustive list) of the computer readable medium would include the following: a portable computer diskette (magnetic, compact flash card, secure digital, or the like), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory), a portable compact disc read-only memory (CDROM), and digital tape. The various embodiments described above can be combined to provide further embodiments. The above description of illustrated embodiments, including what is described in the Abstract of the Disclosure, is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. Although specific embodiments and examples are described herein for illustrative purposes, various equivalent modifications can be made without departing from the spirit and scope of the disclosure, as will be recognized by those skilled in the relevant art in light of the disclosure. These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
11943623
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS FIG.1shows a schematic depiction of a wireless communication link between a first apparatus1and a second apparatus2. The first apparatus1and the second apparatus2may in principle be arbitrary apparatuses which communicate with one another by way of a wireless communication link, such as for example Bluetooth or another low-power radio link. For example, this may involve an audio source, such as an MP3 player, in particular a smartphone or similar, which transmits the audio data as radio signals to a playback apparatus, for example a loudspeaker or a headphone. It should be understood, however, that the present invention is not restricted to radio links for audio transmission, but rather is applicable to arbitrary data transmissions. In order to ensure that the data are exchanged between the correct, authorized apparatuses, before the first setting up of the data link the two apparatuses1,2are coupled with one another. In this process, the second apparatus2authenticates itself to the first apparatus1(or vice versa). After successful coupling between two apparatuses, at a later point in time a data link may be set up automatically between the two apparatuses as soon as the two apparatuses1,2are situated within radio range of one another and are switched on. If, in the example given above of an audio source and a playback apparatus, several playback apparatuses are within radio range of the audio source, the audio source will link with the first available or identified playback apparatus. If, however, the audio playback is to take place over another playback apparatus, the possibly automatically established link to the original playback apparatus has to be disconnected and subsequently a new link established to the desired playback apparatus. To this end, it has to be possible to distinguish the individual apparatuses from one another securely and reliably. FIG.2shows a schematic depiction of a block diagram of a system for wireless information exchange between two apparatuses1,2. A first apparatus1comprises a communication module11for sending and receiving radio signals. The first apparatus1further comprises a motion sensor12. The motion sensor12may be an arbitrary suitable component which is able to detect movements, in particular movements of the first apparatus1in three-dimensional space. For example, the motion sensor12may detect the movement by way of an acceleration sensor, a magnetic field sensor, a gyroscope, or the like. The motion sensor12may thereupon provide sensor data which correspond to the detected movements of the first apparatus1. These sensor data may be transmitted by way of the communication module11to the second apparatus2. Alternatively, the motion sensor12may also process the detected movement and provide the result of this processing to the communication module11for transmission to the second apparatus2. For example, information such as maximum and/or minimum acceleration within a predetermined time interval, information concerning spatial positions at predetermined points in time and/or within certain time intervals, or arbitrary other suitable information may be ascertained and provided. Furthermore, the motion sensor12may also compare the ascertained motion patterns with predetermined motion patterns, for example motion patterns stored in the motion sensor12, and provide the result of this comparison to the communication module11.
For example, the motion sensor12may identify a specified motion pattern and output an appropriate signal when this motion pattern has been identified. In principle, it is also possible for the motion sensor12to identify one of several specified motion patterns and, where appropriate, to provide information about the respectively identified motion pattern to the communication module11. The second apparatus2may likewise comprise a communication module21and a motion sensor22. The communication module21may for example receive the information concerning the motion pattern of the first apparatus1sent out by the communication module11of the first apparatus1. The motion sensor22of the second apparatus2may detect a movement of the second apparatus2and provide appropriate sensor data. Alternatively, the second motion sensor22may also compare the detected motion pattern of the second apparatus2with one or several predetermined motion patterns and provide appropriate information concerning an identified previously stored motion pattern. Such predetermined motion patterns may, for example, comprise a circling movement, a rotation of the apparatus, a figure eight movement or similar, a shaking of the apparatus, or tapping movements, or similar. Of course, arbitrary other suitable predetermined motion patterns are also possible. Once the communication module21of the second apparatus2has received information concerning the motion pattern of the first apparatus1, the received information of the motion pattern of the first apparatus may be compared in the second apparatus2with a detected motion pattern of the second apparatus. For example, this comparison may take place within the communication module21of the second apparatus. Furthermore, in principle the comparison may also take place in a separate control unit or similar. If a relationship is identified between the motion pattern of the first apparatus1and the detected motion pattern of the second apparatus2, then a coupling, for example an authentication, may thereupon take place between the first apparatus1and the second apparatus2. The first apparatus1may authenticate itself to the second apparatus2through such coupling. Subsequently, it is possible for a data exchange to take place between the first apparatus1and the second apparatus2. If the second apparatus2has previously set up a data link with a further apparatus (not depicted here), then this data link may be terminated once a match has been detected through the comparison of the motion patterns of the first apparatus1and of the second apparatus2. Alternatively, after detecting a match in the motion patterns, the coupling between the first apparatus1and the second apparatus2may be added to the already existing further couplings. For coupling the two apparatuses1,2, the two apparatuses1,2may for example be taken by a user in the same hand and thus moved together with one another. In this way the motion sensors12,22of the first and second apparatus1,2may simultaneously detect the same motion pattern. If, therefore, it is established in the second apparatus2that the two detected motion patterns match both temporally and in their form, then a coupling between the two apparatuses1,2may thereupon take place. Alternatively, other types of motion patterns are also possible. For example, it is also possible that the two apparatuses to be coupled1,2are tapped against one another.
Accordingly, the two apparatuses1,2will first execute opposite movements, or one of the two apparatuses remains at its spatial position while the other apparatus is moved towards it. At the point in time of the tapping against each other, an acceleration may then be detected by each of the motion sensors12,22, wherein the acceleration exhibits an opposite sign in at least one spatial direction. Such a motion pattern is depicted as an example inFIG.3. Moreover, it is also possible that the user moves first one of the two apparatuses1,2and subsequently the other apparatus, one after the other. In so doing, the user may execute at least approximately the same movement with both apparatuses. For example, a user may first move the first apparatus1and subsequently execute the same movement with the second apparatus2. In this case, the two motion patterns may be compared with one another, wherein a temporal offset between the two detected motion patterns is permissible. Such a course of motion patterns is depicted for example inFIG.4. Moreover, of course arbitrary other types of motion patterns are also possible. For example, predefined motion patterns may be executed by a user for the coupling, which are for example indicated by way of a text label on the respective apparatuses, or which are specified by way of a separate medium, for example the operating instructions or similar. Furthermore, it is also possible that the first and/or second apparatus is initially in a standby or passive mode, and the respective apparatus is only activated by a predetermined motion pattern in order to initiate a subsequent checking of the motion patterns for a coupling procedure. Moreover, in the individual apparatuses1,2too, different motion patterns may be specified which have to be identified during the coupling procedure. Moreover, it is also possible to further increase the reliability of a coupling procedure by checking additional authentication information. For example, further authentication information may be output by the first apparatus1by way of a suitable transmitting element. This further authentication information may be received by an appropriate receiver in the second apparatus2. For example, the first apparatus1may emit an acoustic and/or optical signal, which is received and checked by the second apparatus2. FIG.5shows a flowchart of a method for coupling two electronic apparatuses according to one specific embodiment. In step S1, a motion pattern may be detected in a first apparatus1. This detected motion pattern, or information which characterizes the detected motion pattern, may in step S2be transmitted to a second apparatus2. In step S3, a motion pattern may be detected in the second apparatus2. This detected motion pattern may optionally already be processed in step S4. For example, the detected motion pattern of the second apparatus2may be compared with one or several predetermined motion patterns. In step S5, the detected motion pattern of the first apparatus1, or information which characterizes this detected motion pattern, is compared with the detected motion pattern of the second apparatus and/or with information which characterizes this motion pattern of the second apparatus. 
In step S6, it is checked whether the detected motion pattern of the first apparatus and the detected motion pattern of the second apparatus satisfy a predetermined coupling condition in accordance with the previously performed comparison. If the predetermined coupling condition is not satisfied, the method is terminated in step S7without a coupling taking place. If, on the contrary, the predetermined coupling condition is satisfied, then in step S8a coupling between the two apparatuses1,2may be activated. In summary, the present invention relates to a coupling of two electronic apparatuses for a wireless information exchange. In particular, the coupling is authenticated through the evaluation of motion patterns previously executed by the apparatuses.
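To illustrate the comparison and coupling condition of steps S5, S6, and S8, the following Python sketch compares two uniformly sampled acceleration traces by normalized cross-correlation, permitting the temporal offset described in connection with FIG.4. The data layout, the correlation measure, the offset window, and the threshold are illustrative assumptions only, not the claimed method.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class MotionPattern:
    """Uniformly sampled acceleration magnitudes from a motion sensor."""
    acceleration: np.ndarray


def coupling_condition(first: MotionPattern, second: MotionPattern,
                       max_offset: int = 50, threshold: float = 0.8) -> bool:
    """Steps S5/S6: compare the two detected motion patterns, allowing a
    temporal offset between them, and test the coupling condition."""
    a = first.acceleration - first.acceleration.mean()
    b = second.acceleration - second.acceleration.mean()
    a = a / (np.linalg.norm(a) or 1.0)           # guard against all-zero traces
    b = b / (np.linalg.norm(b) or 1.0)
    corr = np.correlate(a, b, mode="full")       # every relative time shift
    zero_lag = len(b) - 1                        # index of zero offset
    lo = max(zero_lag - max_offset, 0)
    window = corr[lo:zero_lag + max_offset + 1]  # permissible offsets only
    return float(window.max()) >= threshold      # step S6

# Step S8 would activate the coupling only when the condition holds:
rng = np.random.default_rng(0)
trace = rng.standard_normal(200)
print(coupling_condition(MotionPattern(trace),
                         MotionPattern(np.roll(trace, 10))))  # True
```

A tapping pattern as in FIG.3 could be handled analogously by correlating against the negated trace of the other apparatus, since the detected accelerations exhibit opposite signs.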
11,087
11943624
DETAILED DESCRIPTION Representative applications of methods and apparatus according to the present application are described in this section. These examples are being provided solely to add context and aid in the understanding of the described embodiments. It will thus be apparent to one skilled in the art that the described embodiments may be practiced without some or all of these specific details. In other instances, well known process steps have not been described in detail in order to avoid unnecessarily obscuring the described embodiments. Other applications are possible, such that the following examples should not be taken as limiting. In the following detailed description, references are made to the accompanying drawings, which form a part of the description and in which are shown, by way of illustration, specific embodiments in accordance with the described embodiments. Although these embodiments are described in sufficient detail to enable one skilled in the art to practice the described embodiments, it is understood that these examples are not limiting; other embodiments may be used, and changes may be made without departing from the spirit and scope of the described embodiments. These and other embodiments are discussed below with reference toFIGS.1A through9; however, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes only and should not be construed as limiting. FIG.1Aillustrates a block diagram of different components of a system100that includes i) a mobile wireless device102, which can also be referred to as a wireless device, a mobile device, a user equipment (UE), a device, and the like, ii) a group of base stations112-1to112-N that are managed by different Mobile Network Operators (MNOs)114, and iii) a set of provisioning servers116that are in communication with the MNOs114. The mobile wireless device102can represent a mobile computing device (e.g., an iPhone® or an iPad® by Apple®), the base stations112-1to112-N can represent cellular wireless network entities including evolved NodeBs (eNodeBs or eNBs) and/or next generation NodeBs (gNodeBs or gNBs) that are configured to communicate with the mobile wireless device102, and the MNOs114can represent different wireless service providers that provide specific services (e.g., voice and data) to which the mobile wireless device102can subscribe. The mobile wireless device102can include processing circuitry, which can include one or more processors104and a memory106, an embedded Universal Integrated Circuit Card (eUICC)108, and a baseband component110. In some embodiments, the mobile wireless device102includes one or more physical UICCs, also referred to as Subscriber Identity Module (SIM) cards (not shown), in addition to the eUICC108. The components of the mobile wireless device102work together to enable the mobile wireless device102to provide useful features to a user of the mobile wireless device102, such as cellular wireless network access, non-cellular wireless network access, localized computing, location-based services, and Internet connectivity. The eUICC108can be configured to store multiple electronic SIMs (eSIMs) for accessing services offered by one or more different MNOs114via communication through base stations112-1to112-N. To be able to access services provided by the MNOs, an eSIM can be provisioned to the eUICC108of the mobile wireless device102. 
In some embodiments, the eUICC108obtains one or more eSIMs (or updates for one or more eSIMs) from one or more associated provisioning servers116. It is noted that provisioning servers116can be maintained by a manufacturer of the mobile wireless device102, the MNOs114, third party entities, and the like. Communication of eSIM data between a provisioning server116and the eUICC108(or between the provisioning server116and processing circuitry of the mobile wireless device102external to the eUICC108, e.g., the processor104) can use a secure communication channel, and the provisioning server116can seek to ensure that the eUICC108of the mobile wireless device102is compatible with an eSIM to be downloaded to the mobile wireless device102. FIG.1Billustrates a diagram150of a set of entities that can provide and/or verify information to determine eligibility to transfer an eSIM from a source device, e.g., mobile wireless device102, to a target device, e.g., another mobile wireless device102. A mobile wireless device102can include software, e.g., a local profile assistant (LPA)152, which can be resident on a processor external to an eUICC108of the mobile wireless device102(or in some embodiments be included in the eUICC108), where the LPA152provides an interface for communication with one or more network-based servers for management of eSIMs of the eUICC108. The LPA152can assist with communication with a subscription manager data preparation (SM-DP+) server154that can provide initial downloads of one or more eSIMs to an eUICC108and/or provide updates for one or more eSIMs on the eUICC108of the mobile wireless device102. The SM-DP+ server154can also provide eligibility checking and attestation for transfer of an eSIM between mobile wireless devices102. The eUICC108of the mobile wireless device102can store one or more certificates (and associated public keys) from one or more network entities. The certificates (and the public keys) can be used for authentication and verification of the validity of messages and senders of messages to the mobile wireless device102. Network entities involved in generating and communicating certificates, as well as authentication, verification, and/or attestation, can include a certificate issuer (CI)156, an eUICC manufacturer (EUM)158, the SM-DP+154, a subscription manager discovery server (SM-DS)160, which can work in conjunction with the SM-DP+154, a digital letter of approval (DLOA) registrar162, a certificate authority (CA)164, and/or a subordinate CA (subCA)166. FIG.1Cillustrates a diagram170of an exemplary chain of certificates provided for authentication and verification by entities of an exemplary system. The certificate issuer (CI)156provides certificates, signed by the CI, to the eUICC manufacturer (EUM)158, the SM-DP+154, and the SM-DS160. Each certificate includes public keys for use by the respective entities that receive the certificate. The EUM158provides a certificate to the eUICC108of the mobile device, e.g., during manufacture or configuration of the eUICC108, where the certificate is signed by the EUM158and contains a public key for the eUICC108. The entities can each include secret keys associated with their respective public keys for use in cryptographic security protocols by the entities. FIG.2illustrates a block diagram200of a more detailed view of exemplary components of the system100ofFIG.1A. 
The one or more processors104, in conjunction with the memory106, can implement a main operating system (OS)202that is configured to execute applications204(e.g., native OS applications and user applications). In some embodiments, the main OS202can include all or a portion of the LPA152of the mobile wireless device102for assisting with communication between the eUICC108and one or more network-based servers for management of eSIMs208of the eUICC108. The eUICC108can be configured to implement an eUICC OS206that is configured to manage the hardware resources of the eUICC108(e.g., a processor and a memory embedded in the eUICC108). The eUICC OS206can also be configured to manage eSIMs208that are stored by the eUICC108, e.g., by enabling, disabling, modifying, or otherwise performing management of the eSIMs208within the eUICC108and providing the baseband component110with access to the eSIMs208to provide access to wireless services for the mobile wireless device102. The eUICC OS206can include an eSIM manager210, which can perform management functions for various eSIMs208. In some embodiments, the eUICC OS206can include all or a portion of the LPA152of the mobile wireless device102for assisting with communication between the eUICC108and one or more network-based servers for management of eSIMs208of the eUICC108. Each eSIM208can include a number of applets212that define the manner in which the eSIM208operates. For example, one or more of the applets212, when implemented by the baseband component110and the eUICC108, can be configured to enable the mobile wireless device102to communicate with an MNO114and provide useful features (e.g., phone calls and internet) to a user of the mobile wireless device102. A baseband component110of the mobile wireless device102can include a baseband OS214that is configured to manage hardware resources of the baseband component110(e.g., a processor, a memory, different radio components, etc.). According to some embodiments, the baseband component110can implement a baseband manager216that is configured to interface with the eUICC108to establish a secure channel with a provisioning server116and obtain information (such as eSIM data) from the provisioning server116for purposes of managing eSIMs208. The baseband manager216can be configured to implement services218, which represent a collection of software modules that are instantiated by way of the various applets212of enabled eSIMs208that are included in the eUICC108. For example, services218can be configured to manage different connections between the mobile wireless device102and MNOs114according to the different eSIMs208that are enabled within the eUICC108. FIG.3illustrates a diagram300of an exemplary transfer320of cellular service account credentials for access to cellular services from a source device102-1to a target device102-2. The source device102-1and the target device102-2may be within proximity of each other to establish a direct secure connection between them or may be separated by a distance where transfer occurs via an indirect connection, such as over a wireless local area network (WLAN) and/or via one or more cellular wireless networks330. Transfer of credentials that permit access to services of cellular wireless networks330can also be referred to as transfer of one or more virtual credentials, such as one or more eSIMs208, also referred to as profiles or plans, from the source device102-1to the target device102-2. 
The eSIMs208may be initially present on the eUICC108-1of the source device102-1, and a user may seek to transfer one or more of the eSIMs208from the source device102-1to the eUICC108-2of the target device102-2. The eSIMs208may be associated with one or more cellular service accounts for one or more cellular service providers, also referred to as mobile network operators (MNOs). Transfer of one or more eSIMs208can occur without transferring a UICC304-1of the source device102-1or replacement of a UICC304-2of the target device102-2. As illustrated, the source device102-1and the target device102-2can each include one or more processors104and wireless circuitry308that can be used to communicate with one or more wireless networks330. The eSIMs208that are transferred can allow the target device102-2to access cellular services for one or more cellular wireless networks that previously were accessible by the source device102-1. FIG.4illustrates a diagram400of an exemplary potential transfer of an eSIM208-1from a source device102-1to a target device102-2based on: i) a trust configuration for the eUICC108-2of the target device102-2that seeks to receive the eSIM208-1from the eUICC108-1of the source device102-1, ii) a trust configuration for the eUICC108-1of the source device102-1on which the eSIM208-1currently resides, and/or iii) a trust configuration of the eSIM208-1to be transferred. A trust configuration can restrict certain eSIM management operations, such as importing, exporting, modifying, enabling, disabling, transferring, etc., for the eSIM208-1to one or more roots of trust. At the source device102-1, a trust configuration of the eUICC108-1can restrict transferal of the eSIM208-1to an eUICC108of a target device102that has an appropriate trust configuration (and therefore can be trusted with the eSIM208-1). A trust configuration can be for an eUICC108and/or for a particular eSIM208on the eUICC108. At the target device102-2, a trust configuration of the eUICC108-2can restrict transferal of eSIMs to only those from an eUICC108of a source device102-1that has an appropriate trust configuration, e.g., from a verifiable, trusted source device102-1. Additionally, and/or alternatively, a source device102-1can seek to ensure that the eSIM208-1is only transferred to a trusted target device102-2on which the eUICC108-2is appropriately configured for use of the eSIM208-1. Similarly, a target device102-2can seek to ensure that the eSIM208-1is only transferred from a trusted source device102-1. A trust configuration can be based on a white list of roots of trust, e.g., enumerated by a set of certificates and/or public keys included in/with certificates obtained from trusted entities. A trust configuration can also be based on a black list of denigrated roots of trust, e.g., a certificate revocation list (CRL). Representative roots of trust can include certificates (and/or associated public keys) from one or more specific network entities illustrated inFIG.1C, such as from an EUM158, a CA164, a subCA166, an SM-DP+154, an SM-DS160, and/or a DLOA registrar162. In some embodiments, an eSIM208, e.g., eSIM208-1, includes its own eSIM trust list404that indicates one or more roots of trust, at least one of which an eUICC108must possess for the eSIM208to be resident on the eUICC108. 
As illustrated inFIG.4, the eSIM208-1includes an eSIM trust list404indicating two roots of trust, one based on a first certificate associated with a first public key (PK1), and another based on a second certificate associated with a second public key (PK2). The eUICC108-1of the source device102-1includes eUICC trust list402-1that indicates three roots of trust based on three certificates associated with three different public keys, namely PK1, PK2, and PK4. As there is an overlap of at least one of the roots of trust between the eSIM208-1and the eUICC108-1, the eSIM208-1has been previously installed and resides on the eUICC108-1of the source device102-1. To determine whether the eUICC108-2of the target device102-2has a trust configuration that allows for transferal of the eSIM208-1, the source device102-1can ascertain whether the eUICC trust list402-2of the eUICC108-2of the target device102-2includes at least one root of trust that is valid for the eSIM208-1. As illustrated inFIG.4, the eUICC trust list402-2of the target device102-2indicates two roots of trust based on two certificates associated with two different public keys, namely PK1and PK3. As there is one overlapping root of trust, namely PK1, the eUICC108-2of the target device102-2may be eligible to receive transferal of the eSIM208-1from the source device102-1. In some embodiments, the source device102-1and/or the target device102-2obtain, from one or more network-based servers, an eligibility attestation result that attests to whether the eSIM208-1can be transferred to the eUICC108-2of the target device102-2. FIG.5illustrates a flow chart500of an exemplary eligibility checking procedure for transfer of an eSIM208from a source device102-1to a target device102-2based on communication with one or more network servers510. Initially, the target device102-2and source device102-1perform a mutual authentication procedure. The target device eUICC108-2communicates to the source device eUICC108-1a challenge (Challenge_T), via the target device102-2and source device102-1. The source device eUICC108-1responds to the challenge with its own challenge (Challenge_S) returning the received challenge (Challenge_T) accompanied by a signature (Signature_S) generated by the source device eUICC108-1, e.g., based on an eUICC certificate. The target device eUICC108-2authenticates the source device eUICC108-1based on the signature, and upon successful authentication of the source device eUICC108-1replies, to the source device eUICC108-1, with eUICC trust configuration information (eUICCInfo_T) for the target device eUICC108-2accompanied by a signature (Signature_T) generated by the target device eUICC108-2, e.g., based on its own eUICC certificate. The source device eUICC108-1authenticates the target device eUICC108-2based on the received signature, and upon successful authentication indicates to the source device102-1to forward the eUICC trust configuration information (eUICCInfo_T) from the target device eUICC108-2to a network server510to determine whether a trust configuration of the target device eUICC108-2is eligible for transfer of one or more eSIMs208from the source device eUICC108-1. In some embodiments, the network server510can be an SM-DP+154or a DLOA registrar162. 
The network server510performs an eSIM transfer eligibility check for the target device eUICC108-2and returns, to the source device102-1, an eligibility attestation result (Eligibility Result) that attests to whether the target device eUICC108-2has an appropriate configuration for receiving transferal of one or more eSIMs208. The eligibility result can be accompanied by the eUICC trust configuration information (eUICCInfo_T) and a signature from the network server (e.g., Signature_SMDP or Signature_DLOA). The source device102-1, in some embodiments, can perform an additional eligibility check for transfer of one or more eSIMs208, e.g., based on determination of a validity time period for the eligibility result or based on other compatibility requirements. The source device102-1can forward the eligibility result accompanied by the received signature to the source device eUICC108-1, which can authenticate the eligibility result obtained from the network server510. Upon successful authentication of the eligibility result, the source device eUICC108-1can initiate transfer of one or more eSIMs to the target device eUICC108-2of the target device102-2. In some embodiments, the source device eUICC108-1and/or an eSIM208to be transferred from the source device eUICC108-1can be configured with a designated, trusted network server510(or its trusted root), e.g., a particular SM-DP+154and/or a particular DLOA registrar162. In some embodiments, the target device eUICC108-2can be configured with a designated, trusted network server510(or its trusted root) with which the source device102-1can seek an eligibility result for transfer of one or more eSIMs. The network server510can provide a server attestation about whether the target device eUICC108-2is eligible for transfer of one or more eSIMs208. FIG.6illustrates a flow chart600of another exemplary eligibility checking procedure for transfer of an eSIM208from a source device102-1to a target device102-2based on communication with one or more network servers510. The procedure illustrated inFIG.5includes determining eligibility for eSIM transfer by the network server510during an eSIM transfer. The procedure illustrated inFIG.6allows for determining eligibility for eSIM transfer by the network server510in advance of the eSIM transfer. Thus, a target device eUICC108-2can obtain an eSIM transfer eligibility result and later use that eligibility result during a subsequent eSIM transfer procedure, without requiring communication with the network server510to obtain the eligibility result during the eSIM transfer procedure. The target device eUICC108-2can send a message to a network server510, e.g., SM-DP+154and/or DLOA registrar162, the message including a request for eSIM transfer eligibility and also including eUICC trust configuration information (eUICCInfo_T). In some embodiments, the target device eUICC108-2can be configured with a designated, trusted network server510(or its trusted root) with which the target device eUICC108-2(and/or the target device102-2) can seek an eligibility result for transfer of one or more eSIMs208. The network server510performs an eSIM transfer eligibility check for the target device eUICC108-2and returns, to the target device102-2, an eligibility result (Eligibility Result) that attests to whether the target device eUICC108-2has an appropriate configuration for receiving transferal of one or more eSIMs208. The eligibility result can be accompanied by a signature from the network server (e.g., Signature_SMDP or Signature_DLOA). 
The target device102-2(and/or the target device eUICC108-2) can store the eligibility result (attestation) from the network server510for future use. In some embodiments, the eligibility result includes an indication of a time period of validity for the eligibility result. At a subsequent time, the target device eUICC108-2can seek to transfer one or more eSIMs208from a source device eUICC108-1. The target device102-2and source device102-1perform a mutual authentication procedure. The target device eUICC108-2communicates to the source device eUICC108-1a challenge (Challenge_T), via the target device102-2and source device102-1. The source device eUICC108-1responds to the challenge with its own challenge (Challenge_S), returning the received challenge (Challenge_T) accompanied by a signature (Signature_S) generated by the source device eUICC108-1, e.g., based on an eUICC certificate. The source device eUICC108-1can also include a request for information regarding eSIM transfer eligibility for the target device eUICC108-2. The target device eUICC108-2authenticates the source device eUICC108-1based on the signature, and upon successful authentication of the source device eUICC108-1replies, to the source device eUICC108-1, with eUICC trust configuration information (eUICCInfo_T) for the target device eUICC108-2accompanied by a signature (Signature_T) generated by the target device eUICC108-2, e.g., based on its own eUICC certificate, as well as a previously obtained eSIM transfer eligibility result (attestation) accompanied by a signature from the applicable network server510(e.g., Signature_SMDP or Signature_DLOA). The source device eUICC108-1authenticates the target device eUICC108-2based on the received signature, and upon successful authentication, the source device eUICC108-1can authenticate the eligibility result provided by the target device eUICC108-2. The source device102-1, in some embodiments, can perform an additional eligibility check for transfer of one or more eSIMs208, e.g., based on determination of a validity time period for the eligibility result or based on other compatibility requirements. Upon successful authentication of the eligibility result, the source device eUICC108-1can initiate transfer of one or more eSIMs to the target device eUICC108-2of the target device102-2. As withFIG.5, the target device eUICC108-2, the source device eUICC108-1, and/or an eSIM208to be transferred from the source device eUICC108-1can be configured with a designated, trusted network server510(or its trusted root), e.g., a particular SM-DP+154and/or a particular DLOA registrar162with which to communicate regarding eSIM transfer eligibility. FIG.7illustrates a flow chart700of another exemplary eligibility checking procedure for transfer of an eSIM208from a source device102-1to a target device102-2based on communication with one or more network servers510. InFIG.7, the target device eUICC108-2obtains an eSIM transfer eligibility result (attestation) from a network server, e.g., SM-DP+154, during an eSIM transfer procedure. The target device eUICC108-2initiates a mutual authentication procedure by sending to the source device eUICC108-1a challenge (Challenge_T). The source device eUICC108-1responds to the challenge with its own challenge (Challenge_S), returning the received challenge (Challenge_T) accompanied by a signature (Signature_S) generated by the source device eUICC108-1, e.g., based on an eUICC certificate. 
In some embodiments, the source device eUICC108-1includes a request for eSIM transfer eligibility checking from the target device eUICC108-2. The target device eUICC108-2authenticates the source device eUICC108-1based on the signature, and upon successful authentication of the source device eUICC108-1, the target device eUICC108-2sends the challenge from the source device eUICC108-1(Challenge_S) and eUICC trust configuration information (eUICCInfo_T) for the target device eUICC108-2accompanied by a signature (Signature_T) generated by the target device eUICC108-2to a network server, e.g., SM-DP+154, via the target device102-2, to obtain an eSIM transfer eligibility result (attestation). The SM-DP+154performs an eSIM transfer eligibility check for the target device eUICC108-2and returns, to the target device102-2, the source device challenge (Challenge_S), the eUICC trust configuration information (eUICCInfo_T), the signature from the target device eUICC108-2(Signature_T) and a signature from the SM-DP+154(Signature_SMDP). The target device102-2forwards to the source device eUICC108-1the source device challenge (Challenge_S), the eUICC trust configuration information (eUICCInfo_T), the signature from the target device eUICC108-2(Signature_T) and a signature from the SM-DP+154(Signature_SMDP) received from the SM-DP+154. The source device eUICC108-1authenticates the target device eUICC108-2based on the received signature from the target device eUICC108-2(Signature_T). The source device eUICC108-1further authenticates the SM-DP+154eligibility check based on the received signature from the SM-DP+154(Signature_SMDP). Upon successful authentication, the source device eUICC108-1initiates transfer of one or more eSIMs to the target device eUICC108-2. The source device eUICC108-1can use the target device eUICC information (eUICCInfo_T) to determine whether one or more eSIMs are compatible for transfer to the target device eUICC108-2. FIG.8illustrates a flow chart800of another eligibility checking procedure for transfer of an eSIM208from a source device102-1to a target SM-DP+154with which a target device102-2is associated. The target SM-DP+154initiates a mutual authentication procedure by sending to the source device eUICC108-1a challenge (Challenge_T). The source device eUICC108-1responds to the challenge with its own challenge (Challenge_S), returning the received challenge (Challenge_T) accompanied by a signature (Signature_S) generated by the source device eUICC108-1, e.g., based on an eUICC certificate. The target SM-DP+154authenticates the source device eUICC108-1based on the received signature (Signature_S). Upon successful authentication, the target SM-DP+154sends an eSIM transfer (export) command signed by the target SM-DP+ accompanied by a signature (Signature_T). The source device eUICC108-1authenticates the target SM-DP+154, e.g., based on the signature (Signature_T), and verifies that the target SM-DP+154is eligible for transfer of one or more eSIMs208from the source device eUICC108-1. Eligibility can be determined based on compatibility of the target SM-DP+ with a trust configuration of the source device eUICC108-1(and/or a trust configuration of one or more eSIMs208on the source device eUICC108-1). In some embodiments, the source device eUICC108-1determines whether the target SM-DP+154is included in a white list or not included in a black list. 
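The white list/black list decision just described reduces to two membership tests. A minimal Python sketch follows, with certificate fingerprints standing in for full roots of trust; the function name and keying are assumptions for illustration, not the disclosed data formats.

```python
def target_smdp_trusted(smdp_root_fingerprint: str,
                        white_list: set,
                        black_list: set) -> bool:
    """Accept a target SM-DP+ only if its root of trust appears on the
    white list and is absent from the black list (e.g., a CRL)."""
    return (smdp_root_fingerprint in white_list
            and smdp_root_fingerprint not in black_list)


# A root on the white list and not revoked is accepted.
assert target_smdp_trusted("PK1", {"PK1", "PK2"}, {"PK5"})
```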
In some embodiments, the source device eUICC108-1performs additional eligibility checking for transfer of one or more eSIMs208to the target SM-DP+154, e.g., based on a validity time period or based on other compatibility requirements. Upon successful authentication, verification, and validation of eligibility to transfer eSIMs to the target SM-DP+154, the source device eUICC108-1initiates transfer of one or more eSIMs to the target SM-DP+154. In some embodiments, an eSIM208includes trust configuration information regarding one or more trusted SM-DP+154to which the eSIM208can be transferred. In some embodiments, the trust configuration information is included in a trusted certificate chain. In some embodiments, the eSIM208indicates that a particular SM-DP+154from which the eSIM208was originally downloaded can be trusted for later transfer back. Representative Embodiments In some embodiments, a method for eSIM transfer eligibility checking includes a target device102-2: i) providing, to a network server510, a) a request for an eSIM transfer eligibility attestation and b) trust configuration information of an eUICC108-2of the target device102-2; ii) obtaining, from the network server510, an eSIM transfer eligibility attestation result and a network server generated signature; iii) receiving, from a source device102-1, a request for eSIM transfer eligibility checking; iv) providing, to the source device102-1, the eSIM transfer eligibility attestation result and the network server generated signature; and v) upon successful authentication of eSIM transfer eligibility, performing an eSIM transfer of one or more eSIMs208from an eUICC108-1of the source device102-1to the eUICC108-2of the target device102-2. In some embodiments, the network server510includes a subscription manager data preparation (SM-DP+) server154. In some embodiments, the network server includes a digital letter of approval (DLOA) server162. In some embodiments, the eSIM transfer eligibility attestation result includes an indication of a time period for which the transfer eligibility attestation result is valid. In some embodiments, the trust configuration information of the eUICC108-2of the target device102-2includes a white list of trusted entities and/or a black list of untrusted entities. In some embodiments, the eUICC108-2of the target device102-2includes one or more certificates and one or more public keys extracted from signed and verified certificates provided by one or more trusted entities included in the white list of trusted entities. In some embodiments, the authentication of eSIM transfer eligibility includes a determination by the source device102-1and/or by an eUICC108-1included in the source device102-1whether an eUICC certification configuration or a root of trust configuration of the target device102-2is compatible with the one or more eSIMs208to transfer from the eUICC108-1of the source device102-1to the eUICC108-2of the target device102-2. In some embodiments, the successful authentication of eSIM transfer eligibility includes a determination that a trust configuration of the eUICC108-2of the target device102-2and a trust configuration of the one or more eSIMs208to be transferred include at least one common root of trust. 
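The common-root-of-trust determination recited above, and illustrated by the PK1/PK2 versus PK1/PK3 example of FIG.4, reduces to a set intersection. The following Python sketch is illustrative only, with public key identifiers standing in for full certificates.

```python
def has_common_root_of_trust(esim_trust_list: set, euicc_trust_list: set) -> bool:
    """Eligibility condition from FIG.4: the eSIM and the eUICC must share
    at least one root of trust (keyed here by public key identifier)."""
    return bool(esim_trust_list & euicc_trust_list)


# The FIG.4 example: the eSIM trusts PK1/PK2; the target eUICC trusts PK1/PK3,
# so PK1 is the one overlapping root of trust.
assert has_common_root_of_trust({"PK1", "PK2"}, {"PK1", "PK3"})
# The source eUICC trusting PK1/PK2/PK4 likewise satisfies the condition.
assert has_common_root_of_trust({"PK1", "PK2"}, {"PK1", "PK2", "PK4"})
```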
In some embodiments, a method for eSIM transfer eligibility checking includes a source device102-1: i) performing an authentication procedure with a target device102-2; ii) obtaining, from the target device102-2, trust configuration information of an eUICC108-2of the target device102-2; iii) providing, to a network server510, the trust configuration information of the eUICC108-2of the target device102-2; iv) obtaining, from the network server510, an eSIM transfer eligibility attestation result and a network server generated signature; v) determining eSIM transfer eligibility for transfer of one or more eSIMs208from an eUICC108-1of the source device102-1to the eUICC108-2of the target device102-2; and vi) upon successful authentication of eSIM transfer eligibility, performing an eSIM transfer of the one or more eSIMs208from the eUICC108-1of the source device102-1to the eUICC108-2of the target device102-2. In some embodiments, the network server510includes a subscription manager data preparation (SM-DP+) server154. In some embodiments, the network server510includes a digital letter of approval (DLOA) server162. In some embodiments, the eSIM transfer eligibility attestation result includes an indication of a time period for which the transfer eligibility attestation result is valid. In some embodiments, determining the eSIM transfer eligibility includes determining whether transfer of the one or more eSIMs208occurs within the time period for which the transfer eligibility attestation result is valid. In some embodiments, the trust configuration information of the eUICC108-2of the target device102-2includes a white list of trusted entities and/or a black list of untrusted entities. In some embodiments, the eUICC108-2of the target device102-2includes one or more certificates and one or more public keys extracted from signed and verified certificates provided by one or more trusted entities included in the white list of trusted entities. In some embodiments, the authentication of eSIM transfer eligibility includes a determination by the source device102-1and/or by the eUICC108-1of the source device102-1whether an eUICC certification configuration or a root of trust configuration of the target device102-2is compatible with the one or more eSIMs208to transfer from the eUICC108-1of the source device102-1to the eUICC108-2of the target device102-2. In some embodiments, the successful authentication of eSIM transfer eligibility includes a determination that a trust configuration of the eUICC108-2of the target device102-2and a trust configuration of the one or more eSIMs208to be transferred include at least one common root of trust. In some embodiments, a method for eSIM transfer eligibility checking includes a network server510: i) receiving, from a source device102-1or a target device102-2, trust configuration information of an eUICC108-2of the target device102-2; ii) performing an eSIM transfer eligibility check for the eUICC108-2of the target device102-2based on the trust configuration information; and iii) providing, to the source device102-1or the target device102-2, an eSIM transfer eligibility attestation result that indicates whether the eUICC108-2of the target device102-2has an appropriate configuration for receiving transfer of one or more eSIMs208. In some embodiments, the network server510includes a subscription manager data preparation (SM-DP+) server154or a digital letter of approval (DLOA) server162. 
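The network server's role enumerated above can be pictured as: evaluate the reported trust configuration, then return a signed, time-limited attestation. The Python sketch below is a toy illustration; the HMAC is a stand-in for the certificate-based Signature_SMDP/Signature_DLOA, and all field names are assumptions rather than the disclosed message formats.

```python
import hashlib
import hmac
import json
import time

SERVER_KEY = b"demo-signing-key"  # stand-in for the server's real credential


def check_transfer_eligibility(euicc_info: dict,
                               server_trust_roots: set,
                               validity_seconds: int = 3600) -> dict:
    """Evaluate a reported eUICC trust configuration and return a signed,
    time-limited eligibility attestation (cf. steps i)-iii) above)."""
    eligible = bool(set(euicc_info.get("trust_roots", [])) & server_trust_roots)
    result = {
        "eligible": eligible,
        "not_after": int(time.time()) + validity_seconds,  # validity period
        "euicc_info": euicc_info,
    }
    payload = json.dumps(result, sort_keys=True).encode()
    # Toy stand-in for the certificate-based Signature_SMDP / Signature_DLOA.
    result["signature"] = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return result


attestation = check_transfer_eligibility({"trust_roots": ["PK1", "PK3"]},
                                         {"PK1", "PK2"})
print(attestation["eligible"])  # True: PK1 is a shared root of trust
```

A source device checking such an attestation would verify the signature and compare "not_after" against the current time, matching the validity-period embodiments described above.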
In some embodiments, the eSIM transfer eligibility attestation result includes an indication of a time period for which the transfer eligibility attestation result is valid. In some embodiments, an apparatus configured for eSIM transfer eligibility checking in a target device102-2includes one or more processors104communicatively coupled to a memory106storing instructions that, when executed by the one or more processors104, cause the target device102-2to perform actions of a method as described herein. In some embodiments, an apparatus configured for eSIM transfer eligibility checking in a source device102-1includes one or more processors104communicatively coupled to a memory106storing instructions that, when executed by the one or more processors104, cause the source device102-1to perform actions of a method as described herein. In some embodiments, an apparatus configured for eSIM transfer eligibility checking in a network server510includes one or more processors communicatively coupled to a memory storing instructions that, when executed by the one or more processors, cause the network server510to perform actions of a method as described herein. In some embodiments, a source device102-1configured for eSIM transfer eligibility checking includes wireless circuitry308including one or more antennas and one or more processors104communicatively coupled to the wireless circuitry308and to a memory106storing instructions that, when executed by the one or more processors104, cause the source device102-1to perform actions of a method as described herein. In some embodiments, a target device102-2configured for eSIM transfer eligibility checking includes wireless circuitry308including one or more antennas and one or more processors104communicatively coupled to the wireless circuitry308and to a memory106storing instructions that, when executed by the one or more processors104, cause the target device102-2to perform actions of a method as described herein. In some embodiments, a network server510configured for eSIM transfer eligibility checking includes wireless circuitry including one or more antennas and one or more processors communicatively coupled to the wireless circuitry and to a memory storing instructions that, when executed by the one or more processors, cause the network server510to perform actions of a method as described herein. In some embodiments, a system configured for eSIM transfer eligibility checking includes a source device102-1, a target device102-2, and a network server510each configured to perform respective actions of a method as described herein. Representative Exemplary Apparatus FIG.9illustrates in block diagram format an exemplary computing device900that can be used to implement the various components and techniques described herein, according to some embodiments. In particular, the detailed view of the exemplary computing device900illustrates various components that can be included in the source device102-1and/or the target device102-2. As shown inFIG.9, the computing device900can include one or more processors902that represent microprocessors or controllers for controlling the overall operation of computing device900. In some embodiments, the computing device900can also include a user input device908that allows a user of the computing device900to interact with the computing device900. 
For example, in some embodiments, the user input device908can take a variety of forms, such as a button, keypad, dial, touch screen, audio input interface, visual/image capture input interface, input in the form of sensor data, etc. In some embodiments, the computing device900can include a display910(screen display) that can be controlled by the processor(s)902to display information to the user (for example, information relating to incoming, outgoing, or active communication sessions). A data bus916can facilitate data transfer between at least a storage device940, the processor(s)902, and a controller913. The controller913can be used to interface with and control different equipment through an equipment control bus914. The computing device900can also include a network/bus interface911that couples to a data link912. In the case of a wireless connection, the network/bus interface911can include wireless circuitry, such as a wireless transceiver and/or baseband processor. The computing device900can also include a secure element924. The secure element924can include an eUICC108. The computing device900also includes a storage device940, which can include a single storage or a plurality of storages (e.g., hard drives), and includes a storage management module that manages one or more partitions within the storage device940. In some embodiments, storage device940can include flash memory, semiconductor (solid state) memory or the like. The computing device900can also include a Random-Access Memory (RAM)920and a Read-Only Memory (ROM)922. The ROM922can store programs, utilities or processes to be executed in a non-volatile manner. The RAM920can provide volatile data storage and can store instructions related to the operation of the computing device900. Wireless Terminology In accordance with various embodiments described herein, the terms “wireless communication device,” “wireless device,” “mobile device,” “mobile station,” and “user equipment” (UE) may be used interchangeably herein to describe one or more common consumer electronic devices that may be capable of performing procedures associated with various embodiments of the disclosure. In accordance with various implementations, any one of these consumer electronic devices may relate to: a cellular phone or a smart phone, a tablet computer, a laptop computer, a notebook computer, a personal computer, a netbook computer, a media player device, an electronic book device, a MiFi® device, a wearable computing device, as well as any other type of electronic computing device having wireless communication capability that can include communication via one or more wireless communication protocols such as used for communication on: a wireless wide area network (WWAN), a wireless metro area network (WMAN), a wireless local area network (WLAN), a wireless personal area network (WPAN), a near field communication (NFC), a cellular wireless network, a fourth generation (4G) LTE, LTE Advanced (LTE-A), and/or 5G or other present or future developed advanced cellular wireless networks. The wireless communication device, in some embodiments, can also operate as part of a wireless communication system, which can include a set of client devices, which can also be referred to as stations, client wireless devices, or client wireless communication devices, interconnected to an access point (AP), e.g., as part of a WLAN, and/or to each other, e.g., as part of a WPAN and/or an “ad hoc” wireless network. 
In some embodiments, the client device can be any wireless communication device that is capable of communicating via a WLAN technology, e.g., in accordance with a wireless local area network communication protocol. In some embodiments, the WLAN technology can include a Wi-Fi (or more generically a WLAN) wireless communication subsystem or radio; the Wi-Fi radio can implement an Institute of Electrical and Electronics Engineers (IEEE) 802.11 technology, such as one or more of: IEEE 802.11a; IEEE 802.11b; IEEE 802.11g; IEEE 802.11-2007; IEEE 802.11n; IEEE 802.11-2012; IEEE 802.11ac; or other present or future developed IEEE 802.11 technologies. Additionally, it should be understood that the UEs described herein may be configured as multi-mode wireless communication devices that are also capable of communicating via different third generation (3G) and/or second generation (2G) RATs. In these scenarios, a multi-mode user equipment (UE) can be configured to prefer attachment to LTE networks offering faster data rate throughput, as compared to other 3G legacy networks offering lower data rate throughputs. For instance, in some implementations, a multi-mode UE may be configured to fall back to a 3G legacy network, e.g., an Evolved High Speed Packet Access (HSPA+) network or a Code Division Multiple Access (CDMA) 2000 Evolution-Data Only (EV-DO) network, when LTE and LTE-A networks are otherwise unavailable. It is well understood that the use of personally identifiable information should follow privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. In particular, personally identifiable information data should be managed and handled so as to minimize risks of unintentional or unauthorized access or use, and the nature of authorized use should be clearly indicated to users. The various aspects, embodiments, implementations or features of the described embodiments can be used separately or in any combination. Various aspects of the described embodiments can be implemented by software, hardware or a combination of hardware and software. The described embodiments can also be embodied as computer readable code on a non-transitory computer readable medium. The non-transitory computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the non-transitory computer readable medium include read-only memory, random-access memory, CD-ROMs, HDDs, DVDs, magnetic tape, and optical data storage devices. The non-transitory computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the described embodiments. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of specific embodiments are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the described embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art that many modifications and variations are possible in view of the above teachings.
44,453
11943625
DETAILED DESCRIPTION In the present specification, “A or B” may mean “only A”, “only B” or “both A and B”. In other words, in the present specification, “A or B” may be interpreted as “A and/or B”. For example, in the present specification, “A, B, or C” may mean “only A”, “only B”, “only C”, or “any combination of A, B, C”. A slash (/) or comma used in the present specification may mean “and/or”. For example, “A/B” may mean “A and/or B”. Accordingly, “A/B” may mean “only A”, “only B”, or “both A and B”. For example, “A, B, C” may mean “A, B, or C”. In the present specification, “at least one of A and B” may mean “only A”, “only B”, or “both A and B”. In addition, in the present specification, the expression “at least one of A or B” or “at least one of A and/or B” may be interpreted as “at least one of A and B”. In addition, in the present specification, “at least one of A, B, and C” may mean “only A”, “only B”, “only C”, or “any combination of A, B, and C”. In addition, “at least one of A, B, or C” or “at least one of A, B, and/or C” may mean “at least one of A, B, and C”. In addition, a parenthesis used in the present specification may mean “for example”. Specifically, when indicated as “control information (EHT-signal)”, it may denote that “EHT-signal” is proposed as an example of the “control information”. In other words, the “control information” of the present specification is not limited to “EHT-signal”, and “EHT-signal” may be proposed as an example of the “control information”. In addition, when indicated as “control information (i.e., EHT-signal)”, it may also mean that “EHT-signal” is proposed as an example of the “control information”. Technical features described individually in one figure in the present specification may be individually implemented, or may be simultaneously implemented. The following example of the present specification may be applied to various wireless communication systems. For example, the following example of the present specification may be applied to a wireless local area network (WLAN) system. For example, the present specification may be applied to the IEEE 802.11a/g/n/ac standard or the IEEE 802.11ax standard. In addition, the present specification may also be applied to the newly proposed EHT standard or IEEE 802.11be standard. In addition, the example of the present specification may also be applied to a new WLAN standard enhanced from the EHT standard or the IEEE 802.11be standard. In addition, the example of the present specification may be applied to a mobile communication system. For example, it may be applied to a mobile communication system based on long term evolution (LTE) depending on a 3rd generation partnership project (3GPP) standard and based on evolution of the LTE. In addition, the example of the present specification may be applied to a communication system of a 5G NR standard based on the 3GPP standard. Hereinafter, in order to describe a technical feature of the present specification, a technical feature applicable to the present specification will be described. FIG.1shows an example of a transmitting apparatus and/or receiving apparatus of the present specification. In the example ofFIG.1, various technical features described below may be performed.FIG.1relates to at least one station (STA). 
For example, STAs110and120of the present specification may also be called in various terms such as a mobile terminal, a wireless device, a wireless transmit/receive unit (WTRU), a user equipment (UE), a mobile station (MS), a mobile subscriber unit, or simply a user. The STAs110and120of the present specification may also be called in various terms such as a network, a base station, a node-B, an access point (AP), a repeater, a router, a relay, or the like. The STAs110and120of the present specification may also be referred to as various names such as a receiving apparatus, a transmitting apparatus, a receiving STA, a transmitting STA, a receiving device, a transmitting device, or the like. For example, the STAs110and120may serve as an AP or a non-AP. That is, the STAs110and120of the present specification may serve as the AP and/or the non-AP. The STAs110and120of the present specification may support various communication standards together in addition to the IEEE 802.11 standard. For example, a communication standard (e.g., LTE, LTE-A, 5G NR standard) or the like based on the 3GPP standard may be supported. In addition, the STA of the present specification may be implemented as various devices such as a mobile phone, a vehicle, a personal computer, or the like. In addition, the STA of the present specification may support communication for various communication services such as voice calls, video calls, data communication, and self-driving (autonomous-driving), or the like. The STAs110and120of the present specification may include a medium access control (MAC) conforming to the IEEE 802.11 standard and a physical layer interface for a radio medium. The STAs110and120will be described below with reference to a sub-figure (a) ofFIG.1. The first STA110may include a processor111, a memory112, and a transceiver113. The illustrated processor, memory, and transceiver may be implemented individually as separate chips, or at least two blocks/functions may be implemented through a single chip. The transceiver113of the first STA performs a signal transmission/reception operation. Specifically, an IEEE 802.11 packet (e.g., IEEE 802.11a/b/g/n/ac/ax/be, etc.) may be transmitted/received. For example, the first STA110may perform an operation intended by an AP. For example, the processor111of the AP may receive a signal through the transceiver113, process a reception (RX) signal, generate a transmission (TX) signal, and provide control for signal transmission. The memory112of the AP may store a signal (e.g., RX signal) received through the transceiver113, and may store a signal (e.g., TX signal) to be transmitted through the transceiver. For example, the second STA120may perform an operation intended by a non-AP STA. For example, a transceiver123of a non-AP performs a signal transmission/reception operation. Specifically, an IEEE 802.11 packet (e.g., IEEE 802.11a/b/g/n/ac/ax/be packet, etc.) may be transmitted/received. For example, a processor121of the non-AP STA may receive a signal through the transceiver123, process an RX signal, generate a TX signal, and provide control for signal transmission. A memory122of the non-AP STA may store a signal (e.g., RX signal) received through the transceiver123, and may store a signal (e.g., TX signal) to be transmitted through the transceiver. For example, an operation of a device indicated as an AP in the specification described below may be performed in the first STA110or the second STA120. 
For example, if the first STA110is the AP, the operation of the device indicated as the AP may be controlled by the processor111of the first STA110, and a related signal may be transmitted or received through the transceiver113controlled by the processor111of the first STA110. In addition, control information related to the operation of the AP or a TX/RX signal of the AP may be stored in the memory112of the first STA110. In addition, if the second STA120is the AP, the operation of the device indicated as the AP may be controlled by the processor121of the second STA120, and a related signal may be transmitted or received through the transceiver123controlled by the processor121of the second STA120. In addition, control information related to the operation of the AP or a TX/RX signal of the AP may be stored in the memory122of the second STA120. For example, in the specification described below, an operation of a device indicated as a non-AP (or user-STA) may be performed in the first STA110or the second STA120. For example, if the second STA120is the non-AP, the operation of the device indicated as the non-AP may be controlled by the processor121of the second STA120, and a related signal may be transmitted or received through the transceiver123controlled by the processor121of the second STA120. In addition, control information related to the operation of the non-AP or a TX/RX signal of the non-AP may be stored in the memory122of the second STA120. For example, if the first STA110is the non-AP, the operation of the device indicated as the non-AP may be controlled by the processor111of the first STA110, and a related signal may be transmitted or received through the transceiver113controlled by the processor111of the first STA110. In addition, control information related to the operation of the non-AP or a TX/RX signal of the non-AP may be stored in the memory112of the first STA110. In the specification described below, a device called a (transmitting/receiving) STA, a first STA, a second STA, a STA1, a STA2, an AP, a first AP, a second AP, an AP1, an AP2, a (transmitting/receiving) terminal, a (transmitting/receiving) device, a (transmitting/receiving) apparatus, a network, or the like may imply the STAs110and120ofFIG.1. For example, a device indicated as, without a specific reference numeral, the (transmitting/receiving) STA, the first STA, the second STA, the STA1, the STA2, the AP, the first AP, the second AP, the AP1, the AP2, the (transmitting/receiving) terminal, the (transmitting/receiving) device, the (transmitting/receiving) apparatus, the network, or the like may imply the STAs110and120ofFIG.1. For example, in the following example, an operation in which various STAs transmit/receive a signal (e.g., a PPDU) may be performed in the transceivers113and123ofFIG.1. In addition, in the following example, an operation in which various STAs generate a TX/RX signal or perform data processing and computation in advance for the TX/RX signal may be performed in the processors111and121ofFIG.1. 
For example, an example of an operation for generating the TX/RX signal or performing the data processing and computation in advance may include: 1) an operation of determining/obtaining/configuring/computing/decoding/encoding bit information of a sub-field (SIG, STF, LTF, Data) included in a PPDU; 2) an operation of determining/configuring/obtaining a time resource or frequency resource (e.g., a subcarrier resource) or the like used for the sub-field (SIG, STF, LTF, Data) included in the PPDU; 3) an operation of determining/configuring/obtaining a specific sequence (e.g., a pilot sequence, an STF/LTF sequence, an extra sequence applied to SIG) or the like used for the sub-field (SIG, STF, LTF, Data) included in the PPDU; 4) a power control operation and/or power saving operation applied for the STA; and 5) an operation related to determining/obtaining/configuring/decoding/encoding or the like of an ACK signal. In addition, in the following example, a variety of information used by various STAs for determining/obtaining/configuring/computing/decoding/encoding a TX/RX signal (e.g., information related to a field/subfield/control field/parameter/power or the like) may be stored in the memories112and122ofFIG.1. The aforementioned device/STA of the sub-figure (a) ofFIG.1may be modified as shown in the sub-figure (b) ofFIG.1. Hereinafter, the STAs110and120of the present specification will be described based on the sub-figure (b) ofFIG.1. For example, the transceivers113and123illustrated in the sub-figure (b) ofFIG.1may perform the same function as the aforementioned transceiver illustrated in the sub-figure (a) ofFIG.1. For example, processing chips114and124illustrated in the sub-figure (b) ofFIG.1may include the processors111and121and the memories112and122. The processors111and121and memories112and122illustrated in the sub-figure (b) ofFIG.1may perform the same function as the aforementioned processors111and121and memories112and122illustrated in the sub-figure (a) ofFIG.1. A mobile terminal, a wireless device, a wireless transmit/receive unit (WTRU), a user equipment (UE), a mobile station (MS), a mobile subscriber unit, a user, a user STA, a network, a base station, a Node-B, an access point (AP), a repeater, a router, a relay, a receiving unit, a transmitting unit, a receiving STA, a transmitting STA, a receiving device, a transmitting device, a receiving apparatus, and/or a transmitting apparatus, which are described below, may imply the STAs110and120illustrated in the sub-figure (a)/(b) ofFIG.1, or may imply the processing chips114and124illustrated in the sub-figure (b) ofFIG.1. That is, a technical feature of the present specification may be performed in the STAs110and120illustrated in the sub-figure (a)/(b) ofFIG.1, or may be performed only in the processing chips114and124illustrated in the sub-figure (b) ofFIG.1. For example, a technical feature in which the transmitting STA transmits a control signal may be understood as a technical feature in which a control signal generated in the processors111and121illustrated in the sub-figure (a)/(b) ofFIG.1is transmitted through the transceivers113and123illustrated in the sub-figure (a)/(b) ofFIG.1. Alternatively, the technical feature in which the transmitting STA transmits the control signal may be understood as a technical feature in which the control signal to be transferred to the transceivers113and123is generated in the processing chips114and124illustrated in the sub-figure (b) ofFIG.1.
For example, a technical feature in which the receiving STA receives the control signal may be understood as a technical feature in which the control signal is received by means of the transceivers113and123illustrated in the sub-figure (a) ofFIG.1. Alternatively, the technical feature in which the receiving STA receives the control signal may be understood as the technical feature in which the control signal received in the transceivers113and123illustrated in the sub-figure (a) ofFIG.1is obtained by the processors111and121illustrated in the sub-figure (a) ofFIG.1. Alternatively, the technical feature in which the receiving STA receives the control signal may be understood as the technical feature in which the control signal received in the transceivers113and123illustrated in the sub-figure (b) ofFIG.1is obtained by the processing chips114and124illustrated in the sub-figure (b) ofFIG.1. Referring to the sub-figure (b) ofFIG.1, software codes115and125may be included in the memories112and122. The software codes115and125may include instructions for controlling an operation of the processors111and121. The software codes115and125may be implemented in various programming languages. The processors111and121or processing chips114and124ofFIG.1may include an application-specific integrated circuit (ASIC), other chipsets, a logic circuit and/or a data processing device. The processor may be an application processor (AP). For example, the processors111and121or processing chips114and124ofFIG.1may include at least one of a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU), and a modulator and demodulator (modem). For example, the processors111and121or processing chips114and124ofFIG.1may be SNAPDRAGON™ series of processors made by Qualcomm®, EXYNOS™ series of processors made by Samsung®, A series of processors made by Apple®, HELIO™ series of processors made by MediaTek®, ATOM™ series of processors made by Intel® or processors enhanced from these processors. In the present specification, an uplink may imply a link for communication from a non-AP STA to an AP STA, and an uplink PPDU/packet/signal or the like may be transmitted through the uplink. In addition, in the present specification, a downlink may imply a link for communication from the AP STA to the non-AP STA, and a downlink PPDU/packet/signal or the like may be transmitted through the downlink. FIG.2is a conceptual view illustrating the structure of a wireless local area network (WLAN). An upper part ofFIG.2illustrates the structure of an infrastructure basic service set (BSS) of institute of electrical and electronic engineers (IEEE) 802.11. Referring to the upper part ofFIG.2, the wireless LAN system may include one or more infrastructure BSSs200and205(hereinafter, referred to as BSS). The BSSs200and205, each a set of an AP and a STA that are successfully synchronized to communicate with each other, such as an access point (AP)225and a station (STA1)200-1, are not concepts indicating a specific region. The BSS205may include one or more STAs205-1and205-2which may be joined to one AP230. The BSS may include at least one STA, APs providing a distribution service, and a distribution system (DS)210connecting multiple APs. The distribution system210may implement an extended service set (ESS)240extended by connecting the multiple BSSs200and205. The ESS240may be used as a term indicating one network configured by connecting one or more APs225or230through the distribution system210.
The AP included in one ESS240may have the same service set identification (SSID). A portal220may serve as a bridge which connects the wireless LAN network (IEEE 802.11) and another network (e.g., 802.X). In the BSS illustrated in the upper part ofFIG.2, a network between the APs225and230and a network between the APs225and230and the STAs200-1,205-1, and205-2may be implemented. However, a network may also be configured between the STAs without the APs225and230to perform communication. A network in which communication is performed between the STAs without the APs225and230is defined as an Ad-Hoc network or an independent basic service set (IBSS). A lower part ofFIG.2illustrates a conceptual view illustrating the IBSS. Referring to the lower part ofFIG.2, the IBSS is a BSS that operates in an Ad-Hoc mode. Since the IBSS does not include the access point (AP), a centralized management entity that performs a management function at the center does not exist. That is, in the IBSS, STAs250-1,250-2,250-3,255-4, and255-5are managed in a distributed manner. In the IBSS, all STAs250-1,250-2,250-3,255-4, and255-5may be constituted by movable STAs, and access to the DS is not permitted, so that the IBSS constitutes a self-contained network. FIG.3illustrates a general link setup process. In S310, a STA may perform a network discovery operation. The network discovery operation may include a scanning operation of the STA. That is, to access a network, the STA needs to discover a network in which it can participate. The STA needs to identify a compatible network before participating in a wireless network, and a process of identifying a network present in a particular area is referred to as scanning. Scanning methods include active scanning and passive scanning. FIG.3illustrates a network discovery operation including an active scanning process. In active scanning, a STA performing scanning transmits a probe request frame and waits for a response to the probe request frame in order to identify which APs are present nearby while moving between channels. A responder transmits a probe response frame as a response to the probe request frame to the STA having transmitted the probe request frame. Here, the responder may be a STA that transmits the last beacon frame in a BSS of a channel being scanned. In the BSS, since an AP transmits a beacon frame, the AP is the responder. In an IBSS, since STAs in the IBSS transmit a beacon frame in turns, the responder is not fixed. For example, when the STA transmits a probe request frame via channel1and receives a probe response frame via channel1, the STA may store BSS-related information included in the received probe response frame, may move to the next channel (e.g., channel2), and may perform scanning (e.g., transmits a probe request and receives a probe response via channel2) by the same method. Although not shown inFIG.3, scanning may be performed by a passive scanning method. In passive scanning, a STA performing scanning may wait for a beacon frame while moving between channels. A beacon frame is one of the management frames in IEEE 802.11 and is periodically transmitted to indicate the presence of a wireless network and to enable the STA performing scanning to find the wireless network and to participate in the wireless network. In a BSS, an AP serves to periodically transmit a beacon frame. In an IBSS, STAs in the IBSS transmit a beacon frame in turns.
Upon receiving the beacon frame, the STA performing scanning stores information related to a BSS included in the beacon frame and records beacon frame information in each channel while moving to another channel. The STA having received the beacon frame may store BSS-related information included in the received beacon frame, may move to the next channel, and may perform scanning in the next channel by the same method. After discovering the network, the STA may perform an authentication process in S320. The authentication process may be referred to as a first authentication process to be clearly distinguished from the following security setup operation in S340. The authentication process in S320may include a process in which the STA transmits an authentication request frame to the AP and the AP transmits an authentication response frame to the STA in response. The authentication frames used for an authentication request/response are management frames. The authentication frames may include information related to an authentication algorithm number, an authentication transaction sequence number, a status code, a challenge text, a robust security network (RSN), and a finite cyclic group. The STA may transmit the authentication request frame to the AP. The AP may determine whether to allow the authentication of the STA based on the information included in the received authentication request frame. The AP may provide the authentication processing result to the STA via the authentication response frame. When the STA is successfully authenticated, the STA may perform an association process in S330. The association process includes a process in which the STA transmits an association request frame to the AP and the AP transmits an association response frame to the STA in response. The association request frame may include, for example, information related to various capabilities, a beacon listen interval, a service set identifier (SSID), a supported rate, a supported channel, RSN, a mobility domain, a supported operating class, a traffic indication map (TIM) broadcast request, and an interworking service capability. The association response frame may include, for example, information related to various capabilities, a status code, an association ID (AID), a supported rate, an enhanced distributed channel access (EDCA) parameter set, a received channel power indicator (RCPI), a received signal-to-noise indicator (RSNI), a mobility domain, a timeout interval (association comeback time), an overlapping BSS scanning parameter, a TIM broadcast response, and a QoS map. In S340, the STA may perform a security setup process. The security setup process in S340may include a process of setting up a shared secret key through 4-way handshaking, for example, using extensible authentication protocol over LAN (EAPOL) frames. FIG.4illustrates an example of a PPDU used in an IEEE standard. As illustrated, various types of PHY protocol data units (PPDUs) are used in IEEE 802.11a/g/n/ac standards. Specifically, an LTF and a STF include a training signal, a SIG-A and a SIG-B include control information for a receiving STA, and a data field includes user data corresponding to a PSDU (MAC PDU/aggregated MAC PDU). FIG.4also includes an example of an HE PPDU according to IEEE 802.11ax. The HE PPDU according toFIG.4is an illustrative PPDU for multiple users. An HE-SIG-B may be included only in a PPDU for multiple users, and an HE-SIG-B may be omitted in a PPDU for a single user.
As illustrated inFIG.4, the HE-PPDU for multiple users (MUs) may include a legacy-short training field (L-STF), a legacy-long training field (L-LTF), a legacy-signal (L-SIG), a high efficiency-signal A (HE-SIG A), a high efficiency-signal-B (HE-SIG B), a high efficiency-short training field (HE-STF), a high efficiency-long training field (HE-LTF), a data field (alternatively, an MAC payload), and a packet extension (PE) field. The respective fields may be transmitted for illustrated time periods (i.e., 4 or 8 μs). Hereinafter, a resource unit (RU) used for a PPDU is described. An RU may include a plurality of subcarriers (or tones). An RU may be used to transmit a signal to a plurality of STAs according to OFDMA. Further, an RU may also be defined to transmit a signal to one STA. An RU may be used for an STF, an LTF, a data field, or the like. FIG.5illustrates a layout of resource units (RUs) used in a band of 20 MHz. As illustrated inFIG.5, resource units (RUs) corresponding to different numbers of tones (i.e., subcarriers) may be used to form some fields of an HE-PPDU. For example, resources may be allocated in illustrated RUs for an HE-STF, an HE-LTF, and a data field. As illustrated in the uppermost part ofFIG.5, a 26-unit (i.e., a unit corresponding to 26 tones) may be disposed. Six tones may be used for a guard band in the leftmost band of the 20 MHz band, and five tones may be used for a guard band in the rightmost band of the 20 MHz band. Further, seven DC tones may be inserted in a center band, that is, a DC band, and a 26-unit corresponding to 13 tones on each of the left and right sides of the DC band may be disposed. A 26-unit, a 52-unit, and a 106-unit may be allocated to other bands. Each unit may be allocated for a receiving STA, that is, a user. The layout of the RUs inFIG.5may be used not only for multiple users (MUs) but also for a single user (SU), in which case one 242-unit may be used and three DC tones may be inserted as illustrated in the lowermost part ofFIG.5. AlthoughFIG.5proposes RUs having various sizes, that is, a 26-RU, a 52-RU, a 106-RU, and a 242-RU, specific sizes of RUs may be extended or increased. Therefore, the present embodiment is not limited to the specific size of each RU (i.e., the number of corresponding tones). FIG.6illustrates a layout of RUs used in a band of 40 MHz. Similarly toFIG.5in which RUs having various sizes are used, a 26-RU, a 52-RU, a 106-RU, a 242-RU, a 484-RU, and the like may be used in an example ofFIG.6. Further, five DC tones may be inserted at the center frequency, 12 tones may be used for a guard band in the leftmost band of the 40 MHz band, and 11 tones may be used for a guard band in the rightmost band of the 40 MHz band. As illustrated inFIG.6, when the layout of the RUs is used for a single user, a 484-RU may be used. The specific number of RUs may be changed similarly toFIG.5. FIG.7illustrates a layout of RUs used in a band of 80 MHz. Similarly toFIG.5andFIG.6in which RUs having various sizes are used, a 26-RU, a 52-RU, a 106-RU, a 242-RU, a 484-RU, a 996-RU, and the like may be used in an example ofFIG.7. Further, seven DC tones may be inserted at the center frequency, 12 tones may be used for a guard band in the leftmost band of the 80 MHz band, and 11 tones may be used for a guard band in the rightmost band of the 80 MHz band. In addition, a 26-RU corresponding to 13 tones on each of the left and right sides of the DC band may be used.
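For illustration, the single-user tone plans just described can be checked arithmetically. The following minimal sketch assumes the usual 256/512/1024-point FFT sizes for 20/40/80 MHz at a 78.125 kHz subcarrier spacing (an assumption of the sketch, not stated above), and uses the 996-RU single-user figures for 80 MHz given just below:

```python
# A minimal arithmetic check of the single-user tone plans described above.
# Assumption: 256/512/1024-point FFTs for 20/40/80 MHz (78.125 kHz spacing).
FFT_SIZE = {20: 256, 40: 512, 80: 1024}

# bandwidth -> (RU size, DC tones, left guard tones, right guard tones)
SU_TONE_PLAN = {
    20: (242, 3, 6, 5),
    40: (484, 5, 12, 11),
    80: (996, 5, 12, 11),   # the 996-RU single-user case noted below
}

for bw, (ru, dc, left, right) in SU_TONE_PLAN.items():
    assert ru + dc + left + right == FFT_SIZE[bw]
    print(f"{bw} MHz: {ru} + {dc} DC + {left}+{right} guard = {FFT_SIZE[bw]} tones")
```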
As illustrated inFIG.7, when the layout of the RUs is used for a single user, a 996-RU may be used, in which case five DC tones may be inserted. The RU described in the present specification may be used in uplink (UL) communication and downlink (DL) communication. For example, when UL-MU communication which is solicited by a trigger frame is performed, a transmitting STA (e.g., an AP) may allocate a first RU (e.g., 26/52/106/242-RU, etc.) to a first STA through the trigger frame, and may allocate a second RU (e.g., 26/52/106/242-RU, etc.) to a second STA. Thereafter, the first STA may transmit a first trigger-based PPDU based on the first RU, and the second STA may transmit a second trigger-based PPDU based on the second RU. The first/second trigger-based PPDU is transmitted to the AP in the same (or overlapping) time period. For example, when a DL MU PPDU is configured, the transmitting STA (e.g., AP) may allocate the first RU (e.g., 26/52/106/242-RU, etc.) to the first STA, and may allocate the second RU (e.g., 26/52/106/242-RU, etc.) to the second STA. That is, the transmitting STA (e.g., AP) may transmit HE-STF, HE-LTF, and Data fields for the first STA through the first RU in one MU PPDU, and may transmit HE-STF, HE-LTF, and Data fields for the second STA through the second RU. Information related to a layout of the RU may be signaled through HE-SIG-B. FIG.8illustrates a structure of an HE-SIG-B field. As illustrated, an HE-SIG-B field810includes a common field820and a user-specific field830. The common field820may include information commonly applied to all users (i.e., user STAs) which receive SIG-B. The user-specific field830may be called a user-specific control field. When the SIG-B is transferred to a plurality of users, the user-specific field830may be applied to only any one of the plurality of users. As illustrated inFIG.8, the common field820and the user-specific field830may be separately encoded. The common field820may include RU allocation information of N*8 bits. For example, the RU allocation information may include information related to a location of an RU. For example, when a 20 MHz channel is used as shown inFIG.5, the RU allocation information may include information related to a specific frequency band to which a specific RU (26-RU/52-RU/106-RU) is arranged. An example of a case in which the RU allocation information consists of 8 bits is as follows.

TABLE 1

8 bits indices (B7 B6 B5 B4 B3 B2 B1 B0)    #1  #2  #3  #4  #5  #6  #7  #8  #9    Number of entries
00000000                                    26  26  26  26  26  26  26  26  26    1
00000001                                    26  26  26  26  26  26  26  52        1
00000010                                    26  26  26  26  26  52  26  26        1
00000011                                    26  26  26  26  26  52  52            1
00000100                                    26  26  52  26  26  26  26  26        1
00000101                                    26  26  52  26  26  26  52            1
00000110                                    26  26  52  26  52  26  26            1
00000111                                    26  26  52  26  52  52                1
00001000                                    52  26  26  26  26  26  26  26        1

As shown in the example ofFIG.5, up to nine 26-RUs may be allocated to the 20 MHz channel. When the RU allocation information of the common field820is set to "00000000" as shown in Table 1, the nine 26-RUs may be allocated to a corresponding channel (i.e., 20 MHz). In addition, when the RU allocation information of the common field820is set to "00000001" as shown in Table 1, seven 26-RUs and one 52-RU are arranged in a corresponding channel. That is, in the example ofFIG.5, the 52-RU may be allocated to the rightmost side, and the seven 26-RUs may be allocated to the left thereof. The example of Table 1 shows only some of the RU locations capable of displaying the RU allocation information. For example, the RU allocation information may include an example of Table 2 below.
TABLE 2

8 bits indices (B7 B6 B5 B4 B3 B2 B1 B0)    #1   #2  #3  #4  #5  #6  #7  #8  #9    Number of entries
01000y2y1y0                                 106  26  26  26  26  26                8
01001y2y1y0                                 106  26  26  26  52                    8

"01000y2y1y0" relates to an example in which a 106-RU is allocated to the leftmost side of the 20 MHz channel, and five 26-RUs are allocated to the right side thereof. In this case, a plurality of STAs (e.g., user-STAs) may be allocated to the 106-RU, based on a MU-MIMO scheme. Specifically, up to 8 STAs (e.g., user-STAs) may be allocated to the 106-RU, and the number of STAs (e.g., user-STAs) allocated to the 106-RU is determined based on 3-bit information (y2y1y0). For example, when the 3-bit information (y2y1y0) is set to N, the number of STAs (e.g., user-STAs) allocated to the 106-RU based on the MU-MIMO scheme may be N+1. In general, a plurality of STAs (e.g., user STAs) different from each other may be allocated to a plurality of RUs. However, the plurality of STAs (e.g., user STAs) may be allocated to one or more RUs having at least a specific size (e.g., 106 subcarriers), based on the MU-MIMO scheme. As shown inFIG.8, the user-specific field830may include a plurality of user fields. As described above, the number of STAs (e.g., user STAs) allocated to a specific channel may be determined based on the RU allocation information of the common field820. For example, when the RU allocation information of the common field820is "00000000", one user STA may be allocated to each of nine 26-RUs (e.g., nine user STAs may be allocated). That is, up to 9 user STAs may be allocated to a specific channel through an OFDMA scheme. In other words, up to 9 user STAs may be allocated to a specific channel through a non-MU-MIMO scheme. For example, when RU allocation is set to "01000y2y1y0", a plurality of STAs may be allocated to the 106-RU arranged at the leftmost side through the MU-MIMO scheme, and five user STAs may be allocated to five 26-RUs arranged to the right side thereof through the non-MU-MIMO scheme. This case is specified through an example ofFIG.9.
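For illustration, the following minimal sketch decodes the 8-bit RU allocation indices of Table 1 and Table 2. It covers only the rows shown above (the full HE-SIG-B table defines many more entries), and the helper name decode_ru_allocation is hypothetical:

```python
# A sketch of decoding the 8-bit RU Allocation subfield, covering only the
# rows of Table 1 and Table 2 above.
def decode_ru_allocation(bits: str):
    """bits: B7..B0 as an 8-character string; returns (RU sizes, users on first RU)."""
    table1 = {
        "00000000": [26, 26, 26, 26, 26, 26, 26, 26, 26],
        "00000001": [26, 26, 26, 26, 26, 26, 26, 52],
        "00000010": [26, 26, 26, 26, 26, 52, 26, 26],
        "00000011": [26, 26, 26, 26, 26, 52, 52],
        "00000100": [26, 26, 52, 26, 26, 26, 26, 26],
        "00000101": [26, 26, 52, 26, 26, 26, 52],
        "00000110": [26, 26, 52, 26, 52, 26, 26],
        "00000111": [26, 26, 52, 26, 52, 52],
        "00001000": [52, 26, 26, 26, 26, 26, 26, 26],
    }
    if bits in table1:
        return table1[bits], 1            # non-MU-MIMO: one user per RU
    if bits.startswith("01000"):          # Table 2, first row
        n = int(bits[5:], 2)              # y2y1y0 = N -> N+1 MU-MIMO users
        return [106, 26, 26, 26, 26, 26], n + 1
    if bits.startswith("01001"):          # Table 2, second row
        n = int(bits[5:], 2)
        return [106, 26, 26, 26, 52], n + 1
    raise ValueError("index not covered by this sketch")

print(decode_ru_allocation("01000010"))   # ([106, 26, 26, 26, 26, 26], 3)
```

The "01000010" input returns three MU-MIMO users on the 106-RU, which matches the example ofFIG.9described next.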
FIG.9illustrates an example in which a plurality of user STAs are allocated to the same RU through a MU-MIMO scheme. For example, when RU allocation is set to "01000010" as shown inFIG.9, a 106-RU may be allocated to the leftmost side of a specific channel, and five 26-RUs may be allocated to the right side thereof. In addition, three user STAs may be allocated to the 106-RU through the MU-MIMO scheme. As a result, since eight user STAs are allocated, the user-specific field830of HE-SIG-B may include eight user fields. The eight user fields may be expressed in the order shown inFIG.9. In addition, as shown inFIG.8, two user fields may be implemented with one user block field. The user fields shown inFIG.8andFIG.9may be configured based on two formats. That is, a user field related to a MU-MIMO scheme may be configured in a first format, and a user field related to a non-MU-MIMO scheme may be configured in a second format. Referring to the example ofFIG.9, a user field1to a user field3may be based on the first format, and a user field4to a user field8may be based on the second format. The first format and the second format include bit information of the same length (e.g., 21 bits). Each user field may have the same size (e.g., 21 bits). For example, the user field of the first format (the format of the MU-MIMO scheme) may be configured as follows. For example, a first bit (i.e., B0-B10) in the user field (i.e., 21 bits) may include identification information (e.g., STA-ID, partial AID, etc.) of a user STA to which a corresponding user field is allocated. In addition, a second bit (i.e., B11-B14) in the user field (i.e., 21 bits) may include information related to a spatial configuration. In addition, a third bit (i.e., B15-B18) in the user field (i.e., 21 bits) may include modulation and coding scheme (MCS) information. The MCS information may be applied to a data field in a PPDU including corresponding SIG-B. An MCS, MCS information, an MCS index, an MCS field, or the like used in the present specification may be indicated by an index value. For example, the MCS information may be indicated by an index 0 to an index 11. The MCS information may include information related to a constellation modulation type (e.g., BPSK, QPSK, 16-QAM, 64-QAM, 256-QAM, 1024-QAM, etc.) and information related to a coding rate (e.g., ½, ⅔, ¾, ⅚, etc.). Information related to a channel coding type (e.g., BCC or LDPC) may be excluded in the MCS information. In addition, a fourth bit (i.e., B19) in the user field (i.e., 21 bits) may be a reserved field. In addition, a fifth bit (i.e., B20) in the user field (i.e., 21 bits) may include information related to a coding type (e.g., BCC or LDPC). That is, the fifth bit (i.e., B20) may include information related to a type (e.g., BCC or LDPC) of channel coding applied to the data field in the PPDU including the corresponding SIG-B. The aforementioned example relates to the user field of the first format (the format of the MU-MIMO scheme). An example of the user field of the second format (the format of the non-MU-MIMO scheme) is as follows. A first bit (e.g., B0-B10) in the user field of the second format may include identification information of a user STA. In addition, a second bit (e.g., B11-B13) in the user field of the second format may include information related to the number of spatial streams applied to a corresponding RU. In addition, a third bit (e.g., B14) in the user field of the second format may include information related to whether a beamforming steering matrix is applied. A fourth bit (e.g., B15-B18) in the user field of the second format may include modulation and coding scheme (MCS) information. In addition, a fifth bit (e.g., B19) in the user field of the second format may include information related to whether dual carrier modulation (DCM) is applied. In addition, a sixth bit (i.e., B20) in the user field of the second format may include information related to a coding type (e.g., BCC or LDPC).
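For illustration, a sketch of parsing the two 21-bit user field formats follows. The bit ordering (B0 first) and the 0/1 polarity of the flag bits are assumptions of this sketch, not the exact standard encoding:

```python
# An illustrative parser for the two 21-bit user field formats above.
def parse_user_field(bits: str, mu_mimo: bool) -> dict:
    assert len(bits) == 21                      # bits[0] is B0, bits[20] is B20
    field = {"sta_id": int(bits[0:11], 2)}      # B0-B10: STA-ID / partial AID
    if mu_mimo:                                 # first format (MU-MIMO)
        field["spatial_config"] = int(bits[11:15], 2)           # B11-B14
        field["mcs"] = int(bits[15:19], 2)                      # B15-B18
        field["coding"] = "LDPC" if bits[20] == "1" else "BCC"  # B20 (B19 reserved)
    else:                                       # second format (non-MU-MIMO)
        field["n_spatial_streams"] = int(bits[11:14], 2)        # B11-B13
        field["beamformed"] = bits[14] == "1"                   # B14
        field["mcs"] = int(bits[15:19], 2)                      # B15-B18
        field["dcm"] = bits[19] == "1"                          # B19
        field["coding"] = "LDPC" if bits[20] == "1" else "BCC"  # B20
    return field

print(parse_user_field("0" * 21, mu_mimo=False))
```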
Hereinafter, a PPDU transmitted/received in a STA of the present specification will be described. FIG.10illustrates an example of a PPDU used in the present specification. The PPDU ofFIG.10may be called in various terms such as an EHT PPDU, a TX PPDU, an RX PPDU, a first type or N-th type PPDU, or the like. For example, in the present specification, the PPDU or the EHT PPDU may be called in various terms such as a TX PPDU, a RX PPDU, a first type or N-th type PPDU, or the like. In addition, the EHT PPDU may be used in an EHT system and/or a new WLAN system enhanced from the EHT system. The PPDU ofFIG.10may indicate the entirety or part of a PPDU type used in the EHT system. For example, the example ofFIG.10may be used for both of a single-user (SU) mode and a multi-user (MU) mode. In other words, the PPDU ofFIG.10may be a PPDU for one receiving STA or a plurality of receiving STAs. When the PPDU ofFIG.10is used for a trigger-based (TB) mode, the EHT-SIG ofFIG.10may be omitted. In other words, an STA which has received a trigger frame for uplink-MU (UL-MU) may transmit the PPDU in which the EHT-SIG is omitted in the example ofFIG.10. InFIG.10, an L-STF to an EHT-LTF may be called a preamble or a physical preamble, and may be generated/transmitted/received/obtained/decoded in a physical layer. A subcarrier spacing of the L-STF, L-LTF, L-SIG, RL-SIG, U-SIG, and EHT-SIG fields ofFIG.10may be determined as 312.5 kHz, and a subcarrier spacing of the EHT-STF, EHT-LTF, and Data fields may be determined as 78.125 kHz. That is, a tone index (or subcarrier index) of the L-STF, L-LTF, L-SIG, RL-SIG, U-SIG, and EHT-SIG fields may be expressed in unit of 312.5 kHz, and a tone index (or subcarrier index) of the EHT-STF, EHT-LTF, and Data fields may be expressed in unit of 78.125 kHz. In the PPDU ofFIG.10, the L-LTF and the L-STF may be the same as the corresponding conventional fields. The L-SIG field ofFIG.10may include, for example, bit information of 24 bits. For example, the 24-bit information may include a rate field of 4 bits, a reserved bit of 1 bit, a length field of 12 bits, a parity bit of 1 bit, and a tail bit of 6 bits. For example, the length field of 12 bits may include information related to a length or time duration of a PPDU. For example, the length field of 12 bits may be determined based on a type of the PPDU. For example, when the PPDU is a non-HT, HT, VHT PPDU or an EHT PPDU, a value of the length field may be determined as a multiple of 3. For example, when the PPDU is an HE PPDU, the value of the length field may be determined as "a multiple of 3"+1 or "a multiple of 3"+2. In other words, for the non-HT, HT, VHT PPDU or the EHT PPDU, the value of the length field may be determined as a multiple of 3, and for the HE PPDU, the value of the length field may be determined as "a multiple of 3"+1 or "a multiple of 3"+2. For example, the transmitting STA may apply BCC encoding based on a ½ coding rate to the 24-bit information of the L-SIG field. Thereafter, the transmitting STA may obtain a BCC coding bit of 48 bits. BPSK modulation may be applied to the 48-bit coding bit, thereby generating 48 BPSK symbols. The transmitting STA may map the 48 BPSK symbols to positions except for a pilot subcarrier{subcarrier index −21, −7, +7, +21} and a DC subcarrier{subcarrier index 0}. As a result, the 48 BPSK symbols may be mapped to subcarrier indices −26 to −22, −20 to −8, −6 to −1, +1 to +6, +8 to +20, and +22 to +26. The transmitting STA may additionally map a signal of {−1, −1, −1, 1} to a subcarrier index{−28, −27, +27, +28}. The aforementioned signal may be used for channel estimation on a frequency domain corresponding to {−28, −27, +27, +28}. The transmitting STA may generate an RL-SIG generated in the same manner as the L-SIG. BPSK modulation may be applied to the RL-SIG. The receiving STA may know that the RX PPDU is the HE PPDU or the EHT PPDU, based on the presence of the RL-SIG.
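For illustration, the subcarrier bookkeeping of the L-SIG mapping above can be reproduced in a few lines; this is a recap of the stated indices, not a transmitter implementation:

```python
# 24 bits -> rate-1/2 BCC -> 48 coded bits -> 48 BPSK symbols on indices
# -26..+26, skipping the DC tone (0) and the four pilot tones; {-1, -1, -1, 1}
# is additionally mapped to {-28, -27, +27, +28} as a channel-estimation aid.
PILOT_TONES = {-21, -7, 7, 21}
data_tones = [k for k in range(-26, 27) if k != 0 and k not in PILOT_TONES]
assert len(data_tones) == 48      # matches the 48 BPSK symbols

extra_tones = dict(zip((-28, -27, 27, 28), (-1, -1, -1, 1)))
print(data_tones[0], data_tones[-1], extra_tones)
```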
A universal SIG (U-SIG) may be inserted after the RL-SIG ofFIG.10. The U-SIG may be called in various terms such as a first SIG field, a first SIG, a first type SIG, a control signal, a control signal field, a first (type) control signal, or the like. The U-SIG may include information of N bits, and may include information for identifying a type of the EHT PPDU. For example, the U-SIG may be configured based on two symbols (e.g., two contiguous OFDM symbols). Each symbol (e.g., OFDM symbol) for the U-SIG may have a duration of 4 μs. Each symbol of the U-SIG may be used to transmit the 26-bit information. For example, each symbol of the U-SIG may be transmitted/received based on 52 data tones and 4 pilot tones. Through the U-SIG (or U-SIG field), for example, A-bit information (e.g., 52 un-coded bits) may be transmitted. A first symbol of the U-SIG may transmit first X-bit information (e.g., 26 un-coded bits) of the A-bit information, and a second symbol of the U-SIG may transmit the remaining Y-bit information (e.g., 26 un-coded bits) of the A-bit information. For example, the transmitting STA may obtain 26 un-coded bits included in each U-SIG symbol. The transmitting STA may perform convolutional encoding (i.e., BCC encoding) based on a rate of R=½ to generate 52 coded bits, and may perform interleaving on the 52 coded bits. The transmitting STA may perform BPSK modulation on the interleaved 52 coded bits to generate 52 BPSK symbols to be allocated to each U-SIG symbol. One U-SIG symbol may be transmitted based on 56 tones (subcarriers) from a subcarrier index −28 to a subcarrier index +28, except for a DC index 0. The 52 BPSK symbols generated by the transmitting STA may be transmitted based on the remaining tones (subcarriers) except for the pilot tones, i.e., tones −21, −7, +7, +21. For example, the A-bit information (e.g., 52 un-coded bits) generated by the U-SIG may include a CRC field (e.g., a field having a length of 4 bits) and a tail field (e.g., a field having a length of 6 bits). The CRC field and the tail field may be transmitted through the second symbol of the U-SIG. The CRC field may be generated based on 26 bits allocated to the first symbol of the U-SIG and the remaining 16 bits except for the CRC/tail fields in the second symbol, and may be generated based on the conventional CRC calculation algorithm. In addition, the tail field may be used to terminate trellis of a convolutional decoder, and may be set to, for example, "000000".
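For illustration, the per-symbol U-SIG encoding above (26 un-coded bits encoded at rate 1/2 into 52 coded bits) may be sketched as follows. The K=7 generator polynomials 0o133/0o171 are the conventional IEEE 802.11 BCC generators; applying them unchanged here is an assumption of this sketch, and interleaving and BPSK mapping are omitted:

```python
# A minimal rate-1/2 BCC sketch for one U-SIG symbol.
def bcc_encode_r12(bits):
    g0, g1, state, out = 0o133, 0o171, 0, []
    for b in bits:
        state = ((state << 1) | b) & 0x7F            # 7-bit shift register
        out.append(bin(state & g0).count("1") % 2)   # first output bit
        out.append(bin(state & g1).count("1") % 2)   # second output bit
    return out

coded = bcc_encode_r12([0, 1] * 13)   # 26 input bits per U-SIG symbol
assert len(coded) == 52               # 52 coded bits -> 52 BPSK symbols
```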
The A-bit information (e.g., 52 un-coded bits) transmitted by the U-SIG (or U-SIG field) may be divided into version-independent bits and version-dependent bits. For example, the version-independent bits may have a fixed or variable size. For example, the version-independent bits may be allocated only to the first symbol of the U-SIG, or the version-independent bits may be allocated to both of the first and second symbols of the U-SIG. For example, the version-independent bits and the version-dependent bits may be called in various terms such as a first control bit, a second control bit, or the like. For example, the version-independent bits of the U-SIG may include a PHY version identifier of 3 bits. For example, the PHY version identifier of 3 bits may include information related to a PHY version of a TX/RX PPDU. For example, a first value of the PHY version identifier of 3 bits may indicate that the TX/RX PPDU is an EHT PPDU. In other words, when the transmitting STA transmits the EHT PPDU, the PHY version identifier of 3 bits may be set to a first value. In other words, the receiving STA may determine that the RX PPDU is the EHT PPDU, based on the PHY version identifier having the first value. For example, the version-independent bits of the U-SIG may include a UL/DL flag field of 1 bit. A first value of the UL/DL flag field of 1 bit relates to UL communication, and a second value of the UL/DL flag field relates to DL communication. For example, the version-independent bits of the U-SIG may include information related to a TXOP length and information related to a BSS color ID. For example, when the EHT PPDU is divided into various types (e.g., various types such as an EHT PPDU related to an SU mode, an EHT PPDU related to a MU mode, an EHT PPDU related to a TB mode, an EHT PPDU related to extended range transmission, or the like), information related to the type of the EHT PPDU may be included in the version-dependent bits of the U-SIG. For example, the U-SIG may include: 1) a bandwidth field including information related to a bandwidth; 2) a field including information related to an MCS scheme applied to EHT-SIG; 3) an indication field including information regarding whether a dual subcarrier modulation (DCM) scheme is applied to EHT-SIG; 4) a field including information related to the number of symbols used for EHT-SIG; 5) a field including information regarding whether the EHT-SIG is generated across a full band; 6) a field including information related to a type of EHT-LTF/STF; and 7) information related to a field indicating an EHT-LTF length and a CP length. Preamble puncturing may be applied to the PPDU ofFIG.10. The preamble puncturing implies that puncturing is applied to part (e.g., a secondary 20 MHz band) of the full band. For example, when an 80 MHz PPDU is transmitted, an STA may apply puncturing to the secondary 20 MHz band out of the 80 MHz band, and may transmit a PPDU only through a primary 20 MHz band and a secondary 40 MHz band. For example, a pattern of the preamble puncturing may be configured in advance. For example, when a first puncturing pattern is applied, puncturing may be applied only to the secondary 20 MHz band within the 80 MHz band. For example, when a second puncturing pattern is applied, puncturing may be applied to only any one of two secondary 20 MHz bands included in the secondary 40 MHz band within the 80 MHz band. For example, when a third puncturing pattern is applied, puncturing may be applied to only the secondary 20 MHz band included in the primary 80 MHz band within the 160 MHz band (or 80+80 MHz band). For example, when a fourth puncturing pattern is applied, puncturing may be applied to at least one 20 MHz channel not belonging to a primary 40 MHz band in the presence of the primary 40 MHz band included in the 80 MHz band within the 160 MHz band (or 80+80 MHz band). Information related to the preamble puncturing applied to the PPDU may be included in U-SIG and/or EHT-SIG.
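For illustration, the preconfigured puncturing patterns for the 80 MHz case may be viewed as bitmaps over 20 MHz channels. The channel ordering used below is an assumption made for readability:

```python
# 1 = transmitted 20 MHz channel, 0 = punctured; assumed channel order is
# [primary 20, secondary 20, lower secondary-40 20 MHz, upper secondary-40 20 MHz].
PUNCTURING_PATTERNS_80MHZ = {
    "first pattern (puncture the secondary 20 MHz)": [1, 0, 1, 1],
    "second pattern, option a (one 20 MHz of the secondary 40 MHz)": [1, 1, 0, 1],
    "second pattern, option b (one 20 MHz of the secondary 40 MHz)": [1, 1, 1, 0],
}
for name, bitmap in PUNCTURING_PATTERNS_80MHZ.items():
    print(name, bitmap)
```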
For example, a first field of the U-SIG may include information related to a contiguous bandwidth, and a second field of the U-SIG may include information related to the preamble puncturing applied to the PPDU. For example, the U-SIG and the EHT-SIG may include the information related to the preamble puncturing, based on the following method. When a bandwidth of the PPDU exceeds 80 MHz, the U-SIG may be configured individually in unit of 80 MHz. For example, when the bandwidth of the PPDU is 160 MHz, the PPDU may include a first U-SIG for a first 80 MHz band and a second U-SIG for a second 80 MHz band. In this case, a first field of the first U-SIG may include information related to a 160 MHz bandwidth, and a second field of the first U-SIG may include information related to a preamble puncturing (i.e., information related to a preamble puncturing pattern) applied to the first 80 MHz band. In addition, a first field of the second U-SIG may include information related to a 160 MHz bandwidth, and a second field of the second U-SIG may include information related to a preamble puncturing (i.e., information related to a preamble puncturing pattern) applied to the second 80 MHz band. Meanwhile, an EHT-SIG contiguous to the first U-SIG may include information related to a preamble puncturing applied to the second 80 MHz band (i.e., information related to a preamble puncturing pattern), and an EHT-SIG contiguous to the second U-SIG may include information related to a preamble puncturing (i.e., information related to a preamble puncturing pattern) applied to the first 80 MHz band. Additionally or alternatively, the U-SIG and the EHT-SIG may include the information related to the preamble puncturing, based on the following method. The U-SIG may include information related to a preamble puncturing (i.e., information related to a preamble puncturing pattern) for all bands. That is, the EHT-SIG may not include the information related to the preamble puncturing, and only the U-SIG may include the information related to the preamble puncturing (i.e., the information related to the preamble puncturing pattern). The U-SIG may be configured in unit of 20 MHz. For example, when an 80 MHz PPDU is configured, the U-SIG may be duplicated. That is, four identical U-SIGs may be included in the 80 MHz PPDU. PPDUs exceeding an 80 MHz bandwidth may include different U-SIGs. The EHT-SIG ofFIG.10may include control information for the receiving STA. The EHT-SIG may be transmitted through at least one symbol, and one symbol may have a length of 4 μs. Information related to the number of symbols used for the EHT-SIG may be included in the U-SIG. The EHT-SIG may include a technical feature of the HE-SIG-B described with reference toFIG.8andFIG.9. For example, the EHT-SIG may include a common field and a user-specific field as in the example ofFIG.8. The common field of the EHT-SIG may be omitted, and the number of user-specific fields may be determined based on the number of users. As in the example ofFIG.8, the common field of the EHT-SIG and the user-specific field of the EHT-SIG may be individually coded. One user block field included in the user-specific field may include information for two users, but a last user block field included in the user-specific field may include information for one user. That is, one user block field of the EHT-SIG may include up to two user fields. As in the example ofFIG.9, each user field may be related to MU-MIMO allocation, or may be related to non-MU-MIMO allocation. As in the example ofFIG.8, the common field of the EHT-SIG may include a CRC bit and a tail bit. A length of the CRC bit may be determined as 4 bits. A length of the tail bit may be determined as 6 bits, and may be set to '000000'. As in the example ofFIG.8, the common field of the EHT-SIG may include RU allocation information. The RU allocation information may imply information related to a location of an RU to which a plurality of users (i.e., a plurality of receiving STAs) are allocated. The RU allocation information may be configured in unit of 8 bits (or N bits), as in Table 1.
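For illustration, the packing of user fields into user block fields described above may be sketched as follows:

```python
# Each EHT-SIG user block field carries two user fields, except that a last
# block carries one user field when the number of users is odd.
def pack_user_block_fields(user_fields: list) -> list:
    return [user_fields[i:i + 2] for i in range(0, len(user_fields), 2)]

print(pack_user_block_fields(["user1", "user2", "user3", "user4", "user5"]))
# [['user1', 'user2'], ['user3', 'user4'], ['user5']]
```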
A mode in which the common field of the EHT-SIG is omitted may be supported. The mode in which the common field of the EHT-SIG is omitted may be called a compressed mode. When the compressed mode is used, a plurality of users (i.e., a plurality of receiving STAs) may decode the PPDU (e.g., the data field of the PPDU), based on non-OFDMA. That is, the plurality of users of the EHT PPDU may decode the PPDU (e.g., the data field of the PPDU) received through the same frequency band. Meanwhile, when a non-compressed mode is used, the plurality of users of the EHT PPDU may decode the PPDU (e.g., the data field of the PPDU), based on OFDMA. That is, the plurality of users of the EHT PPDU may receive the PPDU (e.g., the data field of the PPDU) through different frequency bands. The EHT-SIG may be configured based on various MCS schemes. As described above, information related to an MCS scheme applied to the EHT-SIG may be included in U-SIG. The EHT-SIG may be configured based on a DCM scheme. For example, among N data tones (e.g., 52 data tones) allocated for the EHT-SIG, a first modulation scheme may be applied to half of consecutive tones, and a second modulation scheme may be applied to the remaining half of the consecutive tones. That is, a transmitting STA may use the first modulation scheme to modulate specific control information through a first symbol and allocate it to half of the consecutive tones, and may use the second modulation scheme to modulate the same control information by using a second symbol and allocate it to the remaining half of the consecutive tones. As described above, information (e.g., a 1-bit field) regarding whether the DCM scheme is applied to the EHT-SIG may be included in the U-SIG. An EHT-STF ofFIG.10may be used for improving automatic gain control estimation in a multiple input multiple output (MIMO) environment or an OFDMA environment. An EHT-LTF ofFIG.10may be used for estimating a channel in the MIMO environment or the OFDMA environment. Information related to a type of STF and/or LTF (information related to a GI applied to LTF is also included) may be included in a SIG-A field and/or SIG-B field or the like ofFIG.10. A PPDU (e.g., EHT-PPDU) ofFIG.10may be configured based on the example ofFIG.5andFIG.6. For example, an EHT PPDU transmitted on a 20 MHz band, i.e., a 20 MHz EHT PPDU, may be configured based on the RU ofFIG.5. That is, a location of an RU of EHT-STF, EHT-LTF, and data fields included in the EHT PPDU may be determined as shown inFIG.5. An EHT PPDU transmitted on a 40 MHz band, i.e., a 40 MHz EHT PPDU, may be configured based on the RU ofFIG.6. That is, a location of an RU of EHT-STF, EHT-LTF, and data fields included in the EHT PPDU may be determined as shown inFIG.6. Since the RU location ofFIG.6corresponds to 40 MHz, a tone-plan for 80 MHz may be determined when the pattern ofFIG.6is repeated twice. That is, an 80 MHz EHT PPDU may be transmitted based on a new tone-plan in which not the RU ofFIG.7but the RU ofFIG.6is repeated twice. When the pattern ofFIG.6is repeated twice, 23 tones (i.e., 11 guard tones+12 guard tones) may be configured in a DC region. That is, a tone-plan for an 80 MHz EHT PPDU allocated based on OFDMA may have 23 DC tones. In contrast, an 80 MHz EHT PPDU allocated based on non-OFDMA (i.e., a non-OFDMA full bandwidth 80 MHz PPDU) may be configured based on a 996-RU, and may include 5 DC tones, 12 left guard tones, and 11 right guard tones. A tone-plan for 160/240/320 MHz may be configured in such a manner that the pattern ofFIG.6is repeated several times. The PPDU ofFIG.10may be determined (or identified) as an EHT PPDU based on the following method.
A receiving STA may determine a type of an RX PPDU as the EHT PPDU, based on the following aspect. For example, the RX PPDU may be determined as the EHT PPDU: 1) when a first symbol after an L-LTF signal of the RX PPDU is a BPSK symbol; 2) when RL-SIG in which the L-SIG of the RX PPDU is repeated is detected; and 3) when a result of applying "modulo 3" to a value of a length field of the L-SIG of the RX PPDU is detected as "0". When the RX PPDU is determined as the EHT PPDU, the receiving STA may detect a type of the EHT PPDU (e.g., an SU/MU/Trigger-based/Extended Range type), based on bit information included in a symbol after the RL-SIG ofFIG.10. In other words, the receiving STA may determine the RX PPDU as the EHT PPDU, based on: 1) a first symbol after an L-LTF signal, which is a BPSK symbol; 2) RL-SIG contiguous to the L-SIG field and identical to L-SIG; 3) L-SIG including a length field in which a result of applying "modulo 3" is set to "0"; and 4) a 3-bit PHY version identifier of the aforementioned U-SIG (e.g., a PHY version identifier having a first value). For example, the receiving STA may determine the type of the RX PPDU as the HE PPDU, based on the following aspect. For example, the RX PPDU may be determined as the HE PPDU: 1) when a first symbol after an L-LTF signal is a BPSK symbol; 2) when RL-SIG in which the L-SIG is repeated is detected; and 3) when a result of applying "modulo 3" to a value of a length field of the L-SIG is detected as "1" or "2". For example, the receiving STA may determine the type of the RX PPDU as a non-HT, HT, and VHT PPDU, based on the following aspect. For example, the RX PPDU may be determined as the non-HT, HT, and VHT PPDU: 1) when a first symbol after an L-LTF signal is a BPSK symbol; and 2) when RL-SIG in which L-SIG is repeated is not detected. In addition, even if the receiving STA detects that the RL-SIG is repeated, when a result of applying "modulo 3" to the length value of the L-SIG is detected as "0", the RX PPDU may be determined as the non-HT, HT, and VHT PPDU.
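For illustration, the classification logic above may be condensed into a small function. The inputs are assumed to be already-detected preamble properties; this mirrors the stated conditions rather than any real receiver API:

```python
# A sketch of the RX PPDU classification rules above (BPSK first symbol assumed).
def classify_rx_ppdu(rl_sig_detected: bool, lsig_length: int,
                     phy_version_is_eht: bool = False) -> str:
    if not rl_sig_detected:
        return "non-HT/HT/VHT PPDU"     # no repeated L-SIG
    if lsig_length % 3 in (1, 2):
        return "HE PPDU"                # repeated L-SIG, length mod 3 = 1 or 2
    # length mod 3 = 0: EHT only if the U-SIG PHY version identifier confirms
    # it; otherwise fall back to non-HT/HT/VHT (last paragraph above)
    return "EHT PPDU" if phy_version_is_eht else "non-HT/HT/VHT PPDU"

print(classify_rx_ppdu(True, 124))                           # HE PPDU
print(classify_rx_ppdu(True, 123, phy_version_is_eht=True))  # EHT PPDU
```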
In the following example, a signal represented as a (TX/RX/UL/DL) signal, a (TX/RX/UL/DL) frame, a (TX/RX/UL/DL) packet, a (TX/RX/UL/DL) data unit, (TX/RX/UL/DL) data, or the like may be a signal transmitted/received based on the PPDU ofFIG.10. The PPDU ofFIG.10may be used to transmit/receive frames of various types. For example, the PPDU ofFIG.10may be used for a control frame. An example of the control frame may include a request to send (RTS), a clear to send (CTS), a power save-poll (PS-poll), BlockACKReq, BlockAck, a null data packet (NDP) announcement, and a trigger frame. For example, the PPDU ofFIG.10may be used for a management frame. An example of the management frame may include a beacon frame, a (re-)association request frame, a (re-)association response frame, a probe request frame, and a probe response frame. For example, the PPDU ofFIG.10may be used for a data frame. For example, the PPDU ofFIG.10may be used to simultaneously transmit at least two or more of the control frame, the management frame, and the data frame. FIG.11illustrates an example of a modified transmission device and/or receiving device of the present specification. Each device/STA of the sub-figure (a)/(b) ofFIG.1may be modified as shown inFIG.11. A transceiver630ofFIG.11may be identical to the transceivers113and123ofFIG.1. The transceiver630ofFIG.11may include a receiver and a transmitter. A processor610ofFIG.11may be identical to the processors111and121ofFIG.1. Alternatively, the processor610ofFIG.11may be identical to the processing chips114and124ofFIG.1. A memory620ofFIG.11may be identical to the memories112and122ofFIG.1. Alternatively, the memory620ofFIG.11may be a separate external memory different from the memories112and122ofFIG.1. Referring toFIG.11, a power management module611manages power for the processor610and/or the transceiver630. A battery612supplies power to the power management module611. A display613outputs a result processed by the processor610. A keypad614receives inputs to be used by the processor610. The keypad614may be displayed on the display613. A SIM card615may be an integrated circuit which is used to securely store an international mobile subscriber identity (IMSI) and its related key, which are used to identify and authenticate subscribers on mobile telephony devices such as mobile phones and computers. Referring toFIG.11, a speaker640may output a result related to a sound processed by the processor610. A microphone641may receive an input related to a sound to be used by the processor610.

1. Spatial Reuse (SR) Behavior

In 802.11ax wireless LAN systems, SR operation is a method of improving spectral efficiency by increasing the number of parallel transmissions. Carrier sense threshold (CST) adjustment for inter-BSS transmissions detected through SR operation may be performed. CST coordination is achieved through two mechanisms: i) Overlapping Basic Service Set Packet Detect (OBSS PD)-based SR, and ii) Parameterized Spatial Reuse (PSR). The main difference between the two mechanisms lies in the degree of collaboration between the BSSs to identify SR-based opportunities. Both mechanisms include Transmission Power Control (TPC) to limit further interference generated by simultaneous transmissions. SR operation is introduced as a mechanism to increase the number of simultaneous transmissions and the spectral efficiency in an OBSS. In some cases, dynamic sensitivity and transmit power tuning have been shown to significantly improve network performance and contribute to reducing the impact of the well-known hidden/exposed device problem. However, in some cases, modifying the CST or transmit power may exacerbate the hidden/exposed device problem by creating flow starvation and asymmetry. FIG.12is a chart showing the effect of increasing and decreasing transmit power and sensitivity in a WLAN. For example, increasing the sensitivity can contribute to more frequent access to the channel because the carrier sense (CS) area is reduced. However, this may lead to observing a higher number of collisions with hidden nodes. In addition, a more robust Modulation and Coding Scheme (MCS) is required because a more aggressive channel access policy may expose the receiver to higher levels of interference. SR operation relies on dynamic Clear Channel Assessment/Carrier Sense (CCA/CS) adjustment to increase the number of transmit opportunities (TXOPs) in an OBSS. The CCA/CS mechanism is triggered on a Wi-Fi device when it detects the preamble of another device's transmission. A detected transmission (exceeding the physical sensitivity threshold) may not be decoded properly if the received signal is poor. In contrast, for decoded transmissions that exceed the CCA/CS threshold, the physical or virtual carrier sensing action sets the medium as busy. The capture effect is also used when detecting multiple signals, so operation can be locked to the strongest signal without experiencing packet collisions. FIG.13is an example illustrating a CS area in a WLAN system.
The aforementioned concept is illustrated inFIG.13. InFIG.13, the AP A in the middle can detect a received signal higher than the receiver sensitivity of the antenna, but can only decode signals above the CCA/CS threshold. In addition, channel utilization is improved because the AP B transmission can be ignored using the OBSS/PD threshold due to the 11ax SR operation. In addition, transmit power limiting is applied in the case of a TXOP sensed using the OBSS/PD threshold. InFIG.13, transmit power is fixed and all devices use the same frequency channel.

1.1 OBSS PD-Based SR

Upon receiving a PPDU, the MAC layer of a specific device receives notification from the PHY. At this time, among various operations, the node inspects the frame and determines whether the PPDU is an intra-BSS frame or an inter-BSS frame. By quickly identifying the source of an ongoing transmission, a HE STA can improve the probability of accessing a channel by using an appropriate OBSS/PD value. 802.11ax defines a set of rules to limit the OBSS/PD threshold, and the upper limit is as follows:

OBSS/PD ≤ max(OBSS/PDmin, min(OBSS/PDmax, OBSS/PDmin + (TX_PWRref − TX_PWR)))

Here, OBSS/PDmin and OBSS/PDmax are −82 dBm and −62 dBm, respectively, and the reference power TX_PWRref is 21 dBm or 25 dBm depending on the capability of the device. TX_PWR means the transmit power at the antenna connector in dBm of the HE node that identifies the SR-based TXOP. FIG.14is a graph showing adjustment rules for OBSS/PD and transmit power. Along with sensitivity adjustment, SR operations include transmit power limiting for all transmissions that occur as a result of a sensed SR TXOP (i.e., after ignoring inter-BSS frames given via OBSS/PD-based SR operations). The maximum allowable transmit power (TX_PWRmax) is defined as:

TX_PWRmax = TX_PWRref − (OBSS/PD − OBSS/PDmin)

The previous equation holds for OBSS/PDmax ≥ OBSS/PD > OBSS/PDmin. Otherwise, the maximum transmit power is not limited. By applying power limiting, the OBSS/PD value aims to reduce the effect of simultaneous transmission caused by SR. Simply put, the higher the OBSS/PD threshold (the more inter-BSS transmissions can be ignored), the lower the transmit power (the less interference may be generated). The transmit power limit lasts until the end of the SR TXOP identified by the HE node, which begins when the backoff reaches zero. This period depends on the active transmission period used to detect the SR TXOP.
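For illustration, the two adjustment rules above can be transcribed directly; the helper names are hypothetical:

```python
# A direct transcription of the OBSS/PD and transmit power rules (dBm/dB values).
OBSS_PD_MIN, OBSS_PD_MAX = -82.0, -62.0

def obss_pd_upper_limit(tx_pwr: float, tx_pwr_ref: float = 21.0) -> float:
    """Upper limit on the OBSS/PD threshold for a given transmit power."""
    return max(OBSS_PD_MIN, min(OBSS_PD_MAX, OBSS_PD_MIN + (tx_pwr_ref - tx_pwr)))

def tx_pwr_max(obss_pd: float, tx_pwr_ref: float = 21.0) -> float:
    """Maximum allowable transmit power while an SR TXOP is being used."""
    if OBSS_PD_MIN < obss_pd <= OBSS_PD_MAX:
        return tx_pwr_ref - (obss_pd - OBSS_PD_MIN)
    return float("inf")   # otherwise the maximum transmit power is not limited

# The higher the OBSS/PD threshold, the lower the allowed transmit power:
for pd in (-72.0, -67.0, -62.0):
    print(f"OBSS/PD = {pd} dBm -> TX_PWRmax = {tx_pwr_max(pd)} dBm")
```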
1.2 Parameterized Spatial Reuse (PSR)

PSR operation is defined as an alternative to OBSS/PD-based SR for TB transmissions. A node using a PSR opportunity (the opportunist) identifies the PSR opportunity in a sensed TB transmission. Specifically, the opportunist detects a TB transmission and finds, in the header of the trigger frame (TF), a transmission holder indicating support for PSR operation. To identify a PSR opportunity, the opportunist must check whether the TB PPDU following a given TF packet can be ignored. To do so, the opportunist's intended transmit power must not exceed the requirement imposed by the transmission holder (encapsulated in the PSR_INPUT parameter). If the opportunist checks the PSR value of the detected TF and confirms that its intended transmit power is acceptable, it may transmit during the duration of the TB PPDU(s) (indicated in the Common Info field). In particular, the intended transmit power must be less than the PSR value measured in the legacy portion of the TF (i.e., the PHY header) minus the received power level (RPL). The PSR value is calculated as follows:

PSR = TX_PWRAP + I_max_AP

where TX_PWRAP is the normalized transmit power in dBm at the output of the antenna connector, and I_max_AP is the normalized value in dB that captures the maximum allowed interference at the transmission holder. In particular, I_max_AP is calculated by subtracting the minimum SNR that gives 10% PER from the target RSSI indicated in the TF (based on the highest MCS used for UL HE TB PPDU transmission). A safety margin (set in the AP, not exceeding 5 dB) is also included.
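For illustration, the PSR opportunity check described above can be sketched as follows (all quantities in dBm/dB; the example numbers are illustrative, not taken from the specification):

```python
# A sketch of the PSR value and the opportunist's transmit power check.
def psr_value(tx_pwr_ap: float, i_max_ap: float) -> float:
    return tx_pwr_ap + i_max_ap   # PSR advertised by the transmission holder

def psr_opportunity_allowed(intended_tx_pwr: float, psr: float, rpl: float) -> bool:
    # The opportunist may transmit for the TB PPDU duration only if its
    # intended transmit power is less than PSR minus the received power level.
    return intended_tx_pwr < psr - rpl

psr = psr_value(tx_pwr_ap=20.0, i_max_ap=-60.0)
print(psr_opportunity_allowed(intended_tx_pwr=10.0, psr=psr, rpl=-70.0))  # True
```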
2. Trigger Frame and SR

FIG. 15 shows an operation according to UL-MU. As shown, a transmitting STA (e.g., AP) may perform channel access through contending (i.e., a backoff operation) and transmit a trigger frame 1030. That is, the transmitting STA (e.g., AP) may transmit a PPDU including a trigger frame 1030. When a PPDU including a trigger frame is received, a TB (trigger-based) PPDU is transmitted after a delay equal to SIFS. The TB PPDUs 1041 and 1042 may be transmitted in the same time zone and transmitted from a plurality of STAs (e.g., user STAs) for which AIDs are indicated in the trigger frame 1030. The ACK frame 1050 for the TB PPDU may be implemented in various forms. Specific characteristics of the trigger frame are described with reference to FIGS. 16 to 19. Even when UL-MU communication is used, an orthogonal frequency division multiple access (OFDMA) technique or a MU MIMO technique may be used, and the OFDMA and MU MIMO techniques may be used simultaneously.

FIG. 16 shows an example of a common information field of a trigger frame. FIG. 17 shows another example of a common information field of a trigger frame. FIG. 16 shows an HE variant of a common information field, and FIG. 17 shows an EHT variant of a common information field. That is, the trigger frame may include a common information field corresponding to the HE variant and/or a common information field corresponding to the EHT variant.

FIG. 18 shows a format of a UL Spatial Reuse subfield. Referring to FIGS. 16 and 17, when the trigger frame requests the HE TB PPDU, the UL Spatial Reuse subfield of the common information field delivers a value to be included in the Spatial Reuse field in the HE-SIG-A field of the requested HE TB PPDU. In the UL Spatial Reuse subfield, each Spatial Reuse n subfield (1<=n<=4) is set to the same value as the corresponding subfield in the HE-SIG-A field of the HE TB PPDU.

The Spatial Reuse 1, Spatial Reuse 2, Spatial Reuse 3, and Spatial Reuse 4 fields included in the HE-SIG-A field of the HE TB PPDU are defined as follows. Each Spatial Reuse field consists of 4 bits. Each Spatial Reuse field included in the HE-SIG-A field of the HE TB PPDU indicates whether a specific spatial reuse mode is allowed in a subband of the PPDU while the PPDU is being transmitted, and, when PSR spatial reuse is allowed, indicates a value used to determine the limit on the transmission power of a Parameterized Spatial Reuse Transmission (PSRT) PPDU.

First, if the Bandwidth field indicates 20 MHz, 40 MHz or 80 MHz, the Spatial Reuse 1 field is applied to the first 20 MHz subband. If the Bandwidth field indicates 160/80+80 MHz, the Spatial Reuse 1 field is applied to the first 40 MHz subband of the 160 MHz operating band. The Spatial Reuse 1 field is set to one of the Spatial Reuse field encoding values for the HE TB PPDU as shown in Table 3 below. The Spatial Reuse 1 field refers to the first value in the TXVECTOR parameter SPATIAL_REUSE when present.

Second, if the Bandwidth field indicates 40 MHz or 80 MHz, the Spatial Reuse 2 field is applied to the second 20 MHz subband. If the channel width in which the STA operates is 20 MHz, the Spatial Reuse 2 field is set to the same value as the Spatial Reuse 1 field. If the channel width in which the STA operates is 40 MHz in the 2.4 GHz band, the Spatial Reuse 2 field is set to the same value as the Spatial Reuse 1 field. If the Bandwidth field indicates 160/80+80 MHz, the Spatial Reuse 2 field is applied to the second 40 MHz subband of the 160 MHz operating band. The Spatial Reuse 2 field is set to one of the Spatial Reuse field encoding values for the HE TB PPDU as shown in Table 3 below. The Spatial Reuse 2 field refers to the second value in the TXVECTOR parameter SPATIAL_REUSE when present.

Third, if the Bandwidth field indicates 80 MHz, the Spatial Reuse 3 field is applied to the third 20 MHz subband. If the channel width in which the STA operates is 20 MHz or 40 MHz, the Spatial Reuse 3 field is set to the same value as the Spatial Reuse 1 field. If the Bandwidth field indicates 160/80+80 MHz, the Spatial Reuse 3 field is applied to the third 40 MHz subband of the 160 MHz operating band. If the channel width in which the STA operates is 80+80 MHz, the Spatial Reuse 3 field is set to the same value as the Spatial Reuse 1 field. The Spatial Reuse 3 field is set to one of the Spatial Reuse field encoding values for the HE TB PPDU as shown in Table 3 below. The Spatial Reuse 3 field refers to the third value in the TXVECTOR parameter SPATIAL_REUSE when present.

Fourth, if the Bandwidth field indicates 80 MHz, the Spatial Reuse 4 field is applied to the fourth 20 MHz subband. If the channel width in which the STA operates is 20 MHz, the Spatial Reuse 4 field is set to the same value as the Spatial Reuse 1 field. If the channel width in which the STA operates is 40 MHz, the Spatial Reuse 4 field is set to the same value as the Spatial Reuse 2 field. If the Bandwidth field indicates 160/80+80 MHz, the Spatial Reuse 4 field is applied to the fourth 40 MHz subband of the 160 MHz operating band. If the channel width in which the STA operates is 80+80 MHz, the Spatial Reuse 4 field is set to the same value as the Spatial Reuse 2 field. The Spatial Reuse 4 field is set to one of the Spatial Reuse field encoding values for the HE TB PPDU as shown in Table 3 below. The Spatial Reuse 4 field refers to the fourth value in the TXVECTOR parameter SPATIAL_REUSE when present.

TABLE 3

Value | Meaning
0 | PSR_DISALLOW
1 | PSR = −80 dBm
2 | PSR = −74 dBm
3 | PSR = −68 dBm
4 | PSR = −62 dBm
5 | PSR = −56 dBm
6 | PSR = −50 dBm
7 | PSR = −47 dBm
8 | PSR = −44 dBm
9 | PSR = −41 dBm
10 | PSR = −38 dBm
11 | PSR = −35 dBm
12 | PSR = −32 dBm
13 | PSR = −29 dBm
14 | PSR ≥ −26 dBm
15 | PSR_AND_NON_SRG_OBSS_PD_PROHIBITED

The four Spatial Reuse 1, 2, 3, and 4 fields are arranged in order of frequency as follows.

In the case of 20 MHz, one Spatial Reuse field corresponds to the entire 20 MHz (the other three Spatial Reuse fields carry the same value). The Spatial Reuse field applies only to the 20 MHz used for transmission.

In the case of 40 MHz, there are two distinct Spatial Reuse values: the Spatial Reuse 3 field has the same value as the Spatial Reuse 1 field, and the Spatial Reuse 4 field has the same value as the Spatial Reuse 2 field. Each pair of Spatial Reuse fields applies only to the corresponding 20 MHz used for transmission.

In the case of 80 MHz, there are four Spatial Reuse fields, one for each 20 MHz subchannel. In the case of OFDMA transmission of a given BW, each Spatial Reuse field corresponding to a 20 MHz subband is also applicable to the 242-tone RU aligned closest in frequency to that 20 MHz subband (in the tone plan for that BW). The correspondence from a Spatial Reuse field to a 242-tone RU also applies to all RUs within the 242-tone RU. The above also implies that a 20 MHz OBSS STA uses the Spatial Reuse field corresponding to its own 20 MHz channel, a 40 MHz OBSS STA located in the lower frequency half of the 80 MHz BSS uses the values of the Spatial Reuse 1 and Spatial Reuse 2 fields, and a 40 MHz OBSS STA located in the upper frequency half of the 80 MHz BSS uses the values of the Spatial Reuse 3 and Spatial Reuse 4 fields.

For 160 MHz and 80+80 MHz, there are four Spatial Reuse fields, one for each 40 MHz subchannel. In the case of OFDMA transmission of a given BW, each Spatial Reuse field corresponding to a 40 MHz subband can also be applied to the 484-tone RU aligned closest in frequency to that 40 MHz subband. The correspondence from a Spatial Reuse field to a 484-tone RU also applies to all RUs within the 484-tone RU.
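The per-subband mapping above determines which Spatial Reuse field an OBSS STA should consult for its own channel. The following is a minimal sketch of that lookup, assuming illustrative names; the subchannel index counts 20 MHz subchannels from the lowest frequency of the BSS channel.

```python
# A minimal sketch of how an OBSS STA could select the HE-SIG-A Spatial
# Reuse field(s) covering its own channel, per the per-subband mapping
# above: one field per 20 MHz subchannel for 20/40/80 MHz, and one field
# per 40 MHz subchannel for 160/80+80 MHz. Names are illustrative.


def applicable_sr_fields(bss_bw_mhz: int, subchannel_index: int,
                         obss_bw_mhz: int = 20) -> list[int]:
    """Return the 1-based indices of the Spatial Reuse fields that apply
    to an OBSS STA occupying `obss_bw_mhz` starting at `subchannel_index`."""
    if bss_bw_mhz in (20, 40, 80):
        subband_mhz = 20          # one field per 20 MHz subchannel
    elif bss_bw_mhz == 160:
        subband_mhz = 40          # one field per 40 MHz subchannel
    else:
        raise ValueError("HE TB PPDU bandwidth must be 20/40/80/160 MHz")

    first = subchannel_index * 20 // subband_mhz + 1
    last = (subchannel_index * 20 + obss_bw_mhz - 1) // subband_mhz + 1
    return list(range(first, min(last, 4) + 1))


if __name__ == "__main__":
    # A 20 MHz OBSS STA on the third 20 MHz of an 80 MHz BSS uses field 3;
    # a 40 MHz OBSS STA on the upper half of an 80 MHz BSS uses fields 3 and 4.
    print(applicable_sr_fields(80, 2))                  # [3]
    print(applicable_sr_fields(80, 2, obss_bw_mhz=40))  # [3, 4]
```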
The table below shows an example of encoding the Spatial Reuse field for the HE SU PPDU, HE ER SU PPDU, and HE MU PPDU.

TABLE 4

Value | Meaning
0 | PSR_DISALLOW
1-12 | Reserved
13 | SR_RESTRICTED
14 | SR_DELAYED
15 | PSR_AND_NON_SRG_OBSS_PD_PROHIBITED

Returning to FIG. 18, when the trigger frame requests the EHT TB PPDU, each Spatial Reuse n subfield (1<=n<=4) of the Common Info field is determined based on either the Spatial Reuse 1 subfield or the Spatial Reuse 2 subfield of the Special User Info field.

FIG. 19 shows an example of a Special User Info field format. If the Special User Info field is included in the trigger frame, the Special User Info Field Present subfield of the EHT variant of the Common Info field is set to 0; otherwise, it is set to 1. The Special User Info field is identified by an AID12 value of 2007 and is optionally present in a trigger frame generated by the EHT AP. The Special User Info field, if present, is located immediately after the Common Info field of the trigger frame, conveys the nonderived subfields of the U-SIG field of the requested EHT TB PPDU, and the Special User Info Field Present subfield of the Common Info field is set to 0. The existence of the Special User Info field in the trigger frame is indicated by B55 of the Common Info field in the trigger frame. B55 is set to 1 to indicate that there is no Special User Info field in the trigger frame, and is set to 0 to indicate that the Special User Info field exists in the trigger frame right after the Common Info field.

The Spatial Reuse n subfield (1<=n<=2) of FIG. 19 is set to the same value as the corresponding Spatial Reuse subfield in the U-SIG field of the EHT TB PPDU. The Spatial Reuse 1 and Spatial Reuse 2 fields included in the U-SIG field of the EHT TB PPDU are defined as follows. Each Spatial Reuse field consists of 4 bits. Each Spatial Reuse field included in the U-SIG field of the EHT TB PPDU indicates whether a specific spatial reuse mode is allowed in a subband of the PPDU while the PPDU is being transmitted, and, when PSR spatial reuse is allowed, indicates a value used to determine the transmission power limit of the PSRT PPDU.

First, if the Bandwidth field indicates 20 MHz or 40 MHz, the Spatial Reuse 1 field is applied to the first 20 MHz subband.
If the Bandwidth field indicates 80 MHz, the Spatial Reuse 1 field is applied to each 20 MHz subchannel of the first 40 MHz subband within the 80 MHz operating band. If the Bandwidth field indicates 160 MHz, the Spatial Reuse 1 field is applied to each 20 MHz subchannel of the first 80 MHz subband within the 160 MHz operating band. If the Bandwidth field indicates 320 MHz-1 or 320 MHz-2, the Spatial Reuse 1 field is applied to each 20 MHz subchannel of the first 160 MHz subband within the 320 MHz operating band. The Spatial Reuse 1 field is set to the TXVECTOR parameter SPATIAL_REUSE(1), which carries one of the Spatial Reuse field encoding values for the HE TB PPDU as shown in Table 3 above.

Second, if the Bandwidth field indicates 20 MHz, the Spatial Reuse 2 field is set to the same value as the Spatial Reuse 1 field, and is disregarded if dot11EHTBaseLineFeaturesImplementedOnly is true. If the Bandwidth field indicates 40 MHz, the Spatial Reuse 2 field is applied to the second 20 MHz subband. When operating in the 2.4 GHz band, the Spatial Reuse 2 field is set to the same value as the Spatial Reuse 1 field. If the Bandwidth field indicates 80 MHz, the Spatial Reuse 2 field is applied to each 20 MHz subchannel of the second 40 MHz subband within the 80 MHz operating band. If the Bandwidth field indicates 160 MHz, the Spatial Reuse 2 field is applied to each 20 MHz subchannel of the second 80 MHz subband within the 160 MHz operating band. If the Bandwidth field indicates 320 MHz-1 or 320 MHz-2, the Spatial Reuse 2 field is applied to each 20 MHz subchannel of the second 160 MHz subband within the 320 MHz operating band. The Spatial Reuse 2 field is set to the TXVECTOR parameter SPATIAL_REUSE(2), which carries one of the Spatial Reuse field encoding values for the HE TB PPDU as shown in Table 3 above.
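The EHT rules just described reduce to a simple half-band test: Spatial Reuse 1 covers the lower half of the indicated bandwidth and Spatial Reuse 2 the upper half (with Spatial Reuse 2 mirroring Spatial Reuse 1 at 20 MHz). The following is a minimal sketch of that resolution, with illustrative names.

```python
# A minimal sketch resolving which U-SIG Spatial Reuse field of an EHT TB
# PPDU applies to a given 20 MHz subchannel, per the subband rules above.
# `bw_mhz` is the Bandwidth field value in MHz; 320 stands for both
# 320 MHz-1 and 320 MHz-2. Names are illustrative assumptions.


def eht_sr_field_for_subchannel(bw_mhz: int, subchannel_index: int) -> int:
    """Return 1 or 2: the U-SIG Spatial Reuse field covering the 20 MHz
    subchannel with the given 0-based index (lowest frequency first)."""
    if bw_mhz == 20:
        return 1                      # Spatial Reuse 2 mirrors field 1 at 20 MHz
    half_subchannels = bw_mhz // 40   # 20 MHz subchannels per half band
    return 1 if subchannel_index < half_subchannels else 2


if __name__ == "__main__":
    print(eht_sr_field_for_subchannel(80, 1))   # 1: within the first 40 MHz
    print(eht_sr_field_for_subchannel(320, 9))  # 2: within the second 160 MHz
```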
3. Embodiments Applicable to this Specification

In the WLAN 802.11be system, transmission of an increased number of streams is considered by using a wider band than the existing 802.11ax or by using more antennas to increase peak throughput. In addition, the present specification also considers a method of aggregating and using various bands/links. Meanwhile, in order to reduce interference between BSSs, spatial reuse can be used in the same way as in 802.11ax, and the present specification proposes a configuration of the spatial reuse field of an EHT TB PPDU.

The EHT trigger frame reuses the structure of the HE trigger frame for backward compatibility with 802.11ax; instead, the EHT Common Info field and EHT User Info field for the EHT TB PPDU can be configured. The Special User Info field is a User Info field that does not deliver user-specific information but delivers extended common information that is not provided in the Common Info field. When the Special User Info field is included in the trigger frame, the Special User Info Field Flag subfield of the EHT variant of the Common Info field is set to 0, and when the Special User Info field is not included in the trigger frame, the Special User Info Field Flag subfield is set to 1. The Special User Info field is identified by an AID12 value of 2007 and is optionally present in a trigger frame generated by the EHT AP. If the Special User Info field exists, it is located immediately after the Common Info field of the trigger frame and conveys the nonderived subfields of the U-SIG field of the requested EHT TB PPDU, and the Special User Info Field Flag subfield of the Common Info field is set to 0.

The existence of the Special User Info field in the trigger frame is indicated by B55 of the Common Info field in the trigger frame. B55 is set to 1 to indicate that there is no Special User Info field in the trigger frame, and is set to 0 to indicate that the Special User Info field exists in the trigger frame immediately after the Common Info field.

Referring to FIG. 19, in the Special User Info field, the AID12 subfield consists of 12 bits, the PHY Version ID subfield consists of 3 bits, the UL Bandwidth Extension subfield consists of 2 bits, the Spatial Reuse 1 subfield consists of 4 bits, the Spatial Reuse 2 subfield consists of 4 bits, the U-SIG Disregard And Validate subfield consists of 12 bits, and the Reserved subfield consists of 3 bits.

The PHY Version ID subfield indicates EHT or a Wi-Fi version after EHT. For EHT, the PHY Version ID subfield is set to 0.

The UL Bandwidth Extension subfield, together with the UL BW subfield of the Common Info field, indicates the bandwidth of the TB PPDU requested from the addressed EHT STA (i.e., the bandwidth in the U-SIG field of the EHT TB PPDU). The UL Bandwidth Extension subfield is defined in the table below.

TABLE 5

UL BW | Bandwidth for HE TB PPDU (MHz) | UL Bandwidth Extension | Bandwidth for EHT TB PPDU (MHz)
0 | 20 | 0 | 20
0 | 20 | 1 | Reserved
0 | 20 | 2 | Reserved
0 | 20 | 3 | Reserved
1 | 40 | 0 | 40
1 | 40 | 1 | Reserved
1 | 40 | 2 | Reserved
1 | 40 | 3 | Reserved
2 | 80 | 0 | 80
2 | 80 | 1 | Reserved
2 | 80 | 2 | Reserved
2 | 80 | 3 | Reserved
3 | 160 | 0 | Reserved
3 | 160 | 1 | 160
3 | 160 | 2 | 320-1
3 | 160 | 3 | 320-2

The following shows an example of the configuration of the UL BW and UL Bandwidth Extension fields when an Aggregated-PPDU (A-PPDU) in which a HE Sub-PPDU and an EHT Sub-PPDU are mixed is triggered.

TABLE 6

UL BW | Bandwidth for HE TB PPDU (MHz) | UL Bandwidth Extension | Bandwidth for EHT TB PPDU (MHz)
0 | 20 | 0 | 20
0 | 20 | 1 | Reserved
0 | 20 | 2 | Reserved
0 | 20 | 3 | Reserved
1 | 40 | 0 | 40
1 | 40 | 1 | Reserved
1 | 40 | 2 | Reserved
1 | 40 | 3 | Reserved
2 | 80 | 0 | 80
2 | 80 | 1 | 160
2 | 80 | 2 | 320-1
2 | 80 | 3 | 320-2
3 | 80 | 0 | 80
3 | 160 | 1 | 160
3 | 160 | 2 | 320-1
3 | 160 | 3 | 320-2

The UL BW and UL Bandwidth Extension fields may be configured in a manner different from the above table.
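The following is a minimal sketch of decoding Table 5: given the two 2-bit subfields, it returns both the bandwidth a HE STA infers from UL BW alone and the EHT TB PPDU bandwidth. The lookup structure and string labels are illustrative; the mapping itself comes from Table 5.

```python
# A minimal sketch decoding the EHT TB PPDU bandwidth from the 2-bit UL BW
# subfield (Common Info field) and the 2-bit UL Bandwidth Extension subfield
# (Special User Info field), following Table 5 above. None marks reserved
# combinations; names and labels are illustrative assumptions.

HE_BW_MHZ = {0: 20, 1: 40, 2: 80, 3: 160}  # what a HE STA infers from UL BW

# (UL BW, UL Bandwidth Extension) -> EHT TB PPDU bandwidth label
EHT_BW_TABLE = {
    (0, 0): "20 MHz",
    (1, 0): "40 MHz",
    (2, 0): "80 MHz",
    (3, 1): "160 MHz",
    (3, 2): "320 MHz-1",
    (3, 3): "320 MHz-2",
}


def decode_ul_bandwidth(ul_bw: int, ul_bw_ext: int):
    """Return (HE-visible bandwidth in MHz, EHT TB PPDU bandwidth label)."""
    he_bw = HE_BW_MHZ[ul_bw]
    eht_bw = EHT_BW_TABLE.get((ul_bw, ul_bw_ext))  # None if reserved
    return he_bw, eht_bw


if __name__ == "__main__":
    # A 320 MHz-1 EHT TB PPDU is announced as 160 MHz to HE STAs.
    print(decode_ul_bandwidth(3, 2))  # (160, '320 MHz-1')
```

Note how the table realizes the point made later in Section 3.2: a HE STA observing UL BW = 3 determines 160 MHz even when the EHT TB PPDU actually spans 320 MHz.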
The Spatial Reuse 1 and Spatial Reuse 2 subfields are set to the same values as the Spatial Reuse 1 and Spatial Reuse 2 subfields of the U-SIG field of the EHT TB PPDU, which are values for specific channels according to the BW and will be described in more detail below. The U-SIG Disregard And Validate subfield is set to the value that is copied as it is into the Disregard and Validate fields in the U-SIG of the EHT TB PPDU. The 3 bits of the Reserved subfield can be reserved or used for other purposes.

FIG. 20 shows an example of an EHT User Info field format. Referring to FIG. 20, the PS160 field, together with the RU Allocation field, indicates the RU or multi-resource unit (MRU) allocated to an STA.

FIG. 10 shows the structure of a representative EHT PPDU. It can be used for SU and MU transmission, and the EHT-SIG may not be included when a TB PPDU is transmitted. The Universal-SIG (U-SIG) includes a version independent field and a version dependent field. The EHT-SIG can carry various common information and user specific information. The bandwidth can be indicated using the Bandwidth field, which can be included in the version independent part of U-SIG. The corresponding field may consist of 3 bits and may contain only bandwidth information without including information on the preamble puncturing pattern. In addition, puncturing information may be carried in other fields of U-SIG or in specific fields of EHT-SIG. In addition, the version independent field may include a 3-bit version identifier indicating 802.11be or a Wi-Fi version after 802.11be, a 1-bit DL/UL field, BSS color, TXOP duration, etc., and the version dependent field may include information such as the PPDU type.

In addition, U-SIG is jointly encoded over two symbols and consists of 52 data tones and 4 pilot tones per 20 MHz. It is modulated in the same way as HE-SIG-A, that is, BPSK with a 1/2 code rate. The EHT-SIG can be encoded with a variable MCS and, as in the existing 802.11ax, may have a 1 2 1 2 . . . content channel structure in units of 20 MHz (other structures are possible, for example 1 2 3 4 . . . or 1 2 1 2 3 4 3 4 . . . ). It may also be configured in units of 80 MHz, and in a bandwidth of 80 MHz or higher, the EHT-SIG may be duplicated in units of 80 MHz.

Spatial Reuse can be used to reduce interference with an OBSS. This specification particularly proposes a configuration of the spatial reuse field in the EHT TB PPDU. In the EHT TB PPDU, the spatial reuse field may be located in the U-SIG version dependent field and may be composed of 4 fields as in 802.11ax, and each field may use 4 bits. The meaning of each entry expressed by each 4 bits may be the same as that described above or may have a different meaning. Alternatively, each field may use a different number of bits. Also, in the EHT TB PPDU, the spatial reuse field may consist of 2 fields instead of 4 fields. The following is the configuration of a representative U-SIG field of the EHT TB PPDU.

TABLE 7

U-SIG-1, B0-B2, PHY Version Identifier (3 bits): Differentiates between different PHY clauses. Set to 0 for EHT. Values 1-7 are Validate if dot11EHTBaseLineFeaturesImplementedOnly equals true.

U-SIG-1, B3-B5, BW (3 bits): Set to 0 for 20 MHz, 1 for 40 MHz, 2 for 80 MHz, 3 for 160 MHz, 4 for 320 MHz-1, and 5 for 320 MHz-2.

U-SIG-1, B6, UL/DL (1 bit): Set to 1 to indicate that the PPDU is addressed to the AP.

U-SIG-1, B7-B12, BSS Color (6 bits): An identifier of the BSS. See the TXVECTOR parameter BSS_COLOR.

U-SIG-1, B13-B19, TXOP (7 bits): Set to 127 to indicate no duration information if the TXVECTOR parameter TXOP_DURATION is UNSPECIFIED. Set to a value less than 127 to indicate duration information for NAV setting and protection of the TXOP as follows: if the TXVECTOR parameter TXOP_DURATION is less than 512, then B13 is set to 0 and B14-B19 are set to floor(TXOP_DURATION/8); otherwise, B13 is set to 1 and B14-B19 are set to floor((TXOP_DURATION − 512)/128).

U-SIG-1, B20-B25, Disregard (6 bits): Set to the value indicated in B25-B30 of the U-SIG Disregard And Validate subfield in the Special User Info field in the Trigger frame, and Disregard if dot11EHTBaseLineFeaturesImplementedOnly equals true. See Table 9-29j4 (Mapping from Special User Info field to U-SIG-1 and U-SIG-2 fields in the EHT TB PPDU).

U-SIG-2, B0-B1, PPDU Type And Compressed Mode (2 bits): Set to a value of 0 for a TB PPDU. For further clarification on all values of this field, refer to the combination of the UL/DL and PPDU Type And Compression Mode fields. Undefined values of this field are Validate if dot11EHTBaseLineFeaturesImplementedOnly equals true.

U-SIG-2, B2, Validate (1 bit): Set to the value indicated in B31 of the U-SIG Disregard And Validate subfield in the Special User Info field in the Trigger frame, and Validate if dot11EHTBaseLineFeaturesImplementedOnly equals true.

U-SIG-2, B3-B6, Spatial Reuse 1 (4 bits): Indicates whether or not specific spatial reuse modes are allowed in a subband of the PPDU during the transmission of this PPDU, and if PSR spatial reuse is allowed, indicates a value that is used to determine a limit on the transmit power of the PSRT PPDU. If the Bandwidth field indicates 20 MHz or 40 MHz, this field applies to the first 20 MHz subband. If the Bandwidth field indicates 80 MHz, this field applies to each 20 MHz subchannel of the first 40 MHz subband within the 80 MHz operating band.
If the Bandwidth field indicates 160 MHz, this field applies to each 20 MHz subchannel of the first 80 MHz subband within the 160 MHz operating band. If the Bandwidth field indicates 320 MHz-1 or 320 MHz-2, this field applies to each 20 MHz subchannel of the first 160 MHz subband within the 320 MHz operating band.

U-SIG-2, B7-B10, Spatial Reuse 2 (4 bits): Indicates whether or not specific spatial reuse modes are allowed in a subband of the PPDU during the transmission of this PPDU, and if PSR spatial reuse is allowed, indicates a value that is used to determine a limit on the transmit power of the PSRT PPDU. If the Bandwidth field indicates 20 MHz, this field is set to the same value as the Spatial Reuse 1 field, and Disregard if dot11EHTBaseLineFeaturesImplementedOnly equals true. If the Bandwidth field indicates 40 MHz, this field applies to the second 20 MHz subband. If operating in the 2.4 GHz band, this field is set to the same value as the Spatial Reuse 1 field. If the Bandwidth field indicates 80 MHz, this field applies to each 20 MHz subchannel of the second 40 MHz subband within the 80 MHz operating band. If the Bandwidth field indicates 160 MHz, this field applies to each 20 MHz subchannel of the second 80 MHz subband within the 160 MHz operating band. If the Bandwidth field indicates 320 MHz-1 or 320 MHz-2, this field applies to each 20 MHz subchannel of the second 160 MHz subband within the 320 MHz operating band.

U-SIG-2, B11-B15, Disregard (5 bits): Set to the value indicated in B32-B36 of the U-SIG Disregard And Validate subfield in the Special User Info field in the Trigger frame, and Disregard if dot11EHTBaseLineFeaturesImplementedOnly equals true.

U-SIG-2, B16-B19, CRC (4 bits): CRC for bits 0-41 of the U-SIG field. Bits 0-41 of the U-SIG field correspond to bits 0-25 of the U-SIG-1 field followed by bits 0-15 of the U-SIG-2 field.

U-SIG-2, B20-B25, Tail (6 bits): Used to terminate the trellis of the convolutional decoder. Set to 0.

The total number of bits in U-SIG is 52.

The above U-SIG field can be configured by copying the corresponding fields of the trigger frame as they are. This specification proposes a method of configuring the 4 Spatial Reuse fields of the Common Info field and the 2 Spatial Reuse fields of the EHT Common Info field (or Special Info field) considering the cases where the trigger frame triggers a HE TB PPDU, an EHT TB PPDU, or a TB A-PPDU. Here, it is assumed that the trigger frame is an EHT trigger frame capable of triggering all of the HE TB PPDU, EHT TB PPDU, and TB A-PPDU. In addition, it is assumed that the Common Info field of the trigger frame is a HE/EHT variant Common Info field, and the EHT Common Info field of the trigger frame is assumed to be a Special Info field.

The structure of the EHT trigger frame, HE TB PPDU, and EHT TB PPDU is as follows. The EHT trigger frame consists of a HE/EHT variant Common Info field, (Special User Info field,) and a HE/EHT variant User Info field. The EHT variant Common Info field includes 4 Spatial Reuse fields; the 4 Spatial Reuse fields are applied to each of 4 subchannels and are defined for SR (Spatial Reuse) of the OBSS HE STA. The Special User Info field exists when AID=2007 and includes two Spatial Reuse fields; the two Spatial Reuse fields are duplicated into the two Spatial Reuse fields in the U-SIG of the EHT TB PPDU and are defined for the SR of the OBSS EHT STA. As described above, the bandwidth of the EHT TB PPDU is indicated through the 2-bit UL BW field in the EHT variant Common Info field and the 2-bit UL Bandwidth Extension subfield in the Special User Info field.
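As a concrete illustration of the Table 7 layout, the following minimal sketch packs the U-SIG-2 word, copying the Spatial Reuse 1/2 values that originate in the Special User Info field. Treating B0 as the least significant bit is an assumption made only for this sketch, and the CRC (which covers bits 0-41 of U-SIG) is passed in precomputed rather than calculated here.

```python
# A minimal sketch packing the 26-bit U-SIG-2 content of an EHT TB PPDU
# per the Table 7 layout above. Bit order (B0 = LSB) is an assumption for
# illustration; the CRC computation is out of scope and supplied by the
# caller. Names are illustrative.


def pack_usig2(ppdu_type_and_mode: int, validate: int,
               spatial_reuse_1: int, spatial_reuse_2: int,
               disregard: int, crc: int) -> int:
    """Assemble U-SIG-2: B0-B1 PPDU Type And Compressed Mode, B2 Validate,
    B3-B6 Spatial Reuse 1, B7-B10 Spatial Reuse 2, B11-B15 Disregard,
    B16-B19 CRC, B20-B25 Tail (all zero)."""
    assert 0 <= spatial_reuse_1 < 16 and 0 <= spatial_reuse_2 < 16
    word = (ppdu_type_and_mode & 0b11)
    word |= (validate & 0b1) << 2
    word |= (spatial_reuse_1 & 0b1111) << 3
    word |= (spatial_reuse_2 & 0b1111) << 7
    word |= (disregard & 0b11111) << 11
    word |= (crc & 0b1111) << 16
    # B20-B25: tail bits terminating the convolutional code remain zero
    return word


if __name__ == "__main__":
    # A TB PPDU (type 0) carrying PSR values encoded as 5 and 9.
    word = pack_usig2(0, 1, 5, 9, 0, crc=0)
    print(f"{word:026b}")
```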
Among the UL HE-SIG-A2 Reserved subfields in the HE variant Common Info field, B54 and B55 are used as the HE/EHT P160 and Special User Info Field Flag subfields in the EHT variant Common Info field, respectively (see FIGS. 16 and 17). The HE/EHT P160 subfield indicates whether the primary 160 MHz carries a HE TB PPDU (set to 1) or an EHT TB PPDU (set to 0). The Special User Info Field Flag subfield indicates whether the Special User Info field exists (set to 0) or not (set to 1). That is, B54 and B55 of the UL HE-SIG-A2 Reserved subfields were originally set to 11, but when the EHT trigger frame triggers the EHT TB PPDU, B54 and B55 are set to 00.

The HE TB PPDU includes 4 Spatial Reuse fields in HE-SIG-A. The EHT TB PPDU includes two Spatial Reuse fields in the U-SIG. The two Spatial Reuse fields included in the U-SIG are duplicated from the values of the two Spatial Reuse fields of the Special User Info field.

3.1. When Trigger Frame Triggers HE TB PPDU Only

The trigger frame may be configured simply, like an existing HE trigger frame, without the EHT Common Info field and the EHT User Info field. In this case, the UL BW indicates the BW of the HE TB PPDU; accordingly, the 4 Spatial Reuse fields can also be set in the same way as in the existing 802.11ax, and they can be used to configure the Spatial Reuse fields in HE-SIG-A when the HE TB PPDU is transmitted. That is, the 4 Spatial Reuse fields in the Common Info field and the 4 Spatial Reuse fields in the HE TB PPDU may be set as shown in Appendix 1 described later.

3.2. When Trigger Frame Triggers Only EHT TB PPDU

When the trigger frame triggers only the EHT TB PPDU, the UL BW of the Common Info field may be set to a specific value to indicate the BW of the EHT TB PPDU, so that the OBSS HE STA and the non-associated HE STA can use the indicated BW to determine the BW of the TB PPDU. (It may vary depending on the UL BW configuration, but in the UL BW configuration example above, the same BW is determined when a 20/40/80/160 MHz EHT TB PPDU is triggered; if a 320 MHz EHT TB PPDU is triggered, the UL BW is determined as 160 MHz.) Therefore, since the OBSS HE STA and the non-associated HE STA can perform Spatial Reuse using the 4 Spatial Reuse fields of the Common Info field, the four Spatial Reuse fields in the Common Info field of the trigger frame need to be set to specific values.

In the example of the UL BW and UL Bandwidth Extension subfields above, when the BW indicated by the UL BW is 20/40/80/160 MHz, the 4 Spatial Reuse fields in the Common Info field can be set like those of the existing 802.11ax trigger frame (even outside the above example, this covers the cases where the 20/40/80/160 MHz EHT TB PPDU is triggered and the BW indicated in the UL BW is the same). The 4 Spatial Reuse fields in the Common Info field can be set as in Appendix 1 described later. Basically, this may be a value unrelated to the configuration of the Spatial Reuse field in the U-SIG when the EHT TB PPDU is transmitted, but as shown in Appendix 3 described later, the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the four Spatial Reuse fields in the Common Info field. In this case, the two Spatial Reuse fields in the EHT Common Info field (Special User Info field) may be set identically (in other words, the method of configuring the Spatial Reuse field in the U-SIG of the EHT TB PPDU in Appendix 3 is applied as-is to the composition of the two Spatial Reuse fields in the EHT Common Info field, and this value may be used for setting the fields) or reserved.
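Appendix 3 is referenced but not reproduced in this excerpt, so the following is only a minimal sketch of one plausible reading of "configuring the Spatial Reuse field in the U-SIG using the four Spatial Reuse fields in the Common Info field" for BWs of 160 MHz or less: each U-SIG field covers half of the indicated bandwidth, so it is reduced from the pair of Common Info fields covering that half. Both the per-half pairing and the choice of reduction are assumptions; the text elsewhere allows taking either the largest or the smallest value.

```python
# A minimal sketch deriving the two U-SIG Spatial Reuse values of an EHT TB
# PPDU from the four Common Info Spatial Reuse fields. Pairing fields 1-2
# with the lower half and fields 3-4 with the upper half is an assumption
# consistent with the subband mappings above; min() keeps the more
# restrictive value, but max() is equally permitted by the text.


def usig_sr_from_common_info(sr_fields: list[int], pick=min) -> tuple[int, int]:
    """Map Common Info Spatial Reuse fields 1-4 to U-SIG Spatial Reuse 1/2."""
    assert len(sr_fields) == 4
    return pick(sr_fields[0], sr_fields[1]), pick(sr_fields[2], sr_fields[3])


if __name__ == "__main__":
    # PSR encodings 5, 7, 6, 9 reduce to (5, 6) when the smaller value is kept.
    print(usig_sr_from_common_info([5, 7, 6, 9]))  # (5, 6)
```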
The four Spatial Reuse fields in the Common Info field can be set as follows according to the BW (20/40/80/160 MHz) indicated in the UL BW in the example of the UL BW and UL Bandwidth Extension subfields above, when a 20/40/80/160 MHz EHT TB PPDU is triggered (even outside the above example, this covers the cases where the 20/40/80/160 MHz EHT TB PPDU is triggered and the BW indicated in the UL BW is the same). The settings of the two Spatial Reuse fields in the EHT Common Info field are described in Appendix 3 and can be used as follows.

When the 20 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to the same value, and one of these two values can be duplicated so that the same value is set in all four fields in the Common Info field.

When the 40 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to the spatial reuse values corresponding to each 20 MHz. These values can be copied as they are into the fields, among the 4 Spatial Reuse fields in the Common Info field, that correspond to the same 20 MHz. In other words, the value of the first of the two Spatial Reuse fields in the EHT Common Info field can be duplicated into the first and third of the four Spatial Reuse fields in the Common Info field, and the value of the second of the two Spatial Reuse fields in the EHT Common Info field can be duplicated into the second and fourth of the four Spatial Reuse fields in the Common Info field.

When the 80 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to the spatial reuse values corresponding to each 40 MHz, and these values are duplicated as they are: the first two of the four Spatial Reuse fields in the Common Info field can be set to the first value of the two Spatial Reuse fields in the EHT Common Info field, and the last two of the four Spatial Reuse fields in the Common Info field can be set to the last value of the two Spatial Reuse fields in the EHT Common Info field. In addition, in order to correct the value according to the BW difference (or according to the normalization difference), a specific dB value can be added to or subtracted from the meaning of the value (i.e., the PSR value in dBm), which is then changed to the encoding corresponding to the maximum dBm value that is smaller than or equal to the corrected value. In this case, it may be desirable to compensate by subtracting 6 dB (or 20 log 2) in particular. Even if the channel size corresponding to each spatial reuse field value is different, if normalization is applied to the same channel size (for example, normalization per 20 MHz), no correction is needed when copying and setting, and the same holds in the various situations below.

When the 160 MHz EHT TB PPDU is triggered, the 2 Spatial Reuse fields in the EHT Common Info field are set to the spatial reuse values corresponding to each 80 MHz, and these values are copied as they are: the first two of the 4 Spatial Reuse fields in the Common Info field can be set to the first value of the two Spatial Reuse fields in the EHT Common Info field, and the last two of the four Spatial Reuse fields in the Common Info field can be set to the last value of the two Spatial Reuse fields in the EHT Common Info field. In addition, in order to correct the value according to the BW difference (or normalization difference), a specific dB value may be added to or subtracted from the meaning (dBm value) of the corresponding value, which is then changed to the encoding corresponding to the maximum dBm value that is smaller than or equal to the corrected value. In this case, it may be desirable to compensate by subtracting 6 dB (or 20 log 2) in particular. However, if the values of the two Spatial Reuse fields in the EHT Common Info field are normalized to a 20 MHz channel and the values of the four Spatial Reuse fields in the Common Info field are simply normalized to the corresponding channel, 40 MHz, it may be desirable to correct by adding 6 dB (or 20 log 2).
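The duplication rules just listed amount to a small mapping per triggered bandwidth. The following is a minimal sketch; the dB correction step mentioned above is handled separately (see the quantization sketch later in this subsection), and the 320 MHz case, which uses selection and comparison rules instead, is intentionally excluded here.

```python
# A minimal sketch of the duplication rules above: the two Spatial Reuse
# values from the EHT Common Info (Special User Info) field are copied into
# the four Spatial Reuse fields of the Common Info field according to the
# triggered EHT TB PPDU bandwidth. Names are illustrative.


def he_sr_fields_from_eht(sr1: int, sr2: int, eht_bw_mhz: int) -> list[int]:
    """Return the four Common Info Spatial Reuse field values."""
    if eht_bw_mhz == 20:
        return [sr1] * 4              # both EHT values are equal at 20 MHz
    if eht_bw_mhz == 40:
        return [sr1, sr2, sr1, sr2]   # per-20 MHz values, repeated per pair
    if eht_bw_mhz in (80, 160):
        return [sr1, sr1, sr2, sr2]   # first/second half of the band
    raise ValueError("the 320 MHz case uses the selection rules described later")


if __name__ == "__main__":
    print(he_sr_fields_from_eht(5, 9, 40))   # [5, 9, 5, 9]
    print(he_sr_fields_from_eht(5, 9, 160))  # [5, 5, 9, 9]
```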
The 4 Spatial Reuse fields in the Common Info field can be set as follows according to the BW (160 MHz) indicated in the UL BW when the 320 MHz EHT TB PPDU is triggered in the example of the UL BW and UL Bandwidth Extension subfields above (even outside the above example, this covers the case where the 320 MHz EHT TB PPDU is triggered and the BW indicated in the UL BW is 160 MHz). It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted at or below 160 MHz.

In this case, the 4 Spatial Reuse fields in the Common Info field may be set as for 160 MHz in Appendix 1 described later. However, the 160 MHz may be the 160 MHz including the channel through which the trigger frame is transmitted. Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to one of the values of the 4 Spatial Reuse fields in the Common Info field, for example, the largest value or the smallest value. In this case, the field, among the 2 Spatial Reuse fields in the EHT Common Info field, corresponding to the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field may be set identically (in other words, it is set to one of the four Spatial Reuse field values, for example the largest or smallest value, and this value may be used to configure the Spatial Reuse field corresponding to that 160 MHz in the U-SIG of the EHT TB PPDU) or reserved. Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 160 MHz other than the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to an appropriate Spatial Reuse value, and this value can be used to set the field, among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU, corresponding to the 160 MHz other than the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field.

The 4 Spatial Reuse fields in the Common Info field can also be set as follows according to the BW (160 MHz) indicated in the UL BW when the 320 MHz EHT TB PPDU is triggered in the example of the UL BW and UL Bandwidth Extension subfields above (even outside the above example, this covers the case where the 320 MHz EHT TB PPDU is triggered and the BW indicated in the UL BW is 160 MHz). It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted at or below 160 MHz.
The settings of the two Spatial Reuse fields in the EHT Common Info field are described in Appendix 3 and can be set using them. That is, when a 320 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are each set to a spatial reuse value corresponding to 160 MHz. Among the two Spatial Reuse fields in the EHT Common Info field, the value corresponding to the 160 MHz including the channel through which the trigger frame is transmitted can be copied, and the corresponding value can be set identically in the four fields in the Common Info field. In addition, in order to correct the value according to the difference in BW (or the difference in normalization), a specific dB value can be added to or subtracted from the meaning (dBm value) of the corresponding value, which is then changed to the encoding corresponding to the maximum dBm value smaller than or equal to the corrected value (the four values are set the same). In this case, it may be desirable to compensate by subtracting 12 dB (or 20 log 4) in particular. However, if the values of the 2 Spatial Reuse fields in the EHT Common Info field are normalized to a 20 MHz channel and the values of the 4 Spatial Reuse fields in the Common Info field are simply normalized to the corresponding channel, 40 MHz, it may be desirable to correct by adding 6 dB (or 20 log 2).
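The correction-and-requantization step recurs throughout this subsection (6, 12, or 18 dB, i.e., 20 log of the bandwidth ratio 2, 4, or 8). The following is a minimal sketch of it, using the Table 3 encodings; the handling of the special encodings (0, 15, and the "PSR ≥ −26 dBm" entry, treated here as −26 dBm) and the fallback to the lowest encoding are simplifying assumptions.

```python
# A minimal sketch of the correction step described above: the PSR value
# (in dBm) represented by a Spatial Reuse field is adjusted by a
# bandwidth-dependent amount and then mapped back to the largest Table 3
# encoding whose dBm value does not exceed the corrected value. Special
# encodings 0 and 15 are out of scope for this sketch.

# Table 3 encodings 1..14 and the PSR value in dBm they represent
PSR_DBM = {1: -80, 2: -74, 3: -68, 4: -62, 5: -56, 6: -50, 7: -47,
           8: -44, 9: -41, 10: -38, 11: -35, 12: -32, 13: -29, 14: -26}


def corrected_encoding(encoding: int, correction_db: float) -> int:
    """Re-encode a PSR field after compensating a bandwidth difference.

    `correction_db` is subtracted from the field's dBm meaning (use 6, 12
    or 18 dB for bandwidth ratios 2, 4 or 8; negative values add it back).
    """
    corrected = PSR_DBM[encoding] - correction_db
    # Largest encoding whose dBm value is smaller than or equal to `corrected`
    candidates = [enc for enc, dbm in PSR_DBM.items() if dbm <= corrected]
    return max(candidates) if candidates else 1  # fallback: lowest encoding


if __name__ == "__main__":
    # -50 dBm (encoding 6) corrected by 6 dB gives -56 dBm -> encoding 5;
    # corrected by 12 dB it gives -62 dBm -> encoding 4.
    print(corrected_encoding(6, 6.0))   # 5
    print(corrected_encoding(6, 12.0))  # 4
```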
The 4 Spatial Reuse fields in the Common Info field can be set as follows according to the BW (160 MHz) indicated in the UL BW when the 320 MHz EHT TB PPDU is triggered in the example of the UL BW and UL Bandwidth Extension subfields above (even outside the above example, if the 320 MHz EHT TB PPDU is triggered, the BW indicated in the UL BW is 160 MHz). It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel.

The 4 Spatial Reuse fields in the Common Info field can be set like 160 MHz in Appendix 1 described later. However, the 160 MHz may be one of the Primary 160 MHz and the Secondary 160 MHz (or the low 160 MHz and the high 160 MHz); for example, it can simply be the Primary 160 MHz. Alternatively, each Spatial Reuse value (or PSR value; the same applies below) can be set to the larger or smaller of the 160 MHz Spatial Reuse values of the Primary 160 MHz and the Secondary 160 MHz (or the low 160 MHz and the high 160 MHz). Or, it can be set to the 160 MHz Spatial Reuse value with the smaller or larger value between the minimum or maximum of the four 40 MHz Spatial Reuse values within the Primary 160 MHz (or low 160 MHz) and the minimum or maximum of the four 40 MHz Spatial Reuse values within the Secondary 160 MHz (or high 160 MHz).

Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to one of the values of the 4 Spatial Reuse fields in the Common Info field, for example, the largest value or the smallest value. In this case, the field, among the 2 Spatial Reuse fields in the EHT Common Info field, corresponding to the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field may be set identically (that is, it can be set to one of the four Spatial Reuse field values, for example the largest value or the smallest value, and this value may be used to configure the Spatial Reuse field corresponding to that 160 MHz in the U-SIG of the EHT TB PPDU) or reserved. Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 160 MHz other than the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to an appropriate Spatial Reuse value, and this value can be used to set the field, among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU, corresponding to the 160 MHz other than the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field.

As another example, the 4 Spatial Reuse fields in the Common Info field can be set as follows according to the BW (160 MHz) indicated in the UL BW when the 320 MHz EHT TB PPDU is triggered in the example of the UL BW and UL Bandwidth Extension subfields above (even outside the above example, if the 320 MHz EHT TB PPDU is triggered, the BW indicated in the UL BW is 160 MHz). It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel.

There are four 40 MHz Spatial Reuse values within each of the Primary 160 MHz and the Secondary 160 MHz (or the low 160 MHz and the high 160 MHz); each Spatial Reuse value can be set to the larger or smaller value obtained by comparing the 40 MHz Spatial Reuse values at the same location within the two 160 MHz. That is, the first Spatial Reuse field in the Common Info field can be set by comparing the lowest 40 MHz Spatial Reuse value of the Primary 160 MHz (or low 160 MHz) and the lowest 40 MHz Spatial Reuse value of the Secondary 160 MHz (or high 160 MHz). The second Spatial Reuse field in the Common Info field can be set by comparing the second lowest 40 MHz Spatial Reuse value of the Primary 160 MHz (or low 160 MHz) and the second lowest 40 MHz Spatial Reuse value of the Secondary 160 MHz (or high 160 MHz). The third Spatial Reuse field in the Common Info field can be set by comparing the second highest 40 MHz Spatial Reuse value of the Primary 160 MHz (or low 160 MHz) and the second highest 40 MHz Spatial Reuse value of the Secondary 160 MHz (or high 160 MHz). The fourth Spatial Reuse field in the Common Info field can be set by comparing the highest 40 MHz Spatial Reuse value of the Primary 160 MHz (or low 160 MHz) and the highest 40 MHz Spatial Reuse value of the Secondary 160 MHz (or high 160 MHz).

As another example, the 4 Spatial Reuse fields in the Common Info field can be set as follows according to the BW (160 MHz) indicated in the UL BW when the 320 MHz EHT TB PPDU is triggered in the example of the UL BW and UL Bandwidth Extension subfields above (even outside the above example, if the 320 MHz EHT TB PPDU is triggered, the BW indicated in the UL BW is 160 MHz). It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. The settings of the two Spatial Reuse fields in the EHT Common Info field are described in Appendix 3 and can be set using them. That is, when a 320 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are each set to a spatial reuse value corresponding to 160 MHz. By copying the larger or smaller of the two Spatial Reuse fields in the EHT Common Info field, the corresponding value can be set identically in the four fields in the Common Info field. In addition, in order to correct the value according to the BW difference (or normalization difference), a specific dB value can be added to or subtracted from the meaning (dBm value) of the corresponding value, which is then changed to the encoding corresponding to the maximum dBm value smaller than or equal to the corrected value (the four values are set equal). In this case, it may be desirable to compensate by subtracting 12 dB (or 20 log 4) in particular. However, if the values of the 2 Spatial Reuse fields in the EHT Common Info field are normalized to a 20 MHz channel and the values of the 4 Spatial Reuse fields in the Common Info field simply mean values normalized to the corresponding channel, 40 MHz, it may be desirable to correct by adding 6 dB (or 20 log 2).
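The position-by-position comparison described above is a simple elementwise reduction. The following is a minimal sketch, assuming illustrative names; whether the larger or the smaller value is kept is a parameter, since the text allows either.

```python
# A minimal sketch of the per-position comparison above: each of the four
# Common Info Spatial Reuse fields is set by comparing the 40 MHz Spatial
# Reuse values at the same position within the primary and secondary
# 160 MHz (or low and high 160 MHz).


def compare_halves(primary: list[int], secondary: list[int],
                   pick=min) -> list[int]:
    """Elementwise reduction of two lists of four per-40 MHz SR values."""
    assert len(primary) == len(secondary) == 4
    return [pick(p, s) for p, s in zip(primary, secondary)]


if __name__ == "__main__":
    print(compare_halves([5, 7, 6, 9], [4, 8, 6, 10]))  # [4, 7, 6, 9]
```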
The four Spatial Reuse fields in the Common Info field can be set as follows when a 160 MHz EHT TB PPDU is triggered with a configuration of the UL BW and UL Bandwidth Extension subfields in which the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted at or below 80 MHz.

The 4 Spatial Reuse fields in the Common Info field can be set like 80 MHz in Appendix 1 described later. However, the 80 MHz may be the 80 MHz including the channel through which the trigger frame is transmitted. Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to one of the values of the 4 Spatial Reuse fields in the Common Info field, for example, the largest value or the smallest value. In this case, the field, among the 2 Spatial Reuse fields in the EHT Common Info field, corresponding to the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field may be set in the same way (that is, it can be set to one of the four Spatial Reuse field values, for example the largest value or the smallest value, and this value may be used to configure the Spatial Reuse field corresponding to that 80 MHz in the U-SIG of the EHT TB PPDU) or reserved. Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 80 MHz other than the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to an appropriate Spatial Reuse value, and this value can be used to set the field, among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU, corresponding to the 80 MHz other than the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field.

The 4 Spatial Reuse fields in the Common Info field can also be configured as follows when a 160 MHz EHT TB PPDU is triggered and the BW indicated in the UL BW is 80 MHz with the configuration of the UL BW and UL Bandwidth Extension subfields. It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted at or below 80 MHz.
The settings of the two Spatial Reuse fields in the EHT Common Info field are described in Appendix 3 and can be set using them. That is, when a 160 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to the spatial reuse values corresponding to each 80 MHz. Among the two Spatial Reuse fields in the EHT Common Info field, the value corresponding to the 80 MHz including the channel through which the trigger frame is transmitted can be copied, and the corresponding value can be set identically in the four fields in the Common Info field. In addition, in order to correct the value according to the BW difference (or normalization difference), a specific dB value can be added to or subtracted from the meaning (dBm value) of the corresponding value, which is then changed to the encoding corresponding to the maximum dBm value smaller than or equal to the corrected value (the four values can be set the same). In this case, it may be desirable to compensate by subtracting 12 dB (or 20 log 4) in particular.

The 4 Spatial Reuse fields in the Common Info field can also be configured as follows when a 160 MHz EHT TB PPDU is triggered and the BW indicated in the UL BW is 80 MHz with the configuration of the UL BW and UL Bandwidth Extension subfields. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel.

The 4 Spatial Reuse fields in the Common Info field can be set like 80 MHz in Appendix 1 described later. However, the 80 MHz may be one of the Primary and Secondary 80 MHz (or the low 80 MHz and the high 80 MHz); for example, it may simply be the Primary 80 MHz. Alternatively, each Spatial Reuse value of the Primary 80 MHz and the Secondary 80 MHz (or the low 80 MHz and the high 80 MHz) can be set to the 80 MHz Spatial Reuse value having the larger or smaller value. Or, it can be set to the 80 MHz Spatial Reuse value with the smaller or larger value between the minimum or maximum of the four 20 MHz Spatial Reuse values within the Primary 80 MHz (or low 80 MHz) and the minimum or maximum of the four 20 MHz Spatial Reuse values within the Secondary 80 MHz (or high 80 MHz).

Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to one of the values of the 4 Spatial Reuse fields in the Common Info field, for example, the largest value or the smallest value. In this case, the field, among the 2 Spatial Reuse fields in the EHT Common Info field, corresponding to the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field may be set identically (that is, it can be set to one of the four Spatial Reuse field values, for example the largest or smallest value, and this value may be used to configure the Spatial Reuse field corresponding to that 80 MHz in the U-SIG of the EHT TB PPDU) or reserved.
Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 80 MHz other than the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to an appropriate Spatial Reuse value, and this value can be used to set the field, among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU, corresponding to the 80 MHz other than the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field.

As another example, the 4 Spatial Reuse fields in the Common Info field can be configured as follows when a 160 MHz EHT TB PPDU is triggered with a configuration of the UL BW and UL Bandwidth Extension subfields in which the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel.

There are four 20 MHz Spatial Reuse values within each of the Primary 80 MHz and the Secondary 80 MHz (or the low 80 MHz and the high 80 MHz); each Spatial Reuse value can be set to the larger or smaller value obtained by comparing the 20 MHz Spatial Reuse values at the same location within the two 80 MHz. That is, the first Spatial Reuse field in the Common Info field can be set by comparing the lowest 20 MHz Spatial Reuse value of the Primary 80 MHz (or low 80 MHz) and the lowest 20 MHz Spatial Reuse value of the Secondary 80 MHz (or high 80 MHz). The second Spatial Reuse field in the Common Info field can be set by comparing the second lowest 20 MHz Spatial Reuse value of the Primary 80 MHz (or low 80 MHz) and the second lowest 20 MHz Spatial Reuse value of the Secondary 80 MHz (or high 80 MHz). The third Spatial Reuse field in the Common Info field can be set by comparing the second highest 20 MHz Spatial Reuse value of the Primary 80 MHz (or low 80 MHz) and the second highest 20 MHz Spatial Reuse value of the Secondary 80 MHz (or high 80 MHz). The fourth Spatial Reuse field in the Common Info field can be set by comparing the highest 20 MHz Spatial Reuse value of the Primary 80 MHz (or low 80 MHz) and the highest 20 MHz Spatial Reuse value of the Secondary 80 MHz (or high 80 MHz).

As another example, the 4 Spatial Reuse fields in the Common Info field can be configured as follows when a 160 MHz EHT TB PPDU is triggered with a configuration of the UL BW and UL Bandwidth Extension subfields in which the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. The settings of the two Spatial Reuse fields in the EHT Common Info field are described in Appendix 3 and can be set using them. That is, when a 160 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to the spatial reuse values corresponding to each 80 MHz. By copying the larger or smaller of the two Spatial Reuse fields in the EHT Common Info field, the corresponding value can be set identically in the four fields in the Common Info field. In addition, in order to correct the value according to the BW difference (or normalization difference), a specific dB value can be added to or subtracted from the meaning (dBm value) of the corresponding value, which is then changed to the encoding corresponding to the maximum dBm value smaller than or equal to the corrected value (the four values can be set the same). In this case, it may be desirable to compensate by subtracting 12 dB (or 20 log 4) in particular.
However, if the values of the 2 Spatial Reuse fields in the EHT Common Info field are normalized to a 20 MHz channel and the values of the 4 Spatial Reuse fields in the Common Info field are simply normalized to the corresponding channel, 40 MHz, it may be desirable to correct by adding 6 dB (or 20 log 2).

The four Spatial Reuse fields in the Common Info field can be configured as follows when a 320 MHz EHT TB PPDU is triggered and the BW indicated in the UL BW is 80 MHz with the configuration of the UL BW and UL Bandwidth Extension subfields. It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted at or below 80 MHz. The 4 Spatial Reuse fields in the Common Info field can be set like 80 MHz in Appendix 1 described later. However, the 80 MHz may be the 80 MHz including the channel through which the trigger frame is transmitted.

The four Spatial Reuse fields in the Common Info field can also be configured as follows when a 320 MHz EHT TB PPDU is triggered and the BW indicated in the UL BW is 80 MHz with the configuration of the UL BW and UL Bandwidth Extension subfields. It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted at a bandwidth between 80 MHz and 160 MHz. The 4 Spatial Reuse fields in the Common Info field can be set like 80 MHz in Appendix 1 described later. However, the 80 MHz may be one of the two 80 MHz channels within the 160 MHz channel including the channel through which the trigger frame is transmitted. Alternatively, each Spatial Reuse value can be set to the larger or smaller of the 80 MHz Spatial Reuse values of the two 80 MHz channels in the 160 MHz channel including the channel through which the trigger frame is transmitted. Or, it can be set to the 80 MHz Spatial Reuse value with the smaller or larger value between the minimum or maximum of the four 20 MHz Spatial Reuse values within the first 80 MHz and the minimum or maximum of the four 20 MHz Spatial Reuse values within the second 80 MHz of the 160 MHz channel including the channel through which the trigger frame is transmitted.

Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs can be set to one of the values of the 4 Spatial Reuse fields in the Common Info field, for example, the largest value or the smallest value. In this case, the field, among the 2 Spatial Reuse fields in the EHT Common Info field, corresponding to the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs may be set in the same way (that is, it can be set to one of the four Spatial Reuse field values, for example the largest value or the smallest value, and this value may be used to configure the Spatial Reuse field corresponding to that 160 MHz in the U-SIG of the EHT TB PPDU) or reserved.
Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 160 MHz other than the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs can be set to an appropriate Spatial Reuse value, and this value can be used to set the field, among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU, corresponding to the 160 MHz other than the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs.

As another example, the four Spatial Reuse fields in the Common Info field can be set as follows when a 320 MHz EHT TB PPDU is triggered with a configuration of the UL BW and UL Bandwidth Extension subfields in which the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted at a bandwidth between 80 MHz and 160 MHz.

There are four 20 MHz Spatial Reuse values within each of the first 80 MHz and the second 80 MHz of the 160 MHz channel that includes the channel through which the trigger frame is transmitted; each Spatial Reuse value can be set to the larger or smaller value obtained by comparing the 20 MHz Spatial Reuse values at the same location within the two 80 MHz. That is, the first Spatial Reuse field in the Common Info field can be set by comparing the lowest 20 MHz Spatial Reuse value of the first 80 MHz and the lowest 20 MHz Spatial Reuse value of the second 80 MHz of the 160 MHz channel including the channel through which the trigger frame is transmitted. The second Spatial Reuse field in the Common Info field can be set by comparing the second lowest 20 MHz Spatial Reuse value of the first 80 MHz and the second lowest 20 MHz Spatial Reuse value of the second 80 MHz of the 160 MHz channel including the channel through which the trigger frame is transmitted. The third Spatial Reuse field in the Common Info field can be set by comparing the second highest 20 MHz Spatial Reuse value of the first 80 MHz and the second highest 20 MHz Spatial Reuse value of the second 80 MHz of the 160 MHz channel including the channel through which the trigger frame is transmitted. The fourth Spatial Reuse field in the Common Info field can be set by comparing the highest 20 MHz Spatial Reuse value of the first 80 MHz and the highest 20 MHz Spatial Reuse value of the second 80 MHz of the 160 MHz channel including the channel through which the trigger frame is transmitted.

Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs can be set to one of the values of the 4 Spatial Reuse fields in the Common Info field, for example, the largest value or the smallest value. In this case, the field, among the 2 Spatial Reuse fields in the EHT Common Info field, corresponding to the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs may be set in the same way (that is, it can be set to one of the four Spatial Reuse field values, for example the largest value or the smallest value,
and this value may be used to configure the Spatial Reuse field corresponding to that 160 MHz in the U-SIG of the EHT TB PPDU) or reserved. Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 160 MHz other than the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs can be set to an appropriate Spatial Reuse value, and this value can be used to set the field, among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU, corresponding to the 160 MHz other than the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs.

The four Spatial Reuse fields in the Common Info field can also be configured as follows when a 320 MHz EHT TB PPDU is triggered and the BW indicated in the UL BW is 80 MHz with the configuration of the UL BW and UL Bandwidth Extension subfields. It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted at or below 160 MHz. The settings of the two Spatial Reuse fields in the EHT Common Info field are described in Appendix 3 and can be set using them. That is, when a 320 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to the spatial reuse values corresponding to each 160 MHz. Among the 2 Spatial Reuse fields in the EHT Common Info field, the value corresponding to the 160 MHz including the channel through which the trigger frame is transmitted can be copied, and the corresponding value can be set identically in the 4 fields in the Common Info field (the four Spatial Reuse fields represent 80 MHz, each corresponding to 20 MHz, and the 160 MHz spatial reuse value may be set as it is). In addition, in order to correct the value according to the BW difference (or normalization difference), a specific dB value can be added to or subtracted from the meaning (dBm value) of the corresponding value, which is then changed to the encoding corresponding to the maximum dBm value smaller than or equal to the corrected value (the four values are set equal). In this case, it may be desirable to compensate by subtracting 18 dB (or 20 log 8) in particular.

The four Spatial Reuse fields in the Common Info field can also be configured as follows when a 320 MHz EHT TB PPDU is triggered and the BW indicated in the UL BW is 80 MHz with the configuration of the UL BW and UL Bandwidth Extension subfields. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel.

The 4 Spatial Reuse fields in the Common Info field can be set like 80 MHz in Appendix 1 described later. However, the 80 MHz may be one of the four 80 MHz: the Primary 80 MHz, the Secondary 80 MHz, and the two 80 MHz of the Secondary 160 MHz (or the lowest 80 MHz, the second lowest 80 MHz, the second highest 80 MHz, and the highest 80 MHz). For example, it may simply be the Primary 80 MHz. Or, each Spatial Reuse value may be set to the 80 MHz Spatial Reuse value having the larger or smaller value among the Primary 80 MHz, the Secondary 80 MHz, and the two 80 MHz of the Secondary 160 MHz (or the lowest 80 MHz, the second lowest 80 MHz, the second highest 80 MHz, and the highest 80 MHz).
The four Spatial Reuse fields in the Common Info field can be configured as follows when the UL BW and UL Bandwidth Extension subfields are configured such that a 320 MHz EHT TB PPDU is triggered and the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. Like the 80 MHz case in Appendix 1 described later, the four Spatial Reuse fields in the Common Info field can be set. However, the 80 MHz here may be any one of the four 80 MHz segments, i.e., the Primary 80 MHz, the Secondary 80 MHz, and the two 80 MHz halves of the Secondary 160 MHz (equivalently, the lowest 80 MHz, second lowest 80 MHz, second highest 80 MHz, and highest 80 MHz). For example, it may simply be the Primary 80 MHz. Alternatively, each Spatial Reuse value may be set to the 80 MHz Spatial Reuse value having the larger or smaller value among these four 80 MHz segments. Or each field can be set to the 80 MHz Spatial Reuse value having the smaller or larger value among the minimum or maximum of the four 20 MHz Spatial Reuse values within the Primary 80 MHz (or lowest 80 MHz), the minimum or maximum of the four 20 MHz Spatial Reuse values within the Secondary 80 MHz (or second lowest 80 MHz), the minimum or maximum of the four 20 MHz Spatial Reuse values within the low 80 MHz of the Secondary 160 MHz (or second highest 80 MHz), and the minimum or maximum of the four 20 MHz Spatial Reuse values within the high 80 MHz of the Secondary 160 MHz (or highest 80 MHz). Basically, these values may have nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the four Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 160 MHz to which the 80 MHz indicated by the values of the four Spatial Reuse fields in the Common Info field belongs can be set to one of the values of the four Spatial Reuse fields, for example the largest or the smallest value. In this case, the field among the two Spatial Reuse fields in the EHT Common Info field corresponding to that 160 MHz may be set in the same way (that is, set to one of the four values of the Spatial Reuse fields, for example the largest or the smallest value; this value may be used to configure the Spatial Reuse field corresponding to that 160 MHz in the U-SIG of the EHT TB PPDU) or reserved. Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 160 MHz other than the 160 MHz to which the 80 MHz indicated by the values of the four Spatial Reuse fields in the Common Info field belongs can be set to an appropriate Spatial Reuse value, and this value can be used to set the field, among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU, corresponding to that other 160 MHz. As another example, when the UL BW and UL Bandwidth Extension subfields are configured such that a 320 MHz EHT TB PPDU is triggered and the BW indicated in the UL BW is 80 MHz, the four Spatial Reuse fields in the Common Info field can be set as follows. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. There are four 20 MHz Spatial Reuse values within each of the four 80 MHz segments, i.e., the Primary 80 MHz, the Secondary 80 MHz, and the two 80 MHz halves of the Secondary 160 MHz (or the lowest 80 MHz, second lowest 80 MHz, second highest 80 MHz, and highest 80 MHz). Each Spatial Reuse field can be set to the larger or smaller value obtained by comparing the 20 MHz Spatial Reuse values at the same position within the four 80 MHz segments. That is, the first Spatial Reuse field in the Common Info field can be set by comparing the lowest 20 MHz Spatial Reuse value of the Primary 80 MHz (or lowest 80 MHz), the lowest 20 MHz Spatial Reuse value of the Secondary 80 MHz (or second lowest 80 MHz), the lowest 20 MHz Spatial Reuse value of the low 80 MHz of the Secondary 160 MHz (or second highest 80 MHz), and the lowest 20 MHz Spatial Reuse value of the high 80 MHz of the Secondary 160 MHz (or highest 80 MHz).
The second Spatial Reuse field in the Common Info field can be set by comparing the second lowest 20 MHz Spatial Reuse value of the Primary 80 MHz (or lowest 80 MHz), the second lowest 20 MHz Spatial Reuse value of the Secondary 80 MHz (or second lowest 80 MHz), the second lowest 20 MHz Spatial Reuse value of the low 80 MHz of the Secondary 160 MHz (or second highest 80 MHz), and the second lowest 20 MHz Spatial Reuse value of the high 80 MHz of the Secondary 160 MHz (or highest 80 MHz). The third Spatial Reuse field in the Common Info field can be set by comparing the second highest 20 MHz Spatial Reuse value of the Primary 80 MHz (or lowest 80 MHz), the second highest 20 MHz Spatial Reuse value of the Secondary 80 MHz (or second lowest 80 MHz), the second highest 20 MHz Spatial Reuse value of the low 80 MHz of the Secondary 160 MHz (or second highest 80 MHz), and the second highest 20 MHz Spatial Reuse value of the high 80 MHz of the Secondary 160 MHz (or highest 80 MHz). The fourth Spatial Reuse field in the Common Info field can be set by comparing the highest 20 MHz Spatial Reuse value of the Primary 80 MHz (or lowest 80 MHz), the highest 20 MHz Spatial Reuse value of the Secondary 80 MHz (or second lowest 80 MHz), the highest 20 MHz Spatial Reuse value of the low 80 MHz of the Secondary 160 MHz (or second highest 80 MHz), and the highest 20 MHz Spatial Reuse value of the high 80 MHz of the Secondary 160 MHz (or highest 80 MHz). As another example, when the UL BW and UL Bandwidth Extension subfields are configured such that a 320 MHz EHT TB PPDU is triggered and the BW indicated in the UL BW is 80 MHz, the four Spatial Reuse fields in the Common Info field can be set as follows. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. The settings of the two Spatial Reuse fields in the EHT Common Info field are described in Appendix 3 and can be set accordingly. That is, when a 320 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are each set to a spatial reuse value corresponding to 160 MHz. By copying the larger or smaller of the values of the two Spatial Reuse fields in the EHT Common Info field, the four fields in the Common Info field can be set identically to it (the four spatial reuse fields represent 80 MHz, each field corresponding to 20 MHz, and the 160 MHz spatial reuse value may be set as it is). In addition, in order to correct the value according to the BW difference (or normalization difference), a specific dBm value can be added to or subtracted from the meaning (dBm value) of the corresponding value, which is then changed to the Spatial Reuse value corresponding to the maximum dBm value that is smaller than or equal to the corrected value (the four values can be set the same). In this case, it may be particularly desirable to correct by subtracting 18 dB (or 20 log 8). When the UL BW and UL Bandwidth Extension subfields indicate 160 MHz (or 2*W MHz, where W is 80, 40, or 20) but an 80 MHz (or W MHz, where W is 80, 40, or 20) EHT TB PPDU is triggered, the four Spatial Reuse fields in the Common Info field can be set as follows. The four Spatial Reuse fields in the Common Info field can be set like the 160 MHz (or 2*W MHz) case in Appendix 1 described later. However, the actual Spatial Reuse value can be set only for the 80 MHz (or W MHz) where the actual EHT TB PPDU is transmitted.
For the other 80 MHz (or W MHz, where W is 80, 40, or 20), any Spatial Reuse value can be set. However, since no actual signal is transmitted there, it may be desirable to set it to a large Spatial Reuse value. Basically, these values may have nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the four Spatial Reuse fields in the Common Info field. For example, the U-SIG Spatial Reuse fields can be set using the two 40 MHz Spatial Reuse values (or the two W/2 MHz values when W is 80 or 40, or the one 20 MHz value when W is 20) that correspond, among the values of the four Spatial Reuse fields in the Common Info field, to the 80 MHz (or W MHz) used for transmission of the EHT TB PPDU. In this case, the two Spatial Reuse fields in the EHT Common Info field may also be set identically (that is, set using the two 40 MHz Spatial Reuse values (or the two W/2 MHz values when W is 80 or 40, or the one 20 MHz value when W is 20) corresponding to the 80 MHz (or W MHz) used for transmission of the EHT TB PPDU among the values of the four Spatial Reuse fields; this value may be used to configure the U-SIG Spatial Reuse field of the EHT TB PPDU) or reserved. As another example, when the UL BW and UL Bandwidth Extension subfields indicate 160 MHz (or 2*W MHz, where W is 80, 40, or 20) but an 80 MHz (or W MHz) EHT TB PPDU is triggered, the four Spatial Reuse fields in the Common Info field may be set as follows. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. The settings of the two Spatial Reuse fields in the EHT Common Info field are described in Appendix 3 and can be set accordingly. That is, when the 80 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to spatial reuse values corresponding to each 40 MHz. These values can be copied as they are into the corresponding 40 MHz fields among the four Spatial Reuse fields in the Common Info field. For example, if the 80 MHz EHT TB PPDU corresponds to the low frequency of the 160 MHz channel, the value of the first of the two Spatial Reuse fields in the EHT Common Info field can be copied to the first of the four Spatial Reuse fields in the Common Info field, and the value of the second of the two Spatial Reuse fields in the EHT Common Info field can be copied to the second of the four Spatial Reuse fields in the Common Info field. If the 80 MHz EHT TB PPDU corresponds to the high frequency of the 160 MHz channel, the value of the first of the two Spatial Reuse fields in the EHT Common Info field can be copied to the third of the four Spatial Reuse fields in the Common Info field, and the value of the second of the two Spatial Reuse fields in the EHT Common Info field can be copied to the fourth of the four Spatial Reuse fields in the Common Info field. The two of the four Spatial Reuse fields in the Common Info field that do not apply can be set to specific values (preferably high values), and for ease of implementation, the values of the two Spatial Reuse fields in the EHT Common Info field can be reused. In other words, the value of the first of the two Spatial Reuse fields in the EHT Common Info field can be copied to the first and third of the four Spatial Reuse fields in the Common Info field, and the value of the second of the two Spatial Reuse fields in the EHT Common Info field can be copied to the second and fourth of the four Spatial Reuse fields in the Common Info field.
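A minimal sketch of this position-based copy, under the assumption that the Spatial Reuse values are plain integers and with hypothetical names:

# Copy the two EHT Common Info Spatial Reuse values (esr1, esr2) into the
# four Common Info fields according to which 80 MHz half of the 160 MHz
# channel carries the EHT TB PPDU. unused_value=None selects the
# ease-of-implementation option of reusing esr1/esr2 for the unused pair.

def fill_common_info_fields(esr1, esr2, ppdu_in_low_80mhz, unused_value=None):
    pad = (esr1, esr2) if unused_value is None else (unused_value, unused_value)
    if ppdu_in_low_80mhz:
        return [esr1, esr2, pad[0], pad[1]]   # fields 1/2 carry the PPDU's values
    return [pad[0], pad[1], esr1, esr2]       # fields 3/4 carry the PPDU's values

print(fill_common_info_fields(7, 9, ppdu_in_low_80mhz=False))  # [7, 9, 7, 9]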
Alternatively, the four Spatial Reuse fields in the Common Info field may simply be set according to the BW (the EHT TB PPDU BW) indicated in the UL BW and UL Bandwidth Extension subfields, for spatial reuse by EHT STAs. This can be used to configure the Spatial Reuse field in the U-SIG when the EHT TB PPDU is transmitted. That is, as shown in Appendix 2 described later, the four Spatial Reuse fields in the Common Info field can be set and the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured. In this case, the two Spatial Reuse fields in the EHT Common Info field may be set identically (i.e., the method of configuring the Spatial Reuse field in the U-SIG of the EHT TB PPDU in Appendix 2 is equivalent to the configuration of the two Spatial Reuse fields in the EHT Common Info field, and this value may be used to configure the Spatial Reuse field in the U-SIG of the EHT TB PPDU) or may be reserved. Alternatively, the four Spatial Reuse fields in the Common Info field can simply be set to the value (0) that disallows spatial reuse or the value (15) that prohibits spatial reuse, regardless of the BW of the triggered EHT TB PPDU or the BW indicated in the UL BW. The reason is that, from the perspective of the 802.11ax specification, an OBSS HE STA cannot obtain the BSS color information it needs to perform spatial reuse from the EHT TB PPDU. Among the SR values, PSR_Disallow (value=0) disables SR, but OBSS PD (Preamble Detection) remains available; PSR_AND_NON_SRG_OBSS_PD_PROHIBITED (value=15) disables not only SR but also OBSS PD. The dB values can be defined the same as in the existing 802.11ax (see Table 3). The two Spatial Reuse fields in the EHT Common Info field can be set according to the BW (the EHT TB PPDU BW) indicated in the UL BW and UL Bandwidth Extension subfields, in addition to the setting methods suggested above. This can be used to configure the Spatial Reuse field in the U-SIG when the EHT TB PPDU is transmitted. That is, as shown in Appendix 3 described later, the two Spatial Reuse fields in the EHT Common Info field can be set and the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured.
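For reference, the two special values just mentioned can be represented as follows; the enum is only an illustrative rendering of the 802.11ax codes named in the text (intermediate values 1 to 14 carry the dB meanings of Table 3, which is not reproduced here).

from enum import IntEnum

class SpatialReuseValue(IntEnum):
    PSR_DISALLOWED = 0                        # SR disabled; OBSS PD still available
    PSR_AND_NON_SRG_OBSS_PD_PROHIBITED = 15   # both SR and OBSS PD disabled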
3.3. When Triggering TB A-PPDU

FIG. 21 shows an example of transmitting a TB A-PPDU. A TB A-PPDU (Trigger Based Aggregated PPDU) is a PPDU in which an EHT TB PPDU and a HE TB PPDU are simultaneously transmitted in response to a trigger frame. As shown in FIG. 21, the trigger frame can trigger an EHT TB PPDU and a HE TB PPDU, and the TB A-PPDU can be transmitted by one STA by aggregating the EHT TB PPDU and the HE TB PPDU. Alternatively, the TB A-PPDU may be an aggregate of the EHT TB PPDU and the HE TB PPDU in which the EHT TB PPDU or the HE TB PPDU is transmitted by a plurality of STAs. As described above, in the trigger frame triggering the TB A-PPDU, four spatial reuse fields for the HE TB PPDU and two spatial reuse fields for the EHT TB PPDU may exist. The four spatial reuse fields can be set to a value for the bandwidth of only the HE TB PPDU (i.e., considering only the bandwidth through which the HE TB PPDU is transmitted, regardless of the entire bandwidth of the TB A-PPDU), while the two spatial reuse fields may be set to a value considering the bandwidth of only the EHT TB PPDU or the entire bandwidth. The four Spatial Reuse fields in the Common Info field can be set like those of the existing 802.11ax trigger frame according to the BW (the HE TB Sub-PPDU BW) indicated in the UL BW. This can be used to configure the Spatial Reuse field in HE-SIG-A when the HE TB PPDU is transmitted. That is, as shown in Appendix 1 described later, the four Spatial Reuse fields in the Common Info field can be set and the Spatial Reuse field in the HE TB Sub-PPDU can be configured. The two Spatial Reuse fields in the EHT Common Info field can be set according to the BW (the EHT TB Sub-PPDU BW or the A-PPDU BW) indicated in the UL BW and UL BW Extension subfields. This can be used to configure the Spatial Reuse field in the U-SIG when the EHT TB PPDU is transmitted. That is, as shown in Appendix 3 described later, the two Spatial Reuse fields in the EHT Common Info field can be set, and the Spatial Reuse field in the U-SIG of the EHT TB Sub-PPDU may be configured. It may be preferable to set them to the Spatial Reuse value of the indicated BW. Alternatively, when the BW indicated in the UL BW and UL BW Extension subfields is the EHT TB Sub-PPDU BW, the two Spatial Reuse fields in the EHT Common Info field may be set not according to that BW but according to the entire BW of the A-PPDU. This can be used to configure the Spatial Reuse field in the U-SIG when the EHT TB Sub-PPDU is transmitted. That is, as shown in Appendix 3 described later, the two Spatial Reuse fields in the EHT Common Info field can be set and the Spatial Reuse field in the U-SIG of the EHT TB Sub-PPDU can be configured. This may be desirable because it is a spatial reuse value considering the BW of the entire A-PPDU actually transmitted, but problems may occur depending on the value of the BW indicator of the TB PPDU. Alternatively, when the BW indicated in the UL BW and UL BW Extension subfields is the A-PPDU BW, the two Spatial Reuse fields in the EHT Common Info field may be set not according to that BW but according to the EHT TB Sub-PPDU BW. This can be used to configure the Spatial Reuse field in the U-SIG when the EHT TB Sub-PPDU is transmitted. That is, as shown in Appendix 3 described later, the two Spatial Reuse fields in the EHT Common Info field can be set and the Spatial Reuse field in the U-SIG of the EHT TB Sub-PPDU can be configured. This is a Spatial Reuse value that considers only the BW of the EHT TB Sub-PPDU; it has finer granularity and can be better for performance, but problems may occur depending on the BW indicator value of the TB PPDU. In all of the above proposals, when setting a Spatial Reuse field by comparing several Spatial Reuse values, it may be desirable to set it to the smaller value. The reason is that if the Spatial Reuse value is set to a large value, an adjacent OBSS transmits with high power, resulting in interference greater than the allowable interference power. In all of the above proposals, when one Spatial Reuse value is copied into another Spatial Reuse field and there is a difference in BW, the value can be corrected by adding to or subtracting from its meaning (dBm value) a specific dBm value, and then changing it to the Spatial Reuse value corresponding to the maximum dBm value that is less than or equal to the corrected value. Even if different Spatial Reuse fields have values corresponding to different channel sizes, if normalization to the same channel size is applied, no additional correction is necessary when copying and setting. In Appendices 1, 2, and 3 described later, regardless of the channel size to which each Spatial Reuse value corresponds, the value can be normalized to a 20 MHz channel. For example, a Spatial Reuse value corresponding to 40 MHz can be normalized to 20 MHz by subtracting 6 (or 20 log 2) from the corresponding PSR value (in dBm, that is, the value calculated based on 40 MHz) before normalization, and then converting it to the corresponding Spatial Reuse value. As another example, a Spatial Reuse value corresponding to 80 MHz can be normalized to 20 MHz by subtracting 12 (or 20 log 4) from the corresponding PSR value (in dBm, that is, the value calculated based on 80 MHz) before normalization, and then setting it to the corresponding Spatial Reuse value. As another example, a Spatial Reuse value corresponding to 160 MHz can be normalized to 20 MHz by subtracting 18 (or 20 log 8) from the corresponding PSR value (in dBm, that is, the value calculated based on 160 MHz) before normalization, and then setting it to the corresponding Spatial Reuse value.
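A one-function sketch of this 20 MHz normalization offset (names are illustrative):

import math

# dB offset subtracted from a PSR value calculated over an N*20 MHz channel
# when normalizing it to 20 MHz: 20*log10(N), i.e. about 6, 12, and 18 dB
# for 40, 80, and 160 MHz respectively, matching the examples above.

def normalization_offset_db(channel_mhz):
    return 20 * math.log10(channel_mhz / 20)

for bw in (40, 80, 160):
    print(bw, round(normalization_offset_db(bw), 1))  # 6.0, 12.0, 18.1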
Appendix 1
4 Spatial Reuse Fields in Common Info Field of Trigger Frame
i) 20 MHz: The four spatial reuse fields may have the same spatial reuse value and may mean a spatial reuse value corresponding to a 20 MHz channel.
ii) 40 MHz:
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the second lowest 20 MHz subchannel. Also, when a 2.4 GHz band TB PPDU is transmitted, it may be set to the same value as Spatial reuse field 1. The reason is that since 40 MHz channelization overlaps in the 2.4 GHz band, an OBSS STA that decoded the TB PPDU in a specific 20 MHz channel cannot determine which channelization was used, so the fields are simply set to the same value.
Spatial reuse field 3 can be set equal to spatial reuse field 1, and spatial reuse field 4 equal to spatial reuse field 2.
iii) 80 MHz:
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the second lowest 20 MHz subchannel.
Spatial reuse field 3: This may generally mean a spatial reuse value of the second highest 20 MHz subchannel.
Spatial reuse field 4: This may generally mean a spatial reuse value of the highest 20 MHz subchannel.
iv) 160 MHz:
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 40 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the second lowest 40 MHz subchannel.
Spatial reuse field 3: This may generally mean a spatial reuse value of the second highest 40 MHz subchannel.
Spatial reuse field 4: This may generally mean a spatial reuse value of the highest 40 MHz subchannel.
4 Spatial Reuse Fields in HE-SIG-A of HE TB (Sub-)PPDU
The four Spatial Reuse fields in the trigger frame above are copied as they are.
Appendix 2
4 Spatial Reuse Fields in Common Info Field of Trigger Frame
i) 20 MHz: The four spatial reuse fields may have the same spatial reuse value and may mean a spatial reuse value corresponding to a 20 MHz channel. Alternatively, spatial reuse fields 3 and 4 may be reserved.
ii) 40 MHz:
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the second lowest 20 MHz subchannel. Also, when a 2.4 GHz band TB PPDU is transmitted, it may be set to the same value as Spatial reuse field 1. The reason is that since 40 MHz channelization overlaps in the 2.4 GHz band, an OBSS STA that decoded the TB PPDU in a specific 20 MHz channel cannot determine which channelization was used, so the fields are simply set to the same value.
Spatial reuse field 3 can be set equal to spatial reuse field 1, and spatial reuse field 4 equal to spatial reuse field 2. Alternatively, spatial reuse fields 3 and 4 may be reserved.
iii) 80 MHz:
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 40 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest 40 MHz subchannel.
Spatial reuse field 3 can be set equal to spatial reuse field 1, and spatial reuse field 4 equal to spatial reuse field 2. Alternatively, spatial reuse fields 3 and 4 may be reserved.
or
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the second lowest 20 MHz subchannel.
Spatial reuse field 3: This may generally mean a spatial reuse value of the second highest 20 MHz subchannel.
Spatial reuse field 4: This may generally mean a spatial reuse value of the highest 20 MHz subchannel.
iv) 160 MHz:
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 80 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest 80 MHz subchannel.
Spatial reuse field 3 can be set equal to spatial reuse field 1, and spatial reuse field 4 equal to spatial reuse field 2. Alternatively, spatial reuse fields 3 and 4 may be reserved.
or
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 40 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the second lowest 40 MHz subchannel.
Spatial reuse field 3: This may generally mean a spatial reuse value of the second highest 40 MHz subchannel.
Spatial reuse field 4: This may generally mean a spatial reuse value of the highest 40 MHz subchannel.
v) 320 MHz:
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 160 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest 160 MHz subchannel.
Spatial reuse field 3 can be set equal to spatial reuse field 1, and spatial reuse field 4 equal to spatial reuse field 2. Alternatively, spatial reuse fields 3 and 4 may be reserved.
or
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 80 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the second lowest 80 MHz subchannel.
Spatial reuse field 3: This may generally mean a spatial reuse value of the second highest 80 MHz subchannel.
Spatial reuse field 4: This may generally mean a spatial reuse value of the highest 80 MHz subchannel.
2 Spatial Reuse Fields in U-SIG of EHT TB (Sub-)PPDU
i) 20 MHz: The two spatial reuse fields may be configured by copying spatial reuse fields 1 and 2 of the trigger frame as they are.
That is, they may have the same spatial reuse value, meaning a spatial reuse value corresponding to a 20 MHz channel.
ii) 40 MHz: The two spatial reuse fields may be configured by copying spatial reuse fields 1 and 2 of the trigger frame as they are. That is, it may be as follows.
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest 20 MHz subchannel. In addition, when the TB PPDU is transmitted in the 2.4 GHz band, it may be set to the same value as Spatial reuse field 1. The reason is that since 40 MHz channelization overlaps in the 2.4 GHz band, an OBSS STA that decoded the TB PPDU in a specific 20 MHz channel cannot determine which channelization was used, so the fields are simply set to the same value.
iii) 80 MHz: The two spatial reuse fields may be configured by copying spatial reuse fields 1 and 2 of the trigger frame as they are. That is, it may be as follows.
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 40 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest 40 MHz subchannel.
or
The two spatial reuse fields can be configured by copying spatial reuse fields 1 and 3 of the trigger frame as they are, or by copying fields 2 and 4 as they are. Alternatively, one of the two candidate values may be selected and copied into each field as shown below; the selection criterion may be the larger or the smaller value.
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest or second lowest 20 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest or second highest 20 MHz subchannel.
or
The two spatial reuse fields can be defined differently for each 40 MHz (that is, the U-SIG configuration can be different for each 40 MHz). At the low 40 MHz, spatial reuse fields 1 and 2 of the trigger frame can be copied as they are, and at the high 40 MHz, spatial reuse fields 3 and 4 of the trigger frame can be copied as they are. That is, it may be as follows.
Spatial reuse field 1 at the low 40 MHz: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel.
Spatial reuse field 2 at the low 40 MHz: This may generally mean a spatial reuse value of the second lowest 20 MHz subchannel.
Spatial reuse field 1 at the high 40 MHz: This may generally mean a spatial reuse value of the second highest 20 MHz subchannel.
Spatial reuse field 2 at the high 40 MHz: This may generally mean a spatial reuse value of the highest 20 MHz subchannel.
iv) 160 MHz: The two spatial reuse fields may be configured by copying spatial reuse fields 1 and 2 of the trigger frame as they are. That is, it may be as follows.
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 80 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest 80 MHz subchannel.
or
The two spatial reuse fields can be configured by copying spatial reuse fields 1 and 3 of the trigger frame as they are, or by copying fields 2 and 4 as they are. Alternatively, one of the two candidate values may be selected and copied into each field as shown below; the selection criterion may be the larger or the smaller value.
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest or second lowest 40 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest or second highest 40 MHz subchannel.
or
The two spatial reuse fields can be defined differently for each 80 MHz (that is, the U-SIG configuration can be different for each 80 MHz). At the low 80 MHz, spatial reuse fields 1 and 2 of the trigger frame can be copied as they are, and at the high 80 MHz, spatial reuse fields 3 and 4 of the trigger frame can be copied as they are. That is, it may be as follows.
Spatial reuse field 1 at the low 80 MHz: This may generally mean a spatial reuse value of the lowest 40 MHz subchannel.
Spatial reuse field 2 at the low 80 MHz: This may generally mean a spatial reuse value of the second lowest 40 MHz subchannel.
Spatial reuse field 1 at the high 80 MHz: This may generally mean a spatial reuse value of the second highest 40 MHz subchannel.
Spatial reuse field 2 at the high 80 MHz: This may generally mean a spatial reuse value of the highest 40 MHz subchannel.
v) 320 MHz: The two spatial reuse fields may be configured by copying spatial reuse fields 1 and 2 of the trigger frame as they are. That is, it may be as follows.
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 160 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest 160 MHz subchannel.
or
The two spatial reuse fields can be configured by copying spatial reuse fields 1 and 3 of the trigger frame as they are, or by copying fields 2 and 4 as they are. Alternatively, one of the two candidate values may be selected and copied into each field as shown below; the selection criterion may be the larger or the smaller value.
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest or second lowest 80 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest or second highest 80 MHz subchannel.
or
The two spatial reuse fields can be defined differently for each 160 MHz (that is, the U-SIG configuration can be different for each 160 MHz). At the low 160 MHz, spatial reuse fields 1 and 2 of the trigger frame can be copied as they are, and at the high 160 MHz, spatial reuse fields 3 and 4 of the trigger frame can be copied as they are. That is, it may be as follows.
Spatial reuse field 1 at the low 160 MHz: This may generally mean a spatial reuse value of the lowest 80 MHz subchannel.
Spatial reuse field 2 at the low 160 MHz: This may generally mean a spatial reuse value of the second lowest 80 MHz subchannel.
Spatial reuse field 1 at the high 160 MHz: This may generally mean a spatial reuse value of the second highest 80 MHz subchannel.
Spatial reuse field 2 at the high 160 MHz: This may generally mean a spatial reuse value of the highest 80 MHz subchannel.
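A compact sketch of the Appendix 2 options for deriving the two U-SIG Spatial Reuse fields from the four trigger-frame fields; the option names and helpers are illustrative, not specification terms.

# sr: the four trigger-frame Spatial Reuse values, ordered field 1..4.
# "select" picks per field among the two candidates (larger or smaller,
# per the stated selection criterion); values are plain integers here.

def usig_sr_fields(sr, option="copy12", pick=min):
    if option == "copy12":           # copy trigger fields 1 and 2 as they are
        return [sr[0], sr[1]]
    if option == "copy13":           # copy trigger fields 1 and 3 as they are
        return [sr[0], sr[2]]
    if option == "copy24":           # copy trigger fields 2 and 4 as they are
        return [sr[1], sr[3]]
    if option == "select":           # field 1 from (1,2), field 2 from (3,4)
        return [pick(sr[0], sr[1]), pick(sr[2], sr[3])]
    raise ValueError(option)

def usig_sr_fields_per_half(sr, low_half):
    # per-half option: fields 1/2 at the low half, fields 3/4 at the high half
    return [sr[0], sr[1]] if low_half else [sr[2], sr[3]]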
Appendix 3
2 Spatial Reuse Fields in EHT Common Info Field of Trigger Frame
i) 20 MHz: The two spatial reuse fields may have the same spatial reuse value and may mean a spatial reuse value corresponding to a 20 MHz channel.
ii) 40 MHz:
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest 20 MHz subchannel. In addition, when the TB PPDU is transmitted in the 2.4 GHz band, it may be set to the same value as Spatial reuse field 1. The reason is that since 40 MHz channelization overlaps in the 2.4 GHz band, an OBSS STA that decoded the TB PPDU in a specific 20 MHz channel cannot determine which channelization was used, so the fields are simply set to the same value.
iii) 80 MHz:
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 40 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest 40 MHz subchannel.
iv) 160 MHz:
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 80 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest 80 MHz subchannel.
v) 320 MHz:
Spatial reuse field 1: This may generally mean a spatial reuse value of the lowest 160 MHz subchannel.
Spatial reuse field 2: This may generally mean a spatial reuse value of the highest 160 MHz subchannel.
2 Spatial Reuse Fields in U-SIG of EHT TB (Sub-)PPDU
The two Spatial Reuse fields in the trigger frame above are copied as they are.

FIG. 22 is a process flow diagram illustrating the operation of the transmission device according to the present embodiment. The example of FIG. 22 may be performed by a transmitting STA or a transmitting device (AP and/or non-AP STA). Some of the steps (or detailed sub-steps described later) in the example of FIG. 22 may be omitted or changed. Through step S2210, the transmitting device (transmitting STA) may obtain information about the above-described tone plan. As described above, the information about the tone plan includes the size and location of the RU, control information related to the RU, information about a frequency band including the RU, information about an STA receiving the RU, and the like. Through step S2220, the transmitting device may configure/generate a PPDU based on the acquired control information. The step of configuring/generating the PPDU may include a step of configuring/generating each field of the PPDU. That is, step S2220 includes a step of configuring the EHT-SIG field including control information about the tone plan. That is, step S2220 may include a step of configuring a field including control information (e.g., N bitmaps) indicating the size/position of the RU and/or a step of configuring a field including an identifier of an STA (e.g., AID) receiving the RU. Also, step S2220 may include a step of generating an STF/LTF sequence transmitted through a specific RU. The STF/LTF sequence may be generated based on a preset STF generation sequence/LTF generation sequence. Also, step S2220 may include a step of generating a data field (i.e., MPDU) transmitted through a specific RU. The transmitting device may transmit the PPDU constructed through step S2220 to the receiving device in step S2230. While performing step S2230, the transmitting device may perform at least one of operations such as CSD, Spatial Mapping, IDFT/IFFT operation, and GI insertion. A signal/field/sequence constructed according to the present specification may be transmitted in the form of FIG. 10.

FIG. 23 is a process flow diagram illustrating the operation of the receiving device according to the present embodiment. A PPDU transmitted according to the example of FIG. 22 may be received as follows. The example of FIG. 23 may be performed by a receiving STA or a receiving device (AP and/or non-AP STA). Some of the steps (or detailed sub-steps described later) in the example of FIG. 23 may be omitted. The receiving device (receiving STA) may receive all or part of the PPDU through step S2310. The received signal may be in the form of FIG. 10. The sub-steps of step S2310 may be determined based on step S2230 of FIG. 22. That is, in step S2310, an operation of restoring the result of the CSD, Spatial Mapping, IDFT/IFFT operation, and GI insertion operations applied in step S2230 may be performed.
In step S2320, the receiving device may perform decoding on all or part of the PPDU. Also, the receiving device may obtain control information related to a tone plan (i.e., RU) from the decoded PPDU. More specifically, the receiving device may decode the L-SIG and EHT-SIG of the PPDU based on the legacy STF/LTF and obtain the information included in the L-SIG and EHT-SIG fields. Information on the various tone plans (i.e., RUs) described in this specification may be included in the EHT-SIG, and the receiving STA may obtain the information on the tone plan (i.e., RU) through the EHT-SIG. In step S2330, the receiving device may decode the remaining part of the PPDU based on the information about the tone plan (i.e., RU) acquired through step S2320. For example, the receiving STA may decode the STF/LTF field of the PPDU based on the information about the tone plan (i.e., RU). In addition, the receiving STA may decode the data field of the PPDU based on the information about the tone plan (i.e., RU) and obtain the MPDU included in the data field. In addition, the receiving device may perform a processing operation of transferring the data decoded through step S2330 to a higher layer (e.g., the MAC layer). In addition, when generation of a signal is instructed from the higher layer to the PHY layer in response to the data transferred to the higher layer, a subsequent operation may be performed.

Hereinafter, the above-described embodiment will be described with reference to FIG. 1 to FIG. 23.

FIG. 24 is a flowchart illustrating a procedure for configuring a trigger frame and a TB PPDU supporting spatial reuse by an AP according to the present embodiment. The example of FIG. 24 may be performed in a network environment in which a next generation WLAN system (IEEE 802.11be or EHT WLAN system) is supported. The next generation wireless LAN system is a WLAN system that is enhanced from the 802.11ax system and may, therefore, satisfy backward compatibility with the 802.11ax system. The example of FIG. 24 is performed by a transmitting STA, and the transmitting STA may correspond to an access point (AP). A receiving STA of FIG. 24 may correspond to a non-AP STA. This embodiment proposes a method for configuring a trigger frame and a TB PPDU that simultaneously support spatial reuse in an 802.11ax (or HE) WLAN system and an 802.11be (or EHT) WLAN system. In step S2410, a transmitting station (STA) transmits a trigger frame to a receiving STA. In step S2420, the transmitting STA receives a Trigger Based Physical Protocol Data Unit (TB PPDU) from the receiving STA through a preset frequency band. The trigger frame includes a common information field and a special user information field. The common information field includes first to fourth spatial reuse fields. The special user information field includes fifth and sixth spatial reuse fields. This embodiment assumes a situation in which the trigger frame triggers an EHT TB PPDU. The common information field is an EHT variant Common Info field and includes four spatial reuse fields (HSR1, HSR2, HSR3, and HSR4). The four spatial reuse fields HSR1, HSR2, HSR3, and HSR4 are defined for spatial reuse of the OBSS HE STA. The special user information field is included in the trigger frame with an association identifier (AID) of 2007 and includes two spatial reuse fields (ESR1 and ESR2). The two spatial reuse fields (ESR1 and ESR2) are defined for spatial reuse of the OBSS EHT STA.
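For orientation, the structure just described can be mirrored in a small container; the class and attribute names below are illustrative, not field names from the specification.

from dataclasses import dataclass, field
from typing import List

@dataclass
class EhtVariantCommonInfo:
    hsr: List[int] = field(default_factory=lambda: [0, 0, 0, 0])  # HSR1..HSR4
    ul_bw: int = 0                  # first bandwidth field (UL BW, 2 bits)

@dataclass
class SpecialUserInfo:
    aid: int = 2007                 # AID identifying the Special User Info field
    esr1: int = 0                   # ESR1, for OBSS EHT STA spatial reuse
    esr2: int = 0                   # ESR2
    ul_bw_extension: int = 0        # second bandwidth field (2 bits)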
When the preset frequency band is a 20 MHz band, the first to fourth spatial reuse fields are set to the value of the fifth spatial reuse field (HSR1=HSR2=HSR3=HSR4=ESR1). The OBSS HE STA may determine that the trigger frame triggers a 20 MHz HE TB PPDU. When the preset frequency band is a 40 MHz band, the first and third spatial reuse fields are set to the value of the fifth spatial reuse field, and the second and fourth spatial reuse fields are set to the value of the sixth spatial reuse field (HSR1=HSR3=ESR1, HSR2=HSR4=ESR2). The OBSS HE STA may determine that the trigger frame triggers a 40 MHz HE TB PPDU. When the preset frequency band is an 80 MHz band, the first and second spatial reuse fields are set to the value of the fifth spatial reuse field, and the third and fourth spatial reuse fields are set to the value of the sixth spatial reuse field (HSR1=HSR2=ESR1, HSR3=HSR4=ESR2). The OBSS HE STA may determine that the trigger frame triggers an 80 MHz HE TB PPDU. When the preset frequency band is a 160 MHz band, the first and second spatial reuse fields are set to the value of the fifth spatial reuse field, and the third and fourth spatial reuse fields are set to the value of the sixth spatial reuse field (HSR1=HSR2=ESR1, HSR3=HSR4=ESR2). The OBSS HE STA may determine that the trigger frame triggers a 160 MHz HE TB PPDU. When the preset frequency band is a 320 MHz band, the first to fourth spatial reuse fields are set to the smaller of the values of the fifth and sixth spatial reuse fields (HSR1=HSR2=HSR3=HSR4=min(ESR1, ESR2)). The OBSS HE STA may determine that the trigger frame triggers a 160 MHz HE TB PPDU. Since the OBSS HE STA can operate on either of the two 160 MHz channels through which the EHT TB PPDU is transmitted, the HSR value must be a value that can represent both 160 MHz channels. Setting the HSR value to that of the weaker channel is preferable because it reduces interference by lowering the transmit power of the OBSS STA. That is, this embodiment proposes a method in which the four spatial reuse fields (HSR1, HSR2, HSR3, HSR4) in the common information field (the EHT variant Common Info field) are set, for each frequency band, based on the two spatial reuse fields (ESR1, ESR2) in the Special User Info field. The band (or channel) through which the trigger frame is transmitted is the same as the band (or channel) through which the TB PPDU is transmitted. When the preset frequency band is the 20 MHz band, the values of the first to fourth spatial reuse fields may be spatial reuse values for the 20 MHz band. That is, the first to fourth spatial reuse fields may include the same spatial reuse value for the 20 MHz band. The spatial reuse value for the 20 MHz band may be a value used to calculate the transmit power accessible by the OBSS HE STA for the 20 MHz band. When the preset frequency band is the 40 MHz band, the values of the first and third spatial reuse fields may be spatial reuse values for a first 20 MHz subchannel having the low frequency in the 40 MHz band, and the values of the second and fourth spatial reuse fields may be spatial reuse values for a second 20 MHz subchannel having the high frequency in the 40 MHz band. When the TB PPDU is transmitted in a 2.4 GHz band, the spatial reuse value for the second 20 MHz subchannel may be set equal to the spatial reuse value for the first 20 MHz subchannel. The spatial reuse value for the first 20 MHz subchannel may be a value used to calculate the transmit power accessible by an Overlapping Basic Service Set (OBSS) High Efficiency (HE) STA for the first 20 MHz subchannel. The spatial reuse value for the second 20 MHz subchannel may be a value used to calculate the transmit power accessible by the OBSS HE STA for the second 20 MHz subchannel.
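The per-band rule above can be condensed into a small helper; this is a sketch with plain integer comparison (the special values 0 and 15 would need separate handling), not specification pseudocode.

def hsr_fields(esr1, esr2, bw_mhz):
    """Derive HSR1..HSR4 in the Common Info field from ESR1/ESR2."""
    if bw_mhz == 20:
        return [esr1] * 4                # HSR1=HSR2=HSR3=HSR4=ESR1
    if bw_mhz == 40:
        return [esr1, esr2, esr1, esr2]  # HSR1=HSR3=ESR1, HSR2=HSR4=ESR2
    if bw_mhz in (80, 160):
        return [esr1, esr1, esr2, esr2]  # HSR1=HSR2=ESR1, HSR3=HSR4=ESR2
    if bw_mhz == 320:
        return [min(esr1, esr2)] * 4     # the weaker channel represents both
    raise ValueError(bw_mhz)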
If the preset frequency band is the 80 MHz band, the OBSS HE STA interprets the value of the first spatial reuse field as a spatial reuse value for a first 20 MHz subchannel having the lowest frequency in the 80 MHz band, interprets the value of the second spatial reuse field as a spatial reuse value for a second 20 MHz subchannel having the second lowest frequency in the 80 MHz band, interprets the value of the third spatial reuse field as a spatial reuse value for a third 20 MHz subchannel having the second highest frequency in the 80 MHz band, and interprets the value of the fourth spatial reuse field as a spatial reuse value for a fourth 20 MHz subchannel having the highest frequency in the 80 MHz band. However, the AP sets the first and second spatial reuse fields to the value of a spatial reuse field representing a first 40 MHz subchannel having the low frequency in the 80 MHz band, and sets the third and fourth spatial reuse fields to the value of a spatial reuse field representing a second 40 MHz subchannel having the high frequency in the 80 MHz band. The spatial reuse value for the first 20 MHz subchannel may be a value used to calculate the transmit power accessible by an Overlapping Basic Service Set (OBSS) High Efficiency (HE) STA for the first 20 MHz subchannel. The spatial reuse value for the second 20 MHz subchannel may be a value used to calculate the transmit power accessible by the OBSS HE STA for the second 20 MHz subchannel. The spatial reuse value for the third 20 MHz subchannel may be a value used to calculate the transmit power accessible by the OBSS HE STA for the third 20 MHz subchannel. The spatial reuse value for the fourth 20 MHz subchannel may be a value used to calculate the transmit power accessible by the OBSS HE STA for the fourth 20 MHz subchannel. When the preset frequency band is the 160 MHz band, the OBSS HE STA interprets the value of the first spatial reuse field as a spatial reuse value for a first 40 MHz subchannel having the lowest frequency in the 160 MHz band, interprets the value of the second spatial reuse field as a spatial reuse value for a second 40 MHz subchannel having the second lowest frequency in the 160 MHz band, interprets the value of the third spatial reuse field as a spatial reuse value for a third 40 MHz subchannel having the second highest frequency in the 160 MHz band, and interprets the value of the fourth spatial reuse field as a spatial reuse value for a fourth 40 MHz subchannel having the highest frequency in the 160 MHz band. However, the AP sets the first and second spatial reuse fields to the value of a spatial reuse field representing a first 80 MHz subchannel having the low frequency in the 160 MHz band, and sets the third and fourth spatial reuse fields to the value of a spatial reuse field representing a second 80 MHz subchannel having the high frequency in the 160 MHz band. The spatial reuse value for the first 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the first 40 MHz subchannel.
The spatial reuse value for the second 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the second 40 MHz subchannel. The spatial reuse value for the third 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the third 40 MHz subchannel. The spatial reuse value for the fourth 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the fourth 40 MHz subchannel. When the preset frequency band is the 320 MHz band, since the OBSS HE STA can only decode the first bandwidth field (the 2-bit UL BW subfield) described later (it cannot interpret the second bandwidth field, the 2-bit UL Bandwidth Extension subfield), it may interpret the preset frequency band as a 160 MHz band. Accordingly, the OBSS HE STA interprets the value of the first spatial reuse field as a spatial reuse value for a first 40 MHz subchannel having the lowest frequency in the 160 MHz band (where it is located), interprets the value of the second spatial reuse field as a spatial reuse value for a second 40 MHz subchannel having the second lowest frequency in the 160 MHz band, interprets the value of the third spatial reuse field as a spatial reuse value for a third 40 MHz subchannel having the second highest frequency in the 160 MHz band, and interprets the value of the fourth spatial reuse field as a spatial reuse value for a fourth 40 MHz subchannel having the highest frequency in the 160 MHz band. However, the AP sets the first spatial reuse field to the value of a spatial reuse field representing the first 40 MHz subchannel having the lowest frequency within each 160 MHz channel of the 320 MHz band, sets the second spatial reuse field to the value of a spatial reuse field representing a second 40 MHz subchannel having the second lowest frequency within each 160 MHz channel of the 320 MHz band, sets the third spatial reuse field to the value of a spatial reuse field representing a third 40 MHz subchannel having the second highest frequency within each 160 MHz channel of the 320 MHz band, and sets the fourth spatial reuse field to the value of a spatial reuse field representing a fourth 40 MHz subchannel having the highest frequency within each 160 MHz channel of the 320 MHz band. The spatial reuse value for the first 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the first 40 MHz subchannel. The spatial reuse value for the second 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the second 40 MHz subchannel. The spatial reuse value for the third 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the third 40 MHz subchannel. The spatial reuse value for the fourth 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the fourth 40 MHz subchannel. The common information field may include a first bandwidth field, and the special user information field includes a second bandwidth field. A bandwidth of the preset frequency band may be set based on the first and second bandwidth fields. For example, when the first bandwidth field is set to 0 and the second bandwidth field is set to 0, the preset frequency band may be 20 MHz. When the first bandwidth field is set to 1 and the second bandwidth field is set to 0, the preset frequency band may be 40 MHz. When the first bandwidth field is set to 2 and the second bandwidth field is set to 0, the preset frequency band may be 80 MHz. When the first bandwidth field is set to 3 and the second bandwidth field is set to 1, the preset frequency band may be 160 MHz. When the first bandwidth field is set to 3 and the second bandwidth field is set to 2, the preset frequency band may be 320-1 MHz. When the first bandwidth field is set to 3 and the second bandwidth field is set to 3, the preset frequency band may be 320-2 MHz.
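The bandwidth signaling just enumerated amounts to a small lookup table; a sketch with illustrative names ("320-1"/"320-2" denote the two 320 MHz channelizations):

# (first bandwidth field, second bandwidth field) -> preset frequency band
BW_TABLE = {
    (0, 0): "20 MHz",
    (1, 0): "40 MHz",
    (2, 0): "80 MHz",
    (3, 1): "160 MHz",
    (3, 2): "320-1 MHz",
    (3, 3): "320-2 MHz",
}

def preset_band(ul_bw, ul_bw_extension):
    return BW_TABLE[(ul_bw, ul_bw_extension)]

print(preset_band(3, 2))  # 320-1 MHz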
It is assumed that the TB PPDU is an EHT TB PPDU. The first bandwidth field is a field indicating the bandwidth of the HE TB PPDU. By using the first and second bandwidth fields together, the bandwidth of the EHT TB PPDU can also be indicated. The TB PPDU may include a Universal-Signal (U-SIG) field. The U-SIG field may include seventh and eighth spatial reuse fields. The seventh spatial reuse field may be configured by duplicating the fifth spatial reuse field. The eighth spatial reuse field may be configured by duplicating the sixth spatial reuse field. The values of the seventh and eighth spatial reuse fields may be values normalized for each 20 MHz subchannel. Since the seventh spatial reuse field duplicates the fifth spatial reuse field and the eighth spatial reuse field duplicates the sixth spatial reuse field, the values of the fifth and sixth spatial reuse fields may also be values normalized for each 20 MHz subchannel. Accordingly, the values of the first to fourth spatial reuse fields may also be values normalized for each 20 MHz subchannel. For example, when the preset frequency band is an 80 MHz band, the fifth (or seventh) spatial reuse field may be applied to each 20 MHz subchannel of the first 40 MHz subband in the 80 MHz band, and the sixth (or eighth) spatial reuse field may be applied to each 20 MHz subchannel of the second 40 MHz subband in the 80 MHz band. When the preset frequency band is a 160 MHz band, the fifth (or seventh) spatial reuse field may be applied to each 20 MHz subchannel of the first 80 MHz subband in the 160 MHz band, and the sixth (or eighth) spatial reuse field may be applied to each 20 MHz subchannel of the second 80 MHz subband in the 160 MHz band. When the preset frequency band is a 320-1 MHz or 320-2 MHz band, the fifth (or seventh) spatial reuse field may be applied to each 20 MHz subchannel of the first 160 MHz subband in the 320-1 MHz or 320-2 MHz band, and the sixth (or eighth) spatial reuse field may be applied to each 20 MHz subchannel of the second 160 MHz subband in the 320-1 MHz or 320-2 MHz band. The first to eighth spatial reuse fields each consist of 4 bits and may use the same values as those defined in the 802.11ax wireless LAN system (see Table 3). According to this embodiment, the transmitting STA informs the OBSS STA, through a spatial reuse value, of the interference power that is allowable in a specific band (or specific channel); the OBSS STA derives its transmit power using the interference power value and the value of the AP TX Power subfield, and transmits a signal by performing spatial reuse in the specific band (or specific channel). Since the OBSS STA performs spatial reuse, the transmitting STA may not receive interference from the OBSS STA when receiving the TB PPDU. That is, the present embodiment has the effect of improving throughput and efficiency by enabling spatial reuse by the OBSS STA and stably using transmission resources in a specific band without collision.
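As a hedged illustration of the power derivation summarized above (following the usual 802.11ax PSR framework; the exact encodings live in Table 3, and the names here are illustrative):

# The signaled spatial reuse value conveys, in effect, the AP's transmit
# power plus the interference level it can accept (a PSR value in dBm).
# An OBSS STA that received the trigger at rx_power_dbm may bound its own
# transmit power so the interference it causes at the AP stays acceptable.

def max_obss_tx_power_dbm(psr_dbm, rx_power_dbm):
    return psr_dbm - rx_power_dbm

print(max_obss_tx_power_dbm(psr_dbm=-36.0, rx_power_dbm=-62.0))  # 26.0 dBm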
The trigger frame is divided into a HE variant case and an EHT variant case, and the common information field and the user information field may be configured differently (see FIGS. 16 and 17 for the common information field, and FIG. 20 for the user information field). The TB PPDU may be an EHT TB PPDU. The EHT TB PPDU may include a Legacy-Short Training Field (L-STF), a Legacy-Long Training Field (L-LTF), a Legacy-Signal (L-SIG), a Repeated L-SIG (RL-SIG), a Universal-Signal (U-SIG), an EHT-STF and EHT-LTFs, and a data field. That is, the EHT TB PPDU is defined in a format excluding the EHT-SIG from the EHT MU PPDU. Also, the TB PPDU may be a Trigger Based Aggregated Physical Protocol Data Unit (TB A-PPDU) in which a High Efficiency (HE) TB PPDU and an Extremely High Throughput (EHT) TB PPDU are aggregated.

FIG. 25 is a flowchart illustrating a procedure for configuring a trigger frame and a TB PPDU supporting spatial reuse by an STA according to the present embodiment. The example of FIG. 25 may be performed in a network environment in which a next generation WLAN system (IEEE 802.11be or EHT WLAN system) is supported. The next generation wireless LAN system is a WLAN system that is enhanced from the 802.11ax system and may, therefore, satisfy backward compatibility with the 802.11ax system. The example of FIG. 25 may be performed by a receiving STA, and the receiving STA may correspond to a non-AP STA. A transmitting STA of FIG. 25 may correspond to an access point (AP). This embodiment proposes a method for configuring a trigger frame and a TB PPDU that simultaneously support spatial reuse in an 802.11ax (or HE) WLAN system and an 802.11be (or EHT) WLAN system. In step S2510, a receiving station (STA) receives a trigger frame from a transmitting STA. In step S2520, the receiving STA transmits a Trigger Based Physical Protocol Data Unit (TB PPDU) to the transmitting STA through a preset frequency band. The trigger frame includes a common information field and a special user information field. The common information field includes first to fourth spatial reuse fields. The special user information field includes fifth and sixth spatial reuse fields. This embodiment assumes a situation in which the trigger frame triggers an EHT TB PPDU. The common information field is an EHT variant Common Info field and includes four spatial reuse fields (HSR1, HSR2, HSR3, and HSR4). The four spatial reuse fields HSR1, HSR2, HSR3, and HSR4 are defined for spatial reuse of the OBSS HE STA. The special user information field is included in the trigger frame with an association identifier (AID) of 2007 and includes two spatial reuse fields (ESR1 and ESR2). The two spatial reuse fields (ESR1 and ESR2) are defined for spatial reuse of the OBSS EHT STA. When the preset frequency band is a 20 MHz band, the first to fourth spatial reuse fields are set to the value of the fifth spatial reuse field (HSR1=HSR2=HSR3=HSR4=ESR1). The OBSS HE STA may determine that the trigger frame triggers a 20 MHz HE TB PPDU. When the preset frequency band is a 40 MHz band, the first and third spatial reuse fields are set to the value of the fifth spatial reuse field, and the second and fourth spatial reuse fields are set to the value of the sixth spatial reuse field (HSR1=HSR3=ESR1, HSR2=HSR4=ESR2). The OBSS HE STA may determine that the trigger frame triggers a 40 MHz HE TB PPDU.
When the preset frequency band is an 80 MHz band, the first and second spatial reuse fields are set to the value of the fifth spatial reuse field, and the third and fourth spatial reuse fields are set to the value of the sixth spatial reuse field (HSR1=HSR2=ESR1, HSR3=HSR4=ESR2). The OBSS HE STA may determine that the trigger frame triggers an 80 MHz HE TB PPDU. When the preset frequency band is a 160 MHz band, the first and second spatial reuse fields are set to the value of the fifth spatial reuse field, and the third and fourth spatial reuse fields are set to the value of the sixth spatial reuse field (HSR1=HSR2=ESR1, HSR3=HSR4=ESR2). The OBSS HE STA may determine that the trigger frame triggers a 160 MHz HE TB PPDU. When the preset frequency band is a 320 MHz band, the first to fourth spatial reuse fields are set to the smaller of the values of the fifth and sixth spatial reuse fields (HSR1=HSR2=HSR3=HSR4=min(ESR1, ESR2)). The OBSS HE STA may determine that the trigger frame triggers a 160 MHz HE TB PPDU. Since the OBSS HE STA can operate on either of the two 160 MHz channels through which the EHT TB PPDU is transmitted, the HSR value must be a value that can represent both 160 MHz channels. Setting the HSR value to that of the weaker channel is preferable because it reduces interference by lowering the transmit power of the OBSS STA. That is, this embodiment proposes a method in which the four spatial reuse fields (HSR1, HSR2, HSR3, HSR4) in the common information field (the EHT variant Common Info field) are set, for each frequency band, based on the two spatial reuse fields (ESR1, ESR2) in the Special User Info field. The band (or channel) through which the trigger frame is transmitted is the same as the band (or channel) through which the TB PPDU is transmitted. When the preset frequency band is the 20 MHz band, the values of the first to fourth spatial reuse fields may be spatial reuse values for the 20 MHz band. That is, the first to fourth spatial reuse fields may include the same spatial reuse value for the 20 MHz band. The spatial reuse value for the 20 MHz band may be a value used to calculate the transmit power accessible by the OBSS HE STA for the 20 MHz band. When the preset frequency band is the 40 MHz band, the values of the first and third spatial reuse fields may be spatial reuse values for a first 20 MHz subchannel having the low frequency in the 40 MHz band, and the values of the second and fourth spatial reuse fields may be spatial reuse values for a second 20 MHz subchannel having the high frequency in the 40 MHz band. When the TB PPDU is transmitted in a 2.4 GHz band, the spatial reuse value for the second 20 MHz subchannel may be set equal to the spatial reuse value for the first 20 MHz subchannel. The spatial reuse value for the first 20 MHz subchannel may be a value used to calculate the transmit power accessible by an Overlapping Basic Service Set (OBSS) High Efficiency (HE) STA for the first 20 MHz subchannel. The spatial reuse value for the second 20 MHz subchannel may be a value used to calculate the transmit power accessible by the OBSS HE STA for the second 20 MHz subchannel.
If the preset frequency band is the 80 MHz band, the OBSS HE STA interprets the value of the first spatial reuse field as a spatial reuse value for a first 20 MHz subchannel having the lowest frequency in the 80 MHz band, interprets the value of the second spatial reuse field as a spatial reuse value for a second 20 MHz subchannel having the second lowest frequency in the 80 MHz band, interprets the value of the third spatial reuse field as a spatial reuse value for a third 20 MHz subchannel having the second highest frequency in the 80 MHz band, and interprets the value of the fourth spatial reuse field as a spatial reuse value for a fourth 20 MHz subchannel having the highest frequency in the 80 MHz band. However, the AP sets the first and second spatial reuse fields to values of a spatial reuse field representing a first 40 MHz subchannel having a low frequency in the 80 MHz band, and sets the third and fourth spatial reuse fields to values of a spatial reuse field representing a second 40 MHz subchannel having a high frequency in the 80 MHz band.

The spatial reuse value for the first 20 MHz subchannel may be a value used to calculate transmit power accessible by an Overlapping Basic Service Set (OBSS) High Efficiency (HE) STA for the first 20 MHz subchannel. The spatial reuse value for the second 20 MHz subchannel may be a value used to calculate transmit power accessible by the OBSS HE STA for the second 20 MHz subchannel. The spatial reuse value for the third 20 MHz subchannel may be a value used to calculate transmit power accessible by the OBSS HE STA for the third 20 MHz subchannel. The spatial reuse value for the fourth 20 MHz subchannel may be a value used to calculate transmit power accessible by the OBSS HE STA for the fourth 20 MHz subchannel.

When the preset frequency band is the 160 MHz band, the OBSS HE STA interprets the value of the first spatial reuse field as a spatial reuse value for a first 40 MHz subchannel having the lowest frequency in the 160 MHz band, interprets the value of the second spatial reuse field as a spatial reuse value for a second 40 MHz subchannel having the second lowest frequency in the 160 MHz band, interprets the value of the third spatial reuse field as a spatial reuse value for a third 40 MHz subchannel having the second highest frequency in the 160 MHz band, and interprets the value of the fourth spatial reuse field as a spatial reuse value for a fourth 40 MHz subchannel having the highest frequency in the 160 MHz band. However, the AP sets the first and second spatial reuse fields to values of a spatial reuse field representing a first 80 MHz subchannel having a low frequency in the 160 MHz band, and sets the third and fourth spatial reuse fields to values of a spatial reuse field representing a second 80 MHz subchannel having a high frequency in the 160 MHz band.

The spatial reuse value for the first 40 MHz subchannel may be a value used to calculate transmit power accessible by an OBSS HE STA for the first 40 MHz subchannel. The spatial reuse value for the second 40 MHz subchannel may be a value used to calculate transmit power accessible by the OBSS HE STA for the second 40 MHz subchannel. The spatial reuse value for the third 40 MHz subchannel may be a value used to calculate transmit power accessible by the OBSS HE STA for the third 40 MHz subchannel. The spatial reuse value for the fourth 40 MHz subchannel may be a value used to calculate transmit power accessible by the OBSS HE STA for the fourth 40 MHz subchannel.
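For reference, the OBSS HE STA interpretation described above can be summarized as follows. This is a minimal sketch under the assumptions of this embodiment (subchannel positions are given as MHz offsets from the lower band edge); it is not a normative decoding procedure.

```python
def hsr_subchannel_map(bandwidth_mhz):
    """Return, for HSR1..HSR4 in order, the (low, high) MHz offsets of the
    subchannel that an OBSS HE STA associates with each field."""
    if bandwidth_mhz == 20:
        # Every field refers to the single 20 MHz channel.
        return [(0, 20)] * 4
    if bandwidth_mhz == 40:
        # HSR1/HSR3 -> lower 20 MHz subchannel, HSR2/HSR4 -> upper one.
        return [(0, 20), (20, 40), (0, 20), (20, 40)]
    if bandwidth_mhz == 80:
        # One 20 MHz subchannel per field, lowest to highest frequency.
        return [(i * 20, (i + 1) * 20) for i in range(4)]
    if bandwidth_mhz == 160:
        # One 40 MHz subchannel per field, lowest to highest frequency.
        return [(i * 40, (i + 1) * 40) for i in range(4)]
    # A 320 MHz band is interpreted by an OBSS HE STA as 160 MHz (see below).
    raise ValueError("an OBSS HE STA does not interpret bandwidths above 160 MHz")
```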
When the preset frequency band is the 320 MHz band, the OBSS HE STA can only decode the first bandwidth field (the 2-bit UL BW subfield) described later and cannot interpret the second bandwidth field (the 2-bit UL Bandwidth Extension subfield); it may therefore interpret the preset frequency band as a 160 MHz band. Accordingly, the OBSS HE STA interprets the value of the first spatial reuse field as a spatial reuse value for a first 40 MHz subchannel having the lowest frequency in the 160 MHz band in which it is located, interprets the value of the second spatial reuse field as a spatial reuse value for a second 40 MHz subchannel having the second lowest frequency in the 160 MHz band, interprets the value of the third spatial reuse field as a spatial reuse value for a third 40 MHz subchannel having the second highest frequency in the 160 MHz band, and interprets the value of the fourth spatial reuse field as a spatial reuse value for a fourth 40 MHz subchannel having the highest frequency in the 160 MHz band. However, the AP sets the first spatial reuse field to a value of a spatial reuse field representing a first 40 MHz subchannel having the lowest frequency within each 160 MHz channel of the 320 MHz band, sets the second spatial reuse field to a value of a spatial reuse field representing a second 40 MHz subchannel having the second lowest frequency in each 160 MHz channel of the 320 MHz band, sets the third spatial reuse field to a value of a spatial reuse field representing a third 40 MHz subchannel having the second highest frequency within each 160 MHz channel of the 320 MHz band, and sets the fourth spatial reuse field to a value of a spatial reuse field representing a fourth 40 MHz subchannel having the highest frequency within each 160 MHz channel of the 320 MHz band.

The spatial reuse value for the first 40 MHz subchannel may be a value used to calculate transmit power accessible by an OBSS HE STA for the first 40 MHz subchannel. The spatial reuse value for the second 40 MHz subchannel may be a value used to calculate transmit power accessible by the OBSS HE STA for the second 40 MHz subchannel. The spatial reuse value for the third 40 MHz subchannel may be a value used to calculate transmit power accessible by the OBSS HE STA for the third 40 MHz subchannel. The spatial reuse value for the fourth 40 MHz subchannel may be a value used to calculate transmit power accessible by the OBSS HE STA for the fourth 40 MHz subchannel.

The common information field may include a first bandwidth field, and the special user information field includes a second bandwidth field. A bandwidth of the preset frequency band may be set based on the first and second bandwidth fields. For example, when the first bandwidth field is set to 0 and the second bandwidth field is set to 0, the preset frequency band may be 20 MHz. When the first bandwidth field is set to 1 and the second bandwidth field is set to 0, the preset frequency band may be 40 MHz. When the first bandwidth field is set to 2 and the second bandwidth field is set to 0, the preset frequency band may be 80 MHz. When the first bandwidth field is set to 3 and the second bandwidth field is set to 1, the preset frequency band may be 160 MHz. When the first bandwidth field is set to 3 and the second bandwidth field is set to 2, the preset frequency band may be 320 MHz-1. When the first bandwidth field is set to 3 and the second bandwidth field is set to 3, the preset frequency band may be 320 MHz-2. It is assumed that the TB PPDU is an EHT TB PPDU.
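The combined bandwidth signaling can be expressed as a simple lookup. In the sketch below, the two arguments stand for the 2-bit UL BW subfield (first bandwidth field) and the 2-bit UL Bandwidth Extension subfield (second bandwidth field); the table mirrors the value combinations listed above, and any other combination is treated as reserved. The helper name is hypothetical.

```python
# (first bandwidth field, second bandwidth field) -> TB PPDU bandwidth
BANDWIDTH_TABLE = {
    (0, 0): "20 MHz",
    (1, 0): "40 MHz",
    (2, 0): "80 MHz",
    (3, 1): "160 MHz",
    (3, 2): "320 MHz-1",
    (3, 3): "320 MHz-2",
}

def decode_bandwidth(ul_bw, ul_bw_ext):
    """Resolve the preset frequency band from the two bandwidth fields."""
    try:
        return BANDWIDTH_TABLE[(ul_bw, ul_bw_ext)]
    except KeyError:
        raise ValueError("reserved bandwidth combination") from None
```

Note that an OBSS HE STA decodes only the first argument, so when it reads the value 3 it assumes a 160 MHz band, which is exactly the fallback behavior described above for the 320 MHz case.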
The first bandwidth field is a field indicating the bandwidth of the HE TB PPDU. By using the first and second bandwidth fields together, the bandwidth of the EHT TB PPDU can also be indicated.

The TB PPDU may include a Universal-Signal (U-SIG) field. The U-SIG field may include seventh and eighth spatial reuse fields. The seventh spatial reuse field may be configured by duplicating the fifth spatial reuse field. The eighth spatial reuse field may be configured by duplicating the sixth spatial reuse field.

Values of the seventh and eighth spatial reuse fields may be normalized values for each 20 MHz subchannel. Since the seventh spatial reuse field duplicates the fifth spatial reuse field and the eighth spatial reuse field duplicates the sixth spatial reuse field, values of the fifth and sixth spatial reuse fields may also be normalized values for each 20 MHz subchannel. Accordingly, the values of the first to fourth spatial reuse fields may also be normalized values for each 20 MHz subchannel.

For example, when the preset frequency band is an 80 MHz band, the fifth (or seventh) spatial reuse field may be applied to each 20 MHz subchannel of a first 40 MHz subband in the 80 MHz band, and the sixth (or eighth) spatial reuse field may be applied to each 20 MHz subchannel of the second 40 MHz subband in the 80 MHz band. When the preset frequency band is a 160 MHz band, the fifth (or seventh) spatial reuse field may be applied to each 20 MHz subchannel of a first 80 MHz subband in the 160 MHz band, and the sixth (or eighth) spatial reuse field may be applied to each 20 MHz subchannel of the second 80 MHz subband in the 160 MHz band. When the preset frequency band is a 320 MHz-1 or 320 MHz-2 band, the fifth (or seventh) spatial reuse field may be applied to each 20 MHz subchannel of a first 160 MHz subband in the 320 MHz-1 or 320 MHz-2 band, and the sixth (or eighth) spatial reuse field may be applied to each 20 MHz subchannel of the second 160 MHz subband in the 320 MHz-1 or 320 MHz-2 band.

The first to eighth spatial reuse fields each consist of 4 bits and may use the same values as those defined in the 802.11ax wireless LAN system (see Table 3).

According to this embodiment, the transmitting STA informs the OBSS STA, through a spatial reuse value, of the interference power value that is allowable for a specific band (or specific channel); the OBSS STA derives its transmit power using the interference power value and the value of the AP TX Power subfield, and transmits a signal by performing spatial reuse in the specific band (or specific channel). Since the OBSS STA performs spatial reuse, the transmitting STA may not receive interference due to the OBSS STA when receiving the TB PPDU. That is, the present embodiment has the effect of improving throughput and efficiency by enabling spatial reuse of the OBSS STA and stably using transmission resources for a specific band without collision.
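The power derivation mentioned above can be sketched as follows. This is a simplified illustration in the style of HE parameterized spatial reuse, assuming the advertised spatial reuse value has already been converted to a dB-scale quantity of the form "AP TX power plus acceptable interference level"; the exact 4-bit field encoding (Table 3) and unit conversions are deliberately omitted.

```python
def allowed_obss_tx_power_dbm(ap_tx_power_dbm, acceptable_interference_dbm,
                              rpl_dbm):
    """Upper bound on the OBSS STA transmit power for one subchannel.
    The trigger frame advertises (AP TX power + acceptable interference);
    the OBSS STA subtracts the receive power level (RPL) at which it heard
    the triggering frame, so a distant OBSS STA may transmit louder while
    still keeping its interference at the AP below the acceptable level."""
    advertised_dbm = ap_tx_power_dbm + acceptable_interference_dbm
    return advertised_dbm - rpl_dbm
```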
4. Device Configuration

The technical features of the present disclosure may be applied to various devices and methods. For example, the technical features of the present disclosure may be performed/supported through the device(s) of FIG. 1 and/or FIG. 11. For example, the technical features of the present disclosure may be applied to only part of FIG. 1 and/or FIG. 11. For example, the technical features of the present disclosure may be implemented based on the processing chip(s) 114 and 124 of FIG. 1, or implemented based on the processor(s) 111 and 121 and the memory(s) 112 and 122 of FIG. 1, or implemented based on the processor 610 and the memory 620 of FIG. 11. For example, the device according to the present disclosure receives a trigger frame from a transmitting station (STA) and transmits a Trigger Based Physical Protocol Data Unit (TB PPDU) to the transmitting STA through a preset frequency band.

The technical features of the present disclosure may be implemented based on a computer readable medium (CRM). For example, a CRM according to the present disclosure is at least one computer readable medium including instructions designed to be executed by at least one processor. The CRM may store instructions that perform operations including receiving a trigger frame from a transmitting station (STA) and transmitting a Trigger Based Physical Protocol Data Unit (TB PPDU) to the transmitting STA through a preset frequency band. At least one processor may execute the instructions stored in the CRM according to the present disclosure. At least one processor related to the CRM of the present disclosure may be the processor 111, 121 of FIG. 1, the processing chip 114, 124 of FIG. 1, or the processor 610 of FIG. 11. Meanwhile, the CRM of the present disclosure may be the memory 112, 122 of FIG. 1, the memory 620 of FIG. 11, or a separate external memory/storage medium/disk.

The foregoing technical features of the present specification are applicable to various applications or business models. For example, the foregoing technical features may be applied for wireless communication of a device supporting artificial intelligence (AI).

Artificial intelligence refers to a field of study on artificial intelligence or on methodologies for creating artificial intelligence, and machine learning refers to a field of study on methodologies for defining and solving various issues in the area of artificial intelligence. Machine learning is also defined as an algorithm for improving the performance of an operation through steady experience with the operation.

An artificial neural network (ANN) is a model used in machine learning and may refer to an overall problem-solving model that includes artificial neurons (nodes) forming a network by combining synapses. The artificial neural network may be defined by a pattern of connections between neurons of different layers, a learning process of updating model parameters, and an activation function generating an output value. The artificial neural network may include an input layer, an output layer, and optionally one or more hidden layers. Each layer includes one or more neurons, and the artificial neural network may include synapses that connect neurons. In the artificial neural network, each neuron may output a function value of an activation function for input signals, weights, and biases input through a synapse.
A model parameter refers to a parameter determined through learning and includes the weight of a synapse connection and the bias of a neuron. A hyper-parameter refers to a parameter to be set before learning in a machine learning algorithm and includes a learning rate, the number of iterations, a mini-batch size, and an initialization function.

Training an artificial neural network may be intended to determine model parameters that minimize a loss function. The loss function may be used as an index for determining optimal model parameters in the process of training the artificial neural network.

Machine learning may be classified into supervised learning, unsupervised learning, and reinforcement learning.

Supervised learning refers to a method of training an artificial neural network with a label given for training data, wherein the label may indicate a correct answer (or result value) that the artificial neural network needs to infer when the training data is input to the artificial neural network. Unsupervised learning may refer to a method of training an artificial neural network without a label given for training data. Reinforcement learning may refer to a training method for training an agent defined in an environment to choose an action or a sequence of actions that maximizes a cumulative reward in each state.

Machine learning implemented with a deep neural network (DNN) including a plurality of hidden layers among artificial neural networks is referred to as deep learning, and deep learning is part of machine learning. Hereinafter, machine learning is construed as including deep learning.

The foregoing technical features may be applied to wireless communication of a robot.

Robots may refer to machinery that automatically processes or operates a given task by its own capability. In particular, a robot having a function of recognizing an environment and autonomously making a judgment to perform an operation may be referred to as an intelligent robot.

Robots may be classified into industrial, medical, household, and military robots and the like according to their uses or fields. A robot may include an actuator or a driver including a motor to perform various physical operations, such as moving a robot joint. In addition, a movable robot may include a wheel, a brake, a propeller, and the like in a driver to run on the ground or fly in the air through the driver.

The foregoing technical features may be applied to a device supporting extended reality.

Extended reality (XR) collectively refers to virtual reality (VR), augmented reality (AR), and mixed reality (MR). VR technology is a computer graphics technology of providing a real-world object and background only as a CG image, AR technology is a computer graphics technology of providing a virtual CG image on a real object image, and MR technology is a computer graphics technology of providing virtual objects mixed and combined with the real world.

MR technology is similar to AR technology in that a real object and a virtual object are displayed together. However, a virtual object is used as a supplement to a real object in AR technology, whereas a virtual object and a real object are used with equal status in MR technology.

XR technology may be applied to a head-mounted display (HMD), a head-up display (HUD), a mobile phone, a tablet PC, a laptop computer, a desktop computer, a TV, digital signage, and the like. A device to which XR technology is applied may be referred to as an XR device.
The claims recited in the present specification may be combined in a variety of ways. For example, the technical features of the method claims of the present specification may be combined to be implemented as a device, and the technical features of the device claims of the present specification may be combined to be implemented by a method. In addition, the technical characteristics of the method claim of the present specification and the technical characteristics of the device claim may be combined to be implemented as a device, and the technical characteristics of the method claim of the present specification and the technical characteristics of the device claim may be combined to be implemented by a method.
203,842
11943626
DETAILED DESCRIPTION

In the present specification, "A or B" may mean "only A", "only B", or "both A and B". In other words, in the present specification, "A or B" may be interpreted as "A and/or B". For example, in the present specification, "A, B, or C" may mean "only A", "only B", "only C", or "any combination of A, B, and C".

A slash (/) or comma used in the present specification may mean "and/or". For example, "A/B" may mean "A and/or B". Accordingly, "A/B" may mean "only A", "only B", or "both A and B". For example, "A, B, C" may mean "A, B, or C".

In the present specification, "at least one of A and B" may mean "only A", "only B", or "both A and B". In addition, in the present specification, the expression "at least one of A or B" or "at least one of A and/or B" may be interpreted as "at least one of A and B".

In addition, in the present specification, "at least one of A, B, and C" may mean "only A", "only B", "only C", or "any combination of A, B, and C". In addition, "at least one of A, B, or C" or "at least one of A, B, and/or C" may mean "at least one of A, B, and C".

In addition, a parenthesis used in the present specification may mean "for example". Specifically, when indicated as "control information (EHT-signal)", it may denote that "EHT-signal" is proposed as an example of the "control information". In other words, the "control information" of the present specification is not limited to "EHT-signal", and "EHT-signal" may be proposed as an example of the "control information". In addition, when indicated as "control information (i.e., EHT-signal)", it may also mean that "EHT-signal" is proposed as an example of the "control information".

Technical features described individually in one figure in the present specification may be implemented individually or simultaneously.

The following example of the present specification may be applied to various wireless communication systems. For example, the following example of the present specification may be applied to a wireless local area network (WLAN) system. For example, the present specification may be applied to the IEEE 802.11a/g/n/ac standard or the IEEE 802.11ax standard. In addition, the present specification may also be applied to the newly proposed EHT standard or IEEE 802.11be standard. In addition, the example of the present specification may also be applied to a new WLAN standard enhanced from the EHT standard or the IEEE 802.11be standard. In addition, the example of the present specification may be applied to a mobile communication system. For example, it may be applied to a mobile communication system based on long term evolution (LTE) depending on the 3rd generation partnership project (3GPP) standard and based on evolution of the LTE. In addition, the example of the present specification may be applied to a communication system of the 5G NR standard based on the 3GPP standard.

Hereinafter, in order to describe a technical feature of the present specification, a technical feature applicable to the present specification will be described.

FIG. 1 shows an example of a transmitting apparatus and/or receiving apparatus of the present specification. In the example of FIG. 1, various technical features described below may be performed. FIG. 1 relates to at least one station (STA).
For example, the STAs 110 and 120 of the present specification may also be called by various terms such as a mobile terminal, a wireless device, a wireless transmit/receive unit (WTRU), a user equipment (UE), a mobile station (MS), a mobile subscriber unit, or simply a user. The STAs 110 and 120 of the present specification may also be called by various terms such as a network, a base station, a node-B, an access point (AP), a repeater, a router, a relay, or the like. The STAs 110 and 120 of the present specification may also be referred to by various names such as a receiving apparatus, a transmitting apparatus, a receiving STA, a transmitting STA, a receiving device, a transmitting device, or the like.

For example, the STAs 110 and 120 may serve as an AP or a non-AP. That is, the STAs 110 and 120 of the present specification may serve as the AP and/or the non-AP.

The STAs 110 and 120 of the present specification may support various communication standards together in addition to the IEEE 802.11 standard. For example, a communication standard (e.g., LTE, LTE-A, 5G NR standard) or the like based on the 3GPP standard may be supported. In addition, the STA of the present specification may be implemented as various devices such as a mobile phone, a vehicle, a personal computer, or the like. In addition, the STA of the present specification may support communication for various communication services such as voice calls, video calls, data communication, self-driving (autonomous-driving), or the like.

The STAs 110 and 120 of the present specification may include a medium access control (MAC) conforming to the IEEE 802.11 standard and a physical layer interface for a radio medium.

The STAs 110 and 120 will be described below with reference to sub-figure (a) of FIG. 1.

The first STA 110 may include a processor 111, a memory 112, and a transceiver 113. The illustrated processor, memory, and transceiver may be implemented individually as separate chips, or at least two blocks/functions may be implemented through a single chip.

The transceiver 113 of the first STA performs a signal transmission/reception operation. Specifically, an IEEE 802.11 packet (e.g., IEEE 802.11a/b/g/n/ac/ax/be, etc.) may be transmitted/received.

For example, the first STA 110 may perform an operation intended by an AP. For example, the processor 111 of the AP may receive a signal through the transceiver 113, process a reception (RX) signal, generate a transmission (TX) signal, and provide control for signal transmission. The memory 112 of the AP may store a signal (e.g., RX signal) received through the transceiver 113, and may store a signal (e.g., TX signal) to be transmitted through the transceiver.

For example, the second STA 120 may perform an operation intended by a non-AP STA. For example, a transceiver 123 of the non-AP performs a signal transmission/reception operation. Specifically, an IEEE 802.11 packet (e.g., IEEE 802.11a/b/g/n/ac/ax/be packet, etc.) may be transmitted/received. For example, a processor 121 of the non-AP STA may receive a signal through the transceiver 123, process an RX signal, generate a TX signal, and provide control for signal transmission. A memory 122 of the non-AP STA may store a signal (e.g., RX signal) received through the transceiver 123, and may store a signal (e.g., TX signal) to be transmitted through the transceiver.

For example, an operation of a device indicated as an AP in the specification described below may be performed in the first STA 110 or the second STA 120.
For example, if the first STA 110 is the AP, the operation of the device indicated as the AP may be controlled by the processor 111 of the first STA 110, and a related signal may be transmitted or received through the transceiver 113 controlled by the processor 111 of the first STA 110. In addition, control information related to the operation of the AP or a TX/RX signal of the AP may be stored in the memory 112 of the first STA 110. In addition, if the second STA 120 is the AP, the operation of the device indicated as the AP may be controlled by the processor 121 of the second STA 120, and a related signal may be transmitted or received through the transceiver 123 controlled by the processor 121 of the second STA 120. In addition, control information related to the operation of the AP or a TX/RX signal of the AP may be stored in the memory 122 of the second STA 120.

For example, in the specification described below, an operation of a device indicated as a non-AP (or user-STA) may be performed in the first STA 110 or the second STA 120. For example, if the second STA 120 is the non-AP, the operation of the device indicated as the non-AP may be controlled by the processor 121 of the second STA 120, and a related signal may be transmitted or received through the transceiver 123 controlled by the processor 121 of the second STA 120. In addition, control information related to the operation of the non-AP or a TX/RX signal of the non-AP may be stored in the memory 122 of the second STA 120. For example, if the first STA 110 is the non-AP, the operation of the device indicated as the non-AP may be controlled by the processor 111 of the first STA 110, and a related signal may be transmitted or received through the transceiver 113 controlled by the processor 111 of the first STA 110. In addition, control information related to the operation of the non-AP or a TX/RX signal of the non-AP may be stored in the memory 112 of the first STA 110.

In the specification described below, a device called a (transmitting/receiving) STA, a first STA, a second STA, a STA1, a STA2, an AP, a first AP, a second AP, an AP1, an AP2, a (transmitting/receiving) terminal, a (transmitting/receiving) device, a (transmitting/receiving) apparatus, a network, or the like may imply the STAs 110 and 120 of FIG. 1. For example, a device indicated as, without a specific reference numeral, the (transmitting/receiving) STA, the first STA, the second STA, the STA1, the STA2, the AP, the first AP, the second AP, the AP1, the AP2, the (transmitting/receiving) terminal, the (transmitting/receiving) device, the (transmitting/receiving) apparatus, the network, or the like may imply the STAs 110 and 120 of FIG. 1.

For example, in the following example, an operation in which various STAs transmit/receive a signal (e.g., a PPDU) may be performed in the transceivers 113 and 123 of FIG. 1. In addition, in the following example, an operation in which various STAs generate a TX/RX signal or perform data processing and computation in advance for the TX/RX signal may be performed in the processors 111 and 121 of FIG. 1.
For example, an example of an operation for generating the TX/RX signal or performing the data processing and computation in advance may include: 1) an operation of determining/obtaining/configuring/computing/decoding/encoding bit information of a sub-field (SIG, STF, LTF, Data) included in a PPDU; 2) an operation of determining/configuring/obtaining a time resource or frequency resource (e.g., a subcarrier resource) or the like used for the sub-field (SIG, STF, LTF, Data) included in the PPDU; 3) an operation of determining/configuring/obtaining a specific sequence (e.g., a pilot sequence, an STF/LTF sequence, an extra sequence applied to SIG) or the like used for the sub-field (SIG, STF, LTF, Data) included in the PPDU; 4) a power control operation and/or power saving operation applied for the STA; and 5) an operation related to determining/obtaining/configuring/decoding/encoding or the like of an ACK signal. In addition, in the following example, a variety of information used by various STAs for determining/obtaining/configuring/computing/decoding/encoding a TX/RX signal (e.g., information related to a field/subfield/control field/parameter/power or the like) may be stored in the memories 112 and 122 of FIG. 1.

The aforementioned device/STA of sub-figure (a) of FIG. 1 may be modified as shown in sub-figure (b) of FIG. 1. Hereinafter, the STAs 110 and 120 of the present specification will be described based on sub-figure (b) of FIG. 1.

For example, the transceivers 113 and 123 illustrated in sub-figure (b) of FIG. 1 may perform the same function as the aforementioned transceivers illustrated in sub-figure (a) of FIG. 1. For example, the processing chips 114 and 124 illustrated in sub-figure (b) of FIG. 1 may include the processors 111 and 121 and the memories 112 and 122. The processors 111 and 121 and memories 112 and 122 illustrated in sub-figure (b) of FIG. 1 may perform the same functions as the aforementioned processors 111 and 121 and memories 112 and 122 illustrated in sub-figure (a) of FIG. 1.

A mobile terminal, a wireless device, a wireless transmit/receive unit (WTRU), a user equipment (UE), a mobile station (MS), a mobile subscriber unit, a user, a user STA, a network, a base station, a Node-B, an access point (AP), a repeater, a router, a relay, a receiving unit, a transmitting unit, a receiving STA, a transmitting STA, a receiving device, a transmitting device, a receiving apparatus, and/or a transmitting apparatus, which are described below, may imply the STAs 110 and 120 illustrated in sub-figure (a)/(b) of FIG. 1, or may imply the processing chips 114 and 124 illustrated in sub-figure (b) of FIG. 1. That is, a technical feature of the present specification may be performed in the STAs 110 and 120 illustrated in sub-figure (a)/(b) of FIG. 1, or may be performed only in the processing chips 114 and 124 illustrated in sub-figure (b) of FIG. 1. For example, a technical feature in which the transmitting STA transmits a control signal may be understood as a technical feature in which a control signal generated in the processors 111 and 121 illustrated in sub-figure (a)/(b) of FIG. 1 is transmitted through the transceivers 113 and 123 illustrated in sub-figure (a)/(b) of FIG. 1. Alternatively, the technical feature in which the transmitting STA transmits the control signal may be understood as a technical feature in which the control signal to be transferred to the transceivers 113 and 123 is generated in the processing chips 114 and 124 illustrated in sub-figure (b) of FIG. 1.
For example, a technical feature in which the receiving STA receives the control signal may be understood as a technical feature in which the control signal is received by means of the transceivers 113 and 123 illustrated in sub-figure (a) of FIG. 1. Alternatively, the technical feature in which the receiving STA receives the control signal may be understood as a technical feature in which the control signal received in the transceivers 113 and 123 illustrated in sub-figure (a) of FIG. 1 is obtained by the processors 111 and 121 illustrated in sub-figure (a) of FIG. 1. Alternatively, the technical feature in which the receiving STA receives the control signal may be understood as a technical feature in which the control signal received in the transceivers 113 and 123 illustrated in sub-figure (b) of FIG. 1 is obtained by the processing chips 114 and 124 illustrated in sub-figure (b) of FIG. 1.

Referring to sub-figure (b) of FIG. 1, software codes 115 and 125 may be included in the memories 112 and 122. The software codes 115 and 125 may include instructions for controlling an operation of the processors 111 and 121. The software codes 115 and 125 may be written in various programming languages.

The processors 111 and 121 or processing chips 114 and 124 of FIG. 1 may include an application-specific integrated circuit (ASIC), other chipsets, a logic circuit, and/or a data processing device. The processor may be an application processor (AP). For example, the processors 111 and 121 or processing chips 114 and 124 of FIG. 1 may include at least one of a digital signal processor (DSP), a central processing unit (CPU), a graphics processing unit (GPU), and a modulator and demodulator (modem). For example, the processors 111 and 121 or processing chips 114 and 124 of FIG. 1 may be SNAPDRAGON™ series processors made by Qualcomm®, EXYNOS™ series processors made by Samsung®, A series processors made by Apple®, HELIO™ series processors made by MediaTek®, ATOM™ series processors made by Intel®, or processors enhanced from these processors.

In the present specification, an uplink may imply a link for communication from a non-AP STA to an AP STA, and an uplink PPDU/packet/signal or the like may be transmitted through the uplink. In addition, in the present specification, a downlink may imply a link for communication from the AP STA to the non-AP STA, and a downlink PPDU/packet/signal or the like may be transmitted through the downlink.

FIG. 2 is a conceptual view illustrating the structure of a wireless local area network (WLAN).

An upper part of FIG. 2 illustrates the structure of an infrastructure basic service set (BSS) of the institute of electrical and electronic engineers (IEEE) 802.11.

Referring to the upper part of FIG. 2, the WLAN system may include one or more infrastructure BSSs 200 and 205 (hereinafter referred to as BSS). The BSSs 200 and 205, each a set of an AP and STAs, such as an access point (AP) 225 and a station (STA1) 200-1, which are successfully synchronized to communicate with each other, are not concepts indicating a specific region. The BSS 205 may include one or more STAs 205-1 and 205-2 which may be joined to one AP 230.

The BSS may include at least one STA, APs providing a distribution service, and a distribution system (DS) 210 connecting multiple APs.

The distribution system 210 may implement an extended service set (ESS) 240 extended by connecting the multiple BSSs 200 and 205. The ESS 240 may be used as a term indicating one network configured by connecting one or more APs 225 or 230 through the distribution system 210.
The APs included in one ESS 240 may have the same service set identification (SSID).

A portal 220 may serve as a bridge which connects the WLAN network (IEEE 802.11) and another network (e.g., 802.X).

In the BSS illustrated in the upper part of FIG. 2, a network between the APs 225 and 230 and a network between the APs 225 and 230 and the STAs 200-1, 205-1, and 205-2 may be implemented. However, a network may also be configured between the STAs without the APs 225 and 230 to perform communication. A network in which communication is performed by configuring the network between the STAs without the APs 225 and 230 is defined as an Ad-Hoc network or an independent basic service set (IBSS).

A lower part of FIG. 2 illustrates a conceptual view of the IBSS. Referring to the lower part of FIG. 2, the IBSS is a BSS that operates in an Ad-Hoc mode. Since the IBSS does not include an access point (AP), a centralized management entity that performs a management function at the center does not exist. That is, in the IBSS, the STAs 250-1, 250-2, 250-3, 255-4, and 255-5 are managed in a distributed manner. In the IBSS, all of the STAs 250-1, 250-2, 250-3, 255-4, and 255-5 may be constituted by mobile STAs and are not permitted to access the DS, thereby constituting a self-contained network.

FIG. 3 illustrates a general link setup process.

In S310, a STA may perform a network discovery operation. The network discovery operation may include a scanning operation of the STA. That is, to access a network, the STA needs to discover a participating network. The STA needs to identify a compatible network before participating in a wireless network, and a process of identifying a network present in a particular area is referred to as scanning. Scanning methods include active scanning and passive scanning.

FIG. 3 illustrates a network discovery operation including an active scanning process. In active scanning, a STA performing scanning transmits a probe request frame and waits for a response to the probe request frame in order to identify which APs are present nearby while moving between channels. A responder transmits a probe response frame, as a response to the probe request frame, to the STA having transmitted the probe request frame. Here, the responder may be the STA that transmitted the last beacon frame in the BSS of the channel being scanned. In a BSS, since an AP transmits the beacon frame, the AP is the responder. In an IBSS, since STAs in the IBSS transmit a beacon frame in turns, the responder is not fixed. For example, when the STA transmits a probe request frame via channel 1 and receives a probe response frame via channel 1, the STA may store BSS-related information included in the received probe response frame, may move to the next channel (e.g., channel 2), and may perform scanning (e.g., transmit a probe request and receive a probe response via channel 2) by the same method.

Although not shown in FIG. 3, scanning may also be performed by a passive scanning method. In passive scanning, a STA performing scanning may wait for a beacon frame while moving between channels. A beacon frame is one of the management frames in IEEE 802.11 and is periodically transmitted to indicate the presence of a wireless network and to enable the STA performing scanning to find the wireless network and to participate in the wireless network. In a BSS, an AP serves to periodically transmit a beacon frame. In an IBSS, STAs in the IBSS transmit a beacon frame in turns.
Upon receiving the beacon frame, the STA performing scanning stores information related to a BSS included in the beacon frame and records beacon frame information in each channel while moving to another channel. The STA having received the beacon frame may store BSS-related information included in the received beacon frame, may move to the next channel, and may perform scanning in the next channel by the same method.

After discovering the network, the STA may perform an authentication process in S320. The authentication process may be referred to as a first authentication process to be clearly distinguished from the following security setup operation in S340. The authentication process in S320 may include a process in which the STA transmits an authentication request frame to the AP and the AP transmits an authentication response frame to the STA in response. The authentication frames used for an authentication request/response are management frames.

The authentication frames may include information related to an authentication algorithm number, an authentication transaction sequence number, a status code, a challenge text, a robust security network (RSN), and a finite cyclic group.

The STA may transmit the authentication request frame to the AP. The AP may determine whether to allow the authentication of the STA based on the information included in the received authentication request frame. The AP may provide the authentication processing result to the STA via the authentication response frame.

When the STA is successfully authenticated, the STA may perform an association process in S330. The association process includes a process in which the STA transmits an association request frame to the AP and the AP transmits an association response frame to the STA in response. The association request frame may include, for example, information related to various capabilities, a beacon listen interval, a service set identifier (SSID), a supported rate, a supported channel, RSN, a mobility domain, a supported operating class, a traffic indication map (TIM) broadcast request, and an interworking service capability. The association response frame may include, for example, information related to various capabilities, a status code, an association ID (AID), a supported rate, an enhanced distributed channel access (EDCA) parameter set, a received channel power indicator (RCPI), a received signal-to-noise indicator (RSNI), a mobility domain, a timeout interval (association comeback time), an overlapping BSS scanning parameter, a TIM broadcast response, and a QoS map.

In S340, the STA may perform a security setup process. The security setup process in S340 may include a process of setting up a private key through four-way handshaking, for example, through an extensible authentication protocol over LAN (EAPOL) frame.

FIG. 4 illustrates an example of a PPDU used in an IEEE standard.

As illustrated, various types of PHY protocol data units (PPDUs) are used in the IEEE 802.11a/g/n/ac standards. Specifically, an LTF and an STF include a training signal, a SIG-A and a SIG-B include control information for a receiving STA, and a data field includes user data corresponding to a PSDU (MAC PDU/aggregated MAC PDU).

FIG. 4 also includes an example of an HE PPDU according to IEEE 802.11ax. The HE PPDU according to FIG. 4 is an illustrative PPDU for multiple users. An HE-SIG-B may be included only in a PPDU for multiple users, and the HE-SIG-B may be omitted in a PPDU for a single user.
As illustrated in FIG. 4, the HE-PPDU for multiple users (MUs) may include a legacy-short training field (L-STF), a legacy-long training field (L-LTF), a legacy-signal (L-SIG), a high efficiency-signal A (HE-SIG A), a high efficiency-signal B (HE-SIG B), a high efficiency-short training field (HE-STF), a high efficiency-long training field (HE-LTF), a data field (alternatively, a MAC payload), and a packet extension (PE) field. The respective fields may be transmitted for the illustrated time periods (i.e., 4 or 8 μs).

Hereinafter, a resource unit (RU) used for a PPDU is described. An RU may include a plurality of subcarriers (or tones). An RU may be used to transmit a signal to a plurality of STAs according to OFDMA. Further, an RU may also be defined to transmit a signal to one STA. An RU may be used for an STF, an LTF, a data field, or the like.

FIG. 5 illustrates a layout of resource units (RUs) used in a band of 20 MHz.

As illustrated in FIG. 5, resource units (RUs) corresponding to different numbers of tones (i.e., subcarriers) may be used to form some fields of an HE-PPDU. For example, resources may be allocated in the illustrated RUs for an HE-STF, an HE-LTF, and a data field.

As illustrated in the uppermost part of FIG. 5, a 26-unit (i.e., a unit corresponding to 26 tones) may be disposed. Six tones may be used for a guard band in the leftmost band of the 20 MHz band, and five tones may be used for a guard band in the rightmost band of the 20 MHz band. Further, seven DC tones may be inserted in a center band, that is, a DC band, and a 26-unit corresponding to 13 tones on each of the left and right sides of the DC band may be disposed. A 26-unit, a 52-unit, and a 106-unit may be allocated to other bands. Each unit may be allocated for a receiving STA, that is, a user.

The layout of the RUs in FIG. 5 may be used not only for multiple users (MUs) but also for a single user (SU), in which case one 242-unit may be used and three DC tones may be inserted, as illustrated in the lowermost part of FIG. 5.

Although FIG. 5 proposes RUs having various sizes, that is, a 26-RU, a 52-RU, a 106-RU, and a 242-RU, specific sizes of RUs may be extended or increased. Therefore, the present embodiment is not limited to the specific size of each RU (i.e., the number of corresponding tones).

FIG. 6 illustrates a layout of RUs used in a band of 40 MHz.

Similarly to FIG. 5 in which RUs having various sizes are used, a 26-RU, a 52-RU, a 106-RU, a 242-RU, a 484-RU, and the like may be used in the example of FIG. 6. Further, five DC tones may be inserted in a center frequency, 12 tones may be used for a guard band in the leftmost band of the 40 MHz band, and 11 tones may be used for a guard band in the rightmost band of the 40 MHz band.

As illustrated in FIG. 6, when the layout of the RUs is used for a single user, a 484-RU may be used. The specific number of RUs may be changed similarly to FIG. 5.

FIG. 7 illustrates a layout of RUs used in a band of 80 MHz.

Similarly to FIG. 5 and FIG. 6 in which RUs having various sizes are used, a 26-RU, a 52-RU, a 106-RU, a 242-RU, a 484-RU, a 996-RU, and the like may be used in the example of FIG. 7. Further, seven DC tones may be inserted in the center frequency, 12 tones may be used for a guard band in the leftmost band of the 80 MHz band, and 11 tones may be used for a guard band in the rightmost band of the 80 MHz band. In addition, a 26-RU corresponding to 13 tones on each of the left and right sides of the DC band may be used.
As illustrated in FIG. 7, when the layout of the RUs is used for a single user, a 996-RU may be used, in which case five DC tones may be inserted.

The RU described in the present specification may be used in uplink (UL) communication and downlink (DL) communication. For example, when UL-MU communication solicited by a trigger frame is performed, a transmitting STA (e.g., an AP) may allocate a first RU (e.g., 26/52/106/242-RU, etc.) to a first STA through the trigger frame, and may allocate a second RU (e.g., 26/52/106/242-RU, etc.) to a second STA. Thereafter, the first STA may transmit a first trigger-based PPDU based on the first RU, and the second STA may transmit a second trigger-based PPDU based on the second RU. The first/second trigger-based PPDUs are transmitted to the AP in the same (or overlapped) time period.

For example, when a DL MU PPDU is configured, the transmitting STA (e.g., AP) may allocate the first RU (e.g., 26/52/106/242-RU, etc.) to the first STA, and may allocate the second RU (e.g., 26/52/106/242-RU, etc.) to the second STA. That is, the transmitting STA (e.g., AP) may transmit HE-STF, HE-LTF, and Data fields for the first STA through the first RU in one MU PPDU, and may transmit HE-STF, HE-LTF, and Data fields for the second STA through the second RU.

Information related to a layout of the RU may be signaled through HE-SIG-B.

FIG. 8 illustrates a structure of an HE-SIG-B field.

As illustrated, an HE-SIG-B field 810 includes a common field 820 and a user-specific field 830. The common field 820 may include information commonly applied to all users (i.e., user STAs) which receive the SIG-B. The user-specific field 830 may be called a user-specific control field. When the SIG-B is transferred to a plurality of users, the user-specific field 830 may be applied to only any one of the plurality of users.

As illustrated in FIG. 8, the common field 820 and the user-specific field 830 may be separately encoded.

The common field 820 may include RU allocation information of N*8 bits. For example, the RU allocation information may include information related to a location of an RU. For example, when a 20 MHz channel is used as shown in FIG. 5, the RU allocation information may include information related to a specific frequency band to which a specific RU (26-RU/52-RU/106-RU) is arranged.

An example of a case in which the RU allocation information consists of 8 bits is as follows.

TABLE 1

8 bits indices
(B7 B6 B5 B4 B3 B2 B1 B0)  #1  #2  #3  #4  #5  #6  #7  #8  #9  Number of entries
00000000                   26  26  26  26  26  26  26  26  26  1
00000001                   26  26  26  26  26  26  26  52      1
00000010                   26  26  26  26  26  52  26  26      1
00000011                   26  26  26  26  26  52  52          1
00000100                   26  26  52  26  26  26  26  26      1
00000101                   26  26  52  26  26  26  52          1
00000110                   26  26  52  26  52  26  26          1
00000111                   26  26  52  26  52  52              1
00001000                   52  26  26  26  26  26  26  26      1

As shown in the example of FIG. 5, up to nine 26-RUs may be allocated to the 20 MHz channel. When the RU allocation information of the common field 820 is set to "00000000" as shown in Table 1, the nine 26-RUs may be allocated to a corresponding channel (i.e., 20 MHz). In addition, when the RU allocation information of the common field 820 is set to "00000001" as shown in Table 1, seven 26-RUs and one 52-RU are arranged in a corresponding channel. That is, in the example of FIG. 5, the 52-RU may be allocated to the rightmost side, and the seven 26-RUs may be allocated to the left thereof.

The example of Table 1 shows only some of the RU locations that the RU allocation information is capable of indicating. For example, the RU allocation information may include the example of Table 2 below.
TABLE 2

8 bits indices
(B7 B6 B5 B4 B3 B2 B1 B0)  #1   #2  #3  #4  #5  #6  #7  #8  #9  Number of entries
01000y2y1y0                106  26  26  26  26  26              8
01001y2y1y0                106  26  26  26  52                  8

"01000y2y1y0" relates to an example in which a 106-RU is allocated to the leftmost side of the 20 MHz channel, and five 26-RUs are allocated to the right side thereof. In this case, a plurality of STAs (e.g., user-STAs) may be allocated to the 106-RU based on a MU-MIMO scheme. Specifically, up to 8 STAs (e.g., user-STAs) may be allocated to the 106-RU, and the number of STAs (e.g., user-STAs) allocated to the 106-RU is determined based on the 3-bit information (y2y1y0). For example, when the 3-bit information (y2y1y0) is set to N, the number of STAs (e.g., user-STAs) allocated to the 106-RU based on the MU-MIMO scheme may be N+1.

In general, a plurality of STAs (e.g., user STAs) different from each other may be allocated to a plurality of RUs. However, the plurality of STAs (e.g., user STAs) may be allocated to one or more RUs having at least a specific size (e.g., 106 subcarriers), based on the MU-MIMO scheme.

As shown in FIG. 8, the user-specific field 830 may include a plurality of user fields. As described above, the number of STAs (e.g., user STAs) allocated to a specific channel may be determined based on the RU allocation information of the common field 820. For example, when the RU allocation information of the common field 820 is "00000000", one user STA may be allocated to each of the nine 26-RUs (i.e., nine user STAs may be allocated). That is, up to 9 user STAs may be allocated to a specific channel through an OFDMA scheme. In other words, up to 9 user STAs may be allocated to a specific channel through a non-MU-MIMO scheme.

For example, when the RU allocation is set to "01000y2y1y0", a plurality of STAs may be allocated to the 106-RU arranged at the leftmost side through the MU-MIMO scheme, and five user STAs may be allocated to the five 26-RUs arranged to the right side thereof through the non-MU-MIMO scheme. This case is specified through the example of FIG. 9.

FIG. 9 illustrates an example in which a plurality of user STAs are allocated to the same RU through a MU-MIMO scheme.

For example, when the RU allocation is set to "01000010" as shown in FIG. 9, a 106-RU may be allocated to the leftmost side of a specific channel, and five 26-RUs may be allocated to the right side thereof. In addition, three user STAs may be allocated to the 106-RU through the MU-MIMO scheme. As a result, since eight user STAs are allocated, the user-specific field 830 of the HE-SIG-B may include eight user fields. The eight user fields may be expressed in the order shown in FIG. 9.
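The relationship between the 3-bit suffix and the number of MU-MIMO users can be captured in a few lines. This sketch assumes the 8-bit RU allocation value has been read as an integer with B7 as the most significant bit, and it covers only the two Table 2 entries shown above; the helper is illustrative, not part of HE-SIG-B processing as specified.

```python
def mu_mimo_users_on_106_ru(ru_allocation):
    """For RU allocation values of the form 01000y2y1y0 or 01001y2y1y0
    (Table 2), the low three bits encode N, and N + 1 user STAs share
    the 106-tone RU through MU-MIMO."""
    prefix = ru_allocation >> 3            # upper five bits (B7..B3)
    if prefix not in (0b01000, 0b01001):
        raise ValueError("not a 106-RU MU-MIMO allocation from Table 2")
    n = ru_allocation & 0b111              # y2 y1 y0
    return n + 1                           # up to 8 users when y2y1y0 = 7

# The FIG. 9 example: "01000010" gives y2y1y0 = 2, i.e., three user STAs.
assert mu_mimo_users_on_106_ru(0b01000010) == 3
```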
In addition, as shown in FIG. 8, two user fields may be implemented with one user block field.

The user fields shown in FIG. 8 and FIG. 9 may be configured based on two formats. That is, a user field related to a MU-MIMO scheme may be configured in a first format, and a user field related to a non-MU-MIMO scheme may be configured in a second format. Referring to the example of FIG. 9, a user field 1 to a user field 3 may be based on the first format, and a user field 4 to a user field 8 may be based on the second format. The first format and the second format include bit information of the same length (e.g., 21 bits). Each user field may have the same size (e.g., 21 bits).

For example, the user field of the first format (the format of the MU-MIMO scheme) may be configured as follows. A first bit field (i.e., B0-B10) in the user field (i.e., 21 bits) may include identification information (e.g., STA-ID, partial AID, etc.) of a user STA to which the corresponding user field is allocated. In addition, a second bit field (i.e., B11-B14) in the user field (i.e., 21 bits) may include information related to a spatial configuration. In addition, a third bit field (i.e., B15-B18) in the user field (i.e., 21 bits) may include modulation and coding scheme (MCS) information. The MCS information may be applied to a data field in a PPDU including the corresponding SIG-B.

An MCS, MCS information, an MCS index, an MCS field, or the like used in the present specification may be indicated by an index value. For example, the MCS information may be indicated by an index 0 to an index 11. The MCS information may include information related to a constellation modulation type (e.g., BPSK, QPSK, 16-QAM, 64-QAM, 256-QAM, 1024-QAM, etc.) and information related to a coding rate (e.g., 1/2, 2/3, 3/4, 5/6, etc.). Information related to a channel coding type (e.g., BCC or LDPC) may be excluded from the MCS information.

In addition, a fourth bit field (i.e., B19) in the user field (i.e., 21 bits) may be a reserved field. In addition, a fifth bit field (i.e., B20) in the user field (i.e., 21 bits) may include information related to a coding type (e.g., BCC or LDPC). That is, the fifth bit field (i.e., B20) may include information related to the type (e.g., BCC or LDPC) of channel coding applied to the data field in the PPDU including the corresponding SIG-B.

The aforementioned example relates to the user field of the first format (the format of the MU-MIMO scheme). An example of the user field of the second format (the format of the non-MU-MIMO scheme) is as follows.

A first bit field (e.g., B0-B10) in the user field of the second format may include identification information of a user STA. In addition, a second bit field (e.g., B11-B13) in the user field of the second format may include information related to the number of spatial streams applied to a corresponding RU. In addition, a third bit field (e.g., B14) in the user field of the second format may include information related to whether a beamforming steering matrix is applied. A fourth bit field (e.g., B15-B18) in the user field of the second format may include modulation and coding scheme (MCS) information. In addition, a fifth bit field (e.g., B19) in the user field of the second format may include information related to whether dual carrier modulation (DCM) is applied. In addition, a sixth bit field (i.e., B20) in the user field of the second format may include information related to a coding type (e.g., BCC or LDPC).
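A sketch of how the 21 bits of a first-format user field break down is given below, assuming B0 is taken as the least significant bit of an already-assembled integer (the on-air bit ordering is not addressed here), and assuming one of the two coding-type values maps to BCC and the other to LDPC.

```python
def parse_user_field_mu_mimo(bits):
    """Unpack a 21-bit HE-SIG-B user field of the first (MU-MIMO) format
    according to the bit positions listed above."""
    return {
        "sta_id":         bits & 0x7FF,        # B0-B10: STA-ID / partial AID
        "spatial_config": (bits >> 11) & 0xF,  # B11-B14: spatial configuration
        "mcs":            (bits >> 15) & 0xF,  # B15-B18: MCS index (0..11)
        "reserved":       (bits >> 19) & 0x1,  # B19: reserved
        "coding":         (bits >> 20) & 0x1,  # B20: channel coding type
    }                                          # (assumed 0 = BCC, 1 = LDPC)
```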
Hereinafter, a PPDU transmitted/received in a STA of the present specification will be described.

FIG. 10 illustrates an example of a PPDU used in the present specification.

The PPDU of FIG. 10 may be called by various terms such as an EHT PPDU, a TX PPDU, an RX PPDU, a first type or N-th type PPDU, or the like. For example, in the present specification, the PPDU or the EHT PPDU may be called by various terms such as a TX PPDU, an RX PPDU, a first type or N-th type PPDU, or the like. In addition, the EHT PPDU may be used in an EHT system and/or a new WLAN system enhanced from the EHT system.

The PPDU of FIG. 10 may indicate the entirety or part of a PPDU type used in the EHT system. For example, the example of FIG. 10 may be used for both a single-user (SU) mode and a multi-user (MU) mode. In other words, the PPDU of FIG. 10 may be a PPDU for one receiving STA or a plurality of receiving STAs. When the PPDU of FIG. 10 is used for a trigger-based (TB) mode, the EHT-SIG of FIG. 10 may be omitted. In other words, an STA which has received a trigger frame for uplink-MU (UL-MU) communication may transmit the PPDU in which the EHT-SIG is omitted in the example of FIG. 10.

In FIG. 10, an L-STF to an EHT-LTF may be called a preamble or a physical preamble, and may be generated/transmitted/received/obtained/decoded in a physical layer.

A subcarrier spacing of the L-STF, L-LTF, L-SIG, RL-SIG, U-SIG, and EHT-SIG fields of FIG. 10 may be determined as 312.5 kHz, and a subcarrier spacing of the EHT-STF, EHT-LTF, and Data fields may be determined as 78.125 kHz. That is, a tone index (or subcarrier index) of the L-STF, L-LTF, L-SIG, RL-SIG, U-SIG, and EHT-SIG fields may be expressed in units of 312.5 kHz, and a tone index (or subcarrier index) of the EHT-STF, EHT-LTF, and Data fields may be expressed in units of 78.125 kHz.

In the PPDU of FIG. 10, the L-LTF and the L-STF may be the same as the conventional fields.

The L-SIG field of FIG. 10 may include, for example, bit information of 24 bits. For example, the 24-bit information may include a rate field of 4 bits, a reserved bit of 1 bit, a length field of 12 bits, a parity bit of 1 bit, and a tail bit of 6 bits. For example, the length field of 12 bits may include information related to a length or time duration of a PPDU. For example, the value of the length field of 12 bits may be determined based on a type of the PPDU. For example, when the PPDU is a non-HT, HT, VHT, or EHT PPDU, the value of the length field may be determined as a multiple of 3. For example, when the PPDU is an HE PPDU, the value of the length field may be determined as "a multiple of 3" + 1 or "a multiple of 3" + 2. In other words, for the non-HT, HT, VHT, or EHT PPDU, the value of the length field may be determined as a multiple of 3, and for the HE PPDU, the value of the length field may be determined as "a multiple of 3" + 1 or "a multiple of 3" + 2.

For example, the transmitting STA may apply BCC encoding based on a 1/2 coding rate to the 24-bit information of the L-SIG field. Thereafter, the transmitting STA may obtain BCC coded bits of 48 bits. BPSK modulation may be applied to the 48 coded bits, thereby generating 48 BPSK symbols. The transmitting STA may map the 48 BPSK symbols to positions except for the pilot subcarriers {subcarrier index −21, −7, +7, +21} and the DC subcarrier {subcarrier index 0}. As a result, the 48 BPSK symbols may be mapped to subcarrier indices −26 to −22, −20 to −8, −6 to −1, +1 to +6, +8 to +20, and +22 to +26. The transmitting STA may additionally map a signal of {−1, −1, −1, 1} to the subcarrier indices {−28, −27, +27, +28}. The aforementioned signal may be used for channel estimation on the frequency domain corresponding to {−28, −27, +27, +28}.

The transmitting STA may generate an RL-SIG which is generated in the same manner as the L-SIG. BPSK modulation may be applied to the RL-SIG. The receiving STA may know that the RX PPDU is the HE PPDU or the EHT PPDU, based on the presence of the RL-SIG.
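The length-based rule above gives a receiver a cheap first test for the PPDU family. A minimal sketch, assuming the 12-bit length value has already been parsed out of the L-SIG:

```python
def ppdu_family_from_lsig_length(length):
    """Classify a PPDU by the L-SIG length rule described above: non-HT, HT,
    VHT, and EHT PPDUs use a multiple of 3, while an HE PPDU uses a multiple
    of 3 plus 1 or plus 2. Telling VHT apart from EHT, etc., requires other
    indications such as the RL-SIG and the U-SIG PHY version identifier."""
    return "non-HT/HT/VHT/EHT" if length % 3 == 0 else "HE"
```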
A universal SIG (U-SIG) may be inserted after the RL-SIG ofFIG.10. The U-SIG may be called in various terms such as a first SIG field, a first SIG, a first type SIG, a control signal, a control signal field, a first (type) control signal, or the like. The U-SIG may include information of N bits, and may include information for identifying a type of the EHT PPDU. For example, the U-SIG may be configured based on two symbols (e.g., two contiguous OFDM symbols). Each symbol (e.g., OFDM symbol) for the U-SIG may have a duration of 4 μs. Each symbol of the U-SIG may be used to transmit the 26-bit information. For example, each symbol of the U-SIG may be transmitted/received based on 52 data tones and 4 pilot tones. Through the U-SIG (or U-SIG field), for example, A-bit information (e.g., 52 un-coded bits) may be transmitted. A first symbol of the U-SIG may transmit first X-bit information (e.g., 26 un-coded bits) of the A-bit information, and a second symbol of the U-SIG may transmit the remaining Y-bit information (e.g., 26 un-coded bits) of the A-bit information. For example, the transmitting STA may obtain 26 un-coded bits included in each U-SIG symbol. The transmitting STA may perform convolutional encoding (i.e., BCC encoding) based on a rate of R=1/2 to generate 52 coded bits, and may perform interleaving on the 52 coded bits. The transmitting STA may perform BPSK modulation on the interleaved 52 coded bits to generate 52 BPSK symbols to be allocated to each U-SIG symbol. One U-SIG symbol may be transmitted based on 56 tones (subcarriers) from a subcarrier index −28 to a subcarrier index +28, except for a DC index 0. The 52 BPSK symbols generated by the transmitting STA may be transmitted based on the remaining tones (subcarriers) except for pilot tones, i.e., tones −21, −7, +7, +21. For example, the A-bit information (e.g., 52 un-coded bits) generated by the U-SIG may include a CRC field (e.g., a field having a length of 4 bits) and a tail field (e.g., a field having a length of 6 bits). The CRC field and the tail field may be transmitted through the second symbol of the U-SIG. The CRC field may be generated based on 26 bits allocated to the first symbol of the U-SIG and the remaining 16 bits except for the CRC/tail fields in the second symbol, and may be generated based on the conventional CRC calculation algorithm. In addition, the tail field may be used to terminate trellis of a convolutional decoder, and may be set to, for example, “000000”.
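The 52-bit U-SIG budget described above (26 un-coded bits per symbol, a 4-bit CRC computed over the first 42 bits, and a 6-bit all-zero tail) can be sketched as follows. This is a hedged illustration: the CRC-4 generator polynomial used here (x^4 + x + 1) is an assumption for demonstration, not a value stated in the text.

def crc4(bit_list, poly=0b10011):
    # Simple bitwise CRC-4 with the assumed generator x^4 + x + 1.
    reg = 0
    for b in bit_list:
        reg = ((reg << 1) | b) & 0x1F
        if reg & 0x10:
            reg ^= poly
    return reg & 0xF

info_bits = [0] * 42                    # 26 bits (first symbol) + 16 bits (second symbol)
crc_bits = [(crc4(info_bits) >> i) & 1 for i in (3, 2, 1, 0)]
tail_bits = [0] * 6                     # terminates the convolutional decoder trellis
usig_uncoded = info_bits + crc_bits + tail_bits
assert len(usig_uncoded) == 52          # 26 un-coded bits per U-SIG symbol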
The A-bit information (e.g., 52 un-coded bits) transmitted by the U-SIG (or U-SIG field) may be divided into version-independent bits and version-dependent bits. For example, the version-independent bits may have a fixed or variable size. For example, the version-independent bits may be allocated only to the first symbol of the U-SIG, or the version-independent bits may be allocated to both of the first and second symbols of the U-SIG. For example, the version-independent bits and the version-dependent bits may be called in various terms such as a first control bit, a second control bit, or the like. For example, the version-independent bits of the U-SIG may include a PHY version identifier of 3 bits. For example, the PHY version identifier of 3 bits may include information related to a PHY version of a TX/RX PPDU. For example, a first value of the PHY version identifier of 3 bits may indicate that the TX/RX PPDU is an EHT PPDU. In other words, when the transmitting STA transmits the EHT PPDU, the PHY version identifier of 3 bits may be set to a first value. In other words, the receiving STA may determine that the RX PPDU is the EHT PPDU, based on the PHY version identifier having the first value. For example, the version-independent bits of the U-SIG may include a UL/DL flag field of 1 bit. A first value of the UL/DL flag field of 1 bit relates to UL communication, and a second value of the UL/DL flag field relates to DL communication. For example, the version-independent bits of the U-SIG may include information related to a TXOP length and information related to a BSS color ID. For example, when the EHT PPDU is divided into various types (e.g., various types such as an EHT PPDU related to an SU mode, an EHT PPDU related to a MU mode, an EHT PPDU related to a TB mode, an EHT PPDU related to extended range transmission, or the like), information related to the type of the EHT PPDU may be included in the version-dependent bits of the U-SIG. For example, the U-SIG may include: 1) a bandwidth field including information related to a bandwidth; 2) a field including information related to an MCS scheme applied to EHT-SIG; 3) an indication field including information regarding whether a dual subcarrier modulation (DCM) scheme is applied to EHT-SIG; 4) a field including information related to the number of symbols used for EHT-SIG; 5) a field including information regarding whether the EHT-SIG is generated across a full band; 6) a field including information related to a type of EHT-LTF/STF; and 7) information related to a field indicating an EHT-LTF length and a CP length. Preamble puncturing may be applied to the PPDU ofFIG.10. The preamble puncturing implies that puncturing is applied to part (e.g., a secondary 20 MHz band) of the full band. For example, when an 80 MHz PPDU is transmitted, an STA may apply puncturing to the secondary 20 MHz band out of the 80 MHz band, and may transmit a PPDU only through a primary 20 MHz band and a secondary 40 MHz band. For example, a pattern of the preamble puncturing may be configured in advance. For example, when a first puncturing pattern is applied, puncturing may be applied only to the secondary 20 MHz band within the 80 MHz band. For example, when a second puncturing pattern is applied, puncturing may be applied to only any one of two secondary 20 MHz bands included in the secondary 40 MHz band within the 80 MHz band. For example, when a third puncturing pattern is applied, puncturing may be applied to only the secondary 20 MHz band included in the primary 80 MHz band within the 160 MHz band (or 80+80 MHz band). For example, when a fourth puncturing pattern is applied, puncturing may be applied to at least one 20 MHz channel not belonging to a primary 40 MHz band in the presence of the primary 40 MHz band included in the 80 MHz band within the 160 MHz band (or 80+80 MHz band). Information related to the preamble puncturing applied to the PPDU may be included in U-SIG and/or EHT-SIG. For example, a first field of the U-SIG may include information related to a contiguous bandwidth, and a second field of the U-SIG may include information related to the preamble puncturing applied to the PPDU.
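A hedged sketch of the 80 MHz puncturing example above: the secondary 20 MHz band is punctured and the PPDU is carried only on the primary 20 MHz band and the secondary 40 MHz band. The sub-channel labels are illustrative, not standard names.

SUBCHANNELS_80 = ["primary20", "secondary20", "secondary40_low", "secondary40_high"]

def apply_puncturing(punctured):
    # Returns which 20 MHz sub-channels still carry the PPDU.
    return {ch: (ch not in punctured) for ch in SUBCHANNELS_80}

# First puncturing pattern: only the secondary 20 MHz within 80 MHz is punctured.
print(apply_puncturing({"secondary20"}))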
For example, the U-SIG and the EHT-SIG may include the information related to the preamble puncturing, based on the following method. When a bandwidth of the PPDU exceeds 80 MHz, the U-SIG may be configured individually in unit of 80 MHz. For example, when the bandwidth of the PPDU is 160 MHz, the PPDU may include a first U-SIG for a first 80 MHz band and a second U-SIG for a second 80 MHz band. In this case, a first field of the first U-SIG may include information related to a 160 MHz bandwidth, and a second field of the first U-SIG may include information related to a preamble puncturing (i.e., information related to a preamble puncturing pattern) applied to the first 80 MHz band. In addition, a first field of the second U-SIG may include information related to a 160 MHz bandwidth, and a second field of the second U-SIG may include information related to a preamble puncturing (i.e., information related to a preamble puncturing pattern) applied to the second 80 MHz band. Meanwhile, an EHT-SIG contiguous to the first U-SIG may include information related to a preamble puncturing applied to the second 80 MHz band (i.e., information related to a preamble puncturing pattern), and an EHT-SIG contiguous to the second U-SIG may include information related to a preamble puncturing (i.e., information related to a preamble puncturing pattern) applied to the first 80 MHz band. Additionally or alternatively, the U-SIG and the EHT-SIG may include the information related to the preamble puncturing, based on the following method. The U-SIG may include information related to a preamble puncturing (i.e., information related to a preamble puncturing pattern) for all bands. That is, the EHT-SIG may not include the information related to the preamble puncturing, and only the U-SIG may include the information related to the preamble puncturing (i.e., the information related to the preamble puncturing pattern). The U-SIG may be configured in unit of 20 MHz. For example, when an 80 MHz PPDU is configured, the U-SIG may be duplicated. That is, four identical U-SIGs may be included in the 80 MHz PPDU. PPDUs exceeding an 80 MHz bandwidth may include different U-SIGs. The EHT-SIG ofFIG.10may include control information for the receiving STA. The EHT-SIG may be transmitted through at least one symbol, and one symbol may have a length of 4 μs. Information related to the number of symbols used for the EHT-SIG may be included in the U-SIG. The EHT-SIG may include a technical feature of the HE-SIG-B described with reference toFIG.8andFIG.9. For example, the EHT-SIG may include a common field and a user-specific field as in the example ofFIG.8. The common field of the EHT-SIG may be omitted, and the number of user-specific fields may be determined based on the number of users. As in the example ofFIG.8, the common field of the EHT-SIG and the user-specific field of the EHT-SIG may be individually coded. One user block field included in the user-specific field may include information for two users, but a last user block field included in the user-specific field may include information for one user. That is, one user block field of the EHT-SIG may include up to two user fields (see the packing sketch below). As in the example ofFIG.9, each user field may be related to MU-MIMO allocation, or may be related to non-MU-MIMO allocation. As in the example ofFIG.8, the common field of the EHT-SIG may include a CRC bit and a tail bit. A length of the CRC bit may be determined as 4 bits. A length of the tail bit may be determined as 6 bits, and may be set to ‘000000’. As in the example ofFIG.8, the common field of the EHT-SIG may include RU allocation information. The RU allocation information may imply information related to a location of an RU to which a plurality of users (i.e., a plurality of receiving STAs) are allocated. The RU allocation information may be configured in unit of 8 bits (or N bits), as in Table 1.
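The user-block packing rule described above (up to two user fields per user block field, with a one-user final block for an odd number of users) reduces to the following hedged one-liner; the function name is illustrative.

def user_block_sizes(n_users):
    # Full two-user blocks, plus a one-user block if the user count is odd.
    blocks = [2] * (n_users // 2)
    if n_users % 2:
        blocks.append(1)
    return blocks

print(user_block_sizes(5))  # [2, 2, 1]
print(user_block_sizes(4))  # [2, 2]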
A mode in which the common field of the EHT-SIG is omitted may be supported. The mode in which the common field of the EHT-SIG is omitted may be called a compressed mode. When the compressed mode is used, a plurality of users (i.e., a plurality of receiving STAs) may decode the PPDU (e.g., the data field of the PPDU), based on non-OFDMA. That is, the plurality of users of the EHT PPDU may decode the PPDU (e.g., the data field of the PPDU) received through the same frequency band. Meanwhile, when a non-compressed mode is used, the plurality of users of the EHT PPDU may decode the PPDU (e.g., the data field of the PPDU), based on OFDMA. That is, the plurality of users of the EHT PPDU may receive the PPDU (e.g., the data field of the PPDU) through different frequency bands. The EHT-SIG may be configured based on various MCS schemes. As described above, information related to an MCS scheme applied to the EHT-SIG may be included in U-SIG. The EHT-SIG may be configured based on a DCM scheme. For example, among N data tones (e.g., 52 data tones) allocated for the EHT-SIG, a first modulation scheme may be applied to half of consecutive tones, and a second modulation scheme may be applied to the remaining half of the consecutive tones. That is, a transmitting STA may use the first modulation scheme to modulate specific control information through a first symbol and allocate it to half of the consecutive tones, and may use the second modulation scheme to modulate the same control information by using a second symbol and allocate it to the remaining half of the consecutive tones. As described above, information (e.g., a 1-bit field) regarding whether the DCM scheme is applied to the EHT-SIG may be included in the U-SIG. An EHT-STF ofFIG.10may be used for improving automatic gain control estimation in a multiple input multiple output (MIMO) environment or an OFDMA environment. An EHT-LTF ofFIG.10may be used for estimating a channel in the MIMO environment or the OFDMA environment. Information related to a type of STF and/or LTF (information related to a GI applied to LTF is also included) may be included in a SIG-A field and/or SIG-B field or the like ofFIG.10. A PPDU (e.g., EHT-PPDU) ofFIG.10may be configured based on the example ofFIG.5andFIG.6. For example, an EHT PPDU transmitted on a 20 MHz band, i.e., a 20 MHz EHT PPDU, may be configured based on the RU ofFIG.5. That is, a location of an RU of EHT-STF, EHT-LTF, and data fields included in the EHT PPDU may be determined as shown inFIG.5. An EHT PPDU transmitted on a 40 MHz band, i.e., a 40 MHz EHT PPDU, may be configured based on the RU ofFIG.6. That is, a location of an RU of EHT-STF, EHT-LTF, and data fields included in the EHT PPDU may be determined as shown inFIG.6. Since the RU location ofFIG.6corresponds to 40 MHz, a tone-plan for 80 MHz may be determined when the pattern ofFIG.6is repeated twice. That is, an 80 MHz EHT PPDU may be transmitted based on a new tone-plan in which not the RU ofFIG.7but the RU ofFIG.6is repeated twice. When the pattern ofFIG.6is repeated twice, 23 tones (i.e., 11 guard tones+12 guard tones) may be configured in a DC region. That is, a tone-plan for an 80 MHz EHT PPDU allocated based on OFDMA may have 23 DC tones. Unlike this, an 80 MHz EHT PPDU allocated based on non-OFDMA (i.e., a non-OFDMA full bandwidth 80 MHz PPDU) may be configured based on a 996-RU, and may include 5 DC tones, 12 left guard tones, and 11 right guard tones. A tone-plan for 160/240/320 MHz may be configured in such a manner that the pattern ofFIG.6is repeated several times. The PPDU ofFIG.10may be determined (or identified) as an EHT PPDU based on the following method.
A receiving STA may determine a type of an RX PPDU as the EHT PPDU, based on the following aspect. For example, the RX PPDU may be determined as the EHT PPDU: 1) when a first symbol after an L-LTF signal of the RX PPDU is a BPSK symbol; 2) when RL-SIG in which the L-SIG of the RX PPDU is repeated is detected; and 3) when a result of applying “modulo 3” to a value of a length field of the L-SIG of the RX PPDU is detected as “0”. When the RX PPDU is determined as the EHT PPDU, the receiving STA may detect a type of the EHT PPDU (e.g., an SU/MU/Trigger-based/Extended Range type), based on bit information included in a symbol after the RL-SIG ofFIG.10. In other words, the receiving STA may determine the RX PPDU as the EHT PPDU, based on: 1) a first symbol after an L-LTF signal, which is a BPSK symbol; 2) RL-SIG contiguous to the L-SIG field and identical to L-SIG; 3) L-SIG including a length field in which a result of applying “modulo 3” is set to “0”; and 4) a 3-bit PHY version identifier of the aforementioned U-SIG (e.g., a PHY version identifier having a first value). For example, the receiving STA may determine the type of the RX PPDU as the HE PPDU, based on the following aspect. For example, the RX PPDU may be determined as the HE PPDU: 1) when a first symbol after an L-LTF signal is a BPSK symbol; 2) when RL-SIG in which the L-SIG is repeated is detected; and 3) when a result of applying “modulo 3” to a value of a length field of the L-SIG is detected as “1” or “2”. For example, the receiving STA may determine the type of the RX PPDU as a non-HT, HT, and VHT PPDU, based on the following aspect. For example, the RX PPDU may be determined as the non-HT, HT, and VHT PPDU: 1) when a first symbol after an L-LTF signal is a BPSK symbol; and 2) when RL-SIG in which L-SIG is repeated is not detected. In addition, even if the receiving STA detects that the RL-SIG is repeated, when a result of applying “modulo 3” to the length value of the L-SIG is detected as “0”, the RX PPDU may be determined as the non-HT, HT, and VHT PPDU. In the following example, a signal represented as a (TX/RX/UL/DL) signal, a (TX/RX/UL/DL) frame, a (TX/RX/UL/DL) packet, a (TX/RX/UL/DL) data unit, (TX/RX/UL/DL) data, or the like may be a signal transmitted/received based on the PPDU ofFIG.10. The PPDU ofFIG.10may be used to transmit/receive frames of various types. For example, the PPDU ofFIG.10may be used for a control frame. An example of the control frame may include a request to send (RTS), a clear to send (CTS), a power save-poll (PS-poll), BlockACKReq, BlockAck, a null data packet (NDP) announcement, and a trigger frame. For example, the PPDU ofFIG.10may be used for a management frame. An example of the management frame may include a beacon frame, a (re-)association request frame, a (re-)association response frame, a probe request frame, and a probe response frame. For example, the PPDU ofFIG.10may be used for a data frame. For example, the PPDU ofFIG.10may be used to simultaneously transmit at least two or more of the control frames, the management frame, and the data frame.
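The determination rules above can be summarized in a hedged classifier sketch; the inputs are assumed to be properties already detected from the received preamble, and the function name is illustrative.

def classify_rx_ppdu(first_symbol_is_bpsk, rl_sig_detected, lsig_length):
    if not first_symbol_is_bpsk:
        return "other"
    if not rl_sig_detected:
        return "non-HT/HT/VHT PPDU"
    if lsig_length % 3 == 0:
        # Additionally confirmed by the 3-bit PHY version identifier in the U-SIG.
        return "EHT PPDU (candidate)"
    return "HE PPDU"

print(classify_rx_ppdu(True, True, 333))   # EHT PPDU (candidate)
print(classify_rx_ppdu(True, True, 334))   # HE PPDU
print(classify_rx_ppdu(True, False, 333))  # non-HT/HT/VHT PPDU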
FIG.11illustrates an example of a modified transmission device and/or receiving device of the present specification. Each device/STA of the sub-figure (a)/(b) ofFIG.1may be modified as shown inFIG.11. A transceiver630ofFIG.11may be identical to the transceivers113and123ofFIG.1. The transceiver630ofFIG.11may include a receiver and a transmitter. A processor610ofFIG.11may be identical to the processors111and121ofFIG.1. Alternatively, the processor610ofFIG.11may be identical to the processing chips114and124ofFIG.1. A memory620ofFIG.11may be identical to the memories112and122ofFIG.1. Alternatively, the memory620ofFIG.11may be a separate external memory different from the memories112and122ofFIG.1. Referring toFIG.11, a power management module611manages power for the processor610and/or the transceiver630. A battery612supplies power to the power management module611. A display613outputs a result processed by the processor610. A keypad614receives inputs to be used by the processor610. The keypad614may be displayed on the display613. A SIM card615may be an integrated circuit which is used to securely store an international mobile subscriber identity (IMSI) and its related key, which are used to identify and authenticate subscribers on mobile telephony devices such as mobile phones and computers. Referring toFIG.11, a speaker640may output a result related to a sound processed by the processor610. A microphone641may receive an input related to a sound to be used by the processor610.
1. Spatial Reuse (SR) Behavior
In 802.11ax wireless LAN systems, SR operation is a method of improving spectral efficiency by increasing the number of parallel transmissions. Carrier Sense Threshold (CST) adjustment for inter-BSS transmissions detected through SR operation may be performed. CST coordination is achieved through two mechanisms: i) Overlapping Basic Service Set Packet Detect (OBSS PD)-based SR, and ii) Parametrized Spatial Reuse (PSR). The main difference between the two mechanisms lies in the degree of collaboration between the BSSs to identify SR-based opportunities. Both mechanisms include Transmission Power Control (TPC) to limit further interference generated by simultaneous transmissions. SR operation is introduced as a mechanism to increase the number of simultaneous transmissions and the spectral efficiency in OBSS. In some cases, dynamic sensitivity and transmit power tuning have been shown to significantly improve network performance and contribute to reducing the impact of the well-known hidden/exposed device problem. However, in some cases, modifying the CST or transmit power may exacerbate the hidden/exposed device problem by creating flow starvation and asymmetry.
FIG.12is a chart showing the effect of increasing and decreasing transmit power and sensitivity in a WLAN.
For example, increasing the sensitivity can contribute to more frequent access to the channel because the carrier sense (CS) area is reduced. However, this may lead to observing a higher number of collisions with hidden nodes. In addition, a more robust Modulation and Coding Scheme (MCS) is required because a more aggressive channel access policy may expose the receiver to higher levels of interference. SR operation relies on dynamic Clear Channel Assessment/Carrier Sense (CCA/CS) coordination to increase the number of transmit opportunities (TXOPs) in OBSS. The CCA/CS mechanism is triggered on a Wi-Fi device when it detects the preamble of another device's transmission. A detected transmission (exceeding the physical sensitivity threshold) may not decode properly if the received signal is poor. In contrast, for decoded transmissions that exceed the CCA/CS threshold, the physical or virtual carrier sensing action sets the medium in use. The capture effect is also used when detecting multiple signals, so operation can be locked to the strongest signal without experiencing packet collisions.
FIG.13is an example illustrating a CS area in a WLAN system.
The aforementioned concept is illustrated inFIG.13. InFIG.13, the AP A in the middle can detect a received signal higher than the receiver sensitivity of the antenna, but can only decode signals above the CCA/CS threshold. In addition, channel utilization is improved because the AP B transmission can be ignored using the OBSS/PD threshold due to the 11ax SR operation. In addition, transmit power limiting is applied in the case of a TXOP sensed using the OBSS/PD threshold. InFIG.13, transmit power is fixed and all devices use the same frequency channel.
1.1 OBSS PD-based SR
Upon receiving a PPDU, the MAC layer of a specific device receives notification from the PHY. At this time, among various operations, the node inspects the frame and determines whether the PPDU is an Intra-BSS frame or an Inter-BSS frame. By quickly identifying the source of an ongoing transmission, a HE STA can improve the probability of accessing a channel using an appropriate OBSS/PD value. 802.11ax defines a set of rules to limit the OBSS/PD threshold, and the upper limit is as follows:

OBSS/PD ≤ max(OBSS/PDmin, min(OBSS/PDmax, OBSS/PDmin + (TX_PWRref − TX_PWR)))

Here, OBSS/PDmin and OBSS/PDmax are −82 dBm and −62 dBm, respectively, and the reference power TX_PWRref is 21 dBm or 25 dBm depending on the capability of the device. TX_PWR means the transmit power at the antenna connector in dBm of the HE node that identifies the SR-based TXOP.
FIG.14is a graph showing adjustment rules for OBSS/PD and transmit power.
Along with sensitivity adjustment, SR operations include transmit power limiting for all transmissions that occur as a result of a sensed SR TXOP (i.e., after ignoring inter-BSS frames given via OBSS/PD-based SR operations). The maximum allowable transmit power (TX_PWRmax) is defined as:

TX_PWRmax = TX_PWRref − (OBSS/PD − OBSS/PDmin)

The previous equation holds for OBSS/PDmax ≥ OBSS/PD > OBSS/PDmin. Otherwise, the maximum transmit power is not limited. By applying power limiting together with the OBSS/PD value, the aim is to reduce the effect of simultaneous transmissions caused by SR. Simply put, the higher the OBSS/PD threshold (the more inter-BSS transmissions can be ignored), the lower the transmit power (the less interference must be generated). The transmit power limit lasts until the end of the SR TXOP identified by the HE node, which begins when the backoff reaches zero. This period depends on the active transmission period used to detect the SR TXOP.
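A hedged numeric sketch of the two rules above, using OBSS/PDmin = −82 dBm, OBSS/PDmax = −62 dBm and TX_PWRref = 21 dBm; the variable and function names are illustrative.

OBSS_PD_MIN = -82.0   # dBm
OBSS_PD_MAX = -62.0   # dBm
TX_PWR_REF = 21.0     # dBm (21 or 25 dBm depending on device capability)

def obss_pd_upper_bound(tx_pwr_dbm):
    # OBSS/PD <= max(OBSS/PDmin, min(OBSS/PDmax, OBSS/PDmin + (TX_PWRref - TX_PWR)))
    return max(OBSS_PD_MIN, min(OBSS_PD_MAX, OBSS_PD_MIN + (TX_PWR_REF - tx_pwr_dbm)))

def max_tx_power(obss_pd_dbm):
    # TX_PWRmax = TX_PWRref - (OBSS/PD - OBSS/PDmin), valid for
    # OBSS/PDmax >= OBSS/PD > OBSS/PDmin; otherwise power is not limited.
    if OBSS_PD_MIN < obss_pd_dbm <= OBSS_PD_MAX:
        return TX_PWR_REF - (obss_pd_dbm - OBSS_PD_MIN)
    return None

print(obss_pd_upper_bound(15.0))  # -76.0 dBm for a 15 dBm transmitter
print(max_tx_power(-72.0))        # 11.0 dBm: higher threshold, lower power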
1.2 Parametrized Spatial Reuse (PSR)
PSR operation is defined as an alternative to OBSS/PD-based SR for TB transmission. A node using a PSR opportunity (the opportunist) identifies the PSR opportunity in a sensed TB transmission. To this end, the opportunist detects the TB transmission and finds a transmission holder indicating support for PSR operation in the header of the TF (Trigger Frame). To identify a PSR opportunity, the opportunist must check whether the TB PPDU following a given TF packet can be ignored. To do so, the opportunist's intended transmit power must not exceed the requirement imposed by the transmission holder (encapsulated in the PSR_INPUT parameter). If the opportunist checks the PSR value of the detected TF and confirms that the intended transmit power is acceptable, it may transmit during the duration of the TB PPDU(s) (indicated in the Common Info field). In particular, the intended transmit power must be less than the PSR value measured in the legacy portion of the TF (i.e., the PHY header) minus the Received Power Level (RPL). The PSR value is calculated as follows:

PSR = TX_PWR_AP + I^max_AP

where TX_PWR_AP is the normalized transmit power in dBm at the output of the antenna connector and I^max_AP is the normalized value in dB that captures the maximum allowed interference at the transmission holder. In particular, I^max_AP is calculated by subtracting the minimum SNR that gives 10% PER (based on the highest MCS used for UL HE TB PPDU transmission) from the target RSSI indicated in the TF. A safety margin (set in the AP), which does not exceed 5 dB, is also included.
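The PSR computation and the opportunist's admission check above can be sketched as follows (hedged; all quantities in dB/dBm, names illustrative).

def psr_value(tx_pwr_ap_dbm, i_max_ap_db):
    # PSR = TX_PWR_AP + I^max_AP
    return tx_pwr_ap_dbm + i_max_ap_db

def psr_opportunity(intended_tx_pwr_dbm, psr_dbm, rpl_dbm):
    # Transmit only if the intended power is below PSR minus the RPL
    # measured on the legacy portion of the TF.
    return intended_tx_pwr_dbm < psr_dbm - rpl_dbm

psr = psr_value(20.0, -40.0)               # example AP power and allowed interference
print(psr_opportunity(10.0, psr, -65.0))   # 10 < (-20) - (-65) = 45 -> True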
2. Trigger frame and SR
FIG.15shows an operation according to UL-MU. As shown, a transmitting STA (e.g., AP) may perform channel access through contending (i.e., backoff operation) and transmit a trigger frame1030. That is, the transmitting STA (e.g., AP) may transmit a PPDU including a trigger frame1030. When a PPDU including a trigger frame is received, a TB (trigger-based) PPDU is transmitted after a delay equal to SIFS. The TB PPDUs1041and1042may be transmitted in the same time zone and transmitted from a plurality of STAs (e.g., user STAs) for which AIDs are indicated in the trigger frame1030. The ACK frame1050for the TB PPDU may be implemented in various forms. Specific characteristics of the trigger frame are described with reference toFIGS.16to19. Even when UL-MU communication is used, an orthogonal frequency division multiple access (OFDMA) technique or a MU MIMO technique may be used, and OFDMA and MU MIMO techniques may be used simultaneously.
FIG.16shows an example of a common information field of a trigger frame.
FIG.17shows another example of a common information field of a trigger frame.
FIG.16shows an HE variant of a common information field, andFIG.17shows an EHT variant of a common information field. That is, the trigger frame may include a common information field corresponding to the HE variant and/or a common information field corresponding to the EHT variant.
FIG.18shows a format of a UL Spatial Reuse subfield.
Referring toFIGS.16and17, when the trigger frame requests the HE TB PPDU, the UL Spatial Reuse subfield of the common information field delivers a value to be included in the Spatial Reuse field in the HE-SIG-A field of the requested HE TB PPDU. In the UL Spatial Reuse subfield, each Spatial Reuse n subfield (1<=n<=4) is set to the same value as the corresponding subfield in the HE-SIG-A field of the HE TB PPDU. The Spatial Reuse1, Spatial Reuse2, Spatial Reuse3, and Spatial Reuse4fields included in the HE-SIG-A field of the HE TB PPDU are defined as follows. Each Spatial Reuse field consists of 4 bits. Each Spatial Reuse field included in the HE-SIG-A field of the HE TB PPDU indicates whether a specific spatial reuse mode is allowed in a subband of the PPDU while the PPDU is being transmitted, and indicates a value used to determine the limit on transmission power of a Parameterized Spatial Reuse Transmission (PSRT) PPDU when PSR spatial reuse is allowed. First, if the Bandwidth field indicates 20 MHz, 40 MHz or 80 MHz, the Spatial Reuse1field is applied to the first 20 MHz subband. If the bandwidth field indicates 160/80+80 MHz, the Spatial Reuse1field is applied to the first 40 MHz subband of the 160 MHz operating band. The Spatial Reuse1field is set to one of the Spatial Reuse field encoding values for the HE TB PPDU as shown in Table 3 below. The Spatial Reuse1field refers to the first value in the TXVECTOR parameter SPATIAL_REUSE when present. Second, if the bandwidth field indicates 40 MHz or 80 MHz, the Spatial Reuse2field is applied to the second 20 MHz subband. If the channel width in which the STA operates is 20 MHz, the Spatial Reuse2field is set to the same value as the Spatial Reuse1field. If the channel width in which the STA operates is 40 MHz in the 2.4 GHz band, the Spatial Reuse2field is set to the same value as the Spatial Reuse1field. If the bandwidth field indicates 160/80+80 MHz, the Spatial Reuse2field is applied to the second 40 MHz subband of the 160 MHz operating band. The Spatial Reuse2field is set to one of the Spatial Reuse field encoding values for the HE TB PPDU as shown in Table 3 below. The Spatial Reuse2field refers to the second value in the TXVECTOR parameter SPATIAL_REUSE when present. Third, if the bandwidth field indicates 80 MHz, the Spatial Reuse3field is applied to the third 20 MHz subband. If the channel width in which the STA operates is 20 MHz or 40 MHz, the Spatial Reuse3field is set to the same value as the Spatial Reuse1field. If the bandwidth field indicates 160/80+80 MHz, the Spatial Reuse3field is applied to the third 40 MHz subband of the 160 MHz operating band. If the channel width in which the STA operates is 80+80 MHz, the Spatial Reuse3field is set to the same value as the Spatial Reuse1field. The Spatial Reuse3field is set to one of the Spatial Reuse field encoding values for the HE TB PPDU as shown in Table 3 below. The Spatial Reuse3field refers to the third value in the TXVECTOR parameter SPATIAL_REUSE when present. Fourth, if the bandwidth field indicates 80 MHz, the Spatial Reuse4field is applied to the fourth 20 MHz subband. If the channel width in which the STA operates is 20 MHz, the Spatial Reuse4field is set to the same value as the Spatial Reuse1field. If the channel width in which the STA operates is 40 MHz, the Spatial Reuse4field is set to the same value as the Spatial Reuse2field. If the bandwidth field indicates 160/80+80 MHz, the Spatial Reuse4field is applied to the fourth 40 MHz subband of the 160 MHz operating band. If the channel width in which the STA operates is 80+80 MHz, the Spatial Reuse4field is set to the same value as the Spatial Reuse2field. The Spatial Reuse4field is set to one of the Spatial Reuse field encoding values for the HE TB PPDU as shown in Table 3 below. The Spatial Reuse4field refers to the fourth value in the TXVECTOR parameter SPATIAL_REUSE when present.
TABLE 3
Value  Meaning
0      PSR_DISALLOW
1      PSR = −80 dBm
2      PSR = −74 dBm
3      PSR = −68 dBm
4      PSR = −62 dBm
5      PSR = −56 dBm
6      PSR = −50 dBm
7      PSR = −47 dBm
8      PSR = −44 dBm
9      PSR = −41 dBm
10     PSR = −38 dBm
11     PSR = −35 dBm
12     PSR = −32 dBm
13     PSR = −29 dBm
14     PSR ≥ −26 dBm
15     PSR_AND_NON_SRG_OBSS_PD_PROHIBITED
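For reference, Table 3 can be decoded with the following hedged helper (the dictionary and function names are illustrative):

SR_FIELD_PSR_DBM = {1: -80, 2: -74, 3: -68, 4: -62, 5: -56, 6: -50, 7: -47,
                    8: -44, 9: -41, 10: -38, 11: -35, 12: -32, 13: -29, 14: -26}

def decode_spatial_reuse_field(value):
    if value == 0:
        return "PSR_DISALLOW"
    if value == 15:
        return "PSR_AND_NON_SRG_OBSS_PD_PROHIBITED"
    qualifier = " or higher" if value == 14 else ""
    return "PSR = %d dBm%s" % (SR_FIELD_PSR_DBM[value], qualifier)

print(decode_spatial_reuse_field(5))   # PSR = -56 dBm
print(decode_spatial_reuse_field(14))  # PSR = -26 dBm or higher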
The four Spatial Reuse1,2,3, and4fields are arranged in order of frequency as follows. In the case of 20 MHz, one Spatial Reuse field corresponds to the entire 20 MHz (the other 3 Spatial Reuse fields show the same value). The Spatial Reuse field applies only to the 20 MHz used for transmission. In the case of 40 MHz, there are two Spatial Reuse fields, with the Spatial Reuse3field having the same value as the Spatial Reuse1field and the Spatial Reuse4field having the same value as the Spatial Reuse2field. Each pair of Spatial Reuse fields applies only to the corresponding 20 MHz used for transmission. In the case of 80 MHz, there are four Spatial Reuse fields, one for each 20 MHz subchannel. In the case of OFDMA transmission of a given BW, each Spatial Reuse field corresponding to a 20 MHz subband is also applicable to the 242-tone RUs aligned closest to the frequency of the 20 MHz subband described above (in the tone plan for that BW). The correspondence from the Spatial Reuse field to the 242-tone RU is also applied to all RUs within the 242-tone RU. The above also implies that the 20 MHz OBSS STA uses the Spatial Reuse field corresponding to its own 20 MHz channel, the 40 MHz OBSS STA located in the lower frequency half of the 80 MHz BSS uses the values of the Spatial Reuse1field and Spatial Reuse2field, and the 40 MHz OBSS STA located at the upper frequency half of the 80 MHz BSS uses the Spatial Reuse3field and Spatial Reuse4field values. For 160 MHz and 80+80 MHz, there are four Spatial Reuse fields, one for each 40 MHz subchannel. In the case of OFDMA transmission of a given BW, each Spatial Reuse field corresponding to a 40 MHz subband can also be applied to the 484-tone RU aligned closest to the frequency of the aforementioned 40 MHz subband. The correspondence from the Spatial Reuse field to the 484-tone RU is also applied to all RUs within the 484-tone RU. The table below shows an example of encoding a Spatial Reuse field for the HE SU PPDU, HE ER SU PPDU, and HE MU PPDU.
TABLE 4
Value  Meaning
0      PSR_DISALLOW
1-12   Reserved
13     SR_RESTRICTED
14     SR_DELAYED
15     PSR_AND_NON_SRG_OBSS_PD_PROHIBITED
Returning toFIG.18again, when the trigger frame requests the EHT TB PPDU, each Spatial Reuse n subfield (1<=n<=4) of the Common Info field is determined based on one of the Spatial Reuse1subfield and the Spatial Reuse2subfield of the Special User Info field.
FIG.19shows an example of a Special User Info field format.
If the Special User Info field is included in the trigger frame, the Special User Info Field Present subfield of the EHT variant of the Common Info field is set to 0; otherwise, it is set to 1. The Special User Info field is identified by an AID12 value of 2007 and is optionally present in a trigger frame generated by the EHT AP. The Special User Info field, if present, is located immediately after the Common Info field of the trigger frame and conveys the nonderived subfields of the U-SIG field of the requested EHT TB PPDU, and the Special User Info Field Present subfield of the Common Info field is set to 0. The existence of the Special User Info field in the trigger frame is indicated by B55 of the Common Info field in the trigger frame. B55 is set to 1 to indicate that there is no Special User Info field in the trigger frame, and is set to 0 to indicate that the Special User Info field exists in the trigger frame right after the Common Info field. The Spatial Reuse n subfield (1<=n<=2) ofFIG.19is set to the same value as the corresponding Spatial Reuse subfield in the U-SIG field of the EHT TB PPDU. The Spatial Reuse1and Spatial Reuse2fields included in the U-SIG field of the EHT TB PPDU are defined as follows. Each Spatial Reuse field consists of 4 bits. Each Spatial Reuse field included in the U-SIG field of the EHT TB PPDU indicates whether a specific spatial reuse mode is allowed in a subband of the PPDU while the PPDU is being transmitted, and indicates a value used to determine the transmission power limit of the PSRT PPDU when PSR spatial reuse is allowed. First, if the bandwidth field indicates 20 MHz or 40 MHz, the Spatial Reuse1field is applied to the first 20 MHz subband. If the bandwidth field indicates 80 MHz, the Spatial Reuse1field is applied to each 20 MHz subchannel of the first 40 MHz subband within the 80 MHz operating band. If the bandwidth field indicates 160 MHz, the Spatial Reuse1field is applied to each 20 MHz subchannel of the first 80 MHz subband within the 160 MHz operating band.
If the bandwidth field indicates 320 MHz-1 or 320 MHz-2, the Spatial Reuse1field is applied to each 20 MHz subchannel of the first 160 MHz subband within the 320 MHz operating band. The Spatial Reuse1field is set to the SPATIAL_REUSE(1) parameter of TXVECTOR including the Spatial Reuse field encoding value for the HE TB PPDU as shown in Table 3 above. Second, if the bandwidth field indicates 20 MHz, the Spatial Reuse2field is set to the same value as the Spatial Reuse1field, and is disregarded if dot11EHTBaseLineFeaturesImplementedOnly is true. If the bandwidth field indicates 40 MHz, the Spatial Reuse2field is applied to the second 20 MHz subband. When operating in the 2.4 GHz band, the Spatial Reuse2field is set to the same value as the Spatial Reuse1field. If the bandwidth field indicates 80 MHz, the Spatial Reuse2field is applied to each 20 MHz subchannel of the second 40 MHz subband within the 80 MHz operating band. If the bandwidth field indicates 160 MHz, the Spatial Reuse2field is applied to each 20 MHz subchannel of the second 80 MHz subband within the 160 MHz operating band. If the bandwidth field indicates 320 MHz-1 or 320 MHz-2, the Spatial Reuse2field is applied to each 20 MHz subchannel of the second 160 MHz subband within the 320 MHz operating band. The Spatial Reuse2field is set to the SPATIAL_REUSE(2) parameter of TXVECTOR including the Spatial Reuse field encoding value for the HE TB PPDU as shown in Table 3 above.
3. Embodiments Applicable to this Specification
In the WLAN 802.11be system, the transmission of an increased number of streams is considered by using a wider band than the existing 802.11ax or by using more antennas to increase peak throughput. In addition, the present specification also considers a method of aggregating and using various bands/links. Meanwhile, in order to reduce interference between BSSs, spatial reuse can be used in the same way as in 802.11ax, and the present specification proposes a configuration of a spatial reuse field of an EHT TB PPDU. The EHT trigger frame reuses the structure of the HE trigger frame for backward compatibility with 802.11ax; instead, the EHT Common Info field and the EHT User Info field for the EHT TB PPDU can be configured. The Special User Info field is a User Info field that does not deliver user-specific information and delivers extended common information that is not provided in the Common Info field. When the Special User Info field is included in the trigger frame, the Special User Info Field Flag subfield of the EHT variant of the Common Info field is set to 0, and when the Special User Info field is not included in the trigger frame, the Special User Info Field Flag subfield is set to 1. The Special User Info field is identified by an AID12 value of 2007 and is optionally present in a trigger frame generated by the EHT AP. If the Special User Info field exists, it is located immediately after the Common Info field of the trigger frame and conveys the nonderived subfields of the U-SIG field of the requested EHT TB PPDU, and the Special User Info Field Flag subfield of the Common Info field is set to 0. The existence of the Special User Info field in the trigger frame is indicated by B55 of the Common Info field in the trigger frame. B55 is set to 1 to indicate that there is no Special User Info field in the trigger frame, and is set to 0 to indicate that the Special User Info field exists in the trigger frame immediately after the Common Info field.
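The B55 signaling just described amounts to the following hedged check (bit numbering assumed LSB-first; names illustrative):

SPECIAL_USER_INFO_AID12 = 2007

def special_user_info_present(common_info_bits):
    # B55 = 0: a Special User Info field follows right after the Common Info
    # field; B55 = 1: no Special User Info field in the trigger frame.
    return ((common_info_bits >> 55) & 1) == 0

def is_special_user_info_field(aid12):
    # The Special User Info field is identified by AID12 = 2007.
    return aid12 == SPECIAL_USER_INFO_AID12

print(special_user_info_present(0))        # True (B55 == 0)
print(special_user_info_present(1 << 55))  # False (B55 == 1)
print(is_special_user_info_field(2007))    # True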
Referring toFIG.19, in the Special User Info field, the AID12 subfield consists of 12 bits, the PHY Version ID subfield consists of 3 bits, the UL Bandwidth Extension subfield consists of 2 bits, the Spatial Reuse1subfield consists of 4 bits, the Spatial Reuse2subfield consists of 4 bits, the U-SIG Disregard And Validate subfield consists of 12 bits, and the Reserved subfield consists of 3 bits. The PHY Version ID subfield indicates EHT or the Wi-Fi version after EHT. For EHT, the PHY Version ID subfield is set to 0. The UL Bandwidth Extension subfield indicates, together with the UL BW subfield of the Common Info field, the bandwidth of the TB PPDU requested from the addressed EHT STA (i.e., the bandwidth in the U-SIG field of the EHT TB PPDU). The UL Bandwidth Extension subfield is defined in the table below.
TABLE 5
UL BW  Bandwidth for HE TB PPDU (MHz)  UL Bandwidth Extension  Bandwidth for EHT TB PPDU (MHz)
0      20                              0                       20
0      20                              1                       Reserved
0      20                              2                       Reserved
0      20                              3                       Reserved
1      40                              0                       40
1      40                              1                       Reserved
1      40                              2                       Reserved
1      40                              3                       Reserved
2      80                              0                       80
2      80                              1                       Reserved
2      80                              2                       Reserved
2      80                              3                       Reserved
3      160                             0                       Reserved
3      160                             1                       160
3      160                             2                       320-1
3      160                             3                       320-2
The following shows an example of the configuration of the UL BW and UL BW Extension fields when an Aggregated-PPDU (A-PPDU) in which a HE Sub-PPDU and an EHT Sub-PPDU are mixed is triggered.
TABLE 6
UL BW  Bandwidth for HE TB PPDU (MHz)  UL Bandwidth Extension  Bandwidth for EHT TB PPDU (MHz)
0      20                              0                       20
0      20                              1                       Reserved
0      20                              2                       Reserved
0      20                              3                       Reserved
1      40                              0                       40
1      40                              1                       Reserved
1      40                              2                       Reserved
1      40                              3                       Reserved
2      80                              0                       80
2      80                              1                       160
2      80                              2                       320-1
2      80                              3                       320-2
3      80                              0                       80
3      160                             1                       160
3      160                             2                       320-1
3      160                             3                       320-2
The UL BW and UL BW Extension fields may be configured in a manner different from the above table.
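Table 5 above reduces to a lookup from the (UL BW, UL Bandwidth Extension) pair to the EHT TB PPDU bandwidth; the following hedged sketch covers the non-reserved entries (names illustrative):

EHT_TB_BANDWIDTH = {
    (0, 0): "20 MHz", (1, 0): "40 MHz", (2, 0): "80 MHz",
    (3, 1): "160 MHz", (3, 2): "320 MHz-1", (3, 3): "320 MHz-2",
}

def eht_tb_bandwidth(ul_bw, ul_bw_extension):
    return EHT_TB_BANDWIDTH.get((ul_bw, ul_bw_extension), "Reserved")

print(eht_tb_bandwidth(3, 2))  # 320 MHz-1
print(eht_tb_bandwidth(0, 1))  # Reserved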
The Spatial Reuse1and2subfields are set to the same values as the Spatial Reuse1and2subfields of the U-SIG field of the EHT TB PPDU, which are values for specific channels according to the BW and will be described in more detail below. The U-SIG Disregard And Validate subfield is set to the value that is copied as it is into the reserved (Disregard and Validate) fields in the U-SIG of the EHT TB PPDU. The 3 bits of the Reserved subfield can be reserved or used for other purposes.
FIG.20shows an example of an EHT User Info field format.
Referring toFIG.20, a PS160 field indicates an RU and a multi-resource unit (MRU) allocated to an STA along with the RU Allocation field.
FIG.10shows the structure of a representative EHT PPDU. It can be used for SU and MU transmission, and the EHT-SIG may not be included when a TB PPDU is transmitted. The Universal-SIG (U-SIG) includes a version independent field and a version dependent field. The EHT-SIG can carry various common information and user specific information. The bandwidth can be indicated using the bandwidth field, which can be included in the version independent part of the U-SIG. The corresponding field may consist of 3 bits and may contain only bandwidth information without including information on the preamble puncturing pattern. In addition, puncturing information may be carried in other fields of the U-SIG or in specific fields of the EHT-SIG. In addition, the version independent field may include a 3-bit version identifier indicating 802.11be or a Wi-Fi version after 802.11be, a 1-bit DL/UL field, BSS color, TXOP duration, etc., and the version dependent field may include information such as the PPDU type. In addition, the U-SIG is jointly encoded with two symbols and consists of 52 data tones and 4 pilot tones for each 20 MHz. Also, it is modulated in the same way as HE-SIG-A, that is, with BPSK at a 1/2 code rate. Also, the EHT-SIG can be encoded with a variable MCS and, as in the existing 802.11ax, may have a 1 2 1 2 . . . structure in units of 20 MHz (it may also be composed of other structures, for example, 1 2 3 4 . . . or 1 2 1 2 3 4 3 4 . . . ). It may also be configured in units of 80 MHz, and in a bandwidth of 80 MHz or higher, the EHT-SIG may be duplicated in units of 80 MHz. Spatial Reuse can be used to reduce interference with the OBSS. This specification particularly proposes a configuration of the spatial reuse field in the EHT TB PPDU. In the EHT TB PPDU, the spatial reuse field may be located in the U-SIG version dependent field and may be composed of 4 fields as in 802.11ax, and each field may use 4 bits. The meaning of each entry expressed by each 4 bits may be the same as that described above or may have a different meaning. Alternatively, each field may use a different number of bits. Also, in the EHT TB PPDU, the spatial reuse field may consist of 2 fields instead of 4 fields. The following is a configuration of a representative U-SIG field of the EHT TB PPDU.
TABLE 7
U-SIG-1, B0-B2, PHY Version Identifier (3 bits): Differentiates between different PHY clauses. Set to 0 for EHT. Values 1-7 are Validate if dot11EHTBaseLineFeaturesImplementedOnly equals true.
U-SIG-1, B3-B5, BW (3 bits): Set to 0 for 20 MHz. Set to 1 for 40 MHz. Set to 2 for 80 MHz. Set to 3 for 160 MHz. Set to 4 for 320 MHz-1. Set to 5 for 320 MHz-2.
U-SIG-1, B6, UL/DL (1 bit): Set to 1 to indicate that the PPDU is addressed to the AP.
U-SIG-1, B7-B12, BSS Color (6 bits): An identifier of the BSS. See the TXVECTOR parameter BSS_COLOR.
U-SIG-1, B13-B19, TXOP (7 bits): Set to 127 to indicate no duration information if the TXVECTOR parameter TXOP_DURATION is UNSPECIFIED. Set to a value less than 127 to indicate duration information for NAV setting and protection of the TXOP as follows: if the TXVECTOR parameter TXOP_DURATION is less than 512, then B13 is set to 0 and B14-B19 are set to floor(TXOP_DURATION/8); otherwise, B13 is set to 1 and B14-B19 are set to floor((TXOP_DURATION−512)/128).
U-SIG-1, B20-B25, Disregard (6 bits): Set to the value indicated in B25-B30 of the U-SIG Disregard And Validate subfield in the Special User Info field in the Trigger frame, and Disregard if dot11EHTBaseLineFeaturesImplementedOnly equals true. See Table 9-29j4 (Mapping from Special User Info field to U-SIG-1 and U-SIG-2 fields in the EHT TB PPDU).
U-SIG-2, B0-B1, PPDU Type And Compressed Mode (2 bits): Set to a value of 0 for a TB PPDU. For further clarification on all values of this field, refer to the combination of the UL/DL and PPDU Type And Compression Mode fields. Undefined values of this field are Validate if dot11EHTBaseLineFeaturesImplementedOnly equals true.
U-SIG-2, B2, Validate (1 bit): Set to the value indicated in B31 of the U-SIG Disregard And Validate subfield in the Special User Info field in the Trigger frame, and Validate if dot11EHTBaseLineFeaturesImplementedOnly equals true.
U-SIG-2, B3-B6, Spatial Reuse 1 (4 bits): Indicates whether or not specific spatial reuse modes are allowed in a subband of the PPDU during the transmission of this PPDU and, if PSR spatial reuse is allowed, indicates a value that is used to determine a limit on the transmit power of the PSRT PPDU. If the Bandwidth field indicates 20 MHz or 40 MHz, then this field applies to the first 20 MHz subband. If the Bandwidth field indicates 80 MHz, then this field applies to each 20 MHz subchannel of the first 40 MHz subband within the 80 MHz operating band. If the Bandwidth field indicates 160 MHz, then this field applies to each 20 MHz subchannel of the first 80 MHz subband within the 160 MHz operating band. If the Bandwidth field indicates 320 MHz-1 or 320 MHz-2, then this field applies to each 20 MHz subchannel of the first 160 MHz subband within the 320 MHz operating band.
U-SIG-2, B7-B10, Spatial Reuse 2 (4 bits): Indicates whether or not specific spatial reuse modes are allowed in a subband of the PPDU during the transmission of this PPDU and, if PSR spatial reuse is allowed, indicates a value that is used to determine a limit on the transmit power of the PSRT PPDU. If the Bandwidth field indicates 20 MHz, this field is set to the same value as the Spatial Reuse 1 field, and Disregard if dot11EHTBaseLineFeaturesImplementedOnly equals true. If the Bandwidth field indicates 40 MHz, this field applies to the second 20 MHz subband. If operating in the 2.4 GHz band, this field is set to the same value as the Spatial Reuse 1 field. If the Bandwidth field indicates 80 MHz, then this field applies to each 20 MHz subchannel of the second 40 MHz subband within the 80 MHz operating band. If the Bandwidth field indicates 160 MHz, then this field applies to each 20 MHz subchannel of the second 80 MHz subband within the 160 MHz operating band. If the Bandwidth field indicates 320 MHz-1 or 320 MHz-2, then this field applies to each 20 MHz subchannel of the second 160 MHz subband within the 320 MHz operating band.
U-SIG-2, B11-B15, Disregard (5 bits): Set to the value indicated in B32-B36 of the U-SIG Disregard And Validate subfield in the Special User Info field in the Trigger frame, and Disregard if dot11EHTBaseLineFeaturesImplementedOnly equals true.
U-SIG-2, B16-B19, CRC (4 bits): CRC for bits 0-41 of the U-SIG field. Bits 0-41 of the U-SIG field correspond to bits 0-25 of the U-SIG-1 field followed by bits 0-15 of the U-SIG-2 field.
U-SIG-2, B20-B25, Tail (6 bits): Used to terminate the trellis of the convolutional decoder. Set to 0.
Total number of bits in the U-SIG: 52
The above U-SIG field can be configured by copying the fields of the trigger frame as they are. This specification proposes a method of configuring the 4 Spatial Reuse fields of the Common Info field and the 2 Spatial Reuse fields of the EHT Common Info field (or Special User Info field) considering the case where the trigger frame triggers the HE TB PPDU, the EHT TB PPDU, or the TB A-PPDU. Here, it is assumed that the trigger frame is an EHT trigger frame capable of triggering all of the HE TB PPDU, the EHT TB PPDU, or the TB A-PPDU. In addition, it is assumed that the Common Info field of the trigger frame is a HE/EHT variant Common Info field, and the EHT Common Info field of the trigger frame is assumed to be a Special User Info field. The structure of the EHT Trigger frame, HE TB PPDU, and EHT TB PPDU is as follows. The EHT Trigger frame consists of a HE/EHT variant Common Info field, a (Special User Info field) and a HE/EHT variant User Info field. The EHT variant Common Info field includes 4 Spatial Reuse fields, and the 4 Spatial Reuse fields are applied to each of 4 subchannels and are defined for SR (Spatial Reuse) of the OBSS HE STA. The Special User Info field exists when AID=2007 and includes two Spatial Reuse fields; the two Spatial Reuse fields are duplicated into the two Spatial Reuse fields in the U-SIG of the EHT TB PPDU and are defined for the SR of the OBSS EHT STA. As described above, the bandwidth of the EHT TB PPDU is indicated through the 2-bit UL BW field in the EHT variant Common Info field and the 2-bit UL Bandwidth Extension subfield in the Special User Info field. Among the UL HE-SIG-A2 Reserved subfields in the HE variant Common Info field, B54 and B55 are used as the HE/EHT P160 and Special User Info Field Flag subfields in the EHT variant Common Info field, respectively (seeFIGS.16and17).
The HE/EHT P160 subfield indicates whether the primary 160 MHz is a HE TB PPDU (set to 1) or an EHT TB PPDU (set to 0). The Special User Info Field Flag subfield indicates whether the Special User Info field exists (set to 0) or not (set to 1). That is, B54 and B55 of the UL HE-SIG-A2 Reserved subfields were originally set to 11, but when the EHT Trigger frame triggers the EHT TB PPDU, B54 and B55 are set to 00. The HE TB PPDU includes 4 Spatial Reuse fields in HE-SIG-A. The EHT TB PPDU includes two Spatial Reuse fields in the U-SIG. For the two Spatial Reuse fields included in the U-SIG, the values of the two Spatial Reuse fields of the Special User Info field are duplicated.
3.1. When Trigger Frame Triggers HE TB PPDU Only
A trigger frame may be configured simply like an existing HE trigger frame without the EHT Common Info field and the EHT User Info field. In this case, the UL BW indicates the BW of the HE TB PPDU, and accordingly, the 4 Spatial Reuse fields can also be set in the same way as in the existing 802.11ax, and this can be used to configure the Spatial Reuse field in HE-SIG-A when the HE TB PPDU is transmitted. That is, the 4 Spatial Reuse fields in the Common Info field and the 4 Spatial Reuse fields in the HE TB PPDU may be set as shown in Appendix 1 described later.
3.2. When Trigger Frame Triggers Only EHT TB PPDU
When the trigger frame triggers only the EHT TB PPDU, the UL BW of the Common Info field may be set to a specific value to indicate the BW of the EHT TB PPDU, and the OBSS HE STA and the non-associated HE STA can use it to determine the BW of the TB PPDU. (It may vary depending on the UL BW configuration, but in the UL BW configuration example above, the same BW can be determined when the 20/40/80/160 MHz EHT TB PPDU is triggered; if the 320 MHz EHT TB PPDU is triggered, the UL BW can be determined as 160 MHz.) Therefore, since the OBSS HE STA and the non-associated HE STA can perform Spatial Reuse using the 4 Spatial Reuse fields of the Common Info field, the four Spatial Reuse fields in the Common Info field of the trigger frame need to be set to specific values. In the example of the UL BW and UL Bandwidth Extension subfields above, the 4 Spatial Reuse fields in the Common Info field can be set like those of the existing 802.11ax trigger frame, according to the BW indicated by the UL BW (20/40/80/160 MHz) (even if it is not the above example, this is the case when the BW indicated in the UL BW is the same as the triggered 20/40/80/160 MHz EHT TB PPDU). That is, the 4 Spatial Reuse fields in the Common Info field can be set as in Appendix 1 described later. Basically, this may be a value independent of the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but as shown in Appendix 3 described later, the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the four Spatial Reuse fields in the Common Info field. In this case, the two Spatial Reuse fields in the EHT Common Info field (Special User Info field) may be set identically (in other words, the method of configuring the Spatial Reuse field in the U-SIG of the EHT TB PPDU in Appendix 3 is applied as it is to the composition of the two Spatial Reuse fields in the EHT Common Info field, and this value may be used for setting the fields) or reserved. The four Spatial Reuse fields in the Common Info field can be set as follows according to the BW (20/40/80/160 MHz) indicated in the UL BW in the example of the UL BW and UL Bandwidth Extension subfields above, when the 20/40/80/160 MHz EHT TB PPDU is triggered.
(Even if it is not the above example, this applies when the BW indicated in the UL BW is the same as the triggered 20/40/80/160 MHz EHT TB PPDU.) The settings of the two Spatial Reuse fields in the EHT Common Info field are described in Appendix 3 and can be set using them. When the 20 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to the same value, and one of these two values can be duplicated to set the same value in all four fields in the Common Info field. When the 40 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to spatial reuse values corresponding to each 20 MHz. These values can be copied and set as they are in the corresponding 20 MHz field among the 4 Spatial Reuse fields in the Common Info field. In other words, the value of the first field of the two Spatial Reuse fields in the EHT Common Info field can be duplicated to the first and third values of the four Spatial Reuse fields in the Common Info field, and the value of the second field among the two Spatial Reuse fields in the EHT Common Info field can be duplicated to the second and fourth values among the four Spatial Reuse fields in the Common Info field. When the 80 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to spatial reuse values corresponding to each 40 MHz, and these values are duplicated as they are: the first two fields among the four Spatial Reuse fields in the Common Info field can be set to the first value of the two Spatial Reuse fields in the EHT Common Info field, and the last two fields among the four Spatial Reuse fields in the Common Info field can be set to the last value among the two Spatial Reuse fields in the EHT Common Info field. In addition, in order to correct the value according to the BW difference (or according to the normalization difference), after adding or subtracting a specific dBm value to or from the meaning of the value (i.e., the PSR value in dBm), it can be changed to the value corresponding to the maximum dBm value that is smaller than or equal to this value; in this case, it may be desirable to compensate by subtracting 6 (or 20 log 2) dB in particular (see the sketch below). Even if the channel size corresponding to each spatial reuse field value is different, if normalization is applied to the same channel size (for example, normalization per 20 MHz), it is not necessary to correct when copying and setting, and this is the same in the various situations below.
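Under the stated assumptions, the 80 MHz duplication-with-correction rule above may be sketched as follows: each per-40 MHz EHT value is corrected by −6 dB (20 log 2), mapped down to the largest encodable PSR value from Table 3 that does not exceed it, and duplicated into two of the four HE fields. This is illustrative, not normative.

TABLE3_PSR_DBM = [-80, -74, -68, -62, -56, -50, -47, -44,
                  -41, -38, -35, -32, -29, -26]   # encodings 1..14

def encode_psr(psr_dbm):
    # Largest encodable PSR value smaller than or equal to psr_dbm.
    candidates = [i + 1 for i, v in enumerate(TABLE3_PSR_DBM) if v <= psr_dbm]
    return candidates[-1] if candidates else 0    # 0: PSR_DISALLOW

def common_info_fields_for_80mhz(eht_sr1_dbm, eht_sr2_dbm, correction_db=-6.0):
    sr1 = encode_psr(eht_sr1_dbm + correction_db)
    sr2 = encode_psr(eht_sr2_dbm + correction_db)
    return [sr1, sr1, sr2, sr2]   # first/last two HE fields per 40 MHz half

print(common_info_fields_for_80mhz(-50.0, -44.0))  # [5, 5, 6, 6]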
When the 160 MHz EHT TB PPDU is triggered, the 2 Spatial Reuse fields in the EHT Common Info field are set to spatial reuse values corresponding to each 80 MHz, and these values are copied as they are: the first two fields among the 4 Spatial Reuse fields in the Common Info field can be set to the first value among the two Spatial Reuse fields in the EHT Common Info field, and the last two fields among the four Spatial Reuse fields in the Common Info field can be set to the last value among the two Spatial Reuse fields in the EHT Common Info field. In addition, in order to correct the value according to the BW difference (or normalization difference), a specific dBm value may be added to or subtracted from the meaning (dBm value) of the corresponding value, and the result may then be changed to the value corresponding to the maximum dBm value that is smaller than or equal to this value. In this case, it may be desirable to compensate by subtracting 6 (or 20 log 2) dB in particular. However, if the values of the two Spatial Reuse fields in the EHT Common Info field are normalized to a 20 MHz channel and the values of the four Spatial Reuse fields in the Common Info field are simply normalized to the corresponding channel, 40 MHz, it may be desirable to correct by adding 6 (or 20 log 2) dB. The 4 Spatial Reuse fields in the Common Info field can be set as follows according to the BW (160 MHz) indicated in the UL BW when the 320 MHz EHT TB PPDU is triggered in the example of the UL BW and UL Bandwidth Extension subfields above (even if it is not the above example, if the 320 MHz EHT TB PPDU is triggered, the BW indicated in the UL BW is 160 MHz). It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and is transmitted below 160 MHz. In this case, the 4 Spatial Reuse fields in the Common Info field may be set as for 160 MHz in Appendix 1 described later. However, the 160 MHz may be the 160 MHz including the channel through which the trigger frame is transmitted. Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to one of the values of the 4 Spatial Reuse fields in the Common Info field, for example, the largest value or the smallest value. In this case, the field corresponding to the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field among the 2 Spatial Reuse fields in the EHT Common Info field may be set identically (in other words, it is set to one of the four Spatial Reuse field values, for example, the largest or smallest value; this value may be used to configure the Spatial Reuse field corresponding to the corresponding 160 MHz in the U-SIG of the EHT TB PPDU) or reserved. Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 160 MHz other than the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to an appropriate Spatial Reuse value, and this value can be used to set the field corresponding to the 160 MHz other than the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU. The 4 Spatial Reuse fields in the Common Info field can be set as follows according to the BW (160 MHz) indicated in the UL BW when the 320 MHz EHT TB PPDU is triggered in the example of the UL BW and UL Bandwidth Extension subfields above (even if it is not the above example, if the 320 MHz EHT TB PPDU is triggered, the BW indicated in the UL BW is 160 MHz). It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and is transmitted below 160 MHz. The settings of the two Spatial Reuse fields in the EHT Common Info field are described in Appendix 3 and can be set using them. That is, when a 320 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are each set to a spatial reuse value corresponding to 160 MHz.
Among the two Spatial Reuse fields in the EHT Common Info field, the value corresponding to the 160 MHz including the channel through which the trigger frame is transmitted can be copied, and that value can be set identically to the four fields in the Common Info field. In addition, in order to correct the value according to the difference in BW (or according to the difference in normalization), a specific dB value can be added to or subtracted from the meaning (dBm value) of the corresponding value, after which the value can be changed to the value corresponding to the maximum dBm value that is smaller than or equal to the corrected value (the four values are set the same). In this case, it may be desirable to compensate by subtracting 12 dB (or 20 log 4 dB) in particular. However, if the values of the 2 Spatial Reuse fields in the EHT Common Info field are normalized to a 20 MHz channel while the values of the 4 Spatial Reuse fields in the Common Info field are simply normalized to their corresponding channel, 40 MHz, it may be desirable to correct by adding 6 dB (or 20 log 2 dB). The 4 Spatial Reuse fields in the Common Info field can be set as follows according to the BW (160 MHz) indicated in the UL BW when a 320 MHz EHT TB PPDU is triggered in the example of the UL BW and UL Bandwidth Extension subfields above. (Even outside the above example, if a 320 MHz EHT TB PPDU is triggered, the BW indicated in the UL BW is 160 MHz.) It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. The 4 Spatial Reuse fields in the Common Info field can be set like the 160 MHz case in Appendix 1 described later. However, the 160 MHz may be one of Primary 160 MHz and Secondary 160 MHz (or low 160 MHz and high 160 MHz); for example, it can simply be Primary 160 MHz. Alternatively, comparing the Spatial Reuse values (or PSR values; the same applies below) of Primary 160 MHz and Secondary 160 MHz (or low 160 MHz and high 160 MHz), the fields can be set to the 160 MHz Spatial Reuse value having the larger or smaller value. Or they can be set to the 160 MHz Spatial Reuse value having the smaller or larger value between the minimum or maximum of the four 40 MHz Spatial Reuse values within Primary 160 MHz (or low 160 MHz) and the minimum or maximum of the four 40 MHz Spatial Reuse values within Secondary 160 MHz (or high 160 MHz). Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to one of the values of the 4 Spatial Reuse fields in the Common Info field, for example, the largest value or the smallest value. In this case, the field corresponding to that 160 MHz among the 2 Spatial Reuse fields in the EHT Common Info field may be set identically or reserved. (That is, it can be set to one of the four Spatial Reuse field values, for example, the largest value or the smallest value.
This value may be used to configure the Spatial Reuse field corresponding to that 160 MHz in the U-SIG of the EHT TB PPDU.) Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 160 MHz other than the 160 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to an appropriate Spatial Reuse value, and this value can be used to set the field corresponding to that other 160 MHz among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU. As another example, the 4 Spatial Reuse fields in the Common Info field can be set as follows according to the BW (160 MHz) indicated in the UL BW when a 320 MHz EHT TB PPDU is triggered in the example of the UL BW and UL Bandwidth Extension subfields above. (Even outside the above example, if a 320 MHz EHT TB PPDU is triggered, the BW indicated in the UL BW is 160 MHz.) It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. There are four 40 MHz Spatial Reuse values within each of Primary 160 MHz and Secondary 160 MHz (or low 160 MHz and high 160 MHz), and each Spatial Reuse field can be set to the larger or smaller value obtained by comparing the 40 MHz Spatial Reuse values at the same position within the two 160 MHz. That is, the first Spatial Reuse field in the Common Info field can be set by comparing the lowest 40 MHz Spatial Reuse value of Primary 160 MHz (or low 160 MHz) and the lowest 40 MHz Spatial Reuse value of Secondary 160 MHz (or high 160 MHz). The second Spatial Reuse field in the Common Info field can be set by comparing the second lowest 40 MHz Spatial Reuse value of Primary 160 MHz (or low 160 MHz) and the second lowest 40 MHz Spatial Reuse value of Secondary 160 MHz (or high 160 MHz). The third Spatial Reuse field in the Common Info field can be set by comparing the second highest 40 MHz Spatial Reuse value of Primary 160 MHz (or low 160 MHz) and the second highest 40 MHz Spatial Reuse value of Secondary 160 MHz (or high 160 MHz). The fourth Spatial Reuse field in the Common Info field can be set by comparing the highest 40 MHz Spatial Reuse value of Primary 160 MHz (or low 160 MHz) and the highest 40 MHz Spatial Reuse value of Secondary 160 MHz (or high 160 MHz). As yet another example, the 4 Spatial Reuse fields in the Common Info field can be set as follows according to the BW (160 MHz) indicated in the UL BW when a 320 MHz EHT TB PPDU is triggered in the example of the UL BW and UL Bandwidth Extension subfields above. (Even outside the above example, if a 320 MHz EHT TB PPDU is triggered, the BW indicated in the UL BW is 160 MHz.) It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. The two Spatial Reuse fields in the EHT Common Info field can be set as described in Appendix 3. That is, when a 320 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are each set to a spatial reuse value corresponding to 160 MHz. By copying the larger or smaller of the two Spatial Reuse field values in the EHT Common Info field, as sketched below, the corresponding value can be set identically to the four fields in the Common Info field.
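A minimal sketch of the two variants just described, assuming the per-40 MHz Spatial Reuse values are available as plain Python lists ordered from low to high (hypothetical helper names; pick=min reflects the recommendation, given later in this section, to prefer the smaller value):

def combine_positions(primary, secondary, pick=min):
    # Compare the 40 MHz Spatial Reuse values at the same position within
    # the two 160 MHz halves (lists ordered from low to high frequency).
    return [pick(p, s) for p, s in zip(primary, secondary)]

def broadcast_extreme(esr, pick=min):
    # Copy the larger or smaller of the two EHT Common Info field values
    # identically into all four Common Info fields.
    return [pick(esr)] * 4

print(combine_positions([6, 9, 4, 8], [7, 5, 10, 3]))  # -> [6, 5, 4, 3]
print(broadcast_extreme([5, 7]))                       # -> [5, 5, 5, 5]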
In addition, in order to correct the value according to the BW difference (or normalization difference), a specific dB value can be added to or subtracted from the meaning (dBm value) of the corresponding value, after which the value can be changed to the value corresponding to the maximum dBm value that is smaller than or equal to the corrected value (the four values are set equal). In this case, it may be desirable to compensate by subtracting 12 dB (or 20 log 4 dB) in particular. However, if the values of the 2 Spatial Reuse fields in the EHT Common Info field are normalized to a 20 MHz channel while the values of the 4 Spatial Reuse fields in the Common Info field simply mean values normalized to their corresponding channel, 40 MHz, it may be desirable to correct by adding 6 dB (or 20 log 2 dB). The four Spatial Reuse fields in the Common Info field can be set as follows when a 160 MHz EHT TB PPDU is triggered with the configuration of the UL BW and UL Bandwidth Extension subfields and the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted at 80 MHz or below. The 4 Spatial Reuse fields in the Common Info field can be set like the 80 MHz case in Appendix 1 described later. However, the 80 MHz may be the 80 MHz that includes the channel through which the trigger frame is transmitted. Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to one of the values of the 4 Spatial Reuse fields in the Common Info field, for example, the largest value or the smallest value. In this case, the field corresponding to that 80 MHz among the 2 Spatial Reuse fields in the EHT Common Info field may be set in the same way (that is, it can be set to one of the four Spatial Reuse field values, for example, the largest value or the smallest value, and this value may be used to configure the Spatial Reuse field corresponding to that 80 MHz in the U-SIG of the EHT TB PPDU) or reserved. Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 80 MHz other than the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to an appropriate Spatial Reuse value, and this value can be used to set the field corresponding to that other 80 MHz among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU. The 4 Spatial Reuse fields in the Common Info field can also be configured as follows when a 160 MHz EHT TB PPDU is triggered with the configuration of the UL BW and UL Bandwidth Extension subfields and the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted at 80 MHz or below. The two Spatial Reuse fields in the EHT Common Info field can be set as described in Appendix 3.
That is, when a 160 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to spatial reuse values corresponding to 80 MHz each. Among the two Spatial Reuse fields in the EHT Common Info field, the value corresponding to the 80 MHz including the channel through which the trigger frame is transmitted can be copied, and that value can be set identically to the four fields in the Common Info field. In addition, in order to correct the value according to the BW difference (or normalization difference), a specific dB value can be added to or subtracted from the meaning (dBm value) of the corresponding value, after which the value can be changed to the value corresponding to the maximum dBm value that is smaller than or equal to the corrected value (the four values can be set the same). In this case, it may be desirable to compensate by subtracting 12 dB (or 20 log 4 dB) in particular. The 4 Spatial Reuse fields in the Common Info field can be configured as follows when a 160 MHz EHT TB PPDU is triggered with the configuration of the UL BW and UL Bandwidth Extension subfields and the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. The 4 Spatial Reuse fields in the Common Info field can be set like the 80 MHz case in Appendix 1 described later. However, the 80 MHz may be one of Primary 80 MHz and Secondary 80 MHz (or low 80 MHz and high 80 MHz); for example, it may simply be Primary 80 MHz. Alternatively, comparing the Spatial Reuse values of Primary 80 MHz and Secondary 80 MHz (or low 80 MHz and high 80 MHz), the fields can be set to the 80 MHz Spatial Reuse value having the larger or smaller value. Or they can be set to the 80 MHz Spatial Reuse value having the smaller or larger value between the minimum or maximum of the four 20 MHz Spatial Reuse values within Primary 80 MHz (or low 80 MHz) and the minimum or maximum of the four 20 MHz Spatial Reuse values within Secondary 80 MHz (or high 80 MHz). Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to one of the values of the 4 Spatial Reuse fields in the Common Info field, for example, the largest value or the smallest value. In this case, the field corresponding to that 80 MHz among the 2 Spatial Reuse fields in the EHT Common Info field may be set identically (that is, it can be set to one of the four Spatial Reuse field values, for example, the largest or smallest value, and this value may be used to configure the Spatial Reuse field corresponding to that 80 MHz in the U-SIG of the EHT TB PPDU) or reserved.
Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 80 MHz other than the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field can be set to an appropriate Spatial Reuse value, and this value can be used to set the field corresponding to that other 80 MHz among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU. As another example, the 4 Spatial Reuse fields in the Common Info field can be configured as follows when a 160 MHz EHT TB PPDU is triggered with the configuration of the UL BW and UL Bandwidth Extension subfields and the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. There are four 20 MHz Spatial Reuse values within each of Primary 80 MHz and Secondary 80 MHz (or low 80 MHz and high 80 MHz), and each Spatial Reuse field can be set to the larger or smaller value obtained by comparing the 20 MHz Spatial Reuse values at the same position within the two 80 MHz. That is, the first Spatial Reuse field in the Common Info field can be set by comparing the lowest 20 MHz Spatial Reuse value of Primary 80 MHz (or low 80 MHz) and the lowest 20 MHz Spatial Reuse value of Secondary 80 MHz (or high 80 MHz). The second Spatial Reuse field in the Common Info field can be set by comparing the second lowest 20 MHz Spatial Reuse value of Primary 80 MHz (or low 80 MHz) and the second lowest 20 MHz Spatial Reuse value of Secondary 80 MHz (or high 80 MHz). The third Spatial Reuse field in the Common Info field can be set by comparing the second highest 20 MHz Spatial Reuse value of Primary 80 MHz (or low 80 MHz) and the second highest 20 MHz Spatial Reuse value of Secondary 80 MHz (or high 80 MHz). The fourth Spatial Reuse field in the Common Info field can be set by comparing the highest 20 MHz Spatial Reuse value of Primary 80 MHz (or low 80 MHz) and the highest 20 MHz Spatial Reuse value of Secondary 80 MHz (or high 80 MHz). As yet another example, the 4 Spatial Reuse fields in the Common Info field can be configured as follows when a 160 MHz EHT TB PPDU is triggered with the configuration of the UL BW and UL Bandwidth Extension subfields and the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. The two Spatial Reuse fields in the EHT Common Info field can be set as described in Appendix 3. That is, when a 160 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to spatial reuse values corresponding to 80 MHz each. By copying the larger or smaller of the two Spatial Reuse field values in the EHT Common Info field, the corresponding value can be set identically to the four fields in the Common Info field. In addition, in order to correct the value according to the BW difference (or normalization difference), a specific dB value can be added to or subtracted from the meaning (dBm value) of the corresponding value, after which the value can be changed to the value corresponding to the maximum dBm value that is smaller than or equal to the corrected value, as sketched below (the four values can be set the same). In this case, it may be desirable to compensate by subtracting 12 dB (or 20 log 4 dB) in particular.
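A minimal sketch of this correct-then-requantize step. PSR_TABLE is a hypothetical placeholder for the Spatial Reuse value to PSR dBm encoding (cf. Table 3 and the existing 802.11ax definition); only its monotone shape matters to the rounding logic.

PSR_TABLE = {v: -80 + 2 * (v - 1) for v in range(1, 13)}  # assumed, illustrative encoding

def correct_and_requantize(sr_value, delta_db):
    # Shift the dBm meaning by delta_db (e.g. -12 dB for 20*log10(4)), then
    # return the value whose dBm is the maximum one that is <= the shifted dBm.
    target = PSR_TABLE[sr_value] + delta_db
    candidates = [v for v, dbm in PSR_TABLE.items() if dbm <= target]
    return max(candidates, key=PSR_TABLE.get) if candidates else min(PSR_TABLE)

print(correct_and_requantize(8, -12.0))  # -> 2 under the assumed encoding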
However, if the values of the 2 Spatial Reuse fields in the EHT Common Info field are normalized to a 20 MHz channel while the values of the 4 Spatial Reuse fields in the Common Info field are simply normalized to their corresponding channel, 40 MHz, it may be desirable to correct by adding 6 dB (or 20 log 2 dB). The four Spatial Reuse fields in the Common Info field can be configured as follows when a 320 MHz EHT TB PPDU is triggered with the configuration of the UL BW and UL Bandwidth Extension subfields and the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted at 80 MHz or below. The 4 Spatial Reuse fields in the Common Info field can be set like the 80 MHz case in Appendix 1 described later. However, the 80 MHz may be the 80 MHz that includes the channel through which the trigger frame is transmitted. The four Spatial Reuse fields in the Common Info field can also be configured as follows when a 320 MHz EHT TB PPDU is triggered with the configuration of the UL BW and UL Bandwidth Extension subfields and the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted between 80 MHz and 160 MHz. The 4 Spatial Reuse fields in the Common Info field can be set like the 80 MHz case in Appendix 1 described later. However, the 80 MHz may be one of the two 80 MHz channels within the 160 MHz channel including the channel through which the trigger frame is transmitted. Alternatively, comparing the Spatial Reuse values of the two 80 MHz channels in the 160 MHz channel including the channel through which the trigger frame is transmitted, the fields may be set to the 80 MHz Spatial Reuse value having the larger or smaller value. Or they can be set to the 80 MHz Spatial Reuse value having the smaller or larger value between the minimum or maximum of the four 20 MHz Spatial Reuse values within the first 80 MHz and the minimum or maximum of the four 20 MHz Spatial Reuse values within the second 80 MHz of the 160 MHz channel including the channel through which the trigger frame is transmitted. Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs can be set to one of the values of the 4 Spatial Reuse fields in the Common Info field, for example, the largest value or the smallest value. In this case, the field corresponding to the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs, among the 2 Spatial Reuse fields in the EHT Common Info field, may be set in the same way (that is, it can be set to one of the four Spatial Reuse field values, for example, the largest value or the smallest value, and this value may be used to configure the Spatial Reuse field corresponding to that 160 MHz in the U-SIG of the EHT TB PPDU) or reserved.
Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 160 MHz other than the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs can be set to an appropriate Spatial Reuse value, and this value can be used to set the field corresponding to that other 160 MHz among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU. As another example, the four Spatial Reuse fields in the Common Info field can be set as follows when a 320 MHz EHT TB PPDU is triggered with the configuration of the UL BW and UL Bandwidth Extension subfields and the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted between 80 MHz and 160 MHz. There are four 20 MHz Spatial Reuse values within each of the first 80 MHz and the second 80 MHz of the 160 MHz channel that includes the channel through which the trigger frame is transmitted, and each Spatial Reuse field can be set to the larger or smaller value obtained by comparing the 20 MHz Spatial Reuse values at the same position within the two 80 MHz. That is, the first Spatial Reuse field in the Common Info field can be set by comparing the lowest 20 MHz Spatial Reuse value of the first 80 MHz and the lowest 20 MHz Spatial Reuse value of the second 80 MHz of the 160 MHz channel including the channel through which the trigger frame is transmitted. The second Spatial Reuse field in the Common Info field can be set by comparing the second lowest 20 MHz Spatial Reuse value of the first 80 MHz and the second lowest 20 MHz Spatial Reuse value of the second 80 MHz of the 160 MHz channel including the channel through which the trigger frame is transmitted. The third Spatial Reuse field in the Common Info field can be set by comparing the second highest 20 MHz Spatial Reuse value of the first 80 MHz and the second highest 20 MHz Spatial Reuse value of the second 80 MHz of the 160 MHz channel including the channel through which the trigger frame is transmitted. The fourth Spatial Reuse field in the Common Info field can be set by comparing the highest 20 MHz Spatial Reuse value of the first 80 MHz and the highest 20 MHz Spatial Reuse value of the second 80 MHz of the 160 MHz channel including the channel through which the trigger frame is transmitted. Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs can be set to one of the values of the 4 Spatial Reuse fields in the Common Info field, for example, the largest value or the smallest value. In this case, the field corresponding to the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs, among the 2 Spatial Reuse fields in the EHT Common Info field, may be set in the same way (that is, it can be set to one of the four Spatial Reuse field values, for example, the largest value or the smallest value.
This value may be used to configure the Spatial Reuse field corresponding to that 160 MHz in the U-SIG of the EHT TB PPDU) or reserved. Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 160 MHz other than the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs can be set to an appropriate Spatial Reuse value, and this value can be used to set the field corresponding to that other 160 MHz among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU. The four Spatial Reuse fields in the Common Info field can be configured as follows when a 320 MHz EHT TB PPDU is triggered with the configuration of the UL BW and UL Bandwidth Extension subfields and the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is different from that of the EHT TB PPDU and that the trigger frame is transmitted at 160 MHz or below. The two Spatial Reuse fields in the EHT Common Info field can be set as described in Appendix 3. That is, when a 320 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to spatial reuse values corresponding to 160 MHz each. Among the 2 Spatial Reuse fields in the EHT Common Info field, the value corresponding to the 160 MHz including the channel through which the trigger frame is transmitted can be copied, and that value can be set identically to the 4 fields in the Common Info field. (The four spatial reuse fields represent 80 MHz, each corresponding to 20 MHz, and the 160 MHz spatial reuse value may be set as it is.) In addition, in order to correct the value according to the BW difference (or normalization difference), a specific dB value can be added to or subtracted from the meaning (dBm value) of the corresponding value, after which the value can be changed to the value corresponding to the maximum dBm value that is smaller than or equal to the corrected value (the four values are set equal). In this case, it may be desirable to compensate by subtracting 18 dB (or 20 log 8 dB) in particular. The four Spatial Reuse fields in the Common Info field can be configured as follows when a 320 MHz EHT TB PPDU is triggered with the configuration of the UL BW and UL Bandwidth Extension subfields and the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. The 4 Spatial Reuse fields in the Common Info field can be set like the 80 MHz case in Appendix 1 described later. However, the 80 MHz may be one of Primary 80 MHz, Secondary 80 MHz, and the two 80 MHz of Secondary 160 MHz (or the lowest 80 MHz, second lowest 80 MHz, second highest 80 MHz, and highest 80 MHz). For example, it may simply be Primary 80 MHz. Or, comparing the Spatial Reuse values of Primary 80 MHz, Secondary 80 MHz, and the two 80 MHz of Secondary 160 MHz (or the lowest 80 MHz, second lowest 80 MHz, second highest 80 MHz, and highest 80 MHz), the fields may be set to the 80 MHz Spatial Reuse value having the larger or smaller value.
Or they can be set to the 80 MHz Spatial Reuse value having the smaller or larger value among the minimum or maximum of the four 20 MHz Spatial Reuse values within Primary 80 MHz (or the lowest 80 MHz), the minimum or maximum of the four 20 MHz Spatial Reuse values within Secondary 80 MHz (or the second lowest 80 MHz), the minimum or maximum of the four 20 MHz Spatial Reuse values within the low 80 MHz of Secondary 160 MHz (or the second highest 80 MHz), and the minimum or maximum of the four 20 MHz Spatial Reuse values within the high 80 MHz of Secondary 160 MHz (or the highest 80 MHz). Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, the Spatial Reuse field in the U-SIG of the EHT TB PPDU corresponding to the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs can be set to one of the values of the 4 Spatial Reuse fields in the Common Info field, for example, the largest value or the smallest value. In this case, the field corresponding to the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs, among the 2 Spatial Reuse fields in the EHT Common Info field, may be set in the same way (that is, it can be set to one of the four Spatial Reuse field values, for example, the largest value or the smallest value, and this value may be used to configure the Spatial Reuse field corresponding to that 160 MHz in the U-SIG of the EHT TB PPDU) or reserved. Among the two Spatial Reuse fields in the EHT Common Info field, the field corresponding to the 160 MHz other than the 160 MHz to which the 80 MHz indicated by the values of the 4 Spatial Reuse fields in the Common Info field belongs can be set to an appropriate Spatial Reuse value, and this value can be used to set the field corresponding to that other 160 MHz among the Spatial Reuse fields in the U-SIG of the EHT TB PPDU. As another example, the four Spatial Reuse fields in the Common Info field can be set as follows when a 320 MHz EHT TB PPDU is triggered with the configuration of the UL BW and UL Bandwidth Extension subfields and the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. There are four 20 MHz Spatial Reuse values within each of Primary 80 MHz, Secondary 80 MHz, and the two 80 MHz of Secondary 160 MHz (or the lowest 80 MHz, second lowest 80 MHz, second highest 80 MHz, and highest 80 MHz), and each Spatial Reuse field can be set to the larger or smaller value obtained by comparing the 20 MHz Spatial Reuse values at the same position within the four 80 MHz. That is, the first Spatial Reuse field in the Common Info field can be set by comparing the lowest 20 MHz Spatial Reuse value of Primary 80 MHz (or the lowest 80 MHz), the lowest 20 MHz Spatial Reuse value of Secondary 80 MHz (or the second lowest 80 MHz), the lowest 20 MHz Spatial Reuse value of the low 80 MHz of Secondary 160 MHz (or the second highest 80 MHz), and the lowest 20 MHz Spatial Reuse value of the high 80 MHz of Secondary 160 MHz (or the highest 80 MHz).
The second Spatial Reuse field in the Common Info field can be set by comparing the second lowest 20 MHz Spatial Reuse value of Primary 80 MHz (or the lowest 80 MHz), the second lowest 20 MHz Spatial Reuse value of Secondary 80 MHz (or the second lowest 80 MHz), the second lowest 20 MHz Spatial Reuse value of the low 80 MHz of Secondary 160 MHz (or the second highest 80 MHz), and the second lowest 20 MHz Spatial Reuse value of the high 80 MHz of Secondary 160 MHz (or the highest 80 MHz). The third Spatial Reuse field in the Common Info field can be set by comparing the second highest 20 MHz Spatial Reuse value of Primary 80 MHz (or the lowest 80 MHz), the second highest 20 MHz Spatial Reuse value of Secondary 80 MHz (or the second lowest 80 MHz), the second highest 20 MHz Spatial Reuse value of the low 80 MHz of Secondary 160 MHz (or the second highest 80 MHz), and the second highest 20 MHz Spatial Reuse value of the high 80 MHz of Secondary 160 MHz (or the highest 80 MHz). The fourth Spatial Reuse field in the Common Info field can be set by comparing the highest 20 MHz Spatial Reuse value of Primary 80 MHz (or the lowest 80 MHz), the highest 20 MHz Spatial Reuse value of Secondary 80 MHz (or the second lowest 80 MHz), the highest 20 MHz Spatial Reuse value of the low 80 MHz of Secondary 160 MHz (or the second highest 80 MHz), and the highest 20 MHz Spatial Reuse value of the high 80 MHz of Secondary 160 MHz (or the highest 80 MHz). As yet another example, the four Spatial Reuse fields in the Common Info field can be set as follows when a 320 MHz EHT TB PPDU is triggered with the configuration of the UL BW and UL Bandwidth Extension subfields and the BW indicated in the UL BW is 80 MHz. It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. The two Spatial Reuse fields in the EHT Common Info field can be set as described in Appendix 3. That is, when a 320 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are each set to a spatial reuse value corresponding to 160 MHz. By copying the larger or smaller of the two Spatial Reuse field values in the EHT Common Info field, the corresponding value can be set identically to the 4 fields in the Common Info field. (The four spatial reuse fields represent 80 MHz, each corresponding to 20 MHz, and the 160 MHz spatial reuse value may be set as it is.) In addition, in order to correct the value according to the BW difference (or normalization difference), a specific dB value can be added to or subtracted from the meaning (dBm value) of the corresponding value, after which the value can be changed to the value corresponding to the maximum dBm value that is smaller than or equal to the corrected value (the four values can be set the same). In this case, it may be particularly desirable to correct by subtracting 18 dB (or 20 log 8 dB). The 4 Spatial Reuse fields in the Common Info field can be set as follows when, with the configuration of the UL BW and UL Bandwidth Extension subfields, an 80 MHz (or W MHz, where W is 80, 40, or 20) EHT TB PPDU is triggered and the BW indicated in the UL BW is 160 MHz (or 2*W MHz). The 4 Spatial Reuse fields in the Common Info field can be set like the 160 MHz (or 2*W MHz) case in Appendix 1 described later. However, the actual Spatial Reuse value can be set only for the 80 MHz (or W MHz) in which the EHT TB PPDU is actually transmitted.
For the other 80 MHz (or W MHz), any Spatial Reuse value can be set; however, since no actual signal is transmitted there, it may be desirable to set it to a large Spatial Reuse value. Basically, this may be a value that has nothing to do with the configuration of the Spatial Reuse field in the U-SIG when transmitting the EHT TB PPDU, but the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured using the 4 Spatial Reuse fields in the Common Info field. For example, it can be set using, among the values of the four Spatial Reuse fields in the Common Info field, the two 40 MHz Spatial Reuse values (or the two W/2 MHz values when W is 80 or 40, or the one 20 MHz value when W is 20) corresponding to the 80 MHz (or W MHz) used for transmission of the EHT TB PPDU. In this case, the two Spatial Reuse fields in the EHT Common Info field may also be set identically (that is, they can be set using the two 40 MHz Spatial Reuse values (or the two W/2 MHz values when W is 80 or 40, or the one 20 MHz value when W is 20) corresponding to the 80 MHz (or W MHz) used for transmission of the EHT TB PPDU among the values of the four Spatial Reuse fields, and this value may be used to configure the U-SIG Spatial Reuse field of the EHT TB PPDU) or reserved. As another example, the 4 Spatial Reuse fields in the Common Info field can be set as follows when, with the configuration of the UL BW and UL Bandwidth Extension subfields, an 80 MHz (or W MHz, where W is 80, 40, or 20) EHT TB PPDU is triggered and the BW indicated in the UL BW is 160 MHz (or 2*W MHz). It is assumed that the transmission BW of the trigger frame is the same as that of the EHT TB PPDU and that it is transmitted through the same channel. The two Spatial Reuse fields in the EHT Common Info field can be set as described in Appendix 3. That is, when the 80 MHz EHT TB PPDU is triggered, the two Spatial Reuse fields in the EHT Common Info field are set to the spatial reuse values corresponding to each 40 MHz. These values can be copied as they are into the corresponding 40 MHz fields among the 4 Spatial Reuse fields in the Common Info field. For example, if the 80 MHz EHT TB PPDU corresponds to the lower frequency of the 160 MHz channel, the value of the first of the two Spatial Reuse fields in the EHT Common Info field can be copied to the first of the four Spatial Reuse fields in the Common Info field, and the value of the second of the two Spatial Reuse fields in the EHT Common Info field can be copied to the second of the four Spatial Reuse fields in the Common Info field. If the 80 MHz EHT TB PPDU corresponds to the higher frequency of the 160 MHz channel, the value of the first of the two Spatial Reuse fields in the EHT Common Info field can be copied to the third of the four Spatial Reuse fields in the Common Info field, and the value of the second of the two Spatial Reuse fields in the EHT Common Info field can be copied to the fourth of the four Spatial Reuse fields in the Common Info field. The two of the four Spatial Reuse fields in the Common Info field to which this does not apply can be set to specific values (preferably a high value), and for ease of implementation, the values of the two Spatial Reuse fields in the EHT Common Info field can be used.
In other words, the value of the first of the two Spatial Reuse fields in the EHT Common Info field can be copied to the first and third of the 4 Spatial Reuse fields in the Common Info field, and the value of the second of the two Spatial Reuse fields in the EHT Common Info field can be copied to the second and fourth of the four Spatial Reuse fields in the Common Info field. Alternatively, the 4 Spatial Reuse fields in the Common Info field may simply be set according to the BW (EHT TB PPDU BW) indicated in the UL BW and UL Bandwidth Extension subfields for spatial reuse of EHT STAs. This can be used to configure the Spatial Reuse field in the U-SIG when the EHT TB PPDU is transmitted. That is, as shown in Appendix 2 described later, the 4 Spatial Reuse fields in the Common Info field can be set, and the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured. In this case, the two Spatial Reuse fields in the EHT Common Info field may be set identically (i.e., the method of configuring the Spatial Reuse field in the U-SIG of the EHT TB PPDU in Appendix 2 is equivalent to the configuration of the two Spatial Reuse fields in the EHT Common Info field, and this value may be used to configure the Spatial Reuse field in the U-SIG of the EHT TB PPDU) or may be reserved. Alternatively, the 4 Spatial Reuse fields in the Common Info field can simply be set to the value (0) that disallows spatial reuse or the value (15) that prohibits spatial reuse, regardless of the BW of the triggered EHT TB PPDU or the BW indicated in the UL BW. The reason is that an OBSS HE STA cannot obtain the BSS color information needed to perform spatial reuse from the EHT TB PPDU under the 802.11ax specification. Among the SR values, PSR_DISALLOW (value = 0) disables PSR, but OBSS PD (preamble detection) based spatial reuse remains available; PSR_AND_NON_SRG_OBSS_PD_PROHIBITED (value = 15) disables not only PSR but also OBSS PD. The dB values can be defined the same as in the existing 802.11ax (see Table 3). The two Spatial Reuse fields in the EHT Common Info field can be set according to the BW (EHT TB PPDU BW) indicated in the UL BW and UL Bandwidth Extension subfields, in addition to the setting methods suggested above. This can be used to configure the Spatial Reuse field in the U-SIG when the EHT TB PPDU is transmitted. That is, as shown in Appendix 3 described later, the two Spatial Reuse fields in the EHT Common Info field can be set and the Spatial Reuse field in the U-SIG of the EHT TB PPDU can be configured. 3.3. When Triggering TB A-PPDU FIG.21shows an example of transmitting a TB A-PPDU. A TB A-PPDU (Trigger Based Aggregated-PPDU) is a PPDU in which an EHT TB PPDU and a HE TB PPDU are simultaneously transmitted in response to a trigger frame. As shown inFIG.21, the trigger frame can trigger an EHT TB PPDU and a HE TB PPDU, and the TB A-PPDU can be transmitted by one STA by aggregating the EHT TB PPDU and the HE TB PPDU. Alternatively, the TB A-PPDU may be an aggregate of the EHT TB PPDU and the HE TB PPDU in which the EHT TB PPDU or HE TB PPDU is transmitted by a plurality of STAs. As described above, in the trigger frame triggering the TB A-PPDU, 4 spatial reuse fields for the HE TB PPDU and 2 spatial reuse fields for the EHT TB PPDU may exist.
The four spatial reuse fields can be set to a value for the bandwidth of only the HE TB PPDU (i.e., considering only the bandwidth through which the HE TB PPDU is transmitted, regardless of the entire bandwidth of the TB A-PPDU), while the two spatial reuse fields may be set to a value considering the bandwidth of only the EHT TB PPDU or the entire bandwidth. The 4 Spatial Reuse fields in the Common Info field can be set like the existing 802.11ax trigger frame according to the BW (HE TB Sub-PPDU BW) indicated in the UL BW. This can be used to configure the Spatial Reuse field in the HE-SIG-A when the HE TB PPDU is transmitted. That is, as shown in Appendix 1 described later, the four Spatial Reuse fields in the Common Info field can be set and the Spatial Reuse fields in the HE TB Sub-PPDU can be configured. The two Spatial Reuse fields in the EHT Common Info field can be set according to the BW (EHT TB Sub-PPDU BW or A-PPDU BW) indicated in the UL BW and UL BW Extension subfields. This can be used to configure the Spatial Reuse field in the U-SIG when the EHT TB PPDU is transmitted. That is, as shown in Appendix 3 described later, the two Spatial Reuse fields in the EHT Common Info field can be set, and the Spatial Reuse field in the U-SIG of the EHT TB Sub-PPDU may be configured. It may be preferable that they are set to the Spatial Reuse value of the indicated BW. Alternatively, when the BW indicated in the UL BW and UL BW Extension subfields is the EHT TB Sub-PPDU BW, the two Spatial Reuse fields in the EHT Common Info field may be set not according to that BW but according to the entire BW of the A-PPDU. This can be used to configure the Spatial Reuse field in the U-SIG when the EHT TB Sub-PPDU is transmitted. That is, as shown in Appendix 3 described later, the two Spatial Reuse fields in the EHT Common Info field can be set and the Spatial Reuse field in the U-SIG of the EHT TB Sub-PPDU can be configured. This may be desirable because it is a spatial reuse value considering the BW of the entire A-PPDU actually transmitted, but problems may occur depending on the value of the BW indicator of the TB PPDU. Alternatively, when the BW indicated in the UL BW and UL BW Extension subfields is the A-PPDU BW, the two Spatial Reuse fields in the EHT Common Info field may be set not according to that BW but according to the EHT TB Sub-PPDU BW. This can be used to configure the Spatial Reuse field in the U-SIG when the EHT TB Sub-PPDU is transmitted. That is, as shown in Appendix 3 described later, the two Spatial Reuse fields in the EHT Common Info field can be set and the Spatial Reuse field in the U-SIG of the EHT TB Sub-PPDU can be configured. This is a Spatial Reuse value that considers only the BW of the EHT TB Sub-PPDU; it has finer resolution and can be good for performance, but problems may occur depending on the BW indicator value of the TB PPDU. (A sketch of these alternatives follows this passage.) In all of the above proposals, when setting the Spatial Reuse field by comparing several Spatial Reuse values, it may be desirable to set it to the smaller value. The reason is that if the Spatial Reuse value is set to a large value, an adjacent OBSS may transmit with high power, resulting in interference with a power greater than the allowable interference power.
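Returning to the TB A-PPDU alternatives above, a minimal sketch under stated assumptions: set_per_appendix_1 and set_per_appendix_3 are hypothetical stand-ins for the Appendix 1 and Appendix 3 procedures, and the flag eht_uses_appdu_bw selects between the two alternatives for the EHT fields.

def set_per_appendix_1(bw_mhz):
    # Hypothetical stand-in for the Appendix 1 procedure (4 per-subchannel values).
    return [0, 0, 0, 0]

def set_per_appendix_3(bw_mhz):
    # Hypothetical stand-in for the Appendix 3 procedure (2 per-half values).
    return [0, 0]

def tb_appdu_sr_fields(he_sub_bw, eht_sub_bw, appdu_bw, eht_uses_appdu_bw=False):
    # The 4 fields follow only the HE TB Sub-PPDU BW; the 2 EHT fields follow
    # either the EHT TB Sub-PPDU BW or the whole A-PPDU BW, per the text above.
    hsr = set_per_appendix_1(he_sub_bw)
    esr = set_per_appendix_3(appdu_bw if eht_uses_appdu_bw else eht_sub_bw)
    return hsr, esr

hsr, esr = tb_appdu_sr_fields(he_sub_bw=80, eht_sub_bw=160, appdu_bw=240)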
In all of the above proposals, when a specific Spatial Reuse value is copied into another Spatial Reuse field and there is a difference in BW, the value can be corrected by adding a specific dB value to, or subtracting it from, the meaning (dBm value) of the value, and then changing it to the value that corresponds to the maximum dBm value that is less than or equal to the corrected value. Even if different Spatial Reuse fields have values corresponding to different channel sizes, if normalization is applied to the same channel size, it is not necessary to make additional corrections when copying and setting. In Appendices 1, 2, and 3 described later, regardless of the channel size to which each Spatial Reuse value corresponds, the value can be normalized to a 20 MHz channel (see the sketch following Appendix 1). For example, the Spatial Reuse value corresponding to 40 MHz can be normalized to 20 MHz by subtracting 6 dB (or 20 log 2 dB) from the corresponding PSR value (in dBm, that is, the value calculated based on 40 MHz) before normalization, and then converted to the corresponding Spatial Reuse value. As another example, the Spatial Reuse value corresponding to 80 MHz can be normalized to 20 MHz by subtracting 12 dB (or 20 log 4 dB) from the corresponding PSR value (in dBm, that is, the value calculated based on 80 MHz) before normalization, and then set to the corresponding Spatial Reuse value. As another example, the Spatial Reuse value corresponding to 160 MHz can be normalized to 20 MHz by subtracting 18 dB (or 20 log 8 dB) from the corresponding PSR value (in dBm, that is, the value calculated based on 160 MHz) before normalization, and then set to the corresponding Spatial Reuse value. <Appendix 1>4 Spatial Reuse fields in Common Info field of Trigger frame i) 20 MHz: The four spatial reuse fields may have the same spatial reuse value and may mean a spatial reuse value corresponding to a 20 MHz channel. ii) 40 MHz: Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the second lowest 20 MHz subchannel. Also, when transmitting a 2.4 GHz band TB PPDU, it may be set to the same value as Spatial reuse field1. The reason is that since 40 MHz channelization overlaps in the 2.4 GHz band, it is impossible to determine which channelization was used by the OBSS STA that decoded the corresponding TB PPDU in a specific 20 MHz channel, so it is simply set to the same value. Spatial reuse field3can be set equal to spatial reuse field1, and spatial reuse field4can be set equal to spatial reuse field2. iii) 80 MHz: Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the second lowest 20 MHz subchannel. Spatial reuse field3: This may generally mean a spatial reuse value of the second highest 20 MHz subchannel. Spatial reuse field4: This may generally mean a spatial reuse value of the highest 20 MHz subchannel. iv) 160 MHz: Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 40 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the second lowest 40 MHz subchannel. Spatial reuse field3: This may generally mean a spatial reuse value of the second highest 40 MHz subchannel. Spatial reuse field4: This may generally mean a spatial reuse value of the highest 40 MHz subchannel. 4 Spatial Reuse fields in HE-SIG-A of HE TB (Sub-)PPDU: Copy the 4 Spatial Reuse fields in the Trigger frame above as they are.
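A minimal sketch of the per-20 MHz normalization described just before Appendix 1 (a generic helper, not a field encoder): a PSR value computed over an N*20 MHz channel is normalized by subtracting 20 log N dB, i.e., approximately 6, 12, and 18 dB for 40, 80, and 160 MHz.

import math

def normalize_psr_to_20mhz(psr_dbm, channel_mhz):
    # Subtract 20*log10(channel/20) dB from the pre-normalization PSR value.
    return psr_dbm - 20.0 * math.log10(channel_mhz / 20.0)

print(normalize_psr_to_20mhz(-62.0, 80))  # -> about -74.0 (i.e., -62 - 12.04)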
<Appendix 2>4 Spatial Reuse fields in Common Info field of Trigger frame i) 20 MHz: The four spatial reuse subfields may have the same spatial reuse value and may mean a spatial reuse value corresponding to a 20 MHz channel. Alternatively, spatial reuse fields 3 and 4 may be reserved. ii) 40 MHz: Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the second lowest 20 MHz subchannel. Also, when transmitting a 2.4 GHz band TB PPDU, it may be set to the same value as Spatial reuse field1. The reason is that since 40 MHz channelization overlaps in the 2.4 GHz band, it is impossible to determine which channelization was used by the OBSS STA that decoded the corresponding TB PPDU in a specific 20 MHz channel, so it is simply set to the same value. Spatial reuse field3can be set equal to spatial reuse field1, and spatial reuse field4can be set equal to spatial reuse field2. Alternatively, spatial reuse fields 3 and 4 may be reserved. iii) 80 MHz: Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 40 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the highest 40 MHz subchannel. Spatial reuse field3can be set equal to spatial reuse field1, and spatial reuse field4can be set equal to spatial reuse field2. Alternatively, spatial reuse fields 3 and 4 may be reserved. Or: Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the second lowest 20 MHz subchannel. Spatial reuse field3: This may generally mean a spatial reuse value of the second highest 20 MHz subchannel. Spatial reuse field4: This may generally mean a spatial reuse value of the highest 20 MHz subchannel. iv) 160 MHz: Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 80 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the highest 80 MHz subchannel. Spatial reuse field3can be set equal to spatial reuse field1, and spatial reuse field4can be set equal to spatial reuse field2. Alternatively, spatial reuse fields 3 and 4 may be reserved. Or: Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 40 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the second lowest 40 MHz subchannel. Spatial reuse field3: This may generally mean a spatial reuse value of the second highest 40 MHz subchannel. Spatial reuse field4: This may generally mean a spatial reuse value of the highest 40 MHz subchannel. v) 320 MHz: Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 160 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the highest 160 MHz subchannel. Spatial reuse field3can be set equal to spatial reuse field1, and spatial reuse field4can be set equal to spatial reuse field2. Alternatively, spatial reuse fields 3 and 4 may be reserved. Or: Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 80 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the second lowest 80 MHz subchannel. Spatial reuse field3: This may generally mean a spatial reuse value of the second highest 80 MHz subchannel. Spatial reuse field4: This may generally mean a spatial reuse value of the highest 80 MHz subchannel. 2 Spatial Reuse fields in U-SIG of EHT TB (Sub-)PPDU i) 20 MHz: The two spatial reuse fields may be configured by copying spatial reuse fields 1 and 2 of the trigger frame as they are.
That is, they may have the same spatial reuse value, meaning a spatial reuse value corresponding to a 20 MHz channel. ii) 40 MHz: The two spatial reuse fields may be configured by copying spatial reuse fields 1 and 2 of the trigger frame as they are. That is, it may be as follows. Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the highest 20 MHz subchannel. In addition, when the TB PPDU is transmitted in the 2.4 GHz band, it may be set to the same value as Spatial reuse field1. The reason is that since 40 MHz channelization overlaps in the 2.4 GHz band, it is impossible to determine which channelization was used by the OBSS STA that decoded the corresponding TB PPDU in a specific 20 MHz channel, so it is simply set to the same value. iii) 80 MHz: The two spatial reuse fields may be configured by copying spatial reuse fields 1 and 2 of the trigger frame as they are. That is, it may be as follows. Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 40 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the highest 40 MHz subchannel. Or the two spatial reuse fields can be configured by copying spatial reuse fields 1 and 3 of the trigger frame as they are, or by copying fields 2 and 4 as they are. Alternatively, one of the two values may be selected and copied for each field as shown below; the selection criterion may be the larger or smaller value. Spatial reuse field1: This may generally mean a spatial reuse value of the lowest or second lowest 20 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the highest or second highest 20 MHz subchannel. Or the two spatial reuse fields can be defined differently for each 40 MHz (that is, the U-SIG configuration can be different for each 40 MHz). At the low 40 MHz, spatial reuse fields 1 and 2 of the trigger frame can be copied and configured as they are, and at the high 40 MHz, spatial reuse fields 3 and 4 of the trigger frame can be copied and configured as they are. That is, it may be as follows. Spatial reuse field1at the low 40 MHz: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel. Spatial reuse field2at the low 40 MHz: This may generally mean a spatial reuse value of the second lowest 20 MHz subchannel. Spatial reuse field1at the high 40 MHz: This may generally mean a spatial reuse value of the second highest 20 MHz subchannel. Spatial reuse field2at the high 40 MHz: This may generally mean a spatial reuse value of the highest 20 MHz subchannel. iv) 160 MHz: The two spatial reuse fields may be configured by copying spatial reuse fields 1 and 2 of the trigger frame as they are. That is, it may be as follows. Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 80 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the highest 80 MHz subchannel. Or the two spatial reuse fields can be configured by copying spatial reuse fields 1 and 3 of the trigger frame as they are, or by copying fields 2 and 4 as they are. Alternatively, one of the two values may be selected and copied for each field as shown below; the selection criterion may be the larger or smaller value. Spatial reuse field1: This may generally mean a spatial reuse value of the lowest or second lowest 40 MHz subchannel. Spatial reuse field2: This may generally mean the spatial reuse value of the highest or second highest 40 MHz subchannel.
Or the two spatial reuse fields can be defined differently for each 80 MHz (that is, the U-SIG configuration can be different for each 80 MHz). At the low 80 MHz, spatial reuse fields 1 and 2 of the trigger frame can be copied and configured as they are, and at the high 80 MHz, spatial reuse fields 3 and 4 of the trigger frame can be copied and configured as they are. That is, it may be as follows. Spatial reuse field1at the low 80 MHz: This may generally mean a spatial reuse value of the lowest 40 MHz subchannel. Spatial reuse field2at the low 80 MHz: This may generally mean a spatial reuse value of the second lowest 40 MHz subchannel. Spatial reuse field1at the high 80 MHz: This may generally mean a spatial reuse value of the second highest 40 MHz subchannel. Spatial reuse field2at the high 80 MHz: This may generally mean a spatial reuse value of the highest 40 MHz subchannel. v) 320 MHz: The two spatial reuse fields may be configured by copying spatial reuse fields 1 and 2 of the trigger frame as they are. That is, it may be as follows. Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 160 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the highest 160 MHz subchannel. Or the two spatial reuse fields can be configured by copying spatial reuse fields 1 and 3 of the trigger frame as they are, or by copying fields 2 and 4 as they are. Alternatively, one of the two values may be selected and copied for each field as shown below; the selection criterion may be the larger or smaller value. Spatial reuse field1: This may generally mean a spatial reuse value of the lowest or second lowest 80 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the highest or second highest 80 MHz subchannel. Or the two spatial reuse fields can be defined differently for each 160 MHz (i.e., the U-SIG configuration can be different for each 160 MHz): at the low 160 MHz, spatial reuse fields 1 and 2 of the trigger frame can be copied and configured as they are, and at the high 160 MHz, spatial reuse fields 3 and 4 of the trigger frame can be copied and configured as they are. That is, it may be as follows. Spatial reuse field1at the low 160 MHz: This may generally mean a spatial reuse value of the lowest 80 MHz subchannel. Spatial reuse field2at the low 160 MHz: This may generally mean a spatial reuse value of the second lowest 80 MHz subchannel. Spatial reuse field1at the high 160 MHz: This may generally mean a spatial reuse value of the second highest 80 MHz subchannel. Spatial reuse field2at the high 160 MHz: This may generally mean a spatial reuse value of the highest 80 MHz subchannel. <Appendix 3>2 Spatial Reuse fields in EHT Common Info field of Trigger frame i) 20 MHz: The two spatial reuse fields may have the same spatial reuse value and may mean a spatial reuse value corresponding to a 20 MHz channel. ii) 40 MHz: Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 20 MHz subchannel. Spatial reuse field2: This may generally mean a spatial reuse value of the highest 20 MHz subchannel. In addition, when the TB PPDU is transmitted in the 2.4 GHz band, it may be set to the same value as Spatial reuse field1. The reason is that since 40 MHz channelization overlaps in the 2.4 GHz band, it is impossible to determine which channelization was used by the OBSS STA that decoded the corresponding TB PPDU in a specific 20 MHz channel, so it is simply set to the same value. iii) 80 MHz: Spatial reuse field1: This may generally mean a spatial reuse value of the lowest 40 MHz subchannel.
FIG. 22 is a process flow diagram illustrating the operation of the transmission device according to the present embodiment. The example of FIG. 22 may be performed by a transmitting STA or a transmitting device (AP and/or non-AP STA). Some of the steps (or the detailed sub-steps described later) in the example of FIG. 22 may be omitted or changed.

Through step S2210, the transmitting device (transmitting STA) may obtain information about the above-described tone plan. As described above, the information about the tone plan includes the size and location of the RU, control information related to the RU, information about a frequency band including the RU, information about an STA receiving the RU, and the like.

Through step S2220, the transmitting device may configure/generate a PPDU based on the acquired control information. Configuring/generating the PPDU may include configuring/generating each field of the PPDU. That is, step S2220 includes configuring the EHT-SIG field including control information about the tone plan. That is, step S2220 may include configuring a field including control information (e.g., N bitmaps) indicating the size/position of the RU and/or configuring a field including an identifier of an STA (e.g., AID) receiving the RU. Also, step S2220 may include generating an STF/LTF sequence transmitted through a specific RU. The STF/LTF sequence may be generated based on a preset STF generation sequence/LTF generation sequence. Also, step S2220 may include generating a data field (i.e., MPDU) transmitted through a specific RU.

The transmitting device may transmit the PPDU constructed through step S2220 to the receiving device based on step S2230. While performing step S2230, the transmitting device may perform at least one of operations such as CSD, spatial mapping, IDFT/IFFT operation, and GI insertion. A signal/field/sequence constructed according to the present specification may be transmitted in the form of FIG. 10.

FIG. 23 is a process flow diagram illustrating the operation of the receiving device according to the present embodiment. The aforementioned PPDU, transmitted according to the example of FIG. 22, may be received. The example of FIG. 23 may be performed by a receiving STA or a receiving device (AP and/or non-AP STA). Some of the steps (or the detailed sub-steps described later) in the example of FIG. 23 may be omitted.

The receiving device (receiving STA) may receive all or part of the PPDU through step S2310. The received signal may be in the form of FIG. 10. The sub-steps of step S2310 may be determined based on step S2230 of FIG. 22. That is, in step S2310, an operation of restoring the result of the CSD, spatial mapping, IDFT/IFFT operation, and GI insertion applied in step S2230 may be performed, as sketched below.
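As a purely generic illustration of the inverse relationship between steps S2230 and S2310, the sketch below shows IFFT plus GI (cyclic prefix) insertion on transmit and GI removal plus FFT on receive. The parameter values and function names are assumptions; the actual 802.11be waveform parameters (FFT size, GI duration, CSD, and spatial mapping) are defined by the specification.

    # Minimal OFDM round-trip sketch (illustrative only).
    import numpy as np

    FFT_SIZE = 256   # example; an EHT 20 MHz symbol uses a 256-point FFT
    GI_LEN = 32      # example guard-interval length in samples

    def tx_symbol(freq_domain):                 # cf. step S2230 (IFFT + GI insertion)
        time_domain = np.fft.ifft(freq_domain, FFT_SIZE)
        return np.concatenate([time_domain[-GI_LEN:], time_domain])  # cyclic prefix

    def rx_symbol(samples):                     # cf. step S2310 (GI removal + FFT)
        return np.fft.fft(samples[GI_LEN:], FFT_SIZE)

    # Round trip: rx_symbol(tx_symbol(x)) recovers x up to numerical precision.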
In step S2320, the receiving device may perform decoding on all or part of the PPDU. Also, the receiving device may obtain control information related to a tone plan (i.e., RU) from the decoded PPDU. More specifically, the receiving device may decode the L-SIG and EHT-SIG of the PPDU based on the legacy STF/LTF and obtain the information included in the L-SIG and EHT-SIG fields. Information on the various tone plans (i.e., RUs) described in this specification may be included in the EHT-SIG, and the receiving STA may obtain information on the tone plan (i.e., RU) through the EHT-SIG.

In step S2330, the receiving device may decode the remaining part of the PPDU based on the information about the tone plan (i.e., RU) acquired through step S2320. For example, the receiving STA may decode the STF/LTF field of the PPDU based on the information about the tone plan (i.e., RU). In addition, the receiving STA may decode the data field of the PPDU based on the information about the tone plan (i.e., RU) and obtain the MPDU included in the data field.

In addition, the receiving device may perform a processing operation of transferring the data decoded through step S2330 to a higher layer (e.g., the MAC layer). In addition, when generation of a signal is instructed from the upper layer to the PHY layer in response to the data transferred to the upper layer, a subsequent operation may be performed.

Hereinafter, the above-described embodiment will be described with reference to FIG. 1 to FIG. 23.

FIG. 24 is a flowchart illustrating a procedure for configuring a trigger frame and a TB PPDU supporting spatial reuse by an AP according to the present embodiment. The example of FIG. 24 may be performed in a network environment in which a next generation WLAN system (IEEE 802.11be or EHT WLAN system) is supported. The next generation wireless LAN system is a WLAN system that is enhanced from the 802.11ax system and may, therefore, satisfy backward compatibility with the 802.11ax system. The example of FIG. 24 is performed by a transmitting STA, and the transmitting STA may correspond to an access point (AP). A receiving STA of FIG. 24 may correspond to a non-AP STA.

This embodiment proposes a method for configuring a trigger frame and a TB PPDU that simultaneously support spatial reuse in an 802.11ax (or HE) WLAN system and an 802.11be (or EHT) WLAN system.

In step S2410, a transmitting station (STA) transmits a trigger frame to a receiving STA. In step S2420, the transmitting STA receives a Trigger Based Physical Protocol Data Unit (TB PPDU) from the receiving STA through a preset frequency band.

The trigger frame includes a common information field and a special user information field. The common information field includes first to fourth spatial reuse fields. The special user information field includes fifth and sixth spatial reuse fields.

This embodiment assumes a situation in which the trigger frame triggers the EHT TB PPDU. The common information field is an EHT variant Common Info field and includes four spatial reuse fields (HSR1, HSR2, HSR3, and HSR4). The four spatial reuse fields HSR1, HSR2, HSR3, and HSR4 are defined for spatial reuse of the OBSS HE STA. The special user information field is included in the trigger frame when an association identifier (AID) is 2007, and includes two spatial reuse fields (ESR1 and ESR2). The two spatial reuse fields (ESR1 and ESR2) are defined for spatial reuse of the OBSS EHT STA. This field structure is sketched below.
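The following non-normative sketch models only the fields discussed here; the real Common Info and Special User Info fields carry many more subfields, and the class and attribute names are assumptions for illustration.

    # Illustrative model of the spatial reuse related fields of the trigger frame.
    from dataclasses import dataclass

    @dataclass
    class EhtVariantCommonInfo:
        hsr1: int    # first spatial reuse field (4 bits), for OBSS HE STAs
        hsr2: int    # second spatial reuse field (4 bits)
        hsr3: int    # third spatial reuse field (4 bits)
        hsr4: int    # fourth spatial reuse field (4 bits)
        ul_bw: int   # first bandwidth field (2-bit UL BW subfield)

    @dataclass
    class SpecialUserInfo:   # present when the association identifier (AID) is 2007
        esr1: int        # fifth spatial reuse field (4 bits), for OBSS EHT STAs
        esr2: int        # sixth spatial reuse field (4 bits)
        ul_bw_ext: int   # second bandwidth field (2-bit UL Bandwidth Extension subfield)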
When the preset frequency band is a 20 MHz band, the first to fourth spatial reuse fields are set to the value of the fifth spatial reuse field (HSR1=HSR2=HSR3=HSR4=ESR1). The OBSS HE STA may determine that the trigger frame triggers a 20 MHz HE TB PPDU.

When the preset frequency band is a 40 MHz band, the first and third spatial reuse fields are set to the value of the fifth spatial reuse field, and the second and fourth spatial reuse fields are set to the value of the sixth spatial reuse field (HSR1=HSR3=ESR1/HSR2=HSR4=ESR2). The OBSS HE STA may determine that the trigger frame triggers a 40 MHz HE TB PPDU.

When the preset frequency band is an 80 MHz band, the first and second spatial reuse fields are set to the value of the fifth spatial reuse field, and the third and fourth spatial reuse fields are set to the value of the sixth spatial reuse field (HSR1=HSR2=ESR1/HSR3=HSR4=ESR2). The OBSS HE STA may determine that the trigger frame triggers an 80 MHz HE TB PPDU.

When the preset frequency band is a 160 MHz band, the first and second spatial reuse fields are set to the value of the fifth spatial reuse field, and the third and fourth spatial reuse fields are set to the value of the sixth spatial reuse field (HSR1=HSR2=ESR1/HSR3=HSR4=ESR2). The OBSS HE STA may determine that the trigger frame triggers a 160 MHz HE TB PPDU.

When the preset frequency band is a 320 MHz band, the first to fourth spatial reuse fields are set to the smaller of the values of the fifth and sixth spatial reuse fields (HSR1=HSR2=HSR3=HSR4=min(ESR1, ESR2)). The OBSS HE STA may determine that the trigger frame triggers a 160 MHz HE TB PPDU. Since the OBSS HE STA can operate on either of the two 160 MHz channels through which the EHT TB PPDU is transmitted, the HSR value must be a value that can represent both 160 MHz channels. In this case, setting the HSR value to the value of the weaker channel (the smaller spatial reuse value) is preferable because it reduces interference by lowering the transmit power of the OBSS STA.

That is, this embodiment proposes a method in which the four spatial reuse fields (HSR1, HSR2, HSR3, HSR4) in the common information field (EHT variant Common Info field) are set, for each frequency band, based on the two spatial reuse fields (ESR1, ESR2) in the Special User Info field; a sketch of this mapping is given below. The band (or channel) through which the trigger frame is transmitted is the same as the band (or channel) through which the TB PPDU is transmitted.

When the preset frequency band is the 20 MHz band, the values of the first to fourth spatial reuse fields may be spatial reuse values for the 20 MHz band. That is, the first to fourth spatial reuse fields may include the same spatial reuse value for the 20 MHz band. The spatial reuse value for the 20 MHz band may be a value used to calculate the transmit power accessible by the OBSS HE STA for the 20 MHz band.

When the preset frequency band is the 40 MHz band, the values of the first and third spatial reuse fields may be spatial reuse values for a first 20 MHz subchannel having a low frequency in the 40 MHz band, and the values of the second and fourth spatial reuse fields may be spatial reuse values for a second 20 MHz subchannel having a high frequency in the 40 MHz band. When the TB PPDU is transmitted in a 2.4 GHz band, the spatial reuse value for the second 20 MHz subchannel may be set equal to the spatial reuse value for the first 20 MHz subchannel.
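The per-band rule above may be written, for illustration, as the following minimal sketch; the function and parameter names are assumptions and not part of the specification.

    # Setting the four HE spatial reuse fields (HSR1..HSR4) in the EHT variant
    # Common Info field from the two EHT spatial reuse fields (ESR1, ESR2) in
    # the Special User Info field, per the rules stated above.
    def set_hsr_fields(bandwidth_mhz: int, esr1: int, esr2: int):
        if bandwidth_mhz == 20:
            return (esr1, esr1, esr1, esr1)      # HSR1=HSR2=HSR3=HSR4=ESR1
        if bandwidth_mhz == 40:
            return (esr1, esr2, esr1, esr2)      # HSR1=HSR3=ESR1, HSR2=HSR4=ESR2
        if bandwidth_mhz in (80, 160):
            return (esr1, esr1, esr2, esr2)      # HSR1=HSR2=ESR1, HSR3=HSR4=ESR2
        if bandwidth_mhz == 320:
            m = min(esr1, esr2)                  # the weaker channel's value
            return (m, m, m, m)
        raise ValueError("unsupported bandwidth")

    # Example: for an 80 MHz trigger frame with ESR1=5 and ESR2=9,
    # set_hsr_fields(80, 5, 9) -> (5, 5, 9, 9).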
The spatial reuse value for the first 20 MHz subchannel may be a value used to calculate the transmit power accessible by an Overlapping Basic Service Set (OBSS) High Efficiency (HE) STA for the first 20 MHz subchannel. The spatial reuse value for the second 20 MHz subchannel may be a value used to calculate the transmit power accessible by the OBSS HE STA for the second 20 MHz subchannel.

If the preset frequency band is the 80 MHz band, the OBSS HE STA interprets the value of the first spatial reuse field as a spatial reuse value for a first 20 MHz subchannel having the lowest frequency in the 80 MHz band, interprets the value of the second spatial reuse field as a spatial reuse value for a second 20 MHz subchannel having the second lowest frequency in the 80 MHz band, interprets the value of the third spatial reuse field as a spatial reuse value for a third 20 MHz subchannel having the second highest frequency in the 80 MHz band, and interprets the value of the fourth spatial reuse field as a spatial reuse value for a fourth 20 MHz subchannel having the highest frequency in the 80 MHz band. However, the AP sets the first and second spatial reuse fields to values of a spatial reuse field representing a first 40 MHz subchannel having a low frequency in the 80 MHz band, and sets the third and fourth spatial reuse fields to values of a spatial reuse field representing a second 40 MHz subchannel having a high frequency in the 80 MHz band.

The spatial reuse value for the first 20 MHz subchannel may be a value used to calculate the transmit power accessible by an Overlapping Basic Service Set (OBSS) High Efficiency (HE) STA for the first 20 MHz subchannel. The spatial reuse value for the second 20 MHz subchannel may be a value used to calculate the transmit power accessible by the OBSS HE STA for the second 20 MHz subchannel. The spatial reuse value for the third 20 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the third 20 MHz subchannel. The spatial reuse value for the fourth 20 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the fourth 20 MHz subchannel.

When the preset frequency band is the 160 MHz band, the OBSS HE STA interprets the value of the first spatial reuse field as a spatial reuse value for a first 40 MHz subchannel having the lowest frequency in the 160 MHz band, interprets the value of the second spatial reuse field as a spatial reuse value for a second 40 MHz subchannel having the second lowest frequency in the 160 MHz band, interprets the value of the third spatial reuse field as a spatial reuse value for a third 40 MHz subchannel having the second highest frequency in the 160 MHz band, and interprets the value of the fourth spatial reuse field as a spatial reuse value for a fourth 40 MHz subchannel having the highest frequency in the 160 MHz band. However, the AP sets the first and second spatial reuse fields to values of a spatial reuse field representing a first 80 MHz subchannel having a low frequency in the 160 MHz band, and sets the third and fourth spatial reuse fields to values of a spatial reuse field representing a second 80 MHz subchannel having a high frequency in the 160 MHz band.
The spatial reuse value for the first 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the first 40 MHz subchannel. The spatial reuse value for the second 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the second 40 MHz subchannel. The spatial reuse value for the third 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the third 40 MHz subchannel. The spatial reuse value for the fourth 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the fourth 40 MHz subchannel.

When the preset frequency band is the 320 MHz band, the OBSS HE STA can only decode the first bandwidth field (the 2-bit UL BW subfield described later) and cannot interpret the second bandwidth field (the 2-bit UL Bandwidth Extension subfield), so it may interpret the preset frequency band as a 160 MHz band. Accordingly, the OBSS HE STA interprets the value of the first spatial reuse field as a spatial reuse value for a first 40 MHz subchannel having the lowest frequency in the 160 MHz band (where it is located), interprets the value of the second spatial reuse field as a spatial reuse value for a second 40 MHz subchannel having the second lowest frequency in the 160 MHz band, interprets the value of the third spatial reuse field as a spatial reuse value for a third 40 MHz subchannel having the second highest frequency in the 160 MHz band, and interprets the value of the fourth spatial reuse field as a spatial reuse value for a fourth 40 MHz subchannel having the highest frequency in the 160 MHz band.

However, the AP sets the first spatial reuse field to a value of a spatial reuse field representing a first 40 MHz subchannel having the lowest frequency within each 160 MHz channel of the 320 MHz band, sets the second spatial reuse field to a value of a spatial reuse field representing a second 40 MHz subchannel having the second lowest frequency within each 160 MHz channel of the 320 MHz band, sets the third spatial reuse field to a value of a spatial reuse field representing a third 40 MHz subchannel having the second highest frequency within each 160 MHz channel of the 320 MHz band, and sets the fourth spatial reuse field to a value of a spatial reuse field representing a fourth 40 MHz subchannel having the highest frequency within each 160 MHz channel of the 320 MHz band.

The spatial reuse value for the first 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the first 40 MHz subchannel. The spatial reuse value for the second 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the second 40 MHz subchannel. The spatial reuse value for the third 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the third 40 MHz subchannel. The spatial reuse value for the fourth 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the fourth 40 MHz subchannel.

The common information field may include a first bandwidth field, and the special user information field includes a second bandwidth field. A bandwidth of the preset frequency band may be set based on the first and second bandwidth fields (a sketch of this mapping follows the list below). For example, when the first bandwidth field is set to 0 and the second bandwidth field is set to 0, the preset frequency band may be 20 MHz. When the first bandwidth field is set to 1 and the second bandwidth field is set to 0, the preset frequency band may be 40 MHz. When the first bandwidth field is set to 2 and the second bandwidth field is set to 0, the preset frequency band may be 80 MHz. When the first bandwidth field is set to 3 and the second bandwidth field is set to 1, the preset frequency band may be 160 MHz. When the first bandwidth field is set to 3 and the second bandwidth field is set to 2, the preset frequency band may be 320 MHz-1. When the first bandwidth field is set to 3 and the second bandwidth field is set to 3, the preset frequency band may be 320 MHz-2.
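The bandwidth signaling just listed can be summarized, for illustration, in the following sketch; the names are assumptions, and the values follow the mapping stated in the text.

    # Combining the HE-readable UL BW subfield (first bandwidth field) with the
    # EHT-only UL Bandwidth Extension subfield (second bandwidth field).
    BW_TABLE = {
        (0, 0): "20 MHz",
        (1, 0): "40 MHz",
        (2, 0): "80 MHz",
        (3, 1): "160 MHz",
        (3, 2): "320 MHz-1",
        (3, 3): "320 MHz-2",
    }

    def tb_ppdu_bandwidth(ul_bw: int, ul_bw_ext: int) -> str:
        return BW_TABLE[(ul_bw, ul_bw_ext)]

    # An OBSS HE STA decodes only ul_bw, so for ul_bw == 3 it assumes 160 MHz;
    # this is why the 320 MHz cases reuse the HE 160 MHz indication.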
It is assumed that the TB PPDU is an EHT TB PPDU. The first bandwidth field is a field indicating the bandwidth of the HE TB PPDU. By using the first and second bandwidth fields together, the bandwidth of the EHT TB PPDU can also be indicated.

The TB PPDU may include a Universal-Signal (U-SIG) field. The U-SIG field may include seventh and eighth spatial reuse fields. The seventh spatial reuse field may be configured by duplicating the fifth spatial reuse field. The eighth spatial reuse field may be configured by duplicating the sixth spatial reuse field.

Values of the seventh and eighth spatial reuse fields may be normalized values for each 20 MHz subchannel. Since the seventh spatial reuse field duplicates the fifth spatial reuse field and the eighth spatial reuse field duplicates the sixth spatial reuse field, the values of the fifth and sixth spatial reuse fields may also be normalized values for each 20 MHz subchannel. Accordingly, the values of the first to fourth spatial reuse fields may also be normalized values for each 20 MHz subchannel.

For example, when the preset frequency band is an 80 MHz band, the fifth (or seventh) spatial reuse field may be applied to each 20 MHz subchannel of a first 40 MHz subband in the 80 MHz band, and the sixth (or eighth) spatial reuse field may be applied to each 20 MHz subchannel of a second 40 MHz subband in the 80 MHz band. When the preset frequency band is a 160 MHz band, the fifth (or seventh) spatial reuse field may be applied to each 20 MHz subchannel of a first 80 MHz subband in the 160 MHz band, and the sixth (or eighth) spatial reuse field may be applied to each 20 MHz subchannel of a second 80 MHz subband in the 160 MHz band. When the preset frequency band is a 320 MHz-1 or 320 MHz-2 band, the fifth (or seventh) spatial reuse field may be applied to each 20 MHz subchannel of a first 160 MHz subband in the 320 MHz-1 or 320 MHz-2 band, and the sixth (or eighth) spatial reuse field may be applied to each 20 MHz subchannel of a second 160 MHz subband in the 320 MHz-1 or 320 MHz-2 band.

The first to eighth spatial reuse fields each consist of 4 bits and may use the same values as defined in the 802.11ax wireless LAN system (see Table 3).

According to this embodiment, the transmitting STA informs the OBSS STA, through a spatial reuse value, of an interference power value that is allowable for a specific band (or specific channel); the OBSS STA derives its transmit power using the interference power value and the value of the AP TX Power subfield (a sketch of this derivation is given below), and transmits a signal by performing spatial reuse in the specific band (or specific channel). Since the OBSS STA performs spatial reuse in this manner, the transmitting STA may not receive interference from the OBSS STA when receiving the TB PPDU. That is, the present embodiment has the effect of improving throughput and efficiency by enabling spatial reuse by the OBSS STA and stably using transmission resources for a specific band without collision.
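The following is a minimal sketch of how an OBSS STA might bound its transmit power from the advertised values, loosely following the 802.11ax-style spatial reuse framework that the text references (see Table 3). The exact encoding and formula are defined by the specification; the names and the dB arithmetic here are assumptions for illustration only.

    # Illustrative OBSS transmit power bound (assumed PSR-style computation).
    def obss_max_tx_power_dbm(ap_tx_power_dbm: float,
                              allowed_interference_dbm: float,
                              measured_rpl_dbm: float) -> float:
        # The spatial reuse value conveys the interference power the AP can
        # tolerate. The OBSS STA compares the AP's advertised TX power with
        # the power at which it received the triggering transmission (RPL) to
        # estimate the path loss, then transmits no louder than the path loss
        # plus the allowed interference level.
        path_loss_db = ap_tx_power_dbm - measured_rpl_dbm
        return allowed_interference_dbm + path_loss_db

    # Example: AP TX power 20 dBm, allowed interference -82 dBm, and a trigger
    # frame received at -60 dBm give a path loss of 80 dB and a cap of -2 dBm.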
The trigger frame is divided into an HE variant case and an EHT variant case, and the common information field and the user information field may be configured differently (see FIGS. 16 and 17 for the common information field, and FIG. 20 for the user information field).

The TB PPDU may be an EHT TB PPDU. The EHT TB PPDU may include a Legacy-Short Training Field (L-STF), a Legacy-Long Training Field (L-LTF), a Legacy-Signal (L-SIG), a Repeated L-SIG (RL-SIG), a Universal-Signal (U-SIG), an EHT-STF, EHT-LTFs, and a data field. That is, the EHT TB PPDU is defined in a format excluding the EHT-SIG from the EHT MU PPDU. Also, the TB PPDU may be a Trigger Based (TB) Aggregated-Physical Protocol Data Unit (A-PPDU) in which a High Efficiency (HE) TB PPDU and an Extremely High Throughput (EHT) TB PPDU are aggregated.

FIG. 25 is a flowchart illustrating a procedure for configuring a trigger frame and a TB PPDU supporting spatial reuse by an STA according to the present embodiment. The example of FIG. 25 may be performed in a network environment in which a next generation WLAN system (IEEE 802.11be or EHT WLAN system) is supported. The next generation wireless LAN system is a WLAN system that is enhanced from the 802.11ax system and may, therefore, satisfy backward compatibility with the 802.11ax system. The example of FIG. 25 may be performed by a receiving STA, and the receiving STA may correspond to a non-AP STA. A transmitting STA of FIG. 25 may correspond to an access point (AP).

This embodiment proposes a method for configuring a trigger frame and a TB PPDU that simultaneously support spatial reuse in an 802.11ax (or HE) WLAN system and an 802.11be (or EHT) WLAN system.

In step S2510, a receiving station (STA) receives a trigger frame from a transmitting STA. In step S2520, the receiving STA transmits a Trigger Based Physical Protocol Data Unit (TB PPDU) to the transmitting STA through a preset frequency band.

The trigger frame includes a common information field and a special user information field. The common information field includes first to fourth spatial reuse fields. The special user information field includes fifth and sixth spatial reuse fields.

This embodiment assumes a situation in which the trigger frame triggers the EHT TB PPDU. The common information field is an EHT variant Common Info field and includes four spatial reuse fields (HSR1, HSR2, HSR3, and HSR4). The four spatial reuse fields HSR1, HSR2, HSR3, and HSR4 are defined for spatial reuse of the OBSS HE STA. The special user information field is included in the trigger frame when an association identifier (AID) is 2007, and includes two spatial reuse fields (ESR1 and ESR2). The two spatial reuse fields (ESR1 and ESR2) are defined for spatial reuse of the OBSS EHT STA.

When the preset frequency band is a 20 MHz band, the first to fourth spatial reuse fields are set to the value of the fifth spatial reuse field (HSR1=HSR2=HSR3=HSR4=ESR1). The OBSS HE STA may determine that the trigger frame triggers a 20 MHz HE TB PPDU.

When the preset frequency band is a 40 MHz band, the first and third spatial reuse fields are set to the value of the fifth spatial reuse field, and the second and fourth spatial reuse fields are set to the value of the sixth spatial reuse field (HSR1=HSR3=ESR1/HSR2=HSR4=ESR2).
The OBSS HE STA may determine that the trigger frame triggers a 40 MHz HE TB PPDU.

When the preset frequency band is an 80 MHz band, the first and second spatial reuse fields are set to the value of the fifth spatial reuse field, and the third and fourth spatial reuse fields are set to the value of the sixth spatial reuse field (HSR1=HSR2=ESR1/HSR3=HSR4=ESR2). The OBSS HE STA may determine that the trigger frame triggers an 80 MHz HE TB PPDU.

When the preset frequency band is a 160 MHz band, the first and second spatial reuse fields are set to the value of the fifth spatial reuse field, and the third and fourth spatial reuse fields are set to the value of the sixth spatial reuse field (HSR1=HSR2=ESR1/HSR3=HSR4=ESR2). The OBSS HE STA may determine that the trigger frame triggers a 160 MHz HE TB PPDU.

When the preset frequency band is a 320 MHz band, the first to fourth spatial reuse fields are set to the smaller of the values of the fifth and sixth spatial reuse fields (HSR1=HSR2=HSR3=HSR4=min(ESR1, ESR2)). The OBSS HE STA may determine that the trigger frame triggers a 160 MHz HE TB PPDU. Since the OBSS HE STA can operate on either of the two 160 MHz channels through which the EHT TB PPDU is transmitted, the HSR value must be a value that can represent both 160 MHz channels. In this case, setting the HSR value to the value of the weaker channel (the smaller spatial reuse value) is preferable because it reduces interference by lowering the transmit power of the OBSS STA.

That is, this embodiment proposes a method in which the four spatial reuse fields (HSR1, HSR2, HSR3, HSR4) in the common information field (EHT variant Common Info field) are set, for each frequency band, based on the two spatial reuse fields (ESR1, ESR2) in the Special User Info field. The band (or channel) through which the trigger frame is transmitted is the same as the band (or channel) through which the TB PPDU is transmitted.

When the preset frequency band is the 20 MHz band, the values of the first to fourth spatial reuse fields may be spatial reuse values for the 20 MHz band. That is, the first to fourth spatial reuse fields may include the same spatial reuse value for the 20 MHz band. The spatial reuse value for the 20 MHz band may be a value used to calculate the transmit power accessible by the OBSS HE STA for the 20 MHz band.

When the preset frequency band is the 40 MHz band, the values of the first and third spatial reuse fields may be spatial reuse values for a first 20 MHz subchannel having a low frequency in the 40 MHz band, and the values of the second and fourth spatial reuse fields may be spatial reuse values for a second 20 MHz subchannel having a high frequency in the 40 MHz band. When the TB PPDU is transmitted in a 2.4 GHz band, the spatial reuse value for the second 20 MHz subchannel may be set equal to the spatial reuse value for the first 20 MHz subchannel.

The spatial reuse value for the first 20 MHz subchannel may be a value used to calculate the transmit power accessible by an Overlapping Basic Service Set (OBSS) High Efficiency (HE) STA for the first 20 MHz subchannel. The spatial reuse value for the second 20 MHz subchannel may be a value used to calculate the transmit power accessible by the OBSS HE STA for the second 20 MHz subchannel.
If the preset frequency band is the 80 MHz band, the OBSS HE STA interprets the value of the first spatial reuse field as a spatial reuse value for a first 20 MHz subchannel having the lowest frequency in the 80 MHz band, interprets the value of the second spatial reuse field as a spatial reuse value for a second 20 MHz subchannel having the second lowest frequency in the 80 MHz band, interprets the value of the third spatial reuse field as a spatial reuse value for a third 20 MHz subchannel having the second highest frequency in the 80 MHz band, and interprets the value of the fourth spatial reuse field as a spatial reuse value for a fourth 20 MHz subchannel having the highest frequency in the 80 MHz band. However, the AP sets the first and second spatial reuse fields to values of a spatial reuse field representing a first 40 MHz subchannel having a low frequency in the 80 MHz band, and sets the third and fourth spatial reuse fields to values of a spatial reuse field representing a second 40 MHz subchannel having a high frequency in the 80 MHz band.

The spatial reuse value for the first 20 MHz subchannel may be a value used to calculate the transmit power accessible by an Overlapping Basic Service Set (OBSS) High Efficiency (HE) STA for the first 20 MHz subchannel. The spatial reuse value for the second 20 MHz subchannel may be a value used to calculate the transmit power accessible by the OBSS HE STA for the second 20 MHz subchannel. The spatial reuse value for the third 20 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the third 20 MHz subchannel. The spatial reuse value for the fourth 20 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the fourth 20 MHz subchannel.

When the preset frequency band is the 160 MHz band, the OBSS HE STA interprets the value of the first spatial reuse field as a spatial reuse value for a first 40 MHz subchannel having the lowest frequency in the 160 MHz band, interprets the value of the second spatial reuse field as a spatial reuse value for a second 40 MHz subchannel having the second lowest frequency in the 160 MHz band, interprets the value of the third spatial reuse field as a spatial reuse value for a third 40 MHz subchannel having the second highest frequency in the 160 MHz band, and interprets the value of the fourth spatial reuse field as a spatial reuse value for a fourth 40 MHz subchannel having the highest frequency in the 160 MHz band. However, the AP sets the first and second spatial reuse fields to values of a spatial reuse field representing a first 80 MHz subchannel having a low frequency in the 160 MHz band, and sets the third and fourth spatial reuse fields to values of a spatial reuse field representing a second 80 MHz subchannel having a high frequency in the 160 MHz band.

The spatial reuse value for the first 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the first 40 MHz subchannel. The spatial reuse value for the second 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the second 40 MHz subchannel. The spatial reuse value for the third 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the third 40 MHz subchannel. The spatial reuse value for the fourth 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the fourth 40 MHz subchannel.
When the preset frequency band is the 320 MHz band, the OBSS HE STA can only decode the first bandwidth field (the 2-bit UL BW subfield described later) and cannot interpret the second bandwidth field (the 2-bit UL Bandwidth Extension subfield), so it may interpret the preset frequency band as a 160 MHz band. Accordingly, the OBSS HE STA interprets the value of the first spatial reuse field as a spatial reuse value for a first 40 MHz subchannel having the lowest frequency in the 160 MHz band (where it is located), interprets the value of the second spatial reuse field as a spatial reuse value for a second 40 MHz subchannel having the second lowest frequency in the 160 MHz band, interprets the value of the third spatial reuse field as a spatial reuse value for a third 40 MHz subchannel having the second highest frequency in the 160 MHz band, and interprets the value of the fourth spatial reuse field as a spatial reuse value for a fourth 40 MHz subchannel having the highest frequency in the 160 MHz band.

However, the AP sets the first spatial reuse field to a value of a spatial reuse field representing a first 40 MHz subchannel having the lowest frequency within each 160 MHz channel of the 320 MHz band, sets the second spatial reuse field to a value of a spatial reuse field representing a second 40 MHz subchannel having the second lowest frequency within each 160 MHz channel of the 320 MHz band, sets the third spatial reuse field to a value of a spatial reuse field representing a third 40 MHz subchannel having the second highest frequency within each 160 MHz channel of the 320 MHz band, and sets the fourth spatial reuse field to a value of a spatial reuse field representing a fourth 40 MHz subchannel having the highest frequency within each 160 MHz channel of the 320 MHz band.

The spatial reuse value for the first 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the first 40 MHz subchannel. The spatial reuse value for the second 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the second 40 MHz subchannel. The spatial reuse value for the third 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the third 40 MHz subchannel. The spatial reuse value for the fourth 40 MHz subchannel may be a value used to calculate the transmit power accessible by an OBSS HE STA for the fourth 40 MHz subchannel.

The common information field may include a first bandwidth field, and the special user information field includes a second bandwidth field. A bandwidth of the preset frequency band may be set based on the first and second bandwidth fields. For example, when the first bandwidth field is set to 0 and the second bandwidth field is set to 0, the preset frequency band may be 20 MHz. When the first bandwidth field is set to 1 and the second bandwidth field is set to 0, the preset frequency band may be 40 MHz. When the first bandwidth field is set to 2 and the second bandwidth field is set to 0, the preset frequency band may be 80 MHz. When the first bandwidth field is set to 3 and the second bandwidth field is set to 1, the preset frequency band may be 160 MHz. When the first bandwidth field is set to 3 and the second bandwidth field is set to 2, the preset frequency band may be 320 MHz-1. When the first bandwidth field is set to 3 and the second bandwidth field is set to 3, the preset frequency band may be 320 MHz-2. It is assumed that the TB PPDU is an EHT TB PPDU.
The first bandwidth field is a field indicating the bandwidth of the HE TB PPDU. By using the first and second bandwidth fields together, the bandwidth of the EHT TB PPDU can also be indicated.

The TB PPDU may include a Universal-Signal (U-SIG) field. The U-SIG field may include seventh and eighth spatial reuse fields. The seventh spatial reuse field may be configured by duplicating the fifth spatial reuse field. The eighth spatial reuse field may be configured by duplicating the sixth spatial reuse field.

Values of the seventh and eighth spatial reuse fields may be normalized values for each 20 MHz subchannel. Since the seventh spatial reuse field duplicates the fifth spatial reuse field and the eighth spatial reuse field duplicates the sixth spatial reuse field, the values of the fifth and sixth spatial reuse fields may also be normalized values for each 20 MHz subchannel. Accordingly, the values of the first to fourth spatial reuse fields may also be normalized values for each 20 MHz subchannel.

For example, when the preset frequency band is an 80 MHz band, the fifth (or seventh) spatial reuse field may be applied to each 20 MHz subchannel of a first 40 MHz subband in the 80 MHz band, and the sixth (or eighth) spatial reuse field may be applied to each 20 MHz subchannel of a second 40 MHz subband in the 80 MHz band. When the preset frequency band is a 160 MHz band, the fifth (or seventh) spatial reuse field may be applied to each 20 MHz subchannel of a first 80 MHz subband in the 160 MHz band, and the sixth (or eighth) spatial reuse field may be applied to each 20 MHz subchannel of a second 80 MHz subband in the 160 MHz band. When the preset frequency band is a 320 MHz-1 or 320 MHz-2 band, the fifth (or seventh) spatial reuse field may be applied to each 20 MHz subchannel of a first 160 MHz subband in the 320 MHz-1 or 320 MHz-2 band, and the sixth (or eighth) spatial reuse field may be applied to each 20 MHz subchannel of a second 160 MHz subband in the 320 MHz-1 or 320 MHz-2 band.

The first to eighth spatial reuse fields each consist of 4 bits and may use the same values as defined in the 802.11ax wireless LAN system (see Table 3).

According to this embodiment, the transmitting STA informs the OBSS STA, through a spatial reuse value, of an interference power value that is allowable for a specific band (or specific channel); the OBSS STA derives its transmit power using the interference power value and the value of the AP TX Power subfield, and transmits a signal by performing spatial reuse in the specific band (or specific channel). Since the OBSS STA performs spatial reuse in this manner, the transmitting STA may not receive interference from the OBSS STA when receiving the TB PPDU. That is, the present embodiment has the effect of improving throughput and efficiency by enabling spatial reuse by the OBSS STA and stably using transmission resources for a specific band without collision.

The trigger frame is divided into an HE variant case and an EHT variant case, and the common information field and the user information field may be configured differently (see FIGS. 16 and 17 for the common information field, and FIG. 20 for the user information field).

The TB PPDU may be an EHT TB PPDU. The EHT TB PPDU may include a Legacy-Short Training Field (L-STF), a Legacy-Long Training Field (L-LTF), a Legacy-Signal (L-SIG), a Repeated L-SIG (RL-SIG), a Universal-Signal (U-SIG), an EHT-STF, EHT-LTFs, and a data field. That is, the EHT TB PPDU is defined in a format excluding the EHT-SIG from the EHT MU PPDU; this field sequence is sketched below. Also, the TB PPDU may be a Trigger Based (TB) Aggregated-Physical Protocol Data Unit (A-PPDU) in which a High Efficiency (HE) TB PPDU and an Extremely High Throughput (EHT) TB PPDU are aggregated.
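For illustration, the field sequence of the EHT TB PPDU listed above may be expressed as a simple ordered structure. The names are illustrative, not normative; the point of the sketch is the absence of an EHT-SIG entry, which distinguishes the EHT TB PPDU from the EHT MU PPDU.

    # Ordered field sequence of the EHT TB PPDU as described in the text.
    EHT_TB_PPDU_FIELDS = (
        "L-STF",      # Legacy-Short Training Field
        "L-LTF",      # Legacy-Long Training Field
        "L-SIG",      # Legacy-Signal
        "RL-SIG",     # Repeated L-SIG
        "U-SIG",      # Universal-Signal (carries the two EHT spatial reuse fields)
        "EHT-STF",
        "EHT-LTF",    # one or more EHT-LTFs
        "Data",
    )

    assert "EHT-SIG" not in EHT_TB_PPDU_FIELDS  # excluded relative to the EHT MU PPDU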
4. Device Configuration

The technical features of the present disclosure may be applied to various devices and methods. For example, the technical features of the present disclosure may be performed/supported through the device(s) of FIG. 1 and/or FIG. 11. For example, the technical features of the present disclosure may be applied to only part of FIG. 1 and/or FIG. 11. For example, the technical features of the present disclosure may be implemented based on the processing chips 114 and 124 of FIG. 1, or implemented based on the processors 111 and 121 and the memories 112 and 122, or implemented based on the processor 610 and the memory 620 of FIG. 11. For example, the device according to the present disclosure receives a trigger frame from a transmitting station (STA), and transmits a Trigger Based Physical Protocol Data Unit (TB PPDU) through a preset frequency band to the transmitting STA.

The technical features of the present disclosure may be implemented based on a computer readable medium (CRM). For example, a CRM according to the present disclosure is at least one computer readable medium including instructions designed to be executed by at least one processor. The CRM may store instructions that perform operations including receiving a trigger frame from a transmitting station (STA) and transmitting a Trigger Based Physical Protocol Data Unit (TB PPDU) through a preset frequency band to the transmitting STA. At least one processor may execute the instructions stored in the CRM according to the present disclosure. At least one processor related to the CRM of the present disclosure may be the processors 111 and 121 of FIG. 1, the processing chips 114 and 124 of FIG. 1, or the processor 610 of FIG. 11. Meanwhile, the CRM of the present disclosure may be the memories 112 and 122 of FIG. 1, the memory 620 of FIG. 11, or a separate external memory/storage medium/disk.

The foregoing technical features of the present specification are applicable to various applications or business models. For example, the foregoing technical features may be applied for wireless communication of a device supporting artificial intelligence (AI).

Artificial intelligence refers to the field of study on artificial intelligence or the methodologies for creating it, and machine learning refers to the field of study on methodologies for defining and solving various issues in the area of artificial intelligence. Machine learning is also defined as an algorithm for improving the performance of an operation through steady experience of the operation.

An artificial neural network (ANN) is a model used in machine learning and may refer to an overall problem-solving model that includes artificial neurons (nodes) forming a network through synaptic connections. The artificial neural network may be defined by a pattern of connections between neurons of different layers, a learning process of updating model parameters, and an activation function generating an output value. The artificial neural network may include an input layer, an output layer, and optionally one or more hidden layers. Each layer includes one or more neurons, and the artificial neural network may include synapses that connect neurons. In the artificial neural network, each neuron may output the value of an activation function applied to the input signals received through synapses, the weights, and the deviation (bias), as sketched below.
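A minimal sketch of the neuron output just described follows: an activation function applied to the weighted sum of synaptic inputs plus a deviation (bias). This is purely illustrative; any differentiable activation function could be substituted for the sigmoid used here.

    # Single-neuron forward pass (illustrative).
    import math

    def neuron_output(inputs, weights, bias):
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation as an example

    # Example: neuron_output([1.0, 0.5], [0.4, -0.2], 0.1) ≈ 0.599.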
A model parameter refers to a parameter determined through learning and includes the weights of synaptic connections and the deviations (biases) of neurons. A hyper-parameter refers to a parameter to be set before learning in a machine learning algorithm and includes a learning rate, the number of iterations, a mini-batch size, and an initialization function.

Learning an artificial neural network may be intended to determine the model parameters that minimize a loss function. The loss function may be used as an index for determining an optimal model parameter in the process of learning the artificial neural network. Machine learning may be classified into supervised learning, unsupervised learning, and reinforcement learning.

Supervised learning refers to a method of training an artificial neural network with a label given for training data, wherein the label may indicate a correct answer (or result value) that the artificial neural network needs to infer when the training data is input to the artificial neural network. Unsupervised learning may refer to a method of training an artificial neural network without a label given for training data. Reinforcement learning may refer to a training method for training an agent defined in an environment to choose an action or a sequence of actions that maximizes the cumulative reward in each state.

Machine learning implemented with a deep neural network (DNN) including a plurality of hidden layers among artificial neural networks is referred to as deep learning, and deep learning is part of machine learning. Hereinafter, machine learning is construed as including deep learning.

The foregoing technical features may be applied to wireless communication of a robot. Robots may refer to machinery that automatically processes or operates a given task with its own capabilities. In particular, a robot having a function of recognizing an environment and autonomously making a judgment to perform an operation may be referred to as an intelligent robot. Robots may be classified into industrial, medical, household, military robots, and the like, according to their uses or fields. A robot may include an actuator or a driver including a motor to perform various physical operations, such as moving a robot joint. In addition, a movable robot may include a wheel, a brake, a propeller, and the like in a driver to run on the ground or fly in the air through the driver.

The foregoing technical features may be applied to a device supporting extended reality. Extended reality collectively refers to virtual reality (VR), augmented reality (AR), and mixed reality (MR). VR technology is a computer graphics technology of providing a real-world object and background only in a CG image, AR technology is a computer graphics technology of providing a virtual CG image on a real object image, and MR technology is a computer graphics technology of providing virtual objects mixed and combined with the real world.

MR technology is similar to AR technology in that a real object and a virtual object are displayed together. However, a virtual object is used as a supplement to a real object in AR technology, whereas a virtual object and a real object are given equal status in MR technology.

XR technology may be applied to a head-mounted display (HMD), a head-up display (HUD), a mobile phone, a tablet PC, a laptop computer, a desktop computer, a TV, digital signage, and the like. A device to which XR technology is applied may be referred to as an XR device.
The claims recited in the present specification may be combined in a variety of ways. For example, the technical features of the method claims of the present specification may be combined to be implemented as a device, and the technical features of the device claims of the present specification may be combined to be implemented by a method. In addition, the technical characteristics of the method claim of the present specification and the technical characteristics of the device claim may be combined to be implemented as a device, and the technical characteristics of the method claim of the present specification and the technical characteristics of the device claim may be combined to be implemented by a method.
203,701
11943627
DETAILED DESCRIPTION The present invention is generally directed to spectrum analysis and management for electromagnetic signals, and more particularly for providing dynamic, prioritized spectrum utilization management. In one embodiment, the present invention provides a system for dynamic, prioritized spectrum utilization management in an electromagnetic environment including at least one monitoring sensor operable to monitor the electromagnetic environment, thereby creating measured data, at least one data analysis engine for analyzing the measured data, at least one application, a semantic engine including a programmable rules and policy editor, and a tip and cue server, wherein the at least one data analysis engine includes a detection engine and a learning engine, wherein the detection engine is operable to automatically detect at least one signal of interest, and wherein the learning engine is operable to learn the electromagnetic environment, wherein the at least one application includes a survey occupancy application and a resource brokerage application, wherein the survey occupancy application is operable to determine occupancy in frequency bands and schedule occupancy in at least one frequency band, and wherein the resource brokerage application is operable to optimize resources to improve performance of at least one customer application and/or at least one customer device, wherein the programmable rules and policy editor includes at least one rule and/or at least one policy, and wherein one or more of the at least one rule and/or the at least one policy is defined by at least one customer, wherein the tip and cue server is operable to use analyzed data from the at least one data analysis engine to create actionable data, wherein each of the at least one customer application is assigned a priority, and wherein the priority and the one or more of the at least one rule and/or the at least one policy are used to dynamically allocate the at least one frequency band in the electromagnetic spectrum. 
In another embodiment, the present invention provides a system for dynamic, prioritized spectrum utilization management in an electromagnetic environment including at least one monitoring sensor operable to monitor the electromagnetic environment, thereby creating measured data, at least one data analysis engine for analyzing the measured data, at least one application, a semantic engine including a programmable rules and policy editor, and a tip and cue server, wherein the at least one data analysis engine includes a detection engine, an identification engine, a classification engine, a geolocation engine, and a learning engine, wherein the detection engine is operable to automatically detect at least one signal of interest, and wherein the learning engine is operable to learn the electromagnetic environment, wherein the at least one application includes a survey occupancy application and a resource brokerage application, wherein the survey occupancy application is operable to determine occupancy in frequency bands and schedule occupancy in at least one frequency band, and wherein the resource brokerage application is operable to optimize resources to improve performance of at least one customer application and/or at least one customer device, wherein the programmable rules and policy editor includes at least one rule and/or at least one policy, and wherein one or more of the at least one rule and/or the at least one policy is defined by at least one customer, wherein the tip and cue server is operable to use analyzed data from the at least one data analysis engine to create actionable data, wherein each of the at least one customer application is assigned a priority, and wherein the priority and the one or more of the at least one rule and/or the at least one policy are used to dynamically allocate the at least one frequency band in the electromagnetic spectrum. In yet another embodiment, the present invention provides a method for dynamic, prioritized spectrum utilization management in an electromagnetic environment including providing a semantic engine including a programmable rules and policy editor, wherein the programmable rules and policy editor includes at least one rule and/or at least one policy, and wherein one or more of the at least one rule and/or the at least one policy is defined by at least one customer, monitoring the electromagnetic environment using at least one monitoring sensor, thereby creating measured data, analyzing the measured data using at least one data analysis engine, thereby creating analyzed data, wherein the at least one data analysis engine includes a detection engine and a learning engine, learning the electromagnetic environment using the learning engine, automatically detecting at least one signal of interest using the detection engine, determining occupancy in frequency bands and scheduling occupancy in at least one frequency band using a survey occupancy application, optimizing resources to improve performance of at least one customer application and/or at least one customer device using a resource brokerage application, assigning a priority to each of the at least one customer application, and dynamically allocating the at least one frequency band in the electromagnetic spectrum based on the priority and the one or more of the at least one rule and/or the at least one policy. 
In still another embodiment, the present invention provides a method for dynamic, prioritized spectrum utilization management in an electromagnetic environment including providing a semantic engine including a programmable rules and policy editor, wherein the programmable rules and policy editor includes at least one rule and/or at least one policy, and wherein one or more of the at least one rule and/or the at least one policy is defined by at least one customer, monitoring the electromagnetic environment using at least one monitoring sensor, thereby creating measured data, analyzing the measured data using at least one data analysis engine, thereby creating analyzed data, wherein the at least one data analysis engine includes a detection engine, a classification engine, an identification engine, a geolocation engine, and a learning engine, learning the electromagnetic environment using the learning engine, automatically detecting at least one signal of interest using the detection engine, classifying the at least one signal of interest using the classification engine, identifying the at least one signal of interest using the identification engine, determining a location of the at least one signal of interest using the geolocation engine, determining occupancy in frequency bands and scheduling occupancy in at least one frequency band using a survey occupancy application, optimizing resources to improve performance of at least one customer application and/or at least one customer device using a resource brokerage application, assigning a priority to each of the at least one customer application, and dynamically allocating the at least one frequency band in the electromagnetic spectrum based on the priority and the one or more of the at least one rule and/or the at least one policy.

Traditional management of spectrum is static, based on licenses that are geographical and band specific. The Federal Communications Commission (FCC) has allocated spectrum into a table. Utilization is increased by slicing the spectrum into finer slices. Additionally, interference is limited by imposing penalties through strict geographical band utilization rules and licenses. However, these traditional methods of spectrum management do not scale with increasing demand and newly emerging services. The new services would have to operate at higher frequencies (e.g., above 10 GHz), which is very expensive and requires costly transceivers with a limited distance range.

Spectrum is valuable because it is a finite resource. Further, the demand for spectrum is ever-increasing. The Shannon-Hartley theorem calculates the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise as follows:

C = BW log2(1 + SNR)

where C is the channel capacity in bits per second, BW is the bandwidth of the channel in Hz, and SNR is the signal-to-noise ratio (a worked example is given below). Early attempts at managing spectrum include developing technology that increases spectrum efficiency (i.e., maximizing SNR). Although this results in more bits per Hz, the logarithmic function limits the gains in channel capacity resulting from improving technology. Additional attempts at managing spectrum also include developing technology to enable use of alternate spectrum (e.g., free-space optical (FSO) communication). However, using alternate spectrum, such as higher frequencies, leads to smaller ranges, line of sight limitations, increased elevation of transmission structures, and/or expensive infrastructure.
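The following worked example of the Shannon-Hartley limit quoted above uses illustrative values only.

    # Channel capacity per the Shannon-Hartley theorem: C = BW * log2(1 + SNR).
    import math

    def channel_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
        return bandwidth_hz * math.log2(1.0 + snr_linear)

    # A 20 MHz channel at 30 dB SNR (linear SNR = 1000):
    # channel_capacity_bps(20e6, 1000.0) ≈ 199.3e6 bits per second.
    # Doubling the SNR to 2000 adds only about 20 Mbit/s, while doubling the
    # bandwidth doubles the capacity, which is why bandwidth, rather than SNR
    # alone, dominates the achievable gains.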
The missing component of spectrum management is bandwidth management. Bandwidth management provides flexible utilization of the spectrum and enables management of spectrum resources and users, while allowing spectrum usage to be quantified. The majority of applications using the spectrum can coexist if each application knows about the spectrum needs of other applications and how they plan to use the spectrum. However, because the needs of each application are dynamic, a dynamic spectrum management system is needed.

The present invention allows autonomous, dynamic sharing of the electromagnetic spectrum to allow maximum utilization by diverse applications according to specific utilization rules (dynamic and/or static) while maintaining minimum interference between applications. This requires new tools that provide dynamic environmental spectral awareness of all signals present in the electromagnetic (e.g., radio frequency (RF)) environment to properly execute utilization rules, which are operable to describe or facilitate sharing spectrum resources among several competing users or to protect one service user from others.

5G requires spectrum awareness. Larger blocks of spectrum are required to support higher speeds. Dynamic spectrum sharing is necessary to make the spectrum assets available. Further, visibility of spectrum activity is required to support reliability targets. Interference avoidance and resolution must be embedded. The wireless dependency of Internet of Things (IoT)/machine communication elevates the need for real-time RF visibility to avoid disruption and safety concerns.

The system of the present invention provides scalable processing capabilities at the edge. Edge processing is fast and reliable with low latency. Environmental sensing processes optimize collection and analytics, making data sets manageable. Advantageously, the system minimizes backhaul requirements, allowing actionable data to be delivered faster and more efficiently. Deep learning techniques extract and deliver knowledge from large data sets in near-real time. These deep learning techniques are critical for identifying and classifying signals. Edge analytics further allow third party data (e.g., social media, population information, real estate information, traffic information, geographic information systems) to further enrich captured data sets. A semantic engine and inference reasoner leverages insights generated by machine learning and edge analytics. Ontologies are established allowing for the creation of knowledge operable to inform and direct actions and/or decisions.

Referring now to the drawings in general, the illustrations are for the purpose of describing one or more preferred embodiments of the invention and are not intended to limit the invention thereto.

The present invention provides systems, methods, and apparatuses for spectrum analysis and management by identifying, classifying, and cataloging at least one or a multiplicity of signals of interest based on electromagnetic spectrum measurements (e.g., radiofrequency spectrum measurements), location, and other measurements.
The present invention uses real-time and/or near real-time processing of signals (e.g., parallel processing) and corresponding signal parameters and/or characteristics in the context of historical, static, and/or statistical data for a given spectrum. More particularly, baseline data and changes in state for compressed data are used to enable near real-time analytics and results for individual monitoring sensors and for aggregated monitoring sensors, making unique comparisons of data possible. The systems, methods, and apparatuses according to the present invention preferably are operable to detect in near real time, and more preferably to detect, sense, measure, and/or analyze in near real time, and more preferably to perform any near real time operations within about 1 second or less. In one embodiment, near real time is defined as computations completed before data marking the next event change arrives. For example, if an event happens every second, near real time is completing computations in less than one second. Advantageously, the present invention and its real time functionality described herein uniquely provide and enable the system to compare acquired spectrum data to historical data, to update data and/or information, and/or to provide more data and/or information on open space. In one embodiment, information (e.g., open space) is provided on an apparatus unit or a device that is occupying the open space. In another embodiment, the system compares acquired data with historically scanned data (e.g., from the previous 15 minutes to 30 days) and/or historical database information in near-real time. Also, the data from each monitoring sensor, apparatus unit, or device and/or aggregated data from more than one monitoring sensor, apparatus unit, and/or device are communicated via a network to at least one server computer and stored on a database in a virtualized or cloud-based computing system, and the data is available for secure, remote access via the network from distributed remote devices having software applications (apps) operable thereon, for example by web access (mobile app) or computer access (desktop app). The at least one server computer is operable to analyze the data and/or the aggregated data. The system is operable to monitor the electromagnetic (e.g., RF) environment via at least one monitoring sensor. The system is then operable to analyze data acquired from the at least one monitoring sensor to detect, classify, and/or identify at least one signal in the electromagnetic environment. The system is operable to learn the electromagnetic environment, which allows the system to extract environmental awareness. In a preferred embodiment, the system extracts environmental awareness by including customer goals. The environmental awareness is combined with the customer goals, customer defined policies, and/or rules (e.g., customer defined rules, government defined rules) to extract actionable information to help the customer optimize performance according to the customer goals. The actionable information is combined and correlated with additional information sources to enhance customer knowledge and user experience through dynamic spectrum utilization and prediction models. The systems, methods, and apparatuses of the various embodiments enable spectrum utilization management by identifying, classifying, and cataloging signals of interest based on electromagnetic (e.g., radio frequency) measurements. In one embodiment, signals and parameters of the signals are identified.
In another embodiment, indications of available frequencies are presented to a user and/or user equipment. In yet another embodiment, protocols of signals are also identified. In a further embodiment, the modulation of signals, data types carried by the signals, and estimated signal origins are identified. Identification, classification, and cataloging signals of interest preferably occurs in real time or near-real time. Embodiments are directed to a spectrum monitoring unit that is configurable to obtain spectrum data over a wide range of wireless communication protocols. Embodiments also provide for the ability to acquire data from and send data to database depositories that are used by a plurality of spectrum management customers and/or applications or services requiring spectrum resources. In one embodiment, the system includes at least one spectrum monitoring unit. Each of the at least one spectrum monitoring unit includes at least one monitoring sensor that is preferably in network communication with a database system and spectrum management interface. In one embodiment, the at least one spectrum monitoring unit and/or the at least one monitoring sensor is portable. In a preferred embodiment, one or more of the at least one spectrum monitoring unit and/or the at least one monitoring sensor is a stationary installation. The at least one spectrum monitoring unit and/or the at least one monitoring sensor is operable to acquire different spectrum information including, but not limited to, frequency, bandwidth, signal power, time, and location of signal propagation, as well as modulation type and format. The at least one spectrum monitoring unit is preferably operable to provide signal identification, classification, and/or geo-location. Additionally, the at least one spectrum monitoring unit preferably includes a processor to allow the at least one spectrum monitoring unit to process spectrum power density data as received and/or to process raw In-Phase and Quadrature (I/Q) complex data. Alternatively, the at least one spectrum monitoring unit and/or the at least one monitoring sensor transmits the data to at least one data analysis engine for storage and/or processing. In a preferred embodiment, the transmission of the data is via a backhaul operation. The spectrum power density data and/or the raw I/Q complex data are operable to be used to further signal processing, signal identification, and data extraction. The system preferably is operable to manage and prioritize spectrum utilization based on five factors: frequency, time, spatial, signal space, and application goals. The frequency range is preferably as large as possible. In one embodiment, the system supports a frequency range between 1 MHz and 6 GHz. In another embodiment, the system supports a frequency range with a lower limit of 9 kHz. In yet another embodiment, the system supports a frequency range with a higher limit of 12.4 GHz. In another embodiment, the system supports a frequency range with a higher limit of 28 GHz or 36 GHz. Alternatively, the system supports a frequency range with a higher limit of 60 GHz. In still another embodiment, the system supports a frequency range with a higher limit of 100 GHz. The system preferably has an instantaneous processing bandwidth (IPBW) of 40 MHz, 80 MHz, 100 MHz, or 250 MHz per channel. The time range is preferably as large as possible. In one embodiment, the number of samples per dwell time in a frequency band is calculated. 
In one example, the system provides a minimum coverage of 2 seconds. The number of samples per dwell time in the frequency band is calculated as follows: Ns ≥ (IPBW)(2 s) per channel. The storage required in a buffer is a minimum of 2 seconds per channel per dwell time, which is calculated as follows: storage = (IPBW)(2 s)(2 bytes)(number of channels) per dwell time. These calculations are illustrated in the sketch at the end of this passage. Spatial processing is used to divide an area of coverage by a range of azimuth and elevation angles. The area of coverage is defined as an area under a certain azimuth and range. This is implemented by antenna array processing, steerable beamforming, and/or directional antennas. In one embodiment, the directional antennas include at least one steerable electrical or mechanical antenna. Alternatively, the directional antennas include an array of steerable antennas. More antennas require more signal processing. Advantageously, spatial processing allows for better separation of signals, reduction of noise and interference signals, geospatial separation, and increased signal processing gains, and provides a spatial component to signal identification. Further, this allows for simple integration of geolocation techniques, such as time difference of arrival (TDOA), angle of arrival (AOA), and/or frequency difference of arrival (FDOA). This also allows for implementation of a geolocation engine, which will be discussed in detail infra. Each signal has inherent signal characteristics including, but not limited to, a modulation type (e.g., frequency modulation (FM), amplitude modulation (AM), quadrature phase-shift keying (QPSK), quadrature amplitude modulation (QAM), binary phase-shift keying (BPSK), etc.), a protocol used (e.g., no protocol for analog signals, digital mobile radio (DMR), land mobile radio (LMR), Project 25 (P25), NXDN, cellular, long-term evolution (LTE), universal mobile telecommunications system (UMTS), 5G), an envelope behavior (e.g., bandwidth (BW), center frequency (Fc), symbol rate, data rate, constant envelope, peak power to average power ratio (PAR), cyclostationary properties), an interference index, and statistical properties (e.g., stationary, cyclostationary, higher moment decomposition, non-linear decomposition (e.g., a Volterra series to cover non-linearities), learning a basic model). The application goals are dependent on the particular application used within the system. Examples of applications used in the system include, but are not limited to, traffic management, telemedicine, virtual reality, streaming video for entertainment, social media, autonomous and/or unmanned transportation, etc. Each application is operable to be prioritized within the system according to customer goals. For example, traffic management is a higher priority application than streaming video for entertainment. As previously described, the system is operable to monitor the electromagnetic (e.g., RF) environment, analyze the electromagnetic environment, and extract environmental awareness of the electromagnetic environment. In a preferred embodiment, the system extracts the environmental awareness of the electromagnetic environment by including customer goals. In another embodiment, the system uses the environmental awareness with the customer goals and/or user defined policies and rules to extract actionable information to help the customer optimize the customer goals.
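To make the sample-count and buffer-storage calculations above concrete, the following is a minimal sketch assuming complex samples taken at the IPBW rate, the 2 bytes per sample stated above, and a hypothetical four-channel receiver:

```python
def samples_per_dwell(ipbw_hz: float, coverage_s: float = 2.0) -> int:
    """Ns >= (IPBW)(coverage time), per channel."""
    return int(ipbw_hz * coverage_s)

def buffer_bytes(ipbw_hz: float, channels: int, coverage_s: float = 2.0,
                 bytes_per_sample: int = 2) -> int:
    """storage = (IPBW)(2 s)(2 bytes)(channels) held per dwell time."""
    return int(ipbw_hz * coverage_s * bytes_per_sample * channels)

for ipbw in (40e6, 80e6, 100e6, 250e6):  # the IPBW options listed above
    ns = samples_per_dwell(ipbw)
    gb = buffer_bytes(ipbw, channels=4) / 1e9
    print(f"IPBW {ipbw / 1e6:5.0f} MHz: Ns >= {ns:.1e} samples/channel, "
          f"~{gb:.2f} GB buffered")
```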
The system combines and correlates other information sources with the extracted actionable information to enhance customer knowledge through dynamic spectrum utilization and prediction models. FIG. 1 illustrates one embodiment of an RF awareness and analysis system. The system includes an RF awareness subsystem. The RF awareness subsystem includes, but is not limited to, an antenna subsystem, an RF conditioning subsystem, at least one front end receiver, a programmable channelizer, a blind detection engine, a blind classification engine, an envelope feature extraction module, a demodulation bank, an automatic gain control (AGC) double loop subsystem, a signal identification engine, a feature extraction engine, a learning engine, a geolocation engine, a data analysis engine, and/or a database storing information related to at least one signal (e.g., metadata, timestamps, power measurements, frequencies, etc.). The system further includes an alarm system, a visualization subsystem, a knowledge engine, an operational semantic engine, a customer optimization module, a database of customer goals and operational knowledge, and/or a database of actionable data and decisions. The antenna subsystem monitors the electromagnetic (e.g., RF) environment to produce monitoring data. The monitoring data is then processed through the RF conditioning subsystem before being processed through the front end receivers. The AGC double loop subsystem is operable to perform AGC adjustment. Data is converted from analog to digital by the front end receivers. The digital data is then sent through the programmable channelizer, and undergoes I,Q buffering and masking. A fast Fourier transform (FFT) is performed and the blind detection engine performs blind detection. Additionally, the blind classification engine performs blind classification. Information (e.g., observed channels) is shared from the blind detection engine to the blind classification engine and/or the programmable channelizer (e.g., to inform logic and selection processes). Information from the blind detection engine is also sent to the envelope feature extraction module. Information from the blind classification engine is sent to the demodulation bank. Information from the envelope feature extraction module, the demodulation bank, and/or the blind classification engine is operable to be used by the signal identification engine, the feature extraction engine, the learning engine, and/or the geolocation engine. Information from the AGC double loop subsystem, the I,Q buffer, masking, the programmable channelizer, the signal identification engine, the feature extraction engine, the learning engine, the geolocation engine, the envelope feature extraction module, the demodulation bank, and/or the blind classification engine is operable to be stored in the database storing information related to the at least one signal (e.g., signal data, metadata, timestamps). Information from the database (i.e., the database storing information related to the at least one signal), the signal identification engine, the feature extraction engine, the learning engine, and/or the geolocation engine is operable to be sent to the data analysis engine for further processing. The alarm system receives information from the database storing information related to the at least one signal and/or the database of customer goals and operational knowledge. Alarms are sent from the alarm system to the visualization subsystem.
In a preferred embodiment, the visualization subsystem customizes a graphical user interface (GUI) for each customer. The visualization system is operable to display information from the database of actionable data and decisions. In one embodiment, the alarms are sent via text message and/or electronic mail. In one embodiment, the alarms are sent to at least one internet protocol (IP) address. The database of customer goals and operational knowledge is also operable to send information to a semantic engine (e.g., customer alarm conditions and goals) and/or an operational semantic engine (e.g., customer operational knowledge). The semantic engine translates information into constraints and sends the constraints to the customer optimization module, which also receives information (e.g., signal metadata) from the data analysis engine. The customer optimization module is operable to send actionable data related to the electromagnetic environment to the operational semantic engine. The customer optimization module is operable to discern which information (e.g., environmental information) has the largest statistically sufficient impact related to the customer goals and operation. In one embodiment, the system includes at least one monitoring sensor, at least one data analysis engine, at least one application, a semantic engine, a programmable rules and policy editor, a tip and cue server, and/or a control panel as shown in FIG. 2. The at least one monitoring sensor includes at least one radio server and/or at least one antenna. The at least one antenna is a single antenna (e.g., omni-directional or directional) or an antenna array formed of multiple antennas resonating at different frequency bands and configured in a 1D (linear), 2D (planar), or 3D (area) antenna configuration. The at least one monitoring sensor is operable to scan the electromagnetic (e.g., RF) spectrum and measure properties of the electromagnetic spectrum, including, but not limited to, receiver I/Q data. The at least one monitoring sensor is preferably operable to autonomously capture the electromagnetic spectrum with respect to frequency, time, and/or space. In one embodiment, the at least one monitoring sensor is operable to perform array processing. In another embodiment, the at least one monitoring sensor is mobile. In one embodiment, the at least one monitoring sensor is mounted on a vehicle or a drone. Alternatively, the at least one monitoring sensor is fixed. In one embodiment, the at least one monitoring sensor is fixed in or on a street light and/or a traffic pole. In yet another embodiment, the at least one monitoring sensor is fixed on top of a building. In one embodiment, the at least one monitoring sensor is integrated with at least one camera. In one embodiment, the at least one camera captures video and/or still images. In another embodiment, the at least one monitoring sensor includes at least one monitoring unit. Examples of monitoring units include those disclosed in U.S. Pat. Nos. 10,219,163, 10,231,206, 10,237,770, 10,244,504, 10,257,727, 10,257,728, 10,271,233, 10,299,149, 10,498,951, and 10,529,241, and U.S. Publication Nos. 20190215201, 20190364533, and 20200066132, each of which is incorporated herein by reference in its entirety. In a preferred embodiment, the system includes at least one data analysis engine to process data captured by the at least one monitoring sensor. An engine is a collection of functions and algorithms used to solve a class of problems.
The system preferably includes a detection engine, a classification engine, an identification engine, a geolocation engine, a learning engine, and/or a statistical inference and machine learning engine. For example, the geolocation engine is a group of functions and geolocation algorithms that are used together to solve multiple geolocation problems. The detection engine is preferably operable to detect at least one signal of interest in the electromagnetic (e.g., RF) environment. In a preferred embodiment, the detection engine is operable to automatically detect the at least one signal of interest. In one embodiment, the automatic signal detection process includes mask creation and environment analysis using masks. Mask creation is a process of elaborating a representation of the electromagnetic environment by analyzing a spectrum of signals over a certain period of time. A desired frequency range is used to create a mask, and FFT streaming data is also used in the mask creation process. A first derivative is calculated and used for identifying possible maximum power values. A second derivative is calculated and used to confirm the maximum power values. A moving average value is created as FFT data is received during a time period selected by the user for mask creation. For example, the time period is 10 seconds. The result is an FFT array with an average of the maximum power values, which is called a mask. A sketch of this process is provided below. The classification engine is preferably operable to classify the at least one signal of interest. In one embodiment, the classification engine generates a query to a static database to classify the at least one signal of interest based on its components. For example, the information stored in the static database is preferably used to determine spectral density, center frequency, bandwidth, baud rate, modulation type, protocol (e.g., global system for mobile (GSM), code-division multiple access (CDMA), orthogonal frequency-division multiplexing (OFDM), LTE, etc.), system or carrier using licensed spectrum, location of the signal source, and/or a timestamp of the at least one signal of interest. In an embodiment, the static database includes frequency information gathered from various sources including, but not limited to, the Federal Communication Commission, the International Telecommunication Union, and data from users. In one example, the static database is an SQL database. The data store is operable to be updated, downloaded, or merged with other devices or with its main relational database. In one embodiment, software application programming interface (API) applications are included to allow database merging with third-party spectrum databases that are only operable to be accessed securely. In a preferred embodiment, the classification engine is operable to calculate second, third, and fourth order cumulants to classify modulation schemes along with other parameters, including center frequency, bandwidth, baud rate, etc. The identification engine is preferably operable to identify a device or an emitter transmitting the at least one signal of interest. In one embodiment, the identification engine uses signal profiling and/or comparison with known database(s) and previously recorded profile(s) to identify the device or the emitter. In another embodiment, the identification engine states a level of confidence related to the identification of the device or the emitter.
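Returning to the mask-creation process described above for the detection engine, the following is a minimal sketch assuming numpy and a matrix of FFT power frames (rows are time, columns are frequency bins); the derivative-based confirmation of maxima is simplified relative to a production implementation:

```python
import numpy as np

def create_mask(fft_frames: np.ndarray) -> np.ndarray:
    """Build a mask from FFT power frames captured over the selected period.

    The first derivative flags candidate maximum power values, the second
    derivative confirms them, and the confirmed maxima are averaged over
    the frames received during the mask-creation period.
    """
    mask = np.zeros(fft_frames.shape[1])
    for count, frame in enumerate(fft_frames):
        d1 = np.diff(frame)                                    # rising/falling
        candidates = np.where((d1[:-1] > 0) & (d1[1:] <= 0))[0] + 1
        d2 = np.diff(frame, n=2)                               # curvature check
        confirmed = candidates[d2[candidates - 1] < 0]
        peaks = np.full_like(frame, frame.min())
        peaks[confirmed] = frame[confirmed]
        # running average of the per-frame maximum-power envelope
        mask = peaks if count == 0 else (mask * count + peaks) / (count + 1)
    return mask  # FFT array with an average of the maximum power values
```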
The geolocation engine is preferably operable to identify a location from which the at least one signal of interest is emitted. In one embodiment, the geolocation engine uses statistical approximations to remove error causes from noise, timing and power measurements, multipath, and non-line of sight (NLOS) measurements. By way of example, the following methods are used for geolocation statistical approximations and variances: maximum likelihood (nearest neighbor or Kalman filter); least squares approximation; Bayesian filter if prior knowledge data is included; and the like. In another embodiment, time difference of arrival (TDOA) and frequency difference of arrival (FDOA) equations are derived to assist in solving inconsistencies in distance calculations. In still another embodiment, angle of arrival (AOA) is used to determine geolocation. In yet another embodiment, power distribution ratio versus azimuth measurements are used to determine geolocation. In a preferred embodiment, geolocation is performed using Angle of Arrival (AOA), Time Difference of Arrival (TDOA), Frequency Difference of Arrival (FDOA), and power distribution ratio measurements. Several methods or combinations of these methods are operable to be used with the present invention because geolocation is performed in different environments, including but not limited to indoor environments, outdoor environments, hybrid (stadium) environments, inner city environments, etc. The learning engine is preferably operable to learn the electromagnetic environment. In one embodiment, the learning engine uses statistical learning techniques to observe and learn an electromagnetic environment over time and identify temporal features of the electromagnetic environment (e.g., signals) during a learning period. In a preferred embodiment, the learning engine is operable to learn information from the detection engine, the classification engine, the identification engine, and/or the geolocation engine. In one embodiment, the learning function of the system is operable to be enabled and disabled. When the learning engine is exposed to a stable electromagnetic environment and has learned what is normal in the electromagnetic environment, it will stop its learning process. In a preferred embodiment, the electromagnetic environment is periodically reevaluated. In one embodiment, the learning engine reevaluates and/or updates the electromagnetic environment at a predetermined timeframe. In another embodiment, the learning engine reevaluates and/or updates the electromagnetic environment after a problem is detected. The statistical inference and machine learning (ML) engine utilizes statistical learning techniques and/or control theory to learn the electromagnetic environment and make predictions about the electromagnetic environment. The survey occupancy application is operable to determine occupancy in frequency bands. In another embodiment, the survey occupancy application is operable to schedule occupancy in a frequency band. The survey occupancy application is also used to preprocess at least two signals that exist in the same band based on interference between the at least two signals. The resource brokerage application is operable to optimize resources to improve application performance. In a preferred embodiment, the resource brokerage application is operable to use processed data from the at least one monitoring sensor and/or additional information to determine environmental awareness (e.g., environmental situational awareness).
The environmental awareness and/or capabilities of a device and/or a resource are used to determine policies and/or reasoning to optimize the device and/or the resource. The resource brokerage application is operable to control the device and/or the resource. Additionally, the resource brokerage application is operable to control the at least one monitoring sensor. The certification and compliance application is operable to determine if applications and/or devices are behaving according to rules and/or policies (e.g., customer policies and/or rules, government rules). In another embodiment, the certification and compliance application is operable to determine if the applications and/or the devices are sharing frequency bands according to the rules and/or the policies. In yet another embodiment, the certification and compliance application is operable to determine if the applications and/or the devices are behaving according to non-interference rules and/or policies. The sharing application is operable to determine optimization of how applications and/or devices share the frequency bands. In a preferred embodiment, the sharing application uses a plurality of rules and/or policies (e.g., a plurality of customer rules and/or policies, government rules) to determine the optimization of how the applications and/or the devices share the frequency bands. Thus, the sharing application satisfies the plurality of rules and/or policies as defined by at least one customer and/or the government. The statistical inference and prediction utilization application is operable to utilize predictive analytics techniques including, but not limited to, machine learning (ML), artificial intelligence (AI), neural networks (NNs), historical data, and/or data mining to make future predictions and/or models. The system is preferably operable to recommend and/or perform actions based on historical data, external data sources, ML, AI, NNs, and/or other learning techniques. The semantic engine is operable to receive data in forms including, but not limited to, audio data, text data, video data, and/or image data. In one embodiment, the semantic engine utilizes a set of system rules and/or a set of system policies. In another embodiment, the set of system rules and/or the set of system policies is created using a prior knowledge database. The semantic engine preferably includes an editor and a language dictionary. The semantic engine preferably further includes a programmable rules and policy editor. The programmable rules and policy editor is operable to include at least one rule and/or at least one policy. In one embodiment, the at least one rule and/or the at least one policy is defined by at least one customer. Advantageously, this allows the at least one customer to dictate rules and policies related to customer objectives. The system further includes a tip and cue server. The tip and cue server is operable to utilize the environmental awareness from the data processed by the at least one data analysis engine in combination with additional information to create actionable data. In a preferred embodiment, the tip and cue server utilizes information from a specific rule set (e.g., customer defined rule set), further enhancing the optimization capabilities of the system. The specific rule set is translated into optimization objectives, including constraints associated with signal characteristics.
In a preferred embodiment, the tip and cue server is operable to activate at least one alarm and/or provide at least one report. In another embodiment, the tip and cue server is operable to activate the at least one alarm and/or provide the at least one report according to the specific rule set. Advantageously, the system is operable to run autonomously and continuously. The system learns from the environment, and, without operator intervention, is operable to detect anomalous signals that either were not there before, or have changed in power or bandwidth. Once detected, the system is operable to send alerts (e.g., by text or email) and begin high resolution spectrum capture, or I/Q capture of the signal of interest. Additionally, the system is operable to optimize and prioritize applications using the learning engine. FIG. 3 is a flow diagram of the system according to one embodiment. FIG. 4 illustrates the acquisition component of the system. The system includes an antenna subsystem including at least one antenna, an analog front-end conditioning system, a radio receiver front-end system, and an I/Q buffer. The system is operable to perform control functions including, but not limited to, controlling a radio server, conditioning the radio server, I/Q flow control and/or time stamping, and/or buffer management. FIG. 5 illustrates one embodiment of an analog front end of the system. In one embodiment, electromagnetic waves are sent directly to a radio receiver front-end subsystem as shown in Path A. Alternatively, the electromagnetic waves are sent through an analog filter bank and amplifier/channel with a filter (SSS), an amplifier (e.g., variable gain amplifier), and an automatic gain controller as shown in Path B before reaching the radio receiver front-end subsystem. In one embodiment, the BCU is 80 MHz. Alternatively, the BCU is 150 MHz. The radio receiver front-end subsystem is described in FIG. 6. FIG. 6 illustrates one embodiment of a radio receiver front-end subsystem. Path A and Path B continue into a radio-frequency integrated circuit (RFIC), and then proceed to a digital down-converter (DDC) before downsampling (e.g., decimation) and moving through a field programmable gate array (FPGA). In one embodiment, signals from the FPGA are operable to be sent to a digital to analog converter (DAC). Alternatively, signals are sent via bus to a Universal Software Radio Peripheral hardware driver (UHD) host and SD controller before continuing to Path E, which is described in FIG. 7. FIG. 7 continues the embodiment of the radio receiver front-end shown in FIG. 6 after digitization. In one embodiment, Path E continues to the I,Q buffer. In another embodiment, Path E continues to a baseband receiver. In one embodiment, signals are further processed using signal processing software (e.g., GNU Radio software). In yet another embodiment, the baseband receiver is connected to inputs and/or outputs. In one embodiment, the inputs include, but are not limited to, MicroSD Flash memory and/or a Universal Serial Bus (USB) console. In one embodiment, the outputs include, but are not limited to, USB 2.0 host and/or audio. Alternatively, data from the baseband receiver is sent to the I,Q buffer via the 1GbE port. The system preferably uses multiple receiver channels for the front end. In one embodiment, there are 4 receiver channels. Alternatively, there are 8, 12, 16, or 32 receiver channels. I,Q data is preferably tagged by the receiver channel and receiver antenna (e.g., bandwidth, gain, etc.)
and then stored in the I,Q buffer before analysis is completed. Advantageously, the system is hardware agnostic. The system is operable to provide a suggestion for hardware for a particular frequency set. Additionally, the hardware agnostic nature of the system allows for established architecture to persist. The system is cost effective because it allows cheaper antennas and less expensive filters to be used: calibration can be done using the system rather than the antennas and/or filters, and post-ADC processing rectifies any performance loss. Because the system processes all signals present in the spectrum and their inter-relationships to extract environmental awareness, the analog front end does not require elaborate filtering to avoid interference and provide optimum dynamic range. Additionally, the analog front end does not require optimal antennas for all frequency bands and ranges to obtain environmental awareness. For a time domain programmable channelizer, all filters' impulse responses must be programmable and the number of filters must be programmable. Additionally, the channel bandwidth resolution must be programmable starting from a minimum bandwidth. The center frequency of each channel must also be programmable. Decimation is based on channel bandwidth and desired resolution. However, these requirements are difficult to implement for channels with variable bandwidth and center frequency. Wavelet filters can be used effectively if the center frequency and channel bandwidth follow a tree structure (e.g., Haar and Daubechies wavelets). FIG. 8 is an example of a time domain programmable channelizer. In a preferred embodiment, the system includes a frequency domain programmable channelizer as shown in FIG. 9. The programmable channelizer includes buffer services, preprocessing, bin selection, at least one band pass filter (BPF), an inverse fast Fourier transform (IFFT) function, decomposition, and/or frequency down conversion and phase correction to yield baseband I,Q for channels 1 through R. The IFFT function is done to obtain each decomposed channel I,Q at the proper sampling rate. Advantageously, the frequency domain programmable channelizer is more computationally efficient than a time domain programmable channelizer: each filter is just a vector in the frequency domain, the filtering operation is just a vector multiplication, and decomposing the input signal into multiple channels of differing bandwidths amounts to parsing the vector representing the input signal frequency domain content into subvectors of different lengths. FIG. 10 is another embodiment of a programmable channelizer. Data enters the filter and channels generators with channelization selector logic for a table lookup of filter coefficient and channelization vectors. The programmable channelizer includes a comparison at each channel, which provides anomalous detection using a mask with frequency and power, which is then sent to the learning engine and/or the alarm system ("A"). Data processed with the FFT is sent to the blind detection engine and/or for averaging processing ("B"). Data from the table lookup of filter coefficient and channelization vectors undergoes preprocessing with a mix circular rotator to produce D1 blocks of R1 points. A sum is taken of the D1 blocks, and an R1-point IFFT is taken to produce the overlap samples OL1.
This process occurs (e.g., in parallel) for D1 blocks of R1 points through DR blocks of RR points to produce OL1 through OLR, which are then sent to the classification engine ("C"). All data from the I,Q buffer is preferably stored in a buffered database ("D"). In one embodiment, the I,Q buffer is partitioned into N blocks with L oversamples. In one embodiment, the original sample rate is decimated by D1. FIG. 11 illustrates one embodiment of a blind detection engine. Data from the programmable channelizer undergoes an N point FFT. A power spectral density (PSD) is calculated for each N point FFT and then a complex average FFT is obtained for the P blocks of N point FFT. The PSD is sent to a noise floor estimator, an edge detection algorithm, and/or an isolator. Noise floor estimates from the noise floor estimator are sent to the signal database. The edge detection algorithm passes information to a signal separator (e.g., bandwidth, center frequency). The isolator obtains information including, but not limited to, PSD, the bandwidth and center frequency per channel, the complex average FFT, and/or the N point FFT. Information from the isolator is sent to the programmable channelizer, the envelope feature extraction module, and/or the classification engine. FIG. 12 illustrates one embodiment of an edge detection algorithm. Peaks are detected for all power values above the noise floor. Peaks are recorded in a power array and/or an index array. Consecutive power values are found by looping through the arrays. For each group of consecutive power values, a sub-power array and/or a sub-index array are created. The blind detection engine steps through each power value starting with a default rising threshold. If N consecutive values are increasing above the rising threshold, a first value of N values is set as the rising edge and the index of the first value of N values is recorded. The Nth value is recorded as a rising reference point. The rising threshold is updated based on the rising reference point, and the blind detection engine continues to scan for rising values. If the blind detection engine does not detect rising values and detects M consecutive values decreasing below a falling threshold, a first value of M values is set as the falling edge and the index of the first value of M values is recorded. The Mth value is recorded as a falling reference point. The falling threshold is updated based on the falling reference point. In one embodiment, the rising threshold is updated to x dB above the rising reference point, where x is a value between 1 dB and 2.5 dB. In one embodiment, the falling threshold is updated to y dB below the falling reference point, where y is a value between 1 dB and 2.5 dB. The blind classification engine receives information from the blind detection engine as shown in FIG. 13. Signals are separated based on bandwidth and/or other envelope properties (e.g., duty cycle). An IFFT is performed on R signals for narrowband and/or broadband signals. Decimation is then performed based on bandwidth. Moment calculations are performed for each signal I,Q using the decimated values and/or information from the channelizer. In a preferred embodiment, the moment calculations include a second moment and/or a fourth moment for each signal. A match based on cumulants is selected for each I,Q stream, which is sent to the demodulation bank and/or the geolocation engine.
From the definitions of the second and fourth moments, the following equations are used to calculate the cumulants:

$$\hat{C}_{20}=\frac{1}{N}\sum_{n=1}^{N}\left|Y(n)\right|^{2}$$

$$\hat{C}_{21}=\frac{1}{N}\sum_{n=1}^{N}Y^{2}(n)$$

$$\hat{C}_{40}=\frac{1}{N}\sum_{n=1}^{N}Y^{4}(n)-3\hat{C}_{20}^{2}$$

$$\hat{C}_{41}=\frac{1}{N}\sum_{n=1}^{N}Y^{3}(n)Y^{*}(n)-3\hat{C}_{20}\hat{C}_{21}$$

$$\hat{C}_{42}=\frac{1}{N}\sum_{n=1}^{N}\left|Y(n)\right|^{4}-\left|\hat{C}_{20}\right|^{2}-2\hat{C}_{21}^{2}$$

If it is assumed that transmitted constellations are normalized to unity average power, which is easily accomplished with a power factor equal to 0 dB, this results in Ĉ21 ≈ 1. A normalized fourth moment is calculated using the following equation:

$$\tilde{C}_{4j}\triangleq\hat{C}_{4j}/\hat{C}_{21},\quad j=0,1,2$$

Advantageously, normalizing the fourth moment cumulants removes any scaling power problems. FIG. 14 illustrates details on selection match based on cumulants for modulation selection. As previously described, the cumulants preferably include a second moment and/or a fourth moment for each signal. For example, a fourth moment between −0.9 and −0.62 indicates a quadrature amplitude modulation (QAM) signal, a fourth moment greater than or equal to 1 indicates an amplitude modulation (AM) signal, a fourth moment equal to −1 indicates a constant envelope signal (e.g., frequency modulation (FM), Gaussian minimum-shift keying (GMSK), frequency-shift keying (FSK), or phase-shift keying (PSK)), a fourth moment between −1.36 and −1.209 indicates a pulse-amplitude modulation (PAM) signal, and a fourth moment equal to −2 indicates a binary phase-shift keying (BPSK) signal. A type is selected using a look up table, the signal I,Q is labeled with the type, and the information is sent to the demodulation bank. Additional information about selection match based on cumulants for modulation selection is available in Table 1 below.

TABLE 1
Type        C40        C42        σ(40)    σ(42)
AM          >1.0
FM          −1
GMSK        −1
FSK         −1
BPSK        −2.00      −2.00      0        0
PAM (4)     −1.36      −1.36      2.56     2.56
PAM (8)     −1.238     −1.238     4.82     4.82
PAM (16)    −1.2094    −1.2094    5.52     5.52
PSK (4)     −1.00      −1.00
QAM (4)     −0.68      −0.68
QAM (16)    −0.64      −0.64      3.83     2.24
QAM (32)    −0.61      −0.61      3.89     2.31

FIG. 15 illustrates a flow diagram according to one embodiment of the present invention. Data in the I/Q buffer is processed using a library of functions. The library of functions includes, but is not limited to, FFT, peak detection, characterization, and/or rate adjustment. As previously described, the system preferably includes at least one data analysis engine. In one embodiment, the at least one data analysis engine includes a plurality of engines. In one embodiment, the plurality of engines includes, but is not limited to, a detection engine, a classification engine, an identification engine, a geolocation engine, and/or a learning engine. Each of the plurality of engines is operable to interact with the other engines in the plurality of engines. The system is operable to scan for occupancy of the spectrum, create a mask, detect drones, and/or analyze data. The control panel manages all data flow between the I/Q buffer, library functions, the plurality of engines, applications, and user interface. A collection of basic functions and a particular sequence of operations are called from each of the plurality of engines. Each of the plurality of engines is operable to pass partially processed and/or analyzed data to other engines to enhance functionality of other engines and/or applications. The data from the engines are then combined and processed to build applications and/or features that are customer or market specific.
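A minimal sketch of the cumulant estimates and the Table 1 lookup above, assuming numpy; only the C42 column is matched, and the nearest-value match is a simplification of the selection illustrated in FIG. 14:

```python
import numpy as np

def cumulants(y: np.ndarray) -> dict:
    """Second- and fourth-order cumulant estimates per the equations above."""
    c20 = np.mean(np.abs(y) ** 2)
    c21 = np.mean(y ** 2)
    c40 = np.mean(y ** 4) - 3 * c20 ** 2
    c41 = np.mean(y ** 3 * np.conj(y)) - 3 * c20 * c21
    c42 = np.mean(np.abs(y) ** 4) - np.abs(c20) ** 2 - 2 * c21 ** 2
    # normalized per the equation above; C21 ~ 1 for unit-power real constellations
    return {"C40": c40 / c21, "C41": c41 / c21, "C42": c42 / c21}

# Theoretical normalized C42 values from Table 1 (subset)
TABLE1_C42 = {"BPSK": -2.00, "PAM (4)": -1.36, "PSK (4)": -1.00,
              "QAM (4)": -0.68, "QAM (16)": -0.64, "QAM (32)": -0.61}

def classify(y: np.ndarray) -> str:
    """Label an I,Q stream with the nearest Table 1 modulation type."""
    c42 = cumulants(y)["C42"].real
    return min(TABLE1_C42, key=lambda m: abs(TABLE1_C42[m] - c42))

rng = np.random.default_rng(0)
bpsk = rng.choice([-1.0, 1.0], size=4096) + 0j  # unit-power BPSK symbols
print(classify(bpsk))  # expected: BPSK (C42 ~ -2)
```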
In one embodiment, a plurality of state machines performs a particular analysis for a customer application. In one embodiment, the plurality of state machines is a plurality of nested state machines. In another embodiment, one state machine is utilized per each engine application. The plurality of state machines is used to control flow of functions and/or an engine's input/output utilization to perform required analyses. FIG. 16 illustrates control panel functions according to one embodiment. The control panel is operable to detect occupation of the spectrum, activate an alarm, perform drone detection and direction finding, perform geolocation and artificial spectrum verification, and provide at least one user interface. The at least one user interface is preferably a graphical user interface (GUI). The at least one user interface (UI) is operable to display output data from the plurality of engines and/or applications. In one embodiment, the at least one UI incorporates third party GIS for coordinate display information. The at least one UI is also operable to display alarms, reports, utilization statistics, and/or customer application statistics. In one embodiment, the at least one UI includes an administrator UI and at least one customer UI. The at least one customer UI is specific to each customer. In one embodiment, the systems and methods of the present invention provide unmanned vehicle (e.g., drone) detection. The overall system is capable of surveying the spectrum from 20 MHz to at least 6 GHz, not just the common 2.4 GHz and 5.8 GHz bands as in the prior art. The systems and methods of the present invention are operable to detect UVs and their controllers by protocol. In one embodiment, the systems and methods of the present invention maintain a state-of-the-art learning system and a protocol library for classifying detected signals by manufacturer and controller type. The state-of-the-art learning system and the protocol library are updated as new protocols emerge. In one embodiment, classification by protocol chipset is utilized to provide valuable intelligence and knowledge for risk mitigation and threat defense. The valuable intelligence and knowledge include effective operational range, supported peripherals (e.g., external or internal camera, barometers, global positioning system (GPS) and dead reckoning capabilities), integrated obstacle avoidance systems, and interference mitigation techniques. Advantageously, the system is operable to detect drones that are not in the protocol library. Further, the system is operable to detect drones without demodulating command and control protocols. In one embodiment, the system does not include a protocol library. New protocols and new drones are constantly being released. Additionally, a nefarious operator can switch out the chipset of a drone, which would leave an area vulnerable to the modified drone because a system would not be able to identify the signal as a drone if the protocol is not in the protocol library. In one embodiment, the system generates actionable data that indicates that at least one signal is behaving like a drone. The system performs blind detection, which allows the system to detect the drone signal without the protocol library. In one embodiment, the system is operable to detect drones by evaluating an envelope of the command and control signal. In one embodiment, the system detects the drone signal based on a duty cycle and/or changes in power levels of the signal envelope.
In one example, an LTE signal is classified by the system as a drone when moving at a high velocity. FIG. 17 illustrates one embodiment of an RF analysis sub-architecture of the system. The control panel interacts with the I/Q buffer, library functions, engines, applications, and/or user interface. The engines include a data analysis engine. Analyzed data from the data analysis engine results in an alarm when an alarm condition is met. The alarm is transmitted via text and/or email, or is visualized on a graphical user interface (GUI) of at least one remote device (e.g., smartphone, tablet, laptop computer, desktop computer). FIG. 18 illustrates one embodiment of a detection engine of the system. The detection engine receives data from the at least one monitoring unit. The detection engine includes blind feature extraction algorithms. A mask is created. The detection engine then performs a mask utilization rating and the mask is compared to previous masks. Anomalies are then detected. As previously described, in one embodiment, the data analysis engine is operable to perform mask creation and analyze an electromagnetic (e.g., RF) environment using masks. Mask creation is a process of elaborating a representation of an electromagnetic environment by analyzing a spectrum of signals over a certain period of time. A mask is created with a desired frequency range (e.g., as entered into the system via user input), and FFT streaming data is also used in the mask creation process. A first derivative is calculated and used for identifying maximum power values. A moving average value is created as FFT data is received during a selected time period for mask creation (e.g., via user input). For example, the time period is 10 seconds. The result is an FFT array with an average of maximum power values, which is called a mask. FIG. 19 illustrates a mask according to one embodiment of the present invention. In one embodiment, the mask is used for electromagnetic environment analysis. In one embodiment, the mask is used for identifying potential unwanted signals in an electromagnetic (e.g., RF) environment. The system is operable to utilize masks based on a priori knowledge and/or masks based on expected behavior of the electromagnetic environment. Each mask has an analysis time. During its analysis time, a mask is scanned and live FFT streaming data is compared against the mask before the next mask arrives. If a value is detected over the mask range, a trigger analysis is performed. Each mask has a set of trigger conditions, and an alarm is triggered into the system if the trigger conditions are met. In one embodiment, there are three main trigger conditions including an alarm duration, a decibel (dB) offset, and a count. The alarm duration is a time window an alarm needs to appear for to be considered a trigger condition. For example, the time window is 2 seconds. If a signal is seen for 2 seconds, it passes to the next condition. The dB offset is a threshold value (i.e., dB value) a signal needs to be above the mask to be considered as a potential alarm. The count is the number of times the first two conditions need to happen before an alarm is triggered into the system. FIG. 20 illustrates a workflow of automatic signal detection according to one embodiment of the present invention.
A mask definition is specified by a user for an automatic signal detection process including creating masks, saving masks, and performing electromagnetic (e.g., RF) environment analysis based on the masks created and an FFT data stream from a radio server. In one embodiment, if trigger conditions are met, alarms are triggered and stored to a local database for visualization. FIG. 21 illustrates components of a Dynamic Spectrum Utilization and Sharing model according to one embodiment of the present invention. By employing the Dynamic Spectrum Utilization and Sharing model, the present invention is operable to perform a plurality of radio frequency (RF) environmental awareness functionalities including, but not limited to, monitoring and/or detection, identification, and/or classification. Monitoring and/or detection functionalities include, but are not limited to, broadband frequency range detection, wideband capture in real-time or near-real-time, initial processing and/or post event processing, 24-hour autonomous monitoring, and/or reconfiguration options relating to time, frequency, and spatial settings. Identification functionalities include, but are not limited to, anomalous signal detection, anomalous signal flagging, anomalous signal time stamp recording, providing an anomalous signal database, and/or utilization of a spectrum mask. In one embodiment, the spectrum mask is a dynamic spectrum mask. Classification functionalities include, but are not limited to, correlating signal events with known signal protocols, correlating signal events with known variables, correlating signal events with known databases, correlating signal events with existing wireless signal formats, and/or correlating signal events with existing cellular protocol formats. Each of the aforementioned functionalities incorporates learning processes and/or procedures. These include, but are not limited to, historical data analysis, data preservation tools, and/or learning analytics. Incorporation of machine learning (ML), artificial intelligence (AI), and/or neural networks (NN) ensures that every aspect of detection, monitoring, identification, and/or classification is performed autonomously. This is compounded through the use of the learning analytics, enabling the use of utilization masks for continual ML, prediction modeling, location analysis, intermodulation analysis, and/or the integration of third-party data sets for increasing overall learning capabilities and/or functionalities of the platform. Moreover, these capabilities and/or functionalities are backed up through secure data preservation services, providing both a secure platform environment and/or data enforcement documentation (i.e., legal documents). Furthermore, the platform is operable to provide automated notifications, programmable event triggers, customizable rules and/or policies, and Tip and Cue practices. Automated notifications include, but are not limited to, alerts, alarms, and/or reports. Advantageously, this functionality enables the platform to react to specific rules and/or policies, as well as incorporating the platform's own awareness and knowledge, creating an optimized platform for any RF environment and/or mission. Prediction models used by the platform provide an accurate insight into the dynamic spectrum allocation and utilization functionalities. These prediction models enable the platform to autonomously create forecasts for future spectrum usage.
In addition, the prediction models used by the platform incorporate descriptive analytics, diagnostic analytics, predictive analytics, and/or prescriptive analytics. Descriptive analytics refers specifically to the data stored, analyzed, and/or used by the platform. Descriptive analytics provides data enabling the platform to act and/or provide a suggested action. Diagnostic analytics refers to how and/or why the descriptive analytics acted and/or suggested an action. Predictive analytics specifically refers to the utilization of techniques including, but not limited to, ML, AI, NNs, historical data, and/or data mining to make future predictions and/or models. Prescriptive analytics refers to the act and/or the suggested act generated by the descriptive analytics. Once this predictive model is in place, the platform is operable to recommend and/or perform actions based on historical data, external data sources, ML, AI, NNs, and/or other learning techniques. FIG. 22 illustrates a Results model according to one embodiment of the present invention. The Results model provided by the present invention is centered around four core practices: proactive, predictive, preventative, and preservation. The predictive practice refers to using the aforementioned learning functionalities and capabilities to evolve the platform, enabling the characterization of events that led up to an interference scenario and/or performing interference source modeling to forecast future probabilities and/or conflicting events. The predictive practice is intertwined with the platform remaining proactive, identifying possible signals of interference. While identifying possible signals of interference is a combination of the platform's predictive and proactive capabilities, the platform also remains proactive in performing wireless location characterization for both pre- and post-event scenarios. In addition, the platform's proactive capabilities include, but are not limited to, identifying all possible sources of conflict based on prior events. Furthermore, the platform also focuses on preventative practices. These include, but are not limited to, maintaining a set of de-confliction rules, providing trigger warning notifications and/or early warning notifications, and/or maintaining compatibility with multiple government agencies, including corresponding government project management offices (PMOs) and any interfering sources. In one embodiment, the platform automatically establishes the set of de-confliction rules, where the set of de-confliction rules are operable for editing. In one embodiment, the platform is operable to autonomously edit the set of de-confliction rules. In another embodiment, the platform enables editing of the set of de-confliction rules via user input. Finally, the platform includes preservation components and/or functionalities. These include, but are not limited to, evidentiary storage, learning capabilities, and modeling functionality. Each of these four core practices is interconnected within the platform, enabling dynamic spectrum utilization and sharing.

Geolocation

Geolocation is an additional aspect relating to electromagnetic (e.g., RF) analysis of an environment. The primary functions of the electromagnetic analysis of the environment include, but are not limited to, detection, classification, identification, learning, and/or geolocation. Additionally, the electromagnetic analysis is operable to output environmental awareness data.
The system includes a geolocation engine, operable to use both passive and/or active methods of radio geolocation. In general, radio geolocation refers to the geographic location of man-made emitter sources propagating using radio (electromagnetic) waves as they impinge upon a man-made geo-locator, or receiver. Passive radio geolocation requires no transmission of signals by a geo-locator, whereas active radio geolocation involves a geolocator transmitting signals that interact with an emitter source. Passive methods of geolocation include, but are not limited to, single directional beam antenna response, multidirectional beam antenna response (Amplitude Ratio), multi-antenna element response (Array Processing), line of bearing (LOB)-to-position solutions, and/or general optimization. Multi-antenna element response methods include, but are not limited to, phase interferometry, beamforming, conventional array manifold processing approaches, and/or high-resolution array manifold processing approaches using signals subspace. While these passive methods primarily apply to approaches for Direction Finding (DF) as spatial filtering, passive methods that apply to approaches other than DF as spatial filtering are operable for use by the system. DF refers to the process of estimating the direction of arrival of propagating emitter signals as they impinge on a receiver. Passive methods further include DF approaches based on general optimization including, but not limited to, direct position determination (DPD), convex programming, and/or distributed swarm approaches. In addition to the previously mentioned passive approaches, the system is operable to apply approaches based on ranging observations including, but not limited to, receiver signal strength indicators (RSSI), time of arrival (TOA), and/or time difference of arrival (TDOA) methods. RSSI approaches relate to the generation of observable data and/or location estimation. TOA and/or TDOA approaches relate to generating observable data from distributed multi antenna systems and/or single antenna systems, and/or location estimation using non-linear optimization and/or constrained linear optimization. In a preferred embodiment, geolocation is performed using Angle of Arrival (AOA), Time Difference of Arrival (TDOA), Frequency Difference of Arrival (FDOA), and power distribution ratio measurements. FIG. 23 is a table listing problems that are operable to be solved using the present invention, including serviceability, interference, monitoring and prediction, anomalous detection, planning, compliance, and/or spectrum sharing or leasing. FIG. 24 illustrates a passive geolocation radio engine system view according to one embodiment of the present invention. First, a radio frequency (RF) front end receives at least one RF signal. The RF front end includes, but is not limited to, a set of sensors, a sensor subsystem, at least one analog to digital converter (ADC), and/or an ADC sensor processing subsystem. Once the at least one RF signal has been analyzed by the RF front end and/or the sensor subsystem, the at least one RF signal becomes at least one analyzed RF signal. The at least one analyzed RF signal is output to a measurement subsystem. The measurement subsystem is operable to generate radio location measurements. The radio location measurements are envelope-based and/or signal characteristic-based.
The geolocation engine is operable to use a plurality of algorithms to determine a location of the at least one signal. The plurality of algorithms includes, but is not limited to, TDOA, FDOA, AOA, power level measurements, and/or graphical geolocation, which is described below. The geolocation engine is operable to autonomously decide which algorithm(s) to use to determine the location.

FIG.25illustrates one embodiment of a method to autonomously select one or more of the plurality of algorithms. Timing and carrier frequency offset corrections are performed on the I,Q data (e.g., I,Q0, I,Q1, I,Q2, I,Q3), which is then sent to the signal detection engine. Information from the signal detection engine is sent to the blind classification engine. Information from the blind classification engine is sent to the demodulation bank. Error estimates are performed on envelope (Doppler) measurements from the signal detection engine, signal (time) domain measurements from the blind classification engine, and timing, protocol, and Doppler measurements from the demodulation bank. An evaluation of fidelity is approximately equal to an SNR of the envelope measurements (λ1), signal measurements (λ2), and protocol measurements (λ3). Error analyses for AOA, TDOA, the correlation ambiguity function (CAF) for graphical geolocation, FDOA, and power ratio are used in the evaluation of fidelity. Ct is calculated and minimized over all methods to select the at least one geolocation method, where Ct is the cost function to be minimized and t denotes a time block used to calculate the geolocation solution.

In one embodiment, the geolocation engine uses graphical geolocation techniques. An area is pictorially represented in a grid. The resolution of the grid determines a position in space. The system is operable to detect the at least one signal in the space and determine a location of the at least one signal using the graphical geolocation techniques. In one embodiment, outputs (e.g., location) to a non-linear equation are used to determine possible inputs (e.g., power measurements). The possible outputs are placed on a two-dimensional map. Inputs are then mapped to form a hypothesis of possible outputs. In one embodiment, the graphical geolocation techniques include an image comparison between the two-dimensional map of the possible outputs and the signal data. In another embodiment, the graphical geolocation techniques further include topology (e.g., mountains, valleys, buildings, etc.) to create a three-dimensional map of the possible outputs. The graphical geolocation techniques in this embodiment include an image comparison between the three-dimensional map of the possible outputs and the signal data.
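By way of example, and not limitation, the grid-based ("graphical") geolocation technique is sketched below: predicted power maps over a candidate grid are compared against measured sensor powers, with the resolution of the grid determining the position resolution. The log-distance path-loss model and all parameter values are illustrative assumptions rather than part of the claimed method.

```python
# Hypothetical sketch of grid-based geolocation: an "image comparison"
# between predicted and measured received-power values at each sensor.
import numpy as np

def predicted_rx_power(tx_xy, sensor_xy, tx_dbm=30.0, n=2.7):
    """Log-distance path-loss prediction (illustrative model)."""
    d = np.maximum(np.linalg.norm(sensor_xy - tx_xy, axis=-1), 1.0)
    return tx_dbm - 10.0 * n * np.log10(d)

sensors = np.array([[0.0, 0.0], [900.0, 100.0], [400.0, 800.0]])
measured = np.array([-72.1, -80.4, -76.8])    # example dBm readings per sensor

# Candidate grid: finer grids yield finer position estimates.
xs, ys = np.meshgrid(np.linspace(0, 1000, 101), np.linspace(0, 1000, 101))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)

# Squared error between measured powers and each grid cell's predicted map.
errors = [np.sum((predicted_rx_power(p, sensors) - measured) ** 2) for p in grid]
best = grid[int(np.argmin(errors))]
print("best-matching grid cell:", best)
```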
The geolocation engine is operable to make use of spinning DF, using rotating directional antennas to estimate the direction of arrival of an emitter. The rotating directional antennas measure the received power as a function of direction, and the local maximum of received power is taken as the assumed direction of the emitter. The geolocation engine is also operable to account for any transient signals that escape detection based on rotation speed. This is accomplished by using at least one broad antenna, reducing the chance of the system missing a signal at the cost of reduced angular resolution. Practical considerations for these calculations include, but are not limited to, antenna rotation speed (ω), a rate of arrival of signals (γ), and/or a spatial sampling rate (FPS).

The system is further operable to use amplitude ratio methods for geolocation. These methods involve a multi-lobe amplitude comparison. This is performed using a set of fixed directional antennas pointing in different directions. A ratio corresponding to two responses is calculated, accounting for antenna patterns. This ratio is used to obtain a direction estimate. By not using moving parts and/or antennas, the system is more responsive to transient signals. However, this does require accurate antenna patterns, as these patterns also control system resolution.

General antenna array processing assumes that a signal, s(t), remains coherent as it impinges on each antenna in the array, so that the delay τ_m of the signal at the m-th sensor relative to the signal at the origin of the coordinate system can be expressed as:

$$\tau_m = -\frac{q_m \sin(\theta) + r_m \cos(\theta)}{c}$$

where c is the propagation speed of light and θ is the angle of the signal impinging on the sensor relative to the r-axis. Since the signal is assumed to have a Taylor series decomposition, the propagation delay τ_m is equivalent to a phase shift:

$$\varphi_m = -w\tau_m \;\Rightarrow\; e^{j\varphi_m}$$

Thus, the vector x(t) of antenna responses can be written as:

$$\begin{bmatrix} x_1(t) \\ \vdots \\ x_M(t) \end{bmatrix} = \begin{bmatrix} e^{j\varphi_1} \\ \vdots \\ e^{j\varphi_M} \end{bmatrix} e^{j(wt+\phi)}, \qquad \varphi_m(w,\theta) = \frac{\left[q_m \sin(\theta) + r_m \cos(\theta)\right] w}{c}$$

More generally, the sensors have different directionality and frequency characteristics, which are modeled by applying different gains and phases to the model above, where the gain and phase of the m-th sensor are denoted as g_m(w,θ) and ϕ_m(w,θ). Then, the above equation for x(t) can be expressed as:

$$\begin{bmatrix} x_1(t) \\ \vdots \\ x_M(t) \end{bmatrix} = \begin{bmatrix} g_1(w,\theta)\, e^{j\phi_1(w,\theta)}\, e^{j\varphi_1} \\ \vdots \\ g_M(w,\theta)\, e^{j\phi_M(w,\theta)}\, e^{j\varphi_M} \end{bmatrix} e^{j(wt+\phi)} = a(w,\theta)\, e^{j(wt+\phi)}$$

where a(w,θ) is known as the array response vector. The collection of all array response vectors for all angles θ and all frequencies w is known as an array manifold (i.e., a vector space). In general, if the array manifold is known and free of ambiguities, then obtaining the k−1 angles (θ_1 . . . θ_{k−1}) of k−1 signals whose corresponding array response vectors are linearly independent is performed by correlating x(t) with the array response vector of the appropriate angle. In one embodiment, ambiguities refer to the array manifold lacking rank deficiencies up to k if the system is trying to resolve k−1 directions at the same frequency. The array manifold does not typically have a simple analytical form, and thus the array manifold is approximated using discrete angles for each frequency of interest.
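By way of example, and not limitation, the array response vector a(w,θ) defined above is computed below for a planar sensor layout, assuming ideal sensors with unit gain and zero phase offset; the sensor coordinates and carrier frequency are illustrative assumptions.

```python
# Minimal sketch of the array response (steering) vector a(w, theta),
# using phi_m = (q_m sin(theta) + r_m cos(theta)) * w / c from the text.
import numpy as np

def array_response(q, r, w, theta, c=299_792_458.0):
    """Per-sensor phase shift for an ideal (unit-gain, zero-offset) array."""
    phi = (q * np.sin(theta) + r * np.cos(theta)) * w / c
    return np.exp(1j * phi)

q = np.array([0.0, 0.075, 0.15, 0.225])   # sensor q-coordinates (m), illustrative
r = np.zeros(4)                            # sensors placed on the q-axis
w = 2 * np.pi * 1e9                        # 1 GHz carrier in rad/s (assumed)
a = array_response(q, r, w, np.deg2rad(30.0))
print(np.round(a, 3))
```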
In more general cases, where multiple sinusoidal signals arrive at the array with additive noise, x(t) can be expressed as:

$$x(t) = \sum_{i=1}^{I} a(w,\theta_i)\, s_i(t) + n(t), \qquad s_i(t) = e^{j(w_i t + \beta_i)}$$

$$x(t) = \left[a(w,\theta_1)\, \cdots\, a(w,\theta_I)\right] \left[s_1(t)\, \ldots\, s_I(t)\right]^T + n(t) = A(w,\Theta)\, s(t) + n(t)$$

In one embodiment, additive noise refers to thermal noise from sensors and associated electronics, background noise from the environment, and/or other man-made interference sources including, but not limited to, diffuse signals. Where one or more signals are non-sinusoidal (i.e., broadband), the equivalent can be expressed by its Taylor series over the relevant frequencies. However, when looking at a narrow frequency band of interest, the system is operable to assume the array response vector a(w,θ) is approximately constant with respect to w over all angles θ. This implies that the reciprocal of the time required for the signal to propagate across the array is much greater than the bandwidth of the signal. If sensor characteristics do not vary significantly across the bandwidth, then the dependency on w can be dropped from the array response vector and/or matrix, resulting in:

$$x(t) = A(\Theta)\, s(t) + n(t)$$

For example, in an antenna array using a uniform linear array (ULA), a signal source s(t) = e^{j(wt+ϕ)} impinges on the ULA at angle θ. Thus, if the received signal at the first sensor is x_1(t) = s(t), then it is delayed at sensor m by:

$$x_m(t) = e^{-jw\left(\frac{(m-1)\, d \sin(\theta)}{c}\right)} s(t)$$

In vector form, this is represented as:

$$x(t) = \begin{bmatrix} 1 \\ e^{-jw\left(\frac{d \sin(\theta)}{c}\right)} \\ \vdots \\ e^{-jw\left(\frac{(M-1)\, d \sin(\theta)}{c}\right)} \end{bmatrix} s(t) = a(w,\theta)\, s(t)$$

If there are I source signals received by the ULA, then x(t) = A(Θ)s(t) + n(t), where:

$$A(\Theta) = \begin{bmatrix} 1 & \cdots & 1 \\ e^{-jw\left(\frac{d \sin(\theta_1)}{c}\right)} & \cdots & e^{-jw\left(\frac{d \sin(\theta_I)}{c}\right)} \\ \vdots & & \vdots \\ e^{-jw\left(\frac{(M-1)\, d \sin(\theta_1)}{c}\right)} & \cdots & e^{-jw\left(\frac{(M-1)\, d \sin(\theta_I)}{c}\right)} \end{bmatrix}$$

Here x(t) is the received signal vector (M by 1), s(t) = [s_1(t) . . . s_I(t)]^T is the source signal vector (I by 1), n(t) is the noise signal vector (M by 1), and A(Θ) = [a(w,θ_1), . . . , a(w,θ_I)] is an (M by I) matrix, i.e., the array manifold. In this example, typical assumptions include, but are not limited to: the signal sources are independent and narrowband in relation to the dimensions of the ULA (d, Md) and around the same maximum frequency; all antenna elements are the same; d < λ_max/2 to avoid rank ambiguities; the system can resolve M−1 direction angles without rank ambiguity; and/or the noises are uncorrelated.

In another example, array processing is performed for DF using beamforming. Given knowledge of the array manifold, the array can be maneuvered by taking linear combinations of each element response. This is similar to how a fixed, single antenna can be maneuvered mechanically. Thus, y(t) = w^H x(t), where w is interpreted as a Finite Impulse Response (FIR) filter in the spatial domain. To calculate the power of y(t), assuming a discretization to N samples, the system uses the following:

$$P_y = \left\langle |y(n)|^2 \right\rangle_N = w^H \left\langle x(n)\, x(n)^H \right\rangle_N w = w^H R_{xx}\, w$$

where ⟨·⟩_N denotes time averaging over N sample times and R_xx is the measured spatial autocorrelation matrix of the received array output data.
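By way of example, and not limitation, the beamforming power scan described above is sketched below for a ULA: the filter w is swept over steering vectors a(θ) and the output power w^H R_xx w is maximized over angle. The element count, spacing, source angle, noise level, and snapshot count are illustrative assumptions.

```python
# Hypothetical sketch of conventional (delay-and-sum) beamforming DF.
import numpy as np

M, d_over_lambda = 8, 0.5
def ula_response(theta):
    """ULA steering vector with half-wavelength spacing."""
    m = np.arange(M)
    return np.exp(-2j * np.pi * d_over_lambda * m * np.sin(theta))

# Simulate N snapshots of one unit-magnitude source at 20 degrees plus noise.
rng = np.random.default_rng(0)
theta0, N = np.deg2rad(20.0), 512
s = np.exp(1j * 2 * np.pi * rng.random(N))
x = np.outer(ula_response(theta0), s) + 0.1 * (rng.standard_normal((M, N))
                                               + 1j * rng.standard_normal((M, N)))
Rxx = (x @ x.conj().T) / N                       # measured spatial autocorrelation

scan = np.deg2rad(np.arange(-90, 91))
Py = [np.real(ula_response(t).conj() @ Rxx @ ula_response(t)) for t in scan]
print("estimated DOA (deg):", np.degrees(scan[int(np.argmax(Py))]))
```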
In another example, array processing is performed for DF using beamforming, where:

$$R_{xx} = \left\langle x(n)\, x^H(n) \right\rangle_N = \left\langle \left(A(\Theta)s(n) + n(n)\right)\left(A(\Theta)s(n) + n(n)\right)^H \right\rangle_N$$

In one embodiment, the system assumes the source signals are uncorrelated with the noise, resulting in:

$$R_{xx} = A(\Theta)\, R_{ss}\, A^H(\Theta) + R_{nn}$$

Thus, the power of the linear combination and/or spatial filtering of the array vector response elements is expressed as:

$$P_y = w^H \left(A(\Theta)\, R_{ss}\, A^H(\Theta) + R_{nn}\right) w$$

In examples where array processing for DF is performed using beamforming, the power for a single unit-magnitude sinusoid impinging on the array at angle θ_0 with no noise becomes:

$$P_y(\theta) = w^H a(\theta_0)\, a^H(\theta_0)\, w = \left|w^H a(\theta_0)\right|^2$$

Accounting for the Cauchy-Schwarz inequality, |w^H a(θ_0)|² ≤ ‖w‖² ‖a(θ_0)‖² for all vectors w, with equality if, and only if, w is proportional to a(θ_0), the spatial filter that matches the array response at the direction of arrival θ_0 produces a maximum value for P_y(θ). In addition, DF can be accomplished by searching over all possible angles to maximize P_y(θ), and/or searching over all filters w that are proportional to an array response vector for an impinging angle θ, a(θ):

$$\max_{\text{all angles}} P_y(\theta) \;\Rightarrow\; \text{filters } w = a(\theta)$$

When this method is used, the system behaves like a spinning DF system where the resulting beam changes for each search angle. Advantageously, this method encounters no blind spots due to the rotation and/or the rate of arrival of the source signal. Moreover, when the system is using beamforming techniques and/or processes, the system is operable to search for multiple directions of arrival of different sources, with resolution depending on the width of the beam formed and the height of the sidelobes. For example, a local maximum of the average filter output power is operable to be shifted away from the true direction of arrival (DOA) of a weak signal by a strong source of interference in the vicinity of one of the sidelobes. Alternatively, two closely spaced signals result in only one peak, or in two peaks at the wrong locations.

In yet another example, array processing for DF is performed using a Capon Minimum Variance Distortionless Response (MVDR) approach. This is necessary in cases where multiple source signals are present. The system obtains more accurate estimates of the DOA by forming the array beam using some degrees of freedom to form a beam in the "look" direction and any remaining degrees of freedom to form "nulls" in the remaining directions. The result is a simultaneous beam- and null-forming filter. Forming nulls in other directions is accomplished by minimizing P_y(θ) while constraining a beam in the look direction, which avoids the trivial solution of w = 0. Thus:

$$\min_{w} P_y(\theta) \quad \text{subject to} \quad w^H a(\theta) = 1$$

The resulting filter, w_c(θ), is:

$$w_c(\theta) = \left(a^H(\theta)\, R_{xx}^{-1}\, a(\theta)\right)^{-1} R_{xx}^{-1}\, a(\theta)$$

Using this filter, the filter output power is expressed as:

$$P_{yc}(\theta) = w_c^H(\theta)\, R_{xx}\, w_c(\theta) = \left(a^H(\theta)\, R_{xx}^{-1}\, a(\theta)\right)^{-1}$$

Therefore, the Capon approach searches over all DOA angles for which the above power is maximized, using:

$$\max_{\text{all angles}} \left(a^H(\theta)\, R_{xx}^{-1}\, a(\theta)\right)^{-1}$$

A Capon approach is able to discern multiple signal sources because, while looking at a signal impinging at zero degrees, the system attenuates a signal arriving at fifteen degrees through the formed beam. A Capon approach is one method for estimating an angular decomposition of the average power received by the array, sometimes referred to as a spatial spectrum of the array. The Capon approach is similar to approaches for spectrum estimation and/or modeling of a linear system.
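By way of example, and not limitation, the Capon/MVDR spatial spectrum described above is sketched below for two closely spaced sources at zero and fifteen degrees. The array geometry, noise level, and diagonal loading (added for numerical stability when inverting R_xx) are illustrative assumptions, not taken from the description above.

```python
# Hypothetical sketch of the Capon/MVDR spatial spectrum:
# P(theta) = 1 / (a(theta)^H Rxx^{-1} a(theta)), maximized over angle.
import numpy as np

M, d_over_lambda = 8, 0.5
def a_ula(theta):
    return np.exp(-2j * np.pi * d_over_lambda * np.arange(M) * np.sin(theta))

# Two closely spaced unit-power sources at 0 and 15 degrees plus noise.
rng = np.random.default_rng(1)
N = 1024
A = np.stack([a_ula(np.deg2rad(0.0)), a_ula(np.deg2rad(15.0))], axis=1)
S = np.exp(1j * 2 * np.pi * rng.random((2, N)))
x = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
Rxx = (x @ x.conj().T) / N

Rinv = np.linalg.inv(Rxx + 1e-3 * np.eye(M))     # diagonal loading (assumption)
scan = np.deg2rad(np.arange(-90.0, 90.5, 0.5))
P = np.array([1.0 / np.real(a_ula(t).conj() @ Rinv @ a_ula(t)) for t in scan])

# Report the two strongest local maxima of the Capon spatial spectrum.
peaks = [i for i in range(1, len(P) - 1) if P[i - 1] < P[i] > P[i + 1]]
top = sorted(peaks, key=lambda i: P[i], reverse=True)[:2]
print("Capon DOA estimates (deg):", sorted(np.degrees(scan[i]) for i in top))
```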
The system is further operable to employ additional resolution techniques including, but not limited to, Multiple Signal Classifier (MUSIC), Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT), and/or any other high-resolution DOA algorithm. These resolution techniques enable the system to find DOAs for multiple sources simultaneously. In addition, these resolution techniques generate high spatial resolution when compared with more traditional methods. In one embodiment, these techniques apply only when determining DOAs for narrowband signal sources. For example, when using MUSIC-based methods, the system computes an N×N correlation matrix using:

$$R_x = E\{x(t)\, x^H(t)\} = A R_s A^H + \sigma_0^2 I, \qquad R_s = E\{s(t)\, s^H(t)\} = \mathrm{diag}\{\sigma_1^2, \ldots, \sigma_I^2\}$$

If the signal sources are correlated so that R_s is not diagonal, geolocation will still work as long as R_s has full rank. However, if the signal sources are correlated such that R_s is rank deficient, the system will then deploy spatial smoothing. This is important, as R_s defines the dimension of the signal subspace. For N > I, the matrix A R_s A^H is singular, where:

$$\det\left[A R_s A^H\right] = \det\left[R_x - \sigma_0^2 I\right] = 0$$

This implies that σ_0² is an eigenvalue of R_x. Since the dimension of the null space of A R_s A^H is N−I, there are N−I such eigenvalues σ_0² of R_x. In addition, since both R_x and A R_s A^H are non-negative definite, there are I other eigenvalues σ_i² such that σ_i² > σ_0² > 0.

In a preferred embodiment, geolocation is performed using Angle of Arrival (AOA), Time Difference of Arrival (TDOA), Frequency Difference of Arrival (FDOA), and power distribution ratio measurements. Advantageously, using all four measurements to determine geolocation results in a more accurate determination of location. In many instances, only one type of geolocation measurement is available, forcing the use of one particular approach (e.g., AOA, TDOA, FDOA), but in many cases geolocation measurements are operable to be derived from the behavior of the signals, thus allowing multiple measurements (e.g., all four measurements) to be combined to obtain a more robust geolocation solution. This is especially important when most of the measurements associated with each approach are extremely noisy.

Learning Engine

In addition, the system includes a learning engine operable to incorporate a plurality of learning techniques including, but not limited to, machine learning (ML), artificial intelligence (AI), deep learning (DL), neural networks (NNs), artificial neural networks (ANNs), support vector machines (SVMs), Markov decision process (MDP), and/or natural language processing (NLP). The system is operable to use any of the aforementioned learning techniques alone or in combination. Advantageously, the system is operable for autonomous operation using the learning engine. In addition, the system is operable to continuously refine itself, resulting in increased accuracy relating to data collection, analysis, modeling, prediction, measurements, and/or output. The learning engine is further operable to analyze and/or compute a conditional probability set. The conditional probability set reflects the optimal outcome for a specific scenario, and the specific scenario is represented by a data model used by the learning engine. This enables the system, when given a set of data inputs, to predict an outcome using a data model, where the predicted outcome represents the outcome with the least probability of error and/or a false alarm.
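By way of example, and not limitation, the decision rule implied by the conditional probability set is sketched below: selecting the outcome with the least probability of error is equivalent to selecting the outcome with the maximum posterior probability given the measurements. The outcome labels, prior, and likelihood values are toy assumptions, not the platform's actual models.

```python
# Illustrative sketch of a minimum-probability-of-error (maximum a
# posteriori) decision over a conditional probability set.
import numpy as np

outcomes = ["interference", "no_interference"]
prior = np.array([0.2, 0.8])           # P(outcome), toy values
likelihood = np.array([0.7, 0.1])      # P(measurements | outcome), toy values

posterior = prior * likelihood
posterior /= posterior.sum()           # normalize to P(outcome | measurements)
decision = outcomes[int(np.argmax(posterior))]
print(decision, posterior)
```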
Without a learning engine, prior art systems are still operable to create parametric models for predicting various outcomes. However, these prior art systems are unable to capture all inputs and/or outputs, thereby creating inaccurate data models relating to a specific set of input data. This results in a system that continuously produces the same results when given completely different data sets. In contrast, the present invention utilizes a learning engine with a variety of fast and/or efficient computational methods that simultaneously calculate the conditional probabilities that are most directly related to the outcomes predicted by the system. These computational methods are performed in real-time or near-real-time. Additionally, the system employs control theory concepts and methods within the learning engine. This enables the system to determine whether every data set processed and/or analyzed by the system represents a sufficient statistical data set.

Moreover, the learning engine includes a learning engine software development kit (SDK), enabling the system to prepare and/or manage the lifecycle of datasets used in any system learning application. Advantageously, the learning engine SDK is operable to manage system resources relating to monitoring, logging, and/or organizing any learning aspects of the system. This enables the system to train and/or run models locally and/or remotely using automated ML, AI, DL, and/or NNs. The models are operable for configuration, where the system is operable to modify model configuration parameters and/or training data sets. By operating autonomously, the system is operable to iterate through algorithms and/or hyperparameter settings, creating the most accurate and/or efficient model for running predictive system applications. Furthermore, the learning engine SDK is operable to deploy web services in order to convert any training models into services that can run in any application and/or environment. Thus, the system is operable to function autonomously and/or continuously, refining every predictive aspect of the system as the system acquires more data.

While this functionality is controlled by the learning engine, the system is not limited to employing these learning techniques and/or methods in only the learning engine component, but rather throughout the entire system. This includes RF fingerprinting, RF spectrum awareness, autonomous RF system configuration modification, and/or autonomous system operations and maintenance. The learning engine uses a combination of physical models and convolutional neural network algorithms to compute a set of possible conditional probabilities depicting the set of all possible outputs based on input measurements, providing the most accurate prediction of the solution, wherein accurate means minimizing both the false-alarm probability of the solution and the probability of error for the prediction of the solution.

FIG.26is a diagram describing three pillars of a customer mission solution. The three pillars include environmental awareness, policy management, and spectrum management. The system obtains environmental awareness through a plurality of sensors. The plurality of sensors preferably captures real-time information about the electromagnetic environment. Additionally, the system includes machine learning and/or predictive algorithms to enhance environmental understanding and support resource scheduling.
Policy management is flexible, adaptable, and dynamic, and preferably takes into account real-time information on device configurations and the electromagnetic environment. The system is preferably operable to manage heterogeneous networks of devices and applications. Spectrum management preferably makes use of advanced device capabilities including, but not limited to, directionality, waveforms, hopping, and/or aggregation.

FIG.27is a block diagram of one example of a spectrum management tool. The spectrum management tool includes environment information obtained from at least one monitoring sensor and at least one sensor processor. The spectrum management tool further includes a policy manager, a reasoner, an optimizer, objectives, device information, and/or a device manager. The objectives include information from a mission information database. The policy manager obtains information from a policy information database. In another embodiment, the policy manager uses information (e.g., from the policy information database, measurements of the electromagnetic environment) to create policies and/or rules for conditional allowance of resources per signal using the spectrum. These policies and/or rules are then passed to the reasoner to determine the optimization conditional constraints to be used by the optimizer, with the goal of optimizing the utilization of the spectrum (e.g., based on mission information and objectives) by all signals present according to the policies and/or rules. At the output of the optimizer, resources (bandwidth, power, frequency, modulation, spatial azimuth and elevation focus for transmitter/receiver (TX/RX) sources) as well as interference levels per application are recommended for each signal source. After that, the loop closes: newly collected environmental awareness is fed back to the policy manager and the reasoner.

FIG.28is a block diagram of one embodiment of a resource brokerage application. As previously described, the resource brokerage application is preferably operable to use processed data from the at least one monitoring sensor and/or additional information to determine environmental awareness (e.g., environmental situational awareness). The environmental awareness and/or the capabilities of a device and/or a resource are used to determine policies and/or reasoning to optimize the device and/or the resource. The resource brokerage application is operable to control the device and/or the resource. Additionally, the resource brokerage application is operable to control the at least one monitoring sensor.

Semantic Engine

The system further includes an automated semantic engine and/or translator as shown inFIG.29. The translator is operable to receive data input including, but not limited to, at least one use case, at least one objective, and/or at least one signal. In one embodiment, the at least one use case is a single-signal use case. In another embodiment, the at least one use case is a multiple-signal use case. Once the translator receives data input, the translator uses natural language processing (NLP), and/or similar data translation processes and techniques, to convert the data input into actionable data for the automated semantic engine. By separating the data translation process from the automated semantic engine, the system is operable to provide more processing power once the data input is sent to the automated semantic engine, reducing the overall processing strain on the system.
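By way of example, and not limitation, the translation step is sketched below as a minimal keyword-based parser that converts a natural-language request into structured, actionable fields. The grammar, field names, and units are hypothetical illustrations; the translator described above uses NLP and similar data translation techniques rather than this simple pattern matching.

```python
# Hypothetical sketch of translating a natural-language request into
# structured fields for downstream processing.
import re

def translate(query: str) -> dict:
    out = {}
    m = re.search(r"(\d+(?:\.\d+)?)\s*(MHz|GHz)", query, re.I)
    if m:
        scale = 1e6 if m.group(2).upper() == "MHZ" else 1e9
        out["frequency_hz"] = float(m.group(1)) * scale   # hypothetical field name
    if re.search(r"interfer", query, re.I):
        out["objective"] = "interference_detection"       # hypothetical field name
    return out

print(translate("Alert me to interference near 739 MHz"))
# -> {'frequency_hz': 739000000.0, 'objective': 'interference_detection'}
```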
The automated semantic engine includes a rule component, a syntax component, a logic component, a quadrature (Q) component, and/or a conditional set component. In addition, the semantic engine is operable for network communication with a prior knowledge database, an analytics engine, and/or a monitoring and capture engine. Data is initially sent to the automated semantic engine via the translator. The automated semantic engine is operable to receive data from the translator in forms including, but not limited to, audio data, text data, video data, and/or image data. In one embodiment, the automated semantic engine is operable to receive a query from the translator. The logic component and/or the rule component are operable to establish a set of system rules and/or a set of system policies, where the set of system rules and/or the set of system policies is created using the prior knowledge database. Advantageously, the automated semantic engine is operable to run autonomously using any of the aforementioned learning and/or automation techniques. This enables the system to run continuously, without requiring user interaction and/or input, resulting in a system that is constantly learning and/or refining data inputs, creating more accurate predictions, models, and/or suggested actions. Moreover, the automated semantic engine enables the system to receive queries, searches, and/or any other type of search-related function using natural language, as opposed to requiring a user and/or customer to adapt to a particular computer language. This functionality is performed using a semantic search via natural language processing (NLP). The semantic search combines traditional word searches with logical relationships and concepts.

In one embodiment, the automated semantic engine uses Latent Semantic Indexing (LSI). LSI organizes existing information within the system into structures that support high-order associations of words with text objects. These structures reflect the associative patterns found within data, permitting data retrieval based on latent semantic context in existing system data. Furthermore, LSI is operable to account for noise associated with any set of input data. This is done through LSI's ability to increase recall, a known constraint of traditional Boolean queries and vector space models. LSI uses automated categorization, assigning a set of input data to one or more predefined data categories contained within the prior knowledge database, where the categories are based on a conceptual similarity between the set of input data and the content of the prior knowledge database. Furthermore, LSI makes use of dynamic clustering, grouping the set of input data with data within the prior knowledge database using conceptual similarity, without using example data to establish a conceptual basis for each cluster.

In another embodiment, the automated semantic engine uses Latent Semantic Analysis (LSA). LSA functionalities include, but are not limited to, occurrence matrix creation, ranking, and/or derivation. Occurrence matrix creation involves using a term-document matrix describing the occurrences of terms in a set of data. Once the occurrence matrix is created, LSA uses ranking to determine the most accurate solution given the set of data. In one embodiment, low-rank approximation is used to rank data within the occurrence matrix.
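By way of example, and not limitation, the low-rank approximation step of LSI/LSA is sketched below using a truncated singular value decomposition of a toy term-document occurrence matrix; the vocabulary, documents, and chosen rank are illustrative assumptions.

```python
# Minimal LSI/LSA sketch: factor a term-document occurrence matrix with a
# truncated SVD and compare documents in the reduced latent space.
import numpy as np

# Rows = terms, columns = documents (toy occurrence counts).
X = np.array([[2, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 2, 0, 1],
              [0, 0, 1, 2]], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                         # rank of the low-rank approximation
docs_latent = (np.diag(s[:k]) @ Vt[:k]).T     # document coordinates in latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("doc0 vs doc2 latent similarity:", round(cosine(docs_latent[0], docs_latent[2]), 3))
```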
In another embodiment, the automated semantic engine uses semantic fingerprinting. Semantic fingerprinting converts a set of input data into a Boolean vector and creates a semantic map using the Boolean vector. The semantic map is operable for use in any context and provides an indication of every data match for the set of input data. This enables the automated semantic engine to convert any set of input data into a semantic fingerprint, where semantic fingerprints are operable to be combined with additional semantic fingerprints, providing an accurate solution given the set of input data. Semantic fingerprint functionality further includes, but is not limited to, risk analysis, document search, classifier indication, and/or classification.

In yet another embodiment, the automated semantic engine uses semantic hashing. By using semantic hashing, the automated semantic engine maps a set of input data to memory addresses using a neural network, where semantically similar sets of data inputs are located at nearby addresses. The automated semantic engine is operable to create a graphical representation of the semantic hashing process using counting vectors from each set of data inputs. Thus, sets of data inputs similar to a target query can be found by accessing all of the memory addresses that differ by only a few bits from the address of the target query. This method extends the efficiency of hash-coding to approximate matching and is much faster than locality-sensitive hashing.

In one embodiment, the automated semantic engine is operable to create a semantic map. The semantic map places target data at its center while analyzing related data and/or data with similar characteristics to the target data. This adds a secondary layer of analysis to the automated semantic engine, providing secondary context for the target data using similar and/or alternative solutions based on the target data. The system is operable to create a visualization of the semantic map.

Traditional semantic network-based search systems suffer from numerous performance issues due to the scale of an expansive semantic network. In order for the semantic functionality to be useful in locating accurate results, a system is required to store a high volume of data. In addition, such a vast network creates difficulties in processing many possible solutions to a given problem. The system of the present invention solves these limitations through the various learning techniques and/or processes incorporated within the system. When combined with the ability to function autonomously, the system is operable to process a greater amount of data than systems making use of only traditional semantic approaches. By incorporating the automated semantic engine within the system, the system has a greater understanding of potential solutions, given a provided set of data. Semantic engines are regularly associated with semantic searches, i.e., searches with meaning or with an understanding of the overall meaning of the query; by understanding the searcher's intent and the contextual meaning of the search, they generate more relevant results. Semantic engines of the present invention, along with a spectrum-specific ontology (vocabulary and operational domain knowledge), help automate spectrum utilization decisions based on dynamic observations and extracted environmental awareness, and create and extend spectrum management knowledge for multiple applications.
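By way of example, and not limitation, the semantic hashing approach described above is sketched below, with random hyperplanes standing in for the trained neural network: similar inputs map to binary codes (addresses) that differ in only a few bits, so approximate matches are found by scanning nearby Hamming addresses.

```python
# Hypothetical sketch of semantic hashing with random-hyperplane codes.
import numpy as np

rng = np.random.default_rng(7)
planes = rng.standard_normal((16, 64))        # 16-bit codes for 64-d embeddings

def semantic_hash(v):
    """One bit per hyperplane: which side of the plane the vector falls on."""
    return (planes @ v > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

base = rng.standard_normal(64)
near = base + 0.1 * rng.standard_normal(64)   # semantically similar item
far = rng.standard_normal(64)                 # unrelated item
print(hamming(semantic_hash(base), semantic_hash(near)),   # few bits differ
      hamming(semantic_hash(base), semantic_hash(far)))    # many bits differ
```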
Tip and Cue Processes

The system uses a set of "tip and cue" processes, generally referring to detecting, processing, and/or providing alerts by creating actionable data from acquired RF environmental awareness information in conjunction with a specific rule set, further enhancing the optimization capabilities of the system. The specific rule set is translated into optimization objectives, including constraints associated with signal characteristics. The tip and cue processes of the present invention produce actionable data to solve a plurality of user issues and/or objectives. Tip and cue processes are performed by an awareness system. The awareness system is operable to receive input data including, but not limited to, a set of use cases, at least one objective, and/or a rule set. The input data is then analyzed by a translator component, where the translator component normalizes the input data. Once normalized, the input data is sent to a semantic engine. The semantic engine is necessary for analyzing unstructured data inputs: it understands the data inputs and applies contextual analysis as well, resulting in a more accurate output result. This accuracy is primarily accomplished using the previously mentioned learning techniques and/or technologies. The semantic engine uses the input data to create a set of updated rules, a syntax, a logic component, a conditional data set, and/or Quadrature (Q) data. The semantic engine is operable for network communication with components including, but not limited to, a prior knowledge database, an analytics engine, and/or a monitoring and capture engine. The monitoring and capture engine operates within an RF environment and includes a customer application programming interface (API), a radio server, and/or a coverage management component. The customer API and the radio server are operable to output a set of in-phase and quadrature-phase (I/Q) data using a Fast Fourier Transform (FFT). The set of I/Q data captures the changes in amplitude and phase of a sine wave. The monitoring and capture engine also serves as an optimization point for the system.

The awareness engine operates as both a platform optimization unit and a client optimization unit. The awareness engine is operable to perform functions including, but not limited to, detection, classification, demodulation, decoding, locating, and/or signaling alarms. The detection and/or classification functions assist with incoming RF data acclimation and further include a supervised learning component, where the supervised learning component is operable to make use of any of the aforementioned learning techniques and/or technologies. The demodulation and/or decoding functionalities are operable to access RF data from WIFI, Land Mobile Radio (LMR), Long Term Evolution (LTE) networks, and/or Unmanned Aircraft Systems (UAS). The location component of the awareness engine is operable to apply location techniques including, but not limited to, DF, geolocation, and/or Internet Protocol (IP)-based location. The awareness engine is operable to signal alarms using FASD and/or masks. In one embodiment, the masks are dynamic masks. The analytics engine is operable to perform functions including, but not limited to, data qualification, data morphing, and/or data computing. The awareness engine, the analytics engine, and the semantic engine are all operable for network communication with the prior knowledge database. This enables each of the previously mentioned engines to compare input and/or output with data already processed and analyzed by the system.
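By way of example, and not limitation, the awareness engine's mask-based alarm function described above is sketched below: any power spectral density bin exceeding a frequency/power mask raises an alert. The frequency span, bin layout, mask level, and injected offender are illustrative assumptions; the description above also contemplates dynamic masks.

```python
# Illustrative sketch of signaling alarms with a frequency/power mask.
import numpy as np

freqs = np.linspace(700e6, 800e6, 1024)                       # bin centers (Hz)
psd_dbm = -100 + 5 * np.random.default_rng(3).standard_normal(1024)
psd_dbm[512] = -60                                            # injected offender

mask_dbm = np.full(1024, -75.0)          # allowed power per bin (static mask)
violations = np.flatnonzero(psd_dbm > mask_dbm)
for i in violations:
    print(f"ALARM: {freqs[i]/1e6:.2f} MHz at {psd_dbm[i]:.1f} dBm exceeds mask")
```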
The various engines present within the Tip & Cue process further optimize client output in the form of dynamic spectrum utilization and/or allocation. The system uses the Tip & Cue process to provide actionable information and/or actionable knowledge to be utilized by at least one application to mitigate problems of the at least one application and/or to optimize services or goals of the at least one application.

In a preferred embodiment, each customer has a service level agreement (SLA) with the system manager that specifies usage of the spectrum. The system manager is operable to act as an intermediary between a first customer and a second customer in conflicts regarding the spectrum. If signals of the first customer interfere with signals of the second customer in violation of one or more of the SLAs, the system is operable to provide an alert to the violation. Data regarding the violation is stored in at least one database within the system, which facilitates resolution of the violation. The control plane is operable to directly communicate with the first customer (i.e., the customer in violation of the SLA) and/or at least one base station to modify parameters to resolve the violation.

In one embodiment, the system is used to protect at least one critical asset. Each of the at least one critical asset is within a protection area. For example, a first critical asset is within a first protection area, a second critical asset is within a second protection area, etc. In one embodiment, the protection area is defined by sensor coverage from the at least one monitoring sensor. In other embodiments, the protection area is defined by sensor coverage from the at least one monitoring sensor, a geofence, and/or GPS coordinates. The system is operable to detect at least one signal within the protection area and send an alarm for the at least one signal when it is outside of the allowed spectrum use within the protection area. The system is further operable to determine what information is necessary to provide actionable information. For example, sensor processing requires a large amount of power. Embedding only the sensors required to provide sufficient variables for customer goals reduces computational and/or power requirements.

FIGS.30-32are flow diagrams illustrating the process of obtaining actionable data and using knowledge decision gates.FIG.30illustrates a flow diagram of a method to obtain actionable data based on customer goals3000. A goal is rephrased as a question in Step3002. Information required to answer the question is identified in Step3004. Next, quality, quantity, temporal, and/or spatial attributes are identified for each piece of information in Step3006. In a preferred embodiment, all four attributes (i.e., quality, quantity, temporal, and spatial) are identified in Step3006. The quality, quantity, temporal, and/or spatial attributes are ranked by importance in Step3008. For each information and attribute pair, corresponding physical layer information from the wireless environment is associated in Step3010. All information obtained in Steps3004-3010is operable to be transmitted to the semantic engine. Further, wireless information is associated with a most statistically relevant combination of extracted measurements in at least one dimension in Step3012.
The at least one dimension includes, but is not limited to, time, frequency, signal space and/or signal characteristics, spatial, and/or application goals and/or customer impact. In a preferred embodiment, the at least one dimension includes time, frequency, signal space and/or signal characteristics, spatial, and application goals and/or customer impact. The RF awareness measurements are then qualified in Step3014and actionable data is provided in Step3016based on the relationship established in Steps3002-3012. Actionable data efficiency is qualified in Step3018based on Step3014. All actionable data and its statistical significance are provided in Step3020.

FIG.31illustrates a flow diagram of a method of implementation of actionable data and knowledge decision gates from total signal flow3100. A customer goal is rephrased as a question in Step3102. The customer goal is provided to the semantic engine having a proper dictionary in Step3104(as shown in Steps3002-3012ofFIG.30). Constraints with statistical relevance from Step3104and extracted electromagnetic (e.g., RF) awareness information from sensors in Step3106are used in an optimization cost function in Step3108(as shown in Step3014ofFIG.30). Results from the optimization cost function in Step3108are provided to an optimization engine in Step3110(as shown in Steps3016-3020ofFIG.30) to provide actionable data and its statistical relevance in Step3112.

FIG.32illustrates a flow diagram of a method to identify knowledge decision gates based on operational knowledge3200. A customer operational description of utilization of actionable data is provided in Step3202. The customer operational description of utilization of actionable data from Step3202is used to identify a common state of other information used to express the customer operational description and/or required to make decisions in Step3204. Further, the customer operational description of utilization of actionable data from Step3202is used to provide parameterization of customer operational utilization of actionable data in Step3206. The parameterization of customer operational utilization of actionable data from Step3206is used to identify conditions and create a conditional tree in Step3208. In one embodiment, the information from Step3204is used to identify the conditions and create the conditional tree in Step3208. Information from Steps3206-3208is operable to be transmitted to the semantic engine. Actionable data is provided in Step3210and used to compute statistical properties of the actionable data as it changes over time in Step3212. Information from Steps3208and3212is used by a decision engine to traverse a decision tree to identify decision gates in Step3214. The identified decision gates from Step3214are provided along with the information in Step3204to allow the customer to make decisions in Step3216.

FIG.33illustrates an overview of one example of information used to provide knowledge. Information including, but not limited to, network information (e.g., existing site locations, existing site configurations), real estate information (e.g., candidate site locations), signal data (e.g., LTE demodulation), signal sites, site issues, crowdsourced information (e.g., geographic traffic distribution), and/or geographic information services (GIS) is used to perform propagation modeling. The propagation models are used to evaluate candidate results and the expected impact of any changes (e.g., addition of macrosites or towers).
In one embodiment, additional analysis is performed on the candidate results and/or the expected impact.

Example One

In one example, the system is used by a tower company to evaluate if a carrier's performance can be improved by placing at least one additional macrosite on at least one additional tower. If the evaluation shows that the carrier's performance can be improved, it supports a pitch from the tower company to place the at least one macrosite on the at least one additional tower, which would generate revenue for the tower company.

FIG.34is a map showing locations of three macrosites ("1" (green), "2" (orange), and "3" (purple)), 3 SigBASE units (orange diamond), and a plurality of locations evaluated for alternate or additional site deployment (green circles).

FIG.35is a graph of distribution of users by average downlink Physical Resource Block (PRB) allocation. Real-time monitoring shows downlink resources allocated to each user. Allocations occur many times per second. A significant concentration of users on 739 MHz is allocated resources for voice service. Most users on 2165 MHz are allocated resources common for high-speed data.

FIG.36illustrates the rate of overutilization events and the degree of overutilization. Real-time monitoring shows the percentage of downlink resources utilized when utilization exceeded 50%. Utilization statistics are generated per second as configured. The rate at which a sector's utilization exceeds 50% (overutilized) is presented by hour. The average utilization level when overutilization occurs describes the severity.

FIG.37Ais a sector coverage map for the three macrosites ("1" (green), "2" (orange), and "3" (purple)).

FIG.37Billustrates signal strength for the sector shown inFIG.37A. This figure displays areas of poor coverage.

FIG.37Cillustrates subscriber density for the sector shown inFIG.37A. In one embodiment, data from external sources is used to determine subscriber distribution and density. This figure displays areas of high subscriber demand.

FIG.37Dillustrates carrier-to-interference ratio for the sector shown inFIG.37A. This figure displays areas of poor quality.

FIG.38Aillustrates the baseline scenario shown inFIG.34.FIG.38Bis a map showing locations of the three original macrosites ("1" (green), "2" (orange), and "3" (purple)) and two additional macrosites ("4" (dark blue) and "5" (light blue)).

FIG.39illustrates signal strength of the baseline scenario fromFIG.38Aon the left and the scenario with two additional macrosites fromFIG.38Bon the right. The addition of a 2-sector eNodeB to a tower increases expected coverage by 3 km² as shown in Table 2 below. The total service area for the baseline is 9.89 km² and increases to 13.15 km² with the two additional macrosites. The total area with a carrier-to-interference ratio less than 5 dB decreases from 1.10 km² for the baseline to 0.38 km² with the two additional macrosites. The total area with a carrier-to-interference ratio greater than 5 dB increases from 8.79 km² for the baseline to 12.77 km² with the two additional macrosites. Traffic served without harmful interference increases from 16.73 Erlangs for the baseline to 25.23 Erlangs with the two additional macrosites. Additionally, an increase in traffic served of 40% is expected. A further utilization reduction of 30% is expected for pre-existing sectors. Areas of poor coverage are also reduced.
TABLE 2

Metric | Baseline | 2-sector site added
Total service area, sq km | 9.89 | 13.15
Total area with C/I < 5 dB, km² | 1.10 | 0.38
Total area with C/I > 5 dB, km² | 8.79 | 12.77
Traffic served without harmful interference, Erlangs | 16.73 | 25.23

FIG.40Aillustrates carrier-to-interference ratio of the baseline scenario fromFIG.38A.FIG.40Billustrates carrier-to-interference ratio of the scenario with two additional macrosites. The two additional macrosites reduce areas with poor carrier-to-interference ratio.

Example Two

In a second example, the system is also used by a tower company to evaluate if a carrier's performance can be improved by placing at least one additional macrosite on at least one additional tower. If the evaluation shows that the carrier's performance can be improved, it supports a pitch from the tower company to place the at least one macrosite on the at least one additional tower, which would generate revenue for the tower company.

FIG.41illustrates a baseline scenario for the second example on the left and a map showing locations of the original macrosites from the baseline scenario with three additional proposed macrosites on the right.

FIG.42illustrates signal strength of the baseline scenario fromFIG.41on the left and the scenario with three additional proposed macrosites fromFIG.41on the right. The addition of a 3-sector eNodeB to a tower increases expected coverage by 0.5 km² as shown in Table 3 below. The total service area for the baseline is 21.3 km² and increases to 21.8 km² with the three additional macrosites. The total area with a carrier-to-interference ratio less than 5 dB increases from 3.0 km² for the baseline to 3.1 km² with the three additional macrosites. The total area with a carrier-to-interference ratio greater than 5 dB increases from 18.3 km² for the baseline to 18.7 km² with the three additional macrosites. Traffic served without harmful interference increases from 79.7 Erlangs for the baseline to 80.9 Erlangs with the three additional macrosites. Additionally, an increase in traffic served of 2% is expected. A further utilization reduction of 2% is expected for pre-existing sectors.

TABLE 3

Metric | Baseline | 487044 Added
Total service area, sq km | 21.3 | 21.8
Total area with C/I < 5 dB, km² | 3.0 | 3.1
Total area with C/I > 5 dB, km² | 18.3 | 18.7
Traffic served without harmful interference, Erlangs | 79.7 | 80.9

FIG.43illustrates carrier-to-interference ratio of the baseline scenario fromFIG.41on the left and carrier-to-interference ratio of the scenario with three additional proposed macrosites fromFIG.41on the right. The three additional proposed macrosites slightly reduce areas with poor carrier-to-interference ratio. Although adding the 3-sector eNodeB does slightly improve performance, this performance improvement is not significant enough to support the addition of the three proposed macrosites to the tower.

Example Three

In a third example, the system is used to evaluate which carrier provides better service.

FIG.44illustrates a signal strength comparison of a first carrier ("Carrier 1") with a second carrier ("Carrier 2") for 700 MHz.

FIG.45illustrates carrier-to-interference ratio for Carrier 1 and Carrier 2.

FIG.46is a graph of Area vs. RSSI and Traffic vs. RSSI for Carrier 1 and Carrier 2. Carrier 1 and Carrier 2 serve approximately the same amount of area in the sector.

FIG.47is a graph of traffic difference for Carrier 1 versus Carrier 2. Carrier 2 serves more traffic than Carrier 1 at the extremes of coverage, while Carrier 1 serves more traffic in the middle range of coverage.
FIGS.44-47illustrate traffic composition for each SigBASE. Different traffic types require different signal-to-noise ratios (SNRs) versus reference signal received power (RSRP). For voice traffic, the SNR is from −6 dB to 0 dB, while the SNR goes upwards of 20 dB for streaming video.

FIG.48is a graph of SNR vs. RSRP for each SigBASE for the third example. FIG.49is another graph of SNR vs. RSRP for each SigBASE for the third example. FIG.50is a clustered graph of SNR vs. RSRP for each SigBASE for the third example. FIG.51is another clustered graph of SNR vs. RSRP for each SigBASE for the third example.

FIG.52is a schematic diagram of an embodiment of the invention illustrating a computer system, generally described as800, having a network810, a plurality of computing devices820,830,840, a server850, and a database870. The server850is constructed, configured, and coupled to enable communication over a network810with a plurality of computing devices820,830,840. The server850includes a processing unit851with an operating system852. The operating system852enables the server850to communicate through network810with the remote, distributed user devices. Database870is operable to house an operating system872, memory874, and programs876.

In one embodiment of the invention, the system800includes a network810for distributed communication via a wireless communication antenna812and processing by at least one mobile communication computing device830. Alternatively, wireless and wired communication and connectivity between devices and components described herein include wireless network communication such as WI-FI, WORLDWIDE INTEROPERABILITY FOR MICROWAVE ACCESS (WIMAX), Radio Frequency (RF) communication including RF identification (RFID), NEAR FIELD COMMUNICATION (NFC), BLUETOOTH including BLUETOOTH LOW ENERGY (BLE), ZIGBEE, Infrared (IR) communication, cellular communication, satellite communication, Universal Serial Bus (USB), Ethernet communications, communication via fiber-optic cables, coaxial cables, twisted pair cables, and/or any other type of wireless or wired communication.

In another embodiment of the invention, the system800is a virtualized computing system capable of executing any or all aspects of software and/or application components presented herein on the computing devices820,830,840. In certain aspects, the computer system800is operable to be implemented using hardware or a combination of software and hardware, either in a dedicated computing device, or integrated into another entity, or distributed across multiple entities or computing devices.

By way of example, and not limitation, the computing devices820,830,840are intended to represent various forms of electronic devices including at least a processor and a memory, such as a server, blade server, mainframe, mobile phone, personal digital assistant (PDA), smartphone, desktop computer, netbook computer, tablet computer, workstation, laptop, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed in the present application.

In one embodiment, the computing device820includes components such as a processor860, a system memory862having a random access memory (RAM)864and a read-only memory (ROM)866, and a system bus868that couples the memory862to the processor860.
In another embodiment, the computing device830is operable to additionally include components such as a storage device890for storing the operating system892and one or more application programs894, a network interface unit896, and/or an input/output controller898. Each of the components is operable to be coupled to each other through at least one bus868. The input/output controller898is operable to receive and process input from, or provide output to, a number of other devices899, including, but not limited to, alphanumeric input devices, mice, electronic styluses, display units, touch screens, signal generation devices (e.g., speakers), or printers.

By way of example, and not limitation, the processor860is operable to be a general-purpose microprocessor (e.g., a central processing unit (CPU)), a graphics processing unit (GPU), a microcontroller, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a state machine, gated or transistor logic, discrete hardware components, or any other suitable entity or combinations thereof that can perform calculations, process instructions for execution, and/or other manipulations of information.

In another implementation, shown as840inFIG.52, multiple processors860and/or multiple buses868are operable to be used, as appropriate, along with multiple memories862of multiple types (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core). Also, multiple computing devices are operable to be connected, with each device providing portions of the necessary operations (e.g., a server bank, a group of blade servers, or a multi-processor system). Alternatively, some steps or methods are operable to be performed by circuitry that is specific to a given function.

According to various embodiments, the computer system800is operable to operate in a networked environment using logical connections to local and/or remote computing devices820,830,840through a network810. A computing device830is operable to connect to a network810through a network interface unit896connected to a bus868. Computing devices are operable to communicate via communication media through wired networks, direct-wired connections or wirelessly, such as acoustic, RF, or infrared, through an antenna897in communication with the network antenna812and the network interface unit896, which are operable to include digital signal processing circuitry when necessary. The network interface unit896is operable to provide for communications under various modes or protocols.

In one or more exemplary aspects, the instructions are operable to be implemented in hardware, software, firmware, or any combinations thereof. A computer readable medium is operable to provide volatile or non-volatile storage for one or more sets of instructions, such as operating systems, data structures, program modules, applications, or other data embodying any one or more of the methodologies or functions described herein. The computer readable medium is operable to include the memory862, the processor860, and/or the storage media890and is operable to be a single medium or multiple media (e.g., a centralized or distributed computer system) that store the one or more sets of instructions900. Non-transitory computer readable media includes all computer readable media, with the sole exception being a transitory, propagating signal per se.
The instructions900are further operable to be transmitted or received over the network810via the network interface unit896as communication media, which is operable to include a modulated data signal such as a carrier wave or other transport mechanism and includes any delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics changed or set in a manner as to encode information in the signal. Storage devices890and memory862include, but are not limited to, volatile and non-volatile media such as cache, RAM, ROM, EPROM, EEPROM, FLASH memory, or other solid state memory technology; discs (e.g., digital versatile discs (DVD), HD-DVD, BLU-RAY, compact disc (CD), or CD-ROM) or other optical storage; magnetic cassettes, magnetic tape, magnetic disk storage, floppy disks, or other magnetic storage devices; or any other medium that can be used to store the computer readable instructions and which can be accessed by the computer system800. In one embodiment, the computer system800is within a cloud-based network. In one embodiment, the server850is a designated physical server for distributed computing devices820,830, and840. In one embodiment, the server850is a cloud-based server platform. In one embodiment, the cloud-based server platform hosts serverless functions for distributed computing devices820,830, and840. In another embodiment, the computer system800is within an edge computing network. The server850is an edge server, and the database870is an edge database. The edge server850and the edge database870are part of an edge computing platform. In one embodiment, the edge server850and the edge database870are designated to distributed computing devices820,830, and840. In one embodiment, the edge server850and the edge database870are not designated for distributed computing devices820,830, and840. The distributed computing devices820,830, and840connect to an edge server in the edge computing network based on proximity, availability, latency, bandwidth, and/or other factors. It is also contemplated that the computer system800is operable to not include all of the components shown inFIG.52, is operable to include other components that are not explicitly shown inFIG.52, or is operable to utilize an architecture completely different than that shown inFIG.52. The various illustrative logical blocks, modules, elements, circuits, and algorithms described in connection with the embodiments disclosed herein are operable to be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application (e.g., arranged in a different order or partitioned in a different way), but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. The above-mentioned examples are provided to serve the purpose of clarifying the aspects of the invention, and it will be apparent to one skilled in the art that they do not serve to limit the scope of the invention. By nature, this invention is highly adjustable, customizable and adaptable. 
The above-mentioned examples are just some of the many configurations that the mentioned components can take on. All modifications and improvements have been deleted herein for the sake of conciseness and readability but are properly within the scope of the present invention.
129,582
11943628
DETAILED DESCRIPTION The present invention is generally directed to spectrum analysis and management for electromagnetic signals, and more particularly for providing dynamic, prioritized spectrum utilization management. In one embodiment, the present invention provides a system for spectrum analysis in an electromagnetic environment including at least one monitoring sensor including at least one receiver channel operable to monitor the electromagnetic environment and create measured data based on the electromagnetic environment, a radio receiver front-end subsystem configured to process the measured data, thereby creating processed data, a frequency domain programmable channelizer configured to analyze the processed data, an in-phase and quadrature (I/Q) buffer, a blind detection engine, and a noise floor estimator, wherein the frequency domain programmable channelizer includes buffer services, pre-processing of fast Fourier transform (FFT) bin samples, bin selection, at least one band pass filter (BPF), an inverse fast Fourier transform (IFFT) function to produce at least one IFFT, decomposition, and/or frequency down conversion and phase correction. In one embodiment, one or more of the at least one monitoring sensor is mounted on a drone, on a vehicle, in or on a street light, in or on a traffic pole, and/or on top of a building. In one embodiment, the frequency domain programmable channelizer includes a comparison at each of the at least one receiver channel, and wherein the comparison provides anomalous detection using a mask with frequency and power. In one embodiment, the frequency domain programmable channelizer includes channelization selector logic for a table lookup of filter coefficient and channelization vectors. In one embodiment, data from the table lookup of filter coefficient and channelization vectors undergoes preprocessing with a mix circular rotator to produce a plurality of blocks of a plurality of points. In one embodiment, a sum is taken of a block of the plurality of blocks and the at least one IFFT is taken of a point of the plurality of points to produce discard overlap samples, and wherein the discard overlap samples are transmitted to a classification engine. In one embodiment, data from the frequency domain programmable channelizer undergoes an N point FFT, wherein a power spectral density (PSD) is calculated for the N point FFT, wherein a complex average FFT is obtained for a plurality of blocks of the N point FFT. In one embodiment, the PSD is transmitted to the noise floor estimator. In one embodiment, the frequency domain programmable channelizer includes at least one channel definition, at least one channelization vector, at least one FFT configuration, at least one deference matrix, at least one detector configuration, and/or at least one channel detection. In one embodiment, the noise floor estimator is operable to estimate a bin-wise noise model, estimate a bin-wise noise plus signal model, determine a bin-level probability of false alarm, a bin-level threshold, a channel-level probability of false alarm, a channel-level threshold, calculate a detection vector, count a number of elements above the bin-level threshold, determine a probability of false alarm, determine a probability of missed detection, and/or determine an overall detection probability.
In one embodiment, the blind detection engine is operable to estimate a number of channels, corresponding bandwidths for the number of channels, and center frequencies using an averaged power spectral density (PSD) of at least one signal of interest. In one embodiment, the system further includes a classification engine, wherein the classification engine is operable to generate a query to a static database to classify at least one signal of interest based on information from the frequency domain programmable channelizer. In another embodiment, the present invention provides a system for spectrum analysis in an electromagnetic environment including at least one monitoring sensor including at least one receiver channel operable to monitor the electromagnetic environment and create measured data based on the electromagnetic environment, a radio receiver front-end subsystem configured to process the measured data, thereby creating processed data, a frequency domain programmable channelizer configured to analyze the processed data, an in-phase and quadrature (I/Q) buffer, a blind detection engine, and a noise floor estimator, wherein the frequency domain programmable channelizer includes buffer services, pre-processing of the FFT bin samples, bin selection, at least one band pass filter (BPF), an inverse fast Fourier transform (IFFT) function, decomposition, and/or frequency down conversion and phase correction, and wherein the frequency domain programmable channelizer further includes at least one channel definition, at least one channelization vector, at least one FFT configuration, at least one deference matrix, at least one detector configuration, and/or at least one channel detection. In one embodiment, the at least one deference matrix is operable to identify at least one narrowband channel that is a subset of at least one wideband channel. In one embodiment, the at least one FFT configuration is operable to resolve ambiguities between at least two channels by employing a sufficient resolution bandwidth. In one embodiment, the at least one channelization vector is operable to specify normalized power levels per FFT bin for at least one channel. In one embodiment, power levels of the at least one channelization vector are normalized with respect to peak power in a spectrum envelope of at least one channel. In one embodiment, the at least one detector configuration includes a minimum acceptable probability of false alarm and/or a minimum acceptable probability of missed detection. In one embodiment, the at least one channel detection is operable to perform a hypothesis test for at least one bin using information from the noise floor estimator and a maximum probability of false alarm.
In yet another embodiment, the present invention provides a system for spectrum analysis in an electromagnetic environment including at least one monitoring sensor including at least one receiver channel operable to monitor the electromagnetic environment and create measured data based on the electromagnetic environment, a radio receiver front-end subsystem configured to process the measured data, thereby creating processed data, a frequency domain programmable channelizer configured to analyze the processed data, an in-phase and quadrature (I/Q) buffer, a blind detection engine, a noise floor estimator, and a classification engine, wherein the frequency domain programmable channelizer includes buffer services, pre-processing of the FFT bin samples, bin selection, at least one band pass filter (BPF), an inverse fast Fourier transform (IFFT) function, decomposition, and/or frequency down conversion and phase correction, wherein the frequency domain programmable channelizer further includes at least one channel definition, at least one channelization vector, at least one FFT configuration, at least one deference matrix, at least one detector configuration, and/or at least one channel detection, and wherein the classification engine is operable to generate a query to a static database to classify at least one signal of interest based on information from the frequency domain programmable channelizer. Traditional management of spectrum is static, based on licenses that are geographical and band specific. The Federal Communications Commission (FCC) has allocated spectrum into a table. Utilization is increased by slicing the spectrum into finer slices. Additionally, interference is limited by imposing penalties through strict geographical band utilization rules and licenses. However, these traditional methods of spectrum management do not work with increasing demand and new services coming out. The new services would have to be at higher frequencies (e.g., above 10 GHz), which is very expensive and requires costly transceivers with a limited distance range. Spectrum is valuable because it is a finite resource. Further, the demand for spectrum is ever-increasing. The Shannon-Hartley theorem calculates the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise as follows:

$$C = BW \log_2(1 + SNR)$$

where C is the channel capacity in bits per second, BW is the bandwidth of the channel in Hz, and SNR is the signal-to-noise ratio. Early attempts at managing spectrum include developing technology that increases spectrum efficiency (i.e., maximizing SNR). Although this results in more bits per Hz, the logarithmic function limits the gains in channel capacity resulting from improving technology. Additional attempts at managing spectrum also include developing technology to enable use of alternate spectrum (e.g., free-space optical (FSO) communication). However, using alternate spectrum, such as higher frequencies, leads to smaller ranges, line of sight limitations, increased elevation of transmission structures, and/or expensive infrastructure. The missing component to spectrum management is bandwidth management. Bandwidth management provides flexible utilization of the spectrum and enables management of spectrum resources and users, while allowing spectrum usage to be quantified.
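The Shannon-Hartley limit above is easy to evaluate numerically. The following minimal sketch (illustrative names; not from the patent) shows how little capacity is gained by doubling the SNR, which is the logarithmic ceiling the text refers to:

```python
import math

def channel_capacity_bps(bw_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley capacity C = BW * log2(1 + SNR) in bits per second."""
    return bw_hz * math.log2(1 + snr_linear)

# A 10 MHz channel at 20 dB SNR; doubling the SNR (+3 dB) adds relatively
# little capacity, illustrating the logarithmic limit on technology gains.
snr_20db = 10 ** (20 / 10)
print(channel_capacity_bps(10e6, snr_20db) / 1e6)      # ~66.6 Mbps
print(channel_capacity_bps(10e6, 2 * snr_20db) / 1e6)  # ~76.6 Mbps
```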
The majority of applications using the spectrum can coexist if each application knows about the spectrum needs of other applications and how they plan to use the spectrum. However, because the needs of each application are dynamic, a dynamic spectrum management system is needed. The present invention allows autonomous, dynamic sharing of the electromagnetic spectrum to allow maximum utilization by diverse applications according to specific utilization rules (dynamic and/or static) while maintaining minimum interference between applications. This requires new tools that provide dynamic environmental spectral awareness of all signals present in the electromagnetic (e.g., radio frequency (RF)) environment to properly execute utilization rules, which are operable to describe or facilitate sharing spectrum resources among several competing users or protect one service user from others. 5G requires spectrum awareness. Larger blocks of spectrum are required to support higher speeds. Dynamic spectrum sharing is necessary to make the spectrum assets available. Further, visibility of spectrum activity is required to support reliability targets. Interference avoidance and resolution must be embedded. Internet of Things (IoT)/machine communication wireless dependency elevates the need for real-time RF visibility to avoid disruption and safety concerns. The system of the present invention provides scalable processing capabilities at the edge. Edge processing is fast and reliable with low latency. Environmental sensing processes optimize collection and analytics, making data sets manageable. Advantageously, the system minimizes backhaul requirements, allowing for actionable data to be delivered faster and more efficiently. Deep learning techniques extract and deliver knowledge from large data sets in near-real time. These deep learning techniques are critical for identifying and classifying signals. Edge analytics also allow third party data (e.g., social media, population information, real estate information, traffic information, geographic information systems) to further enrich captured data sets. A semantic engine and inference reasoner leverages insights generated by machine learning and edge analytics. Ontologies are established allowing for the creation of knowledge operable to inform and direct actions and/or decisions. Referring now to the drawings in general, the illustrations are for the purpose of describing one or more preferred embodiments of the invention and are not intended to limit the invention thereto. The present invention provides systems, methods, and apparatuses for spectrum analysis and management by identifying, classifying, and cataloging at least one or a multiplicity of signals of interest based on electromagnetic spectrum measurements (e.g., radiofrequency spectrum measurements), location, and other measurements. The present invention uses real-time and/or near real-time processing of signals (e.g., parallel processing) and corresponding signal parameters and/or characteristics in the context of historical, static, and/or statistical data for a given spectrum, and more particularly, all using baseline data and changes in state for compressed data to enable near real-time analytics and results for individual monitoring sensors and for aggregated monitoring sensors for making unique comparisons of data.
The systems, methods, and apparatuses according to the present invention preferably are operable to detect in near real time, and more preferably to detect, sense, measure, and/or analyze in near real time, and more preferably to perform any near real time operations within about 1 second or less. In one embodiment, near real time is defined as computations completed before data marking an event change. For example, if an event happens every second, near real time is completing computations in less than one second. Advantageously, the present invention and its real time functionality described herein uniquely provide and enable the system to compare acquired spectrum data to historical data, to update data and/or information, and/or to provide more data and/or information on open space. In one embodiment, information (e.g., open space) is provided on an apparatus unit or a device that is occupying the open space. In another embodiment, the system compares data acquired with historically scanned (e.g., 15 min to 30 days) data and/or historical database information in near-real time. Also, the data from each monitoring sensor, apparatus unit, or device and/or aggregated data from more than one monitoring sensor, apparatus unit, and/or device are communicated via a network to at least one server computer and stored on a database in a virtualized or cloud-based computing system, and the data is available for secure, remote access via the network from distributed remote devices having software applications (apps) operable thereon, for example by web access (mobile app) or computer access (desktop app). The at least one server computer is operable to analyze the data and/or the aggregated data. The system is operable to monitor the electromagnetic (e.g., RF) environment via at least one monitoring sensor. The system is then operable to analyze data acquired from the at least one monitoring sensor to detect, classify, and/or identify at least one signal in the electromagnetic environment. The system is operable to learn the electromagnetic environment, which allows the system to extract environmental awareness. In a preferred embodiment, the system extracts environmental awareness by including customer goals. The environmental awareness is combined with the customer goals, customer defined policies, and/or rules (e.g., customer defined rules, government defined rules) to extract actionable information to help the customer optimize performance according to the customer goals. The actionable information is combined and correlated with additional information sources to enhance customer knowledge and user experience through dynamic spectrum utilization and prediction models. The systems, methods, and apparatuses of the various embodiments enable spectrum utilization management by identifying, classifying, and cataloging signals of interest based on electromagnetic (e.g., radio frequency) measurements. In one embodiment, signals and parameters of the signals are identified. In another embodiment, indications of available frequencies are presented to a user and/or user equipment. In yet another embodiment, protocols of signals are also identified. In a further embodiment, the modulation of signals, data types carried by the signals, and estimated signal origins are identified. Identification, classification, and cataloging signals of interest preferably occurs in real time or near-real time.
Embodiments are directed to a spectrum monitoring unit that is configurable to obtain spectrum data over a wide range of wireless communication protocols. Embodiments also provide for the ability to acquire data from and send data to database depositories that are used by a plurality of spectrum management customers and/or applications or services requiring spectrum resources. In one embodiment, the system includes at least one spectrum monitoring unit. Each of the at least one spectrum monitoring unit includes at least one monitoring sensor that is preferably in network communication with a database system and spectrum management interface. In one embodiment, the at least one spectrum monitoring unit and/or the at least one monitoring sensor is portable. In a preferred embodiment, one or more of the at least one spectrum monitoring unit and/or the at least one monitoring sensor is a stationary installation. The at least one spectrum monitoring unit and/or the at least one monitoring sensor is operable to acquire different spectrum information including, but not limited to, frequency, bandwidth, signal power, time, and location of signal propagation, as well as modulation type and format. The at least one spectrum monitoring unit is preferably operable to provide signal identification, classification, and/or geo-location. Additionally, the at least one spectrum monitoring unit preferably includes a processor to allow the at least one spectrum monitoring unit to process spectrum power density data as received and/or to process raw In-Phase and Quadrature (I/Q) complex data. Alternatively, the at least one spectrum monitoring unit and/or the at least one monitoring sensor transmits the data to at least one data analysis engine for storage and/or processing. In a preferred embodiment, the transmission of the data is via a backhaul operation. The spectrum power density data and/or the raw I/Q complex data are operable to be used to further signal processing, signal identification, and data extraction. The system preferably is operable to manage and prioritize spectrum utilization based on five factors: frequency, time, spatial, signal space, and application goals. The frequency range is preferably as large as possible. In one embodiment, the system supports a frequency range between 1 MHz and 6 GHz. In another embodiment, the system supports a frequency range with a lower limit of 9 kHz. In yet another embodiment, the system supports a frequency range with a higher limit of 12.4 GHz. In another embodiment, the system supports a frequency range with a higher limit of 28 GHz or 36 GHz. Alternatively, the system supports a frequency range with a higher limit of 60 GHz. In still another embodiment, the system supports a frequency range with a higher limit of 100 GHz. The system preferably has an instantaneous processing bandwidth (IPBW) of 40 MHz, 80 MHz, 100 MHz, or 250 MHz per channel. The time range is preferably as large as possible. In one embodiment, the number of samples per dwell time in a frequency band is calculated. In one example, the system provides a minimum coverage of 2 seconds. The number of samples per dwell time in the frequency band is calculated as follows:

Ns ≥ (IPBW)(2)/channel

The storage required in a buffer is a minimum of 2 seconds per channel per dwell time, which is calculated as follows:

storage = (IPBW)(2)(2 Bytes)(channels)/(dwell time)
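As a quick numerical check of the dwell-time and buffer-sizing formulas above, the following sketch (function names and the one-dwell-per-second figure are illustrative assumptions, not from the patent) computes the minimum samples per dwell and the corresponding buffer storage:

```python
def min_samples_per_dwell(ipbw_hz: float, coverage_s: float = 2.0) -> float:
    """Ns >= (IPBW)(coverage)/channel: minimum samples per channel per dwell."""
    return ipbw_hz * coverage_s

def buffer_storage_bytes(ipbw_hz: float, channels: int,
                         dwell_time_s: float = 1.0, coverage_s: float = 2.0,
                         bytes_per_sample: int = 2) -> float:
    """storage = (IPBW)(coverage)(bytes/sample)(channels)/(dwell time)."""
    return ipbw_hz * coverage_s * bytes_per_sample * channels / dwell_time_s

# Example: 100 MHz IPBW, 4 receiver channels, one dwell per second
print(f"{min_samples_per_dwell(100e6):.2e} samples/channel")  # 2.00e+08
print(f"{buffer_storage_bytes(100e6, 4) / 1e9:.1f} GB")       # 1.6 GB
```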
Spatial processing is used to divide an area of coverage by a range of azimuth and elevation angles. The area of coverage is defined as an area under a certain azimuth and range. This is implemented by antenna array processing, steerable beamforming, array processing, and/or directional antennas. In one embodiment, the directional antennas include at least one steerable electrical or mechanical antenna. Alternatively, the directional antennas include an array of steerable antennas. More antennas require more signal processing. Advantageously, spatial processing allows for better separation of signals, reduction of noise and interference signals, geospatial separation, increasing signal processing gains, and provides a spatial component to signal identification. Further, this allows for simple integration of geolocation techniques, such as time difference of arrival (TDOA), angle of arrival (AOA), and/or frequency difference of arrival (FDOA). This also allows for implementation of a geolocation engine, which will be discussed in detail infra. Each signal has inherent signal characteristics including, but not limited to, a modulation type (e.g., frequency modulation (FM), amplitude modulation (AM), quadrature phase-shift keying (QPSK), quadrature amplitude modulation (QAM), binary phase-shift keying (BPSK), etc.), a protocol used (e.g., no protocol for analog signals, digital mobile radio (DMR), land mobile radio (LMR), Project 25 (P25), NXDN, cellular, long-term evolution (LTE), universal mobile telecommunications system (UMTS), 5G), an envelope behavior (e.g., bandwidth (BW), center frequency (Fc), symbol rate, data rate, constant envelope, peak power to average power ratio (PAR), cyclostationary properties), an interference index, and statistical properties (e.g., stationary, cyclostationary, higher moment decomposition, non-linear decomposition (e.g., Volterra series to cover non-linearities, learning basic model)). The application goals are dependent on the particular application used within the system. Examples of applications used in the system include, but are not limited to, traffic management, telemedicine, virtual reality, streaming video for entertainment, social media, autonomous and/or unmanned transportation, etc. Each application is operable to be prioritized within the system according to customer goals. For example, traffic management is a higher priority application than streaming video for entertainment. As previously described, the system is operable to monitor the electromagnetic (e.g., RF) environment, analyze the electromagnetic environment, and extract environmental awareness of the electromagnetic environment. In a preferred embodiment, the system extracts the environmental awareness of the electromagnetic environment by including customer goals. In another embodiment, the system uses the environmental awareness with the customer goals and/or user defined policies and rules to extract actionable information to help the customer optimize the customer goals. The system combines and correlates other information sources with the extracted actionable information to enhance customer knowledge through dynamic spectrum utilization and prediction models. FIG.1illustrates one embodiment of an RF awareness and analysis system. The system includes an RF awareness subsystem.
The RF awareness subsystem includes, but is not limited to, an antenna subsystem, an RF conditioning subsystem, at least one front end receiver, a programmable channelizer, a blind detection engine, a blind classification engine, an envelope feature extraction module, a demodulation bank, an automatic gain control (AGC) double loop subsystem, a signal identification engine, a feature extraction engine, a learning engine, a geolocation engine, a data analysis engine, and/or a database storing information related to at least one signal (e.g., metadata, timestamps, power measurements, frequencies, etc.). The system further includes an alarm system, a visualization subsystem, a knowledge engine, an operational semantic engine, a customer optimization module, a database of customer goals and operational knowledge, and/or a database of actionable data and decisions. The antenna subsystem monitors the electromagnetic (e.g., RF) environment to produce monitoring data. The monitoring data is then processed through the RF conditioning subsystem before being processed through the front end receivers. The AGC double loop subsystem is operable to perform AGC adjustment. Data is converted from analog to digital by the front end receivers. The digital data is then sent through the programmable channelizer, and undergoes I,Q buffering and masking. A fast Fourier transform (FFT) is performed and the blind detection engine performs blind detection. Additionally, the blind classification engine performs blind classification. Information (e.g., observed channels) is shared from the blind detection engine to the blind classification engine and/or the programmable channelizer (e.g., to inform logic and selection processes). Information from the blind detection engine is also sent to the envelope feature extraction module. Information from the blind classification engine is sent to the demodulation bank. Information from the envelope feature extraction module, the demodulation bank, and/or the blind classification engine is operable to be used by the signal identification engine, the feature extraction engine, the learning engine, and/or the geolocation engine. Information from the AGC double loop subsystem, the I,Q buffer, masking, the programmable channelizer, the signal identification engine, the feature extraction engine, the learning engine, the geolocation engine, the envelope feature extraction module, the demodulation bank, and/or the blind classification engine is operable to be stored in the database storing information related to the at least one signal (e.g., signal data, metadata, timestamps). Information from the database (i.e., the database storing information related to the at least one signal), the signal identification engine, the feature extraction engine, the learning engine, and/or the geolocation engine is operable to be sent to the data analysis engine for further processing. The alarm system uses information from the database storing information related to the at least one signal and/or the database of customer goals and operational knowledge. Alarms are sent from the alarm system to the visualization subsystem. In a preferred embodiment, the visualization subsystem customizes a graphical user interface (GUI) for each customer. The visualization subsystem is operable to display information from the database of actionable data and decisions. In one embodiment, the alarms are sent via text message and/or electronic mail.
In one embodiment, the alarms are sent to at least one internet protocol (IP) address. The database of customer goals and operational knowledge is also operable to send information to a semantic engine (e.g., customer alarm conditions and goals) and/or an operational semantic engine (e.g., customer operational knowledge). The semantic engine translates information into constraints and sends the constraints to the customer optimization module, which also receives information (e.g., signal metadata) from the data analysis engine. The customer optimization module is operable to send actionable data related to the electromagnetic environment to the operational semantic engine. The customer optimization module is operable to discern which information (e.g., environmental information) has the largest statistically sufficient impact related to the customer goals and operation. In one embodiment, the system includes at least one monitoring sensor, at least one data analysis engine, at least one application, a semantic engine, a programmable rules and policy editor, a tip and cue server, and/or a control panel as shown inFIG.2. The at least one monitoring sensor includes at least one radio server and/or at least one antenna. The at least one antenna is a single antenna (e.g., uni-directional or directional) or an antenna array formed of multiple antennas resonating at different frequency bands and configured in a 1D (linear), 2D (planar), or 3D (area) antenna configuration. The at least one monitoring sensor is operable to scan the electromagnetic (e.g., RF) spectrum and measure properties of the electromagnetic spectrum, including, but not limited to, receiver I/Q data. The at least one monitoring unit is preferably operable to autonomously capture the electromagnetic spectrum with respect to frequency, time, and/or space. In one embodiment, the at least one monitoring sensor is operable to perform array processing. In another embodiment, the at least one monitoring sensor is mobile. In one embodiment, the at least one monitoring sensor is mounted on a vehicle or a drone. Alternatively, the at least one monitoring sensor is fixed. In one embodiment, the at least one monitoring sensor is fixed in or on a street light and/or a traffic pole. In yet another embodiment, the at least one monitoring sensor is fixed on top of a building. In one embodiment, the at least one monitoring sensor is integrated with at least one camera. In one embodiment, the at least one camera captures video and/or still images. In another embodiment, the at least one monitoring sensor includes at least one monitoring unit. Examples of monitoring units include those disclosed in U.S. Pat. Nos. 10,122,479, 10,219,163, 10,231,206, 10,237,770, 10,244,504, 10,257,727, 10,257,728, 10,257,729, 10,271,233, 10,299,149, 10,498,951, and 10,529,241, and U.S. Publication Nos. 20190215201, 20190364533, and 20200066132, each of which is incorporated herein by reference in its entirety. In a preferred embodiment, the system includes at least one data analysis engine to process data captured by the at least one monitoring sensor. An engine is a collection of functions and algorithms used to solve a class of problems. The system preferably includes a detection engine, a classification engine, an identification engine, a geo-location engine, a learning engine, and/or a statistical inference and machine learning engine. 
For example, the geolocation engine is a group of functions and geolocation algorithms that are used together to solve multiple geolocation problems. The detection engine is preferably operable to detect at least one signal of interest in the electromagnetic (e.g., RF) environment. In a preferred embodiment, the detection engine is operable to automatically detect the at least one signal of interest. In one embodiment, the automatic signal detection process includes mask creation and environment analysis using masks. Mask creation is a process of elaborating a representation of the electromagnetic environment by analyzing a spectrum of signals over a certain period of time. A desired frequency range is used to create a mask, and FFT streaming data is also used in the mask creation process. A first derivative is calculated and used for identifying possible maximum power values. A second derivative is calculated and used to confirm the maximum power values. A moving average value is created as FFT data is received during a time period selected by the user for mask creation. For example, the time period is 10 seconds. The result is an FFT array with an average of the maximum power values, which is called a mask. The classification engine is preferably operable to classify the at least one signal of interest. In one embodiment, the classification engine generates a query to a static database to classify the at least one signal of interest based on its components. For example, the information stored in the static database is preferably used to determine spectral density, center frequency, bandwidth, baud rate, modulation type, protocol (e.g., global system for mobile (GSM), code-division multiple access (CDMA), orthogonal frequency-division multiplexing (OFDM), LTE, etc.), system or carrier using licensed spectrum, location of the signal source, and/or a timestamp of the at least one signal of interest. In an embodiment, the static database includes frequency information gathered from various sources including, but not limited to, the Federal Communication Commission, the International Telecommunication Union, and data from users. In one example, the static database is an SQL database. The data store is operable to be updated, downloaded, or merged with other devices or with its main relational database. In one embodiment, software application programming interface (API) applications are included to allow database merging with third-party spectrum databases that are only operable to be accessed securely. In a preferred embodiment, the classification engine is operable to calculate second, third, and fourth order cumulants to classify modulation schemes along with other parameters, including center frequency, bandwidth, baud rate, etc. The identification engine is preferably operable to identify a device or an emitter transmitting the at least one signal of interest. In one embodiment, the identification engine uses signal profiling and/or comparison with known database(s) and previously recorded profile(s) to identify the device or the emitter. In another embodiment, the identification engine states a level of confidence related to the identification of the device or the emitter.
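A rough sketch of the mask-creation flow described above might look as follows. This is an assumed implementation, not the patent's code; the 3 dB guard margin and the derivative tolerances are illustrative:

```python
import numpy as np

def create_mask(fft_frames: np.ndarray, guard_db: float = 3.0) -> np.ndarray:
    """Build a spectrum mask from streamed FFT power frames (in dB).

    fft_frames: (n_frames, n_bins) array captured over the user-selected
    mask-creation period (e.g., 10 seconds). guard_db is an assumed margin.
    """
    d1 = np.gradient(fft_frames, axis=1)     # first derivative: candidates
    d2 = np.gradient(d1, axis=1)             # second derivative: confirmation
    is_max = (np.abs(d1) < 0.5) & (d2 < 0)   # near-zero slope, concave down
    # average of the confirmed maximum power values across all frames
    counts = is_max.sum(axis=0)
    sums = np.where(is_max, fft_frames, 0.0).sum(axis=0)
    avg_max = sums / np.maximum(counts, 1)
    # bins with no confirmed peak fall back to the plain frame average
    mask = np.where(counts > 0, avg_max, fft_frames.mean(axis=0))
    return mask + guard_db

def exceeds_mask(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Environment analysis: True for bins whose power breaks the mask."""
    return frame > mask
```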
The geolocation engine is preferably operable to identify a location from which the at least one signal of interest is emitted. In one embodiment, the geolocation engine uses statistical approximations to remove error causes from noise, timing and power measurements, multipath, and non-line of sight (NLOS) measurements. By way of example, the following methods are used for geolocation statistical approximations and variances: maximum likelihood (nearest neighbor or Kalman filter); least squares approximation; Bayesian filter if prior knowledge data is included; and the like. In another embodiment, time difference of arrival (TDOA) and frequency difference of arrival (FDOA) equations are derived to assist in solving inconsistencies in distance calculations. In still another embodiment, angle of arrival (AOA) is used to determine geolocation. In yet another embodiment, power distribution ratio versus azimuth measurements are used to determine geolocation. In a preferred embodiment, geolocation is performed using Angle of Arrival (AOA), Time Difference of Arrival (TDOA), Frequency Difference of Arrival (FDOA), and power distribution ratio measurements. Several methods or combinations of these methods are operable to be used with the present invention because geolocation is performed in different environments, including but not limited to indoor environments, outdoor environments, hybrid (stadium) environments, inner city environments, etc. The learning engine is preferably operable to learn the electromagnetic environment. In one embodiment, the learning engine uses statistical learning techniques to observe and learn an electromagnetic environment over time and identify temporal features of the electromagnetic environment (e.g., signals) during a learning period. In a preferred embodiment, the learning engine is operable to learn information from the detection engine, the classification engine, the identification engine, and/or the geolocation engine. In one embodiment, the learning function of the system is operable to be enabled and disabled. When the learning engine is exposed to a stable electromagnetic environment and has learned what is normal in the electromagnetic environment, it will stop its learning process. In a preferred embodiment, the electromagnetic environment is periodically reevaluated. In one embodiment, the learning engine reevaluates and/or updates the electromagnetic environment at a predetermined timeframe. In another embodiment, the learning engine reevaluates and/or updates the electromagnetic environment after a problem is detected. The statistical inference and machine learning (ML) engine utilizes statistical learning techniques and/or control theory to learn the electromagnetic environment and make predictions about the electromagnetic environment. The survey occupancy application is operable to determine occupancy in frequency bands. In another embodiment, the survey occupancy application is operable to schedule occupancy in a frequency band. The survey occupancy application is also used to preprocess at least two signals that exist in the same band based on interference between the at least two signals. The resource brokerage application is operable to optimize resources to improve application performance. In a preferred embodiment, the resource brokerage application is operable to use processed data from the at least one monitoring sensor and/or additional information to determine environmental awareness (e.g., environmental situational awareness). The environmental awareness and/or capabilities of a device and/or a resource are used to determine policies and/or reasoning to optimize the device and/or the resource. The resource brokerage application is operable to control the device and/or the resource. Additionally, the resource brokerage application is operable to control the at least one monitoring sensor.
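As a concrete illustration of the least-squares TDOA approximation mentioned above, the following sketch (illustrative only; a deployed geolocation engine would add Kalman filtering, NLOS rejection, and AOA/FDOA fusion) recovers a 2-D emitter position with Gauss-Newton iterations:

```python
import numpy as np

C = 3e8  # propagation speed (m/s)

def tdoa_residuals(pos, sensors, tdoas):
    """Difference between measured TDOAs (relative to sensor 0) and the
    TDOAs implied by a candidate emitter position."""
    d = np.linalg.norm(sensors - pos, axis=1)
    return (d[1:] - d[0]) / C - tdoas

def locate_tdoa(sensors, tdoas, guess, iters=25):
    """Tiny Gauss-Newton least-squares TDOA solver for a 2-D position."""
    pos = np.array(guess, dtype=float)
    for _ in range(iters):
        r = tdoa_residuals(pos, sensors, tdoas)
        J = np.zeros((len(r), 2))
        eps = 1e-3
        for k in range(2):  # numerical Jacobian
            dp = np.zeros(2)
            dp[k] = eps
            J[:, k] = (tdoa_residuals(pos + dp, sensors, tdoas) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        pos += step
    return pos

# Synthetic check: four sensors at known positions, emitter at (300, 700)
sensors = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000]], dtype=float)
true_pos = np.array([300.0, 700.0])
d = np.linalg.norm(sensors - true_pos, axis=1)
tdoas = (d[1:] - d[0]) / C
print(locate_tdoa(sensors, tdoas, guess=[500, 500]))  # ~[300. 700.]
```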
The certification and compliance application is operable to determine if applications and/or devices are behaving according to rules and/or policies (e.g., customer policies and/or rules, government rules). In another embodiment, the certification and compliance application is operable to determine if the applications and/or the devices are sharing frequency bands according to the rules and/or the policies. In yet another embodiment, the certification and compliance application is operable to determine if the applications and/or the devices are behaving according to non-interference rules and/or policies. The sharing application is operable to determine optimization of how applications and/or devices share the frequency bands. In a preferred embodiment, the sharing application uses a plurality of rules and/or policies (e.g., a plurality of customer rules and/or policies, government rules) to determine the optimization of how the applications and/or the devices share the frequency bands. Thus, the sharing application satisfies the plurality of rules and/or policies as defined by at least one customer and/or the government. The statistical inference and prediction utilization application is operable to utilize predictive analytics techniques including, but not limited to, machine learning (ML), artificial intelligence (AI), neural networks (NNs), historical data, and/or data mining to make future predictions and/or models. The system is preferably operable to recommend and/or perform actions based on historical data, external data sources, ML, AI, NNs, and/or other learning techniques. The semantic engine is operable to receive data in forms including, but not limited to, audio data, text data, video data, and/or image data. In one embodiment, the semantic engine utilizes a set of system rules and/or a set of system policies. In another embodiment, the set of system rules and/or the set of system policies is created using a prior knowledge database. The semantic engine preferably includes an editor and a language dictionary. The semantic engine preferably further includes a programmable rules and policy editor. The programmable rules and policy editor is operable to include at least one rule and/or at least one policy. In one embodiment, the at least one rule and/or the at least one policy is defined by at least one customer. Advantageously, this allows the at least one customer to dictate rules and policies related to customer objectives. The system further includes a tip and cue server. The tip and cue server is operable to utilize the environmental awareness from the data processed by the at least one data analysis engine in combination with additional information to create actionable data. In a preferred embodiment, the tip and cue server utilizes information from a specific rule set (e.g., customer defined rule set), further enhancing the optimization capabilities of the system. The specific rule set is translated into optimization objectives, including constraints associated with signal characteristics. In a preferred embodiment, the tip and cue server is operable to activate at least one alarm and/or provide at least one report. In another embodiment, the tip and cue server is operable to activate the at least one alarm and/or provide the at least one report according to the specific rule set. Advantageously, the system is operable to run autonomously and continuously.
The system learns from the environment, and, without operator intervention, is operable to detect anomalous signals that either were not there before, or have changed in power or bandwidth. Once detected, the system is operable to send alerts (e.g., by text or email) and begin high resolution spectrum capture, or I/Q capture of the signal of interest. Additionally, the system is operable to optimize and prioritize applications using the learning engine. FIG.3is a flow diagram of the system according to one embodiment. FIG.4illustrates the acquisition component of the system. The system includes an antenna subsystem including at least one antenna, an analog front-end conditioning system, a radio receiver front-end system, and an I/Q buffer. The system is operable to perform control functions including, but not limited to, controlling a radio server, conditioning the radio server, I/Q flow control and/or time stamping, and/or buffer management. FIG.5illustrates one embodiment of an analog front end of the system. In one embodiment, electromagnetic waves are sent directly to a radio receiver front-end subsystem as shown in Path A. Alternatively, the electromagnetic waves are sent through an analog filter bank and amplifier/channel with a filter (SSS), an amplifier (e.g., variable gain amplifier), and an automatic gain controller as shown in Path B before reaching the radio receiver front-end subsystem. In one embodiment, the BCU is 80 MHz. Alternatively, the BCU is 150 MHz. The radio receiver front-end subsystem is described inFIG.6. FIG.6illustrates one embodiment of a radio receiver front-end subsystem. Path A and Path B continue into a radio-frequency integrated circuit (RFIC), and then proceed to a digital down-converter (DDC) before downsampling (e.g., decimation) and moving through a field programmable gate array (FPGA). In one embodiment, signals from the FPGA are operable to be sent to a digital to analog converter (DAC). Alternatively, signals are sent via bus to a Universal Software Radio Peripheral hardware driver (UHD) host and SD controller before continuing to Path E, which is described inFIG.7. FIG.7continues the embodiment of the radio receiver front-end shown inFIG.6after digitization. In one embodiment, Path E continues to the I,Q buffer. In another embodiment, Path E continues to a baseband receiver. In one embodiment, signals are further processed using signal processing software (e.g., GNU Radio software). In yet another embodiment, the baseband receiver is connected to inputs and/or outputs. In one embodiment, the inputs include, but are not limited to, MicroSD Flash memory and/or a Universal Serial Bus (USB) console. In one embodiment, the outputs include, but are not limited to, USB 2.0 host and/or audio. Alternatively, data from the baseband receiver is sent to the I,Q buffer via the 1GbE port. The system preferably uses multiple receiver channels for the front end. In one embodiment, there are 4 receiver channels. Alternatively, there are 8, 12, 16, or 32 receiver channels. I,Q data is preferably tagged by the receiver channel and receiver antenna (e.g., bandwidth, gain, etc.) and then stored in the I,Q buffer before analysis is completed. Advantageously, the system is hardware agnostic. The system is operable to provide a suggestion for hardware for a particular frequency set. Additionally, the hardware agnostic nature of the system allows for established architecture to persist.
The system is also cost effective because it allows cheaper antennas and less expensive filters to be used: calibration can be done by the system rather than by the antennas and/or filters, and post-ADC processing can rectify any performance loss. Because the system processes all signals present in the spectrum and their inter-relationships to extract environmental awareness, the analog front end does not require elaborate filtering to avoid interference and provide optimum dynamic range. Additionally, the analog front end does not require optimal antennas for all frequency bands and ranges to obtain environmental awareness. For a time domain programmable channelizer, all filters' impulse responses must be programmable and the number of filters must be programmable. Additionally, the channel bandwidth resolution must be programmable starting from a minimum bandwidth. The center frequency of each channel must also be programmable. Decimation is based on channel bandwidth and desired resolution. However, these requirements are difficult to implement for channels with variable bandwidth and center frequency. Wavelet filters can be used effectively if the center frequency and channel's bandwidth follow a tree structure (e.g., Haar and Daubechies wavelets).FIG.8is an example of a time domain programmable channelizer. In a preferred embodiment, the system includes a frequency domain programmable channelizer as shown inFIG.9. The programmable channelizer includes buffer services, pre-processing of the FFT bin samples, bin selection, at least one band pass filter (BPF), an inverse fast Fourier transform (IFFT) function, decomposition, and/or frequency down conversion and phase correction to yield baseband I,Q for channels1through R. The IFFT function and decimation function are done to obtain each decomposed channel I,Q at the proper sampling rate. Advantageously, the frequency domain programmable channelizer is more computationally efficient than a time domain programmable channelizer because each filter is just a vector in the frequency domain and the filtering operation is just a vector multiplication; decomposing the input signal into multiple channels of differing bandwidths is simply parsing the vector representing the input signal's frequency domain content into subvectors of different lengths. FIG.10is another embodiment of a programmable channelizer. Data enters the filter and channel generators with channelization selector logic for a table lookup of filter coefficient and channelization vectors. The programmable channelizer includes a comparison at each channel, which provides anomalous detection using a mask with frequency and power, which is then sent to the learning engine and/or the alarm system ("A"). Data processed with the FFT is sent to the blind detection engine and/or for averaging processing ("B"). In one embodiment, average processing includes blind detection of channel bandwidths and center frequency and comparison to resulting frequency domain channelization. Data from the table lookup of filter coefficient and channelization vectors undergoes preprocessing with a mix circular rotator to produce D1 blocks of R1 points. A sum is taken of the D1 block, and an R1 point IFFT is taken to produce discard overlap samples OL1. This process occurs (e.g., in parallel) for D1 blocks of R1 points through DR blocks of RR points to produce OL1 through OLR, which are then sent to the classification engine ("C"). All data from the I,Q buffer is preferably stored in a buffered database ("D"). In one embodiment, the I,Q buffer is partitioned into N blocks with L oversamples. In one embodiment, the original sample rate is decimated by Di, where i is from 1 to R.
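The following sketch illustrates the frequency-domain channelization idea described above: each channel's "filter" is a slice of FFT bins, and a short IFFT returns the slice to the time domain at its own decimated rate. It is an assumed, simplified implementation that omits the shaped filter vectors, overlap handling, and phase correction discussed in the text, and it assumes no channel straddles DC or the Nyquist edge:

```python
import numpy as np

def fd_channelize(x: np.ndarray, fs: float, channels):
    """x: complex baseband I/Q block; fs: input sample rate (Hz);
    channels: iterable of (center_hz, bandwidth_hz) pairs.
    Returns a list of (decimated baseband I/Q, channel sample rate)."""
    N = len(x)
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(N, d=1.0 / fs)
    out = []
    for fc, bw in channels:
        # bin selection acts as a brick-wall band pass filter; a shaped
        # filter vector would be multiplied in here instead
        sel = (freqs >= fc - bw / 2) & (freqs < fc + bw / 2)
        bins = X[sel]
        m = len(bins)
        # move the channel's center bin to DC, then a short IFFT returns
        # the channel to the time domain at its own decimated rate;
        # the m/N factor restores the FFT/IFFT amplitude scaling
        y = np.fft.ifft(np.fft.ifftshift(bins)) * (m / N)
        out.append((y, fs * m / N))
    return out

# Example: split a 1 MHz capture into two 50 kHz channels
fs = 1e6
t = np.arange(4096) / fs
x = np.exp(2j * np.pi * 100e3 * t) + 0.5 * np.exp(2j * np.pi * -200e3 * t)
for yq, rate in fd_channelize(x, fs, [(100e3, 50e3), (-200e3, 50e3)]):
    print(len(yq), rate)
```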
FIG.53illustrates one embodiment of a flow diagram of a channelizer configuration. Channel Definitions define the channels to detect. Channelization Vectors define the channels within the context of the FFT. FFT Configuration configures the FFT with sufficient resolution bandwidth (RBW) to resolve ambiguity among the Channel Definitions. A Deference Matrix identifies narrowband (NB) channels that are subsets of wideband (WB) channels to avoid false detection and/or to detect simultaneous channels. Detector Configuration sets the acceptable probability of False Alarm and probability thresholds for detection. Channel Detection determines which channels are present subject to the Detector Configuration and the Deference Matrix. In one embodiment, the FFT Configuration is operable to capture channels in their entirety. In one embodiment, the FFT Configuration is operable to capture channels with a minimum number of points to support detection probabilities. In one embodiment, the FFT Configuration is operable to resolve ambiguities between channels by employing a sufficiently small RBW. In one embodiment, a channelization vector is operable to specify normalized power levels per FFT bin for at least one channel (e.g., each channel). In one embodiment, the channelization vector power levels are preferably normalized with respect to peak power in the channel's spectrum envelope. In one embodiment, a deference matrix is operable to identify channels with bandwidth that falls within the bandwidth of other channels (i.e., channels with wider bands). In one embodiment, the channels defer to the channels with wider bands. In one embodiment, the detector is operable to evaluate narrower band channels before the detector evaluates wider band channels. In one embodiment, positive detection of wider band channels is operable to further constrain the criteria that define the positive detection of deferring narrower band channels. In one embodiment, noise floor estimation is operable to define the mean and standard deviation of noise from the average power FFT (e.g., block). In one embodiment, statistics are operable to be on a channel span level, a block containing multiple channels, or a bin level. In one embodiment, the statistics are operable to be used by the channel detector to set criteria for asserting channel detection. In one embodiment, detector configuration is operable to set a minimum acceptable probability of false alarm. In one embodiment, the detector configuration is operable to set a minimum acceptable probability of missed detection. In one embodiment, channel detection is operable to evaluate presence of channels in order of increasing bandwidths (i.e., narrowband channels before wideband channels). In one embodiment, the channel detection is operable to perform a hypothesis test for each bin using Noise Floor Estimation and a maximum probability of false alarm. In one embodiment, the hypothesis test includes a first hypothesis that a bin contains only noise (H0) and/or a second hypothesis that a bin contains noise and signal (Ha). In one embodiment, the channel detection is operable to determine the probability that S≤s bins within a given channel's bandwidth reject the "Noise only" hypothesis. In one embodiment, the channel detection is operable to assert that a channel is detected if the probability is greater than pmin. In one embodiment, the channel detection is operable to adjust the hypothesis test for deferring narrowband channels if detection probability is greater than p. In one embodiment, the channel detection is operable to set a new mean and a new standard deviation.
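A minimal sketch of the channelizer configuration objects described above (channel definitions, channelization vectors normalized to the peak power of a channel's spectrum envelope, and a deference matrix marking narrowband channels that are proper subsets of wideband channels) might look as follows; all names are illustrative:

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class ChannelDef:
    name: str
    center_hz: float
    bandwidth_hz: float

    @property
    def band(self):
        h = self.bandwidth_hz / 2
        return (self.center_hz - h, self.center_hz + h)

def channelization_vector(ch: ChannelDef, bin_freqs: np.ndarray,
                          envelope_db: np.ndarray) -> np.ndarray:
    """Per-bin power levels for one channel, normalized to the peak power
    of its spectrum envelope; bins outside the channel are left at -inf."""
    lo, hi = ch.band
    in_band = (bin_freqs >= lo) & (bin_freqs <= hi)
    cv = np.full_like(envelope_db, -np.inf)
    cv[in_band] = envelope_db[in_band] - envelope_db[in_band].max()
    return cv

def deference_matrix(chans: List[ChannelDef]) -> np.ndarray:
    """D[i, j] is True when channel i is a proper (narrower) subset of
    channel j, so the detector evaluates i first and lets i defer to j."""
    n = len(chans)
    D = np.zeros((n, n), dtype=bool)
    for i, nb in enumerate(chans):
        for j, wb in enumerate(chans):
            if (i != j and nb.band[0] >= wb.band[0]
                    and nb.band[1] <= wb.band[1]
                    and nb.bandwidth_hz < wb.bandwidth_hz):
                D[i, j] = True
    return D
```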
FIG.54illustrates another embodiment of a flow diagram of a channelizer. The Detector Configuration is operable to set a probability of false alarm per bin (αbin) and a minimum probability (probmin). The noise modeler is operable to estimate a mean noise per bin (μN) and a standard deviation per bin (σN). In one embodiment, the noise modeler is operable to determine a channelization vector (CVbin). The channelizer is operable to slice the FFT per channel and determine kobs as the number of bins above the threshold in the channelization vector. The probability calculator is operable to estimate βbin, determine the probability of missed detection (pmd), determine the probability of false alarm (pfa or p-value), and determine the probability of detection being correct. The detector is operable to determine that a signal is detected if the probability is greater than or equal to the minimum probability. The detector is operable to determine that a signal is not detected if the probability is less than the minimum probability. FIG.55illustrates yet another embodiment of a flow diagram of a channelizer configuration. The noise modeler is operable to estimate a mean noise per bin (μN) and a standard deviation per bin (σN). The Detector Configuration is operable to set a probability of false alarm level per bin (αbin), a power threshold (Tpwr), and a minimum probability of being correct (probmin). The channelizer is operable to slice the FFT per channel and determine kobs as the number of bins above the threshold in the channelization vector. The channelizer is operable to determine a mean of the signal and noise per bin (μN+S) and a standard deviation of the signal and noise per bin (σN+S). The probability calculator is operable to estimate βbin, determine the probability of missed detection, determine the p-value, and determine the probability of being correct. The detector is operable to determine that a signal is detected if the probability is greater than or equal to the minimum probability. The detector is operable to determine that a signal is not detected if the probability is less than the minimum probability. FIG.56Aillustrates one embodiment of probability density functions per bin for noise and a signal with the noise. FIG.56Aincludes a mean of the noise (μH0), a mean of the signal with the noise (μHa), a standard deviation of the noise (σH0), a standard deviation of the signal with the noise (σHa), a probability of false detection (α), a probability of missed detection (β), and a carrier-to-noise ratio (CNR). A minimum CNR is determined that satisfies α and β given σH0 and σHa. Additionally, given N bins of channel bandwidth (chbw), a number of bins that reject H0 to determine presence of a channel is determined. In one embodiment, the probability of false detection (α) is calculated using the following equation:

$$\Pr\{\hat{\mu} = \mu_x \mid H_0 \cap (X \ge CV)\} = \alpha$$

In one embodiment, the probability of correctly rejecting the noise only hypothesis is equal to the probability of getting as many as s bins with power above the threshold if the noise only hypothesis is true.
In one embodiment, the probability of correctly rejecting the noise only hypothesis is calculated using the following equation:

$$\Pr\{ABT \le abt \mid H_0 \cap N_{Bins} = n_{bins}\} = \binom{n_{bins}}{abt} \alpha^{abt} (1-\alpha)^{n_{bins}-abt}, \quad \text{where } abt = \sum_{i=1}^{N_{Bins}} (X_i \ge CV)$$

In one embodiment, the CNR is determined as follows:

$$CNR = \frac{\mu_{Ha}}{\mu_{H0}}$$

In another embodiment, the CNR is determined as follows:

$$CNR = \frac{\mu_{Ha} - CV}{CV - \mu_{H0}}$$

In one embodiment, $CV = \mu_{H0} + \phi(1-\alpha)\sigma_{H0}$. The above equation is operable to be substituted as follows:

$$CNR = \frac{\mu_{Ha} - [\mu_{H0} + \phi(1-\alpha)\sigma_{H0}]}{[\mu_{H0} + \phi(1-\alpha)\sigma_{H0}] - \mu_{H0}}$$

Substituting $\mu_{Ha} = CV + \phi(1-\beta)\sigma_{Ha}$ and simplifying gives:

$$CNR = \frac{[\mu_{H0} + \phi(1-\alpha)\sigma_{H0}] + \phi(1-\beta)\sigma_{Ha} - [\mu_{H0} + \phi(1-\alpha)\sigma_{H0}]}{[\mu_{H0} + \phi(1-\alpha)\sigma_{H0}] - \mu_{H0}} = \frac{\phi(1-\beta)\sigma_{Ha}}{\phi(1-\alpha)\sigma_{H0}}$$

In yet another embodiment, the CNR is calculated as follows:

$$CNR = \frac{s+n}{n} = \frac{s}{n} + 1 = 10^{SNR_{dB}/10} + 1$$

which in decibels is

$$CNR_{dB} = 10\log_{10}(10^{SNR_{dB}/10} + 1)$$

Rearranging, $10^{CNR_{dB}/10} = 10^{SNR_{dB}/10} + 1$, so that $10^{CNR_{dB}/10} - 1 = 10^{SNR_{dB}/10}$ and

$$SNR\,[\text{dB}] = 10\log_{10}(10^{CNR_{dB}/10} - 1)$$

Converting back to a linear power ratio, $SNR = 10^{SNR_{dB}/10} = 10^{CNR_{dB}/10} - 1$. FIG.56Billustrates an example of a plurality of frequency bins and a critical value or threshold. In one embodiment, a frequency bin is assigned a value of "1" if it is above the critical value or threshold and a value of "0" if it is below the critical value or threshold. In the example shown inFIG.56B, nine frequency bins are assigned a value of "1" and one frequency bin is assigned a value of "0." FIG.56Cillustrates another example of a plurality of frequency bins and a critical value or threshold. In the example shown inFIG.56C, three frequency bins are assigned a value of "1" and seven frequency bins are assigned a value of "0." FIG.57Aillustrates one example of probability on the channel level. In one embodiment, the system is operable to determine H0, which is the hypothesis that the set of bins contains noise only. In one embodiment, the system is operable to determine Ha, which is the hypothesis that the set of bins contains noise plus signal, making a possible channel. In one embodiment, the system is operable to calculate a threshold for the probability of false alarm (k0) on the channel level (αch). In one embodiment, k0 is calculated as follows:

$$k_0 = k|_{H_0}(\alpha_{ch})$$

In one embodiment, the system is operable to calculate a threshold for the probability of missed detection (ka) on the channel level (βch). In one embodiment, ka is equal to kobs. In one embodiment, ka is calculated as follows:

$$k_a = k|_{H_0}(\beta_{ch})$$

In one embodiment, the system is operable to select a probmin, αch, and βch (e.g., via manual input). In one embodiment, the probmin, the αch, and the βch are set manually based on desired goals. For example, and not limitation, in one embodiment, the probability of false alarm for a channel is set at <5% and the probability of missed detection for a channel is set at <5%. In one embodiment, the system is operable to determine k0 given n bins, αch, and αbin. In one embodiment, the system is operable to determine βch.
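The CNR/SNR conversions derived above reduce to two one-line functions; this sketch (illustrative names, not from the patent) round-trips between them:

```python
import math

def cnr_db_from_snr_db(snr_db: float) -> float:
    """CNR_dB = 10*log10(10^(SNR_dB/10) + 1), from CNR = s/n + 1."""
    return 10 * math.log10(10 ** (snr_db / 10) + 1)

def snr_db_from_cnr_db(cnr_db: float) -> float:
    """SNR_dB = 10*log10(10^(CNR_dB/10) - 1)."""
    return 10 * math.log10(10 ** (cnr_db / 10) - 1)

# A signal at the noise floor (SNR = 0 dB) has a CNR of about 3 dB
print(cnr_db_from_snr_db(0.0))                      # ~3.01
print(snr_db_from_cnr_db(cnr_db_from_snr_db(0.0)))  # ~0.0 (round trip)
```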
FIG. 57B illustrates one example of probability on the bin level. In one embodiment, a power threshold for the probability of false alarm on the bin level is calculated. In one embodiment, the channelization vector is calculated using the following equation:

$$CV = dBm|_{H_0}(\alpha_{bin})$$

FIG. 58A illustrates an example showing a critical value (γ), power levels of a noise signal, and power levels of a signal with noise. In the example shown in FIG. 58A, the received noise mean and variance are estimated. The noise power is assumed to be Gaussian and independent and identically distributed. The probability of false alarm at the bin level (α) is specified. The critical value is set accordingly. The received mean and variance are measured, and the signal is placed into N adjacent frequency bins. The signal with noise power is also assumed to be Gaussian and independent and identically distributed. The probability of missed detection at the bin level (β) is determined with respect to the critical value.

FIG. 58B illustrates an example of the probability of false alarm for a noise signal. FIG. 58C illustrates an example of the probability of missed detection for a signal with noise.

FIG. 59 illustrates one example of probabilities. In the example shown in FIG. 59, the probability of false detection (α) is equal to 0.05 and the probability of missed detection (β) is equal to 0.05. The probability of channel presence is estimated as follows when the probability of false detection and the probability of missed detection are small:

$$\text{probability of channel presence} = 1-(\alpha+\beta)$$

Thus, for α = 0.05 and β = 0.05, the probability of channel presence = 0.9.

FIG. 60 illustrates an example of equations for hypothesis testing with channels for the following scenarios: (1) Noise Only (No Signal), Assert Noise Only; (2) Noise Only (No Signal), Assert Noise and Signal; (3) Noise and Signal, Assert Noise Only; and (4) Noise and Signal, Assert Noise and Signal. Assert Noise Only is used when n−k bins or more are below the threshold. Assert Noise and Signal is used when k bins or more are above the threshold. In one embodiment, Noise Only (No Signal), Assert Noise Only uses the following equation:

$$\Pr\{\text{``}N\text{''} \mid N\} = 1-\sum_{u=k}^{n}\binom{n}{u}\alpha^{u}(1-\alpha)^{n-u}$$

In one embodiment, Noise Only (No Signal), Assert Noise and Signal uses the following equation:

$$\Pr\{\text{``}N+S\text{''} \mid N\} = \sum_{u=k}^{n}\binom{n}{u}\alpha^{u}(1-\alpha)^{n-u}$$

In one embodiment, Noise and Signal, Assert Noise Only uses the following equation:

$$\Pr\{\text{``}N\text{''} \mid N+S\} = \sum_{c=n-k}^{n}\binom{n}{c}\beta^{c}(1-\beta)^{n-c}$$

In one embodiment, Noise and Signal, Assert Noise and Signal uses the following equation:

$$\Pr\{\text{``}N+S\text{''} \mid N+S\} = 1-\sum_{c=n-k}^{n}\binom{n}{c}\beta^{c}(1-\beta)^{n-c}$$
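The four cases of FIG. 60 reduce to binomial tail sums, which the following sketch evaluates with SciPy (a minimal illustration; the function name is an assumption, not from the source):

```python
from scipy.stats import binom

def assertion_probabilities(n, k, alpha, beta):
    """The four assert/truth probabilities of FIG. 60 for an n-bin channel,
    asserting "noise and signal" when k or more bins exceed the threshold."""
    # Pr{"N+S" | N} = sum_{u=k}^{n} C(n,u) alpha^u (1-alpha)^(n-u)
    p_ns_given_n = binom.sf(k - 1, n, alpha)
    # Pr{"N" | N} is its complement
    p_n_given_n = 1.0 - p_ns_given_n
    # Pr{"N" | N+S} = sum_{c=n-k}^{n} C(n,c) beta^c (1-beta)^(n-c)
    p_n_given_ns = binom.sf(n - k - 1, n, beta)
    # Pr{"N+S" | N+S} is its complement
    p_ns_given_ns = 1.0 - p_n_given_ns
    return p_n_given_n, p_ns_given_n, p_n_given_ns, p_ns_given_ns

print(assertion_probabilities(n=20, k=5, alpha=0.05, beta=0.05))
```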
FIG. 61 illustrates one example of an algorithm used in a channelizer. The algorithm 6100 includes estimating the bin-wise noise model 6102. In one embodiment, the bin-wise noise model is obtained from a noise floor estimator. In one embodiment, the bin-wise noise model is calculated using the following equation:

$$H_0 = (\mu_N, \sigma_N^2)$$

The algorithm 6100 includes estimating the bin-wise noise and signal model 6104. In one embodiment, the bin-wise noise and signal model is obtained from the FFT or block FFT. In one embodiment, the bin-wise noise and signal model is calculated using the following equation:

$$H_a = (\mu_{NS}, \sigma_{NS}^2)$$

The algorithm 6100 includes determining a bin-level probability of false alarm (PFA = α_bin) 6106. The algorithm 6100 further includes determining a bin-level threshold 6108. In one embodiment, the bin-level threshold is calculated using the following equation:

$$\tau_{bin} = Q^{-1}(\alpha_{bin} \mid H_0)$$

where Q(·) is the complementary error function (ERFC) for Gaussian distributions. The algorithm 6100 includes determining a channel-level probability of false alarm (α_chan) 6110. The algorithm 6100 further includes determining a channel-level threshold 6112. In one embodiment, the channel-level threshold is calculated using the following equation:

$$\tau_{chan} = Q^{-1}(\alpha_{chan} \mid [H_0, n, \alpha_{bin}])$$

The algorithm 6100 includes calculating a detection vector (v) 6114. In one embodiment, an element of the detection vector is calculated as follows:

$$v_i = \begin{cases} 0, & \text{if } x_i < \tau_{bin} \\ 1, & \text{if } x_i \ge \tau_{bin} \end{cases}$$

The algorithm 6100 includes counting the number of detection vector elements equal to 1 and comparing to k 6116. The algorithm 6100 further includes determining the p-value 6118, where the p-value is the probability of false alarm. In one embodiment, the p-value is calculated using the following equation:

$$p\text{-value} = 1-\sum_{i=0}^{k}\binom{n}{i}\alpha_{bin}^{i}(1-\alpha_{bin})^{n-i}+\binom{n}{k}\alpha_{bin}^{k}(1-\alpha_{bin})^{n-k}$$

The algorithm 6100 includes determining the probability of missed detection (β_chan) 6120. In one embodiment, the probability of missed detection is calculated using the following equation:

$$P_{MD} = \beta_{chan} = \phi(H_a, n, \tau_{chan}, \alpha_{bin}) = \sum_{i=0}^{\tau_{chan}}\binom{n}{i}\alpha^{i}(1-\alpha)^{n-i}$$

The algorithm 6100 includes determining the overall detection probability 6122. In one embodiment, the overall detection probability is calculated using the following equation:

$$\text{probability} = (1-P_{FA})(1-P_{MD})$$

FIGS. 62A-62G illustrate examples of information provided in at least one graphical user interface (GUI). FIG. 62A illustrates one example of a simulated FFT, noise floor estimation, and comparison with channelization vectors. In the example shown in FIG. 62A, FFT Frame #1 has a threshold of −59.7 dBm and a noise floor estimate of −99.7 dBm.

FIG. 62B illustrates channelization vectors and comparisons for the FFT frame shown in FIG. 62A. A smaller channel bandwidth leads to a higher probability of detection than a larger channel bandwidth. The smaller channel bandwidth has a probability of 100% in the lower frequency bins and a probability of 53% in the higher frequency bins for the signal above the threshold. The larger channel bandwidth has a probability of 34% in the lower frequency bins.

FIG. 62C illustrates amplitude probability distribution for the FFT frame shown in FIG. 62A. FIG. 62D illustrates FFT frames grouped by block for the FFT frame shown in FIG. 62A. FIG. 62E illustrates average power FFT by block for the FFT frame shown in FIG. 62A. FIG. 62F illustrates the channelization vectors for the FFT frame shown in FIG. 62A. The x-axis indicates the frequency of the bin. FIG. 62G illustrates the comparison vectors for the FFT frame shown in FIG. 62A.
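Steps 6118 through 6122 of algorithm 6100, described above, translate directly into code. The fragment below is illustrative only; β_chan is supplied as an input here rather than derived from H_a:

```python
from scipy.stats import binom

def p_value(n, k_obs, alpha_bin):
    """Step 6118: 1 - sum_{i=0}^{k} C(n,i) a^i (1-a)^(n-i) + C(n,k) a^k (1-a)^(n-k),
    i.e., the probability of k_obs or more noise-only bins crossing tau_bin."""
    return 1.0 - binom.cdf(k_obs, n, alpha_bin) + binom.pmf(k_obs, n, alpha_bin)

def overall_probability(p_fa, p_md):
    """Step 6122: probability = (1 - P_FA)(1 - P_MD)."""
    return (1.0 - p_fa) * (1.0 - p_md)

pfa = p_value(n=100, k_obs=12, alpha_bin=0.05)
pmd = 0.02   # beta_chan from step 6120, assumed here for illustration
print(f"p-value = {pfa:.4f}, overall = {overall_probability(pfa, pmd):.4f}")
```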
FIG. 63A illustrates one embodiment of preemption by wideband detection. In one embodiment, the system determines which narrowband channels are proper subsets of wideband channels. In one embodiment, this is recursive across multiple layers. In one embodiment, the FFT is compared to wideband channels after narrowband channels. In one embodiment, if the minimum threshold for detection is satisfied for a channel, that channel is considered detected. In one embodiment, detection of narrowband channels that are proper subsets of wideband channels is preempted. In the example shown in FIG. 63A, channels D, E, and B are detected, and channels A, F, C, and G are not detected. Detection of channel F is preempted by detection of B.

FIG. 63B illustrates the example shown in FIG. 63A accounting for wideband detection. In one embodiment, the system determines which narrowband channels are proper subsets of wideband channels. In one embodiment, this is recursive across multiple layers. In one embodiment, the FFT is compared to wideband channels after narrowband channels. In one embodiment, if the minimum threshold for detection is satisfied for a channel, that channel is considered detected. In the embodiment shown in FIG. 63B, the average power of the wideband channel is subtracted from the power observed in the FFT slice within the narrowband channel. Alternatively, the bin-level threshold is adjusted. In one embodiment, if the minimum threshold for detection is satisfied for the adjusted narrowband channel, it is considered detected. In the example shown in FIG. 63B, channels D, E, B, and F are detected, and channels A, C, and G are not detected.

FIG. 64 illustrates one embodiment of maximum resolution bandwidth (RBW) for vector resolution. The maximum RBW is the greatest common factor of all spacings (δ) from all channels and of the span. A deference matrix is operable to be created showing which of two overlapping channels, if both are individually detected, takes precedence in an either-or detection scheme. In one embodiment, it is also possible to detect both channels with varying levels of likelihood.

FIG. 65A illustrates one example of a spectrum scenario. The set includes 0 and/or 1; that is, S = {0, 1}. The number of permutations is equal to 2^N, where N is the number of bins. For example, if the number of bins is 50, the number of permutations is equal to 2^50. If the number of bins is 100, the number of permutations is equal to 2^100.

FIG. 65B illustrates an embodiment of channelization vectors. In one embodiment, the channelization vector includes a smaller bandwidth (bw1), resulting in two channels (channel 1.1 and channel 1.2). In another embodiment, the channelization vector includes one channel (channel 2.1).

FIG. 66A illustrates one example of bandwidth selection. In the example shown in FIG. 66A, bandwidth 1 (bw1) is 100 bins, bandwidth 2 (bw2) is 200 bins, and bandwidth 3 (bw3) is 600 bins.

FIG. 66B illustrates an example using the bandwidth selection in FIG. 66A. In the example shown in FIG. 66B, the first two channels detect a signal in bw1, the first channel detects a signal in bw2, and a signal is present in only ⅓ of bw3. Therefore, no signal is detected in bw3.

FIGS. 67A-67D illustrate additional examples of information provided in at least one graphical user interface (GUI). FIG. 67A illustrates another example of an FFT frame illustrating the center frequency on the x-axis and the frequency bin mean power level (Fbpower_mean) on the y-axis. The example shown in FIG. 67A includes a plurality of threshold powers (e.g., Tpwr3, Tpwr8, Tpwr13).

FIG. 67B illustrates channelization vectors and comparisons for the FFT frame shown in FIG. 67A. The graph illustrates possible channel flow compositions (chflo) and their probabilities. As shown in FIG. 67B, Tpwr3 and Tpwr8 are detected, while Tpwr13 is not detected.
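Returning briefly to the maximum-RBW rule of FIG. 64, the rule is a greatest-common-factor computation; a minimal sketch, assuming integer channel spacings in Hz, follows:

```python
from functools import reduce
from math import gcd

def max_rbw_hz(spacings_hz, span_hz):
    """Maximum RBW per FIG. 64: the greatest common factor of all channel
    spacings (delta) and of the span."""
    return reduce(gcd, spacings_hz + [span_hz])

# Example: 25 kHz and 100 kHz channel rasters across a 10 MHz span
print(max_rbw_hz([25_000, 100_000], 10_000_000))   # 25000
```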
FIG. 67C illustrates a graph for the FFT frame shown in FIG. 67A. The graph illustrates the number of bins at a particular power level (mass function) per channel. The x-axis is the percentage of bins at a particular power level for the channel. For example, around 15% of the bins in the channel are at a level of −95 dBFS.

FIG. 67D illustrates a table for the FFT frame shown in FIG. 67A. The table provides a channel index, a count, and a number of bins above the threshold for that channel. The table shows, for different channel indexes, the minimum number of bins of signal with noise required to determine that the channel is occupied with a probability of false alarm below the desired threshold.

FIGS. 68A-68C illustrate further examples of information provided in at least one graphical user interface (GUI). FIG. 68A illustrates an example of spectrum by channel. FIG. 68B illustrates an example of channel detection for the spectrum shown in FIG. 68A. FIG. 68C illustrates an example of a cascade for the spectrum shown in FIG. 68A.

FIG. 69A illustrates an example of a graph of total signal power. In one embodiment, total signal power is calculated using the following equation:

$$\text{Total Signal Power} = \frac{1}{chbw}\sum_{b} P_{s,i}$$

FIG. 69B illustrates an example of a graph of total noise power. In one embodiment, total noise power is calculated using the following equation:

$$\text{Total Noise Power} = \frac{1}{chbw}\sum_{b} P_{n,i}$$

In one embodiment, the signal to noise ratio is calculated using the following equation:

$$SNR = \frac{\sum_{b} P_{s,i}\,\delta f}{\sum_{b} P_{n,i}\,\delta f}$$

FIGS. 70A-70C illustrate further examples of information provided in at least one graphical user interface (GUI). FIG. 70A illustrates another example of a simulated FFT, noise floor estimation, and comparison with channelization vectors. In the example shown in FIG. 70A, FFT Frame #6 has a threshold of −59.2 dBm and a noise floor estimate of −99.2 dBm.

FIG. 70B illustrates channelization vectors and comparisons for the FFT frame shown in FIG. 70A. As described previously, a smaller channel bandwidth leads to a higher accuracy of detection than a larger channel bandwidth. The smaller channel bandwidth has a probability of 100% and 95% in the two lowest frequency bins and a probability of 74% in the highest frequency bin for the signal above the threshold. The larger channel bandwidth has a probability of 74% in the lowest frequency bin and 98% in the second frequency bin. However, the larger channel bandwidth does not account for the drop in signal shown in the third frequency bin of the smaller channel bandwidth, which results in only a 40% probability of detection in the smaller channel bandwidth.

FIG. 70C illustrates channelization vectors and comparisons for the FFT frame shown in FIG. 70A.
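The total-power and SNR equations of FIGS. 69A and 69B above translate directly into the following sketch; per-bin powers in linear units (mW) are assumed for illustration:

```python
import numpy as np

def channel_power_and_snr(p_signal_bins, p_noise_bins, delta_f):
    """Per the FIG. 69 equations: powers are averaged over the channel
    bandwidth in bins (chbw); SNR is the ratio of summed signal power to
    summed noise power (the common delta_f cancels)."""
    chbw = len(p_signal_bins)
    total_signal = np.sum(p_signal_bins) / chbw
    total_noise = np.sum(p_noise_bins) / chbw
    snr = (np.sum(p_signal_bins) * delta_f) / (np.sum(p_noise_bins) * delta_f)
    return total_signal, total_noise, snr

sig = np.array([4.0, 5.0, 4.5, 4.8])     # assumed per-bin signal power (mW)
noise = np.array([1.0, 1.1, 0.9, 1.0])   # assumed per-bin noise power (mW)
print(channel_power_and_snr(sig, noise, delta_f=1e3))
```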
FIG. 11 illustrates one embodiment of a blind detection engine. In one embodiment, the blind detection engine is operable to estimate a number of channels and their bandwidths and center frequencies using only an averaged power spectral density (PSD) of the captured signal. Data from the programmable channelizer undergoes an N point FFT. A power spectral density (PSD) is calculated for each N point FFT, and then a complex average FFT is obtained for the P blocks of N point FFT. The PSD is sent to a noise floor estimator, an edge detection algorithm, and/or an isolator. Noise floor estimates from the noise floor estimator are sent to the signal database. The edge detection algorithm passes information to a signal separator (e.g., bandwidth, center frequency). The isolator obtains information including, but not limited to, the PSD, the bandwidth and center frequency per channel, the complex average FFT, and/or the N point FFT. Information from the isolator is sent to the programmable channelizer, the envelope feature extraction module, and/or the classification engine.

FIG. 12 illustrates one embodiment of an edge detection algorithm. Peaks are detected for all power values above the noise floor. Peaks are recorded in a power array and/or an index array. Consecutive power values are found by looping through the arrays. For each group of consecutive power values, a sub-power array and/or a sub-index array are created. The blind detection engine steps through each power value starting with a default rising threshold. If N consecutive values are increasing above the rising threshold, the first value of the N values is set as the rising edge and the index of the first value of the N values is recorded. The Nth value is recorded as a rising reference point. The rising threshold is updated based on the rising reference point, and the blind detection engine continues to scan for rising values. If the blind detection engine does not detect rising values and detects M consecutive values decreasing below a falling threshold, the first value of the M values is set as the falling edge and the index of the first value of the M values is recorded. The Mth value is recorded as a falling reference point. The falling threshold is updated based on the falling reference point. In one embodiment, x is a value between 1 dB and 2.5 dB. In one embodiment, y is a value between 1 dB and 2.5 dB.

The blind classification engine receives information from the blind detection engine as shown in FIG. 13. Signals are separated based on bandwidth and/or other envelope properties (e.g., duty cycle). An IFFT is performed on R signals for narrowband and/or broadband signals. Decimation is then performed based on bandwidth. Moment calculations are performed for each signal I,Q using the decimated values and/or information from the channelizer. In a preferred embodiment, the moment calculations include a second moment and/or a fourth moment for each signal. A match based on cumulants is selected for each I,Q stream, which is sent to the demodulation bank and/or the geolocation engine. From the definitions of the second and fourth moments, the following equations are used to calculate the cumulants:

$$\hat{C}_{20} = \frac{1}{N}\sum_{n=1}^{N}|Y(n)|^2$$

$$\hat{C}_{21} = \frac{1}{N}\sum_{n=1}^{N}Y^2(n)$$

$$\hat{C}_{40} = \frac{1}{N}\sum_{n=1}^{N}Y^4(n) - 3\hat{C}_{20}^2$$

$$\hat{C}_{41} = \frac{1}{N}\sum_{n=1}^{N}Y^3(n)Y^*(n) - 3\hat{C}_{20}\hat{C}_{21}$$

$$\hat{C}_{42} = \frac{1}{N}\sum_{n=1}^{N}|Y(n)|^4 - |\hat{C}_{20}|^2 - 2\hat{C}_{21}^2$$

If it is assumed that the transmitted constellations are normalized to unity average power, which is easily accomplished with a power factor equal to 0 dB, this results in Ĉ21 ≈ 1. A normalized fourth moment is then calculated using the following equation:

$$\tilde{C}_{4J} \triangleq \hat{C}_{4J}/\hat{C}_{21} \quad \text{for } J = 0, 1, 2$$

Advantageously, normalizing the fourth moment cumulants removes any scaling power problems.
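The cumulant estimates above amount to a few lines of NumPy. The sketch below follows the document's definitions as written and checks them on a synthetic BPSK stream, for which Table 1 (below) predicts a normalized fourth moment of −2:

```python
import numpy as np

def cumulants(y):
    """Second- and fourth-order cumulant estimates per the equations above."""
    n = len(y)
    c20 = np.sum(np.abs(y) ** 2) / n
    c21 = np.sum(y ** 2) / n
    c40 = np.sum(y ** 4) / n - 3 * c20 ** 2
    c41 = np.sum(y ** 3 * np.conj(y)) / n - 3 * c20 * c21
    c42 = np.sum(np.abs(y) ** 4) / n - np.abs(c20) ** 2 - 2 * c21 ** 2
    # Normalized fourth moments remove scaling-power problems
    return c40 / c21, c41 / c21, c42 / c21

# Unit-power BPSK test signal (assumed, noiseless, for illustration)
rng = np.random.default_rng(0)
y = rng.choice([-1.0, 1.0], size=4096)
print(cumulants(y))   # (-2.0, -2.0, -2.0)
```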
FIG. 14 illustrates details on selection match based on cumulants for modulation selection. As previously described, the cumulants preferably include a second moment and/or a fourth moment for each signal. For example, a fourth moment between −0.9 and −0.62 is a quadrature amplitude modulation (QAM) signal, a fourth moment greater than or equal to 1 is an amplitude modulation (AM) signal, a fourth moment equal to −1 is a constant envelope signal (e.g., frequency modulation (FM), Gaussian minimum-shift keying (GMSK), frequency-shift keying (FSK), or phase-shift keying (PSK)), a fourth moment between −1.36 and −1.209 is a pulse-amplitude modulation (PAM) signal, and a fourth moment equal to −2 is a binary phase-shift keying (BPSK) signal. A type is selected using a look-up table, the signal I,Q is labeled with the type, and the information is sent to the demodulation bank. Additional information about selection match based on cumulants for modulation selection is available in Table 1 below.

TABLE 1
Type        40        42        σ(40)    σ(42)
AM                    >1.0
FM                    −1
GMSK                  −1
FSK                   −1
BPSK        −2.00     −2.00     0        0
PAM (4)     −1.36     −1.36     2.56     2.56
PAM (8)     −1.238    −1.238    4.82     4.82
PAM (16)    −1.2094   −1.2094   5.52     5.52
PSK (4)     −1.00     −1.00
QAM (4)     −0.68     −0.68
QAM (16)    −0.64     −0.64     3.83     2.24
QAM (32)    −0.61     −0.61     3.89     2.31
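A simplified stand-in for the look-up step of FIG. 14 matches an estimated normalized fourth moment against the theoretical values of Table 1; the nearest-neighbor rule and the omission of the constant-envelope types (which share the −1 value) are illustrative assumptions:

```python
# Theoretical normalized fourth-moment values from the "42" column of Table 1
TABLE_1_C42 = {
    "BPSK": -2.00, "PAM (4)": -1.36, "PAM (8)": -1.238, "PAM (16)": -1.2094,
    "PSK (4)": -1.00, "QAM (4)": -0.68, "QAM (16)": -0.64, "QAM (32)": -0.61,
}

def classify_by_cumulant(c42_est):
    """Label an I,Q stream by the nearest Table 1 entry; values at or
    above 1 are treated as AM per the thresholds described above."""
    if c42_est >= 1.0:
        return "AM"
    return min(TABLE_1_C42, key=lambda mod: abs(TABLE_1_C42[mod] - c42_est))

print(classify_by_cumulant(-1.98))   # BPSK
print(classify_by_cumulant(-0.65))   # QAM (16)
```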
FIG. 15 illustrates a flow diagram according to one embodiment of the present invention. Data in the UQ buffer is processed using a library of functions. The library of functions includes, but is not limited to, FFT, peak detection, characterization, and/or rate adjustment. As previously described, the system preferably includes at least one data analysis engine. In one embodiment, the at least one data analysis engine includes a plurality of engines. In one embodiment, the plurality of engines includes, but is not limited to, a detection engine, a classification engine, an identification engine, a geolocation engine, and/or a learning engine. Each of the plurality of engines is operable to interact with the other engines in the plurality of engines. The system is operable to scan for occupancy of the spectrum, create a mask, detect drones, and/or analyze data. The control panel manages all data flow between the UQ buffer, library functions, the plurality of engines, applications, and the user interface. A collection of basic functions and a particular sequence of operations are called from each of the plurality of engines. Each of the plurality of engines is operable to pass partially processed and/or analyzed data to other engines to enhance the functionality of the other engines and/or applications. The data from the engines are then combined and processed to build applications and/or features that are customer or market specific. In one embodiment, a plurality of state machines performs a particular analysis for a customer application. In one embodiment, the plurality of state machines is a plurality of nested state machines. In another embodiment, one state machine is utilized per engine application. The plurality of state machines is used to control the flow of functions and/or an engine's input/output utilization to perform the required analyses.

FIG. 16 illustrates control panel functions according to one embodiment. The control panel is operable to detect occupation of the spectrum, activate an alarm, perform drone detection and direction finding, geolocation, and artificial spectrum verification, and provide at least one user interface. The at least one user interface is preferably a graphical user interface (GUI). The at least one user interface (UI) is operable to display output data from the plurality of engines and/or applications. In one embodiment, the at least one UI incorporates third party GIS for coordinate display information. The at least one UI is also operable to display alarms, reports, utilization statistics, and/or customer application statistics. In one embodiment, the at least one UI includes an administrator UI and at least one customer UI. The at least one customer UI is specific to each customer.

In one embodiment, the systems and methods of the present invention provide unmanned vehicle (e.g., drone) detection. The overall system is capable of surveying the spectrum from 20 MHz to at least 6 GHz, not just the common 2.4 GHz and 5.8 GHz bands as in the prior art. The systems and methods of the present invention are operable to detect UVs and their controllers by protocol. In one embodiment, the systems and methods of the present invention maintain a state-of-the-art learning system and a protocol library for classifying detected signals by manufacturer and controller type. The state-of-the-art learning system and the protocol library are updated as new protocols emerge. In one embodiment, classification by protocol chipset is utilized to provide valuable intelligence and knowledge for risk mitigation and threat defense. The valuable intelligence and knowledge include effective operational range, supported peripherals (e.g., external or internal camera, barometers, global positioning system (GPS) and dead reckoning capabilities), integrated obstacle avoidance systems, and interference mitigation techniques.

Advantageously, the system is operable to detect drones that are not in the protocol library. Further, the system is operable to detect drones without demodulating command and control protocols. In one embodiment, the system does not include a protocol library. New protocols and new drones are constantly being released. Additionally, a nefarious operator can switch out the chipset of a drone, which would leave an area vulnerable to the modified drone because a system would not be able to identify the signal as a drone if the protocol is not in the protocol library. In one embodiment, the system generates actionable data that indicates that at least one signal is behaving like a drone. The system performs blind detection, which allows the system to detect the drone signal without the protocol library. In one embodiment, the system is operable to detect drones by evaluating an envelope of the command and control signal. In one embodiment, the system detects the drone signal based on a duty cycle and/or changes in power levels of the signal envelope. In one example, an LTE signal is classified by the system as a drone when moving at a high velocity.

FIG. 17 illustrates one embodiment of an RF analysis sub-architecture of the system. The control panel interacts with the UQ buffer, library functions, engines, applications, and/or user interface. The engines include a data analysis engine. Analyzed data from the data analysis engine results in an alarm when an alarm condition is met. The alarm is transmitted via text and/or email, or is visualized on a graphical user interface (GUI) of at least one remote device (e.g., smartphone, tablet, laptop computer, desktop computer).

FIG. 18 illustrates one embodiment of a detection engine of the system. The detection engine receives data from the at least one monitoring unit. The detection engine includes blind feature extraction algorithms. A mask is created. The detection engine then performs a mask utilization rating, and the mask is compared to previous masks. Anomalies are then detected.
As previously described, in one embodiment, the data analysis engine is operable to perform mask creation and analyze an electromagnetic (e.g., RF) environment using masks. Mask creation is a process of elaborating a representation of an electromagnetic environment by analyzing a spectrum of signals over a certain period of time. A mask is created with a desired frequency range (e.g., as entered into the system via user input), and FFT streaming data is also used in the mask creation process. A first derivative is calculated and used for identifying maximum power values. A moving average value is created as FFT data is received during a selected time period for mask creation (e.g., via user input). For example, the time period is 10 seconds. The result is an FFT array with an average of maximum power values, which is called a mask. FIG. 19 illustrates a mask according to one embodiment of the present invention.

In one embodiment, the mask is used for electromagnetic environment analysis. In one embodiment, the mask is used for identifying potential unwanted signals in an electromagnetic (e.g., RF) environment. The system is operable to utilize masks based on a priori knowledge and/or masks based on expected behavior of the electromagnetic environment. Each mask has an analysis time. During its analysis time, a mask is scanned and live FFT streaming data is compared against the mask before the next mask arrives. If a value is detected over the mask range, a trigger analysis is performed. Each mask has a set of trigger conditions, and an alarm is triggered into the system if the trigger conditions are met. In one embodiment, there are three main trigger conditions: an alarm duration, a decibel (dB) offset, and a count. The alarm duration is the time window for which an alarm condition needs to persist to be considered a trigger condition. For example, the time window is 2 seconds. If a signal is seen for 2 seconds, it passes to the next condition. The dB offset is a threshold value (i.e., a dB value) by which a signal needs to be above the mask to be considered a potential alarm. The count is the number of times the first two conditions need to happen before an alarm is triggered into the system.

FIG. 20 illustrates a workflow of automatic signal detection according to one embodiment of the present invention. A mask definition is specified by a user for an automatic signal detection process including creating masks, saving masks, and performing electromagnetic (e.g., RF) environment analysis based on the masks created and the FFT data stream from a radio server. In one embodiment, if trigger conditions are met, alarms are triggered and stored to a local database for visualization.
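The three trigger conditions combine naturally into a small state machine. The sketch below is a minimal illustration with assumed names and structure; it evaluates one mask comparison per FFT frame:

```python
class MaskTrigger:
    """Evaluates the three trigger conditions described above: a dB offset
    above the mask, an alarm duration, and a count (assumed structure)."""

    def __init__(self, db_offset, duration_s, count):
        self.db_offset = db_offset    # dB a signal must exceed the mask by
        self.duration_s = duration_s  # seconds the excursion must persist
        self.count = count            # qualifying excursions before alarming
        self._since = None
        self._hits = 0

    def update(self, t, power_db, mask_db):
        """Feed one mask comparison; returns True once the alarm triggers."""
        if power_db > mask_db + self.db_offset:
            if self._since is None:
                self._since = t
            if t - self._since >= self.duration_s:
                self._hits += 1
                self._since = None   # re-arm for the next excursion
        else:
            self._since = None
        return self._hits >= self.count

trigger = MaskTrigger(db_offset=6.0, duration_s=2.0, count=3)
```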
FIG. 21 illustrates components of a Dynamic Spectrum Utilization and Sharing model according to one embodiment of the present invention. By employing the Dynamic Spectrum Utilization and Sharing model, the present invention is operable to perform a plurality of radio frequency (RF) environmental awareness functionalities including, but not limited to, monitoring and/or detection, identification, and/or classification. Monitoring and/or detection functionalities include, but are not limited to, broadband frequency range detection, wideband capture in real-time or near-real-time, initial processing and/or post event processing, 24-hour autonomous monitoring, and/or reconfiguration options relating to time, frequency, and spatial settings. Identification functionalities include, but are not limited to, anomalous signal detection, anomalous signal flagging, anomalous signal time stamp recording, providing an anomalous signal database, and/or utilization of a spectrum mask. In one embodiment, the spectrum mask is a dynamic spectrum mask. Classification functionalities include, but are not limited to, correlating signal events with known signal protocols, correlating signal events with known variables, correlating signal events with known databases, correlating signal events with existing wireless signal formats, and/or correlating signal events with existing cellular protocol formats.

Each of the aforementioned functionalities incorporates learning processes and/or procedures. These include, but are not limited to, historical data analysis, data preservation tools, and/or learning analytics. Incorporation of machine learning (ML), artificial intelligence (AI), and/or neural networks (NNs) ensures that every aspect of detection, monitoring, identification, and/or classification is performed autonomously. This is compounded through the use of the learning analytics, enabling the use of utilization masks for continual ML, prediction modeling, location analysis, intermodulation analysis, and/or the integration of third-party data sets for increasing the overall learning capabilities and/or functionalities of the platform. Moreover, these capabilities and/or functionalities are backed up through secure data preservation services, providing both a secure platform environment and data enforcement documentation (i.e., legal documents). Furthermore, the platform is operable to provide automated notifications, programmable event triggers, customizable rules and/or policies, and Tip and Cue practices. Automated notifications include, but are not limited to, alerts, alarms, and/or reports. Advantageously, this functionality enables the platform to react to specific rules and/or policies, as well as incorporating the platform's own awareness and knowledge, creating an optimized platform for any RF environment and/or mission.

Prediction models used by the platform provide an accurate insight into the dynamic spectrum allocation and utilization functionalities. These prediction models enable the platform to autonomously create forecasts for future spectrum usage. In addition, the prediction models used by the platform incorporate descriptive analytics, diagnostic analytics, predictive analytics, and/or prescriptive analytics. Descriptive analytics refers specifically to the data stored, analyzed, and/or used by the platform. Descriptive analytics provides data enabling the platform to act and/or provide a suggested action. Diagnostic analytics refers to how and/or why the descriptive analytics acted and/or suggested an action. Predictive analytics specifically refers to the utilization of techniques including, but not limited to, ML, AI, NNs, historical data, and/or data mining to make future predictions and/or models. Prescriptive analytics refers to the act and/or the suggested act generated by the descriptive analytics. Once this predictive model is in place, the platform is operable to recommend and/or perform actions based on historical data, external data sources, ML, AI, NNs, and/or other learning techniques.

FIG. 22 illustrates a Results model according to one embodiment of the present invention.
The Results model provided by the present invention is centered around four core practices: proactive, predictive, preventative, and preservation. The predictive practice refers to using the aforementioned learning functionalities and capabilities to evolve the platform, enabling the characterization of events that led up to an interference scenario and/or performing interference source modeling to forecast future probabilities and/or conflicting events. The predictive practice is intertwined with the platform remaining proactive, identifying possible signals of interference. While identifying possible signals of interference is a combination of the platform's predictive and proactive capabilities, the platform also remains proactive in performing wireless location characterization for both pre- and post-event scenarios. In addition, the platform's proactive capabilities include, but are not limited to, identifying all possible sources of conflict based on prior events. Furthermore, the platform also focuses on preventative practices. These include, but are not limited to, maintaining a set of de-confliction rules, providing trigger warning notifications and/or early warning notifications, and/or maintaining compatibility with multiple government agencies, including corresponding government project management offices (PMOs) and any interfering sources. In one embodiment, the platform automatically establishes the set of de-confliction rules, where the set of de-confliction rules is operable for editing. In one embodiment, the platform is operable to autonomously edit the set of de-confliction rules. In another embodiment, the platform enables editing of the set of de-confliction rules via user input. Finally, the platform includes preservation components and/or functionalities. These include, but are not limited to, evidentiary storage, learning capabilities, and modeling functionality. Each of these four core practices is interconnected within the platform, enabling dynamic spectrum utilization and sharing.

Geolocation

Geolocation is an additional aspect relating to electromagnetic (e.g., RF) analysis of an environment. The primary functions of the electromagnetic analysis of the environment include, but are not limited to, detection, classification, identification, learning, and/or geolocation. Additionally, the electromagnetic analysis is operable to output environmental awareness data. The system includes a geolocation engine operable to use both passive and/or active methods of radio geolocation. In general, radio geolocation refers to determining the geographic location of man-made emitter sources propagating radio (electromagnetic) waves as they impinge upon a man-made geo-locator, or receiver. Passive radio geolocation requires no transmission of signals by a geo-locator, whereas active radio geolocation involves a geo-locator transmitting signals that interact with an emitter source. Passive methods of geolocation include, but are not limited to, single directional beam antenna response, multidirectional beam antenna response (Amplitude Ratio), multi-antenna element response (Array Processing), line of bearing (LOB)-to-position solutions, and/or general optimization. Multi-antenna element response methods include, but are not limited to, phase interferometry, beamforming, conventional array manifold processing approaches, and/or high-resolution array manifold processing approaches using the signal subspace.
While these passive methods primarily apply to approaches for Direction Finding (DF) as spatial filtering, passive methods that apply to approaches other than DF as spatial filtering are operable for use by the system. DF refers to the process of estimating the direction of arrival of propagating emitter signals as they impinge on a receiver. Passive methods further include DF approaches based on general optimization including, but not limited to, direct position determination (DPD), convex programming, and/or distributed swarm approaches. In addition to the previously mentioned passive approaches, the system is operable to apply approaches based on ranging observations including, but not limited to, receiver signal strength indicator (RSSI), time of arrival (TOA), and/or time difference of arrival (TDOA) methods. RSSI approaches relate to the generation of observable data and/or location estimation. TOA and/or TDOA approaches relate to generating observable data from distributed multi-antenna systems and/or single antenna systems, and/or location estimation using non-linear optimization and/or constrained linear optimization. In a preferred embodiment, geolocation is performed using Angle of Arrival (AOA), Time Difference of Arrival (TDOA), Frequency Difference of Arrival (FDOA), and power distribution ratio measurements.

FIG. 23 is a table listing problems that are operable to be solved using the present invention, including serviceability, interference, monitoring and prediction, anomalous detection, planning, compliance, and/or spectrum sharing or leasing.

FIG. 24 illustrates a passive geolocation radio engine system view according to one embodiment of the present invention. First, a radio frequency (RF) front end receives at least one RF signal. The RF front end includes, but is not limited to, a set of sensors, a sensor subsystem, at least one analog-to-digital converter (ADC), and/or an ADC sensor processing subsystem. Once the at least one RF signal has been analyzed by the RF front end and/or the sensor subsystem, the at least one RF signal becomes at least one analyzed RF signal. The at least one analyzed RF signal is output to a measurement subsystem. The measurement subsystem is operable to generate radio location measurements. The radio location measurements are envelope-based and/or signal characteristic-based. The measurement subsystem is further operable to generate contextual measurements and/or conventional measurements relating to TOA, AOA, TDOA, receiver signal strength (RSS), RSSI, and/or FDOA. The generated conventional measurements are then analyzed using position algorithms, further enhancing measurement accuracy. Once the contextual measurements are generated and/or the conventional measurements are analyzed using position algorithms, the at least one analyzed RF signal is sent to a position engine subsystem. The position engine subsystem includes a position display. Each of the previously mentioned components, systems, and/or subsystems is operable for network communication.

The geolocation engine is operable to use a plurality of algorithms to determine a location of the at least one signal. The plurality of algorithms includes, but is not limited to, TDOA, FDOA, AOA, power level measurements, and/or graphical geolocation, which is described below. The geolocation engine is operable to autonomously decide which algorithm(s) to use to determine the location. FIG. 25 illustrates one embodiment of a method to autonomously select one or more of the plurality of algorithms.
Timing and carrier frequency offset corrections are performed on I,Q data and sent to the signal detection engine. The I,Q data (e.g., I,Q0, I,Q1, I,Q2, I,Q3) is sent to the signal detection engine. Information from the signal detection engine is sent to the blind classification engine. Information from the blind classification engine is sent to the demodulation bank. Error estimates are performed on envelope (Doppler) measurements from the signal detection engine, signal (time) domain measurements from the blind classification engine, and timing, protocol, and Doppler measurements from the demodulation bank. An evaluation of fidelity is approximately equal to an SNR of the envelope measurements (λ1), signal measurements (λ2), and protocol measurements (λ3). Error analyses for AOA, TDOA, the correlation ambiguity function (CAF) for graphical geolocation, FDOA, and power ratio are used in the evaluation of fidelity. C_t is calculated and minimized over all methods to select the at least one geolocation method, where C_t is the cost function to be minimized and t denotes a time block used to calculate the geolocation solution.

In one embodiment, the geolocation engine uses graphical geolocation techniques. An area is pictorially represented in a grid. The resolution of the grid determines a position in space. The system is operable to detect the at least one signal in the space and determine a location of the at least one signal using the graphical geolocation techniques. In one embodiment, outputs (e.g., location) of a non-linear equation are used to determine possible inputs (e.g., power measurements). The possible outputs are placed on a two-dimensional map. Inputs are then mapped to form a hypothesis of possible outputs. In one embodiment, the graphical geolocation techniques include an image comparison between the two-dimensional map of the possible outputs and the signal data. In another embodiment, the graphical geolocation techniques further include topology (e.g., mountains, valleys, buildings, etc.) to create a three-dimensional map of the possible outputs. The graphical geolocation techniques in this embodiment include an image comparison between the three-dimensional map of the possible outputs and the signal data.

The geolocation engine is operable to make use of spinning DF through the use of rotating directional antennas, estimating the direction of arrival of an emitter. The rotating directional antennas measure the received power as a function of direction, calculating a local maximum that is assumed to be the direction of the emitter. The geolocation engine is also operable to account for any transient signals that escape detection based on rotation speed. This is accomplished by using at least one broad-beam antenna, reducing the chance of the system missing a signal at the cost of reduced angular resolution. Practical considerations for these calculations include, but are not limited to, antenna rotation speed (ω), a rate of arrival of signals (γ), and/or a spatial sampling rate (FPS).

The system is further operable to use amplitude ratio methods for geolocation. These methods involve a multi-lobe amplitude comparison. This is performed using a set of fixed directional antennas pointing in different directions. A ratio corresponding to two responses is calculated, accounting for antenna patterns. This ratio is used to obtain a direction estimate. By not using moving parts and/or antennas, the system is more responsive to transient signals. However, this does require accurate antenna patterns, as these patterns also control system resolution.
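As a numerical illustration of the amplitude ratio method, the sketch below compares the observed power ratio of two fixed directional antennas against their calibrated pattern ratio over candidate angles; the cosine-lobe gain patterns are synthetic assumptions, not from the source:

```python
import numpy as np

def amplitude_ratio_bearing(p1_db, p2_db, pattern1_db, pattern2_db, angles_deg):
    """Estimate the direction whose predicted pattern ratio best matches
    the observed two-antenna power ratio (multi-lobe amplitude comparison)."""
    observed = p1_db - p2_db
    predicted = pattern1_db - pattern2_db     # ratio in dB per candidate angle
    return angles_deg[np.argmin(np.abs(predicted - observed))]

# Synthetic cosine-lobe gain patterns steered to +/-30 degrees (assumptions)
angles = np.linspace(-90.0, 90.0, 181)
pat1 = 20.0 * np.log10(np.maximum(np.cos(np.deg2rad(angles - 30.0)), 1e-3))
pat2 = 20.0 * np.log10(np.maximum(np.cos(np.deg2rad(angles + 30.0)), 1e-3))
print(amplitude_ratio_bearing(-40.0, -46.0, pat1, pat2, angles))   # ~30.0
```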
General antenna array processing assumes that a signal, s(t), remains coherent as it impinges on each antenna in the array. This allows the delay (τ_m) of the signal at the m-th sensor, relative to the signal at the origin of the coordinate system, to be expressed as:

$$\tau_m = -\frac{q_m\sin(\theta)+r_m\cos(\theta)}{c}$$

where c is the propagation speed of light and θ is the angle of the signal impinging on the sensor relative to the r-axis. Since the signal is assumed to have a Taylor series decomposition, the propagation delay τ_m is equivalent to the phase shift:

$$\varphi_m = -w\tau_m \;\Rightarrow\; e^{j\varphi_m}$$

Thus, the vector x(t) of antenna responses can be written as:

$$\begin{bmatrix} x_1(t) \\ \vdots \\ x_M(t) \end{bmatrix} = \begin{bmatrix} e^{j\varphi_1} \\ \vdots \\ e^{j\varphi_M} \end{bmatrix} e^{j(wt+\phi)}$$

where

$$\varphi_m(w,\theta) = \frac{[q_m\sin(\theta)+r_m\cos(\theta)]\,w}{c}$$

More generally, the sensors have different directionality and frequency characteristics, which are modeled by applying different gains and phases to the model above, where the gain and phase of the m-th sensor are denoted as g_m(w, θ) and ϕ_m(w, θ). Then, the above equation for x(t) can be expressed as:

$$\begin{bmatrix} x_1(t) \\ \vdots \\ x_M(t) \end{bmatrix} = \begin{bmatrix} g_1(w,\theta)e^{j\phi_1(w,\theta)}e^{j\varphi_1} \\ \vdots \\ g_M(w,\theta)e^{j\phi_M(w,\theta)}e^{j\varphi_M} \end{bmatrix} e^{j(wt+\phi)} = a(w,\theta)\,e^{j(wt+\phi)}$$

where a(w, θ) is known as the array response vector. The collection of all array response vectors for all angles θ and all frequencies w is known as the array manifold (i.e., a vector space). In general, if the array manifold is known and is free of ambiguities, then the k−1 angles (θ_1 . . . θ_{k−1}) of k−1 signals whose corresponding array response vectors are linearly independent are obtained by correlating x(t) with the array response vector of the appropriate angle. In one embodiment, ambiguities refer to rank deficiencies of the array manifold relative to k when the system is trying to resolve k−1 directions at the same frequency. The array manifold does not typically have a simple analytical form, and thus the array manifold is approximated using discrete angles for each frequency of interest.

In more general cases, where multiple sinusoidal signals arrive at the array with additive noise, x(t) can be expressed as:

$$x(t) = \sum_{i=1}^{I} a(w,\theta_i)\,s_i(t)+n(t), \qquad s_i(t)=e^{j(wt+\beta_i)}$$

$$x(t) = [a(w,\theta_1)\cdots a(w,\theta_I)]\,[s_1(t)\cdots s_I(t)]^T+n(t) = A(w,\Theta)\,s(t)+n(t)$$

In one embodiment, additive noise refers to thermal noise from sensors and associated electronics, background noise from the environment, and/or other man-made interference sources including, but not limited to, diffuse signals. Where one or more signals are non-sinusoidal (i.e., broadband), the equivalent can be expressed by its Taylor series over the relevant frequencies. However, when looking at a narrow frequency band of interest, the system is operable to assume that the array response vector, a(w, θ), is approximately constant with respect to w over all angles θ. This implies that the reciprocal of the time required for the signal to propagate across the array is much greater than the bandwidth of the signal. If the sensor characteristics do not vary significantly across the bandwidth, then the dependency on w can be dropped from the array response vector and/or matrix, resulting in:

$$x(t) = A(\Theta)\,s(t)+n(t)$$

For example, in an antenna array using a uniform linear array (ULA), a signal source, s(t) = e^{j(wt+ϕ)}, impinges on the ULA at angle θ.
Thus, if the received signal at the first sensor is x_1(t) = s(t), then the signal at sensor m is delayed by:

$$x_m(t) = e^{-jw\left(\frac{(m-1)d\sin(\theta)}{c}\right)}\,s(t)$$

In vector form, this is represented as:

$$x(t) = \begin{bmatrix} 1 \\ e^{-jw\left(\frac{d\sin(\theta)}{c}\right)} \\ \vdots \\ e^{-jw\left(\frac{(M-1)d\sin(\theta)}{c}\right)} \end{bmatrix} s(t) = a(w,\theta)\,s(t)$$

If there are I source signals received by the ULA, then:

$$x(t) = A(\Theta)\,s(t)+n(t)$$

where

$$A(\Theta) = \begin{bmatrix} 1 & \cdots & 1 \\ e^{-jw\left(\frac{d\sin(\theta_1)}{c}\right)} & \cdots & e^{-jw\left(\frac{d\sin(\theta_I)}{c}\right)} \\ \vdots & & \vdots \\ e^{-jw\left(\frac{(M-1)d\sin(\theta_1)}{c}\right)} & \cdots & e^{-jw\left(\frac{(M-1)d\sin(\theta_I)}{c}\right)} \end{bmatrix}$$

x(t) is the received signal vector (M by 1), s(t) = [s_1(t) . . . s_I(t)]^T is the source signal vector (I by 1), n(t) is the noise signal vector (M by 1), and A(Θ) = [a(w, θ_1), . . . , a(w, θ_I)] is an (M by I) matrix => the array manifold. In this example, typical assumptions include, but are not limited to, the following: the signal sources are independent and narrowband in relation to the dimensions of the ULA (d, Md) and around the same max frequency; all antenna elements are the same; d < λ_max/2 to avoid rank ambiguities; the system can resolve M−1 direction angles without rank ambiguity; and/or the noises are uncorrelated.

In another example, array processing is performed for DF using beamforming. Given knowledge of the array manifold, the array can be maneuvered by taking linear combinations of each element response. This is similar to how a fixed, single antenna can be maneuvered mechanically. Thus, y(t) = w^H x(t), where w is interpreted as a Finite Impulse Response (FIR) filter in the spatial domain. To calculate the power of y(t), assuming a discretization to N samples, the system uses the following:

$$P_y = \langle|y(n)|^2\rangle_N = w^H\langle x(n)x(n)^H\rangle_N\,w = w^H R_{xx}\,w$$

where ⟨·⟩_N denotes the time averaging over N sample times and R_xx is the measured spatial autocorrelation matrix of the received array output data. In another example, array processing is performed for DF using beamforming, where

$$R_{xx} = \langle x(n)x^H(n)\rangle_N \quad\text{and}\quad R_{xx} = \left\langle\big(A(\Theta)s(n)+n(n)\big)\big(A(\Theta)s(n)+n(n)\big)^H\right\rangle_N$$

In one embodiment, the system assumes a source signal is uncorrelated to a noise source, resulting in:

$$R_{xx} = A(\Theta)R_{ss}A^H(\Theta)+R_{nn}$$

Thus, the power of the linear combination and/or spatial filtering of the array vector response elements is expressed as:

$$P_y = w^H\big(A(\Theta)R_{ss}A^H(\Theta)+R_{nn}\big)w$$

In examples where array processing for DF is performed using beamforming, the power for a single unit-magnitude sinusoid impinging on the array at angle θ_o with no noise becomes:

$$P_y(\theta) = w^H a(\theta_o)a^H(\theta_o)w = |w^H a(\theta_o)|^2$$

Accounting for the Cauchy-Schwarz inequality |w^H a(θ_o)|² ≤ ‖w‖²‖a(θ_o)‖², for all vectors w, with equality if, and only if, w is proportional to a(θ_o), the spatial filter that matches the array response at the direction of arrival, θ_o, produces a maximum value for P_y(θ). In addition, DF can be accomplished by searching over all possible angles to maximize P_y(θ), and/or searching over all filters w that are proportional to some array response vector corresponding to an impinging angle θ, a(θ), where max{P_y(θ)} over all angles => filters w = a(θ). When this method is used, the system behaves like a spinning DF system where the resulting beam changes for each search angle. Advantageously, this method encounters no blind spots due to the rotation and/or rate of arrival of the source signal. Moreover, when the system is using beamforming techniques and/or processes, the system is operable to search for multiple directions of arrival of different sources, with resolution depending on the width of the beam formed and the height of the sidelobes. For example, a local maximum of the average filter output power is operable to be shifted away from the true direction of arrival (DOA) of a weak signal by a strong source of interference in the vicinity of one of the sidelobes. Alternatively, two closely spaced signals may result in only one peak, or in two peaks in the wrong locations.
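The conventional beam scan described above, with w = a(θ), can be sketched for a ULA as follows; the 8-element half-wavelength array at 1 GHz and the single noiseless source are illustrative assumptions:

```python
import numpy as np

C = 3e8  # propagation speed of light (m/s)

def ula_response(theta_rad, m_elements, d, w):
    """a(w, theta) for a uniform linear array: element m is delayed by
    (m-1) * d * sin(theta) / c, per the equations above."""
    m = np.arange(m_elements)
    return np.exp(-1j * w * m * d * np.sin(theta_rad) / C)

def beam_scan(rxx, m_elements, d, w, scan_deg):
    """Conventional beamformer power P_y(theta) = a^H R_xx a for w = a(theta)."""
    return np.array([
        np.real(a.conj() @ rxx @ a)
        for a in (ula_response(np.deg2rad(t), m_elements, d, w) for t in scan_deg)
    ])

M, d, w = 8, 0.15, 2 * np.pi * 1e9             # half-wavelength spacing at 1 GHz
a_src = ula_response(np.deg2rad(20.0), M, d, w)
rxx = np.outer(a_src, a_src.conj())            # rank-one R_xx, single source
scan = np.linspace(-90.0, 90.0, 361)
print(scan[np.argmax(beam_scan(rxx, M, d, w, scan))])   # 20.0
```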
In yet another example, array processing for DF is performed using a Capon Minimum Variance Distortionless Response (MVDR) approach. This is necessary in cases where multiple source signals are present. The system obtains more accurate estimates of the DOA when forming the array beam by using degrees of freedom to form a beam in the "look" direction and any remaining degrees of freedom to form "nulls" in the remaining directions. The result is a simultaneous beam- and null-forming filter. Forming nulls in other directions is accomplished by minimizing P_y(θ) while constraining a beam in the look direction. This avoids the trivial solution of w = 0. Thus:

$$\min_{w}\ P_y(\theta) \quad \text{subject to} \quad w^H a(\theta) = 1$$

The resulting filter, w_c(θ), is shown as:

$$w_c(\theta) = \big(a^H(\theta)R_{xx}^{-1}a(\theta)\big)^{-1}R_{xx}^{-1}a(\theta)$$

Using this filter, the filter output power is expressed as:

$$P_{y_c}(\theta) = w_c^H(\theta)R_{xx}w_c(\theta) = \big(a^H(\theta)R_{xx}^{-1}a(\theta)\big)^{-1}$$

Therefore, the Capon approach searches over all DOA angles for the angle at which the above power is maximized, using:

$$\max_{\theta}\ \big(a^H(\theta)R_{xx}^{-1}a(\theta)\big)^{-1}$$

A Capon approach is able to discern multiple signal sources because, while looking at signals impinging at 0 degrees, the system attenuates a signal arriving at fifteen degrees by the formed beam. A Capon approach is one method for estimating an angular decomposition of the average power received by the array, sometimes referred to as a spatial spectrum of the array. The Capon approach is similar to approaches for spectrum estimation and/or modeling of a linear system. The system is further operable to employ additional resolution techniques including, but not limited to, Multiple Signal Classification (MUSIC), Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT), and/or any other high-resolution DOA algorithm. These resolution techniques enable the system to find DOAs for multiple sources simultaneously. In addition, these resolution techniques generate high spatial resolution when compared with more traditional methods. In one embodiment, these techniques apply only when determining DOAs for narrowband signal sources. For example, when using MUSIC-based methods, the system computes an N×N correlation matrix using:

$$R_x = E\{x(t)x^H(t)\} = AR_sA^H+\sigma_0^2 I, \quad\text{where } R_s = E\{s(t)s^H(t)\} = \operatorname{diag}\{\sigma_1^2,\ldots,\sigma_I^2\}$$

If the signal sources are correlated so that R_s is not diagonal, geolocation will still work while R_s has full rank. However, if the signal sources are correlated such that R_s is rank deficient, the system will then deploy spatial smoothing. This is important, as R_s defines the dimension of the signal subspace. For N > I, the matrix AR_sA^H is singular, where:

$$\det[AR_sA^H] = \det[R_x-\sigma_0^2 I] = 0$$

This implies that σ_0² is an eigenvalue of R_x. Since the dimension of the null space of AR_sA^H is N−I, there are N−I such eigenvalues σ_0² of R_x. In addition, since both R_x and AR_sA^H are non-negative definite, there are I other eigenvalues σ_i² such that σ_i² > σ_0² > 0.
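The Capon spectrum derived above resolves the two closely spaced sources of the earlier beamforming example. The sketch below reuses the ULA response of the previous fragment and adds a small diagonal noise loading; all numeric values are illustrative assumptions:

```python
import numpy as np

C = 3e8
M, d, w = 8, 0.15, 2 * np.pi * 1e9   # 8-element half-wavelength ULA at 1 GHz

def a_of(deg):
    """ULA array response vector for an impinging angle in degrees."""
    m = np.arange(M)
    return np.exp(-1j * w * m * d * np.sin(np.deg2rad(deg)) / C)

def capon_spectrum(rxx, scan_deg):
    """P_yc(theta) = (a^H(theta) R_xx^-1 a(theta))^-1 over candidate angles."""
    rinv = np.linalg.inv(rxx)
    return np.array([1.0 / np.real(a_of(t).conj() @ rinv @ a_of(t))
                     for t in scan_deg])

# Two unit-power sources at 0 and 15 degrees plus small uncorrelated noise
rxx = (np.outer(a_of(0.0), a_of(0.0).conj())
       + np.outer(a_of(15.0), a_of(15.0).conj())
       + 0.01 * np.eye(M))
scan = np.linspace(-90.0, 90.0, 361)
spectrum = capon_spectrum(rxx, scan)
print(sorted(scan[np.argsort(spectrum)[-2:]]))   # peaks near [0.0, 15.0]
```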
In a preferred embodiment, geolocation is performed using Angle of Arrival (AOA), Time Difference of Arrival (TDOA), Frequency Difference of Arrival (FDOA), and power distribution ratio measurements. Advantageously, using all four measurements to determine geolocation results in a more accurate determination of location. In many instances, only one type of geolocation measurement is available, which forces the use of one particular approach (e.g., AOA, TDOA, FDOA), but in many cases geolocation measurements are operable to be derived from the behavior of the signals, thus allowing for the use of multiple measurements (e.g., all four measurements) that are combined to obtain a more robust geolocation solution. This is especially important when most of the measurements associated with each approach are extremely noisy.

Learning Engine

In addition, the system includes a learning engine operable to incorporate a plurality of learning techniques including, but not limited to, machine learning (ML), artificial intelligence (AI), deep learning (DL), neural networks (NNs), artificial neural networks (ANNs), support vector machines (SVMs), Markov decision processes (MDPs), and/or natural language processing (NLP). The system is operable to use any of the aforementioned learning techniques alone or in combination. Advantageously, the system is operable for autonomous operation using the learning engine. In addition, the system is operable to continuously refine itself, resulting in increased accuracy relating to data collection, analysis, modeling, prediction, measurements, and/or output.

The learning engine is further operable to analyze and/or compute a conditional probability set. The conditional probability set reflects the optimal outcome for a specific scenario, and the specific scenario is represented by a data model used by the learning engine. This enables the system, when given a set of data inputs, to predict an outcome using a data model, where the predicted outcome represents the outcome with the least probability of error and/or of a false alarm. Without a learning engine, prior art systems are still operable to create parametric models for predicting various outcomes. However, these prior art systems are unable to capture all inputs and/or outputs, thereby creating inaccurate data models relating to a specific set of input data. This results in a system that continuously produces the same results when given completely different data sets. In contrast, the present invention utilizes a learning engine with a variety of fast and/or efficient computational methods that simultaneously calculate the conditional probabilities that are most directly related to the outcomes predicted by the system. These computational methods are performed in real-time or near-real-time. Additionally, the system employs control theory concepts and methods within the learning engine. This enables the system to determine if every data set processed and/or analyzed by the system represents a sufficient statistical data set.

Moreover, the learning engine includes a learning engine software development kit (SDK), enabling the system to prepare and/or manage the lifecycle of datasets used in any system learning application. Advantageously, the learning engine SDK is operable to manage system resources relating to monitoring, logging, and/or organizing any learning aspects of the system. This enables the system to train and/or run models locally and/or remotely using automated ML, AI, DL, and/or NNs. The models are operable for configuration, where the system is operable to modify model configuration parameters and/or training data sets. By operating autonomously, the system is operable to iterate through algorithms and/or hyperparameter settings, creating the most accurate and/or efficient model for running predictive system applications.
Furthermore, the learning engine SDK is operable to deploy web services in order to convert any training models into services that can run in any application and/or environment. Thus, the system is operable to function autonomously and/or continuously, refining every predictive aspect of the system as the system acquires more data. While this functionality is controlled by the learning engine, the system is not limited to employing these learning techniques and/or methods in only the learning engine component, but rather throughout the entire system. This includes RF fingerprinting, RF spectrum awareness, autonomous RF system configuration modification, and/or autonomous system operations and maintenance. The learning engine uses a combination of physical models and convolutional neural network algorithms to compute a set of possible conditional probabilities depicting the set of all possible outputs based on input measurements that provide the most accurate prediction of a solution, wherein accurate means minimizing both the false alarm probability of the solution and the probability of error for the prediction of the solution.

FIG. 26 is a diagram describing three pillars of a customer mission solution. The three pillars include environmental awareness, policy management, and spectrum management. The system obtains environmental awareness through a plurality of sensors. The plurality of sensors preferably captures real-time information about the electromagnetic environment. Additionally, the system includes machine learning and/or predictive algorithms to enhance environmental understanding and support resource scheduling. Policy management is flexible, adaptable, and dynamic, and preferably takes into account real-time information on device configurations and the electromagnetic environment. The system is preferably operable to manage heterogeneous networks of devices and applications. Spectrum management preferably makes use of advanced device capabilities including, but not limited to, directionality, waveforms, hopping, and/or aggregation.

FIG. 27 is a block diagram of one example of a spectrum management tool. The spectrum management tool includes environment information obtained from at least one monitoring sensor and at least one sensor processor. The spectrum management tool further includes a policy manager, a reasoner, an optimizer, objectives, device information, and/or a device manager. The objectives include information from a mission information database. The policy manager obtains information from a policy information database. In another embodiment, the policy manager uses information (e.g., from the policy information database, measurements of the electromagnetic environment) to create policies and/or rules for conditional allowance of resources per signal using the spectrum. These policies and/or rules are then passed to the reasoner to determine optimization conditional constraints to be used by the optimizer, with the goal of optimizing the utilization of the spectrum (e.g., based on mission information and objectives) by all signals present according to the policies and/or rules. At the output of the optimizer, resources (bandwidth, power, frequency, modulation, spatial azimuth and elevation focus for transmitter/receiver (TX/RX) sources) as well as interference levels per application are recommended for each signal source. After that, the loop repeats: newly collected environmental awareness data is fed back to the policy manager and the reasoner.
FIG.28is a block diagram of one embodiment of a resource brokerage application. As previously described, the resource brokerage application is preferably operable to use processed data from the at least one monitoring sensor and/or additional information to determine environmental awareness (e.g., environmental situational awareness). The environmental awareness and/or capabilities of a device and/or a resource are used to determine policies and/or reasoning to optimize the device and/or the resource. The resource brokerage application is operable to control the device and/or the resource. Additionally, the resource brokerage application is operable to control the at least one monitoring sensor.
Semantic Engine
The system further includes an automated semantic engine and/or translator as shown inFIG.29. The translator is operable to receive data input including, but not limited to, at least one use case, at least one objective, and/or at least one signal. In one embodiment, the at least one use case is a single signal use case. In another embodiment, the at least one use case is a multiple-signal use case. Once the translator receives data input, the translator uses natural language processing (NLP), and/or similar data translation processes and techniques, to convert the data input into actionable data for the automated semantic engine. By separating the data translation process from the automated semantic engine, the system is operable to provide more processing power once the data input is sent to the automated semantic engine, reducing the overall processing strain on the system. The automated semantic engine includes a rule component, a syntax component, a logic component, a quadrature (Q) component, and/or a conditional set component. In addition, the semantic engine is operable for network communication with a prior knowledge database, an analytics engine, and/or a monitoring and capture engine. Data is initially sent to the automated semantic engine via the translator. The automated semantic engine is operable to receive data from the translator in forms including, but not limited to, audio data, text data, video data, and/or image data. In one embodiment, the automated semantic engine is operable to receive a query from the translator. The logic component and/or the rule component are operable to establish a set of system rules and/or a set of system policies, where the set of system rules and/or the set of system policies is created using the prior knowledge database. Advantageously, the automated semantic engine is operable to run autonomously using any of the aforementioned learning and/or automation techniques. This enables the system to run continuously, without requiring user interaction and/or input, resulting in a system that is constantly learning and/or refining data inputs, creating more accurate predictions, models, and/or suggested actions. Moreover, the automated semantic engine enables the system to receive queries, searches, and/or any other type of search-related function using natural language, as opposed to requiring a user and/or customer to adapt to a particular computer language. This functionality is performed using a semantic search via natural language processing (NLP). The semantic search combines traditional word searches with logical relationships and concepts. In one embodiment, the automated semantic engine uses Latent Semantic Indexing (LSI) within the automated semantic engine.
LSI organizes existing information within the system into structures that support high-order associations of words with text objects. These structures reflect the associative patterns found within data, permitting data retrieval based on latent semantic context in existing system data. Furthermore, LSI is operable to account for noise associated with any set of input data. This is done through LSI's ability to increase recall, a known constraint of traditional Boolean queries and vector space models. LSI uses automated categorization, assigning a set of input data to one or more predefined data categories contained within the prior knowledge database, where the categories are based on a conceptual similarity between the set of input data and the content of the prior knowledge database. Furthermore, LSI makes use of dynamic clustering, grouping the set of input data to data within the prior knowledge database using conceptual similarity without using example data to establish a conceptual basis for each cluster. In another embodiment, the automated semantic engine uses Latent Semantic Analysis (LSA) within the automated semantic engine. LSA functionalities include, but are not limited to, occurrence matrix creation, ranking, and/or derivation. Occurrence matrix creation involves using a term-document matrix describing the occurrences of terms in a set of data. Once the occurrence matrix is created, LSA uses ranking to determine the most accurate solution given the set of data. In one embodiment, low-rank approximation is used to rank data within the occurrence matrix. In another embodiment, the automated semantic engine uses semantic fingerprinting. Semantic fingerprinting converts a set of input data into a Boolean vector and creates a semantic map using the Boolean vector. The semantic map is operable for use in any context and provides an indication of every data match for the set of input data. This enables the automated semantic engine to convert any set of input data into a semantic fingerprint, where semantic fingerprints are operable to combine with additional semantic fingerprints, providing an accurate solution given the set of input data. Semantic fingerprint functionality further includes, but is not limited to, risk analysis, document search, classifier indication, and/or classification. In yet another embodiment, the automated semantic engine uses semantic hashing. By using semantic hashing, the automated semantic engine maps a set of input data to memory addresses using a neural network, where semantically similar sets of data inputs are located at nearby addresses. The automated semantic engine is operable to create a graphical representation of the semantic hashing process using counting vectors from each set of data inputs. Thus, sets of data inputs similar to a target query can be found by accessing all of the memory addresses that differ by only a few bits from the address of the target query. This method extends the efficiency of hash-coding to approximate matching and is much faster than locality-sensitive hashing. In one embodiment, the automated semantic engine is operable to create a semantic map. The semantic map places the target data at its center, while analyzing related data and/or data with similar characteristics to the target data. This adds a secondary layer of analysis to the automated semantic engine, providing secondary context for the target data using similar and/or alternative solutions based on the target data.
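The occurrence-matrix and low-rank ranking steps attributed to LSA above can be made concrete in a few lines of numpy. The toy corpus, query, and rank choice are illustrative assumptions; the decomposition the system actually uses is not detailed in the text.

```python
# Sketch of LSA: build a term-document occurrence matrix, compute a
# low-rank approximation via SVD, and rank documents against a query in
# the reduced latent space. Corpus and query are toy examples.
import numpy as np

docs = ["spectrum interference measurement",
        "interference alarm spectrum",
        "antenna hardware shipping"]
terms = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(t) for d in docs] for t in terms], float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                    # rank of the approximation
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents in latent space

query = "spectrum interference".split()
q = np.array([[1.0 if t in query else 0.0] for t in terms])
q_vec = (U[:, :k].T @ q).ravel()         # project query into latent space

# Cosine similarity ranks documents by latent semantic closeness.
sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1)
                           * np.linalg.norm(q_vec) + 1e-12)
print(sorted(zip(sims, docs), reverse=True))
```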
The system is operable to create a visualization of the semantic map. Traditional semantic network-based search systems suffer from numerous performance issues due to the scale of an expansive semantic network. In order for the semantic functionality to be useful in locating accurate results, a system is required to store a high volume of data. In addition, such a vast network creates difficulties in processing many possible solutions to a given problem. The system of the present invention solves these limitations through the various learning techniques and/or processes incorporated within the system. When combined with the ability to function autonomously, the system is operable to process a greater amount of data than systems making use of only traditional semantic approaches. By incorporating the automated semantic engine within the system, the system has a greater understanding of potential solutions, given a provided set of data. Semantic engines are regularly associated with semantic searches, i.e., searches with meaning and an understanding of the overall intent of the query; by understanding the searcher's intent and the contextual meaning of the search, they generate more relevant results. Semantic engines of the present invention, along with a spectrum specific ontology (vocabulary and operational domain knowledge), help automate spectrum utilization decisions based on dynamic observations and extracted environmental awareness, and create and extend spectrum management knowledge for multiple applications.
Tip and Cue Processes
The system uses a set of “tip and cue” processes, generally referring to detecting, processing, and/or providing alerts by creating actionable data from acquired RF environmental awareness information in conjunction with a specific rule set, further enhancing the optimization capabilities of the system. The specific rule set is translated into optimization objectives, including constraints associated with signal characteristics. The tip and cue processes of the present invention produce actionable data to solve a plurality of user issues and/or objectives. Tip and cue processes are performed by an awareness system. The awareness system is operable to receive input data including, but not limited to, a set of use cases, at least one objective, and/or a rule set. The input data is then analyzed by a translator component, where the translator component normalizes the input data. Once normalized, the input data is sent to a semantic engine. The semantic engine is necessary for analyzing unstructured data inputs: it understands the data inputs and applies contextual analysis as well, resulting in a more accurate output. This accuracy is primarily accomplished using the previously mentioned learning techniques and/or technologies. The semantic engine uses the input data to create a set of updated rules, a syntax, a logic component, a conditional data set, and/or Quadrature (Q) data. The semantic engine is operable for network communication with components including, but not limited to, a prior knowledge database, an analytics engine, and/or a monitoring and capture engine. The monitoring and capture engine operates with an RF environment and includes a customer application programming interface (API), a radio server, and/or a coverage management component. The customer API and the radio server are operable to output a set of in-phase and quadrature-phase (I/Q) data using a Fast Fourier Transform (FFT).
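To make the I/Q output concrete, the sketch below synthesizes complex (I + jQ) samples of a single tone and reads the amplitude and phase of the FFT peak. The sample rate, tone frequency, and phase offset are invented values for illustration.

```python
# Complex I/Q samples of a 100 Hz tone with a 0.25 rad phase offset; the
# FFT bin magnitudes and angles expose the amplitude and phase content.
import numpy as np

fs = 1024.0                           # sample rate in Hz (illustrative)
t = np.arange(1024) / fs
iq = np.exp(1j * (2 * np.pi * 100.0 * t + 0.25))

spectrum = np.fft.fft(iq) / iq.size   # normalize so a unit tone reads 1.0
freqs = np.fft.fftfreq(iq.size, d=1.0 / fs)

peak = int(np.argmax(np.abs(spectrum)))
print(f"peak at {freqs[peak]:.1f} Hz, "
      f"amplitude {abs(spectrum[peak]):.2f}, "
      f"phase {np.angle(spectrum[peak]):.2f} rad")  # ~100 Hz, 1.00, 0.25
```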
The set of I/Q data demonstrates the changes in amplitude and phase in a sine wave. The monitoring and capture engine also serves as an optimization point for the system. The awareness engine operates as both a platform optimization unit and a client optimization unit. The awareness engine is operable to perform functions including, but not limited to, detection, classification, demodulation, decoding, locating, and/or signaling alarms. The detection and/or classification functions assist with incoming RF data acclimation and further include a supervised learning component, where the supervised learning component is operable to make use of any of the aforementioned learning techniques and/or technologies. The demodulation and/or decode functionalities are operable to access RF data from WIFI, Land Mobile Radio (LMR), Long Term Evolution (LTE) networks, and/or Unmanned Aircraft Systems (UAS). The location component of the awareness engine is operable to apply location techniques including, but not limited to, DF, geolocation, and/or Internet Protocol (IP) based location. The awareness engine is operable to signal alarms using FASD and/or masks. In one embodiment, the masks are dynamic masks. The analytics engine is operable to perform functions including, but not limited to, data qualification, data morphing, and/or data computing. The awareness engine, analytics engine, and the semantic engine are all operable for network communication with the prior knowledge database. This enables each of the previously mentioned engines to compare input and/or output with data already processed and analyzed by the system. The various engines present within the Tip & Cue process further optimize client output in the form of dynamic spectrum utilization and/or allocation. The system uses the Tip & Cue process to provide actionable information and/or actionable knowledge to be utilized by at least one application to mitigate problems of the at least one application and/or to optimize services or goals of the at least one application. In a preferred embodiment, each customer has a service level agreement (SLA) with the system manager that specifies usage of the spectrum. The system manager is operable to act as an intermediary between a first customer and a second customer in conflicts regarding the spectrum. If signals of the first customer interfere with signals of the second customer in violation of one or more SLAs, the system is operable to provide an alert regarding the violation. Data regarding the violation is stored in at least one database within the system, which facilitates resolution of the violation. The control plane is operable to directly communicate with the first customer (i.e., the customer in violation of the SLA) and/or with at least one base station to modify parameters to resolve the violation. In one embodiment, the system is used to protect at least one critical asset. Each of the at least one critical asset is within a protection area. For example, a first critical asset is within a first protection area, a second critical asset is within a second protection area, etc. In one embodiment, the protection area is defined by sensor coverage from the at least one monitoring sensor. In other embodiments, the protection area is defined by sensor coverage from the at least one monitoring sensor, a geofence, and/or GPS coordinates.
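The protection areas described above lend themselves to a simple containment check. The hypothetical sketch below raises an alarm when a detection inside a geofenced area falls outside that area's allowed bands; all coordinates, radii, and band edges are invented.

```python
# Toy protection-area monitor: alarm on any detection that is inside a
# protection area but outside that area's allowed spectrum mask.
from math import hypot

PROTECTION_AREAS = [
    {"name": "site-1", "center_km": (0.0, 0.0), "radius_km": 5.0,
     "allowed_mhz": [(700.0, 716.0), (2100.0, 2170.0)]},
]

def check_detection(x_km, y_km, freq_mhz):
    alarms = []
    for area in PROTECTION_AREAS:
        cx, cy = area["center_km"]
        inside = hypot(x_km - cx, y_km - cy) <= area["radius_km"]
        allowed = any(lo <= freq_mhz <= hi
                      for lo, hi in area["allowed_mhz"])
        if inside and not allowed:
            alarms.append(f"ALARM {area['name']}: "
                          f"{freq_mhz} MHz outside allowed use")
    return alarms

print(check_detection(1.0, 2.0, 915.0))  # inside the area, disallowed band
```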
The system is operable to detect at least one signal within the protection area and send an alarm for the at least one signal when outside of allowed spectrum use within the protection area. The system is further operable to determine what information is necessary to provide actionable information. For example, sensor processing requires a large amount of power. Embedding only the sensors required to provide sufficient variables for customer goals reduces computational and/or power requirements.
FIGS.30-32are flow diagrams illustrating the process of obtaining actionable data and using knowledge decision gates.FIG.30illustrates a flow diagram of a method to obtain actionable data based on customer goals3000. A goal is rephrased as a question in Step3002. Information required to answer the question is identified in Step3004. Next, quality, quantity, temporal, and/or spatial attributes are identified for each piece of information in Step3006. In a preferred embodiment, all four attributes (i.e., quality, quantity, temporal, and spatial) are identified in Step3006. The quality, quantity, temporal, and/or spatial attributes are ranked by importance in Step3008. For each information and attribute pair, corresponding physical layer information from the wireless environment is associated in Step3010. All information obtained in Steps3004-3010is operable to be transmitted to the semantic engine. Further, wireless information is associated with a most statistically relevant combination of extracted measurements in at least one dimension in Step3012. The at least one dimension includes, but is not limited to, time, frequency, signal space and/or signal characteristics, spatial, and/or application goals and/or customer impact. In a preferred embodiment, the at least one dimension includes time, frequency, signal space and/or signal characteristics, spatial, and application goals and/or customer impact. The RF awareness measurements are then qualified in Step3014and actionable data is provided in Step3016based on the relationship established in Steps3002-3012. Actionable data efficiency is qualified in Step3018based on Step3014. All actionable data and its statistical significance are provided in Step3020.
FIG.31illustrates a flow diagram of a method of implementation of actionable data and knowledge decision gates from total signal flow3100. A customer goal is rephrased as a question in Step3102. The customer goal is provided to the semantic engine having a proper dictionary in Step3104(as shown in Steps3002-3012ofFIG.30). Constraints with statistical relevance from Step3104and extracted electromagnetic (e.g., RF) awareness information from sensors in Step3106are used in an optimization cost function in Step3108(as shown in Step3014ofFIG.30). Results from the optimization cost function in Step3108are provided to an optimization engine in Step3110(as shown in Steps3016-3020ofFIG.30) to provide actionable data and its statistical relevance in Step3112.
FIG.32illustrates a flow diagram of a method to identify knowledge decision gates based on operational knowledge3200. Customer operational description of utilization of actionable data is provided in Step3202. The customer operational description of utilization of actionable data from Step3202is used to identify a common state of other information used to express the customer operational description and/or required to make decisions in Step3204.
Further, the customer operational description of utilization of actionable data from Step3202is used to provide parameterization of customer operational utilization of actionable data in Step3206. The parameterization of customer operational utilization of actionable data from Step3206is used to identify conditions and create a conditional tree in Step3208. In one embodiment, the information from Step3204is used to identify the conditions and create the conditional tree in Step3208. Information from Steps3206-3208is operable to be transmitted to the semantic engine. Actionable data is provided in Step3210and used to compute statistical properties of the actionable data as it changes over time in Step3212. Information from Steps3208and3212is used by a decision engine to travel a decision tree to identify decision gates in Step3214. The identified decision gates from Step3214are provided along with the information in Step3204to allow the customer to make decisions in Step3216.
FIG.33illustrates an overview of one example of information used to provide knowledge. Information including, but not limited to, network information (e.g., existing site locations, existing site configurations), real estate information (e.g., candidate site locations), signal data (e.g., LTE demodulation), signal sites, site issues, crowdsourced information (e.g., geographic traffic distribution), and/or geographic information services (GIS) is used to perform propagation modeling. The propagation models are used to evaluate candidate results and expected impact from any changes (e.g., addition of macrosites or towers). In one embodiment, additional analysis is performed on the candidate results and/or the expected impact.
Example One
In one example, the system is used by a tower company to evaluate if a carrier's performance can be improved by placing at least one additional macrosite on at least one additional tower. If the evaluation shows that the carrier's performance can be improved, it supports a pitch from the tower company to place the at least one macrosite on the at least one additional tower, which would generate revenue for the tower company.
FIG.34is a map showing locations of three macrosites (“1” (green), “2” (orange), and “3” (purple)), 3 SigBASE units (orange diamonds), and a plurality of locations evaluated for alternate or additional site deployment (green circles).
FIG.35is a graph of distribution of users by average downlink Physical Resource Block (PRB) allocation. Real-time monitoring shows downlink resources allocated to each user. Allocations occur many times per second. A significant concentration of users on 739 MHz is allocated resources for voice service. Most users on 2165 MHz are allocated resources common for high-speed data.
FIG.36illustrates rate of overutilization events and degree of overutilization. Real-time monitoring shows the percentage of downlink resources utilized when utilization exceeded 50%. Utilization statistics are generated per second as configured. The rate at which a sector utilization exceeds 50% (overutilized) is presented by hour. The average utilization level when overutilization occurs describes the severity.
FIG.37Ais a sector coverage map for the three macrosites (“1” (green), “2” (orange), and “3” (purple)).
FIG.37Billustrates signal strength for the sector shown inFIG.37A. This figure displays areas of poor coverage.
FIG.37Cillustrates subscriber density for the sector shown inFIG.37A.
In one embodiment, data from external sources is used to determine subscriber distribution and density. This figure displays areas of high subscriber demand.
FIG.37Dillustrates carrier-to-interference ratio for the sector shown inFIG.37A. This figure displays areas of poor quality.
FIG.38Aillustrates the baseline scenario shown inFIG.34.FIG.38Bis a map showing locations of the three original macrosites (“1” (green), “2” (orange), and “3” (purple)) and two additional macrosites (“4” (dark blue) and “5” (light blue)).
FIG.39illustrates signal strength of the baseline scenario fromFIG.38Aon the left and the scenario with two additional macrosites fromFIG.38Bon the right. The addition of a 2-sector eNodeB to a tower increases expected coverage by 3 km2as shown in Table 2 below. A total service area for the baseline is 9.89 km2and the total service area increases to 13.15 km2with the two additional macrosites. A total area with a carrier-to-interference ratio less than 5 dB decreases from 1.10 km2for the baseline to 0.38 km2with the two additional macrosites. A total area with a carrier-to-interference ratio greater than 5 dB increases from 8.79 km2for the baseline to 12.77 km2with the two additional macrosites. Traffic served without harmful interference increases from 16.73 Erlangs for the baseline to 25.23 Erlangs with the two additional macrosites. Additionally, an increase in traffic served of 40% is expected. Further utilization reduction of 30% is expected for pre-existing sectors. Areas of poor coverage are also reduced.

TABLE 2
Metric                                                 Baseline    2-sector site added
Total service area, km2                                9.89        13.15
Total area with C/I < 5 dB, km2                        1.10        0.38
Total area with C/I > 5 dB, km2                        8.79        12.77
Traffic served without harmful interference, Erlangs   16.73       25.23

FIG.40Aillustrates carrier-to-interference ratio of the baseline scenario fromFIG.38A.FIG.40Billustrates carrier-to-interference ratio of the scenario with two additional macrosites. The additional two macrosites reduce areas with poor carrier-to-interference.
Example Two
In a second example, the system is also used by a tower company to evaluate if a carrier's performance can be improved by placing at least one additional macrosite on at least one additional tower. If the evaluation shows that the carrier's performance can be improved, it supports a pitch from the tower company to place the at least one macrosite on the at least one additional tower, which would generate revenue for the tower company.
FIG.41illustrates a baseline scenario for the second example on the left and a map showing locations of the original macrosites from the baseline scenario with three additional proposed macrosites on the right.
FIG.42illustrates signal strength of the baseline scenario fromFIG.41on the left and the scenario with three additional proposed macrosites fromFIG.41on the right. The addition of a 3-sector eNodeB to a tower increases expected coverage by 0.5 km2as shown in Table 3 below. A total service area for the baseline is 21.3 km2and the total service area increases to 21.8 km2with the three additional macrosites. A total area with a carrier-to-interference ratio less than 5 dB increases from 3.0 km2for the baseline to 3.1 km2with the three additional macrosites. A total area with a carrier-to-interference ratio greater than 5 dB increases from 18.3 km2for the baseline to 18.7 km2with the three additional macrosites. Traffic served without harmful interference increases from 79.7 Erlangs for the baseline to 80.9 Erlangs with the three additional macrosites.
Additionally, an increase in traffic served of 2% is expected. Further utilization reduction of 2% is expected for pre-existing sectors.

TABLE 3
Metric                                                 Baseline    3-sector site added
Total service area, km2                                21.3        21.8
Total area with C/I < 5 dB, km2                        3.0         3.1
Total area with C/I > 5 dB, km2                        18.3        18.7
Traffic served without harmful interference, Erlangs   79.7        80.9

FIG.43illustrates carrier-to-interference ratio of the baseline scenario fromFIG.41on the left and carrier-to-interference ratio of the scenario with three additional proposed macrosites fromFIG.41on the right. The three additional proposed macrosites slightly reduce areas with poor carrier-to-interference. Although adding the 3-sector eNodeB does slightly improve performance, this performance improvement is not significant enough to support the addition of the three proposed macrosites to the tower.
Example Three
In a third example, the system is used to evaluate which carrier provides better service.
FIG.44illustrates a signal strength comparison of a first carrier (“Carrier1”) with a second carrier (“Carrier2”) for 700 MHz.
FIG.45illustrates carrier-to-interference ratio for Carrier1and Carrier2.
FIG.46is a graph of Area vs. RSSI and Traffic vs. RSSI for Carrier1and Carrier2. Carrier1and Carrier2serve approximately the same amount of area in the sector.
FIG.47is a graph of traffic difference for Carrier1versus Carrier2. Carrier2serves more traffic than Carrier1at the extremes of coverage, while Carrier1serves more traffic in the middle range of coverage.
FIGS.44-47illustrate traffic composition for each SigBASE. Different traffic types require different signal-to-noise ratios (SNRs) vs. reference signal received power (RSRP). For voice traffic, the SNR is from −6 dB to 0 dB, while the SNR goes upwards of 20 dB for streaming video.
FIG.48is a graph of SNR vs. RSRP for each SigBASE for the third example.
FIG.49is another graph of SNR vs. RSRP for each SigBASE for the third example.
FIG.50is a clustered graph of SNR vs. RSRP for each SigBASE for the third example.
FIG.51is another clustered graph of SNR vs. RSRP for each SigBASE for the third example.
FIG.52is a schematic diagram of an embodiment of the invention illustrating a computer system, generally described as800, having a network810, a plurality of computing devices820,830,840, a server850, and a database870. The server850is constructed, configured, and coupled to enable communication over a network810with a plurality of computing devices820,830,840. The server850includes a processing unit851with an operating system852. The operating system852enables the server850to communicate through network810with the remote, distributed user devices. Database870is operable to house an operating system872, memory874, and programs876. In one embodiment of the invention, the system800includes a network810for distributed communication via a wireless communication antenna812and processing by at least one mobile communication computing device830.
Alternatively, wireless and wired communication and connectivity between devices and components described herein include wireless network communication such as WI-FI, WORLDWIDE INTEROPERABILITY FOR MICROWAVE ACCESS (WIMAX), Radio Frequency (RF) communication including RF identification (RFID), NEAR FIELD COMMUNICATION (NFC), BLUETOOTH including BLUETOOTH LOW ENERGY (BLE), ZIGBEE, Infrared (IR) communication, cellular communication, satellite communication, Universal Serial Bus (USB), Ethernet communications, communication via fiber-optic cables, coaxial cables, twisted pair cables, and/or any other type of wireless or wired communication. In another embodiment of the invention, the system800is a virtualized computing system capable of executing any or all aspects of software and/or application components presented herein on the computing devices820,830,840. In certain aspects, the computer system800is operable to be implemented using hardware or a combination of software and hardware, either in a dedicated computing device, or integrated into another entity, or distributed across multiple entities or computing devices. By way of example, and not limitation, the computing devices820,830,840are intended to represent various forms of electronic devices including at least a processor and a memory, such as a server, blade server, mainframe, mobile phone, personal digital assistant (PDA), smartphone, desktop computer, netbook computer, tablet computer, workstation, laptop, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed in the present application. In one embodiment, the computing device820includes components such as a processor860, a system memory862having a random access memory (RAM)864and a read-only memory (ROM)866, and a system bus868that couples the memory862to the processor860. In another embodiment, the computing device830is operable to additionally include components such as a storage device890for storing the operating system892and one or more application programs894, a network interface unit896, and/or an input/output controller898. Each of the components is operable to be coupled to each other through at least one bus868. The input/output controller898is operable to receive and process input from, or provide output to, a number of other devices899, including, but not limited to, alphanumeric input devices, mice, electronic styluses, display units, touch screens, signal generation devices (e.g., speakers), or printers. By way of example, and not limitation, the processor860is operable to be a general-purpose microprocessor (e.g., a central processing unit (CPU)), a graphics processing unit (GPU), a microcontroller, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a state machine, gated or transistor logic, discrete hardware components, or any other suitable entity or combinations thereof that can perform calculations, process instructions for execution, and/or other manipulations of information.
In another implementation, shown as840inFIG.52, multiple processors860and/or multiple buses868are operable to be used, as appropriate, along with multiple memories862of multiple types (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core). Also, multiple computing devices are operable to be connected, with each device providing portions of the necessary operations (e.g., a server bank, a group of blade servers, or a multi-processor system). Alternatively, some steps or methods are operable to be performed by circuitry that is specific to a given function. According to various embodiments, the computer system800is operable to operate in a networked environment using logical connections to local and/or remote computing devices820,830,840through a network810. A computing device830is operable to connect to a network810through a network interface unit896connected to a bus868. Computing devices are operable to communicate over communication media through wired networks, direct-wired connections or wirelessly, such as acoustic, RF, or infrared, through an antenna897in communication with the network antenna812and the network interface unit896, which are operable to include digital signal processing circuitry when necessary. The network interface unit896is operable to provide for communications under various modes or protocols. In one or more exemplary aspects, the instructions are operable to be implemented in hardware, software, firmware, or any combinations thereof. A computer readable medium is operable to provide volatile or non-volatile storage for one or more sets of instructions, such as operating systems, data structures, program modules, applications, or other data embodying any one or more of the methodologies or functions described herein. The computer readable medium is operable to include the memory862, the processor860, and/or the storage media890and is operable to be a single medium or multiple media (e.g., a centralized or distributed computer system) that store the one or more sets of instructions900. Non-transitory computer readable media includes all computer readable media, with the sole exception being a transitory, propagating signal per se. The instructions900are further operable to be transmitted or received over the network810via the network interface unit896as communication media, which is operable to include a modulated data signal such as a carrier wave or other transport mechanism and includes any delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics changed or set in a manner as to encode information in the signal. Storage devices890and memory862include, but are not limited to, volatile and non-volatile media such as cache, RAM, ROM, EPROM, EEPROM, FLASH memory, or other solid state memory technology; discs (e.g., digital versatile discs (DVD), HD-DVD, BLU-RAY, compact disc (CD), or CD-ROM) or other optical storage; magnetic cassettes, magnetic tape, magnetic disk storage, floppy disks, or other magnetic storage devices; or any other medium that can be used to store the computer readable instructions and which can be accessed by the computer system800. In one embodiment, the computer system800is within a cloud-based network. In one embodiment, the server850is a designated physical server for distributed computing devices820,830, and840. In one embodiment, the server850is a cloud-based server platform.
In one embodiment, the cloud-based server platform hosts serverless functions for distributed computing devices820,830, and840. In another embodiment, the computer system800is within an edge computing network. The server850is an edge server, and the database870is an edge database. The edge server850and the edge database870are part of an edge computing platform. In one embodiment, the edge server850and the edge database870are designated to distributed computing devices820,830, and840. In one embodiment, the edge server850and the edge database870are not designated for distributed computing devices820,830, and840. The distributed computing devices820,830, and840connect to an edge server in the edge computing network based on proximity, availability, latency, bandwidth, and/or other factors. It is also contemplated that the computer system800is operable to not include all of the components shown inFIG.52, is operable to include other components that are not explicitly shown inFIG.52, or is operable to utilize an architecture completely different than that shown inFIG.52. The various illustrative logical blocks, modules, elements, circuits, and algorithms described in connection with the embodiments disclosed herein are operable to be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application (e.g., arranged in a different order or partitioned in a different way), but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. The above-mentioned examples are provided to serve the purpose of clarifying the aspects of the invention, and it will be apparent to one skilled in the art that they do not serve to limit the scope of the invention. By nature, this invention is highly adjustable, customizable and adaptable. The above-mentioned examples are just some of the many configurations that the mentioned components can take on. All modifications and improvements have been deleted herein for the sake of conciseness and readability but are properly within the scope of the present invention.
153,518
11943629
DETAILED DESCRIPTION OF THE INVENTION
The present invention is directed to an apparatus and method of operation of an Orthogonal Frequency-Division Multiple Access (OFDMA) cellular communications system such as the 3GPP Long Term Evolution (LTE) in radio frequencies shared with a primary transceiver. The primary transceiver may be a naval radio, an automotive radio, or another transceiver of higher priority. Many modifications and other embodiments of this invention will come to mind to one skilled in the art to which the invention pertains having the benefit of the teachings presented in the descriptions and the associated drawings. Therefore, it is to be understood that the present invention is not limited to the specific embodiments disclosed. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
The following abbreviations are used throughout the instant specification.
ASA: Authorized Shared Access
eNB: evolved Node B or base station
UE: User Equipment
CQI: Channel Quality Indicator
CRS: Cell-specific Reference Signal
CSI: Channel State Information
CSI-RS: Channel State Information Reference Signal
CSMA/CA: Carrier Sense Multiple Access with Collision Avoidance
DCI: Downlink Control Information
DFS: Dynamic Frequency Selection
DRS: Discovery Reference Signal
DL: DownLink
DwPTS: Downlink Pilot Time Slot
E-UTRAN: Evolved Universal Terrestrial Radio Access Network
LBT: Listen Before Talk
LTE: Long Term Evolution
MAC: Medium Access Control protocol
MIMO: Multiple-Input Multiple-Output
OFDMA: Orthogonal Frequency Division Multiple Access
OOR: Out Of Range
PBCH: Physical Broadcast Channel
PCell: Primary Cell
PCFICH: Physical Control Format Indicator Channel
PDCCH: Physical Downlink Control Channel
PDSCH: Physical Downlink Shared Channel
PHICH: Physical Hybrid ARQ Indicator Channel
PMCH: Physical Multicast Channel
PSS: Primary Synchronization Signal
PUCCH: Physical Uplink Control Channel
PUSCH: Physical Uplink Shared Channel
RI: Rank Indicator
RRC: Radio Resource Control
RRM: Radio Resource Management
RSRP: Reference Signal Received Power
SCell: Secondary Cell
SRS: Sounding Reference Signal
SSS: Secondary Synchronization Signal
TDD: Time Division Duplex
TRS: Tracking Reference Signal
UL: UpLink
Dynamic Frequency Selection (DFS)
The 3GPP Long Term Evolution (LTE) communications standard cannot be readily deployed in shared access spectrum. This is because the radio resource management function resides in, and the radio resources are solely controlled by, the eNodeBs in the network. Dynamic Frequency Selection (DFS) schemes typically allow sufficient time (e.g. several seconds) to change a frequency band or carrier upon detection of a primary user. Thus, handover-based RRC signaling and SCell activation or deactivation under MAC control are sufficient to vacate a band for a primary user. The 3GPP LTE communications standard currently lacks protocols, procedures and measurements that would let a UE take any action in case a primary user is detected on a carrier on which the UE is configured to transmit data. Furthermore, mobility control in LTE is fully controlled by the eNodeB, although other wireless cellular communications standards do allow UEs to initiate handovers. Mobility here incorporates the case of load balancing where the eNodeB may add or remove SCells or change the PCell for stationary UEs. For both ASA based schemes with a primary user and CSMA/CA based schemes without a primary user, so called “hidden stations” may exist.
Hidden stations are transmitters such as primary users, whose transmissions can only be detected at the receiving end of a communications link which shares the wireless medium. In LTE, for example, only the UE may detect waveforms transmitted from a “hidden station” whereas the eNodeB is completely oblivious to the existence of the hidden station. Referring toFIG.2, there is a diagram showing operation of a first embodiment of the present invention. A UE is initialized at step200to operate in conjunction with a PCell on an LTE band. An ASA band is configured and operated as a regular LTE band by the eNodeB, and the UE operates on the ASA band202. UEs are barred from camping on cells operating in the ASA band through existing means, such as barring through broadcast of system information. Consequently, all UEs connected on the ASA band are in RRC_CONNECTED mode and thus under full control of an eNodeB. The eNodeB configures all UEs connected on the ASA band to perform RRM measurements204as per existing LTE specifications (e.g. Releases 8 through 12). DFS is supported by each UE through non-standardized (proprietary) implementations. If a UE detects a hidden station (from the UE perspective, all primary users are hidden stations)206, the UE Physical Layer (PHY) indicates to the higher layers of its protocol stack to trigger an RRM measurement report as per existing LTE Rel. 8/9/10/11/12 procedures. Through specification, a “DFS event” (for example, an RRM measurement report triggered through the non-standardized (proprietary) DFS function at the UE) would be tied to a specific value of the RRC Information Element (IE) RSRP-Range. For example, a DFS event could be indicated by the lowest value in the RRC IE RSRP-Range and could be thought of as an Out-of-Range (OOR) indication. The UE would use existing RRM measurement reporting procedures to report the DFS event (i.e., the RSRP measurement report with the OOR indicator signifying the DFS event) to the eNodeB208. The eNodeB RRM function would re-interpret the RSRP measurement report as a DFS/OOR event as per the standardized linkage and would subsequently, in order to vacate the ASA band210for the primary user, reconfigure the UE via existing RRC signaling212. Such RRC signaling encompasses handovers in the case of PCells or SCell reconfigurations in the case of SCells. Alternatively, if the RRM function at the eNodeB believes the ASA band needs to be only temporarily vacated for the primary user, it could simply let the sCellDeactivationTimer at the UE expire, or it can send a deactivation command in a MAC control element in order to deactivate an SCell configured on the ASA band. 3GPP LTE specifications would introduce performance requirements that can be used to test whether UEs report DFS/OOR events according to requirements put forward by regulatory bodies worldwide for each ASA band, but no new measurements would be defined in the specifications to support DFS in 3GPP LTE. In another embodiment of the present invention, instead of re-interpreting an existing measurement report as a DFS/OOR event, a new measurement report and associated procedures are defined specifically for the purpose of indicating to the E-UTRAN the existence of a hidden station or primary user. All UEs connected to cells on an ASA band would be configured to measure and report this new DFS measurement. The eNodeB RRC layer can configure UEs to report the DFS measurement either periodically or triggered, or periodically triggered.
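The re-interpretation scheme of this first embodiment can be sketched as follows. The reserved lowest RSRP-Range value stands in for the DFS/OOR event; the numeric mapping below mirrors the LTE RSRP-Range convention (indices 0 through 97), but the exact encoding here should be read as an assumption for illustration.

```python
# Sketch of DFS signaling via an existing RRM report: the UE substitutes
# the reserved OOR value when its proprietary DFS function detects a
# primary user, and the eNodeB maps that value back to a DFS event.
DFS_OOR_VALUE = 0  # lowest RSRP-Range value, reserved for the DFS event

def ue_build_report(rsrp_dbm, primary_user_detected):
    if primary_user_detected:
        return {"rsrp_range": DFS_OOR_VALUE}
    # Illustrative RSRP-Range quantization for ordinary measurements.
    return {"rsrp_range": max(1, min(97, int(rsrp_dbm + 141)))}

def enb_handle_report(report):
    if report["rsrp_range"] == DFS_OOR_VALUE:
        return "DFS event: vacate ASA band (handover or SCell release)"
    return f"normal RRM processing, RSRP index {report['rsrp_range']}"

print(enb_handle_report(ue_build_report(-95.0, primary_user_detected=True)))
```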
The eNodeB would configure measurement events and associated thresholds and offsets to control the DFS measurement reporting of UEs connected on ASA bands. The exact measurement procedure would thus be determined by specification. However, the actions taken by the network could be similar to those in the previous embodiment above to include UE handover, SCell reconfiguration, and SCell deactivation. Reporting a measurement rather than binary information would let the eNodeB RRM function learn from historical data and let it apply its own threshold for improved protection of the primary user. Since the eNodeB can analyze and combine DFS measurements from various UEs connected to it, the decision to select a different carrier for a given UE ultimately resides at the eNodeB. However, if the decision is made at each UE, the network would have to follow whatever a UE indicates in order to guarantee protection of a potential primary user.
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)
Turning now toFIG.3, there is a diagram showing communication between UE300and eNodeB320according to the present invention. UE300may be a cell phone, computer, or other wireless network device. UE300includes a processor306coupled to a memory304and a transceiver310. Processor306may include several processors adapted to various operational tasks of the UE including signal processing and channel measurement and computation. The memory stores application software302that the processor may execute as directed by the user as well as operating instructions for the UE. Processor306is also coupled to input/output (I/O) circuitry308, which may include a microphone, speaker, display, and related software. Transceiver310includes receiver312and transmitter314, suitable for wireless communication with eNodeB320. Transceiver310typically communicates with eNB320over various communication channels. For example, transceiver310sends uplink information to eNodeB320over physical uplink control channel PUCCH and physical uplink shared channel PUSCH. Correspondingly, transceiver310receives downlink information from eNodeB320over physical downlink control channel PDCCH and physical downlink shared channel PDSCH. Base station320includes a processor326coupled to a memory324, a symbol processing circuit328, and a transceiver330via bus336. Processor326and symbol processing circuit328may include several processors adapted to various operational tasks including signal processing and channel measurement and computation. The memory stores application software322that the processor may execute for specific users as well as operating instructions for eNodeB320. Transceiver330includes receiver332and transmitter334, suitable for wireless communication with UE300. Transceiver330typically communicates with UE300over various communication channels. For example, transceiver330sends downlink information to UE300over physical downlink control channel PDCCH and physical downlink shared channel PDSCH. Transceiver330also sends special downlink information to UE300over physical broadcast channel PBCH, physical hybrid ARQ indicator channel PHICH, physical control format indicator channel PCFICH, and physical multicast channel PMCH. Correspondingly, transceiver330receives uplink information from UE300over physical uplink control channel PUCCH and physical uplink shared channel PUSCH.
According to the present invention, E-UTRAN cells such as eNodeB320may be deployed in unlicensed or ASA bands where LTE user equipment shares the radio resources with other users of equal priority but which follow strict Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) procedures/protocols. There is a fundamental problem in that the 3GPP Long Term Evolution was specifically designed to operate in licensed spectrum. Referring now toFIG.4A, in the downlink direction the situation is similar to DFS as explained with reference toFIG.2. Here, CSMA/CA is implemented as a non-standardized proprietary function according to the present invention. The UE is initialized on an LTE band400. The eNodeB monitors the CSMA/CA band402. If the eNodeB senses an ongoing transmission404, it does not transmit any downlink channels. The eNodeB monitors a timeout reference408and continues to monitor the CSMA/CA band402. If the ongoing transmission ends before the timeout reference408, the eNodeB transmits to the UE on the CSMA/CA band406. Otherwise, if the timeout reference expires, RRC signaling directs the UE to vacate the CSMA/CA band410and initiates a handover412. The eNodeB may, however, have to transmit some signals without regard to whether an ongoing transmission is detected. The eNodeB transmits Discovery Reference Signal (DRS) bursts with a periodicity in the order of hundreds of milliseconds. The DRS burst may just be one subframe and comprises at least PSS, SSS, and CRS to allow UEs to discover the cell and perform measurements. For shared cell ID scenarios, CSI-RS may also be transmitted during a DRS occasion. The periodic PSS/SSS transmissions also let UEs obtain coarse time and frequency synchronization with that cell. At the network side, the DRS based RRM measurement reports let the eNodeB decide whether to configure a cell on a certain unlicensed or ASA band for a given UE. In addition to DRS, the eNodeB needs to periodically transmit some kind of Tracking Reference Signal (TRS) with a much smaller periodicity than that of DRS, such as 5 ms or 10 ms. The TRS waveforms let UEs perform Automatic Gain Control (AGC) and fine time and frequency synchronization (“tracking”). Such TRS waveforms may be based on existing CRS waveforms. This would have the additional benefit that it could be used for channel state information acquisition in case of CRS-based transmission modes. Additionally, the eNodeB may periodically transmit Channel State Information Reference Signals (CSI-RS) to allow channel state information acquisition at the UE for CSI-RS based transmission modes. UEs would be configured for CSI measurement and reporting in accordance with the CSI transmissions at the eNodeB. Referring back toFIG.3, it may be preferable not to use some downlink channels with CSMA/CA. For example, the Physical Broadcast Channel (PBCH) would not be transmitted in a cell on an unlicensed or ASA band. Accordingly, UEs would not be able to camp on such a cell. Similarly, system information would also not be transmitted. Such cells can thus only be configured as SCells and PCells would always be configured on licensed spectrum. It may also be beneficial not to transmit the Physical Hybrid ARQ Indicator CHannel (PHICH) in unlicensed or ASA spectrum. Alternatively, UL grants transmitted in DCI could serve as implicit ACK/NACK indication by scheduling retransmissions of previous UL grants. The Physical Control Format Indicator Channel (PCFICH) may or may not be transmitted in unlicensed or ASA spectrum. 
If extended PHICH duration is configured, the Control Format Indicator (CFI) is known through specification. Similarly, the PCFICH is not needed for PDSCH transmissions in transmission mode 10 (TM10) scheduled by an Enhanced Physical Downlink Control Channel (EPDCCH). And for cross-carrier scheduled PDSCH transmissions the CFI is known through configuration. On the other hand, since the PCFICH is transmitted in the same subframe as a PDCCH it could be transmitted whenever a PDCCH is transmitted. Finally, since the Physical Multicast Channel (PMCH) is scheduled semi-statically by the MBMS Coordination Entity (MCE) on reserved resources, it may be beneficial not to transmit the PMCH in unlicensed or ASA spectrum. Otherwise, for unicast downlink transmissions, when the CSMA/CA function at the eNodeB indicates that a given subframe can be used for (E)PDCCH or PDSCH transmissions, the eNodeB transmits as per LTE Release 12. In one embodiment, the CSMA/CA function at the eNodeB returns a binary indication. If the CSMA/CA function for a given cell on a given carrier indicates BUSY, the eNodeB does not transmit (E)PDCCH or PDSCH to any UE. The eNodeB may still transmit other signals or channels as per the above recommendations. Alternatively, if the CSMA/CA function for a given cell on a given carrier indicates IDLE, the eNodeB may transmit (E)PDCCH and/or PDSCH as per LTE Release 12. Referring toFIG.4B, uplink operation on CSMA/CA bands is similar to downlink operation. The UE is initialized on an LTE band400. The UE monitors the CSMA/CA band420. If the UE senses an ongoing transmission422, it does not transmit any uplink channels. The UE monitors a timeout reference426and continues to monitor the CSMA/CA band420. If the ongoing transmission ends before the timeout reference426, the UE transmits to the eNodeB on the CSMA/CA band424. Otherwise, if the timeout reference expires, the UE sends a BUSY report to the eNodeB428. RRC signaling directs the UE to vacate the CSMA/CA band430and initiates a handover432. When the CSMA/CA function at the UE indicates that a given subframe cannot be used for uplink transmissions, it may be beneficial to drop any configured Sounding Reference Signal (SRS) transmission in order to not interfere with the ongoing transmission. It may also be beneficial not to transmit the Physical Uplink Control Channel (PUCCH) in unlicensed or ASA spectrum. In this case, the PUCCH is transmitted on the PCell in licensed spectrum only. If PUCCH transmissions are allowed in unlicensed or ASA spectrum, several UE behaviors are envisioned. In one case, the UE follows existing UE procedures for PUCCH transmissions independent of the indication of the CSMA/CA function at the UE for the subframe for which the PUCCH transmission is scheduled. Collisions with on-going transmissions cannot be avoided in general and the PUCCH may not be properly received at the eNodeB. Alternatively, the UE could base any PUCCH transmissions on the indication of the CSMA/CA function at the UE for the subframe for which the PUCCH transmission is scheduled. If the CSMA/CA function at the UE indicates BUSY, the UE does not transmit on the PUCCH in the subframe under consideration. Otherwise, if the CSMA/CA function at the UE indicates IDLE, the UE transmits the PUCCH as scheduled. The same principles may be applied to the Physical Uplink Shared Channel (PUSCH).
In one embodiment, the UE follows existing UE procedures for PUSCH transmissions independent of the indication of the CSMA/CA function at the UE for the subframe for which the PUSCH transmission is scheduled. Collisions with on-going transmissions cannot be avoided in general and the PUSCH may not be properly received at the eNodeB. Alternatively, the UE could base any PUSCH transmissions on the indication of the CSMA/CA function at the UE for the subframe for which the PUSCH transmission is scheduled. If the CSMA/CA function at the UE indicates BUSY, the UE does not transmit on the PUSCH in the subframe under consideration. Otherwise, if the CSMA/CA function at the UE indicates IDLE, the UE transmits the PUSCH as scheduled. Similar to the case of DFS, hidden stations must be considered. The above solutions for PUSCH and PUCCH transmissions are concerned with the UE behavior in case the CSMA/CA function at the UE indicates BUSY for the subframe for which the PUSCH/PUCCH transmission is scheduled. In case of a hidden station whose waveform is detectable at the UE but not at the eNodeB, the eNodeB may continue scheduling that UE. In case the UE follows regular LTE Rel. 12 operation, this would result in deteriorated performance for both the eNodeB-to-UE link as well as for the link to/from the hidden station, as the respective transmissions would continue to collide, potentially creating excessive interference such that reliable communication is no longer feasible or, at least, acceptable Quality-of-Service (QoS) can no longer be provided. The opposite case, where the UE does not transmit on PUSCH or PUCCH in a subframe if the CSMA/CA function at the UE indicates BUSY, would equally deteriorate performance due to the dropped packets and HARQ ACK/NACK transmissions in BUSY subframes. In theory, the aforementioned DFS schemes could be reused to allow the UE to inform an eNodeB about the BUSY state of a cell or carrier such that the eNodeB MAC (or RRC) layer could take actions to schedule the UE on a different component carrier (CC) in order to prevent further collisions. In other words, instead of the “DFS event” triggered by the DFS function, the CSMA/CA function would indicate BUSY but otherwise the procedures could be reused. Recall, however, that the time scales for DFS are generally much larger than for LBT as in the case of CSMA/CA. Thus, the present invention provides separate procedures to address hidden stations in the case of CSMA/CA. An objective of the present invention is to let the UE higher layers inform the eNodeB higher layers about the indication of the UE CSMA/CA function in subframes in which the UE is scheduled for uplink transmissions. Since the UE can always follow existing LTE Rel. 12 specifications in case the UE CSMA/CA function indicates IDLE, this state is not signaled to the eNodeB higher layers. Thus, several embodiments of the present invention provide actions the eNodeB higher layers, such as the eNodeB MAC scheduler, may take in a subframe for which a PUSCH or PUCCH transmission is scheduled and the UE CSMA/CA function indicates BUSY. Since overall system performance, and particularly the perceived user throughput at the UE, improves the faster the eNodeB can take action by avoiding scheduling the UE on a carrier occupied by a hidden station, it is preferable to use either PHY or MAC layer mechanisms, whereby the former have lower latency than the latter.
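The BUSY/IDLE gating discussed above for PUCCH and PUSCH reduces to a small decision rule, sketched below. The carrier-sense callback stands in for the non-standardized CSMA/CA function and is purely illustrative.

```python
# Gated uplink: transmit a scheduled PUCCH/PUSCH only when the CSMA/CA
# function reports IDLE for that subframe; on BUSY the transmission is
# dropped to avoid colliding with the ongoing transmission.
def uplink_gate(scheduled, carrier_sense):
    """scheduled: list of (subframe, channel) pairs; carrier_sense(sf)
    returns 'IDLE' or 'BUSY'. Returns the transmissions actually sent."""
    sent = []
    for subframe, channel in scheduled:
        if carrier_sense(subframe) == "IDLE":
            sent.append((subframe, channel))  # transmit as per LTE Rel. 12
        # BUSY: drop this PUCCH/PUSCH transmission in this subframe
    return sent

sense = lambda sf: "BUSY" if sf == 3 else "IDLE"
print(uplink_gate([(2, "PUSCH"), (3, "PUCCH"), (4, "PUSCH")], sense))
```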
First, in order to reduce latencies, it is assumed that the UE is already configured with up to five serving cells (FIG.1) on corresponding component carriers. According to the present invention, the serving cells are ordered in ascending order based on the ServCellIndex configured through RRC signaling; however, other orderings and addressing mechanisms are not precluded. Then, the four serving cells, excluding the PCell, are assigned the symbols {00,01,10,11}, such that the serving cell (SCell) with the lowest ServCellIndex corresponds to 00, the SCell with the second lowest ServCellIndex corresponds to 01, and so forth. If fewer than four SCells are configured, the unused symbols, e.g., {01,10,11} in case a single SCell is configured, are reserved. Other mappings are not precluded as they do not alter the invention.
In order to guarantee the lowest latencies, L1 (PHY) signaling is introduced to inform the eNodeB higher layers about the BUSY indication from the UE CSMA/CA function in a subframe for which a PUSCH or PUCCH transmission is scheduled. More precisely, a new PUCCH format is introduced which is always transmitted on the PCell. The new PUCCH format is identical to the existing PUCCH format 1b; however, instead of representing ACK/ACK, ACK/NACK, NACK/ACK, and NACK/NACK/DTX, the QPSK symbols encode the four serving cell indices {00,01,10,11}. For purposes of illustration this new PUCCH format is referred to as format 1c. The eNodeB receiver may distinguish between PUCCH formats 1b and 1c through Code Division Multiplexing (CDM) such that the two PUCCH formats can share the same time and frequency resources. Alternatively, the new PUCCH format can have its own time and frequency resources within the PUCCH region. If PUCCH capacity is not an issue, as is the case for small cells, CDM is preferred for improved spectral efficiency.
In case the CSMA/CA function at the UE indicates BUSY in a subframe for which a PUSCH or PUCCH transmission is scheduled, the UE does not transmit the PUSCH or PUCCH as scheduled but rather indicates the BUSY indication to the eNodeB via a PUCCH format 1c transmission on the PCell. Several UE behaviors are envisioned, all of which assume that the eNodeB schedules only one SCell at a time in order to prevent any ambiguities at the eNodeB when PUCCH format 1c is received. In one embodiment, the PUCCH format 1c indicates on which serving cell the BUSY indication occurred. For example, the eNodeB may schedule an uplink transmission in subframe n+k, k>0, via a UL grant in DCI received in subframe n. Shortly before the uplink transmission is scheduled to occur, the CSMA/CA function at the UE begins to sense the medium and indicates to the UE higher layers whether it is IDLE or BUSY. If IDLE is indicated, the UE proceeds with the scheduled transmissions as per the received DCI. If BUSY is indicated, the UE ignores the DCI scheduling the uplink transmission under consideration and instead sends a PUCCH format 1c on the PCell, encoding in the QPSK symbol the serving cell on which the collision occurred. Since the eNodeB expected the PUSCH or PUCCH transmission on a particular serving cell, this PUCCH format 1c transmission conveys no additional information to the eNodeB higher layers. Thus, in a different embodiment, the CSMA/CA function at the UE senses all configured serving cells prior to a scheduled uplink transmission.
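A brief hedged sketch may make the 2-bit serving-cell mapping concrete. The following Python code is illustrative only: it implements the ascending-ServCellIndex ordering described above, the symbol selection for the hypothetical PUCCH format 1c, and, for the MAC CE alternative described further below, the packing of four per-SCell indications into one octet. All function names and the example ServCellIndex values are assumptions for illustration.

    # Illustrative sketch of the 2-bit serving-cell mapping described above.
    # Reused both for the hypothetical PUCCH format 1c and for the MAC CE
    # octet described further below. All names are assumptions.

    def scell_symbols(serv_cell_indices):
        """Map up to four SCell indices (PCell excluded) to 2-bit symbols.

        SCells are sorted in ascending ServCellIndex order; the lowest index
        maps to 0b00, the next to 0b01, and so forth. Unused symbols are
        reserved when fewer than four SCells are configured.
        """
        ordered = sorted(serv_cell_indices)[:4]
        return {idx: sym for sym, idx in enumerate(ordered)}

    def pucch_1c_symbol(mapping, serv_cell_index):
        """Select the QPSK symbol (as a 2-bit value) signaled on PUCCH format 1c."""
        return mapping[serv_cell_index]

    def mac_ce_octet(mapping, idle, switch_to):
        """Pack four per-SCell indications into one octet (MSB first).

        For an IDLE SCell the two bits at its position repeat the position
        itself; for a BUSY SCell they carry the symbol of the SCell the
        eNodeB should switch to.
        """
        octet = 0
        for pos in range(4):
            bits = pos if idle.get(pos, True) else switch_to[pos]
            octet = (octet << 2) | bits
        return octet

    # Worked example matching the octet example given below: the third SCell
    # (symbol 0b10) is BUSY and the eNodeB should switch to the first SCell.
    m = scell_symbols([2, 3, 5, 7])          # hypothetical ServCellIndex values
    assert pucch_1c_symbol(m, 5) == 0b10
    octet = mac_ce_octet(m, idle={0: True, 1: True, 2: False, 3: True},
                         switch_to={2: 0b00})
    assert octet == 0b00010011               # the {00010011} example in the text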
In this embodiment, if IDLE is indicated for the serving cell on which the transmission is scheduled, the UE proceeds with the scheduled transmissions as per the received DCI. If BUSY is indicated, the UE ignores the DCI scheduling the uplink transmission under consideration and instead sends a PUCCH format 1c on the PCell, encoding in the QPSK symbol a serving cell on which the CSMA/CA function at the UE indicated IDLE. This does not guarantee that the corresponding serving cell is IDLE at a future subframe n+k2, k2>k, but at least the eNodeB does not continue scheduling uplink transmissions on the same serving cell.
Introducing the new PUCCH format 1c requires the eNodeB receiver to monitor for the new PUCCH format. Accordingly, MAC layer procedures may be preferable over the aforementioned PHY procedures. Sending MAC control elements, however, requires the UE to have available uplink resources in addition to the ones it has to leave unused by not transmitting PUCCH or PUSCH because the medium is BUSY. Moreover, preparing the PUSCH transmission carrying the MAC CE may take longer, such that the carrier sensing has to occur much earlier than in the case of a new PUCCH format, increasing the probability that the CSMA/CA function at the UE indicates IDLE even though the medium is BUSY during subframe n+k. Latencies would be further increased if the UE has to send a Scheduling Request (SR) in order to transmit the MAC CE.
Nevertheless, MAC layer procedures may still have their merits. For example, one would no longer need the restriction that only a single SCell is scheduled at a time. Rather, one octet (8 bits) in a MAC CE may be used to encode all four SCells simultaneously. Up to four serving cells (SCells) are again ordered in ascending order based on the ServCellIndex and represented by {00,01,10,11}, i.e., the SCell with the lowest ServCellIndex corresponds to 00, the SCell with the second lowest ServCellIndex corresponds to 01, and so forth. Moreover, the 8 bits in an octet of a MAC CE correspond to the four SCells through the following mapping. The first two bits correspond to the serving cell represented by {00}, the third and fourth bits correspond to the serving cell represented by {01}, the fifth and sixth bits correspond to the serving cell represented by {10}, and the last two bits correspond to the serving cell represented by {11}, although other mappings and orderings are not precluded. If the bits at a position correspond to the position itself, this indicates that the corresponding serving cell was indicated as IDLE. Otherwise, the indication was BUSY and the two bits indicate to which serving cell the eNodeB should switch. In other words, the bit position in the octet encodes to which serving cell the bits at that position belong, and the bits themselves encode the same information transmitted on the PUCCH format 1c above for a single cell. For instance, the octet {00010011} means that the first, second, and fourth serving cells were IDLE, whereas transmissions on the third serving cell should be moved to the first serving cell.
Still further, while numerous examples have thus been provided, one skilled in the art should recognize that various modifications, substitutions, or alterations may be made to the described embodiments while still falling within the inventive scope as defined by the following claims. Other combinations will be readily apparent to one of ordinary skill in the art having access to the instant specification.
11943630
DETAILED DESCRIPTION Some wireless communications systems may implement dynamic spectrum sharing (DSS) between radio protocols (e.g., radio access technologies (RATs)) in the time and/or frequency domain. In some cases, DSS may support the coexistence of Long Term Evolution (LTE) and New Radio (NR) operations in the same frequency band. DSS may allow user equipment (UEs) to operate in NR for longer time durations and to reduce the frequency of inter-RAT handover to LTE. DSS may further reuse LTE bands, efficiently utilizing the excess capacity of some LTE networks and providing support for low-band NR. Additionally, DSS may improve NR coverage, which may be limited in high frequency bands. However, to effectively implement DSS (e.g., to support efficient spectrum usage by UEs, reduce fallback latency for UEs, etc.), a system may implement one or more techniques to improve UE allocations, functionality, or both in dynamically shared frequency spectrums.
In some aspects, a network implementing DSS for NR and LTE may configure a frequency spectrum with multiple bandwidth parts (BWPs), in which a first subset of the BWPs are dedicated for NR and a second subset of BWPs support DSS between NR and LTE. A UE communicating with the network via a base station may transmit an indication of a rate matching capability of the UE. This rate matching capability may indicate whether the UE supports rate matching NR communications around LTE cell-specific reference signals (CRSs). The network may activate a BWP of the configured BWPs for the UE based on the rate matching capability. In one aspect, the network may activate a BWP dedicated for NR if the UE does not support rate matching with LTE CRS and may activate a BWP supporting DSS if the UE does support rate matching with LTE CRS. Additionally or alternatively, the network may switch a UE from one BWP to another BWP based on a frequency of handover for the UE. In one aspect, based on the UE's mobility conditions and a frequency of handover greater than a threshold frequency while the UE operates on an NR-dedicated band, the network may switch the UE to a band (e.g., a BWP) supporting DSS to reduce the frequency of handover.
Additionally or alternatively, in some aspects, a network may support dual registration for a UE operating in a DSS band. The UE may register with a base station on both LTE and NR concurrently if the base station supports DSS. In some cases, the UE may use an NR carrier for data communications while using an LTE carrier for voice calls. In some aspects, the network may implement a single control channel for scheduling data communications for multiple RATs. The base station may schedule LTE and NR data communications through a common control channel (e.g., the NR control channel) to reduce the control overhead for a UE dual registered on both LTE and NR. In some cases, an icon displayed on the user interface for such a UE may indicate that the UE supports DSS operation, dual registration, or both.
Aspects of the disclosure are initially described in the context of wireless communications systems. Additional aspects of the disclosure are described with reference to frequency resource allocations and process flows. Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to enhancements for multiple radio protocol DSS.
FIG.1illustrates aspects of a wireless communications system100that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. The wireless communications system100may include one or more base stations105, one or more UEs115, and a core network130. In some aspects, the wireless communications system100may support LTE networks or New Radio (NR) networks. An LTE network may be, in some aspects, an LTE-Advanced (LTE-A) network or an LTE-A Pro network. In some aspects, the wireless communications system100may support enhanced broadband communications, ultra-reliable (e.g., mission critical) communications, low latency communications, communications with low-cost and low-complexity devices, or any combination thereof.
The base stations105may be dispersed throughout a geographic area to form the wireless communications system100and may be devices in different forms or having different capabilities. The base stations105and the UEs115may wirelessly communicate via one or more communication links125. Each base station105may provide a coverage area110over which the UEs115and the base station105may establish one or more communication links125. The coverage area110may be an example of a geographic area over which a base station105and a UE115may support the communication of signals according to one or more radio access technologies.
The UEs115may be dispersed throughout a coverage area110of the wireless communications system100, and each UE115may be stationary, or mobile, or both at different times. The UEs115may be devices in different forms or having different capabilities. Some UEs115are illustrated inFIG.1. The UEs115described herein may be able to communicate with various types of devices, such as other UEs115, the base stations105, or network equipment (e.g., core network nodes, relay devices, integrated access and backhaul (IAB) nodes, or other network equipment), as shown inFIG.1.
The base stations105may communicate with the core network130, or with one another, or both. In one aspect, the base stations105may interface with the core network130through one or more backhaul links120(e.g., via an S1, N2, N3, or other interface). The base stations105may communicate with one another over the backhaul links120(e.g., via an X2, Xn, or other interface) either directly (e.g., directly between base stations105), or indirectly (e.g., via core network130), or both. In some aspects, the backhaul links120may be or include one or more wireless links.
One or more of the base stations105described herein may include or may be referred to by a person having ordinary skill in the art as a base transceiver station, a radio base station, an access point, a radio transceiver, a NodeB, an eNodeB (eNB), a next-generation NodeB or a giga-NodeB (either of which may be referred to as a gNB), a Home NodeB, a Home eNodeB, or other suitable terminology. A UE115may include or may be referred to as a mobile device, a wireless device, a remote device, a handheld device, or a subscriber device, or some other suitable terminology, where the “device” may also be referred to as a unit, a station, a terminal, or a client, among other examples. A UE115may also include or may be referred to as a personal electronic device such as a cellular phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a personal computer.
In some aspects, a UE115may include or be referred to as a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, or a machine type communications (MTC) device, among other examples, which may be implemented in various objects such as appliances, vehicles, or meters. The UEs115described herein may be able to communicate with various types of devices, such as other UEs115that may sometimes act as relays as well as the base stations105and the network equipment including macro eNBs or gNBs, small cell eNBs or gNBs, or relay base stations, among other examples, as shown inFIG.1.
The UEs115and the base stations105may wirelessly communicate with one another via one or more communication links125over one or more carriers. The term “carrier” may refer to a set of radio frequency spectrum resources having a defined physical layer structure for supporting the communication links125. In one aspect, a carrier used for a communication link125may include a portion of a radio frequency spectrum band (e.g., a bandwidth part (BWP)) that is operated according to one or more physical layer channels for a given radio access technology (e.g., LTE, LTE-A, LTE-A Pro, NR). Each physical layer channel may carry acquisition signaling (e.g., synchronization signals, system information), control signaling that coordinates operation for the carrier, user data, or other signaling.
The wireless communications system100may support communication with a UE115using carrier aggregation or multi-carrier operation. A UE115may be configured with multiple downlink component carriers and one or more uplink component carriers according to a carrier aggregation configuration. Carrier aggregation may be used with both frequency division duplexing (FDD) and time division duplexing (TDD) component carriers. In some cases (e.g., in a carrier aggregation configuration), a carrier may also have acquisition signaling or control signaling that coordinates operations for other carriers. A carrier may be associated with a frequency channel (e.g., an evolved universal mobile telecommunication system terrestrial radio access (E-UTRA) absolute radio frequency channel number (EARFCN)) and may be positioned according to a channel raster for discovery by the UEs115. A carrier may be operated in a standalone mode where initial acquisition and connection may be conducted by the UEs115via the carrier, or the carrier may be operated in a non-standalone mode where a connection is anchored using a different carrier (e.g., of the same or a different radio access technology).
The communication links125shown in the wireless communications system100may include uplink transmissions from a UE115to a base station105, or downlink transmissions from a base station105to a UE115. Carriers may carry downlink or uplink communications (e.g., in an FDD mode) or may be configured to carry downlink and uplink communications (e.g., in a TDD mode). A carrier may be associated with a bandwidth of the radio frequency spectrum, and in some cases the carrier bandwidth may be referred to as a “system bandwidth” of the carrier or the wireless communications system100. In one aspect, the carrier bandwidth may be one of a number of determined bandwidths for carriers of a particular radio access technology (e.g., 1.4, 3, 5, 10, 15, 20, 40, or 80 megahertz (MHz)).
Devices of the wireless communications system100(e.g., the base stations105, the UEs115, or both) may have hardware configurations that support communications over a carrier bandwidth or may be configurable to support communications over one of a set of carrier bandwidths. In some aspects, the wireless communications system100may include base stations105or UEs115that support simultaneous communications via carriers associated with multiple carrier bandwidths. In some aspects, each served UE115may be configured for operating over portions (e.g., a sub-band, a BWP) or all of a carrier bandwidth.
Signal waveforms transmitted over a carrier may be made up of multiple subcarriers (e.g., using multi-carrier modulation (MCM) techniques such as orthogonal frequency division multiplexing (OFDM) or discrete Fourier transform spread OFDM (DFT-S-OFDM)). In a system employing MCM techniques, a resource element may consist of one symbol period (e.g., a duration of one modulation symbol) and one subcarrier, where the symbol period and subcarrier spacing are inversely related. The number of bits carried by each resource element may depend on the modulation scheme (e.g., the order of the modulation scheme, the coding rate of the modulation scheme, or both). Thus, the more resource elements that a UE115receives and the higher the order of the modulation scheme, the higher the data rate may be for the UE115. A wireless communications resource may refer to a combination of a radio frequency spectrum resource, a time resource, and a spatial resource (e.g., spatial layers or beams), and the use of multiple spatial layers may further increase the data rate or data integrity for communications with a UE115.
One or more numerologies for a carrier may be supported, where a numerology may include a subcarrier spacing (Δf) and a cyclic prefix. A carrier may be divided into one or more BWPs having the same or different numerologies. In some aspects, a UE115may be configured with multiple BWPs. In some aspects, a single BWP for a carrier may be active at a given time and communications for the UE115may be restricted to one or more active BWPs.
The time intervals for the base stations105or the UEs115may be expressed in multiples of a basic time unit which may, in some cases, refer to a sampling period of Ts=1/(Δfmax·Nf) seconds, where Δfmax may represent the maximum supported subcarrier spacing, and Nf may represent the maximum supported discrete Fourier transform (DFT) size. Time intervals of a communications resource may be organized according to radio frames each having a specified duration (e.g., 10 milliseconds (ms)). Each radio frame may be identified by a system frame number (SFN) (e.g., ranging from 0 to 1023). Each frame may include multiple consecutively numbered subframes or slots, and each subframe or slot may have the same duration. In some cases, a frame may be divided (e.g., in the time domain) into subframes, and each subframe may be further divided into a number of slots. Alternatively, each frame may include a variable number of slots, and the number of slots may depend on subcarrier spacing. Each slot may include a number of symbol periods (e.g., depending on the length of the cyclic prefix prepended to each symbol period). In some wireless communications systems100, a slot may further be divided into multiple mini-slots containing one or more symbols. Excluding the cyclic prefix, each symbol period may contain one or more (e.g., Nf) sampling periods.
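As a worked illustration of the inverse relationship between symbol period and subcarrier spacing noted above (not drawn from the disclosure itself), the following sketch computes useful-symbol durations and slot counts for a few common LTE/NR numerologies. The helper name and the printed layout are illustrative assumptions.

    # Worked example of the inverse relation between subcarrier spacing and
    # symbol/slot duration. Numbers follow common LTE/NR numerology; the
    # helper name is illustrative, not from the disclosure.

    def symbol_duration_us(scs_khz: float) -> float:
        """Useful-symbol duration (without cyclic prefix) in microseconds."""
        return 1e3 / scs_khz          # T = 1 / Δf

    for scs_khz, slots_per_subframe in [(15, 1), (30, 2), (60, 4)]:
        sym_us = symbol_duration_us(scs_khz)
        # With a normal cyclic prefix there are 14 symbols per slot, and the
        # slot duration halves each time the subcarrier spacing doubles.
        slot_ms = 1.0 / slots_per_subframe
        print(f"Δf = {scs_khz} kHz: symbol ≈ {sym_us:.2f} µs, "
              f"{slots_per_subframe} slot(s) per 1 ms subframe, slot = {slot_ms} ms")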
The duration of a symbol period may depend on the subcarrier spacing or frequency band of operation. A subframe, a slot, a mini-slot, or a symbol may be the smallest scheduling unit (e.g., in the time domain) of the wireless communications system100and may be referred to as a transmission time interval (TTI). In some cases, the TTI duration (e.g., the number of symbol periods in a TTI) may be variable. Additionally or alternatively, the smallest scheduling unit of the wireless communications system100may be dynamically selected (e.g., in bursts of shortened TTIs (sTTIs)). Physical channels may be multiplexed on a carrier according to various techniques. A physical control channel and a physical data channel may be multiplexed on a downlink carrier using one or more of time division multiplexing (TDM) techniques, frequency division multiplexing (FDM) techniques, or hybrid TDM-FDM techniques. A control region (e.g., a control resource set (CORESET)) for a physical control channel may be defined by a number of symbol periods and may extend across the system bandwidth or a subset of the system bandwidth of the carrier. One or more control regions (e.g., CORESETs) may be configured for a set of the UEs115. In one aspect, one or more of the UEs115may monitor or search control regions for control information according to one or more search space sets, and each search space set may include one or multiple control channel candidates in one or more aggregation levels arranged in a cascaded manner. An aggregation level for a control channel candidate may refer to a number of control channel resources (e.g., control channel elements (CCEs)) associated with encoded information for a control information format having a given payload size. Search space sets may include common search space sets configured for sending control information to multiple UEs115and UE-specific search space sets for sending control information to a specific UE115. In some aspects, a base station105may be movable and therefore provide communication coverage for a moving geographic coverage area110. In some aspects, different geographic coverage areas110associated with different technologies may overlap, but the different geographic coverage areas110may be supported by the same base station105. In some other aspects, the overlapping geographic coverage areas110associated with different technologies may be supported by different base stations105. The wireless communications system100may include, in some cases, a heterogeneous network in which different types of the base stations105provide coverage for various geographic coverage areas110using the same or different radio access technologies. The wireless communications system100may be configured to support ultra-reliable communications or low-latency communications, or various combinations thereof. In one aspect, the wireless communications system100may be configured to support ultra-reliable low-latency communications (URLLC) or mission critical communications. The UEs115may be designed to support ultra-reliable, low-latency, or critical functions (e.g., mission critical functions). Ultra-reliable communications may include private communication or group communication and may be supported by one or more mission critical services such as mission critical push-to-talk (MCPTT), mission critical video (MCVideo), or mission critical data (MCData). 
Support for mission critical functions may include prioritization of services, and mission critical services may be used for public safety or general commercial applications. The terms ultra-reliable, low-latency, mission critical, and ultra-reliable low-latency may be used interchangeably herein.
In some aspects, a UE115may also be able to communicate directly with other UEs115over a device-to-device (D2D) communication link135(e.g., using a peer-to-peer (P2P) or D2D protocol). One or more UEs115utilizing D2D communications may be within the geographic coverage area110of a base station105. Other UEs115in such a group may be outside the geographic coverage area110of a base station105or be otherwise unable to receive transmissions from a base station105. In some aspects, groups of the UEs115communicating via D2D communications may utilize a one-to-many (1:M) system in which each UE115transmits to every other UE115in the group. In some aspects, a base station105facilitates the scheduling of resources for D2D communications. In other cases, D2D communications are carried out between the UEs115without the involvement of a base station105.
The core network130may provide user authentication, access authorization, tracking, Internet Protocol (IP) connectivity, and other access, routing, or mobility functions. The core network130may be an evolved packet core (EPC) or fifth generation (5G) core (5GC), which may include at least one control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management function (AMF)) and at least one user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)). The control plane entity may manage non-access stratum (NAS) functions such as mobility, authentication, and bearer management for the UEs115served by the base stations105associated with the core network130. User IP packets may be transferred through the user plane entity, which may provide IP address allocation as well as other functions. The user plane entity may be connected to the network operator's IP services150. The operator's IP services150may include access to the Internet, Intranet(s), an IP Multimedia Subsystem (IMS), or a Packet-Switched Streaming Service.
Some of the network devices, such as a base station105, may include subcomponents such as an access network entity140, which may be an aspect of an access node controller (ANC). Each access network entity140may communicate with the UEs115through one or more other access network transmission entities145, which may be referred to as radio heads, smart radio heads, or transmission/reception points (TRPs). Each access network transmission entity145may include one or more antenna panels. In some configurations, various functions of each access network entity140or base station105may be distributed across various network devices (e.g., radio heads and ANCs) or consolidated into a single network device (e.g., a base station105).
The wireless communications system100may operate using one or more frequency bands, which may be in the range of 300 megahertz (MHz) to 300 gigahertz (GHz). Generally, the region from 300 MHz to 3 GHz is known as the ultra-high frequency (UHF) region or decimeter band because the wavelengths range from approximately one decimeter to one meter in length.
The UHF waves may be blocked or redirected by buildings and environmental features, but the waves may penetrate structures sufficiently for a macro cell to provide service to the UEs115located indoors. The transmission of UHF waves may be associated with smaller antennas and shorter ranges (e.g., less than 100 kilometers) compared to transmission using the lower frequencies and longer wavelengths of the high frequency (HF) or very high frequency (VHF) portion of the spectrum below 300 MHz.
The wireless communications system100may utilize both licensed and unlicensed radio frequency spectrum bands. In one aspect, the wireless communications system100may employ License Assisted Access (LAA), LTE-Unlicensed (LTE-U) radio access technology, or NR technology in an unlicensed band such as the 5 GHz industrial, scientific, and medical (ISM) band. When operating in unlicensed radio frequency spectrum bands, devices such as the base stations105and the UEs115may employ carrier sensing for collision detection and avoidance. In some aspects, operations in unlicensed bands may be based on a carrier aggregation configuration in conjunction with component carriers operating in a licensed band (e.g., LAA). Operations in unlicensed spectrum may include downlink transmissions, uplink transmissions, P2P transmissions, or D2D transmissions, among other examples.
A base station105or a UE115may be equipped with multiple antennas, which may be used to employ techniques such as transmit diversity, receive diversity, multiple-input multiple-output (MIMO) communications, or beamforming. The antennas of a base station105or a UE115may be located within one or more antenna arrays or antenna panels, which may support MIMO operations or transmit or receive beamforming. In one aspect, one or more base station antennas or antenna arrays may be co-located at an antenna assembly, such as an antenna tower. In some aspects, antennas or antenna arrays associated with a base station105may be located in diverse geographic locations. A base station105may have an antenna array with a number of rows and columns of antenna ports that the base station105may use to support beamforming of communications with a UE115. Likewise, a UE115may have one or more antenna arrays that may support various MIMO or beamforming operations. Additionally or alternatively, an antenna panel may support radio frequency beamforming for a signal transmitted via an antenna port.
Beamforming, which may also be referred to as spatial filtering, directional transmission, or directional reception, is a signal processing technique that may be used at a transmitting device or a receiving device (e.g., a base station105, a UE115) to shape or steer an antenna beam (e.g., a transmit beam, a receive beam) along a spatial path between the transmitting device and the receiving device. Beamforming may be achieved by combining the signals communicated via antenna elements of an antenna array such that some signals propagating at orientations with respect to an antenna array experience constructive interference while others experience destructive interference. The adjustment of signals communicated via the antenna elements may include a transmitting device or a receiving device applying amplitude offsets, phase offsets, or both to signals carried via the antenna elements associated with the device.
The adjustments associated with each of the antenna elements may be defined by a beamforming weight set associated with a specific orientation (e.g., with respect to the antenna array of the transmitting device or receiving device, or with respect to some other orientation).
The wireless communications system100may be a packet-based network that operates according to a layered protocol stack. In the user plane, communications at the bearer or Packet Data Convergence Protocol (PDCP) layer may be IP-based. A Radio Link Control (RLC) layer may perform packet segmentation and reassembly to communicate over logical channels. A Medium Access Control (MAC) layer may perform priority handling and multiplexing of logical channels into transport channels. The MAC layer may also use error detection techniques, error correction techniques, or both to support retransmissions at the MAC layer to improve link efficiency. In the control plane, the Radio Resource Control (RRC) protocol layer may provide establishment, configuration, and maintenance of an RRC connection between a UE115and a base station105or a core network130supporting radio bearers for user plane data. At the physical layer, transport channels may be mapped to physical channels.
Some wireless communications systems100may support spectrum sharing (e.g., downlink spectrum sharing, uplink spectrum sharing, or both) in the time domain, the frequency domain, or both. In some cases, the spectrum sharing may be dynamic based on LTE and NR traffic distribution in the system or in the frequency spectrum. DSS may allow the network to reuse frequency bands for NR communications that were previously allocated for LTE communications. In one aspect, a base station105may include LTE communications, NR communications, or a combination thereof in a given frequency spectrum based on the load conditions in the wireless communications system100, such that NR may coexist effectively with LTE in the same frequency band. In some cases, when a UE115connects to a DSS-supported base station105, the base station105may provide the UE115with LTE broadcast information in an NR message. As such, the UE115may determine resources to avoid (e.g., resources including LTE broadcast signals, such as LTE CRSs) and resources to monitor for NR signaling. DSS may allow the UE115to continue using NR operations for a greater proportion of time (e.g., as opposed to maintaining LTE frequency bands as dedicated for LTE) and to reduce the frequency of the UE115performing inter-RAT handover procedures to LTE. Further, reusing the LTE bands may efficiently utilize the excess capacity of some LTE frequency spectrum bands and provide support for low-band NR. Additionally, DSS may improve NR coverage, which may be limited in high frequency bands. However, to effectively implement DSS (e.g., to support efficient spectrum usage by UEs115, reduce fallback latency for UEs115, etc.), the wireless communications system100may implement one or more techniques to improve UE115allocations, functionality, or both in dynamically shared frequency spectrums. These techniques may improve reuse of the frequency spectrum between different RATs (e.g., LTE and NR) by implementing static spectrum re-farming (e.g., of the LTE spectrum), employing DSS, or both.
FIG.2illustrates aspects of a wireless communications system200that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure.
In some aspects, wireless communications system200may implement aspects of wireless communications system100. Wireless communications system200may include base station105-aand UE115-a, which may be examples of the corresponding devices as described with reference toFIG.1. Base station105-amay serve a geographic coverage area110-a. Base station105-aand UE115-amay be capable of using one or more radio protocols (e.g., one or more RATs, such as NR, LTE, or both), which may share the same carrier frequency or frequency band. It is to be understood that references to specific RATs (e.g., LTE and NR) are provided for illustrative purposes only, and different RATs not specifically referred to herein may be used interchangeably with those described below. In accordance with the present disclosure, wireless communications system200may support communication links205between devices such as UE115-aand base station105-a, where UE115-aand base station105-amay communicate in radio frequency spectrum bands. Base station105-aand UE115-amay operate over a carrier bandwidth. In some cases, base station105-amay divide the carrier bandwidth into multiple (e.g., up to two, four, or more) BWPs, which may be configured with different properties (e.g., protocol features, numerologies, modulation schemes, physical channels, etc.). Each BWP may include a contiguous set of resource blocks on a carrier bandwidth, and the different BWPs may or may not be contiguous in frequency (e.g., each BWP may be adjacent in frequency to at least one other BWP, or some BWPs may have gaps or guardbands to adjacent BWPs). In some cases, BWPs may be defined for NR carriers, while LTE carriers may divide frequency into different regions but may not vary some properties (e.g., numerology) across regions. In some cases, LTE carriers may have a lower maximum carrier bandwidth than NR carriers, and thus one NR carrier may span one or more LTE carriers, and one or more NR BWPs may correspond in frequency to one or more LTE carriers. In some aspects, each BWP or subset of BWPs may be configured for one or more RATs. A first subset of BWPs may be dedicated for use with a first RAT (e.g., NR) and a second subset of BWPs may be dynamically shared between the first RAT and a second RAT (e.g., LTE) by employing DSS. Base station105-amay communicate with UE115-aover communication link205-a, which may be an NR communication link. Base station105-amay configure UE115-awith a number of BWPs and may activate a specific BWP for UE115-ato use. UE115-amay communicate in the activated BWP according to the configured parameters for communicating in each of the BWPs. Base station105-amay transmit an indication of the activated BWP to UE115-aover communication link205-a(e.g., in a downlink channel) in a BWP activation indication210(e.g., in dedicated radio resource control (RRC) signaling, a downlink control information (DCI) message, etc.). In some aspects, base station105-amay assign a BWP based on capabilities of UE115-a, capabilities of other UEs115operating within coverage area110-a, a network load balance, or some combination thereof. In some cases, base station105-amay perform BWP activation differently based on load balance and/or capability information. Wireless devices operating in wireless communications system200, such as UE115-a, may be capable of rate matching between different RATs. 
In one aspect, UE115-amay use a first RAT but may be capable of rate matching signals using the first RAT with signals or channels (e.g., CRS, a control channel) for a second RAT. Rate matching between RATs may allow UEs115communicating with base station105-ato use different physical layer technologies with different transmission rates more efficiently, which may reduce overhead and increase throughput. If UE115-ais not implementing rate matching (e.g., if UE115-ais an LTE UE incapable of rate matching between NR messages and LTE CRSs or if rate matching is otherwise disabled at UE115-a), there may be fewer resource elements available on which to schedule transmissions (e.g., physical downlink shared channel (PDSCH) messages) for UE115-a. In one aspect, a UE115not implementing rate matching may communicate using a first RAT (e.g., NR) in one or more resources time division multiplexed (TDMed) with resources used for a second RAT (e.g., LTE). That is, the one or more resources used for the first RAT may include transmission time intervals (TTIs)—such as symbols—in which, across all sub-carriers of the one or more resources (e.g., for a particular BWP), there are no communications scheduled for the second RAT, such as LTE CRSs. In contrast, a UE115implementing rate matching between the first RAT and the second RAT may communicate using the first RAT (e.g., NR) in resources frequency division multiplexed (FDMed) with resources used for the second RAT (e.g., LTE). Accordingly, base station105-amay have more flexibility in scheduling UEs115that support rate matching for communications in a dynamically shared frequency spectrum than UEs115that are not using rate matching.
UE115-amay indicate its rate matching capability to base station105-aover communication link205-b(e.g., in a rate matching capability indication215). In some cases, the rate matching capability indication215(e.g., a bit indicating whether the UE115-ahas “rateMatchingLTE-CRS supported”) may be part of a capabilities report transmitted by UE115-a. Base station105-amay use the rate matching information to determine an appropriate BWP to activate for UE115-a(e.g., to increase throughput). In some aspects, if UE115-ais not capable of—or otherwise not implementing—rate matching, base station105-amay assign UE115-ato a BWP dedicated to a single RAT (e.g., NR). In BWPs dedicated for a single RAT (e.g., either NR or LTE), base station105-amay schedule resource elements of the first and second RATs in mutually exclusive times and frequencies (e.g., separate frequency bands or BWPs). As such, base station105-amay schedule communications for the UE115-ain any resources within the BWP without the UE115-aperforming rate matching with LTE communications. Alternatively, if UE115-asupports rate matching, base station105-amay prioritize activation and assignment of a BWP that supports DSS. In BWPs supporting DSS between multiple RATs (e.g., NR and LTE), base station105-amay schedule communications for the UE115-ain resource elements for the first RAT FDMed with resource elements for the second RAT (e.g., as the UE115-amay rate match the communications using the first RAT with communications using the second RAT).
Additionally or alternatively, base station105-amay determine a network load balance, and may use the network load balance to determine which BWP to assign to UE115-a. In some aspects, the assignment may be dynamic based on a traffic distribution between the first and second RATs and/or between the different BWPs.
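A minimal sketch of the capability-driven activation just described follows, together with a hook for the handover-frequency-based switching discussed in the paragraphs below. The Bwp record, the threshold value, and the function names are assumptions for illustration only, not the disclosure's implementation.

    # Illustrative sketch: activate a BWP based on the UE's reported
    # rate matching capability, and switch BWPs when handovers are frequent.
    # All names and thresholds are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Bwp:
        bwp_id: int
        supports_dss: bool   # True: shared NR/LTE; False: NR-dedicated

    def activate_bwp(bwps, ue_rate_matches_lte_crs: bool) -> Bwp:
        """Prefer a DSS BWP for UEs that can rate match NR around LTE CRS,
        otherwise an NR-dedicated BWP (load balancing could refine this)."""
        preferred = [b for b in bwps if b.supports_dss == ue_rate_matches_lte_crs]
        return (preferred or bwps)[0]

    def maybe_switch_bwp(bwps, current: Bwp, handovers_per_hour: float,
                         threshold: float = 10.0) -> Bwp:
        """Move a UE on an NR-dedicated (e.g., higher-frequency) BWP to a DSS
        BWP when its handover rate exceeds a (hypothetical) threshold."""
        if not current.supports_dss and handovers_per_hour > threshold:
            dss = [b for b in bwps if b.supports_dss]
            if dss:
                return dss[0]
        return current

    bwps = [Bwp(0, False), Bwp(1, True), Bwp(2, False), Bwp(3, True)]
    assert activate_bwp(bwps, ue_rate_matches_lte_crs=True).supports_dss
    assert maybe_switch_bwp(bwps, bwps[0], handovers_per_hour=25.0).supports_dss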
Base station105-amay determine the amount of traffic using each RAT. Additionally or alternatively, base station105-amay receive rate matching capability indications215from multiple UEs115operating within coverage area110-a. Base station105-amay activate a specific BWP for each UE115connected to the base station105-abased on the network load balancing calculation. In some aspects, base station105-amay match UEs115to BWPs such that the total throughput of the cell is maximized. To optimize the average throughput for any UE115, base station105-amay periodically switch a UE115from one BWP to another. In one aspect, base station105-amay periodically or aperiodically switch UE115-afrom an NR-dedicated BWP to a DSS-deployed BWP—or vice versa—to efficiently spread NR overhead amongst the available resource elements and/or resource blocks in the configured set of BWPs. In some cases, base station105-amay reassign UEs115operating within coverage area110-ato BWPs that more efficiently support their capabilities. In an aspect, base station105-amay switch UE115-afrom a BWP that supports DSS to a BWP that does not employ DSS. Base station105-amay use rate matching indications from UEs115in coverage area110-afor base station105-ain combination with the determined network load to assign or reassign UEs115to different configured BWPs. In some cases, base station105-amay identify that a UE115(e.g., UE115-a) assigned to a first BWP (e.g., an NR-dedicated BWP) is performing frequent handover procedures. This frequent handover may be due to increased mobility of the UE115, small cell sizes, or the like. In some aspects, the UE115may operate on a high frequency NR band (e.g., a frequency band greater than some threshold frequency). A high frequency band may correspond to a relatively small cell range, such that UEs115operating on such a high frequency band may frequently perform handover procedures to other cells (e.g., other cells with relatively small cell sizes). Base station105-amay determine that switching UE115-afrom the first BWP to a second BWP (e.g., a BWP corresponding to a relatively lower frequency band below the threshold frequency) may reduce the number of handover procedures. The first and second BWPs may be in the same or different carriers. In one aspect, the first BWP may be dedicated to a first RAT (e.g., NR), and the second BWP may support DSS between the first RAT and a second RAT (e.g., LTE). Switching UE115-afrom an NR-dedicated high frequency BWP to a low frequency BWP that supports DSS may support a greater cell size, allowing the UE115to use both RATs and reduce handover frequency (e.g., based on a relatively larger cell at the relatively lower frequency). Base station105-amay indicate the switch to UE115-aover communication link205-a. In some aspects, base station105-amay further switch UE115-afrom a first cell supporting a first carrier in a first frequency band to a second cell that supports a second carrier in a second frequency band. In one aspect, as described herein, UE115-amay be switched to a cell that is larger than the first cell, and thus UE115-amay be able to remain connected to that cell even in mobile conditions (e.g., for a longer time duration). FIG.3illustrates aspects of a carrier bandwidth300that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. Carrier bandwidth300may be an example of a carrier bandwidth used by a base station105as described with reference toFIGS.1and2. 
In some aspects, carrier bandwidth300may implement aspects of wireless communications systems100and200. Carrier bandwidth300may include a number of configured BWPs310(e.g., pre-configured BWPs310or BWPs310configured by a base station105). As illustrated, in some aspects, carrier bandwidth300may include up to four BWPs310. However, in some other aspects, carrier bandwidth300may include any number of BWPs310.
As described herein, a base station105may use carrier bandwidth300for communications with one or more UEs115. The base station may configure carrier bandwidth300with a number of BWPs310, where each BWP310may be configured with different parameters (e.g., numerologies, modulation schemes, physical channels, frequency ranges, etc.). Additionally, each BWP310may be configured to use a first protocol (e.g., NR), a second protocol (e.g., LTE), or both (e.g., by using DSS between the first and second protocols). In one aspect, BWPs310-aand310-cmay be configured to use NR and BWPs310-band310-dmay be configured to use DSS between NR and LTE.
As described with reference toFIG.2, a base station105may activate a specific BWP310for a UE115based on the network load, the rate matching capability indicated by the UE115, or some combination thereof. The base station may prioritize use of dedicated BWPs (e.g., BWPs310dedicated for a specific RAT, rather than supporting spectrum sharing between multiple RATs) for UEs115that are not implementing rate matching and use of DSS BWPs (e.g., BWPs310supporting spectrum sharing between multiple RATs) for UEs115that are capable of rate matching between the multiple supported RATs.
In one aspect, a base station105may receive an indication that a first UE115does not support rate matching. The base station105may activate an NR-dedicated BWP310-afor the first UE115based on the first UE's rate matching indication (e.g., indicating that the UE115does not support rate matching of NR communications with LTE communications). Operating in the NR-dedicated BWP310-amay allow the first UE115to avoid a large overhead and/or limited scheduling flexibility that may accompany a non-rate matching UE115operating in a DSS BWP310(e.g., based on LTE communications, such as LTE CRSs, being present in the DSS BWP310). The base station105may receive another indication that a second UE115is capable of rate matching and may activate a DSS BWP310-bfor the second UE115based on the second UE's rate matching indication (e.g., indicating that the UE115supports rate matching of NR communications with LTE communications). The second UE115may be scheduled with NR communications that are TDMed or FDMed with LTE communications in the DSS BWP310-bbased on the second UE115supporting rate matching of NR communications around LTE signals, such as LTE CRSs. In some aspects, the second UE115may be scheduled in TTIs that overlap with TTIs used for LTE communications and have LTE CRS present.
In another aspect, a base station105may reassign UEs115operating within the coverage area110of the base station105to BWPs310that more efficiently support the capabilities of the UEs115. In this manner, the base station105may intelligently match UEs115to appropriate BWPs310such that the total throughput of the cell across all UEs115is maximized. In some cases, the base station105may dynamically configure the BWPs310—such as which BWPs310support DSS and which BWPs310are dedicated for NR—based on the UEs115in the system (e.g., the current UEs115in the system, historical information about UEs115in the system, etc.).
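To make this dynamic configuration concrete, here is a small hedged sketch that sizes the DSS subset of BWPs from the observed UE mix, in the spirit of the qualitative description in the next paragraph. The heuristic, its bounds, and the function name are illustrative assumptions, not the disclosure's method.

    # Hypothetical heuristic: size the DSS subset of BWPs to the share of
    # attached UEs that are LTE UEs or NR UEs supporting LTE-CRS rate matching.

    def configure_bwps(num_bwps: int, num_dss_friendly_ues: int, num_ues: int):
        """Return per-BWP DSS flags; keep at least one NR-dedicated BWP and
        at least one DSS BWP so that both kinds of UEs can be served."""
        share = (num_dss_friendly_ues / num_ues) if num_ues else 0.5
        num_dss = max(1, min(num_bwps - 1, round(share * num_bwps)))
        return [i < num_dss for i in range(num_bwps)]

    # A cell where 6 of 8 UEs are LTE UEs or rate-matching NR UEs:
    flags = configure_bwps(4, 6, 8)          # -> [True, True, True, False]
    assert sum(flags) == 3                   # larger proportion configured for DSS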
In one aspect, if the UEs115in the system are predominantly LTE UEs115or UEs115supporting rate matching between NR and LTE, the base station105may configure a larger proportion of the BWPs310for DSS. In another aspect, if the UEs115in the system are predominantly NR UEs115not implementing rate matching between NR and LTE, the base station105may configure a larger proportion of the BWPs310as NR-dedicated BWPs310. FIG.4illustrates aspects of a process flow400that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. In some aspects, the process flow400may implement aspects of wireless communications systems100and200. Base station105-bmay support DSS between a first RAT (e.g., NR) and a second RAT (e.g., LTE) and may activate a BWP for UE115-bbased on a capability or mobility condition of UE115-b. Base station105-band UE115-bmay be examples of the corresponding wireless devices described with reference toFIGS.1through3. Alternative aspects of the following may be implemented, where some steps are performed in a different order than described or are not performed at all. In some cases, steps may include additional features not mentioned below, or further steps may be added. At405, base station105-bmay identify a configuration for a frequency spectrum. In some cases, the frequency spectrum may be pre-configured, and base station105-bmay indicate the pre-configuration to UE115-b. In some other cases, base station105-bmay dynamically or semi-statically configure the frequency spectrum (e.g., based on current or historical traffic information for the base station's cell). The configuration may include a set of BWPs (e.g., up to four BWPs) such that a first subset of the set of BWPs is dedicated for a first RAT (e.g., NR) and a second subset of the set of BWPs is dynamically shared between the first RAT and a second RAT (e.g., supporting DSS for both NR and LTE). At410, UE115-bmay transmit an indication of a rate matching capability of the UE115-b. Base station105-bmay receive the indication of the rate matching capability and, at415, base station105-bmay activate a BWP of the set of BWPs for UE115-bbased on the rate matching capability for the UE115-b. In some aspects, if UE115-bindicates an absence of support for rate matching communications for the first RAT (e.g., NR) with a CRS for the second RAT (e.g., LTE), base station105-bmay activate a BWP from the first subset of the set of BWPs (e.g., a BWP dedicated for the first RAT) based on the absence of support for rate matching. In some other aspects, if UE115-bindicates support for rate matching communications for the first RAT (e.g., NR) with a CRS for the second RAT (e.g., LTE), base station105-bmay activate a BWP from the second subset of the set of BWPs (e.g., a BWP supporting DSS) based on the support for rate matching. In some cases, base station105-bmay activate a specific BWP for a specific UE115further based on load balancing between a set of UEs115. At420, base station105-bmay transmit, to UE115-b, an indication of the activated BWP for communication. UE115-band base station105-bmay communicate with one another on the activated BWP. In some cases, at425, base station105-bmay identify that UE115-bperforms a number of handover procedures greater than a threshold number of handover procedures while communicating in the activated BWP. 
At430, base station105-bmay switch the UE115-bfrom the activated BWP to a second BWP of the set of BWPs based on the identified number of handover procedures. In one aspect, base station105-bmay switch UE115-bfrom a BWP that is dedicated for NR (e.g., a relatively high frequency band) to a BWP that supports DSS (e.g., a relatively low frequency band). At435, base station105-bmay transmit, to UE115-b, an indication of the second BWP for communication based on the switching. FIG.5illustrates aspects of a wireless communications system500that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. In some aspects, wireless communications system500may implement aspects of wireless communications systems100or200. In one aspect, wireless communications system500includes base station105-cand UE115-c, which may be examples of the corresponding devices as described with reference toFIGS.1and2. Base station105-cmay serve a geographic coverage area110-b. Base station105-cand UE115-cmay be capable of using one or more RATs (e.g., NR, LTE, etc.), which may share the same carrier frequency or frequency band. It is to be understood that references to specific RATs (e.g., LTE and NR) are provided for illustrative purposes only, and different RATs not specifically referred to herein may be used interchangeably with those described below. In some cases, UE115-cmay be capable of dual registration, such that UE115-cmay register with a cell for multiple RATs. Specifically, UE115-cmay include a radio frequency chain that may be capable of receiving and transmitting in multiple RATs. UE115-cmay use a communication link505to initiate a registration procedure510-ato begin communicating with base station105-c. The registration procedure510-amay be performed for a first cell on a first carrier supporting a first RAT (e.g., NR). During the registration procedure510-a, UE115-cmay receive information (e.g., a cell identifier) indicating that base station105-csupports DSS between the first RAT and a second RAT (e.g., LTE) in a frequency spectrum. UE115-cmay perform a second registration procedure510-bfor a second cell on a second carrier that supports the second RAT. In one aspect, the first cell and the second cell may correspond to a same cell (e.g., corresponding to a same frequency spectrum). However, in some cases, the first cell and the second cell may have different cell identifiers. In one aspect, the first cell may have an NR cell identifier, while the second cell may have an LTE cell identifier. In some aspects, the first carrier and the second carrier may be the same carrier or may be at least partially overlapping carriers (e.g., at least a portion of the first and second carriers may support DSS between the first RAT and the second RAT). In this way, UE115-cmay connect to a cell (e.g., supported by base station105-c) using multiple different RATs (e.g., both NR and LTE) and may switch between NR communications and LTE communications without detaching from the cell. In some cases, UE115-cmay communicate (e.g., receive) using NR and LTE concurrently. After performing dual registration, UE115-cmay be able to use the first and second RATs concurrently in a single carrier or in overlapping carriers. In one aspect, the NR carrier and the LTE carrier may span different frequency bands, where the NR carrier may span a first set of frequency resources and the LTE carrier may span a subset of the first set of frequency resources. 
In some cases, UE115-cmay use one RAT as a primary technology (e.g., NR) and the other RAT as a secondary technology (e.g., LTE). UE115-cmay perform communications (e.g., data communications) on the first carrier using the first RAT and may perform communications (e.g., voice communications, data communications) on the second carrier using the second RAT. In some aspects, after completing the registration, UE115-cmay switch to an idle mode in one or both RATs. In one aspect, UE115-cmay be engaged in data communications on the first RAT (e.g., NR) and may receive or originate a voice call (e.g., using voice over NR (VoNR)). Base station105-cmay seamlessly (e.g., with minimal or insignificant delay) direct the UE115-cto continue the voice call on resource blocks scheduled for the second RAT (e.g., using voice over LTE (VoLTE)). In some cases, base station105-cmay direct traffic associated with voice communications to LTE resources based on a threshold quality of service (QoS) for voice communications supported by LTE. UE115-cmay be in a multi-radio bearer mode and may avoid fallback procedures used to achieve voice service. Specifically, UE115-cmay continue to support data communications using NR while concurrently handling voice communications using LTE.
In some cases, the second carrier may at least partially overlap with the first carrier in the frequency spectrum (e.g., an LTE 20 megahertz (MHz) carrier may be a subset of an NR 100 MHz carrier). The subcarrier spacing of the first RAT may be an integral multiple of the subcarrier spacing of the second RAT (e.g., 15 kilohertz (kHz) for LTE, 15 kHz or 30 kHz for NR). UE115-cmay include one or more radio frequency chains tuned to the entire BWP for one or both of the RATs and may establish a first radio bearer for communicating with the base station105-con the first carrier and a second radio bearer for communicating with the base station105-con the second carrier. UE115-cmay be capable of distinguishing between signals of each RAT (e.g., based on different demodulation and decoding processes for the different radio bearers).
In some aspects, UE115-cmay store information indicating that base station105-csupports DSS between the first and second RATs. In one aspect, UE115-cmay cache, in local memory of the UE115-c, the cell identifier for base station105-cwith an indication of the DSS support. UE115-cmay prioritize connecting to base station105-cover other base stations105that do not support DSS when performing future acquisition processes. Additionally or alternatively, UE115-cmay automatically perform dual registration (e.g., both registration procedures510-aand510-b) when reconnecting to base station105-cbased on the stored indication of DSS support (e.g., without waiting for another indication from base station105-cthat base station105-csupports dual registration).
DSS may enable base station105-cto multiplex resource elements for both RATs in the frequency domain on the same spectrum. In some aspects, the control channel for the first RAT may be used to schedule data communications for both the first and second RATs. Having performed dual registration procedures510-aand510-b, UE115-c(e.g., in standalone mode) may receive control messages for both RATs through a common control channel (e.g., in a control channel for the first RAT on the first carrier), avoiding separate control channels for each RAT. Each control message may indicate a set of resources (e.g., for data communications) for the respective RAT in the same carrier bandwidth.
A portion of resources may be scheduled for each RAT on the same carrier bandwidth. This may enable devices supporting multiple RATs to have carrier aggregation-like advantages with a single radio frequency transceiver or chain. In some aspects, UE115-cmay monitor multiple carriers for different RATs using a single receive chain by receiving a signal, decoding the signal as if the signal corresponds to an LTE signal, and further decoding the signal as if the signal corresponds to an NR signal. As such, UE115-cmay use a single radio frequency transceiver and/or chain to concurrently monitor for NR messages and LTE messages (e.g., in a common control channel for both NR and LTE scheduling). In some cases, UE115-cmay display an icon to inform a user of the DSS capabilities of the UE115-c. In one aspect, UE115-cmay display an icon (e.g., a merged icon of both 4G and 5G) indicating that DSS is supported, rate matching of NR and LTE is supported, dual registration is supported, or some combination thereof. In this way, a user may easily differentiate between UEs115supporting DSS operations and UEs115not supporting DSS operations. In some cases, the UE115-cmay display the icon when DSS is supported by both the UE115-cand the currently connected base station105-c, indicating that the UE115-cis currently gaining the advantages of both LTE and NR communications. FIG.6illustrates aspects of a carrier bandwidth600that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. In some aspects, carrier bandwidth600may implement aspects of wireless communications systems100,200, or500. Carrier bandwidth600may be an example of a BWP310used by a base station105and/or a UE115as described with reference toFIG.3. As such, carrier bandwidth600may be used for wireless communications in various frequency bands. Carrier bandwidth600may support DSS. As described with reference toFIGS.3and5, a base station105(e.g., supporting a cell) may support dual registration within the carrier bandwidth600. In some aspects, carrier bandwidth600may include an NR carrier605and an LTE carrier610. In some cases, the NR carrier605may encapsulate the LTE carrier610on a smaller sub-band. A UE115that successfully performs a dual registration procedure on the NR carrier605and the LTE carrier610may concurrently communicate on both carriers with the base station105(e.g., in a same frequency spectrum). In one aspect, a UE115may be tuned to the full carrier bandwidth600to monitor for both NR messages (e.g., in the NR carrier605) and LTE messages (e.g., in the LTE carrier610). Some resource blocks in the carrier bandwidth600may be scheduled for NR communications and some resource blocks in the carrier bandwidth600may be scheduled for LTE communications. In some cases, the base station105may perform the scheduling in a single control channel615(e.g., an NR control channel) that can schedule both NR and LTE communications. In one aspect, control messages in DCI in control channel615may include an indicator of a RAT for a resource allocation, and a UE115receiving a control message may process the resource allocation in accordance with the RAT indicator. The RAT indicator may be, in one aspect, a bit in the DCI or a radio network temporary identifier (RNTI) associated with the RAT. In one aspect, the UE115may receive, over the single control channel615, an NR message scheduling specific PDSCH resources for NR and an LTE message scheduling specific PDSCH resources for LTE.
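The RAT indicator described above (a DCI bit or an RNTI associated with the RAT) lends itself to a simple dispatch rule. The Python sketch below shows one hypothetical way a UE might resolve the RAT for each grant received on the single control channel615; the RNTI pool values and field names are assumptions for illustration, not values drawn from the disclosure or from 3GPP specifications.

```python
# Hypothetical dispatch of control messages from the single control channel
# to a per-RAT grant handler, keyed on the RAT indicator described above
# (an explicit DCI bit or the RNTI associated with the grant). Field names
# and RNTI values are illustrative assumptions only.
from dataclasses import dataclass
from typing import Optional

LTE_RNTI_POOL = {0x4C54}   # assumed RNTI value(s) reserved for LTE grants
NR_RNTI_POOL = {0x4E52}    # assumed RNTI value(s) reserved for NR grants


@dataclass
class ControlMessage:
    rnti: int
    rat_flag: Optional[int]  # optional explicit DCI bit: 0 = NR, 1 = LTE
    resource_blocks: range   # scheduled PDSCH resources in the shared carrier


def rat_for(msg: ControlMessage) -> str:
    """Resolve the RAT for a grant: prefer the explicit bit, else the RNTI."""
    if msg.rat_flag is not None:
        return "LTE" if msg.rat_flag == 1 else "NR"
    if msg.rnti in LTE_RNTI_POOL:
        return "LTE"
    if msg.rnti in NR_RNTI_POOL:
        return "NR"
    raise ValueError("grant does not match either RAT")


def process(msg: ControlMessage) -> None:
    rat = rat_for(msg)
    # Decode the allocation with the processing chain of the matching RAT.
    print(f"{rat} grant on RBs {msg.resource_blocks.start}-{msg.resource_blocks.stop - 1}")


process(ControlMessage(rnti=0x4E52, rat_flag=None, resource_blocks=range(0, 51)))
process(ControlMessage(rnti=0x1234, rat_flag=1, resource_blocks=range(51, 100)))
```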
In another aspect, a single control message may schedule both NR and LTE PDSCH resources. In some cases, the control channel615may span a subset of the NR carrier605frequency and a number of TTIs (e.g., one symbol, two symbols, etc.). The control channel615may overlap in frequency with LTE carrier610or may not overlap in frequency with LTE carrier610, in some cases. FIG.7illustrates aspects of a process flow700that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. In some aspects, the process flow700may implement aspects of wireless communications systems100,200, and500. Base station105-dmay support DSS between a first RAT (e.g., NR) and a second RAT (e.g., LTE) and may support dual registration by UE115-dfor both RATs (e.g., NR and LTE) on a cell. Base station105-dand UE115-dmay be examples of the corresponding wireless devices described with reference toFIGS.1through6. Alternative aspects of the following may be implemented, where some steps are performed in a different order than described or are not performed at all. In some cases, steps may include additional features not mentioned below, or further steps may be added. At705, base station105-dmay configure a frequency spectrum for dynamic sharing between a first RAT (e.g., NR) and a second RAT (e.g., LTE). At710, UE115-dmay perform a first registration procedure with base station105-don a first cell supporting a first RAT in the frequency spectrum. The first registration procedure may include, for example, authentication and security (e.g., key provisioning) and establishment of a default bearer for the first RAT. In the first registration procedure, the UE115-dmay send an attach request to the base station105-dwith an identifier of the UE115-d(e.g., subscriber identity). The base station105-dmay confirm (e.g., via a core network) the subscriber identity and may establish an active bearer context for the UE115-don the first RAT. The base station105-dmay respond with an RRC reconfiguration complete message to complete the first registration procedure. At715, UE115-dmay determine that base station105-dsupports dynamic sharing of the frequency spectrum between the first RAT and the second RAT. In one aspect, as part of the first registration procedure, UE115-dmay receive a cell identifier for base station105-d, where the cell identifier indicates that the first cell supports dual registration in the dynamically shared frequency spectrum. In some cases, UE115-dmay cache, in local memory at the UE115-d, an indication that the base station105-dsupports the dynamic sharing of the frequency spectrum between the first RAT and the second RAT. At720, UE115-dmay perform a second registration procedure with base station105-don a second cell (e.g., associated with the first cell) supporting the second RAT in the frequency spectrum based on base station105-dsupporting DSS between the first and second RATs. The second registration procedure may include, for example, authentication and security (e.g., key provisioning) and establishment of a default bearer for the second RAT. In the second registration procedure, the UE115-dmay send an attach request to the base station105-dwith an identifier of the UE115-d(e.g., subscriber identity). The base station105-dmay confirm (e.g., via a core network) the subscriber identity and may establish an active bearer context for the UE115-don the second RAT. 
The base station105-dmay respond with an RRC reconfiguration complete message to complete the second registration procedure. In some aspects, the first registration procedure may be performed on a first carrier and the second registration procedure may be performed on a second carrier. Based on the DSS, the second carrier may at least partially overlap with the first carrier in the frequency spectrum. In some examples, the first carrier and the second carrier may correspond to the same carrier. Additionally or alternatively, the first cell and the second cell may correspond to the same cell. At725, UE115-dand base station105-dmay communicate based on the first registration procedure and the second registration procedure (e.g., as part of a dual registration procedure). In some aspects, UE115-dmay perform data communications (e.g., on the first carrier) using the first RAT and may perform, at least partially concurrent to the data communications, voice communications (e.g., on the second carrier) using the second RAT. In some cases, base station105-dmay schedule data communications for both RATs (e.g., NR and LTE) using a single control channel. In one aspect, UE115-dmay receive, in a control channel for the first RAT (e.g., NR) on the first carrier, one or more control messages indicating a first set of resources in the frequency spectrum for data communications using the first RAT and a second set of resources in the frequency spectrum for data communications using the second RAT. In some aspects, the first carrier is the primary carrier for UE115-d, and UE115-dmay use a single radio frequency transceiver to receive the first set of resources and the second set of resources. FIG.8shows a block diagram800of a device805that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. The device805may be an example of aspects of a UE115as described herein. The device805may include a receiver810, a communications manager815, and a transmitter820. The device805may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver810may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to enhancements for multiple radio protocol dynamic spectrum sharing, etc.). Information may be passed on to other components of the device805. The receiver810may be an example of aspects of the transceiver1120described with reference toFIG.11. The receiver810may utilize a single antenna or a set of antennas. The communications manager815may register, with a base station, on a first cell supporting a first RAT in a frequency spectrum and may register, with the base station, on a second cell supporting a second RAT different from the first RAT and at least partially overlapping with the first cell in the frequency spectrum based on the base station supporting dynamic sharing of the frequency spectrum between the first RAT and the second RAT. The communications manager815may further communicate with the base station based on the registering on the first cell supporting the first RAT and registering on the second cell supporting the second RAT. The communications manager815may be an example of aspects of the communications manager1110described herein. The communications manager815as described herein may be implemented to realize one or more potential advantages. 
In some aspects, a device805may improve voice support based on the dual registration. In one aspect, dual registration may allow the device805to refrain from performing full LTE fallback procedures, mitigating the latency involved in switching from NR to LTE resources during a voice call. Accordingly, the network may provide VoNR-like performance and latency over VoLTE resources in a DSS cell based on the dual registration. Additionally or alternatively, using a single control channel for dual registration control signaling (e.g., for scheduling both LTE and NR data transmissions) may improve UE performance and reduce control signaling overhead. Based on registering on the first cell and registering on the second cell for a first RAT and a second RAT respectively, as described herein, a processor of the device805may be associated with fewer processing computations and less processing time, which may result in improved power savings and increased battery life. In one aspect, by reducing the number of LTE fallback procedures, the processor may perform fewer connection procedures. Additionally or alternatively, by using a single control channel for LTE and NR scheduling, the UE may reduce the number of control channel resources to monitor after performing dual registration. As such, the device805may reduce the number of times that the device805ramps up processing units controlling the receiver810, the communications manager815, the transmitter820, or a combination thereof. The communications manager815, or its sub-components, may be implemented in hardware, code (e.g., software or firmware) executed by a processor, or any combination thereof. If implemented in code executed by a processor, the functions of the communications manager815, or its sub-components may be executed by a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in the present disclosure. The communications manager815, or its sub-components, may be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations by one or more physical components. In some aspects, the communications manager815, or its sub-components, may be a separate and distinct component in accordance with various aspects of the present disclosure. In some aspects, the communications manager815, or its sub-components, may be combined with one or more other hardware components, including but not limited to an input/output (I/O) component, a transceiver, a network server, another computing device, one or more other components described in the present disclosure, or a combination thereof in accordance with various aspects of the present disclosure. The transmitter820may transmit signals generated by other components of the device805. In some aspects, the transmitter820may be collocated with a receiver810in a transceiver module. In one aspect, the transmitter820may be an example of aspects of the transceiver1120described with reference toFIG.11. The transmitter820may utilize a single antenna or a set of antennas. FIG.9shows a block diagram900of a device905that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. 
The device905may be an example of aspects of a device805or a UE115as described herein. The device905may include a receiver910, a communications manager915, and a transmitter935. The device905may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver910may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to enhancements for multiple radio protocol dynamic spectrum sharing, etc.). Information may be passed on to other components of the device905. The receiver910may be an example of aspects of the transceiver1120described with reference toFIG.11. The receiver910may utilize a single antenna or a set of antennas. The communications manager915may be an example of aspects of the communications manager815as described herein. The communications manager915may include a dual registration component920, a DSS support identifier925, and a communication component930. The communications manager915may be an example of aspects of the communications manager1110described herein. The dual registration component920may register, with a base station, on a first cell supporting a first RAT in a frequency spectrum. In some examples, a DSS support identifier925may determine that the base station supports dynamic sharing of the frequency spectrum between the first RAT and a second RAT. The dual registration component920may further register, with the base station, on a second cell supporting the second RAT different from the first RAT and at least partially overlapping with the first cell in the frequency spectrum based on the base station supporting dynamic sharing of the frequency spectrum between the first RAT and the second RAT. In one aspect, the first RAT may be a 5th generation radio technology (e.g., 5G NR), and the second RAT may be a 4th generation radio technology (e.g., LTE, LTE-A, LTE-A Pro). The communication component930may communicate with the base station based on the registering on the first cell supporting the first RAT and the registering on the second cell supporting the second RAT. The transmitter935may transmit signals generated by other components of the device905. In some aspects, the transmitter935may be collocated with a receiver910in a transceiver module. In one aspect, the transmitter935may be an example of aspects of the transceiver1120described with reference toFIG.11. The transmitter935may utilize a single antenna or a set of antennas. FIG.10shows a block diagram1000of a communications manager1005that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. The communications manager1005may be an example of aspects of a communications manager815, a communications manager915, or a communications manager1110described herein. The communications manager1005may include a dual registration component1010, a DSS support identifier1015, a communication component1020, a caching component1025, a control messaging component1030, and a user interface component1035. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses). The communications manager1005may be implemented by a UE115. The dual registration component1010may register, with a base station, on a first cell supporting a first RAT in a frequency spectrum. 
In some examples, the DSS support identifier1015may determine that the base station supports dynamic sharing of the frequency spectrum between the first RAT and a second RAT. In one aspect, the first RAT may be a 5th generation radio technology (e.g., 5G NR), and the second RAT may be a 4th generation radio technology (e.g., LTE, LTE-A, LTE-A Pro). In some aspects, the DSS support identifier1015may receive a cell identifier for the first cell, where the determining is based on the cell identifier. For example, the DSS support identifier1015may store an association (e.g., a lookup table, a formula) between one or more cell identifiers and support for DSS between the first RAT and the second RAT. The DSS support identifier1015may receive the cell identifier for the first cell, the second cell, or both and may determine whether the first cell, the second cell, or both support dynamic sharing of the frequency spectrum between the first RAT and a second RAT based on an association stored at the UE115. In some aspects, the dual registration component1010may further register, with the base station, on a second cell supporting a second RAT different from the first RAT and at least partially overlapping with the first cell in the frequency spectrum (e.g., where registering on the second cell in addition to registering on the first cell is based on the base station supporting dynamic sharing of the frequency spectrum between the first RAT and the second RAT). In one aspect, the first RAT may be a 5th generation radio technology (e.g., 5G NR), and the second RAT may be a 4th generation radio technology (e.g., LTE, LTE-A, LTE-A Pro). In some cases, registering on the first cell and registering on the second cell may be part of a dual registration procedure. In some cases, the first cell and the second cell may correspond to a same cell. The communication component1020may communicate with the base station based on the registering on the first cell and the registering on the second cell. In some aspects, the communication component1020may perform data communications on the first cell using the first RAT and may perform, at least partially concurrent to performing the data communications, voice communications on the second cell using the second RAT. In some aspects, the communication component1020may establish a first radio bearer for the communicating with the base station using the first RAT and a second radio bearer for the communicating with the base station using the second RAT. In some aspects, the communication component1020may communicate with the base station on a first carrier using the first RAT based on the registering on the first cell and communicate with the base station on a second carrier using the second RAT based on the registering on the second cell. In some cases, the first carrier and the second carrier may be a same carrier. The caching component1025may cache, in local memory at the UE, an indication that the base station supports the dynamic sharing of the frequency spectrum between the first RAT and the second RAT. The control messaging component1030may receive, in a control channel for the first RAT, a control message indicating a first set of resources in the frequency spectrum for communications using the first RAT and a second set of resources in the frequency spectrum for communications using the second RAT. In some aspects, the control messaging component1030may use a single radio frequency transceiver to receive the first set of resources and the second set of resources. 
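The association maintained by the DSS support identifier1015between cell identifiers and DSS support, described above as a lookup table or a formula, may be pictured with a short Python sketch. Both the table contents and the reserved-bit rule below are assumptions made purely for illustration; the disclosure does not specify either form.

```python
# Sketch of the DSS support identifier's stored association between cell
# identifiers and DSS support. Both the lookup table contents and the
# reserved-bit "formula" are assumptions made for illustration.
DSS_CELL_TABLE = {0x1A2B: True, 0x0FFF: False}   # explicit lookup entries
DSS_FLAG_BIT = 1 << 15                           # hypothetical reserved bit


def cell_supports_dss(cell_id: int) -> bool:
    """Prefer an explicit table entry; fall back to the formula."""
    if cell_id in DSS_CELL_TABLE:
        return DSS_CELL_TABLE[cell_id]
    return bool(cell_id & DSS_FLAG_BIT)


assert cell_supports_dss(0x1A2B)        # table hit
assert cell_supports_dss(0x8001)        # formula: reserved bit set
assert not cell_supports_dss(0x0001)    # neither
```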
In some cases, the control message may be received on a primary carrier corresponding to the first RAT. The user interface component1035may display, in a user interface of the UE, an icon indicating that the UE supports the dynamic sharing of the frequency spectrum between the first RAT and the second RAT. FIG.11shows a diagram of a system1100including a device1105that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. The device1105may be an example of or include the components of device805, device905, or a UE115as described herein. The device1105may include components for bi-directional voice and data communications including components for transmitting and receiving communications, including a communications manager1110, an I/O controller1115, a transceiver1120, an antenna1125, memory1130, and a processor1140. These components may be in electronic communication via one or more buses (e.g., bus1145). The communications manager1110may register, with a base station, on a first cell supporting a first RAT in a frequency spectrum, register, with the base station, on a second cell supporting a second RAT different from the first RAT and at least partially overlapping with the first cell in the frequency spectrum based on the base station supporting dynamic sharing of the frequency spectrum between the first RAT and the second RAT, and communicate with the base station based on the registering on the first cell supporting the first RAT and the registering on the second cell supporting the second RAT. In one aspect, the first RAT may be a 5th generation radio technology (e.g., 5G NR), and the second RAT may be a 4th generation radio technology (e.g., LTE, LTE-A, LTE-A Pro). The I/O controller1115may manage input and output signals for the device1105. The I/O controller1115may also manage peripherals not integrated into the device1105. In some cases, the I/O controller1115may represent a physical connection or port to an external peripheral. In some cases, the I/O controller1115may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. In other cases, the I/O controller1115may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller1115may be implemented as part of a processor. In some cases, a user may interact with the device1105via the I/O controller1115or via hardware components controlled by the I/O controller1115. The transceiver1120may communicate bi-directionally, via one or more antennas, wired, or wireless links as described above. In one aspect, the transceiver1120may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver1120may also include a modem to modulate the packets and provide the modulated packets to the antennas for transmission, and to demodulate packets received from the antennas. In some cases, the wireless device may include a single antenna1125. However, in some cases the device may have more than one antenna1125, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The memory1130may include random-access memory (RAM) and read-only memory (ROM). The memory1130may store computer-readable, computer-executable code1135including instructions that, when executed, cause the processor to perform various functions described herein. 
In some cases, the memory1130may contain, among other things, a basic I/O system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor1140may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a central processing unit (CPU), a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor1140may be configured to operate a memory array using a memory controller. In other cases, a memory controller may be integrated into the processor1140. The processor1140may be configured to execute computer-readable instructions stored in a memory (e.g., the memory1130) to cause the device1105to perform various functions (e.g., functions or tasks supporting enhancements for multiple radio protocol DSS). The code1135may include instructions to implement aspects of the present disclosure, including instructions to support wireless communications. The code1135may be stored in a non-transitory computer-readable medium such as system memory or other type of memory. In some cases, the code1135may not be directly executable by the processor1140but may cause a computer (e.g., when compiled and executed) to perform functions described herein. FIG.12shows a block diagram1200of a device1205that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. The device1205may be an example of aspects of a base station105as described herein. The device1205may include a receiver1210, a communications manager1215, and a transmitter1220. The device1205may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver1210may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to enhancements for multiple radio protocol dynamic spectrum sharing, etc.). Information may be passed on to other components of the device1205. The receiver1210may be an example of aspects of the transceiver1520described with reference toFIG.15. The receiver1210may utilize a single antenna or a set of antennas. In one aspect, the communications manager1215may configure a set of BWPs such that a first subset of the set of BWPs is dedicated for a first RAT and a second subset of the set of BWPs is dynamically shared between the first RAT and a second RAT, receive, from a UE, an indication of a rate matching capability of the UE, activate, for the UE, a BWP of the set of BWPs for communication based on the rate matching capability of the UE and the configuring, and transmit, to the UE, an indication of the activated BWP for communication. In one aspect, the first RAT may be a 5th generation radio technology (e.g., 5G NR), and the second RAT may be a 4th generation radio technology (e.g., LTE, LTE-A, LTE-A Pro). 
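To make the first aspect above concrete, the following Python sketch shows one hypothetical form of the capability-gated BWP activation: a UE lacking support for rate matching around the LTE CRS is confined to the dedicated subset, while a capable UE may be placed on the shared subset. The BWP identifiers, the capability flag name, and the function names are illustrative assumptions, not an implementation of the disclosure.

```python
# Sketch of the BWP activation logic described in the first aspect above:
# UEs that cannot rate match around the LTE CRS are kept on NR-dedicated
# BWPs, while CRS-rate-matching-capable UEs may be placed on shared BWPs.
from dataclasses import dataclass
from typing import List


@dataclass
class Bwp:
    bwp_id: int
    shared_with_lte: bool  # True if dynamically shared between NR and LTE


def activate_bwp(bwps: List[Bwp], ue_supports_crs_rate_matching: bool) -> Bwp:
    """Pick a BWP from the subset matching the UE's rate matching capability."""
    wanted = ue_supports_crs_rate_matching
    candidates = [b for b in bwps if b.shared_with_lte == wanted]
    if not candidates:
        raise RuntimeError("no BWP configured for this capability class")
    return candidates[0]  # a load balancing calculation could refine this choice


bwps = [Bwp(0, shared_with_lte=False), Bwp(1, shared_with_lte=True)]
print(activate_bwp(bwps, ue_supports_crs_rate_matching=False).bwp_id)  # 0: dedicated
print(activate_bwp(bwps, ue_supports_crs_rate_matching=True).bwp_id)   # 1: shared
```

The comment on the final selection hints at where the load balancing calculation, discussed below with reference toFIG.14, could refine the choice among candidate BWPs.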
In another aspect, the communications manager1215may communicate with a UE in a first BWP of a set of BWPs dedicated for a first RAT, identify that the UE performs a number of handover procedures greater than a threshold number of handover procedures while communicating in the first BWP, switch, based on the identifying, the UE from the first BWP to a second BWP of the set of BWPs different from the first BWP and dynamically shared between the first RAT and a second RAT, and transmit, to the UE, an indication of the second BWP for communication based on the switching. In yet another aspect, the communications manager1215may configure a frequency spectrum for dynamic sharing between a first RAT and a second RAT, register a UE on a first cell supporting the first RAT in the frequency spectrum, register the UE on a second cell supporting the second RAT different from the first RAT and at least partially overlapping with the first cell in the frequency spectrum based on the configuring, and communicate with the UE based on the registering the UE on the first cell supporting the first RAT and the registering the UE on the second cell supporting the second RAT. The communications manager1215may be an example of aspects of the communications manager1510described herein. The communications manager1215as described herein may be implemented to realize one or more potential advantages. In some aspects, a device1205may efficiently allocate the available resources by activating BWPs for UEs based on the rate matching capabilities of the UEs (e.g., along with load balancing). Specifically, if a UE does not support rate matching, there may be a significant overhead and reduction in the available resources for NR data scheduling in a DSS frequency band. As such, activating a BWP dedicated for NR for such a UE may greatly improve the efficiency of the spectrum usage. In some aspects, a device1205may improve mobility performance of UEs when detecting multiple handovers by switching the UEs to DSS-supported frequency bands. In some aspects, the device1205may improve voice support for UEs based on a dual registration procedure. In one aspect, dual registration may allow the device1205to provide VoNR-like performance and latency over VoLTE resources in a DSS cell. Additionally or alternatively, using a single control channel for dual registration control signaling (e.g., for scheduling both LTE and NR data transmissions) may reduce control signaling overhead. The communications manager1215, or its sub-components, may be implemented in hardware, code (e.g., software or firmware) executed by a processor, or any combination thereof. If implemented in code executed by a processor, the functions of the communications manager1215, or its sub-components may be executed by a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in the present disclosure. The communications manager1215, or its sub-components, may be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations by one or more physical components. In some aspects, the communications manager1215, or its sub-components, may be a separate and distinct component in accordance with various aspects of the present disclosure. 
In some aspects, the communications manager1215, or its sub-components, may be combined with one or more other hardware components, including but not limited to an I/O component, a transceiver, a network server, another computing device, one or more other components described in the present disclosure, or a combination thereof in accordance with various aspects of the present disclosure. The transmitter1220may transmit signals generated by other components of the device1205. In some aspects, the transmitter1220may be collocated with a receiver1210in a transceiver module. In one aspect, the transmitter1220may be an example of aspects of the transceiver1520described with reference toFIG.15. The transmitter1220may utilize a single antenna or a set of antennas. FIG.13shows a block diagram1300of a device1305that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. The device1305may be an example of aspects of a device1205or a base station105as described herein. The device1305may include a receiver1310, a communications manager1315, and a transmitter1360. The device1305may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver1310may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to enhancements for multiple radio protocol dynamic spectrum sharing, etc.). Information may be passed on to other components of the device1305. The receiver1310may be an example of aspects of the transceiver1520described with reference toFIG.15. The receiver1310may utilize a single antenna or a set of antennas. The communications manager1315may be an example of aspects of the communications manager1215as described herein. The communications manager1315may include a BWP configuration component1320, a rate matching capability component1325, a BWP activation component1330, a communication component1335, a handover identification component1340, a BWP switching component1345, a DSS configuration component1350, and a dual registration component1355. The communications manager1315may be an example of aspects of the communications manager1510described herein. The BWP configuration component1320may configure a set of BWPs such that a first subset of the set of BWPs is dedicated for a first RAT and a second subset of the set of BWPs is dynamically shared between the first RAT and a second RAT. In one aspect, the first RAT may be a 5th generation radio technology (e.g., 5G NR), and the second RAT may be a 4th generation radio technology (e.g., LTE, LTE-A, LTE-A Pro). The rate matching capability component1325may receive, from a UE, an indication of a rate matching capability of the UE. The BWP activation component1330may activate, for the UE, a BWP of the set of BWPs for communication based on the rate matching capability of the UE and the configuring and transmit, to the UE, an indication of the activated BWP for communication. The communication component1335may communicate with a UE in a first BWP of a set of BWPs dedicated for a first RAT. The handover identification component1340may identify that the UE performs a number of handover procedures greater than a threshold number of handover procedures while communicating in the first BWP. 
The BWP switching component1345may switch, based on the identifying, the UE from the first BWP to a second BWP of the set of BWPs different from the first BWP and dynamically shared between the first RAT and a second RAT and transmit, to the UE, an indication of the second BWP for communication based on the switching. The DSS configuration component1350may configure a frequency spectrum for dynamic sharing between a first RAT and a second RAT. The dual registration component1355may register a UE on a first cell supporting the first RAT in the frequency spectrum and register the UE on a second cell supporting the second RAT different from the first RAT and at least partially overlapping with the first cell in the frequency spectrum based on the configuring. The communication component1335may communicate with the UE based on the registering the UE on the first cell supporting the first RAT and the registering the UE on the second cell supporting the second RAT. The transmitter1360may transmit signals generated by other components of the device1305. In some aspects, the transmitter1360may be collocated with a receiver1310in a transceiver module. In one aspect, the transmitter1360may be an example of aspects of the transceiver1520described with reference toFIG.15. The transmitter1360may utilize a single antenna or a set of antennas. FIG.14shows a block diagram1400of a communications manager1405that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. The communications manager1405may be an example of aspects of a communications manager1215, a communications manager1315, or a communications manager1510described herein. The communications manager1405may include a BWP configuration component1410, a rate matching capability component1415, a BWP activation component1420, a load balancing component1425, a BWP switching component1430, a communication component1435, a handover identification component1440, a DSS configuration component1445, a dual registration component1450, a DSS support indicator1455, and a control messaging component1460. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses). The communications manager1405may be implemented by a base station105. In one aspect, the BWP configuration component1410may configure a set of BWPs such that a first subset of the set of BWPs is dedicated for a first RAT and a second subset of the set of BWPs is dynamically shared between the first RAT and a second RAT. In one aspect, the first RAT may be a 5th generation radio technology (e.g., 5G NR), and the second RAT may be a 4th generation radio technology (e.g., LTE, LTE-A, LTE-A Pro). The rate matching capability component1415may receive, from a UE, an indication of a rate matching capability of the UE. The BWP activation component1420may activate, for the UE, a BWP of the set of BWPs for communication based on the rate matching capability of the UE and the configuring. The BWP activation component1420may transmit, to the UE, an indication of the activated BWP for communication. In some aspects, the indication of the rate matching capability of the UE indicates an absence of support for rate matching with a CRS for the second RAT (e.g., LTE). In some such aspects, the BWP activation component1420may activate, for the UE, the BWP from the first subset of the set of BWPs based on the absence of support for rate matching with the CRS for the second RAT. 
In some other aspects, the indication of the rate matching capability of the UE indicates support for rate matching with a CRS for the second RAT (e.g., LTE). In some such other aspects, the BWP activation component1420may activate, for the UE, the BWP from the second subset of the set of BWPs based on the UE supporting the rate matching with the CRS for the second RAT. The load balancing component1425may receive, from a set of additional UEs, a set of additional indications of rate matching capabilities for the set of additional UEs. In some aspects, the load balancing component1425may perform a load balancing calculation for the UE and the set of additional UEs across the set of BWPs, where the activating the BWP of the set of BWPs for communication is further based on the load balancing calculation. In some cases, the activated BWP for communication is an example of a first BWP of the set of BWPs. The BWP switching component1430may switch the UE from the first BWP to a second BWP of the set of BWPs different from the first BWP based on an average throughput for the UE. In some aspects, the BWP switching component1430may periodically switch the UE between the first BWP and the second BWP. The communication component1435may communicate with the UE in the activated BWP for communication. In another aspect, the communication component1435may communicate with a UE in a first BWP of a set of BWPs dedicated for a first RAT. In some cases, the first RAT may be 5G NR. The handover identification component1440may identify that the UE performs a number of handover procedures greater than a threshold number of handover procedures while communicating in the first BWP. The BWP switching component1430may switch, based on the identifying, the UE from the first BWP to a second BWP of the set of BWPs different from the first BWP and dynamically shared between the first RAT and a second RAT. In some cases, the second RAT may be LTE. The BWP switching component1430may transmit, to the UE, an indication of the second BWP for communication based on the switching. In some aspects, the BWP switching component1430may switch the UE from a first cell supporting the first BWP dedicated for the first RAT and a first carrier in a first frequency band to a second cell supporting the second BWP dynamically shared between the first RAT and the second RAT and a second carrier in a second frequency band that is different from the first frequency band. In yet another aspect, the DSS configuration component1445may configure a frequency spectrum for dynamic sharing between a first RAT and a second RAT. In some cases, the first RAT may be 5G NR and the second RAT may be LTE. The dual registration component1450may register a UE on a first cell supporting the first RAT in the frequency spectrum. The dual registration component1450may additionally register the UE on a second cell supporting the second RAT different from the first RAT and at least partially overlapping with the first cell in the frequency spectrum based on the configuring. In some cases, registering on the first cell supporting the first RAT and registering on the second cell supporting the second RAT may be part of a dual registration procedure for the UE. In some cases, the first cell and the second cell may correspond to a same cell. The communication component1435may communicate with the UE based on the registering on the first cell supporting the first RAT and the registering on the second cell supporting the second RAT. 
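The cooperation between the DSS configuration component1445and the dual registration component1450may likewise be pictured with a short Python sketch, in which the second registration in the same spectrum is accepted because dynamic sharing has been configured. The registrar class and its bookkeeping are assumptions for illustration only, not a description of the disclosed components.

```python
# Illustrative sketch of base-station-side dual registration: the second
# registration is only accepted because the frequency spectrum has been
# configured for dynamic sharing. All names and structures are assumptions.
from dataclasses import dataclass, field
from typing import Dict, Set


@dataclass
class BaseStationRegistrar:
    dss_configured: bool = False
    registrations: Dict[int, Set[str]] = field(default_factory=dict)  # ue_id -> RATs

    def configure_dss(self) -> None:
        # DSS configuration: mark the spectrum as dynamically shared.
        self.dss_configured = True

    def register(self, ue_id: int, rat: str) -> None:
        rats = self.registrations.setdefault(ue_id, set())
        if rats and not self.dss_configured:
            # A second registration in the same spectrum requires DSS.
            raise RuntimeError("dual registration requires a DSS-configured spectrum")
        rats.add(rat)

    def is_dual_registered(self, ue_id: int) -> bool:
        return len(self.registrations.get(ue_id, set())) >= 2


bs = BaseStationRegistrar()
bs.configure_dss()
bs.register(ue_id=7, rat="NR")    # first cell, first RAT
bs.register(ue_id=7, rat="LTE")   # second cell, second RAT (same spectrum)
print(bs.is_dual_registered(7))   # True
```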
In some aspects, the communication component1435may perform data communications (e.g., on a first carrier) using the first RAT and may perform, at least partially concurrent to the performing the data communications, voice communications (e.g., on a second carrier) using the second RAT. In some aspects, the communication component1435may direct traffic associated with the voice communications to the second RAT based on the second RAT supporting a threshold QoS for the voice communications. In some aspects, the communication component1435may maintain data connectivity using the first RAT during the performing the voice communications using the second RAT. In some aspects, the communication component1435may communicate with the UE on a first carrier using the first RAT based on the registering on the first cell supporting the first RAT and communicate with the UE on a second carrier using the second RAT based on the registering on the second cell supporting the second RAT. The first carrier and the second carrier may be a same carrier. The DSS support indicator1455may transmit a cell identifier for the first cell, the second cell, or both where the cell identifier is associated with support of the dynamic sharing between the first RAT and the second RAT, where the registering the UE on the second cell supporting the second RAT (e.g., in addition to registering the UE on the first cell supporting the first RAT) is based on the cell identifier. The control messaging component1460may transmit, in a control channel for the first RAT, a control message indicating a first set of resources in the frequency spectrum for communications using the first RAT and a second set of resources in the frequency spectrum for communications using the second RAT. FIG.15shows a diagram of a system1500including a device1505that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. The device1505may be an example of or include the components of device1205, device1305, or a base station105as described herein. The device1505may include components for bi-directional voice and data communications including components for transmitting and receiving communications, including a communications manager1510, a network communications manager1515, a transceiver1520, an antenna1525, memory1530, a processor1540, and an inter-station communications manager1545. These components may be in electronic communication via one or more buses (e.g., bus1550). In some aspects, the communications manager1510may configure a set of BWPs such that a first subset of the set of BWPs is dedicated for a first RAT and a second subset of the set of BWPs is dynamically shared between the first RAT and a second RAT, receive, from a UE, an indication of a rate matching capability of the UE, activate, for the UE, a BWP of the set of BWPs for communication based on the rate matching capability of the UE and the configuring, and transmit, to the UE, an indication of the activated BWP for communication. In one aspect, the first RAT may be a 5th generation radio technology (e.g., 5G NR), and the second RAT may be a 4th generation radio technology (e.g., LTE, LTE-A, LTE-A Pro). 
Additionally or alternatively, the communications manager1510may communicate with a UE in a first BWP of a set of BWPs dedicated for a first RAT, identify that the UE performs a number of handover procedures greater than a threshold number of handover procedures while communicating in the first BWP, switch, based on the identifying, the UE from the first BWP to a second BWP of the set of BWPs different from the first BWP and dynamically shared between the first RAT and a second RAT, and transmit, to the UE, an indication of the second BWP for communication based on the switching. Further, additionally or alternatively, the communications manager1510may configure a frequency spectrum for dynamic sharing between a first RAT and a second RAT, register a UE on a first cell supporting the first RAT in the frequency spectrum, register the UE on a second cell supporting the second RAT different from the first RAT and at least partially overlapping with the first cell in the frequency spectrum based on the configuring, and communicate with the UE based on the registering the UE on the first cell supporting the first RAT and the registering the UE on the second cell supporting the second RAT. The network communications manager1515may manage communications with the core network130(e.g., via one or more wired backhaul links). In one aspect, the network communications manager1515may manage the transfer of data communications for client devices, such as one or more UEs115. The transceiver1520may communicate bi-directionally, via one or more antennas, wired, or wireless links as described above. In one aspect, the transceiver1520may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver1520may also include a modem to modulate the packets and provide the modulated packets to the antennas for transmission, and to demodulate packets received from the antennas. In some cases, the wireless device may include a single antenna1525. However, in some cases the device may have more than one antenna1525, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The memory1530may include RAM, ROM, or a combination thereof. The memory1530may store computer-readable code1535including instructions that, when executed by a processor (e.g., the processor1540) cause the device to perform various functions described herein. In some cases, the memory1530may contain, among other things, a BIOS which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor1540may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor1540may be configured to operate a memory array using a memory controller. In some cases, a memory controller may be integrated into processor1540. The processor1540may be configured to execute computer-readable instructions stored in a memory (e.g., the memory1530) to cause the device1505to perform various functions (e.g., functions or tasks supporting enhancements for multiple radio protocol DSS). 
The inter-station communications manager1545may manage communications with other base stations105and may include a controller or scheduler for controlling communications with UEs115in cooperation with other base stations105. In one aspect, the inter-station communications manager1545may coordinate scheduling for transmissions to UEs115for various interference mitigation techniques such as beamforming or joint transmission. In some aspects, the inter-station communications manager1545may provide an X2 interface within an LTE/LTE-A wireless communication network technology to provide communication between base stations105. The code1535may include instructions to implement aspects of the present disclosure, including instructions to support wireless communications. The code1535may be stored in a non-transitory computer-readable medium such as system memory or other type of memory. In some cases, the code1535may not be directly executable by the processor1540but may cause a computer (e.g., when compiled and executed) to perform functions described herein. FIG.16shows a flowchart illustrating a method1600that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. The operations of method1600may be implemented by a base station105or its components as described herein. In one aspect, the operations of method1600may be performed by a communications manager as described with reference toFIGS.12through15. In some aspects, a base station may execute a set of instructions to control the functional elements of the base station to perform the functions described below. Additionally or alternatively, a base station may perform aspects of the functions described below using special-purpose hardware. At1605, the base station may configure a set of BWPs such that a first subset of the set of BWPs is dedicated for a first RAT and a second subset of the set of BWPs is dynamically shared between the first RAT and a second RAT. In one aspect, the first RAT may be a 5th generation radio technology (e.g., 5G NR), and the second RAT may be a 4th generation radio technology (e.g., LTE, LTE-A, LTE-A Pro). The operations of1605may be performed according to the methods described herein. In some aspects, aspects of the operations of1605may be performed by a BWP configuration component as described with reference toFIGS.12through15. At1610, the base station may receive, from a UE, an indication of a rate matching capability of the UE. The operations of1610may be performed according to the methods described herein. In some aspects, aspects of the operations of1610may be performed by a rate matching capability component as described with reference toFIGS.12through15. At1615, the base station may activate, for the UE, a BWP of the set of BWPs for communication based on the rate matching capability of the UE and the configuring. The operations of1615may be performed according to the methods described herein. In some aspects, aspects of the operations of1615may be performed by a BWP activation component as described with reference toFIGS.12through15. At1620, the base station may transmit, to the UE, an indication of the activated BWP for communication. The operations of1620may be performed according to the methods described herein. In some aspects, aspects of the operations of1620may be performed by a BWP activation component as described with reference toFIGS.12through15.
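As a worked illustration of method1600end to end, the Python sketch below runs steps1605through1620for several UEs and folds in the load balancing calculation described with reference toFIG.14, choosing the least-loaded BWP among those matching each UE's capability. The BWP identifiers, the per-BWP load metric, and all names are assumptions rather than part of the disclosure.

```python
# Sketch of method 1600 across several UEs, extending the earlier selection
# rule with a load balancing calculation: among the BWPs matching a UE's
# rate matching capability, pick the least-loaded one. All identifiers and
# the load metric are assumptions for illustration.
from collections import defaultdict
from typing import Dict, List, Tuple

# (bwp_id, shared_with_lte): step 1605 configures dedicated and shared subsets
BWPS: List[Tuple[int, bool]] = [(0, False), (1, False), (2, True), (3, True)]
load: Dict[int, int] = defaultdict(int)  # number of UEs per BWP


def activate(ue_id: int, supports_crs_rate_matching: bool) -> int:
    # Step 1610 delivered the capability; steps 1615/1620 activate and indicate.
    candidates = [b for b, shared in BWPS if shared == supports_crs_rate_matching]
    chosen = min(candidates, key=lambda b: load[b])  # load balancing
    load[chosen] += 1
    print(f"UE {ue_id}: activate BWP {chosen}")
    return chosen


activate(1, supports_crs_rate_matching=True)    # shared BWP 2
activate(2, supports_crs_rate_matching=True)    # shared BWP 3 (balancing)
activate(3, supports_crs_rate_matching=False)   # dedicated BWP 0
```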
FIG.17shows a flowchart illustrating a method1700that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. The operations of method1700may be implemented by a base station105or its components as described herein. In one aspect, the operations of method1700may be performed by a communications manager as described with reference toFIGS.12through15. In some aspects, a base station may execute a set of instructions to control the functional elements of the base station to perform the functions described below. Additionally or alternatively, a base station may perform aspects of the functions described below using special-purpose hardware. At1705, the base station may communicate with a UE in a first BWP of a set of BWPs dedicated for a first RAT. The operations of1705may be performed according to the methods described herein. In some aspects, aspects of the operations of1705may be performed by a communication component as described with reference toFIGS.12through15. At1710, the base station may identify that the UE performs a number of handover procedures greater than a threshold number of handover procedures while communicating in the first BWP. The operations of1710may be performed according to the methods described herein. In some aspects, aspects of the operations of1710may be performed by a handover identification component as described with reference toFIGS.12through15. At1715, the base station may switch, based on the identifying, the UE from the first BWP to a second BWP of the set of BWPs different from the first BWP and dynamically shared between the first RAT and a second RAT. In one aspect, the first RAT may be a 5th generation radio technology (e.g., 5G NR), and the second RAT may be a 4th generation radio technology (e.g., LTE, LTE-A, LTE-A Pro). The operations of1715may be performed according to the methods described herein. In some aspects, aspects of the operations of1715may be performed by a BWP switching component as described with reference toFIGS.12through15. At1720, the base station may transmit, to the UE, an indication of the second BWP for communication based on the switching. The operations of1720may be performed according to the methods described herein. In some aspects, aspects of the operations of1720may be performed by a BWP switching component as described with reference toFIGS.12through15. FIG.18shows a flowchart illustrating a method1800that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. The operations of method1800may be implemented by a UE115or its components as described herein. In one aspect, the operations of method1800may be performed by a communications manager as described with reference toFIGS.8through11. In some aspects, a UE may execute a set of instructions to control the functional elements of the UE to perform the functions described below. Additionally or alternatively, a UE may perform aspects of the functions described below using special-purpose hardware. At1805, the UE may register, with a base station, on a first cell supporting a first RAT in a frequency spectrum. The operations of1805may be performed according to the methods described herein. In some aspects, aspects of the operations of1805may be performed by a dual registration component as described with reference toFIGS.8through11. 
At1810, the UE may register, with the base station, on a second cell supporting a second RAT different from the first RAT and at least partially overlapping with the first cell in the frequency spectrum based on the base station supporting dynamic sharing of the frequency spectrum between the first RAT and the second RAT. In one aspect, the first RAT may be a 5th generation radio technology (e.g., 5G NR), and the second RAT may be a 4th generation radio technology (e.g., LTE, LTE-A, LTE-A Pro). The operations of1810may be performed according to the methods described herein. In some aspects, aspects of the operations of1810may be performed by a dual registration component as described with reference toFIGS.8through11. At1815, the UE may communicate with the base station based on the registering on the first cell supporting the first RAT and the registering on the second cell supporting the second RAT. The operations of1815may be performed according to the methods described herein. In some aspects, aspects of the operations of1815may be performed by a communication component as described with reference toFIGS.8through11. FIG.19shows a flowchart illustrating a method1900that supports enhancements for multiple radio protocol DSS in accordance with aspects of the present disclosure. The operations of method1900may be implemented by a base station105or its components as described herein. In one aspect, the operations of method1900may be performed by a communications manager as described with reference toFIGS.12through15. In some aspects, a base station may execute a set of instructions to control the functional elements of the base station to perform the functions described below. Additionally or alternatively, a base station may perform aspects of the functions described below using special-purpose hardware. At1905, the base station may configure a frequency spectrum for dynamic sharing between a first RAT and a second RAT. In one aspect, the first RAT may be a 5th generation radio technology (e.g., 5G NR), and the second RAT may be a 4th generation radio technology (e.g., LTE, LTE-A, LTE-A Pro). The operations of1905may be performed according to the methods described herein. In some aspects, aspects of the operations of1905may be performed by a DSS configuration component as described with reference toFIGS.12through15. At1910, the base station may register a UE on a first cell supporting the first RAT in the frequency spectrum. The operations of1910may be performed according to the methods described herein. In some aspects, aspects of the operations of1910may be performed by a dual registration component as described with reference toFIGS.12through15. At1915, the base station may register the UE on a second cell supporting the second RAT different from the first RAT and at least partially overlapping with the first cell in the frequency spectrum based on the configuring. The operations of1915may be performed according to the methods described herein. In some aspects, aspects of the operations of1915may be performed by a dual registration component as described with reference toFIGS.12through15. At1920, the base station may communicate with the UE based on the registering the UE on the first cell supporting the first RAT and the registering the UE on the second cell supporting the second RAT. The operations of1920may be performed according to the methods described herein.
It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, aspects from two or more of the methods may be combined. The following provides an overview of aspects of the present disclosure: Aspect 1: A method for wireless communications implemented by a base station, comprising: configuring a plurality of bandwidth parts such that a first subset of the plurality of bandwidth parts is dedicated for a first radio access technology and a second subset of the plurality of bandwidth parts is dynamically shared between the first radio access technology and a second radio access technology; receiving, from a UE, an indication of a rate matching capability of the UE; activating, for the UE, a bandwidth part of the plurality of bandwidth parts for communication based at least in part on the rate matching capability of the UE and the configuring; and transmitting, to the UE, an indication of the activated bandwidth part for communication. Aspect 2: The method of aspect 1, wherein the indication of the rate matching capability of the UE indicates an absence of support for rate matching with a cell-specific reference signal for the second radio access technology and the activating further comprises: activating, for the UE, the bandwidth part from the first subset of the plurality of bandwidth parts based at least in part on the absence of support for rate matching with the cell-specific reference signal for the second radio access technology. Aspect 3: The method of any of aspects 1 through 2, wherein the indication of the rate matching capability of the UE indicates support for rate matching with a cell-specific reference signal for the second radio access technology and the activating further comprises: activating, for the UE, the bandwidth part from the second subset of the plurality of bandwidth parts based at least in part on the UE supporting the rate matching with the cell-specific reference signal for the second radio access technology. Aspect 4: The method of any of aspects 1 through 3, wherein the first radio access technology comprises a fifth generation radio technology; and the second radio access technology comprises a long term evolution technology. Aspect 5: The method of any of aspects 1 through 4, further comprising: receiving, from a plurality of additional UEs, a plurality of additional indications of rate matching capabilities for the plurality of additional UEs; and performing a load balancing calculation for the UE and the plurality of additional UEs across the plurality of bandwidth parts, wherein the activating the bandwidth part of the plurality of bandwidth parts for communication is further based at least in part on the load balancing calculation. Aspect 6: The method of any of aspects 1 through 5, wherein the activated bandwidth part for communication comprises a first bandwidth part of the plurality of bandwidth parts, the method further comprising: switching the UE from the first bandwidth part to a second bandwidth part of the plurality of bandwidth parts different from the first bandwidth part based at least in part on an average throughput for the UE.
Aspect 7: The method of aspect 6, wherein the switching further comprises: periodically switching the UE between the first bandwidth part and the second bandwidth part. Aspect 8: The method of any of aspects 1 through 7, further comprising: communicating with the UE in the activated bandwidth part for communication. Aspect 9: A method for wireless communications implemented by a base station, comprising: communicating with a UE in a first bandwidth part of a plurality of bandwidth parts dedicated for a first radio access technology; identifying that the UE performs a number of handover procedures greater than a threshold number of handover procedures while communicating in the first bandwidth part; switching, based at least in part on the identifying, the UE from the first bandwidth part to a second bandwidth part of the plurality of bandwidth parts different from the first bandwidth part and dynamically shared between the first radio access technology and a second radio access technology; and transmitting, to the UE, an indication of the second bandwidth part for communication based at least in part on the switching. Aspect 10: The method of aspect 9, wherein the switching further comprises: switching the UE from a first cell supporting the first bandwidth part dedicated for the first radio access technology in a first frequency band to a second cell supporting the second bandwidth part dynamically shared between the first radio access technology and the second radio access technology in a second frequency band that is different from the first frequency band. Aspect 11: The method of any of aspects 9 through 10, wherein the first radio access technology comprises a fifth generation radio technology; and the second radio access technology comprises a long term evolution technology. Aspect 12: A method for wireless communications implemented by a UE, comprising: registering, with a base station, on a first cell supporting a first radio access technology in a frequency spectrum; registering, with the base station, on a second cell supporting a second radio access technology different from the first radio access technology and at least partially overlapping with the first cell in the frequency spectrum based at least in part on the base station supporting dynamic sharing of the frequency spectrum between the first radio access technology and the second radio access technology; and communicating with the base station based at least in part on the registering on the first cell supporting the first radio access technology and the registering on the second cell supporting the second radio access technology. Aspect 13: The method of aspect 12, further comprising: receiving a cell identifier for the first cell, wherein the cell identifier indicates that the base station supports the dynamic sharing of the frequency spectrum between the first radio access technology and the second radio access technology. Aspect 14: The method of any of aspects 12 through 13, wherein the communicating comprises: performing data communications on the first cell using the first radio access technology; and performing, at least partially concurrent to the performing the data communications, voice communications on the second cell using the second radio access technology. 
Aspect 15: The method of any of aspects 12 through 14, further comprising: establishing a first radio bearer for the communicating with the base station using the first radio access technology and a second radio bearer for the communicating with the base station using the second radio access technology. Aspect 16: The method of any of aspects 12 through 15, further comprising: caching, in local memory at the UE, an indication that the base station supports the dynamic sharing of the frequency spectrum between the first radio access technology and the second radio access technology. Aspect 17: The method of any of aspects 12 through 16, further comprising: receiving, in a control channel for the first radio access technology, a control message indicating a first set of resources in the frequency spectrum for communications using the first radio access technology and a second set of resources in the frequency spectrum for communications using the second radio access technology. Aspect 18: The method of aspect 17, wherein the control message is received on a primary carrier corresponding to the first radio access technology. Aspect 19: The method of any of aspects 17 through 18, further comprising: using a single radio frequency transceiver to receive the first set of resources and the second set of resources. Aspect 20: The method of any of aspects 12 through 19, wherein the registering on the first cell supporting the first radio access technology and the registering on the second cell supporting the second radio access technology comprise a dual registration procedure. Aspect 21: The method of any of aspects 12 through 20, further comprising: displaying, in a user interface of the UE, an icon indicating that the UE supports the dynamic sharing of the frequency spectrum between the first radio access technology and the second radio access technology. Aspect 22: The method of any of aspects 12 through 21, wherein the communicating comprises: communicating with the base station on a first carrier using the first radio access technology based at least in part on the registering on the first cell supporting the first radio access technology; and communicating with the base station on a second carrier using the second radio access technology based at least in part on the registering on the second cell supporting the second radio access technology. Aspect 23: The method of aspect 22, wherein the first carrier and the second carrier are a same carrier. Aspect 24: The method of any of aspects 12 through 23, wherein the first radio access technology comprises a fifth generation radio technology; and the second radio access technology comprises a long term evolution technology. Aspect 25: A method for wireless communications implemented by a base station, comprising: configuring a frequency spectrum for dynamic sharing between a first radio access technology and a second radio access technology; registering a UE on a first cell supporting the first radio access technology in the frequency spectrum; registering the UE on a second cell supporting the second radio access technology different from the first radio access technology and at least partially overlapping with the first cell in the frequency spectrum based at least in part on the configuring; and communicating with the UE based at least in part on the registering the UE on the first cell supporting the first radio access technology and the registering the UE on the second cell supporting the second radio access technology. 
Aspect 26: The method of aspect 25, further comprising: transmitting a cell identifier for the first cell, the second cell, or both, wherein the cell identifier is associated with support of the dynamic sharing between the first radio access technology and the second radio access technology, wherein the registering the UE on the second cell supporting the second radio access technology is based at least in part on the cell identifier. Aspect 27: The method of any of aspects 25 through 26, wherein the communicating comprises: performing data communications on the first cell using the first radio access technology; and performing, at least partially concurrent to the performing the data communications, voice communications on the second cell using the second radio access technology. Aspect 28: The method of aspect 27, further comprising: directing traffic associated with the voice communications to the second radio access technology based at least in part on the second radio access technology supporting a threshold quality of service for the voice communications. Aspect 29: The method of any of aspects 27 through 28, further comprising: maintaining data connectivity using the first radio access technology during the performing the voice communications using the second radio access technology. Aspect 30: The method of any of aspects 25 through 29, further comprising: transmitting, in a control channel for the first radio access technology, a control message indicating a first set of resources in the frequency spectrum for communications using the first radio access technology and a second set of resources in the frequency spectrum for communications using the second radio access technology. Aspect 31: The method of any of aspects 25 through 30, wherein the registering the UE on the first cell supporting the first radio access technology and the registering the UE on the second cell supporting the second radio access technology comprise a dual registration procedure for the UE. Aspect 32: The method of any of aspects 25 through 31, wherein the communicating comprises: communicating with the UE on a first carrier using the first radio access technology based at least in part on the registering the UE on the first cell supporting the first radio access technology; and communicating with the UE on a second carrier using the second radio access technology based at least in part on the registering the UE on the second cell supporting the second radio access technology. Aspect 33: The method of aspect 32, wherein the first carrier and the second carrier are a same carrier. Aspect 34: The method of any of aspects 25 through 33, wherein the first radio access technology comprises a fifth generation radio technology; and the second radio access technology comprises a long term evolution technology. Aspect 35: An apparatus for wireless communications implemented by a base station, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 1 through 8. Aspect 36: An apparatus for wireless communications implemented by a base station, comprising at least one means for performing a method of any of aspects 1 through 8. Aspect 37: A non-transitory computer-readable medium storing code for wireless communications implemented by a base station, the code comprising instructions executable by a processor to perform a method of any of aspects 1 through 8. 
Aspect 38: An apparatus for wireless communications implemented by a base station, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 9 through 11. Aspect 39: An apparatus for wireless communications implemented by a base station, comprising at least one means for performing a method of any of aspects 9 through 11. Aspect 40: A non-transitory computer-readable medium storing code for wireless communications implemented by a base station, the code comprising instructions executable by a processor to perform a method of any of aspects 9 through 11. Aspect 41: An apparatus for wireless communications implemented by a UE, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 12 through 24. Aspect 42: An apparatus for wireless communications implemented by a UE, comprising at least one means for performing a method of any of aspects 12 through 24. Aspect 43: A non-transitory computer-readable medium storing code for wireless communications implemented by a UE, the code comprising instructions executable by a processor to perform a method of any of aspects 12 through 24. Aspect 44: An apparatus for wireless communications implemented by a base station, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 25 through 34. Aspect 45: An apparatus for wireless communications implemented by a base station, comprising at least one means for performing a method of any of aspects 25 through 34. Aspect 46: A non-transitory computer-readable medium storing code for wireless communications implemented by a base station, the code comprising instructions executable by a processor to perform a method of any of aspects 25 through 34. Although aspects of an LTE, LTE-A, LTE-A Pro, or NR system may be described for purposes of example, and LTE, LTE-A, LTE-A Pro, or NR terminology may be used in much of the description, the techniques described herein are applicable beyond LTE, LTE-A, LTE-A Pro, or NR networks. In one aspect, the described techniques may be applicable to various other wireless communications systems such as Ultra Mobile Broadband (UMB), Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, as well as other systems and radio technologies not explicitly mentioned herein. Information and signals described herein may be represented using any of a variety of different technologies and techniques. In one aspect, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, a CPU, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. 
A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other aspects and implementations are within the scope of the disclosure and appended claims. In one aspect, due to the nature of software, functions described herein may be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable ROM (EEPROM), flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that may be used to carry or store desired program code means in the form of instructions or data structures and that may be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. In one aspect, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of computer-readable medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. As used herein, including in the claims, “or” as used in a list of items (e.g., a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, in some cases, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. In one aspect, a step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. 
In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label, or other subsequent reference label. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the aspects that may be implemented or that are within the scope of the claims. The term “example” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” Further, the term “example” as used herein does not imply separate embodiments; that is, multiple examples, multiple aspects, or both may be combined in any embodiment as described herein. The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described aspects. The description herein is provided to enable a person having ordinary skill in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to a person having ordinary skill in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
Although the present disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown in the drawings as examples and are described in detail herein. It should be understood that the description of the specific embodiments is not intended to limit the present disclosure to a disclosed specific form, and the present disclosure aims to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present disclosure. It is noted that throughout the several figures, corresponding reference numerals indicate corresponding parts.
DETAILED DESCRIPTION OF EMBODIMENTS
Examples of the present disclosure are fully described with reference to the drawings. The following description is merely exemplary and is not intended to limit the present disclosure or its applications or uses. The exemplary embodiments are provided so that the present disclosure is thorough and fully conveys its scope to those skilled in the art. Examples of numerous specific details, such as specific components, devices, and methods, are set forth to provide a thorough understanding of the embodiments of the present disclosure. It will be apparent to those skilled in the art that exemplary embodiments may be implemented in many different forms without these specific details, and exemplary embodiments should not be construed as limiting the scope of the present disclosure. In some exemplary embodiments, well-known processes, well-known structures, and well-known technologies are not described in detail.
Description is made in the following order:
1. Description of a scenario
2. Example of a configuration of a spectrum coordination device
3. Example of a configuration of a spectrum division device
4. Example of a configuration of a spectrum authorization device
5. Example of a configuration of a wireless communication system
6. Method embodiment
7. Application example
1. Description of a Scenario
FIG.1is a schematic diagram showing an application scenario according to an embodiment of the present disclosure. In the scenario shown inFIG.1, two spectrum authorization devices are shown, namely, a spectrum authorization device1and a spectrum authorization device2. A coexistence system in which the spectrum authorization device1is located (hereinafter simply referred to as the coexistence system1) also includes a spectrum coordination device and two spectrum division devices, namely, a spectrum division device1and a spectrum division device2. In addition, the coexistence system1also includes four secondary systems, namely, a secondary system1, a secondary system2, a secondary system3, and a secondary system4. Here, the secondary system1and the secondary system3are managed by the spectrum division device1, and thus, the secondary system1and the secondary system3are referred to as a coexistence group. The secondary system2and the secondary system4are managed by the spectrum division device2, and thus, the secondary system2and the secondary system4are referred to as a coexistence group. Further, a coexistence system in which the spectrum authorization device2is located (hereinafter simply referred to as the coexistence system2) includes three secondary systems, namely, a secondary system5, a secondary system6and a secondary system7.
In the coexistence system1shown inFIG.1, the spectrum authorization device1may determine usable spectrum resources of a secondary system in the coexistence system1according to information such as the spectrum usage of the primary system, the location of the primary system, and the location of the secondary system. The spectrum coordination device coordinates spectrum resources between the coexistence group1and the coexistence group2based on usable spectrum resources from the spectrum authorization device1, to avoid interference between the secondary systems in the coexistence group1and the coexistence group2. For example, assuming that the usable spectrum resources determined by the spectrum authorization device1include four frequency bands, namely, CH1, CH2, CH3, and CH4, the spectrum coordination device may allocate CH1and CH2to the coexistence group1and allocate CH3and CH4to the coexistence group2. The spectrum division device may adjust the spectrum usage of the secondary system within the range of the usable spectrum resources from the spectrum coordination device. For example, the spectrum division device1allocates CH1to the secondary system1and allocates CH2to the secondary system3, while the spectrum division device2allocates CH3to the secondary system2and allocates CH4to the secondary system4. In the coexistence system2shown inFIG.1, there is no spectrum coordination device or spectrum division device. Therefore, the spectrum authorization device2may determine the usable spectrum resources of the secondary system in the coexistence system2based on information such as the spectrum usage of the primary system, the location of the primary system, and the location of the secondary system. For example, the spectrum authorization device2allocates CH4to the secondary system5, allocates CH1to the secondary system6, and allocates CH3to the secondary system7. In the scenario shown inFIG.1, since there is no coordination between the coexistence system1and the coexistence system2, the coexistence system1and the coexistence system2allocate spectrum resources to the secondary systems relatively independently. From the perspective of geographical location, the secondary system4and the secondary system5are relatively close and belong to different coexistence systems. Therefore, both the secondary system4and the secondary system5may use CH4, such that the secondary system4can only use a small transmission power when using CH4, which cannot satisfy the coexistence management requirement of the coexistence system1. Furthermore, the spectrum coordination device allocates continuous spectrum resources to a coexistence group when allocating spectrum resources to the coexistence groups. For example, the spectrum coordination device may allocate CH1and CH2to the coexistence group1, but not allocate CH1and CH3to the coexistence group1. In a case that the secondary system5uses CH4, the secondary system6uses CH1, and the secondary system7uses CH3, since multiple secondary systems in the coexistence system2occupy the spectrum resources in a dispersed manner, the spectrum coordination device may be unable to allocate continuous spectrum resources to a coexistence group, such that the coexistence management requirement of the coexistence system1cannot be satisfied.
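The allocation chain described above (the SAS determines usable spectrum, the spectrum coordination device splits it into contiguous runs per coexistence group, and each spectrum division device assigns channels to its secondary systems) can be sketched as follows. This is a hypothetical illustration only: the disclosure does not prescribe a concrete algorithm, and the "split into contiguous halves" rule, function names, and labels are assumptions chosen to reproduce the CH1-CH4 example.

    from typing import Dict, List

    def gsc_allocate(usable: List[str], groups: List[str]) -> Dict[str, List[str]]:
        """Split the SAS-provided usable channels into contiguous runs, one per coexistence group."""
        per_group = len(usable) // len(groups)
        return {g: usable[i * per_group:(i + 1) * per_group]
                for i, g in enumerate(groups)}

    def cxm_allocate(group_channels: List[str], systems: List[str]) -> Dict[str, str]:
        """Each spectrum division device assigns one channel of its group to each secondary system."""
        return dict(zip(systems, group_channels))

    usable = ["CH1", "CH2", "CH3", "CH4"]          # determined by the spectrum authorization device
    gsc = gsc_allocate(usable, ["group1", "group2"])
    # gsc == {"group1": ["CH1", "CH2"], "group2": ["CH3", "CH4"]}
    cxm1 = cxm_allocate(gsc["group1"], ["secondary1", "secondary3"])
    cxm2 = cxm_allocate(gsc["group2"], ["secondary2", "secondary4"])
    # cxm1 == {"secondary1": "CH1", "secondary3": "CH2"}
    # cxm2 == {"secondary2": "CH3", "secondary4": "CH4"}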
According to the present disclosure, for such a scenario, a spectrum coordination device in a wireless communication system, a spectrum division device in a wireless communication system, a spectrum authorization device in a wireless communication system, a wireless communication system, a wireless communication method performed by the spectrum coordination device in the wireless communication system, a wireless communication method performed by the spectrum division device in the wireless communication system, and a wireless communication method performed by the spectrum authorization device in the wireless communication system, as well as a computer readable storage medium, are proposed, such that coordination between different coexistence systems is possible, thereby reasonably allocating spectrum resources to the secondary systems so that the secondary systems can effectively use resources without interfering with each other. It is worth noting thatFIG.1is only an example of the application scenario according to the present disclosure. In an actual scenario, the number of coexistence systems, the number of spectrum division devices, the number of secondary systems in each coexistence group, and the like may all have other values. That is, the present disclosure is applicable to any wireless communication system including a primary system and a secondary system. Furthermore, for ease of description, the primary system and a primary user are not shown inFIG.1. The wireless communication system according to the present disclosure may be a 5G NR (new radio) communication system. The spectrum authorization device according to the present disclosure may be a SAS (Spectrum Access System). The SAS may determine the spectrum range that may be used by the secondary system based on information such as the spectrum usage of the primary system, the location of the primary system, and the location of the secondary system. The SAS may be a spectrum management device determined according to a geographic location, and each SAS may manage secondary systems in a certain area. For example, the SAS may be a spectrum allocation function module provided by a geographic location database operator authorized according to national regulations. The spectrum coordination device according to the present disclosure may be a GSC (General Authorized Access (GAA) Spectrum Coordination) device. The GSC device may be a spectrum management device that coordinates spectrum usage between coexistence groups managed by multiple spectrum division devices based on usable spectrum resources from the SAS, to avoid interference between secondary systems managed by different spectrum division devices. The spectrum division device according to the present disclosure may be a C×M (Coexistence Manager). The C×M may be a spectrum management device that adjusts the spectrum usage of the secondary system within the range of usable spectrum resources from the GSC. Each C×M manages a CSG (Coexistence Group), and a CSG may include one or more secondary systems. For example, the C×M may be different operators or network providers, or may be a network management organization in a certain office area, residential area or university campus. The secondary system according to the present disclosure may be a CBSD (Citizens Broadband Radio Service Device).
The CBSD may be a network side device, such as any type of TRP (Transmit and Receive Point) or a base station device, for example, an eNB or a gNB (a base station in the 5th generation communication system).
FIG.2is a schematic diagram showing an architecture of a wireless communication system according to an embodiment of the present disclosure. As shown inFIG.2, the coexistence system in the wireless communication system includes a SAS, a GSC, a C×M and a CBSD. When the CBSD needs to use spectrum resources, the CBSD sends a spectrum usage request to the SAS. The SAS determines the spectrum that may be used by the secondary system according to the spectrum usage of the primary system, and sends the calculated usable spectrum to the GSC. Next, the GSC coordinates the spectrum usage between coexistence groups managed by the multiple C×Ms based on the usable spectrum resources from the SAS, and sends the spectrum resources allocated to each C×M to the corresponding C×M. Next, the C×M adjusts the spectrum usage of the secondary system within the range of usable spectrum resources from the GSC, to allocate spectrum resources to the CBSD.FIG.2shows a situation where the coexistence system includes a GSC and a C×M. In actual scenarios, some coexistence systems may include neither a GSC nor a C×M. In this case, when the CBSD needs to use spectrum resources, the CBSD sends a spectrum usage request to the SAS, and the SAS determines the spectrum that may be used by the secondary system according to the spectrum usage of the primary system, and sends the calculated usable spectrum to the CBSD. According to an embodiment of the present disclosure, for ease of description, a system composed of a spectrum management device and a secondary system within the coverage of a spectrum authorization device is referred to as a coexistence system. In other words, the coexistence system may include one spectrum authorization device and one or more secondary systems. Optionally, the coexistence system may further include a spectrum coordination device and/or one or more spectrum division devices. Furthermore, according to an embodiment of the present disclosure, the spectrum coordination device may be separated from the spectrum authorization device, or may be integrated into the spectrum authorization device.
2. Example of a Configuration of a Spectrum Coordination Device
FIG.3is a block diagram showing an example of a configuration of a spectrum coordination device300according to an embodiment of the present disclosure. The spectrum coordination device300here may be, for example, a GSC. Furthermore, the spectrum coordination device300may be applied to a wireless communication system. The wireless communication system includes a first coexistence system and a second coexistence system. The first coexistence system includes the spectrum coordination device300and one or more secondary systems divided into coexistence groups, and the second coexistence system includes one or more secondary systems. As shown inFIG.3, the spectrum coordination device300may include a coordination unit310. Here, all units of the spectrum coordination device300may be included in a processing circuitry. It should be noted that the spectrum coordination device300may include one processing circuitry or multiple processing circuitries. Further, the processing circuitry may include various discrete functional units to perform different functions and/or operations.
It should be noted that these functional units may be physical entities or logical entities, and units with different names may be implemented by the same physical entity. According to an embodiment of the present disclosure, the coordination unit310of the spectrum coordination device300may generate, in a case where a coexistence management requirement of the first coexistence system is not satisfied, spectrum modification information for modifying spectrum resource of a secondary system in the first coexistence system and/or for modifying spectrum resource of a secondary system in the second coexistence system. According to an embodiment of the present disclosure, the coexistence management requirement of the coexistence system represents a requirement for the secondary system in the coexistence system to be able to work normally, which may include requirements from multiple aspects. In one example, the coexistence management requirement may include the ability to allocate usable spectrum resources to the secondary system. For example, the coexistence management requirement includes allocating continuous spectrum resources for multiple secondary systems in a coexistence group. In another example, the coexistence management requirement may include the ability to allocate sufficient spectrum resources for the secondary system. For example, the coexistence management requirement includes that, when the secondary system reuses the allocated spectrum resources, the transmission power should reach a predetermined threshold. Of course, the coexistence management requirement may also include other coexistence management requirements. According to an embodiment of the present disclosure, in a case that the coexistence management requirement of the first coexistence system is not satisfied, it is indicated with a high probability that the secondary system in the first coexistence system may not be allocated usable and sufficient spectrum resources, which may be caused by a lack of coordination between different coexistence systems. Therefore, according to an embodiment of the present disclosure, the spectrum coordination device300may generate spectrum modification information for modifying spectrum resource of a secondary system in the first coexistence system and/or for modifying spectrum resource of a secondary system in the second coexistence system. That is, by modifying the spectrum resources of the secondary system in the first coexistence system and/or modifying the spectrum resources of the secondary system in the second coexistence system, the coexistence management requirement of the first coexistence system is satisfied. It can be seen that the spectrum coordination device300according to an embodiment of the present disclosure may generate, in a case where a coexistence management requirement of the first coexistence system is not satisfied, spectrum modification information for modifying spectrum resource of a secondary system in the first coexistence system and/or for modifying spectrum resource of a secondary system in the other coexistence system.
In this way, when allocating spectrum resources to the secondary systems in the coexistence system, the spectrum resources of the secondary systems in other coexistence systems may be considered, such that coordination between different coexistence systems is possible, thereby reasonably allocating spectrum resources to the secondary systems, and thus, secondary systems can effectively use resources without interfering with each other. As shown inFIG.3, according to an embodiment of the present disclosure, the spectrum coordination device300may further include a communication unit320configured to communicate with devices other than the spectrum coordination device300. According to an embodiment of the present disclosure, the spectrum coordination device300may receive a spectrum usage report from one or more spectrum division devices in the first coexistence system via the communication unit320. According to an embodiment of the present disclosure, the coordination unit310may determine, according to the spectrum usage report, whether the coexistence management requirement of the first coexistence system is satisfied. According to an embodiment of the present disclosure, the coordination unit310may determine frequency bands that do not satisfy the coexistence management requirement of the first coexistence system according to the spectrum usage report. That is, the spectrum usage report received from the spectrum division device includes frequency bands that do not satisfy the coexistence management requirement of the first coexistence system. For example, the spectrum usage report may include identification information on frequency bands that do not satisfy the coexistence management requirement of the first coexistence system. As described above, according to an embodiment of the present disclosure, the spectrum division device in the first coexistence system may determine that the coexistence management requirement of the first coexistence system is not satisfied, such that the spectrum coordination device300determines, based on the spectrum usage report from the spectrum division device, that the coexistence management requirement of the first coexistence system is not satisfied. As shown inFIG.3, according to an embodiment of the present disclosure, the spectrum coordination device300may further include a determination unit330configured to determine whether the coexistence management requirement of the first coexistence system is satisfied. According to an embodiment of the present disclosure, the determination unit330may determine a continuity of the usable spectrum resources of the first coexistence system, that is, whether the usable frequency bands are contiguous, and determine, in a case where a continuity requirement with respect to frequency bands of one coexistence group is not satisfied, that the coexistence management requirement of the first coexistence system is not satisfied. According to an embodiment of the present disclosure, when allocating spectrum resources to different spectrum division devices, the spectrum coordination device300needs to allocate continuous spectrum resources to the coexistence group managed by one spectrum division device. Therefore, the determination unit330of the spectrum coordination device300needs to determine the continuity of the currently usable spectrum resources.
Here, the determination unit330may obtain the currently usable spectrum resources from the spectrum authorization device in the first coexistence system, to determine the continuity of the currently usable spectrum resources. According to an embodiment of the present disclosure, since the spectrum authorization device has considered the usage of spectrum resources of the secondary systems managed by other spectrum authorization devices when determining the usable spectrum resources, which may be implemented through communication with other spectrum authorization devices, the usable spectrum resources obtained by the spectrum coordination device300actually reflect the usage of spectrum resources of the secondary systems in the entire wireless communication system. Therefore, in a case that the continuity requirement for the frequency bands of a coexistence group is not satisfied, the determination unit330may determine that the coexistence management requirement of the first coexistence system is not satisfied. For example, when both CH2and CH4are occupied by a secondary system in another coexistence system, the spectrum coordination device300cannot allocate two consecutive frequency bands to the coexistence group (assuming that two frequency bands need to be allocated to the coexistence group), and the determination unit330may determine that the coexistence management requirement of the first coexistence system is not satisfied. For another example, when CH2is occupied by a secondary system of another coexistence system, although the spectrum coordination device300may allocate two consecutive frequency bands CH3and CH4to the coexistence group (assuming that two frequency bands need to be allocated to the coexistence group), there is a potential risk that the coexistence management requirement of the first coexistence system will not be satisfied. In this case, the determination unit330may also determine that the coexistence management requirement of the first coexistence system is not satisfied. That is, the coexistence management requirement of the first coexistence system includes a continuity requirement for the frequency bands of a coexistence group. When the continuity requirement for the frequency bands of a coexistence group is not satisfied, the determination unit330may determine that the coexistence management requirement of the first coexistence system is not satisfied. According to an embodiment of the present disclosure, the determination unit330may also determine a frequency band that does not satisfy the coexistence management requirement of the first coexistence system according to the continuity of usable spectrum resources. Here, the determination unit330may determine, as a frequency band that does not satisfy the coexistence management requirement of the first coexistence system, a frequency band occupied by a secondary system in another coexistence system that has caused or is about to cause the continuity requirement for the frequency bands of a coexistence group to be violated. Taking the previous embodiment as an example, when both CH2and CH4are occupied by a secondary system of another coexistence system, the spectrum coordination device300cannot allocate two consecutive frequency bands to the coexistence group (assuming that two frequency bands need to be allocated to the coexistence group).
In this case, the determination unit330may determine CH2and CH4as frequency bands that do not satisfy the coexistence management requirement of the first coexistence system. For another example, when CH2is occupied by a secondary system of another coexistence system, although the spectrum coordination device300may allocate two consecutive frequency bands CH3and CH4to the coexistence group (assuming that two frequency bands need to be allocated to the coexistence group), there is a potential risk that the coexistence management requirement of the first coexistence system will not be satisfied. In this case, the determination unit330may determine CH2as a frequency band that does not satisfy the coexistence management requirement of the first coexistence system. As described above, according to an embodiment of the present disclosure, the spectrum coordination device300in the first coexistence system may determine that the coexistence management requirement of the first coexistence system is not satisfied. According to an embodiment of the present disclosure, in a case that the coexistence management requirement of the first coexistence system is not satisfied, the coordination unit310may generate at least one of the following: the spectrum modification information for modifying the spectrum resource of the secondary system in the first coexistence system; and the spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system. According to an embodiment of the present disclosure, the spectrum modification information for modifying the spectrum resource of the secondary system in the first coexistence system generated by the coordination unit310may include: allocating, to the coexistence group in the first coexistence system, frequency bands in the usable spectrum resources of the first coexistence system except the frequency bands which do not satisfy the coexistence management requirement. In other words, the spectrum modification information for modifying the spectrum resource of the secondary system in the first coexistence system may include: allocating, to the coexistence groups in the first coexistence system, frequency bands in the usable spectrum resources of the first coexistence system which satisfy the coexistence management requirement. That is, when allocating spectrum resources of the coexistence group to the spectrum division device in the first coexistence system, the coordination unit310may allocate frequency bands in the usable spectrum resources of the first coexistence system except the frequency bands which do not satisfy the coexistence management requirement. Here, the spectrum coordination device300may obtain the usable spectrum resources of the first coexistence system from the spectrum authorization device in the first coexistence system. For example, when CH1to CH4are usable spectrum resources of the first coexistence system and CH3is a frequency band that does not satisfy the coexistence management requirement, the spectrum coordination device300may allocate CH1, CH2, and CH4to the secondary system of the first coexistence system. At this time, CH1, CH2, and CH4are frequency bands that satisfy the coexistence management requirement.
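The continuity check performed by the determination unit330and the filtering applied by the coordination unit310can be sketched as follows. This is a hedged illustration under assumed data structures (ordered channel lists, set membership for occupancy); the disclosure specifies the behavior only at the level of the CH1-CH4 examples above, and the function names are hypothetical.

    from typing import List, Optional, Set

    def contiguous_free_run(usable: List[str], occupied_elsewhere: Set[str],
                            needed: int) -> Optional[List[str]]:
        """Return a run of `needed` adjacent usable channels not occupied by
        another coexistence system, or None if no such run exists."""
        free = [ch not in occupied_elsewhere for ch in usable]
        for start in range(len(usable) - needed + 1):
            if all(free[start:start + needed]):
                return usable[start:start + needed]
        return None

    def allocate_compliant(usable: List[str], non_compliant: Set[str]) -> List[str]:
        """Allocate every usable band except those that do not satisfy the
        coexistence management requirement."""
        return [ch for ch in usable if ch not in non_compliant]

    usable = ["CH1", "CH2", "CH3", "CH4"]
    # CH2 and CH4 occupied elsewhere: no run of two adjacent free channels exists,
    # so the continuity requirement of the coexistence group is not satisfied.
    assert contiguous_free_run(usable, {"CH2", "CH4"}, needed=2) is None
    # CH3 flagged as non-compliant: allocate CH1, CH2 and CH4, as in the example above.
    assert allocate_compliant(usable, {"CH3"}) == ["CH1", "CH2", "CH4"]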
As described above, according to an embodiment of the present disclosure, in a case that the coexistence management requirement of the first coexistence system is not satisfied, the spectrum coordination device300may modify the spectrum resource of the secondary system in the first coexistence system, to avoid interference with the secondary system in another coexistence system. According to an embodiment of the present disclosure, the spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system generated by the coordination unit310may include: allocating, to the secondary system in the second coexistence system, frequency bands in the usable spectrum resources of the second coexistence system except the frequency bands which do not satisfy the coexistence management requirement. Here, the spectrum coordination device300may obtain the usable spectrum resources of the second coexistence system from the spectrum authorization device in the second coexistence system (for ease of distinction, the spectrum authorization device in the first coexistence system is referred to as the first spectrum authorization device and the spectrum authorization device in the second coexistence system is referred to as the second spectrum authorization device). For example, the spectrum coordination device300may obtain the usable spectrum resources of the second coexistence system from the spectrum authorization device in the second coexistence system through the spectrum authorization device in the first coexistence system, thereby allocating, to the secondary system in the second coexistence system, frequency bands in the usable spectrum resources of the second coexistence system except the frequency bands which do not satisfy the coexistence management requirement. For example, when CH1to CH4are usable spectrum resources of the second coexistence system and CH3is a frequency band that does not satisfy the coexistence management requirement, the spectrum coordination device300may allocate CH1, CH2, and CH4to the secondary system of the second coexistence system. As described above, according to an embodiment of the present disclosure, when the coexistence management requirement of the first coexistence system is not satisfied, the spectrum coordination device300may modify the spectrum resource of the secondary system in the second coexistence system, to avoid interference with the secondary system in another coexistence system. As described above, according to an embodiment of the present disclosure, in a case that the coexistence management requirement of the first coexistence system is not satisfied, the spectrum coordination device300may modify the spectrum resource of the secondary system in the second coexistence system and/or the spectrum resource of the secondary system in the first coexistence system. According to an embodiment of the present disclosure, these two modification methods may be combined according to actual needs.
For example, in a case that the coexistence management requirement of the first coexistence system is not satisfied, the spectrum coordination device300may request, one or more times, that the spectrum resource of the secondary system in the second coexistence system be modified, and if the second coexistence system does not modify the spectrum resource of the secondary system in the second coexistence system, the spectrum coordination device300may modify the spectrum resource of the secondary system in the first coexistence system. In this case, the spectrum coordination device300may also receive, from the second coexistence system (for example, the second spectrum authorization device), information indicating whether the second coexistence system has modified the spectrum resource of the secondary system in the second coexistence system. According to an embodiment of the present disclosure, the above-described embodiments are particularly suitable for a case where it is determined by the spectrum division device that the coexistence management requirement of the first coexistence system is not satisfied. This is because, in a case that the coexistence management requirement of the first coexistence system is determined by the spectrum division device not to be satisfied, there is a high probability that there are secondary systems in the second coexistence system using the same spectrum resources as the secondary systems in the first coexistence system. Therefore, in this case, the spectrum coordination device300may request the secondary system in the first coexistence system not to use the same spectrum resource, or request the secondary system in the second coexistence system not to use the same spectrum resource, to avoid interference between the secondary systems in the two coexistence systems. According to an embodiment of the present disclosure, the spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system generated by the coordination unit310may include: allocating, to the secondary system in the second coexistence system, frequency bands located at the edge in the usable spectrum resources of the second coexistence system. For example, when CH1to CH4are usable spectrum resources of the second coexistence system, the spectrum coordination device300may allocate CH1and/or CH4to the secondary system of the second coexistence system. This is because, in the usable spectrum resources CH1to CH4, CH1and CH4are frequency bands located at the edge. As described above, according to an embodiment of the present disclosure, the spectrum coordination device300may try to make the secondary system in the second coexistence system use the frequency bands located at the edge, to ensure the continuity of the frequency bands of the first coexistence system to the greatest extent, thereby satisfying the coexistence management requirement of the first coexistence system.
Further, according to an embodiment of the present disclosure, when the determination unit330determines a frequency band that does not satisfy the coexistence management requirement of the first coexistence system according to the continuity of the usable spectrum resources, the spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system generated by the coordination unit310may include: allocating, to the secondary system in the second coexistence system, a frequency band located at the edge among the frequency bands that belong to the usable spectrum resources of the second coexistence system and do not satisfy the coexistence management requirement of the first coexistence system. For example, when CH1to CH4are the usable spectrum resources of the second coexistence system and the frequency bands that do not satisfy the coexistence management requirement of the first coexistence system determined by the determination unit330are CH3and CH4, the spectrum coordination device300may allocate CH4to the secondary system in the second coexistence system. This is because both CH3and CH4belong to the usable spectrum resources of the second coexistence system, and of CH3and CH4, CH4is the frequency band located at the edge. As described above, according to an embodiment of the present disclosure, the spectrum coordination device300may try to make the secondary system in the second coexistence system use the frequency band located at the edge, so as to ensure the continuity of the frequency band of the first coexistence system to the greatest extent, thereby satisfying the coexistence management requirement of the first coexistence system. Further, the spectrum coordination device300may allocate, to the second coexistence system, as many frequency bands as possible from among the frequency bands that do not satisfy the coexistence management requirement of the first coexistence system, so as to minimize changes to the frequency bands allocated to the secondary systems in the second coexistence system. For example, in the above example, the second coexistence system only needs to change the spectrum resource of the secondary system to which CH3was originally allocated from CH3to CH4, and the secondary system to which CH4was originally allocated does not need to change its allocation of spectrum resource. Further, according to an embodiment of the present disclosure, in a case that none of the frequency bands that do not satisfy the coexistence management requirement of the first coexistence system determined by the determination unit330is a frequency band located at the edge, the spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system generated by the coordination unit310may include: allocating, to the secondary system in the second coexistence system, frequency bands located at the edge in the usable spectrum resources of the second coexistence system except the frequency bands that do not satisfy the coexistence management requirement. For example, when CH1to CH4are usable spectrum resources of the second coexistence system, and CH3is a frequency band that does not satisfy the coexistence management requirement, the spectrum coordination device300may allocate CH1and/or CH4to the secondary system of the second coexistence system. This is because, in the usable spectrum resources CH1to CH4, CH1and CH4are frequency bands located at the edge.
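The edge-band preference just described can be sketched as a small selection routine. This is an assumed formalization rather than a verbatim algorithm from the disclosure: it prefers an edge band that already fails the coexistence management requirement of the first coexistence system (minimizing reallocations) and otherwise falls back to an edge band outside the non-compliant set.

    from typing import List, Optional, Set

    def pick_edge_band(usable: List[str], non_compliant: Set[str]) -> Optional[str]:
        """Choose the band to assign to the secondary system in the second
        coexistence system, preferring edge bands of its usable spectrum."""
        edges = [usable[0], usable[-1]]
        # Prefer an edge band that is itself non-compliant (minimal change).
        for ch in edges:
            if ch in non_compliant:
                return ch
        # Otherwise fall back to an edge band outside the non-compliant set.
        for ch in edges:
            if ch not in non_compliant:
                return ch
        return None

    usable = ["CH1", "CH2", "CH3", "CH4"]
    assert pick_edge_band(usable, {"CH3", "CH4"}) == "CH4"  # first example above
    assert pick_edge_band(usable, {"CH3"}) == "CH1"         # CH1 and/or CH4 are both acceptable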
As described above, according to an embodiment of the present disclosure, when the coexistence management requirement of the first coexistence system is not satisfied, the spectrum coordination device300may modify the spectrum resources of the secondary system in the second coexistence system, to ensure the continuity of the remaining spectrum resources as much as possible. According to an embodiment of the present disclosure, the spectrum modification information for modifying the spectrum resource of the secondary system in the first coexistence system generated by the coordination unit310may include: allocating, to the coexistence groups in the first coexistence system, frequency bands in the usable spectrum resources of the first coexistence system except the frequency bands allocated to the secondary systems in the second coexistence system. For example, when CH1to CH4are the usable spectrum resources of the second coexistence system, and CH3is a frequency band that does not satisfy the coexistence management requirement, the spectrum coordination device300may allocate CH1and/or CH4to the secondary system in the second coexistence system. Assuming that the spectrum coordination device300allocates CH1to the secondary system in the second coexistence system, it may then allocate CH2, CH3, and CH4to the secondary system in the first coexistence system. As described above, according to an embodiment of the present disclosure, in a case that the coexistence management requirement of the first coexistence system is not satisfied, the spectrum coordination device300may modify the spectrum resource of the secondary system in the second coexistence system, and further modify, according to the modification to the spectrum resource of the secondary system in the second coexistence system, the spectrum resource of the secondary system in the first coexistence system, to ensure the continuity of the spectrum resource of the first coexistence system as much as possible. According to an embodiment of the present disclosure, the spectrum coordination device300may also send spectrum modification information for modifying the spectrum resources of the secondary system in the first coexistence system to the spectrum division device in the first coexistence system via the communication unit320. Here, the spectrum coordination device300may allocate spectrum resources to the coexistence group managed by each of the spectrum division devices according to the above modification, and send the spectrum modification information to the spectrum division devices in the first coexistence system. The spectrum modification information here includes the spectrum resources allocated to the coexistence group managed by each of the spectrum division devices. According to an embodiment of the present disclosure, the spectrum authorization device in the first coexistence system is configured to determine the usable spectrum resources of the first coexistence system, such that the coordination unit310of the spectrum coordination device300may allocate spectrum resources to one or more coexistence groups in the first coexistence system based on the usable spectrum resources of the first coexistence system. According to an embodiment of the present disclosure, the spectrum coordination device300may send the spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system to the second spectrum authorization device via the communication unit320.
For example, the spectrum coordination device300may communicate with the second spectrum authorization device through the first spectrum authorization device, to achieve the foregoing purpose. According to an embodiment of the present disclosure, the spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system may include a frequency band allocated to the secondary system in the second coexistence system. It can be seen that, according to an embodiment of the present disclosure, whether the coexistence management requirement of the first coexistence system is satisfied may be determined by the spectrum coordination device or the spectrum division device, and in a case that the coexistence management requirement of the first coexistence system is not satisfied, the spectrum modification information for modifying the spectrum resource of the secondary system in the first coexistence system and/or for modifying the spectrum resource of the secondary system in another coexistence system may be generated. In this way, when allocating spectrum resources to the secondary system in the coexistence system, the spectrum resources of the secondary systems in other coexistence systems may be considered, such that coordination between different coexistence systems is possible, thereby avoiding interference between the secondary systems in different coexistence systems and ensuring the continuity of spectrum resources as much as possible. 3. Example of a Configuration of a Spectrum Division Device FIG.4is a block diagram showing a structure of a spectrum division device400according to an embodiment of the present disclosure. The spectrum division device400here may be, for example, a C×M. The spectrum division device according to an embodiment of the present disclosure is applied in a wireless communication system. The wireless communication system includes a first coexistence system, and the first coexistence system includes a spectrum coordination device, a spectrum division device, and one or more secondary systems divided into coexistence groups. In addition, the spectrum division device400is used to manage one of the coexistence groups. As shown inFIG.4, the spectrum division device400may include a determination unit410and a communication unit420. Here, all units of the spectrum division device400may be included in a processing circuitry. It should be noted that the spectrum division device400may include one processing circuitry or multiple processing circuitries. Further, the processing circuitry may include various discrete functional units to perform different functions and/or operations. It should be noted that these functional units may be physical entities or logical entities, and units with different names may be implemented by the same physical entity. According to an embodiment of the present disclosure, the determination unit410may determine whether the transmission power of one or more secondary systems in the coexistence group managed by the spectrum division device400satisfies the coexistence management requirement of the first coexistence system. According to an embodiment of the present disclosure, in a case where the transmission power of the one or more secondary systems does not satisfy the coexistence management requirement of the first coexistence system, the spectrum division device400may send a spectrum usage report to the spectrum coordination device via the communication unit420.
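The determination made by the determination unit410and the resulting report can be pictured with the following minimal sketch, continuing the illustrative Python used above; the mapping layout, the report format, and the example values (the name CBSD1, the band CH1, and the threshold) are hypothetical and introduced here only for illustration.

    def build_spectrum_usage_report(allocations, tx_power, threshold):
        # allocations: secondary system -> allocated frequency band;
        # tx_power: secondary system -> calculated transmission power on
        # that band. Bands on which the power falls below the threshold
        # are collected into a spectrum usage report for the spectrum
        # coordination device; None means the requirement is satisfied.
        offending = [band for system, band in allocations.items()
                     if tx_power[system] < threshold]
        return {'bands': offending} if offending else None

    # E.g. CBSD1 was allocated CH1 but its transmission power on CH1 is
    # below the threshold, so the report lists CH1.
    print(build_spectrum_usage_report({'CBSD1': 'CH1'}, {'CBSD1': 10.0}, 14.0))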
As described above, the spectrum division device400according to the embodiment of the present disclosure may determine whether the transmission power of each of the secondary systems in the coexistence group managed by the spectrum division device400satisfies the coexistence management requirement of the first coexistence system, and may send a spectrum usage report to the spectrum coordination device in a case that the coexistence management requirement is not satisfied, such that the spectrum coordination device may modify the spectrum resource of the secondary system in the first coexistence system or modify the spectrum resource of the secondary system in another coexistence system according to the spectrum usage report. In this way, when allocating spectrum resources to the secondary systems in the coexistence system, the spectrum resources of the secondary systems in other coexistence systems may be considered, such that coordination between different coexistence systems is possible, spectrum resources are allocated to the secondary systems reasonably, and resources can be used effectively without interference between secondary systems. According to an embodiment of the present disclosure, in a case where the transmission power of the one or more secondary systems is less than a predetermined threshold, the determination unit410may determine that the one or more secondary systems do not satisfy the coexistence management requirement of the first coexistence system. According to an embodiment of the present disclosure, in a case where the transmission power of the one or more secondary systems is less than a predetermined threshold, there may be a high probability that the secondary systems of other coexistence systems use the same frequency band as the one or more secondary systems. This situation may be caused by a lack of coordination between different coexistence systems. Therefore, in this case, the determination unit410may determine that the coexistence management requirement of the first coexistence system is not satisfied. As shown inFIG.4, according to an embodiment of the present disclosure, the spectrum division device400may further include a generation unit430configured to generate a spectrum usage report. Further, the spectrum usage report may include frequency band information of one or more secondary systems that do not satisfy the coexistence management requirement of the first coexistence system. For example, when the spectrum division device400allocates CH1to a certain secondary system and the transmission power of the secondary system on CH1is less than a predetermined threshold, the spectrum usage report may include CH1information. As shown inFIG.4, according to an embodiment of the present disclosure, the spectrum division device400may further include a division unit440configured to allocate spectrum resources to the coexistence group managed by the spectrum division device400according to the usable spectrum resources from the spectrum coordination device. According to an embodiment of the present disclosure, the division unit440of the spectrum division device400may allocate spectrum resources to the coexistence group managed by the spectrum division device400according to the usable spectrum resources from the spectrum coordination device. When allocating spectrum resources, the spectrum division device400may calculate the transmission power of each of the secondary systems on the allocated frequency band.
If it is found that the transmission power on the frequency band allocated to a certain secondary system or some secondary systems is less than a predetermined threshold, the generation unit430may generate a spectrum usage report to send to the spectrum coordination device. According to an embodiment of the present disclosure, the spectrum division device400may receive spectrum modification information from the spectrum coordination device via the communication unit420. The spectrum modification information includes the spectrum resources reallocated to the spectrum division device400. According to an embodiment of the present disclosure, the division unit440may allocate, according to the spectrum modification information, spectrum resources to one or more secondary systems in the coexistence group managed by the spectrum division device400. For example, if the spectrum modification information includes the frequency bands in the usable spectrum resources of the first coexistence system except the frequency bands which do not satisfy the coexistence management requirement, then the division unit440may allocate, to the secondary system in the coexistence group, spectrum resources in the frequency bands in the usable spectrum resources of the first coexistence system except the frequency bands which do not satisfy the coexistence management requirement. For another example, if the spectrum modification information includes the frequency bands in the usable spectrum resources of the first coexistence system except the frequency bands which are allocated to the secondary system in the second coexistence system, then the division unit440may allocate, to the secondary system in the coexistence group, spectrum resources in the frequency bands in the usable spectrum resources of the first coexistence system except the frequency bands which are allocated to the secondary system in the second coexistence system. The spectrum division device400according to an embodiment of the present disclosure may allocate spectrum resources to the secondary system according to the usable spectrum resources received from the spectrum coordination device300. Therefore, all embodiments of the spectrum coordination device300described in the foregoing are suitable for this embodiment. 4. Example of a Configuration of a Spectrum Authorization Device FIG.5is a block diagram showing a structure of a spectrum authorization device500according to an embodiment of the present disclosure. The spectrum authorization device500here may be, for example, a SAS. The spectrum authorization device500according to an embodiment of the present disclosure is applied to a wireless communication system. The wireless communication system includes a first coexistence system and a second coexistence system. The first coexistence system includes a spectrum coordination device and one or more secondary systems. The second coexistence system includes a spectrum authorization device500and one or more secondary systems. As shown inFIG.5, the spectrum authorization device500may include an authorization unit520and a communication unit510. Here, all units of the spectrum authorization device500may be included in a processing circuitry. It should be noted that the spectrum authorization device500may include one processing circuitry or multiple processing circuitries. Further, the processing circuitry may include various discrete functional units to perform different functions and/or operations.
It should be noted that these functional units may be physical entities or logical entities, and units with different names may be implemented by the same physical entity. According to an embodiment of the present disclosure, the spectrum authorization device500may receive spectrum modification information from the spectrum coordination device via the communication unit510. For example, the spectrum authorization device500may receive the spectrum modification information from the spectrum coordination device through the first spectrum authorization device in the first coexistence system. According to an embodiment of the present disclosure, the authorization unit520may modify the spectrum resource of the secondary system in the second coexistence system according to the spectrum modification information. According to an embodiment of the present disclosure, the spectrum modification information may include spectrum resources allocated to the secondary system in the second coexistence system. For example, when the spectrum modification information includes the frequency bands in the usable spectrum resources of the second coexistence system except the frequency bands which do not satisfy the coexistence management requirement of the first coexistence system, the authorization unit520may allocate, to the secondary system in the second coexistence system, spectrum resources in the frequency bands in the usable spectrum resources of the second coexistence system except the frequency bands which do not satisfy the coexistence management requirement of the first coexistence system. For another example, when the spectrum modification information includes the frequency bands located at the edge in the usable spectrum resources of the second coexistence system, the authorization unit520may allocate, to the secondary system in the second coexistence system, spectrum resources in the frequency bands located at the edge in the usable spectrum resources of the second coexistence system. As described above, the spectrum authorization device500according to an embodiment of the present disclosure may modify the spectrum resources of the secondary system managed by the spectrum authorization device500based on information from other coexistence systems, thereby making coordination between different coexistence systems possible, so as to avoid interference between secondary systems in different coexistence systems and ensure the continuity of spectrum resources. The spectrum coordination device300according to an embodiment of the present disclosure may determine the usable spectrum resources for the spectrum division device400according to the usable spectrum resources received from the spectrum authorization device500. Therefore, all embodiments of the spectrum coordination device300and the spectrum division device400described in the foregoing are suitable for this embodiment. 5. Example of a Configuration of a Wireless Communication System As described above, a wireless communication system is provided according to the present disclosure. The wireless communication system includes a first coexistence system and a second coexistence system. The first coexistence system includes: one or more secondary systems divided into coexistence groups; and a spectrum coordination device configured to allocate spectrum resources to the coexistence groups in the first coexistence system according to the usable spectrum resources of the first coexistence system.
The second coexistence system includes: one or more secondary systems; and a spectrum authorization device configured to allocate spectrum resources to a secondary system in the second coexistence system according to usable spectrum resources of the second coexistence system. According to an embodiment of the present disclosure, the spectrum coordination device is configured to generate, in a case where a coexistence management requirement of the first coexistence system is not satisfied, spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system, and the spectrum authorization device is configured to receive the spectrum modification information from the spectrum coordination device and modify the spectrum resources of the secondary system in the second coexistence system according to the spectrum modification information. According to an embodiment of the present disclosure, the first coexistence system may further include a first spectrum authorization device configured to determine the usable spectrum resources of the first coexistence system. Further, according to an embodiment of the present disclosure, the first coexistence system may further include one or more spectrum division devices configured to allocate spectrum resources to one or more secondary systems in the coexistence group managed by the spectrum division device based on the spectrum resources allocated by the spectrum coordination device for the coexistence group. Here, the spectrum coordination device may be implemented by the spectrum coordination device300described above, the spectrum division device may be implemented by the spectrum division device400described above, and the spectrum authorization device in the second coexistence system may be implemented by the spectrum authorization device500described above, and thus, all embodiments described in the foregoing are suitable for this embodiment. FIG.6toFIG.9are signaling flowcharts showing a spectrum management method according to an embodiment of the present disclosure. InFIG.6toFIG.9, SAS1, GSC, C×M, and CBSD1belong to a first coexistence system, and SAS2and CBSD2belong to a second coexistence system. That is, the GSC may be implemented by the spectrum coordination device300described above, the C×M may be implemented by the spectrum division device400described above, and the SAS2may be implemented by the spectrum authorization device500described above. In an embodiment shown inFIG.6, the C×M determines whether a coexistence management requirement of the first coexistence system is satisfied, and in a case that the coexistence management requirement of the first coexistence system is not satisfied, the spectrum coordination device generates the spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system. As shown inFIG.6, in step S601, the CBSD2requests spectrum resources from the SAS2. Next, in step S602, the SAS2determines usable spectrum resources of the second coexistence system according to the spectrum usage of the primary system. Next, in step S603, the SAS2sends the usable spectrum resources to the CBSD2. In step S604, the CBSD1requests spectrum resources from the C×M that manages the CBSD1, then the C×M requests spectrum resources from the GSC, and the GSC requests spectrum resources from the SAS1.
Next, in step S605, the SAS1determines usable spectrum resources of the first coexistence system according to the spectrum usage of the primary system. Next, in step S606, the SAS1sends the usable spectrum resources of the first coexistence system to the GSC in the first coexistence system. It is worth noting that the process of the CBSD2requesting spectrum resources from the SAS2and the process of the CBSD1requesting spectrum resources from the SAS1are independent, and therefore, the sequence numbers in the figure do not indicate the sequence of events. Next, in step S607, the GSC allocates spectrum resources to a coexistence group according to the usable spectrum resources from the SAS1. Next, in step S608, the GSC sends the spectrum resources allocated for the coexistence group to the C×M in the first coexistence system. Next, in step S609, the C×M allocates spectrum resources to the CBSD managed by the C×M according to the spectrum resources from the GSC. Next, in step S610, the C×M determines whether the transmission power of the CBSD managed by the C×M satisfies a coexistence management requirement of the first coexistence system. It is assumed here that the transmission power of the CBSD1does not satisfy the coexistence management requirement of the first coexistence system. Next, in step S611, the C×M sends a spectrum usage report to the GSC. Next, in step S612, the GSC requests secondary system information from the SAS1, where the secondary system information includes information on all secondary systems in the wireless communication system, and may include, for example, a location and spectrum resource usage. Next, in step S613, the GSC obtains the secondary system information from the SAS1, so as to know which secondary system or secondary systems in the entire wireless communication system use the same frequency band as the frequency band that does not satisfy the coexistence management requirement. Next, in step S614, the GSC generates spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system. The spectrum modification information includes spectrum resources allocated to the secondary system in the second coexistence system, for example, the frequency bands in the usable spectrum resources of the second coexistence system except the frequency bands which do not satisfy the coexistence management requirement of the first coexistence system. Next, in step S615, the GSC sends the spectrum modification information to the SAS2. Next, in step S616, the CBSD2sends a spectrum use request to the SAS2, and in step S617, the SAS2may send the modified spectrum resource to the CBSD2. As described above,FIG.6shows a situation where the C×M determines whether the coexistence management requirement of the first coexistence system is satisfied. In a case that the C×M determines that the coexistence management requirement of the first coexistence system is not satisfied, it means that there may be a secondary system (CBSD2) in another coexistence system in the wireless communication system that uses the same frequency band as the secondary system (CBSD1) in the first coexistence system. For example, in the scenario shown inFIG.1, both the secondary system4and the secondary system5use CH4. In this case, as shown inFIG.6, the GSC may request the SAS2to modify the spectrum resource for the CBSD2, for example, to use a frequency band other than CH4, to prevent the secondary system4and the secondary system5from using the same frequency band.
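To make the FIG.6flow concrete, the GSC's handling of the spectrum usage report (steps S611to S615) might look like the following sketch, continuing the illustrative Python used above; the report format and channel lists are assumptions for illustration only, not the disclosed message formats.

    def gsc_handle_usage_report(report, usable_second):
        # On receiving the spectrum usage report, the GSC steers the
        # second coexistence system away from the offending bands by
        # offering only the remaining usable spectrum of the second
        # coexistence system in the modification information.
        offending = set(report['bands'])
        return [ch for ch in usable_second if ch not in offending]

    # CBSD1 and CBSD2 collide on CH4, so the modification information
    # sent to the SAS2 offers CH1 to CH3 for the CBSD2.
    print(gsc_handle_usage_report({'bands': ['CH4']},
                                  ['CH1', 'CH2', 'CH3', 'CH4']))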
In an embodiment shown inFIG.7, the C×M determines whether a coexistence management requirement of a first coexistence system is satisfied, and in a case that the coexistence management requirement of the first coexistence system is not satisfied, a spectrum coordination device generates spectrum modification information for modifying the spectrum resource of a secondary system in the first coexistence system. As shown inFIG.7, in step S701, the CBSD1requests spectrum resources from the C×M that manages the CBSD1, then the C×M requests spectrum resources from the GSC, and the GSC requests spectrum resources from the SAS1. Next, in step S702, the SAS1determines usable spectrum resources of the first coexistence system according to the spectrum usage of the primary system. Next, in step S703, the SAS1sends the usable spectrum resources of the first coexistence system to the GSC in the first coexistence system. Next, in step S704, the GSC allocates spectrum resources to the coexistence group according to the usable spectrum resources from the SAS1. Next, in step S705, the GSC sends the spectrum resources allocated to the coexistence group to the C×M in the first coexistence system. Next, in step S706, the C×M allocates spectrum resources to the CBSD managed by the C×M according to the spectrum resources from the GSC. Next, in step S707, the C×M determines whether the transmission power of the CBSD managed by the C×M satisfies the coexistence management requirement of the first coexistence system. It is assumed here that the transmission power of the CBSD1does not satisfy the coexistence management requirement of the first coexistence system. Next, in step S708, the C×M sends a spectrum usage report to the GSC. Next, in step S709, the GSC requests secondary system information from the SAS1, where the secondary system information includes information on all the secondary systems in the wireless communication system, and may include, for example, a location and spectrum resource usage. Next, in step S710, the GSC obtains the secondary system information from the SAS1, so as to know which secondary system or secondary systems in the entire wireless communication system use the same frequency band as the frequency band that does not satisfy the coexistence management requirement. Next, in step S711, the GSC generates spectrum modification information for modifying the spectrum resource of the secondary system in the first coexistence system. The spectrum modification information includes spectrum resources allocated to the secondary system in the first coexistence system, for example, the frequency bands in the usable spectrum resources of the first coexistence system except the frequency bands which do not satisfy the coexistence management requirement of the first coexistence system. Next, in step S712, the GSC allocates spectrum resources to the coexistence group based on the spectrum modification information. In other words, the spectrum resources allocated to the coexistence group do not include the frequency bands that do not satisfy the coexistence management requirement as described above. Next, in step S713, the C×M allocates spectrum resources to the secondary system managed by the C×M according to the modified spectrum resources from the GSC. Next, in step S714, the C×M sends the spectrum resources allocated to the secondary system managed by the C×M to the corresponding secondary system.
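In the same illustrative style, steps S711to S712ofFIG.7amount to reallocating the first coexistence system away from the offending bands, rather than asking the second coexistence system to move; again, the function name and list representation are hypothetical.

    def gsc_modify_first_system(usable_first, offending):
        # Reallocate the coexistence group within the usable spectrum of
        # the first coexistence system, excluding the bands that do not
        # satisfy the coexistence management requirement.
        return [ch for ch in usable_first if ch not in offending]

    # CBSD1 is moved off CH4 so that it no longer shares the band with CBSD2.
    print(gsc_modify_first_system(['CH1', 'CH2', 'CH3', 'CH4'], ['CH4']))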
As described above,FIG.7shows a situation where the C×M determines whether the coexistence management requirement of the first coexistence system is satisfied. In a case that the C×M determines that the coexistence management requirement of the first coexistence system is not satisfied, it means that there may be secondary systems in another coexistence system in the wireless communication system that use the same frequency band as the secondary system (CBSD1) in the first coexistence system. For example, in the scenario shown inFIG.1, both the secondary system4and the secondary system5use CH4. In this case, as shown inFIG.7, the GSC may modify the spectrum resource for the CBSD1, for example, to use a frequency band other than CH4, so as to prevent the secondary system4and the secondary system5from using the same frequency band. In an embodiment shown inFIG.8, the GSC determines whether a coexistence management requirement of a first coexistence system is satisfied, and in a case that the coexistence management requirement of the first coexistence system is not satisfied, a spectrum coordination device generates spectrum modification information for modifying the spectrum resource of the secondary system in a second coexistence system. As shown inFIG.8, in step S801, the CBSD2requests spectrum resources from the SAS2. Next, in step S802, the SAS2determines usable spectrum resources of the second coexistence system according to the spectrum usage of the primary system. Next, in step S803, the SAS2sends the usable spectrum resources to the CBSD2. In step S804, the CBSD1requests spectrum resources from the C×M that manages the CBSD1, then the C×M requests spectrum resources from the GSC, and the GSC requests spectrum resources from the SAS1. Next, in step S805, the SAS1determines usable spectrum resources of the first coexistence system according to the spectrum usage of the primary system. Next, in step S806, the SAS1sends the usable spectrum resources of the first coexistence system to the GSC in the first coexistence system. It is worth noting that the process of the CBSD2requesting spectrum resources from the SAS2and the process of the CBSD1requesting spectrum resources from the SAS1are independent, and therefore, the sequence numbers in the figure do not indicate the sequence of events. Next, in step S807, the GSC allocates spectrum resources to the coexistence group according to the usable spectrum resources from the SAS1. Next, in step S808, the GSC determines whether the coexistence management requirement of the first coexistence system is satisfied. For example, the GSC may determine whether the usable spectrum resources of the first coexistence system are consecutive. It is assumed that the discontinuity of the usable spectrum resources of the first coexistence system results in the coexistence management requirement of the first coexistence system not being satisfied. Next, in step S809, the GSC requests secondary system information from the SAS1, where the secondary system information includes information on all the secondary systems in the wireless communication system, and may include, for example, a location and spectrum resource usage. Next, in step S810, the GSC obtains the secondary system information from the SAS1, so as to know the spectrum usage of the secondary systems in the entire wireless communication system. Next, in step S811, the GSC generates spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system.
The spectrum modification information includes spectrum resources allocated to the secondary system in the second coexistence system, for example, the frequency bands located at the edge in the usable spectrum resources of the second coexistence system. Next, in step S812, the GSC sends the spectrum modification information to the SAS2. Next, in step S813, the CBSD2sends a spectrum use request to the SAS2, and in step S814, the SAS2may send the modified spectrum resource to the CBSD2. As described above,FIG.8shows a situation where the GSC determines whether the coexistence management requirement of the first coexistence system is satisfied. When the GSC determines that the coexistence management requirement of the first coexistence system is not satisfied, it means that the secondary systems in other coexistence systems in the wireless communication system use frequency bands scattered across the usable spectrum. For example, in the scenario shown inFIG.1, the secondary system5uses CH4, the secondary system6uses CH1, and the secondary system7uses CH3. In this case, as shown inFIG.8, the GSC may request the SAS2to modify the spectrum resources for the CBSD managed by the SAS2, for example, to use CH4or CH1, so as to ensure the continuity of the remaining spectrum resources as much as possible. In an embodiment shown inFIG.9, the GSC determines whether a coexistence management requirement of a first coexistence system is satisfied, and in a case that the coexistence management requirement of the first coexistence system is not satisfied, a spectrum coordination device generates spectrum modification information for modifying the spectrum resource of the secondary system in a second coexistence system and spectrum modification information for modifying the spectrum resource of the secondary system in the first coexistence system. Steps S901to S910inFIG.9respectively correspond to steps S801to S810inFIG.8, and are not repeated here. Only the steps inFIG.9that are different from those inFIG.8will be described below. In step S911, the GSC generates spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system. The spectrum modification information includes spectrum resources allocated to the secondary system in the second coexistence system, for example, the frequency bands located at the edge in the usable spectrum resources of the second coexistence system. Furthermore, in step S911, the GSC also generates spectrum modification information for modifying the spectrum resources of the secondary system in the first coexistence system. The spectrum modification information includes spectrum resources allocated to the secondary system in the first coexistence system, for example, resources in the usable spectrum resources of the first coexistence system except the resources allocated to the second coexistence system. Next, in step S912, the GSC sends the spectrum modification information to the SAS2. Next, in step S913, the CBSD2sends a spectrum use request to the SAS2, and in step S914, the SAS2may send the modified spectrum resource to the CBSD2. In step S915, the GSC allocates spectrum resources to the coexistence group based on the spectrum modification information. Next, in step S916, the C×M allocates spectrum resources to the secondary system managed by the C×M according to the modified spectrum resources from the GSC.
Next, in step S917, the C×M sends the spectrum resources allocated to the secondary system managed by the C×M to the corresponding secondary system. As described above,FIG.9shows a situation where the GSC determines whether the coexistence management requirement of the first coexistence system is satisfied. When the GSC determines that the coexistence management requirement of the first coexistence system is not satisfied, it means that the secondary systems in other coexistence systems in the wireless communication system use frequency bands scattered across the usable spectrum. For example, in the scenario shown inFIG.1, the secondary system5uses CH4, the secondary system6uses CH1, and the secondary system7uses CH3. In this case, as shown inFIG.9, the GSC may request the SAS2to modify spectrum resources for the CBSD managed by the SAS2, for example, to use CH4, and may allocate spectrum to the coexistence group based on the spectrum resources allocated to the secondary system in the second coexistence system, for example, allocate CH1to CH3to the coexistence group in the first coexistence system, so as to ensure the continuity of the spectrum resources of the first coexistence system as much as possible. As described above,FIG.6toFIG.9illustrate the spectrum resource allocation process according to an embodiment of the present disclosure in an exemplary manner. Those skilled in the art should understand that, without departing from the spirit of the present disclosure, adaptive modification and variation may be performed on the foregoing spectrum resource allocation process. 6. Method Embodiment Subsequently, a wireless communication method performed by a spectrum coordination device300in a wireless communication system according to an embodiment of the present disclosure is described in detail. The wireless communication system includes a first coexistence system and a second coexistence system. The first coexistence system includes a spectrum coordination device300and one or more secondary systems divided into coexistence groups, and the second coexistence system includes one or more secondary systems. FIG.10is a flowchart showing a wireless communication method performed by a spectrum coordination device300in a wireless communication system according to an embodiment of the present disclosure. As shown inFIG.10, in step S1010, it is determined whether a coexistence management requirement of a first coexistence system is satisfied. Next, in step S1020, in a case that the coexistence management requirement of the first coexistence system is not satisfied, spectrum modification information for modifying the spectrum resource of a secondary system in the first coexistence system and/or for modifying the spectrum resource of a secondary system in a second coexistence system is generated. Preferably, the first coexistence system includes one or more spectrum division devices, and the wireless communication method further includes: receiving a spectrum usage report from the spectrum division device; and determining, according to the spectrum usage report, whether the coexistence management requirement of the first coexistence system is satisfied. Preferably, the wireless communication method further includes: determining, according to the spectrum usage report, a frequency band which does not satisfy the coexistence management requirement of the first coexistence system.
Preferably, the wireless communication method further includes: determining a consecutive usage period of time of usable spectrum resources of the first coexistence system; and determining, in a case where a consecutive usage period-of-time requirement with respect to frequency bands of one coexistence group is not satisfied, that the coexistence management requirement of the first coexistence system is not satisfied. Preferably, the wireless communication method further includes: sending the spectrum modification information for modifying the spectrum resource of the secondary system in the first coexistence system to the spectrum division device in the first coexistence system. Preferably, the spectrum modification information for modifying the spectrum resource of the secondary system in the first coexistence system includes: allocating, to the coexistence groups in the first coexistence system, frequency bands in the usable spectrum resources of the first coexistence system which satisfy the coexistence management requirement. Preferably, the second coexistence system further includes a second spectrum authorization device for determining usable spectrum resources of the second coexistence system, and the wireless communication method further includes: sending the spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system to the second spectrum authorization device. Preferably, the spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system includes: allocating, to the secondary system in the second coexistence system, frequency bands in the usable spectrum resources of the second coexistence system except the frequency bands which do not satisfy the coexistence management requirement. Preferably, the spectrum modification information for modifying the spectrum resource of the secondary system in the second coexistence system includes: allocating, to the secondary system in the second coexistence system, frequency bands at the edge in the usable spectrum resources of the second coexistence system. Preferably, the spectrum modification information for modifying the spectrum resource of the secondary system in the first coexistence system includes: allocating, to the coexistence groups in the first coexistence system, frequency bands in the usable spectrum resources of the first coexistence system except frequency bands allocated to the secondary systems in the second coexistence system. Preferably, the first coexistence system further includes a first spectrum authorization device for determining usable spectrum resources of the first coexistence system, and the wireless communication method further includes: allocating spectrum resources to one or more coexistence groups in the first coexistence system according to the usable spectrum resources of the first coexistence system. According to an embodiment of the present disclosure, the above method may be performed by the spectrum coordination device300according to the embodiment of the present disclosure. Therefore, all embodiments of the spectrum coordination device300described above are suitable for this embodiment. Subsequently, a wireless communication method performed by a spectrum division device400in a wireless communication system according to an embodiment of the present disclosure is described in detail. 
The wireless communication system includes a first coexistence system, and the first coexistence system includes a spectrum coordination device, the spectrum division device and one or more secondary systems divided into coexistence groups. FIG.11is a flowchart showing a wireless communication method performed by a spectrum division device400in a wireless communication system according to an embodiment of the present disclosure. As shown inFIG.11, in step S1110, it is determined whether transmission power of one or more secondary systems in coexistence groups managed by a spectrum division device satisfies a coexistence management requirement of a first coexistence system. Next, in step S1120, in a case that the transmission power of the one or more secondary systems does not satisfy the coexistence management requirement of the first coexistence system, a spectrum usage report is sent to a spectrum coordination device. Preferably, the wireless communication method further includes: determining, in a case where the transmission power of the one or more secondary systems is less than a predetermined threshold, that the coexistence management requirement of the first coexistence system is not satisfied. Preferably, the wireless communication method further includes: generating the spectrum usage report, to include frequency band information of one or more secondary systems which do not satisfy the coexistence management requirement of the first coexistence system. Preferably, the wireless communication method further includes: receiving spectrum modification information from the spectrum coordination device; and allocating, according to the spectrum modification information, spectrum resources to one or more secondary systems in the coexistence groups managed by the spectrum division device, to allocate, to a secondary system in the coexistence groups, frequency bands in usable spectrum resources of the first coexistence system which satisfy the coexistence management requirement. According to an embodiment of the present disclosure, the above method may be performed by the spectrum division device400according to the embodiment of the present disclosure. Therefore, all embodiments of the spectrum division device400described above are suitable for this embodiment. Next, a wireless communication method performed by a spectrum authorization device500in a wireless communication system according to an embodiment of the present disclosure is described in detail. The wireless communication system includes a first coexistence system and a second coexistence system. The first coexistence system includes a spectrum coordination device and one or more secondary systems, and the second coexistence system includes a spectrum authorization device and one or more secondary systems. FIG.12is a flowchart showing a wireless communication method performed by a spectrum authorization device500in a wireless communication system according to an embodiment of the present disclosure. As shown inFIG.12, in step S1210, spectrum modification information is received from a spectrum coordination device. Next, in step S1220, spectrum resources of a secondary system in a second coexistence system are modified according to the spectrum modification information.
Preferably, the wireless communication method further includes: modifying, according to the spectrum modification information, the spectrum resources of the secondary system in the second coexistence system, to allocate, to the secondary system in the second coexistence system, frequency bands in usable spectrum resources of the second coexistence system except frequency bands which do not satisfy a coexistence management requirement of the first coexistence system. According to an embodiment of the present disclosure, the above method may be performed by the spectrum authorization device500according to the embodiment of the present disclosure. Therefore, all embodiments of the spectrum authorization device500described above are suitable for this embodiment. 7. Application Example The technology according to the present disclosure may be applied to various products. For example, the spectrum coordination device300, the spectrum division device400and the spectrum authorization device500may be realized as any type of server such as a tower server, a rack server, and a blade server. The spectrum coordination device300, the spectrum division device400and the spectrum authorization device500may be a control module (such as an integrated circuit module including a single die, or a card or a blade that is inserted into a slot of a blade server) mounted on a server. FIG.13is a block diagram showing an example of a server1300which may implement the spectrum coordination device300, the spectrum division device400and the spectrum authorization device500according to the present disclosure. The server1300includes a processor1301, a memory1302, a storage device1303, a network interface1304, and a bus1306. The processor1301may be, for example, a central processing unit (CPU) or a digital signal processor (DSP), and controls functions of the server1300. The memory1302includes a random access memory (RAM) and a read only memory (ROM), and stores a program that is executed by the processor1301and data. The storage device1303may include a storage medium, such as a semiconductor memory or a hard disc. The network interface1304is a wired communication interface for connecting the server1300to a wired communication network1305. The wired communication network1305may be a core network such as an evolved packet core (EPC), or a packet data network (PDN) such as the Internet. The bus1306connects the processor1301, the memory1302, the storage device1303, and the network interface1304to each other. The bus1306may include two or more buses (such as a high speed bus and a low speed bus), each of which has a different speed. In the server1300shown inFIG.13, the coordination unit310and the determination unit330described inFIG.3, the determination unit410, the generation unit430, and the division unit440described inFIG.4, as well as the authorization unit520described inFIG.5may be implemented by the processor1301, and the communication unit320described inFIG.3, the communication unit420described inFIG.4and the communication unit510described inFIG.5may be implemented by the network interface1304. For example, the processor1301may execute the functions of determining usable spectrum resources, determining whether the coexistence management requirement of the coexistence system is satisfied and generating a spectrum usage report by executing instructions stored in the memory1302or the storage device1303. Preferred embodiments of the present disclosure are described above with reference to the accompanying drawings.
However, the present disclosure is not limited to the above examples. Those skilled in the art can make various changes and modifications within the scope of the appended claims, and it should be understood that such changes and modifications naturally fall within the technical scope of the present disclosure. For example, units shown by a dotted-line block in the functional block diagrams in the drawings indicate that the functional units are optional in the corresponding device, and the optional functional units may be combined appropriately to achieve the required functions. For example, multiple functions implemented by one unit in the above embodiments may be implemented by separate devices. Alternatively, multiple functions implemented by multiple units in the above embodiments may be implemented by separate devices. In addition, one of the above functions may be implemented by multiple units. Needless to say, such configurations are included in the technical scope of the present disclosure. In this specification, the steps described in the flowcharts include not only processing performed in time-series order, but also processing performed in parallel or individually and not necessarily in time series. Furthermore, the steps performed in time series may be performed in another order as appropriate. Although the embodiments of the present disclosure have been described above in detail in connection with the drawings, it is appreciated that the embodiments described above are merely illustrative rather than limitative of the present disclosure. Those skilled in the art can make various modifications and changes to the above embodiments without departing from the spirit and scope of the present disclosure. Therefore, the scope of the present disclosure is defined only by the appended claims and their equivalents.
All figures © Copyright 2019-2020 Charter Communications Operating, LLC. All rights reserved. DETAILED DESCRIPTION Reference is now made to the drawings wherein like numerals refer to like parts throughout. As used herein, the term “access node” refers generally and without limitation to a network node which enables communication between a user or client device and another entity within a network, such as for example a CBRS CBSD, or a cellular xNB. As used herein, the term “application” (or “app”) refers generally and without limitation to a unit of executable software that implements a certain functionality or theme. The themes of applications vary broadly across any number of disciplines and functions (such as on-demand content management, e-commerce transactions, brokerage transactions, home entertainment, calculator, etc.), and one application may have more than one theme. The unit of executable software generally runs in a predetermined environment; for example, the unit could include a downloadable Java Xlet™ that runs within the JavaTV™ environment. As used herein, the term “CBRS” refers without limitation to the CBRS architecture and protocols described in Signaling Protocols and Procedures for Citizens Broadband Radio Service (CBRS): Spectrum Access System (SAS)—Citizens Broadband Radio Service Device (CBSD) Interface Technical Specification—Document WINNF-TS-0016, Version V1.2.1.3, January 2018, incorporated herein by reference in its entirety, and any related documents or subsequent versions thereof. As used herein, the terms “client device” or “user device” or “UE” include, but are not limited to, set-top boxes (e.g., DSTBs), gateways, modems, personal computers (PCs), and minicomputers, whether desktop, laptop, or otherwise, and mobile devices such as handheld computers, PDAs, personal media devices (PMDs), tablets, “phablets”, smartphones, and vehicle infotainment systems or portions thereof. As used herein, the term “computer program” or “software” is meant to include any sequence of human or machine cognizable steps which perform a function. Such program may be rendered in virtually any programming language or environment including, for example, C/C++, Fortran, COBOL, PASCAL, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans, etc.) and the like. As used herein, the term “DOCSIS” refers to any of the existing or planned variants of the Data Over Cable Services Interface Specification, including for example DOCSIS versions 1.0, 1.1, 2.0, 3.0, 3.1 and 4.0. As used herein, the term “headend” or “backend” refers generally to a networked system controlled by an operator (e.g., an MSO) that distributes programming to MSO clientele using client devices. Such programming may include literally any information source/receiver including, inter alia, free-to-air TV channels, pay TV channels, interactive TV, over-the-top services, streaming services, and the Internet. As used herein, the terms “Internet” and “internet” are used interchangeably to refer to inter-networks including, without limitation, the Internet. Other common examples include but are not limited to: a network of external servers, “cloud” entities (such as memory or storage not local to a device, storage generally accessible at any time via a network connection, and the like), service nodes, access points, controller devices, client devices, etc.
As used herein, the term “LTE” refers to, without limitation and as applicable, any of the variants or Releases of the Long-Term Evolution wireless communication standard, including LTE-U (Long Term Evolution in unlicensed spectrum), LTE-LAA (Long Term Evolution, Licensed Assisted Access), LTE-A (LTE Advanced), and 4G/4.5G LTE. As used herein, the term “memory” includes any type of integrated circuit or other storage device adapted for storing digital data including, without limitation, ROM, PROM, EEPROM, DRAM, SDRAM, DDR/2 SDRAM, EDO/FPMS, RLDRAM, SRAM, “flash” memory (e.g., NAND/NOR), 3D memory, and PSRAM. As used herein, the terms “microprocessor” and “processor” or “digital processor” are meant generally to include all types of digital processing devices including, without limitation, digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose (CISC) processors, microprocessors, gate arrays (e.g., FPGAs), PLDs, reconfigurable computer fabrics (RCFs), array processors, secure microprocessors, and application-specific integrated circuits (ASICs). Such digital processors may be contained on a single unitary IC die, or distributed across multiple components. As used herein, the terms “MSO” or “multiple systems operator” refer to a cable, satellite, or terrestrial network provider having infrastructure required to deliver services including programming and data over those mediums. As used herein, the terms “MNO” or “mobile network operator” refer to a cellular, satellite phone, WMAN (e.g., 802.16), or other network service provider having infrastructure required to deliver services including without limitation voice and data over those mediums. As used herein, the terms “network” and “bearer network” refer generally to any type of telecommunications or data network including, without limitation, hybrid fiber coax (HFC) networks, satellite networks, telco networks, and data networks (including MANs, WANs, LANs, WLANs, internets, and intranets). Such networks or portions thereof may utilize any one or more different topologies (e.g., ring, bus, star, loop, etc.), transmission media (e.g., wired/RF cable, RF wireless, millimeter wave, optical, etc.) and/or communications or networking protocols (e.g., SONET, DOCSIS, IEEE Std. 802.3, ATM, X.25, Frame Relay, 3GPP, 3GPP2, LTE/LTE-A/LTE-U/LTE-LAA, 5G NR, WAP, SIP, UDP, FTP, RTP/RTCP, H.323, etc.). As used herein, the term “network interface” refers to any signal or data interface with a component or network including, without limitation, those of the FireWire (e.g., FW400, FW800, etc.), USB (e.g., USB 2.0, 3.0, OTG), Ethernet (e.g., 10/100, 10/100/1000 (Gigabit Ethernet), 10-Gig-E, etc.), MoCA, Coaxsys (e.g., TVnet™), radio frequency tuner (e.g., in-band or OOB, cable modem, etc.), LTE/LTE-A/LTE-U/LTE-LAA, Wi-Fi (802.11), WiMAX (802.16), Z-wave, PAN (e.g., 802.15), or power line carrier (PLC) families. As used herein the terms “5G” and “New Radio (NR)” refer without limitation to apparatus, methods or systems compliant with 3GPP Release 15, and any modifications, subsequent Releases, or amendments or supplements thereto which are directed to New Radio technology, whether licensed or unlicensed. As used herein, the term “QAM” refers to modulation schemes used for sending signals over e.g., cable or other networks. Such modulation schemes might use any constellation level (e.g., QPSK, 16-QAM, 64-QAM, 256-QAM, etc.) depending on details of a network.
A QAM may also refer to a physical channel modulated according to such schemes.

As used herein, the term “quasi-licensed” refers without limitation to spectrum which is at least temporarily granted, shared, or allocated for use on a dynamic or variable basis, whether such spectrum is unlicensed, shared, licensed, or otherwise. Examples of quasi-licensed spectrum include without limitation CBRS, DSA, GOGEU TVWS (TV White Space), and LSA (Licensed Shared Access) spectrum.

As used herein, the term “SAE (Spectrum Allocation Entity)” refers without limitation to one or more entities or processes which are tasked with or function to allocate quasi-licensed spectrum to users. Examples of SAEs include SAS (CBRS), PMSE management entities, and LSA Controllers or Repositories.

As used herein, the term “SAS (Spectrum Access System)” refers without limitation to one or more SAS entities which may be compliant with FCC Part 96 rules and certified for such purpose, including (i) Federal SAS (FSAS), (ii) Commercial SAS (e.g., those operated by private companies or entities), and (iii) other forms of SAS.

As used herein, the term “server” refers to any computerized component, system or entity regardless of form which is adapted to provide data, files, applications, content, or other services to one or more other devices or entities on a computer network.

As used herein, the term “shared access” refers without limitation to (i) coordinated, licensed sharing such as e.g., traditional fixed link coordination in 70/80/90 GHz and the U.S. FCC's current rulemaking on potential database-coordinated sharing by fixed point-to-multipoint deployments in the C-band (3.7-4.2 GHz); (ii) opportunistic, unlicensed use of unused spectrum by frequency and location such as TV White Space and the U.S. FCC's proposal to authorize unlicensed sharing in the uplink C-band and other bands between 5925 and 7125 MHz; (iii) two-tier Licensed Shared Access (LSA) based on geographic areas and database assist such as e.g., within 3GPP LTE band 40 based on multi-year sharing contracts with tier-one incumbents; and (iv) three-tier shared access (including quasi-licensed uses) such as CBRS.

As used herein, the term “storage” refers without limitation to computer hard drives, DVR devices, memory, RAID devices or arrays, optical media (e.g., CD-ROMs, Laserdiscs, Blu-Ray, etc.), or any other devices or media capable of storing content or other information.

As used herein, the term “users” may include without limitation end users (e.g., individuals, whether subscribers of the MSO network, the MNO network, or other), the receiving and distribution equipment or infrastructure such as a CPE/FWA or CBSD, venue operators, third party service providers, or even entities within the MSO itself (e.g., a particular department, system or processing entity).

As used herein, the term “Wi-Fi” refers to, without limitation and as applicable, any of the variants of IEEE Std. 802.11 or related standards including 802.11 a/b/g/n/s/v/ac or 802.11-2012/2013, 802.11-2016, as well as Wi-Fi Direct (including inter alia, the “Wi-Fi Peer-to-Peer (P2P) Specification”, incorporated herein by reference in its entirety).
As used herein, the term “wireless” means any wireless signal, data, communication, or other interface including without limitation Wi-Fi, Bluetooth/BLE, 3G (3GPP/3GPP2), HSDPA/HSUPA, TDMA, CBRS, CDMA (e.g., IS-95A, WCDMA, etc.), FHSS, DSSS, GSM, PAN/802.15, WiMAX (802.16), 802.20, Zigbee®, Z-wave, narrowband/FDMA, OFDM, PCS/DCS, LTE/LTE-A/LTE-U/LTE-LAA, 5G NR, LoRa, IoT-NB, SigFox, analog cellular, CDPD, satellite systems, millimeter wave or microwave systems, acoustic, and infrared (i.e., IrDA).

As used herein, the term “xNB” refers to any 3GPP-compliant node including without limitation eNBs (eUTRAN) and gNBs (5G NR).

Overview

In one salient aspect, the present disclosure describes methods and apparatus for optimizing CPE antenna position, orientation and beam configuration within a power-limited system so that maximal data rates and performance can be achieved, including after installation and in high-density applications where proper antenna alignment and beam configuration are critical.

In one embodiment, the methods and apparatus utilize so-called “quasi-licensed” CBRS (Citizens Broadband Radio Service) wireless spectrum in conjunction with a controller architecture that dynamically optimizes the antenna orientation and transmit/receive beam resources in an installed fixed wireless apparatus (FWA) for optimum delivery of services to user or subscriber premises.

In one configuration, the CPE/FWA includes indigenous control logic that obtains signal and performance data via its antenna elements and radio head, and uses the data to adjust the antenna elements so as to optimize performance of the CPE/FWA (as well as aid in optimization of other nearby CPE/FWA devices in some scenarios). Extant performance or signal quality measurements resident within the underlying wireless protocols (e.g., SRS and CRS data associated with 3GPP channel quality estimates) may also be leveraged for characterizing the wireless environment and as inputs to the CPE optimization process.

The CPE's local control logic may also in some variants be supported by a network-based operations support system (OSS) disposed within the service provider's infrastructure (such as at a headend, EPC, or 5GC thereof, or even more locally within a 5G gNB CU tasked with controlling a plurality of DUs), such that a more “global” perspective can be obtained for coordination of a given CPE/FWA with others in the area than through use of the localized CPE/FWA controller itself.

As such, CPE/FWA antenna system optimization as described herein provides the capability for enhancing the performance and data rates at consumer premises, including in post-installation scenarios where base station inventory changes, more local CPE are added, and/or RF propagation paths change due to e.g., natural or man-made effects. This capability also advantageously obviates maintenance calls or “truck rolls” and other network operating expenses, and enhances customer satisfaction through reduced-latency correction of performance issues and accelerated new service velocity.

The methods and apparatus described herein can also advantageously be extended to other shared-access architectures (i.e., other than CBRS) such as for example DSA, LSA, and TVWS systems, as well as those utilizing (fully) licensed and/or unlicensed RF spectrum.

Detailed Description of Exemplary Embodiments

Exemplary embodiments of the apparatus and methods of the present disclosure are now described in detail.
While these exemplary embodiments are described in the context of the previously mentioned wireless access points (e.g., CBSDs) associated with e.g., a managed network (e.g., hybrid fiber coax (HFC) cable architecture having a multiple systems operator (MSO), digital networking capability, IP delivery capability, and a plurality of client devices), the general principles and advantages of the disclosure may be extended to other types of radio access technologies (“RATs”), networks and architectures that are configured to deliver digital data (e.g., text, images, games, software applications, video and/or audio). Such other networks or architectures may be broadband, narrowband, or otherwise, the following therefore being merely exemplary in nature.

It will also be appreciated that while described generally in the context of a network providing service to a customer or consumer or end user or subscriber (i.e., within a prescribed venue, or other type of premises), the present disclosure may be readily adapted to other types of environments including, e.g., outdoors, commercial/retail, or enterprise domain (e.g., businesses), or even governmental uses, such as those outside the prescribed “incumbent” users such as U.S. DoD and the like. Yet other applications are possible.

Also, while certain aspects are described primarily in the context of the well-known Internet Protocol (described in, inter alia, Internet Protocol DARPA Internet Program Protocol Specification, IETF RFC 791 (September 1981) and Deering et al., Internet Protocol, Version 6 (IPv6) Specification, IETF RFC 2460 (December 1998), each of which is incorporated herein by reference in its entirety), it will be appreciated that the present disclosure may utilize other types of protocols (and in fact bearer networks to include other internets and intranets) to implement the described functionality.

Moreover, while the current SAS framework is configured to allocate spectrum in the 3.5 GHz band (specifically 3,550 to 3,700 MHz), it will be appreciated by those of ordinary skill when provided the present disclosure that the methods and apparatus described herein may be configured to utilize other “quasi-licensed” or shared access systems or other spectrum, including without limitation DSA, LSA, or TVWS systems, and those above 4.0 GHz (e.g., currently proposed allocations up to 4.2 GHz, and even millimeter wave bands such as those between 24 and 100 GHz).

Additionally, while described primarily in terms of GAA106spectrum allocation (seeFIG.1), the methods and apparatus described herein may also be adapted for allocation of other “tiers” of CBRS or other unlicensed spectrum (whether in relation to GAA spectrum, or independently), including without limitation e.g., so-called Priority Access License (PAL) spectrum104.

Moreover, while described in the context of quasi-licensed or unlicensed spectrum, it will be appreciated by those of ordinary skill given the present disclosure that various of the methods and apparatus described herein may be applied to reallocation/reassignment of spectrum or bandwidth within a licensed spectrum context; e.g., for cellular voice or data bandwidth/spectrum allocation, such as in cases where a given service provider must alter its current allocation of available spectrum to users.
Further, while some aspects of the present disclosure are described in detail with respect to so-called “4G/4.5G” 3GPP Standards (aka LTE/LTE-A) and so-called 5G “New Radio” (3GPP Release 15 and TS 38.XXX Series Standards and beyond), such aspects—including allocation/use/withdrawal of CBRS spectrum—are generally access technology “agnostic” and hence may be used across different access technologies, and can be applied to, inter alia, any type of P2MP (point-to-multipoint) or MP2P (multipoint-to-point) technology, including e.g., Qualcomm MulteFire.

Other features and advantages of the present disclosure will immediately be recognized by persons of ordinary skill in the art with reference to the attached drawings and detailed description of exemplary embodiments as given below.

Antenna Optimization Architecture

FIG.5illustrates an exemplary CPE/FWA antenna optimization architecture500according to the present disclosure. As illustrated, the architecture includes an inventive CPE/FWA509(described in greater detail below) disposed at/on a premises, such as a customer house or building. The CPE/FWA509is served in this example by two base stations (xNB 1206aand xNB 2206b) having respective coverage areas505a,505b, within which the instant CPE/FWA509lies, although it will be appreciated that other numbers and/or types of base stations may be used to service the CPE/FWA509.

Moreover, it will be appreciated that instead of base stations, other CPE/FWA apparatus configured for supplementation or out-of-coverage service to the instant CPE/FWA509may be used to provide services, such as those described in co-pending U.S. patent application Ser. No. 16/738,889 filed Jan. 9, 2020 and entitled “METHODS AND APPARATUS FOR SERVICE PROVISION TO OUT-OF-COVERAGE APPARATUS IN WIRELESS SYSTEMS”, as well as U.S. patent application Ser. No. 16/676,188 filed Nov. 6, 2019 and entitled “METHODS AND APPARATUS FOR ENHANCING COVERAGE IN QUASI-LICENSED WIRELESS SYSTEMS,” each of the foregoing incorporated herein by reference in its entirety. For example, as described therein, an exemplary unlicensed or quasi-licensed CPE that is at or beyond a coverage “edge” of a given network (or is otherwise experiencing less-than-adequate signal strength for whatever reason) may be provided service via “relay” and/or supplementation of services from a better-positioned “in coverage” CPE of the same network. As such, the various radios and antenna elements (and decision logic) of the various embodiments of the present disclosure can be used to great advantage in such operational scenarios, such as to enable establishment of one or more wireless connections between respective ones of the sectorized radios and corresponding CBSDs within suitable range thereof, including pursuant to 3GPP “D2D” mechanisms.

Returning again toFIG.5, the illustrated CPE/FWA509includes a CPE device511(e.g., a gateway, DSTB, modem, or other such form factor of computerized premises device), Wi-Fi or other routers523, PoE apparatus525(such as in the architecture ofFIG.2Bdiscussed above), one or more antenna elements521, and a performance monitoring (e.g., “iPerf” or other performance assessment logic or software) agent517.
As discussed in greater detail below, in the exemplary embodiment, the iPerf agent at each inventive CPE/FWA509measures (depending on its connection status) key performance indicators (KPIs) such as data throughput (TP), latency and jitter, which are useful in assessing the needs and capabilities of each individual premises. The CPE/FWA logic may also be configured to utilize one or more signals indigenous within the underlying air interface protocols (e.g., 3GPP LTE/LTE-A or 5G NR in the exemplary configurations described herein) to assess signal quality for a given antenna element or set of elements (e.g., within a spatial diversity/MIMO group), such as SRS (sounding reference signal) for uplink (UL) signals, and CRS (cell-specific reference signal).

As a brief aside, reference signals such as SRS and CRS in LTE support various functions, including channel estimation for MIMO decoding (demodulation), determination of PMI/CQI/RI feedback, and determination of multi-user resource allocation (scheduling). In the downlink (DL), the cell-specific reference signals (CRS) are transmitted by the eNB on unique resource elements for each antenna port, and are allocated in frequency and time. Since the CRS for each antenna port are mutually orthogonal to one another, channel estimation techniques such as interpolation can be used to determine estimates for the MIMO channel. This channel estimate can be used to derive PMI, CQI and RI feedback to determine the transmission scheme, and additional CQI reports may be requested from a given UE by the eNB for the purpose of multiuser scheduling.

In the UL direction, the reference signal scheme is different from that of the downlink, since each UE must transmit its own reference signals to the eNB. Two types of uplink reference signal are utilized in 3GPP: Demodulation Reference Signals (DM-RS) and Sounding Reference Signals (SRS). DM-RS are used to support data demodulation, and are transmitted only on the resource block to which that UE is allocated. The DM-RS signals are derived from Zadoff-Chu (CAZAC) sequences, and hence channel impulse response can be estimated by cross-correlation with a copy of the transmitted signal. Sounding reference signals (SRS) support multiuser scheduling, and enable the eNB to estimate the channel quality between each UE and the eNB over the entire system bandwidth (versus on a per resource block basis). Generally, SRS are transmitted only on request by the eNB, and only within the last SC-FDMA symbol of a subframe. The sounding bandwidth is also configurable, including allowing the eNB to “trade-off” between accuracy and reference signal overhead.

Hence, the mechanisms for assessing channel quality present within for example the underlying LTE (or 5G NR) protocols may be leveraged by the logic of the CPE/FWA509in gathering information for subsequent evaluation/analysis, whether by the CPE/FWA locally, by one or more network processes, or combinations thereof.

In addition, the exemplary CPE/FWA is configured in some embodiments to measure one or more RF parameters (e.g., prior to achieving any connected state with a base station), such as RSSI, RSRP, RSRQ for each antenna element within a prescribed frequency range via its installed radios and associated RF front ends. As such, the CPE/FWA509can act somewhat as a spectrum analyzer to canvass the existing RF spectrum, such as during pre-provisioning, or even after installation/initial provisioning.
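By way of a brief illustration of the cross-correlation principle noted above (and not as a description of the patented implementation), the following Python sketch estimates a channel impulse response by correlating a received burst against a local Zadoff-Chu (CAZAC) reference copy; the sequence root and length, and the two-path channel, are illustrative assumptions only.

# Illustrative sketch: channel impulse response estimation by
# cross-correlation with a Zadoff-Chu (CAZAC) reference sequence.
# Root, length, and channel taps are assumed values for the demo.
import numpy as np

def zadoff_chu(root: int, length: int) -> np.ndarray:
    # Classic ZC definition for odd length: x[n] = exp(-j*pi*root*n*(n+1)/length)
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

def estimate_cir(received: np.ndarray, ref: np.ndarray) -> np.ndarray:
    # np.correlate conjugates the second argument; the impulse-like
    # CAZAC autocorrelation concentrates energy at the path delays.
    return np.correlate(received, ref, mode="full")[len(ref) - 1:]

ref = zadoff_chu(root=25, length=63)
# Simulated two-path channel: direct tap plus a delayed, attenuated echo.
channel = np.zeros(8, dtype=complex)
channel[0] = 1.0
channel[5] = 0.4j
rx = np.convolve(ref, channel)
cir = estimate_cir(rx, ref)
print(np.abs(cir[:8]).round(2))  # correlation peaks fall at tap delays 0 and 5

Because the CAZAC autocorrelation is impulse-like, the correlation peaks land at the relative path delays, which is the property the DM-RS based estimation described above exploits.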
It will be appreciated that the use of exemplary performance measurement (e.g., iPerf) processes at the CPE/FWA device509(and at others generally in proximity thereto; see subsequent discussion herein) advantageously allows for a very low-overhead and efficient mechanism by which to judge whether a given CPE/FWA is deficient or over-performing in terms of one or more criteria relating to e.g., its SLA. Moreover, in cases where the CPE/FWA is used for possible supplementation or relay of signals to another CPE/FWA (as referenced above), the iPerf measurements and/or other data can be used to evaluate whether that CPE/FWA can sustain provision of such services to one or more other (i.e., out of coverage or OOC) CPE/FWA.

Specifically, using a performance-based mechanism such as iPerf in the exemplary embodiments obviates more sophisticated analyses of channel conditions; rather, the net or actual user-plane performance of any given link and its associated channel conditions at any given time are readily determined and used as a basis of determining whether any changes in CPE/FWA configuration (such as changes in azimuth or elevation of a given antenna element, change to a new serving base station, utilization of a different spatial diversity or MCS scheme, use of supplementation from a second base station or another CPE/FWA in tandem, etc.) are required.

It will be appreciated, however, that SLAs for OOC premises may also be established initially at levels known to be supportable by other (primary) CPE, such as based on installation testing, or iPerf analysis of the other CPE in “worst case” conditions. For instance, if it is known that a maximum theoretical SLA for the OOC CPE (based on worst-case scenarios for all eligible primary CPE) is X Mbps in UL and Y Mbps in DL, then the SLA for that OOC CPE may purposely not be established above those values, thereby avoiding customer disappointment or frustration. If subsequently additional capacity becomes available, then the OOC device can be given “upgrades” on its SLA, whether explicitly by contract, or implicitly via added capacity when available even with no formal commitment by the MSO to do so.

FIG.6is a logical block diagram of one exemplary embodiment of antenna controller apparatus and operation thereof according to the present disclosure. In this example, the controller logic is part of the CPE/FWA509utilizing quasi-licensed frequency bands such as CBRS, and is generally closed-loop in configuration (i.e., utilizes at least some form of output—here relevant link or performance data—as a feedback input).

Within the generalized apparatus ofFIG.6, the CPE/FWA includes an RF front end601, a baseband processor603, a KPI tracker process605, including an iPerf process517integrated therein, a local database611, and a control system module520. Additionally, the CPE/FWA includes antenna elements522(seeFIG.5) that are usually installed on the rooftop or a façade of the premises, as well as an azimuth actuator608, and a tilt actuator607. The actuators can be mechanical actuators (such as e.g., mechanical assemblies driven by motors capable of precise adjustment such as stepper motors or the like), or electronic actuators (e.g., RF switches, varactors), or combinations of the foregoing. Moreover, as described in greater detail below, each of the antenna elements522is also capable of forming transmit/receive beams (seeFIG.7) at prescribed angles, whether steered mechanically and/or electronically.
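Referring to the iPerf-based SLA assessment described above, the following minimal Python sketch illustrates one way such a pass/fail determination might be structured; the KPI fields, SLA floors, and threshold values are hypothetical and not taken from this disclosure.

# Minimal sketch: gate the optimization path on measured KPIs versus
# assumed SLA floors/ceilings (all names and values are illustrative).
from dataclasses import dataclass

@dataclass
class KpiSample:
    dl_mbps: float
    ul_mbps: float
    latency_ms: float

@dataclass
class Sla:
    min_dl_mbps: float
    min_ul_mbps: float
    max_latency_ms: float

def needs_adjustment(kpi: KpiSample, sla: Sla) -> bool:
    # Any violated floor/ceiling triggers the optimization path.
    return (kpi.dl_mbps < sla.min_dl_mbps
            or kpi.ul_mbps < sla.min_ul_mbps
            or kpi.latency_ms > sla.max_latency_ms)

print(needs_adjustment(KpiSample(85.0, 9.5, 22.0), Sla(100.0, 10.0, 30.0)))  # True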
The components of CPE/FWA509shown inFIG.6may be individually or partially implemented in software, firmware or hardware. The RF front end601includes RF circuits to operate in e.g., quasi-licensed or unlicensed spectrum (e.g., CBRS GAA or PAL, NR-U, C-Band, etc.). The front-end module601converts the radio frequency signals received via the connected antenna element(s)522to baseband signals to be processed by the baseband processor603. The baseband processor603includes baseband signal processing and radio control functions, including in one variant Layer 2 functions such as media access control (MAC).

The KPI tracker process605of the illustrated embodiment collects and tracks KPI data associated with a given connection (and hence base station or serving CPE/FWA in the case of supplementation) such as throughput, latency, jitter, and error rates, using the iPerf client517. The collected information is saved in the database611, including for formation of historical profiles associated with the various base stations or other devices with which the CPE/FWA509may communicate (e.g., those within signal range). The control system module609uses the collected KPI data produced by the KPI tracker605or from the database611, and generates the control data/commands to adjust antenna azimuth and/or tilt. The actuators607and608receive the control commands from the control module609, and adjust the azimuth and tilt of the antenna element(s)522accordingly.

As shown inFIG.6, the KPI tracker process605may also be configured to provide KPI and other data (e.g., RF spectrum canvassing data obtained via the antenna elements and RF front end601) to a network process such as an OSS (operations support system) disposed within e.g., the MSO headend, a 3GPP EPC or 5GC core entity, or even within a CU (controller unit) of a 5G NR gNB used to control multiple DUs (distributed units) associated with the gNB. As discussed in greater detail subsequently herein, in some embodiments, the network process is used to supplement or even replace evaluation by onboard logic within the CPE/FWA509itself of the measured KPI or RF data in order to determine control system adjustments for the antenna. Such evaluation data (or even direct control commands for provision to the actuators607,608) generated by the network process such as an OSS is received at the control system520as shown inFIG.6.

In one variant, the logical channels established between the CPE/FWA509and the OSS are borne on underlying wireless physical channels established between the CPE/FWA and its serving xNB at any given time, and backhaul from that xNB to the EPC/5GC core or MSO headend, although other approaches may be used, including alternate “side” channels or bearers that may exist between the premises and a network node (whether within or external to the serving RAN and its associated wireless infrastructure). In that the performance data and subsequent evaluation or control data to/from the CPE/FWA509are not particularly time sensitive (e.g., adjustments are contemplated to be made on a progressive or iterative basis over a period of time in some cases), resource contention and mapping to extant channel bearers (e.g., data or control channels of the underlying 3GPP protocols, or higher layer channels) is generally avoided and not a salient issue.
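The closed-loop behavior described above can be sketched as follows in Python; the measure_tp() probe is a hypothetical stand-in for an iPerf-style measurement, and the simple hill-climbing adjustment is merely one illustrative control policy, not the patented control system.

# Illustrative closed-loop step: nudge azimuth/tilt in whichever
# direction improves the measured throughput (all values assumed).
import random

def measure_tp(az: float, tilt: float) -> float:
    # Stand-in for an iPerf-style probe; synthetic peak at (az=12, tilt=3).
    return 100.0 - (az - 12.0) ** 2 - (tilt - 3.0) ** 2 + random.uniform(-0.5, 0.5)

def control_step(az, tilt, step=1.0):
    # Evaluate the current position and four neighboring positions,
    # then move to whichever measured best (simple hill climbing).
    best = (az, tilt, measure_tp(az, tilt))
    for daz, dtilt in [(step, 0), (-step, 0), (0, step), (0, -step)]:
        tp = measure_tp(az + daz, tilt + dtilt)
        if tp > best[2]:
            best = (az + daz, tilt + dtilt, tp)
    return best[:2]

az, tilt = 0.0, 0.0
for _ in range(30):
    az, tilt = control_step(az, tilt)
print(round(az), round(tilt))  # typically converges near the synthetic peak (12, 3)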
Moreover, in some variants, the CPE/FWA tracker process and control system logic is configured so as to start with, or “fall back” on, purely local or indigenous data and evaluation conducted by the CPE/FWA509itself, including for instance in cases of (i) initial pre-provisioning such as for coarse initial adjustments; (ii) during post-provisioning of the device509, such as when requisite SLA levels are not being met and no network/OSS assist is available; or (iii) where there is only a single or limited number of base stations eligible to serve the CPE/FWA, and no significant other interferers affecting the CPE/FWA being optimized, thereby reducing the evaluation and analysis to a much simpler problem (e.g., where a sufficiently close adjustment generated by the CPE/FWA itself is sufficient to meet/exceed SLA requirements since the prevailing interference level is low and signal strength from a serving base station is good).

It will also be appreciated that while a single antenna element522is shown inFIG.6for purposes of clarity, various different configurations are contemplated herein, such as, e.g.: (i) one tilt actuator and one azimuth actuator per single (discrete) antenna element within an array of multiple antenna elements; (ii) one tilt and one azimuth actuator per two or more “ganged” antenna elements (e.g., two or more antenna elements juxtaposed so as to form a common mechanical assembly); and (iii) one tilt actuator and one azimuth actuator for a single omnidirectional antenna element. Moreover, the present disclosure contemplates changes in azimuth of the antenna array as a unit; e.g., by rotation of the array around a vertical or other central axis of the array.

FIG.7is a composite view of one embodiment of the articulated antenna apparatus of the disclosure, illustrating top and side views, as well as transmit/receive beam configuration and parameters related thereto. As shown, this embodiment of the CPE/FWA antenna array521includes a plurality (e.g., 6 or 8) of individual elongate antenna elements522disposed in a generally radial fashion around a central axis (Z). In this embodiment, each element of the array521is individually controllable within a prescribed range of azimuth (φ) and elevation or tilt (θ) angles, such as +/−10 degrees, by corresponding electro-mechanical/electrical azimuth and tilt actuators607,608of the type previously described. As such, each antenna element522can be individually positioned relative to others in the array521so as to e.g., maximize one or more desired parameters such as throughput, SINR, etc.

In general, each transmit/receive beam710generated by a given antenna element522is oriented in a direction orthogonal to the plane of the antenna element face as shown; however, this is merely one configuration, and the present disclosure contemplates implementations where this is not the case, including electronic generation of beams via two or more discrete elements (including sub-elements of a given antenna element522, not shown, or by two or more different antenna elements). Beam dispersion can also be adjusted via e.g., electronic means or use of narrow dispersion antenna elements; this approach has the advantage of reducing unwanted overlap or interference with other antenna elements of the same CPE/FWA, as well as other CPE/FWA that may be operating in the area.
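As a simple illustration of the per-element actuation envelope described above (the +/−10 degree range being the example given), the following Python sketch clamps commanded offsets before actuation; the class and field names are hypothetical.

# Illustrative sketch: clamp commanded azimuth/tilt offsets to the
# assumed +/-10 degree envelope about each element's mounting boresight.
from dataclasses import dataclass

@dataclass
class ElementPosition:
    boresight_az: float    # fixed mounting azimuth, degrees
    boresight_tilt: float  # fixed mounting tilt, degrees
    max_offset: float = 10.0

    def command(self, az_offset: float, tilt_offset: float) -> tuple[float, float]:
        # Out-of-envelope requests are clamped rather than rejected.
        clamp = lambda v: max(-self.max_offset, min(self.max_offset, v))
        return (self.boresight_az + clamp(az_offset),
                self.boresight_tilt + clamp(tilt_offset))

elem = ElementPosition(boresight_az=150.0, boresight_tilt=0.0)
print(elem.command(14.0, -3.0))  # (160.0, -3.0): azimuth request clamped to +10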
This narrow-dispersion capability is enabled in large part by the post-installation adjustment capability of the inventive CPE/FWA509; under prior art paradigms, not only would precise alignment of such narrow dispersion beams with a serving base station be required at initial installation in order to obtain sufficient channel quality, but such installations would also be very unforgiving in terms of subsequent (post installation) variations in position, changes in RF propagation paths due to man-made or other sources, removal or deactivation of the serving base station, etc. In contrast, the inventive CPE/FWA509may, whether autonomously or with network assistance, dynamically reposition itself under such scenarios to re-acquire the serving base station (or establish connection with a new one), all without need for service personnel intervention at the premises.

The foregoing combination of narrow beam dispersion and dynamic adjustment capability also cooperates to enable, inter alia, higher CPE and customer density within a given geographic area. Specifically, interference levels generated by each antenna element of the CPE/FWA509for neighboring elements (and neighboring CPE) are reduced due to narrow dispersion (i.e., the beams can be very precisely pointed in a desired direction and maintained that way throughout the installation lifetime of the CPE/FWA), and hence more CPE can be packed into a neighborhood, city, region, etc. without exceeding requisite interference levels for each operating CPE. As a coarse analogy, many more conversations can co-exist in a finite room full of people when each is whispering into another's ear, as opposed to trying to shout over the prevailing din. As such, radiated RF energy (as measured by e.g., EIRP) from each antenna element can be reduced without sacrificing channel quality or throughput as compared to systems with less precise/broader dispersion transmit/receive beams.

As discussed in greater detail below, the foregoing advantage can also be leveraged by the network at a higher level of abstraction; by utilizing narrow beam widths and maintaining precise alignment over time for each antenna element in use, and replicating such functionality across all managed CPE/FWA within a given area, the network operator (process) can maximize throughput across the managed CPE/FWA of its customers, whether on an individual or statistical basis.

FIG.8illustrates an exemplary implementation of a CPE (e.g., FWA or other device)509configured according to the present disclosure. As shown, the CPE includes, inter alia, a CPU processor apparatus or subsystem845, a program memory module850, mass storage848(including a database with iPerf and RF data relating to various detected CBSDs or other entities proximate to the CPE/FWA509), CPE controller logic module520, one or more front end wireless network interfaces601for communication with e.g., CBSD/xNB, DP (if any), the MSO network and RAN829, as well as one or more back end interfaces859such as for establishment of a WLAN AP within the served premises, Gigabit Ethernet or other LAN connectivity, support of home or premises gateways, DSTBs, UEs, etc. within the premises, and for communicating with e.g., local equipment such as test/configuration devices or terminals.

At a high level, the CPE/FWA509includes two (2) sub-elements; i.e., an outdoor portion513, and an indoor or processing portion511.
The outdoor portion513in the exemplary embodiment includes one or more antenna tilt and azimuth actuators607,608(seeFIG.6), as well as RF front end components necessary for receipt and processing of the RF signals, including logic to determine radio path parameters of interest such as amplitude/RSSI, phase, and timing. As indicated by its name, the CPE outdoor module or radio head513is typically disposed on a premises structure (e.g., rooftop, tower, utility pole, etc.) outdoors so as to minimize intervening interfering structures and RF signal attenuation as much as possible.

The indoor unit511is in communication with the outdoor unit via e.g., interposed coaxial cable or other medium, and includes logic responsible for detecting and demodulating the received RF signals from different paths (received via e.g., different ones of the antenna elements522) and combining them into one logical data stream (and converting to an appropriate protocol for distribution within the premises, such as IEEE Std. 802.3 Ethernet packets). Combination of the received constituent signals (e.g., user data accessed via the assigned TDD slots and carrier(s) and beams) is accomplished in one embodiment via stream, CBSD/xNB and beam ID data (i.e., each stream of data from a different beam from a different contributing CBSD/xNB206will have unique ID data that can be used to temporally reconstruct the packet data associated with that stream in proper order and relation).

In the exemplary embodiment, the processor845may include one or more of a digital signal processor, microprocessor, field-programmable gate array, GPU, or plurality of processing components mounted on one or more substrates. The processor may also comprise an internal cache memory, and is in communication with a memory subsystem850, which can comprise, e.g., SRAM, flash and/or SDRAM components. The memory subsystem may implement one or more DMA-type hardware mechanisms, so as to facilitate data accesses as is well known in the art. The memory subsystem of the exemplary embodiment contains computer-executable instructions which are executable by the processor845.

The processor845is configured to execute at least one computer program stored in memory850(e.g., a non-transitory computer readable storage medium); in the illustrated embodiment, such programs include logic to implement the KPI tracker functions, and radio path controller logic (RPC)866. Other embodiments may implement such functionality within dedicated hardware, logic, and/or specialized co-processors (not shown).

The CBRS stack of the CPE509is implemented and controlled via the RPC controller process (logic)866of the CPE such that CBSD/xNB-to-CPE communication protocols are used to enable the RF detection and reporting, and scheduling/asset assignment data receipt functionality previously described, including CPE functions such as (i) generation and transmission of periodic, on-demand or ad hoc RF detection reports; and (ii) receipt of network controller-generated TDD slot, carrier, and CBSD/xNB and wireless beam assignments. The logic866may also manage other aspects of CPE/FWA operation, including “intelligent” monitoring and storage of data for use in e.g., historical characterizations of the various CBSD/xNB in radio range of the CPE/FWA in terms of signal strength, signal stability, azimuth, receive beam configuration, cell or base station identifiers, and the like. Management of SRS and CRS data obtained by the CPE/FWA509is also performed in one embodiment by the RPC logic866.
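The stream-combination function of the indoor unit511described above might be sketched as follows; the packet tuple layout (stream ID, beam ID, sequence number, payload) is an assumed illustrative format, not the actual frame structure.

# Illustrative sketch: reassemble one logical data stream from packets
# received over multiple beams/CBSDs by ordering on per-stream sequence
# numbers (field layout is assumed for the demo).
from collections import defaultdict

def reassemble(packets):
    # Each packet: (stream_id, beam_id, seq, payload); beam_id is retained
    # only to show provenance, since ordering is by (stream_id, seq).
    streams = defaultdict(list)
    for stream_id, beam_id, seq, payload in packets:
        streams[stream_id].append((seq, payload))
    return {sid: b"".join(p for _, p in sorted(chunks))
            for sid, chunks in streams.items()}

pkts = [("s1", "beamA", 2, b"lo"), ("s1", "beamB", 1, b"hel"), ("s1", "beamA", 3, b"!")]
print(reassemble(pkts)["s1"])  # b'hello!'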
The KPI tracker logic605and iPerf logic517, and control system logic520enable measuring and storing the KPI data and other data (e.g., RF parametric data) in the database, tracking the received signal from several base stations (or supplementing FWA), and selecting the best serving base station/FWA as previously described, including generation of the control commands for adjusting antenna azimuth and tilt in order to optimize channel or link performance and mitigate interference.

The controller logic520also includes an antenna system interface (ASI) which is a physical and logical control interface for the tilt actuator607and azimuth actuator608of the external portion513of the CPE509. In one implementation, this interface uses a signaling protocol of the type known to those of ordinary skill in the control system arts to (i) provide data representing commands for actuation of the actuators to a desired position or state (depending on whether electro-mechanical or electronic), as well as (ii) data indicative of actual position of the affected antenna element(s) so as to determine actual versus commanded position (e.g., from a position sensor, limit switch, or other such mechanism of the antenna array apparatus521). This interface can advantageously be implemented using comparatively low complexity and bandwidth technologies and protocols due to its low overhead; “feedback” for the closed-loop control system (FIG.6) is obtained via the performance monitoring process (e.g., iPerf517) or via analysis of RF data, each obtained via the RF front end601by the baseband processor603of the CPE509, thereby obviating any high-bandwidth data flow over the ASI.

Also shown in the embodiment ofFIG.8is a network-based process (OSS802) which, as described elsewhere herein, is in logical communication with the CPE/FWA509in order to support network-assisted radio path evaluation and control of the antenna element(s) of each of a plurality of CPE509under its cognizance. While shown as part of the EPC/5GC, the network OSS may be included within an MSO headend or other node, including within a 5G gNB CU. The OSS802may also be logically distributed in nature, such as where the OSS control functions of multiple RAN under control of the MSO or other operator are logically communicative with one another so as to optimize operation of the broader network operator infrastructure, such as load balancing, re-routing of service in the event of equipment failure, maintenance outages, natural disasters, etc. For instance, one RAN experiencing a blackout or other loss of service may have its served customers that are within range of another RAN (not shown) switch over to that second RAN based on control data resident within those CPE/FWA or transmitted to the CPE by the secondary RAN's OSS.

FIG.8Aillustrates an alternate embodiment of the external portion513of the CPE/FWA apparatus509of the disclosure, wherein an array of individual radio front end elements601and associated actuators607,608support each of a plurality of antenna elements522, the latter which are adjustable in azimuth and elevation by the respective actuators. This embodiment utilizes a plurality of configurable logic blocks (CLBs)855in support of the RF and iPerf measurements needed for the control system, and the control system logic520itself may be supported within one or more CLBs of the FPGA. Exemplary implementation details for the embodiment ofFIG.8Aare described in co-pending U.S. patent application Ser. No. 16/741,509 filed Jan.
13, 2020 and entitled “METHODS AND APPARATUS FOR RADIO CONFIGURATION IN A WIRELESS SYSTEM,” previously incorporated herein by reference in its entirety.

Methods

Referring now toFIG.9, one embodiment of a method for pre-provisioning a CPE/FWA antenna apparatus according to the present disclosure is shown and described.

Per step902of the method900, pre-provisioning characterization of the wireless environment or conditions for a given target CPE/FWA509is initiated. For instance, during installation of the CPE/FWA at a premises, the installer or a remote entity (e.g., OSS802) may invoke the CPE/FWA to conduct RF spectrum analysis and parameter scans. To the degree that one or more base stations (e.g., CBSDs) are active and can be connected to by the CPE509, such connections may also be utilized for gathering link performance data such as via the iPerf module517as previously described.

In one variant, this initial characterization may include iteratively repositioning each antenna element522of the array521at a prescribed azimuth and tilt value specified by a training or characterization plan stored as data within the CPE memory, such that a series of desired measurements are taken for each position. As such, the CPE509can generate a “heat map” of sorts for the parameters in question, such as SINR, RSSI, RSRP, throughput, latency, jitter, etc. (depending on whether or not a data connection has been established). Moreover, this heat map can be used to evaluate sensitivity for different position adjustments (or other changes, such as MCS, carrier band, FEC type, spatial diversity settings, etc.); e.g., how sensitive the channel quality and performance are to movement of say 1 degree in azimuth for a given element522. Note that while the elements of the array521are spatially co-located (each within a foot or two of each other on the rooftop/façade), they may have significantly differing properties due to e.g., azimuth, multipath propagation, presence of nearby components of the building, etc.

It will also be appreciated that the foregoing characterization may be conducted with different ones of the antenna elements522in different positions relative to a “DUT” element. That is, the performance of antenna element n in the same azimuth and elevation positions may differ depending on the orientation of other elements522of the same array, and as such a complete set of data covering all possible combinations of element positions would ideally identify cases where mutual interference or other phenomena existing between two or more antenna elements are present. This also includes aggregated subsets of the antenna elements522, such as where a pair of antenna elements used for a common spatial diversity configuration (2×MIMO) are jointly evaluated as a set, either holding all other elements constant, or conversely holding the evaluated subset constant and varying one or more other elements522of the array. However, it will be appreciated that depending on the number of antenna elements, the increments of adjustment for elevation and azimuth, the carrier frequencies tested, the beam dispersion, and other such factors, it may be unduly burdensome to perform such a complete characterization.
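The training-plan sweep and resulting “heat map” described above can be illustrated with the following Python sketch; the grid increments and the sample_metrics() stub are assumptions, and a real sweep would command the actuators and sample RSSI/SINR or iPerf KPIs at each position. In practice the grid would typically be pruned, e.g., to azimuths near known CBSD bearings, per the abbreviated plans discussed below.

# Illustrative characterization sweep: iterate a training plan of
# azimuth/tilt points and record per-position measurements (the
# measurement stub and grid steps are assumed values).
import itertools
import random

def sample_metrics(az, tilt):
    # Stand-in for RSSI/SINR/iPerf sampling at a given orientation.
    return {"rssi_dbm": -70 + random.uniform(-5, 5)}

def characterize(az_grid, tilt_grid):
    heat_map = {}
    for az, tilt in itertools.product(az_grid, tilt_grid):
        # On real hardware, an actuator positioning call would go here.
        heat_map[(az, tilt)] = sample_metrics(az, tilt)
    return heat_map

hm = characterize(az_grid=range(0, 360, 30), tilt_grid=range(-10, 11, 5))
print(len(hm))  # 12 azimuths x 5 tilts = 60 sampled positions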
As such, the present disclosure contemplates use of an abbreviated or lower-complexity characterization plan for use on (at least) pre-provisioning, which has been “intelligently” constructed based on tenable assumptions regarding the performance of each antenna element, including MATLAB simulations or similar channel and antenna performance models. As one simple example, if a narrow beam dispersion for each antenna element exists (whether by physical design or selection of one or more operating parameters), then cycling each individual element522through its entire range of azimuth may be unnecessary, especially where such beam dispersion has been considered as part of the initial design of the array521. Likewise, if certain carrier frequencies are unlikely to be utilized during any operation of the array (e.g., are precluded from use, such as by a SAS), then characterization of the array at those frequencies may be obviated.

The training plan referenced above may also be constructed based on known CBSD/xNB locations relative to the premises. For instance, where two CBSDs206are known to exist at azimuths corresponding to 0-degrees and 150-degrees relative to the CPE509, respectively, then those two azimuths can be selected as the initial (or even sole) bases of characterization; e.g., the training plan may iterate around those azimuth values, yet ignore others where no CBSDs are expected to exist. It will be appreciated, however, that in some cases, CBSDs may be added or removed (seeFIG.4), and hence the training plan for a given CPE might periodically undergo “update” searches, especially when changes in CBSD installations and other nearby CPE have occurred.

Returning toFIG.9, per step904, the obtained data from step902is stored, such as within the local database of the CPE/FWA509(seeFIG.8), and/or at a network storage location such as one associated with or designated by the OSS802.

Per step906, the data is evaluated, and an optimized antenna configuration identified. For example, in one approach, the data is evaluated locally by logic on the CPE itself (e.g., within the KPI tracker module605or RPC logic866), and a putative initial setting for azimuth and elevation for each controlled element522identified based on e.g., algorithmic analysis of the aforementioned heat map data (e.g., by looking for local maxima within one or more parameters such as iPerf throughput, and correlating those maxima to the azimuth/elevation settings in effect when the data was obtained).

Per step908, the data (including the raw stored data, and optionally the evaluated data from step906) is transmitted to the connected base station, whether while the characterization is ongoing (e.g., “streaming” data as it is generated), or at some later time, including after the characterization is completed. This data may be used by the recipient base station (e.g., such as by a CU of a gNB), and/or passed towards the EPC/5GC or MSO headend for use by e.g., the OSS802.

FIG.9Ais a logical flow diagram of an exemplary implementation of the method for pre-provisioning ofFIG.9, wherein network assistance is utilized. As shown, step922of the method920includes characterization of the CPE/FWA environment as in the method900ofFIG.9. Gathered data is stored per step924, and an initial antenna array configuration selected per step926.
This initial selection may be e.g., a coarse first estimate based on e.g., completion of a partial training plan or characterization, so as to enable reduced latency in getting the CPE/FWA509operational (albeit not fully optimized).

Per step928, the data is transmitted to the base station(s) connected to the CPE and ultimately the OSS802, where further optimization is conducted per step930. For example, in one variant, the BS/OSS may take the data transmitted from the CPE509(which represents for instance a partial or limited scope characterization, and/or a limited scope data set derived from the characterization), and in effect “pick up” where the CPE509evaluation left off by conducting more detailed algorithmic analysis, including in light of other data it may possess relating to e.g., other CPE which could impact the instant CPE/FWA509during operation. Alternatively, the OSS or BS may obtain a complete data set from the CPE509, such as in the case where the CPE conducted a full (or comprehensive) data collection, but merely did not perform (or lacks capability to perform) the requested degree of analysis in order to generate a meaningful optimization, including by lack of visibility into other nearby CPE data, or lack of sufficient processing capability/algorithms.

Per step932, the OSS/BS transmits further optimization-related data (which may be in the form of additional raw data which the CPE itself can utilize for further optimization/refinement, or alternatively fully processed or “end-result” data such as parameters or even commands to be used by the CPE controller logic520in repositioning all or portions of the array521, selecting proper parameters, or other aspects). The received data is then used by the CPE509to implement the optimized configuration as determined by the network assistance of the BS or OSS802.

FIG.10is a logical flow diagram of another exemplary embodiment of a method for pre-provisioning a CPE/FWA antenna apparatus according to the present disclosure, wherein multiple base stations are available to the CPE/FWA. In this embodiment, the CPE/FWA509has no a priori knowledge of base stations in its area.

Per step1002of the method1000, a characterization of the environment of the CPE509is conducted, including at least some measurements taken from all of the available antenna elements522of the CPE array521, whether contemporaneously or individually (or in subsets). Since the CPE has no knowledge of CBSDs/xNBs nearby, it must first gather sufficient data to identify or localize putative azimuths and elevations which may be associated with serving CBSDs/xNBs.

The obtained data is stored per step1004, and evaluated per step1006to identify M likely base stations in terms of azimuth and elevation. This may be conducted for instance by evaluating RF spectrum data such as peaks or local maxima within the gathered heat map data. For example, a local maximum in antenna gain, RSSI or SINR at 30 degrees relative azimuth would be considered a possible xNB location.

Per step1008, for each of the M identified base stations of step1006, one or more antenna elements522of the array are correlated to the base station. For instance, one simple approach is to correlate the element with the highest SINR or RSSI (or the two highest elements) with a given base station. More sophisticated approaches may be used as well.

Next, per step1010, the CPE509attempts to establish a connection with each of the M base stations, such as according to 3GPP protocols to establish RRC_Connected status.
If connection occurs per step1012, then per step1016, connection-enabled parameter data (such as throughput, latency, jitter, etc.) is obtained and stored for the connected xNB, and the process iterates per step1018to increment the counter for M (step1014) and perform steps1008and subsequent for the next xNB detected (if any). If a connection cannot be established, this data is logged and the counter is incremented per step1014.

FIG.11is a logical flow diagram of an exemplary embodiment of a method for post-provisioning a CPE/FWA antenna apparatus according to the present disclosure. In this method1100, the CPE/FWA509has already been installed and is operational (in contrast with pre-provisioning discussed above).

Per step1102, a post-provisioning scan is initiated for the target CPE/FWA509to assess its wireless performance. This step may be initiated according to a prescribed periodicity, a schedule, upon one or more parameters falling below prescribed threshold values, upon instigation by a network process (e.g., based on an OSS-issued command), or even by the customers themselves, such as via a diagnostic menu which the customer may follow when they perceive that their service is in some aspect deficient. For instance, in one variant, data rate/throughput is assessed via the installed iPerf process517of the CPE based on e.g., test data transmitted to/from the CPE as measured by the iPerf process.

Per step1104, the scan data is evaluated against one or more prescribed criteria, such as historical data (e.g., has performance degraded significantly as compared to one or more historical periods for the same CPE?), and/or prevailing SLA requirements (if applicable), such as for UL/DL data rates.

Per step1106, if the assessed performance is not sub-optimal, operation continues per step1108. Conversely, if performance is deemed sub-optimal (e.g., below SLA, or even if above SLA, on a downward trajectory or otherwise less than what would be deemed “normal” for that particular installation based on historicals), then the historical/stored data for that CPE/FWA509is evaluated in greater detail per step1110to attempt to identify one or more other configuration options which may enhance performance. Such options may include for instance: (i) selection of a different azimuth/tilt configuration for one or more antenna elements522of the array which may enhance the signal from the same base station; (ii) selection of additional spatial diversity, MCS, or other configurations which may enhance performance; (iii) selection of different ones of the antenna array elements522which may enhance connectivity with the same (existing) base station; (iv) selection of different antenna elements and/or azimuth and tilt values which may enable connection to a higher SINR base station available to the CPE509; and/or (v) selection of one or more supplementing CPE/FWA apparatus available to the instant apparatus509from which additional capacity can be obtained.

FIG.11Ais a logical flow diagram of an exemplary implementation of the method for post-provisioning ofFIG.11, wherein network assistance (including evaluation of impacts on other CPE) is utilized. In this method1130, the CPE/FWA509first initiates post-provisioning performance analysis on its connection(s) per step1132. If per step1134the current performance level is not sub-optimal, operation continues per step1138.
If performance is sub-optimal, however, then the gathered performance data from step1132(and any other ancillary data which may have been collected by the CPE previously, such as SRS/CRS data, RF spectrum scan data, etc.) is transmitted to the connected xNB per step1136, and the cognizant network process (e.g., OSS802) utilizes the transmitted data for further evaluation of the target CPE509per step1140.

Per step1142, other potentially impacted CPE/FWA devices are identified; for instance, those within a prescribed radius or geographical proximity, those within LOS between the CPE and a base station, etc. This identification may be through access to an extant database maintained by the network operator (e.g., MSO), or alternatively may be derived through analysis of actual scan/throughput data obtained from the target CPE (and others), such as upon a request from the OSS802issued to each CPE.

Per step1144, the network (e.g., OSS) generates an adjustment plan, which may include adjustment of one or more configurations of the target CPE/FWA509, as well as those identified per step1142as potentially being impacted by (or impacting) the target CPE/FWA adjustments. As a simple example, two adjacent CPE within a neighborhood may be so close that each generates interference for the other when operating and connected to a prescribed base station (even with extant mechanisms for interference reduction and multiple access such as spatial diversity, time-frequency resource allocation diversity, etc., since they physically occupy a very narrow azimuth as seen by the serving BS). As such, one of the CPE may simply be reconfigured to select an alternate base station at a different relative azimuth (and possibly different elevation), thereby alleviating the problem. Based on pre-provisioning scan data as described previously herein, the reconfiguration of the selected CPE by the OSS can be performed almost seamlessly, since the CPE being reconfigured has already characterized its environment including other possible CBSDs/xNBs, thereby obviating the need for a service technician to come out and manually re-adjust the CPE configuration.

Per step1146, the relevant portions of the adjustment plan from step1144are transmitted to the affected CPE/FWA device(s), and after implementation thereof, performance is again measured per step1148. If satisfactory, operation continues per step1138. If sub-optimal, then a counter (N) is incremented, and the process returns to step1144, wherein a new iteration or increment of the adjustment plan is generated by the OSS802or proxy node, including based on the data obtained from step1148(e.g., whether there was any noted improvement or degradation after the first adjustment was entered, or other parameter changes of interest in developing the (updated) adjustment plan).

It will be readily apparent from the foregoing that development and implementation of such adjustment plans may be carried out incrementally or iteratively across all or a subset of a population of CPE/FWA under control of a given OSS. For example, one model used by the OSS logic may “perturb” one CPE/FWA within the population, and then obtain data from others to assess the impact, with subsequent perturbations being applied based on the assessed impact. Alternatively, the entire plan may be developed at once (e.g., based on available data and modeling), implemented en masse, and the results evaluated based on performance data from various of the affected CPE.
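One possible (purely illustrative) rendering of the “perturb and assess” model described above is the following Python sketch, in which all helper functions are hypothetical stand-ins for OSS-side logic:

# Illustrative sketch: apply a small adjustment to one CPE, collect
# KPIs from potentially impacted neighbors, and keep or roll back the
# change (tolerance value and helpers are assumptions).
def perturb_and_assess(target, neighbors, adjustment, measure, apply, rollback):
    baseline = {cpe: measure(cpe) for cpe in [target] + neighbors}
    apply(target, adjustment)
    after = {cpe: measure(cpe) for cpe in [target] + neighbors}
    # Keep the change only if the target improved and no neighbor regressed
    # beyond an assumed 1 Mbps tolerance.
    target_gain = after[target] - baseline[target]
    worst_neighbor = min((after[n] - baseline[n] for n in neighbors), default=0.0)
    if target_gain > 0 and worst_neighbor >= -1.0:
        return True
    rollback(target, adjustment)
    return False

# Toy demo with throughput values in Mbps.
tp = {"cpe1": 50.0, "cpe2": 60.0}
def measure(c): return tp[c]
def apply_adj(c, adj): tp[c] += adj
def rollback_adj(c, adj): tp[c] -= adj
print(perturb_and_assess("cpe1", ["cpe2"], 5.0, measure, apply_adj, rollback_adj))  # True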
Generally speaking, an incremental approach is preferred in most scenarios, since the likelihood of significant degradation to one or more customers' service quality is minimal through only small/limited changes to a given CPE.

FIG.12is a logical flow diagram of an exemplary embodiment of a method for post-provisioning a CPE/FWA antenna apparatus within a CBRS network with a SAS, CBSD/xNBs206, network OSS802, and CPE/FWA509as previously described.

At step1203of the method1200, performance and RF data is measured by the CPE509, e.g., via iPerf517, which measures the KPIs (e.g., throughput, latency, jitter, error rates and various network performance related KPIs). Next, per step1205, the iPerf process compares the KPI data with one or more thresholds, which thresholds may be directed by the network and varied dynamically in some embodiments (e.g., to various values above the minimum prevailing SLA for the CPE509). If the measured KPI data indicates performance greater than the prescribed threshold, the required system performance is met, and the KPIs are stored in the database and a wait state is entered (step1206). If the measured KPI data indicates performance less than the threshold(s), the method proceeds to step1207, wherein the data is sent to the OSS802, whereby at step1209, the OSS selects self-optimization for the CPE in order to generate control data to adjust antenna azimuth and tilt.

Next, per step1213, the CPE evaluates the current and historical data (or subsets thereof) of its database to generate control data (via the controller520). The results of the evaluation are sent from the KPI tracker process605to the control system module520, which generates the control signals for adjusting antenna azimuth and tilt based on the data received from the KPI tracker. The antenna azimuth and tilt are adjusted by the actuators607and608per step1215. After the adjustment, new measured (performance and RF) data are stored in the database, per step1217.

Per step1219, the new performance data is evaluated against the then-prevailing threshold value(s), and if less, the process iterates to generate new control data based on a subsequent evaluation of the locally stored historical (and newly generated current) data by the CPE. Finally, per step1221, when the CPE509has converged on a suitable configuration (i.e., performance per step1219is satisfactory), the new azimuth and tilt are reported to the network via the connected CBSD/xNB (and ultimately the SAS), and the method proceeds again to step1203for further monitoring.

As discussed above, the CBSD/xNB(s) may interface with the host SAS directly, or via one or more interposed entities such as computerized domain proxy (DP) entities208. For the purposes of illustration, it will be assumed that each of the registering CBSD/xNBs is/are associated with a common network operator (NO) domain, although this is not a requirement for practicing the method1200.

It will also be recognized that the level of reporting made to the network regarding antenna and/or radio configurations and changes thereto may be (i) varied in scope, and (ii) made on either a final or intermittent basis. For example, in the exemplary context of a CBRS network, the cognizant SAS may require certain data regarding the antenna configuration, such as transmit power, azimuth, elevation, etc.
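The tailoring of report contents suggested by the foregoing (and elaborated below) might be structured as in the following sketch; the field names and the set of SAS-required fields are assumptions for illustration only.

# Illustrative sketch: strip operator-internal fields (e.g., iPerf KPIs)
# from the SAS report while forwarding the fuller set to the OSS.
FULL_REPORT = {
    "tx_power_dbm": 23.0, "azimuth_deg": 150.0, "elevation_deg": 3.0,
    "iperf_dl_mbps": 87.2, "iperf_latency_ms": 21.4,
}
SAS_FIELDS = {"tx_power_dbm", "azimuth_deg", "elevation_deg"}  # assumed set

sas_report = {k: v for k, v in FULL_REPORT.items() if k in SAS_FIELDS}
oss_report = dict(FULL_REPORT)  # the OSS receives the superset
print(sorted(sas_report))  # ['azimuth_deg', 'elevation_deg', 'tx_power_dbm']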
Given these reporting requirements, the data set sent from the CPE 509 (or a network node further upstream, including for instance a 5G NR CU) may be tailored so as to provide the OSS and/or SAS with the requisite data alone, or include supplementary data which may not be required but which may assist the OSS (or SAS) in further analysis of the operating environment of the particular CPE. For instance, the OSS may utilize the collected iPerf data sent from the CPE 509, but strip such data from reporting to the SAS. Alternatively, the CPE 509 or a proxy node therefor (such as the xNB CU) may report two different data sets; e.g., one to the SAS, and a different one to the OSS, the latter with additional data which is of use to the OSS in characterizing the particular environment of the CPE 509. Moreover, while FIG. 12 illustrates one exemplary configuration where the final antenna configuration is reported to the network (and SAS), incremental position changes may also be reported upstream, especially to the OSS, for instance to provide a more comprehensive data set on antenna element/CPE response as a function of the inserted control commands or adjustments (as opposed to merely "start and finish" data). FIG. 12A is a logical flow diagram of an exemplary implementation of the method for post-provisioning of FIG. 12, wherein OSS assistance and updates are utilized. In this variant, the CPE/FWA 509 logic is configured to default to OSS-assisted operation based on, e.g., firmware configuration, prior mode selection by the network/OSS, lack of indigenous processing capability, etc. It will be appreciated, however, that the present disclosure contemplates CPE configurations and operating modes wherein both network-assisted and CPE/FWA local processing (such as in FIG. 12) are utilized, whether in tandem or sequentially. For instance, the CPE/FWA may be configured to make a first attempt at assessment/optimization, and only where that effort falls short of the desired performance level invoke the network-assisted approach. At step 1233 of the method 1230 of FIG. 12A, performance and RF data is measured by the CPE 509, e.g., via iPerf 517, which measures the KPIs (e.g., throughput, latency, jitter, error rates and various other network performance-related KPIs). The obtained data is also reported to the OSS directly per step 1233. Next, per step 1235, the iPerf process compares the KPI (or other) data with one or more thresholds, which may be directed by the network and varied dynamically in some embodiments (e.g., set to various values above the minimum prevailing SLA for the CPE 509). If the measured KPI data indicates performance greater than the prescribed threshold, the required system performance is met, the KPIs are stored in the database, and a wait state is entered (step 1237). If the measured KPI data indicates performance less than the threshold(s), the method proceeds to step 1239, wherein the data sent to the OSS 802 is used as a basis for OSS selection of network-assisted (OSS-controlled) operation for the CPE in order to generate control data to adjust antenna azimuth and tilt. For instance, in one variant, the OSS may look at the level of deficiency of the performance/RF data, and determine therefrom that the particular CPE 509 requires a greater degree of intervention than can be provided through CPE-based (local) assessment and adjustment alone, including cases where other CPE proximate to the target CPE 509 must also be contemporaneously adjusted in order to achieve the desired performance/SLA.
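The two-payload reporting described above (one data set for the SAS, a richer one for the OSS) can be pictured with the following minimal Python sketch; the field names and the split between SAS-required and OSS-supplementary data are illustrative assumptions only.

def build_reports(cpe_state: dict) -> tuple:
    # Fields the spectrum manager (SAS) is assumed to require.
    sas_fields = ("cpe_id", "tx_power_dbm", "azimuth_deg", "elevation_deg")
    sas_report = {k: cpe_state[k] for k in sas_fields}
    # The OSS report keeps everything, including the iPerf KPI data that is
    # stripped from the SAS report per the behavior described above.
    oss_report = dict(cpe_state)
    return sas_report, oss_report

cpe_state = {
    "cpe_id": "cpe-A",
    "tx_power_dbm": 23.0,
    "azimuth_deg": 135.0,
    "elevation_deg": 5.0,
    "iperf_throughput_mbps": 87.4,  # supplementary: OSS only
    "iperf_latency_ms": 21.3,       # supplementary: OSS only
}
sas, oss = build_reports(cpe_state)
print("to SAS:", sas)
print("to OSS:", oss)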
Next, per step 1243, the OSS evaluates the current and historical data (or subsets thereof) of its database to generate control data for transmission to the CPE. The results of the evaluation are sent from the OSS 802 via the connected base station (or other communication channel) to the CPE control system module 520, which generates the control signals for adjusting antenna azimuth and tilt based on the data received. The antenna azimuth and tilt are adjusted by the actuators 607 and 608 per step 1245. The control data/position updates are also recorded locally by the system so as to maintain cognizance of the adjustments that the OSS has made, such as for use when the CPE returns to "autonomous" or locally-controlled operation. After the adjustment, new measured (performance and RF) data are stored in the database per step 1247. Per step 1249, the new performance data is evaluated against the then-prevailing threshold value(s). In one variant, this comparison is performed locally by the indigenous iPerf or similar process executing on the CPE. If less than the desired values, the process iterates: the newly generated performance/RF data is transmitted to the OSS, which then generates new control data based on a subsequent evaluation of the previously forwarded historical (and newly generated current) data from the CPE. In another variant (not shown), the data measured and (locally) stored per step 1247 is also sent to the OSS directly, wherein the evaluation of step 1249 is conducted by the OSS (versus the CPE KPI tracker or other local process). Finally, per step 1253, when the OSS (and CPE) has converged on a suitable configuration (i.e., performance per step 1249 is satisfactory), the new azimuth and tilt are reported to the SAS. FIG. 13 is a ladder diagram illustrating exemplary communication flow between CPE/FWA, CBSD/xNB, OSS and SAS according to one embodiment of the post-provisioning methodology of the present disclosure. As shown, this flow 1300 corresponds generally to the methodology 1200 of FIG. 12 (i.e., self-optimization by the CPE/FWA 509). FIG. 14 is a ladder diagram illustrating exemplary communication flow between CPE/FWA, CBSD/xNB, OSS and SAS according to another embodiment of the post-provisioning methodology of the present disclosure. As shown, this flow 1400 corresponds generally to the methodology 1230 of FIG. 12A (i.e., network-assisted optimization of the CPE/FWA 509 by the OSS). FIGS. 15A-15D are functional block diagrams illustrating various spatial diversity use cases/configurations according to some embodiments of the present disclosure. As shown, the present disclosure contemplates a variety of different CBSD/xNB 206, beam, and propagation path (i.e., direct/indirect multipath) combinations by which a given CPE/FWA 509 can transact multiple signals with one or more of the serving CBSDs/xNBs 206, only a few of which are illustrated, but all of which will be appreciated by one of ordinary skill given the present disclosure. For instance, in the embodiment of FIG. 15A, the spatial diversity elements 1 . . . n 1504 associated with a given CBSD/xNB may be communicative with respective ones of the antenna elements 522 of a given radio 601 of the CPE 509 (e.g., elements 1 and 2 respectively of the configuration shown in FIG. 7). FIG. 15B shows multiple antenna elements 522 of the same radio apparatus 601 communicating with each of multiple corresponding antenna elements of a single CBSD.
FIG. 15C shows a single antenna element 522 of the same radio apparatus 601 communicating with a corresponding antenna element of each of multiple different CBSDs 206. As a further example, FIG. 15D shows a single antenna element 522 of different radio apparatus 601 of the CPE 509 communicating with a corresponding antenna element of respective different CBSDs 206. Consistent with the use of multiple different radio apparatus and connections (whether with a single CBSD or multiple CBSDs), it will further be appreciated that multipath packet processing may be utilized, such as that described in co-pending U.S. patent application Ser. No. 16/738,889 filed Jan. 9, 2020 and entitled "METHODS AND APPARATUS FOR SERVICE PROVISION TO OUT-OF-COVERAGE APPARATUS IN WIRELESS SYSTEMS", as well as U.S. patent application Ser. No. 16/676,188 filed Nov. 6, 2019 and entitled "METHODS AND APPARATUS FOR ENHANCING COVERAGE IN QUASI-LICENSED WIRELESS SYSTEMS," each of the foregoing incorporated herein by reference in its entirety. For example, as described therein, MPTCP- or SCTP-based protocol stacks and processing may be used to allow for packet aggregation or dis-aggregation at, e.g., the transport layer of the CPE 509 (e.g., via an SCTP or MPTCP logic stack operative to execute on the CPE/FWA 509), thereby avoiding the typical "head of the line" blocking of a standard protocol such as TCP. It will also be appreciated that while spatial diversity examples are shown in FIGS. 15A-15D, the present disclosure contemplates use of frequency diversity, as well as spectrum "type" diversity, across multiple different radios of the CPE 509. For instance, in one variant, different radios 601 and their associated antenna elements 522 use different carrier frequencies for communication with different CBSDs. In another variant, one radio may use GAA spectrum (unlicensed), while another, used for a particularly "contentious" or interference-laden physical propagation path or azimuth, uses PAL spectrum (which is ostensibly much cleaner due to having at least some licensing-type restrictions on its use). It will be recognized that while certain aspects of the disclosure are described in terms of a specific sequence of steps of a method, these descriptions are only illustrative of the broader methods of the disclosure, and may be modified as required by the particular application. Certain steps may be rendered unnecessary or optional under certain circumstances. Additionally, certain steps or functionality may be added to the disclosed embodiments, or the order of performance of two or more steps permuted. All such variations are considered to be encompassed within the disclosure described and claimed herein. While the above detailed description has shown, described, and pointed out novel features of the disclosure as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the device or process illustrated may be made by those skilled in the art without departing from the disclosure. This description is in no way meant to be limiting, but rather should be taken as illustrative of the general principles of the disclosure. The scope of the disclosure should be determined with reference to the claims. It will be further appreciated that while certain steps and aspects of the various methods and apparatus described herein may be performed by a human being, the disclosed aspects and individual methods and apparatus are generally computerized/computer-implemented.
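As one way to experiment with the multipath transport behavior described above, the following minimal Python sketch opens an MPTCP socket where available; it assumes a Linux kernel (5.6 or later) with MPTCP enabled, and falls back to ordinary TCP (head-of-line blocking and all) otherwise. The endpoint address shown is a placeholder, and this sketch is not drawn from the incorporated applications.

import socket

# Python 3.10+ exposes socket.IPPROTO_MPTCP; fall back to the Linux value.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def open_mptcp_connection(host: str, port: int) -> socket.socket:
    try:
        # Request an MPTCP endpoint so the kernel may aggregate subflows
        # across multiple paths/radios.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # Kernel without MPTCP support: degrade gracefully to plain TCP.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, port))
    return sock

# Example usage (placeholder endpoint):
# conn = open_mptcp_connection("192.0.2.10", 5201)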
Computerized apparatus and methods are necessary to fully implement the aspects described above for any number of reasons including, without limitation, commercial viability, practicality, and even feasibility (i.e., certain steps/processes simply cannot be performed by a human being in any viable fashion).
11943633
DETAILED DESCRIPTION OF EMBODIMENTS Embodiments of the present disclosure are described below with reference to the drawings. Elements and features described in one of the drawings or one embodiment of the present disclosure may be combined with elements and features described in one or more other drawings or embodiments. It should be noted that representations and descriptions of components and processing which are irrelevant to the present disclosure or known by those skilled in the art are omitted in the drawings and the specification for clarity. As shown in FIG. 1, an electronic device 100 for wireless communication according to this embodiment includes processing circuitry 110. The processing circuitry 110, for example, may be implemented as a specific chip, a chipset, a central processing unit (CPU) or the like. The processing circuitry 110 includes a determination unit 111 and a control unit 113. It should be noted that although the determination unit 111 and the control unit 113 are shown in the form of functional blocks in the drawings, it should be understood that the functions of these units may be implemented by the processing circuitry as a whole, and may not necessarily be implemented by discrete actual components in the processing circuitry. In addition, although the processing circuitry is shown as a single box in the drawings, the electronic device may include multiple processing circuits, and the functions of the units may be distributed across the multiple processing circuits, so that the multiple processing circuits cooperate to implement these functions. The determination unit 111 is configured to determine whether a first user equipment satisfies a condition for performing a sidelink communication with a second user equipment using an unlicensed band resource. According to an embodiment, the condition is related to one or more of the following: a service priority of the sidelink communication to be performed; a current link quality between the first user equipment and the second user equipment; the number of failures of Listen Before Talk performed by the first user equipment and/or the second user equipment previously with respect to the unlicensed band resource; a battery level of the first user equipment and/or the second user equipment; and a delay generated in the course of the base station transmitting, to the relay UE of the first user equipment and the second user equipment, information which is to be forwarded by the relay UE to the remote UE of the first user equipment and the second user equipment. In an example, in the above condition, the service priority may include ProSe Per-Packet Priority (PPPP). The link quality may include Reference Signal Receiving Power (RSRP). The number of failures may include the number of subframes for which Listen Before Talk fails within a previous time window having a predetermined length. The delay may include a delay caused by the base station performing Listen Before Talk (LBT) with respect to the unlicensed band resource used for transmitting the information. Before using an unlicensed band resource, a UE is required to perform LBT, resulting in an increase in energy consumption of the UE, and delay may be increased due to failure of LBT.
In this embodiment, before the unlicensed resource is selected, it is determined whether to select the unlicensed resource based on the current link quality, the battery level, the data service priority, and a cumulative number of failures of LBT, in order to ensure service quality for the UE. More specifically, in a case where the relay UE or remote UE selects an unlicensed resource from a resource pool, the condition for using the unlicensed resource by the relay UE or the remote UE may include the following:
(1) In a case where the data service priority (PPPP) level of the sidelink is at a certain level or the required delay is less than a threshold, the relay UE or remote UE directly selects a licensed resource; otherwise the UE is allowed to select the unlicensed resource.
(2) In a case where the link quality (RSRP) of the current link is less than a threshold, the relay UE or remote UE directly uses the licensed resource.
(3) In a case where the relay UE or remote UE counts the accumulated number of subframes for which LBT fails within a previous time window and determines that the accumulated number is greater than a threshold, the relay UE or remote UE directly selects the licensed resource.
(4) In a case where the current battery level of the relay UE or remote UE is lower than a threshold, the relay UE or remote UE directly uses the licensed resource.
(5) In a case where the eNB transmits data related to the sidelink to the relay UE using the unlicensed resource, and the delay caused by the failure of LBT performed by the eNB is greater than a threshold, the eNB instructs the relay UE to directly use the licensed resource. The relay UE directly selects the licensed resource, so as to avoid excessive delay being generated in the entire forwarding process.
The control unit 113 is configured to control the first user equipment to perform the sidelink communication with the second user equipment using the unlicensed band resource in a case where the determination unit 111 determines that the above condition is satisfied. FIGS. 11A to 11C show scenarios for using an unlicensed band in FeD2D (further enhanced D2D) as application examples of embodiments of the present disclosure. In the unidirectional scenario shown in FIG. 11A, the relay UE and the remote UE are both in an in-coverage (IC) state, and the remote UE may use the unlicensed resource in sidelink communication and sidelink discovery. In the bidirectional scenario shown in FIG. 11B, the relay UE and the remote UE are both in the IC state, and each of the relay UE and the remote UE may use the unlicensed resource in sidelink communication and sidelink discovery. In the bidirectional scenario shown in FIG. 11C, the relay UE is in the IC state and the remote UE is in an out-of-coverage (OOC) state; the relay UE may use the unlicensed resource in sidelink communication and sidelink discovery, and the remote UE may use the unlicensed resource in sidelink communication and sidelink discovery. In the embodiments of the present disclosure, the relay UE and the remote UE are configured to perform sidelink communication using an unlicensed band for different scenarios and different states of the UE. Specifically, manners for allocating a resource may include, for example, a manner of eNB direct allocation, a manner of eNB indirect allocation, a manner of relay UE assisted allocation, a manner of UE autonomous allocation, and a manner of shared MCOT.
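Purely for illustration, conditions (1) through (5) above can be summarized in a small Python predicate; all thresholds and the UeContext structure below are hypothetical values chosen for the sketch, not parameters defined by the disclosure or by 3GPP.

from dataclasses import dataclass

@dataclass
class UeContext:
    pppp: int                   # ProSe Per-Packet Priority (lower = higher priority)
    rsrp_dbm: float             # current link quality
    lbt_failures_in_window: int # failed-LBT subframes in the recent window
    battery_pct: float
    enb_lbt_delay_ms: float     # delay of eNB LBT on the unlicensed Uu resource

def may_use_unlicensed(ue: UeContext) -> bool:
    if ue.pppp <= 2:                   # (1) high-priority / delay-sensitive service
        return False
    if ue.rsrp_dbm < -110.0:           # (2) poor current link quality
        return False
    if ue.lbt_failures_in_window > 8:  # (3) too many recent LBT failures
        return False
    if ue.battery_pct < 15.0:          # (4) low battery level
        return False
    if ue.enb_lbt_delay_ms > 50.0:     # (5) excessive eNB-side LBT delay
        return False
    return True                        # otherwise the unlicensed resource may be selected

print(may_use_unlicensed(UeContext(5, -95.0, 2, 80.0, 10.0)))  # True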
Next, exemplary embodiments are described with respect to the above aspects. It should be noted that the unlicensed band resource used for sidelink communication described in the embodiments of the present disclosure may include an unlicensed band resource used for sidelink communication, or may include an unlicensed band resource used for sidelink discovery. According to an embodiment, the determination unit 111 is further configured to determine an acquisition manner of the unlicensed band resource according to indication information from the base station. The acquisition manner includes the base station specifying the unlicensed band resource, or the user equipment selecting the unlicensed band resource from a configured resource pool or a resource pool list. More specifically, the indication information may be, for example, included in radio resource control (RRC) signaling. In addition, the configured resource pool or resource pool list may be, for example, indicated by the RRC signaling or a system information block (SIB). In addition, according to an embodiment, the determination unit 111 is further configured to determine the unlicensed band resource used for the sidelink communication according to indication information from the base station. More specifically, the indication information may be transmitted through a physical downlink control channel (PDCCH). In addition, resource pool information used for the sidelink discovery may be acquired through the RRC signaling and the SIB. Next, an exemplary embodiment in which a base station allocates an unlicensed band resource is described with reference to specific examples. In a case where the UE is in an RRC connection (RRC_CONNECTED) established with a primary cell, signaling from the base station may be configured as, for example, RRC→sl-commconfig→commTxResources→scheduled. In this case, the relay UE or remote UE may acquire DCI format 5 by blindly detecting the PDCCH, so as to acquire a resource used to transmit the PSCCH and PSSCH on the sidelink. In this case, a process for using the unlicensed resource is shown in FIG. 12. First, in a case where the relay UE or remote UE is configured with an unlicensed resource in the RRC signaling, the relay UE or remote UE monitors PDCCH DCI 5B to obtain the configuration of the unlicensed resource. Then, according to the configuration of the unlicensed resource, the relay UE or remote UE performs LBT to access the unlicensed resource. In a case where the cumulative number of times that the relay UE or remote UE fails to access the allocated unlicensed resource is greater than a threshold, the relay UE or remote UE directly requests a licensed resource to perform the sidelink communication. Otherwise, the relay UE or remote UE occupies the unlicensed band to perform the communication. In this case, fields such as SLUnlicensed may be added in RRC, as shown in Table 1.

TABLE 1 - Fields added in RRC
  SLUnlicensed: indicating whether an unlicensed resource can be used on sidelink, 1 bit
  sl-subframeUnli: subframe scheduled in semi-static scheduling
  SL-ConfigUnli: sidelink unlicensed resource configuration IE
  SL-ConfigCommUnli: sidelink communication procedure unlicensed resource configuration IE
  SL-ConfigDiscUnli: sidelink discovery procedure unlicensed resource configuration IE

Further, fields included in the SL-ConfigUnli IE may be, for example, as shown in Table 2 below.
TABLE 2 - Fields included in SL-ConfigUnli IE
  Carrier indicator: indicating cross-carrier scheduling, 0 bit or 3 bits
  PSSCH starting position: indicating a position of a starting symbol for PSSCH, 2 bits
  PSSCH ending symbol: 0 indicating the last symbol of a subframe; 1 indicating the penultimate symbol of the subframe, 1 bit
  Channel Access Type: type of channel access, 1 bit
  Channel Access Priority Class: class of channel access priority, 2 bits, {1, 2, 3, 4}
  MaxnumberofsubframesSL: maximum number of subframes for which LBT fails, 2 bits

In addition, the relay UE or remote UE may acquire the configuration of the unlicensed resource, for example, by monitoring PDCCH DCI format 5B. A field of SCI format 0A may be added in the PDCCH DCI format 5B, to indicate the configuration of an unlicensed resource pool for communication data on the sidelink. In an example, PSCCH SCI 0A\2A and PDCCH DCI 5B\5C may be added. SCI 0A is newly added to indicate configuration information of the unlicensed resource pool. SCI 0A is based on SCI 0, and the newly added fields are used to indicate the configuration of the unlicensed resource used by the PSSCH, including the carrier indicator, starting and ending symbols, a type of channel access, and a class of channel access priority. The fields newly added in SCI 0A relative to SCI 0 are shown in Table 3 below.

TABLE 3 - Field configuration for SCI 0A
  Carrier indicator: indicating cross-carrier scheduling, 0 bit or 3 bits
  PSSCH starting position: indicating a position of a starting symbol for PSSCH, 2 bits
  PSSCH ending symbol: 0 indicating the last symbol of a subframe; 1 indicating the penultimate symbol of the subframe, 1 bit
  Channel Access Type: type of channel access, 1 bit
  Channel Access Priority Class: class of channel access priority, 2 bits, {1, 2, 3, 4}
  MaxnumberofsubframesSL: maximum number of subframes for which LBT fails, 2 bits

SCI 2A is newly added to indicate the configuration information of the unlicensed resource pool. This field is used to indicate the configuration information of an unlicensed resource configured by the relay UE for the remote UE to transmit on a PSSCH resource. SCI 2A is based on SCI 0, and the newly added fields are used to indicate the configuration of an unlicensed resource used by the PSSCH, including the carrier indicator, starting and ending symbols, a type of channel access, and a class of channel access priority. The fields newly added in SCI 2A relative to SCI 0 are shown in Table 4 below.

TABLE 4 - Field configuration for SCI 2A
  Carrier indicator: indicating cross-carrier scheduling, 0 bit or 3 bits
  PSSCH starting position: indicating a position of a starting symbol for PSSCH, 2 bits
  PSSCH ending symbol: 0 indicating the last symbol of a subframe; 1 indicating the penultimate symbol of the subframe, 1 bit
  Channel Access Type: type of channel access, 1 bit
  Channel Access Priority Class: class of channel access priority, 2 bits, {1, 2, 3, 4}
  MaxnumberofsubframesSL: maximum number of subframes for which LBT fails, 3 bits

The newly added PDCCH DCI 5B control signaling is transmitted by the eNB to the relay UE or remote UE, indicates the configuration of the unlicensed resource used by the relay UE or remote UE to transmit the PSCCH and the PSSCH, and includes SCI 0A. Based on PDCCH DCI 5, the configuration of SCI 0A is added.
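To make the field widths of Tables 3 and 4 concrete, the following Python sketch packs a hypothetical SCI 0A payload into a bitfield; the field ordering, the choice of the 3-bit carrier indicator variant, and the 0..3 coding of the priority class are assumptions for illustration, not a standardized encoding.

def pack_sci_0a(carrier: int, start_pos: int, end_sym: int,
                access_type: int, priority_class: int, max_lbt_sf: int) -> int:
    fields = [
        (carrier, 3),         # carrier indicator (3-bit variant assumed)
        (start_pos, 2),       # PSSCH starting position
        (end_sym, 1),         # PSSCH ending symbol (0 = last, 1 = penultimate)
        (access_type, 1),     # channel access type
        (priority_class, 2),  # channel access priority class {1,2,3,4} coded 0..3
        (max_lbt_sf, 2),      # max subframes for which LBT may fail
    ]
    word = 0
    for value, width in fields:
        assert 0 <= value < (1 << width), "field value exceeds its bit width"
        word = (word << width) | value
    return word  # 11-bit payload under these assumptions

payload = pack_sci_0a(carrier=5, start_pos=1, end_sym=0,
                      access_type=1, priority_class=2, max_lbt_sf=3)
print(f"{payload:011b}")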
The newly added PDCCH DCI 5C control signaling is transmitted by the eNB to the relay UE, includes a resource used by the relay UE to transmit the PSCCH and the configuration of an unlicensed resource used by the relay UE to instruct the remote UE to transmit the PSSCH, and includes SCI 2A. Based on PDCCH DCI 5, the configuration of SCI 2A is added. It should be noted that the signaling configuration described above is only illustrative rather than restrictive. In addition, the eNB may configure the unlicensed resource pool in a dynamic manner or a semi-static manner. In the dynamic manner, the eNB dynamically configures the unlicensed resource pool for the relay UE and the remote UE through the PDCCH DCI 5B signaling. The eNB notifies the relay UE or remote UE through the RRC signaling that the scheduling manner of the resource is "scheduled". Then, the eNB transmits the PDCCH DCI 5B (SCI 0A) to the relay UE or remote UE. The relay UE or remote UE performs LBT according to the configuration of the unlicensed resource, to access the unlicensed channel so as to perform the communication. In the semi-static manner, the eNB semi-statically configures the unlicensed resource pool for the relay UE and remote UE through the PDCCH DCI 5B signaling. The eNB notifies the relay UE or remote UE through the RRC signaling that the scheduling manner of the resource is "scheduled", and configures a parameter for semi-static scheduling of the unlicensed resource. Then, the eNB transmits the PDCCH DCI 5B (SCI 0A) to the relay UE or remote UE and activates semi-static scheduling of the unlicensed resource. The relay UE or remote UE performs LBT according to the configuration of the unlicensed resource, to access the unlicensed channel so as to perform the communication. A signaling process of the above example configuration of the unlicensed resource pool is shown in FIG. 13. In this case, fields such as SLUnlicensed and sl-subframeUnli may be added in RRC, as shown in Table 1. A form of PDCCH DCI 5B (SCI 0A) control signaling is added, as shown in Table 3. The exemplary embodiment in which the unlicensed resource is acquired by being specified by the base station is described above. Next, an exemplary embodiment in which the user equipment selects an unlicensed band resource from the configured resource pool or resource pool list is described. In a case where the UE autonomously selects an unlicensed resource, the signaling from the base station is configured as, for example, RRC→sl-commconfig→commTxResources→ue-selected→commTxPoolNormalDedicated or commTxPoolNormalDedicatedExt. In this case, the relay UE or the remote UE may directly select a resource from a given resource pool to perform the sidelink communication. The signaling process for configuring the unlicensed resource is shown in FIG. 14. A process for using the unlicensed resource in this case is shown in FIG. 15. First, in a case where the relay UE or remote UE is configured to use an unlicensed resource pool in the RRC signaling, and the RRC signaling indicates "ue-selected", the relay UE or remote UE selects a resource pool (which may include licensed resources and unlicensed resources) from an RRC resource pool list. Then, in a case where the remote UE or the relay UE selects an unlicensed resource pool, the relay UE or the remote UE performs the LBT according to a parameter configured in the RRC signaling.
In a case where the cumulative number of times that the relay UE or remote UE fails to access the allocated unlicensed resource is greater than a threshold, the relay UE or remote UE directly selects a licensed resource from the resource pool list to perform the sidelink communication. Otherwise, the relay UE or remote UE occupies the unlicensed band to perform the sidelink communication. The relay UE and the remote UE may acquire the configuration of the unlicensed resource pool by monitoring the RRC signaling. Further, for example, an unlicensed resource for the sidelink and configuration information of channel access may be added to PDCCH DCI 5. In a case where the relay UE selects an unlicensed resource in a resource pool list of the RRC signaling, a condition for selecting an unlicensed channel is added before the unlicensed resource is selected, so that the delay caused by accessing an unlicensed channel is reduced, thereby improving service quality for the UE. In this case, fields such as SLUnlicensed and SL-ConfigCommUnli may be added in RRC, as shown in Table 1. Exemplary embodiments related to the resource used for sidelink communication are described above. Next, exemplary embodiments related to a resource used for sidelink discovery are described below. Similar to the resource for communication, the resource used for sidelink discovery may be acquired in a manner in which the base station specifies the unlicensed band resource, or in a manner in which the user equipment selects the unlicensed band resource from a configured resource pool or a resource pool list. For the case where the base station specifies the unlicensed band resource used for sidelink discovery (referred to as "discovery" hereinafter), in a case where the UE is in an RRC_CONNECTED state, a resource used by the UE to transmit discovery may be acquired in the RRC signaling. In a case where the RRC signaling indicates "scheduled", the UE uses a specific resource to transmit discovery. In a case where the RRC signaling indicates "ue-selected", the resource used by the UE to transmit discovery is selected from a specific resource pool, for example, discTxPoolDedicated. A process for transmitting discovery in the case of "scheduled" is shown in FIG. 16. In a case where the relay UE or remote UE is configured with an unlicensed resource in the RRC signaling (scheduled), the relay UE or remote UE performs LBT to access the unlicensed resource. If the LBT is performed successfully, the relay UE or remote UE occupies the unlicensed band. If the number of subframes for which the relay UE or remote UE fails to perform LBT reaches a threshold, the relay UE or remote UE may request a licensed resource pool, for example, through sidelinkUEinformation, to transmit a discovery signal. System capacity can be increased by using the unlicensed resource in the sidelink discovery. Further, the delay of using the unlicensed resource by the UE can be reduced by configuring the cumulative maximum number of subframes for which the LBT fails. In this case, fields such as SLUnlicensed and SL-ConfigDiscUnli may be added in RRC, as shown in Table 1. For the case where the UE selects the unlicensed band resource used for sidelink discovery from the configured resource pool or resource pool list, when the relay UE or remote UE is in the RRC_CONNECTED state and the RRC signaling indicates "ue-selected", the UE is instructed to select a resource from the resource pool.
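The windowed LBT-failure fallback that recurs in the preceding passages might be sketched as follows in Python; the window length and failure threshold are illustrative stand-ins for the configured values (e.g., MaxnumberofsubframesSL).

from collections import deque

class LbtFailureTracker:
    """Counts LBT-failed subframes within a sliding window of recent subframes."""
    def __init__(self, window_subframes: int = 100, max_failures: int = 8):
        self.window = deque(maxlen=window_subframes)
        self.max_failures = max_failures

    def record(self, lbt_succeeded: bool) -> None:
        self.window.append(0 if lbt_succeeded else 1)

    def should_fall_back_to_licensed(self) -> bool:
        # Fall back when accumulated failures in the window exceed the threshold.
        return sum(self.window) > self.max_failures

tracker = LbtFailureTracker()
for ok in [False] * 10:  # ten consecutive failed subframes
    tracker.record(ok)
print(tracker.should_fall_back_to_licensed())  # True: request a licensed pool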
In a case where the configured resource pool includes an unlicensed resource and the unlicensed resource is selected, the UE performs LBT according to a parameter of channel access. In a case where the relay UE or remote UE fails to access the unlicensed channel within the configured maximum number of subframes, the relay UE or remote UE directly selects a licensed resource pool with high priority. In a case where the LBT is successfully performed, discovery is transmitted on the unlicensed channel. A process for transmitting discovery in the case of "ue-selected" is shown in FIG. 17. In a case where the relay UE or remote UE is configured with an unlicensed resource in the RRC signaling (ue-selected), the relay UE or remote UE selects a resource in the RRC signaling. In a case where the unlicensed resource is selected, the relay UE or remote UE performs LBT to access the unlicensed resource. If the LBT is successfully performed, the relay UE or remote UE occupies the unlicensed channel. If the number of subframes for which the LBT performed by the relay UE or remote UE fails is greater than the threshold, the relay UE or remote UE selects a licensed resource pool based on the RRC signaling. The UE selects the unlicensed resource pool from the resource pool list configured in the RRC to perform the discovery. Delay and power consumption due to use of the unlicensed resource can be effectively reduced by adding the condition for selecting the unlicensed resource. In this case, new fields such as SLUnlicensed and SL-ConfigDiscUnli may be added in RRC, as shown in Table 1. Next, an exemplary embodiment in which the base station indirectly allocates an unlicensed resource for the remote UE is described. According to an embodiment, the relay UE forwards one or more of the following to the remote UE: information indicating the unlicensed band resource used for the sidelink communication; and information indicating a resource pool of the unlicensed band resource used for sidelink communication. In a case where the RRC signaling indicates that an SL communication resource is "scheduled", and the UE is in the RRC_CONNECTED state in the primary cell, the RRC signaling is configured as RRC→sl-commconfig→commTxResources→scheduled. The eNB notifies a relay UE connected to a remote UE of the configuration of an unlicensed resource for the remote UE through the RRC signaling. The relay UE completely forwards the configuration information of the unlicensed resource pool to the remote UE. A specific signaling process is shown in FIG. 18. The eNB notifies the relay UE of the configuration information of an unlicensed resource pool for the remote UE through the RRC signaling. The relay UE completely forwards the configuration information of the unlicensed resource pool for the remote UE to the remote UE, for example, through RRCResourceConfig signaling. The remote UE performs LBT according to the configuration information of the unlicensed resource. In this case, fields such as SLUnlicensed and SL-ConfigCommUnli may be added in RRC, as shown in Table 1. In addition, a field of RRCResourceConfig may be introduced (as shown in Table 5).
TABLE 5 - RRCResourceConfig signaling
  RRCResourceConfig: the relay UE forwards resource configuration information for the remote UE

In addition, in a case where the RRC signaling indicates that the SL communication resource is "ue-selected", and the UE is in the RRC_CONNECTED state in the primary cell, the RRC signaling may be configured as, for example, RRC→sl-commconfig→commTxResources→ue-selected→commTxPoolNormalDedicated or commTxPoolNormalDedicatedExt. The eNB notifies the relay UE connected to the remote UE of the configuration of the unlicensed resource for the remote UE through the RRC signaling. The relay UE completely forwards the configuration information of the unlicensed resource pool to the remote UE. The resource pool list (including licensed resources and unlicensed resources) may be configured in the RRC signaling of the relay UE. A specific process is shown in FIG. 19. The eNB transmits the configuration information of the unlicensed resource pool for the remote UE to the relay UE through the RRC signaling. The relay UE completely forwards the configuration information of the unlicensed resource pool for the remote UE to the remote UE through the RRCResourceConfig signaling. If the remote UE selects an unlicensed resource pool according to the configuration of the resource pool, the remote UE performs LBT to access the unlicensed resource. In this exemplary embodiment, the remote UE acquires a resource forwarded via the relay UE, so the configuration of the unlicensed resource may be carried in the RRC signaling of the relay UE. After acquiring the configuration information of the resource for the remote UE, the relay UE completely forwards the configuration information to the remote UE. The remote UE may acquire the configuration of the unlicensed resource to access the unlicensed channel according to the configuration information forwarded by the relay UE. In this case, fields such as SLUnlicensed and SL-ConfigCommUnli may be added in RRC, as shown in Table 1. In addition, a field of RRCResourceConfig may be introduced (as shown in Table 5). FIGS. 20 and 21 show processes in which the remote UE uses an unlicensed resource according to an exemplary embodiment. As shown in FIG. 20, in the case where the remote UE is configured with the unlicensed resource, the remote UE performs LBT according to a usage parameter of the configured unlicensed resource. If the cumulative number of times that the remote UE fails to access the allocated unlicensed resource is greater than the threshold, the remote UE directly requests the licensed resource to perform the sidelink communication. Otherwise, the remote UE occupies the unlicensed band to perform the communication. As shown in FIG. 21, in the case where the remote UE is configured with an unlicensed resource list, and the remote UE selects an unlicensed resource from the unlicensed resource pool list, the remote UE performs LBT according to a usage parameter of the configured unlicensed resource. If the cumulative number of times that the remote UE fails to access the allocated unlicensed resource is greater than the threshold, the remote UE directly selects the licensed resource to perform the sidelink communication. Otherwise, the remote UE occupies the unlicensed band to perform communication. Next, embodiments of relay UE assisted resource allocation are described.
According to an embodiment, the relay UE receives information indicating the unlicensed band resource used for the sidelink communication from the base station, and notifies the remote UE of the indicated unlicensed band resource. Alternatively, the relay UE may receive information indicating a resource pool of the unlicensed band resource used for the sidelink communication from the base station, select the unlicensed band resource used for the sidelink communication for the remote UE, and notify the remote UE of the selected unlicensed band resource. For example, the relay UE may perform this notification through a Physical Sidelink Control Channel (PSCCH). More specifically, the eNB may notify a relay UE connected to the remote UE of the unlicensed resource configuration (ue-selected or scheduled) for the remote UE through the RRC signaling. In a case where the RRC signaling indicates "ue-selected", the relay UE may select a resource from the resource pool for the remote UE, and notify the remote UE of the configuration of the unlicensed resource in the sidelink communication through PSCCH SCI 2A. In a case where the RRC signaling indicates "scheduled" and the scheduled resource is an unlicensed resource, the relay UE may notify the remote UE of the configuration of the unlicensed resource in the sidelink communication through, for example, PSCCH SCI 2A. The remote UE performs LBT according to the configuration information to access the unlicensed band. A configuration process is shown in FIG. 22. In this exemplary embodiment, the relay UE assists the remote UE in acquiring the configuration information of the unlicensed resource in the RRC signaling, and allocates the configuration information of the unlicensed resource to the remote UE through the PSCCH. The remote UE directly performs LBT. If the PSSCH is transmitted after a channel is occupied, the PSCCH need not be transmitted. Therefore, signaling overhead and energy consumption of the remote UE can be reduced. In this case, fields such as SLUnlicensed and SL-ConfigCommUnli (as shown in Table 1) and SCI 2A sidelink control information (as shown in Table 4) may be added in RRC. In addition, according to an embodiment, the relay UE may notify the remote UE of the unlicensed band resource that is allocated to the relay UE. More specifically, in the case where the relay UE is configured with the unlicensed resource through the RRC signaling (ue-selected) or with an activated semi-static unlicensed resource, the relay UE may configure the unlicensed resource for the remote UE according to the configuration of the unlicensed resource pool of the relay UE. The relay UE may notify the remote UE of the configuration of the unlicensed resource in the sidelink communication through, for example, PSCCH SCI 0. The remote UE performs LBT according to the configuration information to access the unlicensed band. A configuration process is shown in FIG. 23. The relay UE shares the configured unlicensed resource with the remote UE, and allocates the configured unlicensed resource to the remote UE through the PSCCH. The remote UE directly performs LBT. If the PSSCH is transmitted after a channel is occupied, the PSCCH need not be transmitted. Therefore, the signaling overhead and energy consumption of the remote UE can be reduced. Further, the signaling overhead for requesting a sidelink resource from the base station can be reduced. In this case, fields such as SLUnlicensed and SL-ConfigCommUnli (as shown in Table 1) may be added to RRC.
Further, SCI 2A sidelink control information (as shown in Table 4) may be added. In addition, according to an embodiment, the relay UE may perform LBT with respect to an unlicensed band resource to be allocated to the remote UE, and notify the remote UE of a successfully accessed unlicensed channel. More specifically, in the case where the relay UE selects the unlicensed resource for the remote UE, in order to reduce the energy consumption of the remote UE performing LBT, the relay UE may perform LBT for the remote UE and then access the unlicensed channel, and notify the remote UE of the unlicensed channel through, for example, PSCCH SCI 2A. In this case, the remote UE may perform type-2 LBT or not perform LBT when accessing the unlicensed channel. A configuration process is shown in FIG. 24. The types of LBT are described briefly here. In the current cellular network, unlicensed bands used in uplink data transmission and downlink data transmission channels are dynamically scheduled. The UE or eNB performs LBT before accessing the unlicensed channel. The standard stipulates that at least a clear channel assessment (CCA) detection, that is, energy detection, is performed. In a case where the energy of the unlicensed band is detected to exceed a threshold, it is indicated that the unlicensed channel is occupied. Currently, there are four categories of LBT: CAT1 LBT, in which LBT is not performed; CAT2 LBT, in which LBT is performed without random back-off; CAT3 LBT, in which LBT is performed with random back-off and a contention window of fixed size; and CAT4 LBT, in which LBT is performed with random back-off and a contention window of variable size. The 3GPP standard specifies two types of uplink unlicensed channel access: type-1, in which CAT4 LBT is adopted and the LBT parameters are configured according to the channel access priority class; and type-2, in which LBT is performed for 25 μs. In a case where the relay UE configures unlicensed information for the remote UE, the relay UE assists the remote UE in performing LBT to occupy the unlicensed channel, thereby reducing the signaling overhead and the energy consumption of the remote UE. In this case, fields such as SLUnlicensed and SL-ConfigCommUnli (as shown in Table 1) may be added to RRC. Further, SCI 2A sidelink control information (as shown in Table 4) may be added. For the process by which the remote UE accesses the unlicensed channel, one may refer to the process by which the remote UE uses the unlicensed resource according to the above exemplary embodiment. FIG. 2 shows a configuration example of an electronic device for wireless communication according to an embodiment. As shown in FIG. 2, an electronic device 200 includes processing circuitry 210. The processing circuitry 210 includes a determination unit 211, a control unit 213 and a selection unit 215. The determination unit 211 and the control unit 213 are similar to the determination unit 111 and the control unit 113, respectively. The selection unit 215 is configured to select an unlicensed band resource used for the sidelink communication based on zone configuration information. Next, selection of the unlicensed resource based on a zone is described by way of examples. In a case where the remote UE is in an RRC_IDLE state, a resource used by the remote UE to perform sidelink communication, for example, may be selected from a resource pool of commTxPoolNormalCommon in SIB18. In a case where a field of prioritylist (priority list) is configured, a resource pool is selected based on a priority in this field.
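For illustration, the four LBT categories and the CCA energy-detection rule summarized above can be sketched as follows; the energy threshold, contention window sizes, and the random energy model are assumptions of the sketch only, not values taken from the standard.

import random
from enum import Enum

class LbtCategory(Enum):
    CAT1 = 1  # no LBT performed
    CAT2 = 2  # LBT without random back-off (e.g., the 25 us type-2 sensing)
    CAT3 = 3  # LBT with random back-off, fixed-size contention window
    CAT4 = 4  # LBT with random back-off, variable-size contention window (type-1)

def measure_energy_dbm() -> float:
    # Stand-in for the radio's energy measurement on the unlicensed channel.
    return random.uniform(-95.0, -55.0)

def cca_clear(threshold_dbm: float = -72.0) -> bool:
    # The channel is considered occupied when measured energy exceeds the threshold.
    return measure_energy_dbm() <= threshold_dbm

def attempt_access(category: LbtCategory) -> bool:
    if category is LbtCategory.CAT1:
        return True            # transmit without sensing
    if category is LbtCategory.CAT2:
        return cca_clear()     # single short sensing interval
    # CAT3/CAT4: back off a random number of clear slots before transmitting.
    window = 8 if category is LbtCategory.CAT3 else random.choice([8, 16, 32])
    backoff = random.randrange(window)
    return all(cca_clear() for _ in range(backoff + 1))

print(attempt_access(LbtCategory.CAT4))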
In a case where the prioritylist field is not configured, for example, a first resource pool may be selected. Since the remote UE selects the resource in SIB 18, multiple remote UEs may select the same unlicensed resource on the same subframes, thereby resulting in a channel access conflict. Therefore, the zone is introduced when the unlicensed resource is used. The UE acquires the unlicensed resource from SIB 18 based on the zone, so as to reduce conflict among the remote UEs accessing the unlicensed channel. FIG. 25 shows an example of zone division. For example, the unlicensed resource may be configured in a resource pool list of SIB18, and UnlicensedEnabledZoneList and ZoneConfig are configured in SIB18. When a UE selects a resource pool in SIB 18 and selects an unlicensed resource pool, the UE checks whether its zone ID is in the UnlicensedEnabledZoneList list. If the zone ID is in the list, the UE is allowed to use the selected unlicensed resource. In this configuration, a process in which the sidelink communication is performed using the unlicensed resource is shown in FIG. 26. The unlicensed resource is configured in a resource pool list of SIB18. If the remote UE selects the unlicensed resource pool and the zone ID of the remote UE is included in the UnlicensedEnabledZoneList, the UE performs LBT to access the unlicensed resource based on the configuration information. In a case where the LBT is performed successfully, the UE occupies the unlicensed resource. In a case where the number of subframes for which LBT fails is greater than the maximum number of subframes, the UE, for example, may directly select a licensed resource pool. Through the above embodiments, usage of unlicensed resources in each zone can be flexibly enabled, thereby effectively reducing unlicensed channel access conflicts and reducing delay. In this case, fields such as SLUnlicensed and SL-ConfigCommUnli (as shown in Table 1) and SCI 2A sidelink control information (as shown in Table 4) may be added to the RRC. In addition, configuration information related to the zone may be introduced, as shown in Table 6.

TABLE 6 - Fields added in SIB18
  ZoneConfigComm: indicating zone configuration for D2D communication
  SLCommZoneListUnli: indicating a list of zone IDs for which unlicensed resources can be used to perform D2D communication
  SL-ConfigCommUnli: sidelink communication procedure unlicensed resource configuration IE

Further, an unlicensed resource pool used for sidelink discovery may also be selected based on the zone. Specifically, in a case where the UE is in the RRC_IDLE state, the relay UE and the remote UE, for example, may acquire resources for transmitting discovery from SIB19. In a case where an unlicensed resource pool used for the sidelink discovery is configured in SIB19, an unlicensed channel access conflict may be caused. In order to reduce an access collision on the unlicensed resource pool, zone configuration may be introduced, as shown in FIG. 25. The eNB divides the zone to control the sidelink use of the unlicensed resource to transmit the discovery. In a case where the unlicensed resource is configured in SIB19, a field of UnlicensedEnabledZoneList may be configured to indicate a zone list for unlicensed resources in SIB19, and a field of ZoneConfig may be configured for the zone. In a case where the relay UE or remote UE selects an unlicensed resource in SIB19, the relay UE or remote UE checks whether its zone ID is in the UnlicensedEnabledZoneList list.
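As a purely illustrative sketch of the zone gating just described, the following Python fragment derives a grid-based zone ID and checks it against a hypothetical UnlicensedEnabledZoneList; the zone-ID formula and all values are assumptions of the sketch, since the actual zone configuration is signaled in SIB18/SIB19.

def zone_id(x_m: float, y_m: float, zone_len_m: float = 100.0,
            zones_per_row: int = 64) -> int:
    # Divide the plane into square zones and index them row-major (assumed formula).
    col = int(x_m // zone_len_m) % zones_per_row
    row = int(y_m // zone_len_m) % zones_per_row
    return row * zones_per_row + col

def may_select_unlicensed_pool(ue_x: float, ue_y: float,
                               unlicensed_enabled_zone_list: set) -> bool:
    # The UE may use the selected unlicensed pool only if its zone ID is in
    # the UnlicensedEnabledZoneList broadcast by the eNB.
    return zone_id(ue_x, ue_y) in unlicensed_enabled_zone_list

enabled = {0, 1, 64, 65}  # hypothetical zone list from SIB18
print(may_select_unlicensed_pool(150.0, 40.0, enabled))  # zone 1 -> True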
If its zone ID is in the UnlicensedEnabledZoneList, the relay UE or remote UE is allowed to use the selected unlicensed resource. A process for using an unlicensed resource in SIB 19 is similar to that shown in FIG. 26. In this case, fields such as SLUnlicensed and SL-ConfigDiscUnli (as shown in Table 1) may be added to the RRC, and configuration information related to the zone may be introduced, as shown in Table 7.

TABLE 7 - Fields added in SIB19
  ZoneConfigDisc: indicating zone configuration for D2D discovery
  SLDiscZoneListUnli: indicating a list of zone IDs for which unlicensed resources can be used to perform D2D discovery
  SL-ConfigDiscUnli: sidelink discovery procedure unlicensed resource configuration IE

Next, embodiments for sharing the MCOT are described. According to an embodiment, the control unit 113 or 213 may be further configured to perform control to transmit information indicating the maximum channel occupancy time (MCOT) of the unlicensed band resource occupied by the first user equipment to the second user equipment, so as to share the MCOT with the second user equipment. The sharing of the MCOT described here means that the first user equipment shares the remaining subframes of the unlicensed resource occupied by the first user equipment with the second user equipment. The first user equipment does not use the unlicensed resource during the sharing period. In addition, the maximum channel occupancy time (MCOT) of the unlicensed band resource occupied by the first user equipment may also be used for the sidelink discovery. For the relay UE and the remote UE, according to an embodiment, the relay UE receives information on the maximum channel occupancy time (MCOT) of the unlicensed band resource for a cellular link from the base station, and uses the unlicensed band resource to perform sidelink communication with the remote UE within the MCOT. Next, manners for sharing the MCOT are described in connection with specific examples. In a first example, in a case where the Uu link uses an unlicensed resource, the relay UE uses the unlicensed resource to perform sidelink communication. The relay UE shares the unlicensed resource occupied by the relay UE with the remote UE and adds a field of MCOTConfig. The remote UE performs type-2 LBT or does not perform LBT to access an unlicensed channel. The field of MCOTConfig includes the number of remaining unlicensed subframes and an unlicensed channel access parameter. The remote UE uses the unlicensed resource within the MCOT. The remote UE may not transmit the PSCCH, and may directly transmit the PSSCH to save available unlicensed subframes. The signaling configuration is shown in FIG. 27. The relay UE activates the remote UE's use of the unlicensed resource through PSCCH SCI 2A. If the relay UE is to terminate the use of the unlicensed resource by the remote UE, the relay UE deactivates the use of the unlicensed resource through PSCCH SCI 2A. In this case, the relay UE configures the unlicensed channel access parameter in MCOTconfig. The MCOTconfig IE may include the number of available subframes and the unlicensed channel access parameter. For example, fields included in the MCOTconfig IE are shown in Table 8.
TABLE 8 - MCOTconfig IE
  NumberSubframe: number of remaining available subframes, 3 bits
  PSSCH starting position: indicating a position of a starting symbol for transmission, 2 bits
  PSSCH ending symbol: 0 indicating that the last symbol of a subframe is the termination transmission symbol; 1 indicating that the penultimate symbol of the subframe is the termination transmission symbol, 1 bit
  Channel Access Type: indicating a type of unlicensed channel access, 1 bit
  Channel Access Priority Class: indicating a class of channel access priority, 2 bits, {1, 2, 3, 4}

In a second example, in a case where the Uu link uses an unlicensed resource, the remote UE uses the unlicensed resource to perform the sidelink communication. The remote UE shares the unlicensed resource occupied by the remote UE with the relay UE and adds a field of MCOTConfig. The relay UE performs type-2 LBT or does not perform LBT to access the unlicensed channel. The field of MCOTConfig includes the number of remaining unlicensed subframes and an unlicensed channel access parameter. The relay UE uses the unlicensed resource within the MCOT. The relay UE may not transmit the PSCCH, and may directly transmit the PSSCH to save available unlicensed subframes. The signaling configuration is shown in FIG. 28. Through this configuration, the relay UE and the remote UE share the unlicensed information of the MCOT. The relay UE or remote UE may perform simple LBT (for example, type-2 LBT) or may not perform LBT to access the unlicensed channel, thereby reducing signaling overhead, energy consumption and delay. In this case, the remote UE configures the unlicensed channel access parameter in MCOTconfig, such as NumberSubframe, PSSCH starting position, PSSCH ending symbol, Channel Access Type, and Channel Access Priority Class, as shown in Table 8. In the above first and second examples, the MCOT is shared between the relay UE and the remote UE, and the Uu link may use the licensed resource or the unlicensed resource. In a third example, in a case where the Uu link uses an unlicensed resource pool to transmit data related to the sidelink to the relay UE, the relay UE may share the unlicensed resource for the Uu link to perform sidelink communication. The process is shown in FIG. 29. The eNB transmits sidelink data to the relay UE, and shares the remaining unlicensed subframes with the relay UE to perform sidelink communication. The eNB notifies the relay UE of the parameter configuration for accessing the unlicensed band through the field of MCOTConfig. The relay UE performs type-2 LBT or does not perform LBT according to the configuration information to occupy the unlicensed channel. The eNB activates the relay UE's use of the unlicensed resource through PDCCH DCI 5B, and configures the unlicensed resource access parameter. If the eNB is to terminate the use of the unlicensed resource by the relay UE, the eNB deactivates the use of the unlicensed resource through the PDCCH DCI 5B. In this case, the eNB configures the unlicensed channel access parameter in MCOTconfig, as shown in Table 8. In a fourth example, in a case where the relay UE or remote UE uses a Model B ("who is there" mode) discovery to perform a discovery process, a UE at the receiving side may access the unlicensed channel within the MCOT. In a case where a UE at the transmitting side transmits a discovery solicitation message to use the unlicensed resource, the UE at the receiving side may, within the MCOT, perform type-2 LBT to access the unlicensed channel to transmit a discovery response message.
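For illustration, the MCOT sharing described in the examples above might be sketched as follows; the MCOTConfig structure mirrors Table 8, while the subframe cap and parameter values are assumptions of the sketch.

from dataclasses import dataclass

@dataclass
class MCOTConfig:
    number_subframe: int      # remaining available subframes (3 bits)
    pssch_start_pos: int      # starting symbol position (2 bits)
    pssch_end_symbol: int     # 0 = last symbol, 1 = penultimate (1 bit)
    channel_access_type: int  # 1 bit
    priority_class: int       # 2 bits, {1, 2, 3, 4}

def share_mcot(remaining_subframes: int) -> MCOTConfig:
    # The occupying UE hands over its remaining subframes and refrains from
    # using the unlicensed resource during the sharing period.
    return MCOTConfig(number_subframe=min(remaining_subframes, 7),
                      pssch_start_pos=0, pssch_end_symbol=0,
                      channel_access_type=1, priority_class=3)

def use_shared_mcot(cfg: MCOTConfig) -> None:
    # Within the shared MCOT the receiving UE may use type-2 LBT (or none),
    # then transmit the PSSCH directly to save available subframes.
    for sf in range(cfg.number_subframe):
        print(f"subframe {sf}: transmit PSSCH (no full CAT4 LBT needed)")

use_shared_mcot(share_mcot(remaining_subframes=5))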
As an alternative, the UE at the receiving side may transmit the discovery response message on the unlicensed channel without performing LBT. The discovery solicitation message includes configuration information of the MCOT, including at least a channel access parameter and the number of remaining available unlicensed subframes. In this manner, the delay and energy consumption generated when the UE at the receiving side accesses the unlicensed resource can be effectively reduced. A signaling process in this case is shown in FIG. 30. In this case, the eNB configures the unlicensed channel access parameter in MCOTconfig, as shown in Table 8. In the present disclosure, the configuration process of the unlicensed parameter on the sidelink is provided, so that a D2D user can acquire the unlicensed resource. Processes for using unlicensed resources in D2D communication and D2D discovery are provided in the present disclosure, and include the configuration and use of the unlicensed resource. When selecting a resource, the UE considers the data service priority, the link quality, or the current battery level of the user's apparatus, so that the delay and service quality degradation caused by the use of the unlicensed resource by the UE can be reduced while increasing system capacity. Further, the energy consumption of a low-power apparatus is reduced. In the above description of the electronic device according to the embodiments of the present disclosure, it is apparent that some methods and processes are also disclosed. Next, a description of the methods according to the embodiments of the disclosure is given without repeating the details described above. As shown in FIG. 3, a wireless communication method according to an embodiment includes the following steps S310 to S320. In S310, it is determined whether a first user equipment satisfies a condition for performing a sidelink communication with a second user equipment using an unlicensed band resource. In S320, if the condition is satisfied, the first user equipment is controlled to perform the sidelink communication with the second user equipment using the unlicensed band resource. FIG. 4 shows a configuration example of an electronic device for wireless communication according to another embodiment. An electronic device 400 includes processing circuitry 410. The processing circuitry 410 includes a control unit 411, which is configured to control the first user equipment to perform the sidelink communication with the second user equipment using the unlicensed band resource, and to perform control to transmit information indicating the MCOT of the unlicensed band resource occupied by the first user equipment to the second user equipment, so as to share the MCOT with the second user equipment. According to an embodiment, one of the first user equipment and the second user equipment operates as a relay UE, and the other of the first user equipment and the second user equipment operates as a remote UE. The remote UE receives information from the base station via the relay UE. The relay UE receives from the base station information indicating the MCOT of the unlicensed band resource for a cellular link, and performs sidelink communication with the remote UE using the unlicensed band resource within the MCOT. According to an embodiment, the maximum channel occupancy time (MCOT) of the unlicensed band resource occupied by the first user equipment is used for sidelink discovery. As shown in FIG. 5, a wireless communication method according to an embodiment includes the following steps S510 to S520.
In S510, the first user equipment is controlled to perform sidelink communication with a second user equipment using unlicensed band resource. In S520, information indicating MCOT of unlicensed band resource occupied by the first user equipment is transmitted to the second user equipment, to share the MCOT with the second user equipment. More specifically, the second user equipment may perform LBT according to configuration of the MCOT, to use the unlicensed resource. FIG.6shows a configuration example of an electronic device for wireless communication according to another embodiment. The electronic device600includes processing circuitry610. The processing circuitry610includes a control unit611, which is configured to perform control to transmit indication information to a user equipment. The indication information indicates an acquisition manner of unlicensed band resource used for a sidelink communication of the user equipment. Alternatively, the indication information indicates the unlicensed band resource. The acquisition manner may include: specifying the unlicensed band resource by the base station; or selecting the unlicensed band resource by a user equipment from a configured resource pool or a resource pool list. As shown inFIG.7, a wireless communication method according to an embodiment includes the following step S710. In S710, indication information is transmitted to a user equipment. The indication information indicates an acquisition manner of unlicensed band resource used for a sidelink communication of the user equipment. Alternatively, the indication information indicates the unlicensed band resource. In addition, a computer readable medium is further provided according to an embodiment of the present disclosure. The computer readable medium includes executable instructions that, when executed by an information processing apparatus, cause the information processing apparatus to execute the methods according to the embodiments of the present disclosure. For example, steps of the above methods and modules and/or units of the above devices may be implemented as software, firmware, hardware, or a combination thereof. In a case where steps of the above methods and modules and/or units of the above devices are implemented by software or firmware, a computer (for example, a general-purpose computer800shown inFIG.8) having a dedicated hardware structure may be installed with a program constituting software for implementing the above methods from a storage medium or a network. When being installed with various programs, the computer is capable of performing various functions. InFIG.8, an arithmetic processing unit (that is, CPU)801performs various processing according to a program stored in a read-only memory (ROM)802or a program loaded from a storage part808to a random-access memory (RAM)803. Data required when the CPU801performs various processing is also stored in the RAM803as needed. The CPU801, the ROM802, and the RAM803are linked to each other via a bus804. An input/output interface805is also linked to the bus804. The following components are linked to the input/output interface805: an input part806(including a keyboard, a mouse or the like), an output part807(including a display such as a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker or the like), a storage part808(including a hard disk or the like), and a communication part809(including a network interface card such as a LAN card, a modem or the like). 
The communication part809performs communication processing via a network such as the Internet. A driver810may also be linked to the input/output interface805as needed. A removable medium811such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory may be installed on the driver810as needed, so that a computer program read from the removable medium811is installed into the storage part808as needed.

In a case where the above series of processing is implemented by software, a program constituting the software is installed from a network such as the Internet, or from a storage medium such as the removable medium811.

Those skilled in the art should understand that the storage medium is not limited to the removable medium811shown inFIG.8that stores a program and is distributed separately from the apparatus so as to provide the program to the user. The removable medium811, for example, may include: a magnetic disk (including a floppy disk (registered trademark)); an optical disk (including a compact disk read only memory (CD-ROM) and a digital versatile disc (DVD)); a magneto-optical disk (including a minidisc (MD) (registered trademark)); and a semiconductor memory. Alternatively, the storage medium may be the ROM802, a hard disk included in the storage part808or the like. The storage medium has a program stored therein and is distributed to the user together with an apparatus in which the storage medium is included.

A program product storing machine-readable instruction codes is further provided according to an embodiment of the present disclosure. The instruction codes, when being read and executed by a machine, may perform the methods according to the above embodiments of the present disclosure. Accordingly, a storage medium for carrying the program product storing the machine-readable instruction codes is also provided according to the present disclosure. The storage medium may include but is not limited to a floppy disk, an optical disk, a magneto-optical disk, a memory card, a memory stick or the like.

The following electronic apparatus is involved in the embodiments of the present disclosure. In a case where the electronic apparatus is used for base station side, the electronic apparatus may be implemented as any type of gNB or evolved node B (eNB), such as a macro eNB and a small eNB. The small eNB may be an eNB of a cell having a smaller coverage than a macro cell, such as a pico-cell eNB, a micro eNB and a home (femto) eNB. Alternatively, the electronic apparatus may be implemented as any other type of base station, such as a NodeB and a base transceiver station (BTS). The electronic apparatus may include: a main body (also referred to as a base station apparatus) configured to control the wireless communication; and one or more remote radio heads (RRH) provided at a different position from the main body. In addition, various types of terminals, which are described below, may each serve as a base station by performing functions of the base station temporarily or semi-persistently.

In a case where the electronic apparatus is used for user equipment side, the electronic apparatus may be implemented as a mobile terminal (such as a smartphone, a tablet personal computer (PC), a notebook PC, a portable game terminal, a portable/dongle mobile router and a digital camera) or a vehicle terminal (such as an automobile navigation apparatus).
Furthermore, the electronic apparatus may be a wireless communication module (such as an integrated circuitry module including a single die or multiple dies) mounted on each of the terminals described above. Application Example for a Terminal Apparatus FIG.9is a block diagram showing an exemplary configuration of a smartphone2500to which technology according to the present disclosure may be applied. The smartphone2500includes a processor2501, a memory2502, a storage device2503, an external connection interface2504, a camera device2506, a sensor2507, a microphone2508, an input device2509, a display device2510, a speaker2511, a wireless communication interface2512, one or more antenna switches2515, one or more antennas2516, a bus2517, a battery2518and an auxiliary controller2519. The processor2501may be, for example, a CPU or a system on chip (SoC), and controls functions of an application layer and another layer of the smartphone2500. The memory2502includes an RAM and an ROM, and stores data and a program executed by the processor2501. The storage device2503may include a storage medium such as a semiconductor memory and a hard disk. The external connection interface2504is an interface for connecting an external device (such as a memory card and a universal serial bus (USB) device) to the smartphone2500. The camera device2506includes an image sensor (such as a charge coupled device (CCD) and a complementary metal oxide semiconductor (CMOS)), and generates a captured image. The sensor2507may include a group of sensors such as a measurement sensor, a gyro sensor, a geomagnetic sensor, and an acceleration sensor. The microphone2508converts sound that is inputted to the smartphone2500into an audio signal. The input device2509includes, for example, a touch sensor configured to detect touch on a screen of the display device2510, a keypad, a keyboard, a button, or a switch, and receives an operation or information inputted from a user. The display device2510includes a screen (such as a liquid crystal display (LCD) and an organic light-emitting diode (OLED) display), and displays an output image of the smartphone2500. The speaker2511is configured to convert an audio signal outputted from the smartphone2500into sound. The wireless communication interface2512supports any cellular communication scheme (such as LTE and LTE-Advanced), and performs wireless communication. The wireless communication interface2512may include, for example, a baseband (BB) processor2513and radio frequency (RF) circuitry2514. The BB processor2513may perform, for example, coding/decoding, modulating/demodulating and multiplexing/de-multiplexing, and perform various types of signal processing for wireless communications. The RF circuitry2514may include, for example, a mixer, a filter and an amplifier, and transmits and receives a wireless signal via an antenna2516. The wireless communication interface2512may be a chip module having the BB processor2513and the RF circuitry2514integrated thereon. As shown inFIG.9, the wireless communication interface2512may include multiple BB processors2513and multiple RF circuitry2514. AlthoughFIG.9shows an example in which the wireless communication interface2512includes the multiple BB processors2513and the multiple RF circuitry2514, the wireless communication interface2512may include a single BB processor2513or single RF circuitry2514. 
Besides the cellular communication scheme, the wireless communication interface2512may support an additional type of wireless communication scheme, such as a short-distance wireless communication scheme, a near field communication scheme and a wireless local area network (LAN) scheme. In this case, the wireless communication interface2512may include the BB processor2513and the RF circuitry2514for each wireless communication scheme. Each of the antenna switches2515switches connection destinations of the antennas2516among multiple circuitry (such as circuitry for different wireless communication schemes) included in the wireless communication interface2512.

Each of the antennas2516includes a single or multiple antenna elements (such as multiple antenna elements included in an MIMO antenna), and is used for the wireless communication interface2512to transmit and receive a wireless signal. The smartphone2500may include multiple antennas2516, as shown inFIG.9. AlthoughFIG.9shows an example in which the smartphone2500includes the multiple antennas2516, the smartphone2500may also include a single antenna2516. In addition, the smartphone2500may include an antenna2516for each type of wireless communication scheme. In this case, the antenna switches2515may be omitted from the configuration of the smartphone2500.

The processor2501, the memory2502, the storage device2503, the external connection interface2504, the camera device2506, the sensor2507, the microphone2508, the input device2509, the display device2510, the speaker2511, the wireless communication interface2512, and the auxiliary controller2519are connected to each other via the bus2517. The battery2518supplies power to blocks of the smartphone2500shown inFIG.9via feeders which are partially shown with dashed lines in the drawings. The auxiliary controller2519, for example, operates a minimum necessary function of the smartphone2500in a sleep mode.

In the smartphone2500shown inFIG.9, the transceiving device of the apparatus for user equipment side according to an embodiment of the present disclosure may be implemented by the wireless communication interface2512. At least a part of functions of the processing circuitry and/or units of the electronic device or the information processing apparatus for user equipment side according to the embodiments of the present disclosure may be implemented by the processor2501or the auxiliary controller2519. For example, the auxiliary controller2519may perform a part of functions of the processor2501, to reduce power consumption of the battery2518. Further, the processor2501or the auxiliary controller2519may perform at least a part of functions of the processing circuitry and/or the units of the electronic device or the information processing apparatus for user equipment side according to the embodiments of the present disclosure by executing a program stored in the memory2502or the storage device2503.

Application Example for a Base Station

FIG.10is a block diagram showing an exemplary configuration of a base station such as an evolved Node B (eNB) to which the technology according to the present disclosure may be applied. An eNB2300includes one or more antennas2310and a base station apparatus2320. Each of the antennas2310is connected to the base station apparatus2320via a radio frequency (RF) cable.
Each of the antennas2310includes a single antenna element or multiple antenna elements (such as multiple antenna elements included in a multiple-input multiple-output (MIMO) antenna), and is used for the base station apparatus2320to transmit and receive a wireless signal. The eNB2300may include multiple antennas2310, as shown inFIG.10. For example, the multiple antennas2310may be compatible with multiple frequency bands used by the eNB2300. AlthoughFIG.10shows an example in which the eNB2300includes multiple antennas2310, the eNB2300may include a single antenna2310. The base station apparatus2320includes a controller2321, a memory2322, a network interface2323, and a wireless communication interface2325. The controller2321may be, for example, a CPU or a DSP, and operate various functions of a high layer of the base station apparatus2320. For example, the controller2321generates a data packet based on data in a signal processed by the wireless communication interface2325and transmits the generated packet via the network interface2323. The controller2321may bundle data from multiple baseband processors to generate a bundled packet and transmit the generated bundled packet. The controller2321may have a logic function that performs control such as radio resource control, wireless bearer control, mobility management, admission control, and scheduling. The control may be performed in combination with a nearby eNB or core network node. The memory2322includes an RAM and an ROM, and stores a program executed by the controller2321and various types of control data (such as a terminal list, transmission power data and scheduling data). The network interface2323is a communication interface via which the base station apparatus2320is connected to a core network2324. The controller2321may communicate with a core network node or another eNB via the network interface2323. In this case, the eNB2300may be connected to the core network node or other eNB via a logical interface (such as an S1 interface and an X2 interface). The network interface2323may also be a wired communication interface or a wireless communication interface for wireless backhaul line. If the network interface2323is the wireless communication interface, the network interface2323may use a frequency band for wireless communication higher than a frequency band used by the wireless communication interface2325. The wireless communication interface2325supports any cellular communication scheme (such as long term evolution (LTE) and LTE-Advanced), and provides wireless connection to a terminal positioned in a cell of the eNB2300via an antenna2310. The wireless communication interface2325may include, for example, a BB processor2326and RF circuitry2327. The BB processor2326may perform, for example, encoding/decoding, modulating/demodulating and multiplexing/de-multiplexing, and various types of signal processing of layers (such as L1, medium access control (MAC), radio link control (RLC) and packet data convergence protocol (PDCP)). Instead of the controller2321, the BB processor2326may have a part or all of the above logic functions. The BB processor2326may be implemented as a memory storing a communication control program, or a module including a processor configured to execute a program and related circuitry. The function of the BB processor2326may be changed by updating the program. The module may be a card or blade inserted into a slot of the base station apparatus2320. Alternatively, the module may be a chip installed on the card or the blade. 
Further, the RF circuitry2327may include, for example, a mixer, a filter or an amplifier, and transmits and receives a wireless signal via the antenna2310. As shown inFIG.10, the wireless communication interface2325may include multiple BB processors2326. For example, the multiple BB processors2326may be compatible with multiple frequency bands used by the eNB2300. As shown inFIG.10, the wireless communication interface2325may include multiple RF circuitry2327. For example, the multiple RF circuitry2327may be compatible with multiple antenna elements. AlthoughFIG.10shows an example in which the wireless communication interface2325includes multiple BB processors2326and multiple RF circuitry2327, the wireless communication interface2325may include a single BB processor2326or single RF circuitry2327. In the eNB2300shown inFIG.10, the transceiving device of the apparatus for base station side according to an embodiment of the present disclosure may be implemented by the wireless communication interface2325. At least a part of functions of the processing circuitry and/or units of the electronic device or the information processing apparatus for base station side according to the embodiment of the present disclosure may be implemented by the controller2321. For example, the controller2321may perform at least a part of functions of the processing circuitry and/or the units of the electronic device or the information processing apparatus for base station side according to the embodiment of the present disclosure by executing the program stored in the memory2322. In the above description of specific embodiments of the present disclosure, features described and/or illustrated for one embodiment may be used in one or more other embodiments in the same or similar manner, or may be combined with features in other embodiments, or may replace features in other embodiments. It should be emphasized that terms of “include/comprise” used herein indicate presence of a feature, an element, a step, or a component, but do not exclude presence or addition of one or more other features, elements, steps or components. In the above embodiments and examples, reference signs consisting of numbers are used to represent steps and/or units. Those skilled in the art should understand that these reference numerals are only for purpose of illustration and drawing and are not indicative of the order or any other limitations thereof. In addition, the method according to the present disclosure is not limited to be performed in the chronological order described herein, and may be performed in other chronological order, in parallel or independently. Therefore, the order in which the method is performed described herein does not limit the technical scope of the present disclosure. Although the present disclosure is described above through the specific embodiments of the present disclosure, it should be understood that all embodiments and examples described above are illustrative rather than restrictive. Various modifications, improvements and equivalents may be made to the present disclosure by those skilled in the art within the scope and spirit of the attached claims. These modifications, improvements or equivalents should fall within the protection scope of the present disclosure.
68,394
11943634
DETAILED DESCRIPTION FIG.1throughFIG.12, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device. The following documents are hereby incorporated by reference into the present disclosure as if fully set forth herein: 3GPP TS 38.211 v15.4.0, “NR; Physical channels and modulation”; 3GPP TS 38.212 v15.4.0, “NR; Multiplexing and Channel coding”; 3GPP TS 38.213 v15.4.0, “NR; Physical Layer Procedures for Control”; 3GPP TS 38.214 v15.4.0, “NR; Physical Layer Procedures for Data”; and 3GPP TS 38.331 v15.4.0, “NR; Radio Resource Control (RRC) Protocol Specification.” FIGS.1-3below describe various embodiments implemented in wireless communications systems and with the use of orthogonal frequency division multiplexing (OFDM) or orthogonal frequency division multiple access (OFDMA) communication techniques. The descriptions ofFIGS.1-3are not meant to imply physical or architectural limitations to the manner in which different embodiments may be implemented. Different embodiments of the present disclosure may be implemented in any suitably-arranged communications system. FIG.1illustrates an example wireless network according to embodiments of the present disclosure. The embodiment of the wireless network shown inFIG.1is for illustration only. Other embodiments of the wireless network100could be used without departing from the scope of this disclosure. As shown inFIG.1, the wireless network includes a gNB101(e.g., base station, BS), a gNB102, and a gNB103. The gNB101communicates with the gNB102and the gNB103. The gNB101also communicates with at least one network130, such as the Internet, a proprietary Internet Protocol (IP) network, or other data network. The gNB102provides wireless broadband access to the network130for a first plurality of UEs within a coverage area120of the gNB102. The first plurality of UEs includes a UE111, which may be located in a small business; a UE112, which may be located in an enterprise (E); a UE113, which may be located in a WiFi hotspot (HS); a UE114, which may be located in a first residence (R); a UE115, which may be located in a second residence (R); and a UE116, which may be a mobile device (M), such as a cell phone, a wireless laptop, a wireless PDA, or the like. The gNB103provides wireless broadband access to the network130for a second plurality of UEs within a coverage area125of the gNB103. The second plurality of UEs includes the UE115and the UE116. In some embodiments, one or more of the gNBs101-103may communicate with each other and with the UEs111-116using 5G/NR, LTE, LTE-A, WiMAX, WiFi, or other wireless communication techniques. Depending on the network type, the term “base station” or “BS” can refer to any component (or collection of components) configured to provide wireless access to a network, such as transmit point (TP), transmit-receive point (TRP), an enhanced base station (eNodeB or eNB), a 5G/NR base station (gNB), a macrocell, a femtocell, a WiFi access point (AP), or other wirelessly enabled devices. 
Base stations may provide wireless access in accordance with one or more wireless communication protocols, e.g., 5G/NR 3GPP new radio interface/access (NR), long term evolution (LTE), LTE advanced (LTE-A), high speed packet access (HSPA), Wi-Fi 802.11a/b/g/n/ac, etc. For the sake of convenience, the terms “BS” and “TRP” are used interchangeably in this patent document to refer to network infrastructure components that provide wireless access to remote terminals. Also, depending on the network type, the term “user equipment” or “UE” can refer to any component such as “mobile station,” “subscriber station,” “remote terminal,” “wireless terminal,” “receive point,” or “user device.” For the sake of convenience, the terms “user equipment” and “UE” are used in this patent document to refer to remote wireless equipment that wirelessly accesses a BS, whether the UE is a mobile device (such as a mobile telephone or smartphone) or is normally considered a stationary device (such as a desktop computer or vending machine).

Dotted lines show the approximate extents of the coverage areas120and125, which are shown as approximately circular for the purposes of illustration and explanation only. It should be clearly understood that the coverage areas associated with gNBs, such as the coverage areas120and125, may have other shapes, including irregular shapes, depending upon the configuration of the gNBs and variations in the radio environment associated with natural and man-made obstructions.

As described in more detail below, one or more of the UEs111-116include circuitry, programming, or a combination thereof for UEs. In certain embodiments, one or more of the gNBs101-103includes circuitry, programming, or a combination thereof for UEs.

AlthoughFIG.1illustrates one example of a wireless network, various changes may be made toFIG.1. For example, the wireless network could include any number of gNBs and any number of UEs in any suitable arrangement. Also, the gNB101could communicate directly with any number of UEs and provide those UEs with wireless broadband access to the network130. Similarly, each gNB102-103could communicate directly with the network130and provide UEs with direct wireless broadband access to the network130. Further, the gNBs101,102, and/or103could provide access to other or additional external networks, such as external telephone networks or other types of data networks.

FIG.2illustrates an example gNB102according to embodiments of the present disclosure. The embodiment of the gNB102illustrated inFIG.2is for illustration only, and the gNBs101and103ofFIG.1could have the same or similar configuration. However, gNBs come in a wide variety of configurations, andFIG.2does not limit the scope of this disclosure to any particular implementation of a gNB.

As shown inFIG.2, the gNB102includes multiple antennas205a-205n, multiple RF transceivers210a-210n, transmit (TX) processing circuitry215, and receive (RX) processing circuitry220. The gNB102also includes a controller/processor225, a memory230, and a backhaul or network interface235.

The RF transceivers210a-210nreceive, from the antennas205a-205n, incoming RF signals, such as signals transmitted by UEs in the network100. The RF transceivers210a-210ndown-convert the incoming RF signals to generate IF or baseband signals. The IF or baseband signals are sent to the RX processing circuitry220, which generates processed baseband signals by filtering, decoding, and/or digitizing the baseband or IF signals.
The RX processing circuitry220transmits the processed baseband signals to the controller/processor225for further processing.

The TX processing circuitry215receives analog or digital data (such as voice data, web data, e-mail, or interactive video game data) from the controller/processor225. The TX processing circuitry215encodes, multiplexes, and/or digitizes the outgoing baseband data to generate processed baseband or IF signals. The RF transceivers210a-210nreceive the outgoing processed baseband or IF signals from the TX processing circuitry215and up-convert the baseband or IF signals to RF signals that are transmitted via the antennas205a-205n.

The controller/processor225can include one or more processors or other processing devices that control the overall operation of the gNB102. For example, the controller/processor225could control the reception of forward channel signals and the transmission of reverse channel signals by the RF transceivers210a-210n, the RX processing circuitry220, and the TX processing circuitry215in accordance with well-known principles. The controller/processor225could support additional functions as well, such as more advanced wireless communication functions. For instance, the controller/processor225could support beam forming or directional routing operations in which outgoing/incoming signals from/to multiple antennas205a-205nare weighted differently to effectively steer the outgoing signals in a desired direction. Any of a wide variety of other functions could be supported in the gNB102by the controller/processor225.

The controller/processor225is also capable of executing programs and other processes resident in the memory230, such as an OS. The controller/processor225can move data into or out of the memory230as required by an executing process.

The controller/processor225is also coupled to the backhaul or network interface235. The backhaul or network interface235allows the gNB102to communicate with other devices or systems over a backhaul connection or over a network. The interface235could support communications over any suitable wired or wireless connection(s). For example, when the gNB102is implemented as part of a cellular communication system (such as one supporting 5G/NR, LTE, or LTE-A), the interface235could allow the gNB102to communicate with other gNBs over a wired or wireless backhaul connection. When the gNB102is implemented as an access point, the interface235could allow the gNB102to communicate over a wired or wireless local area network or over a wired or wireless connection to a larger network (such as the Internet). The interface235includes any suitable structure supporting communications over a wired or wireless connection, such as an Ethernet or RF transceiver.

The memory230is coupled to the controller/processor225. Part of the memory230could include a RAM, and another part of the memory230could include a flash memory or other ROM.

AlthoughFIG.2illustrates one example of gNB102, various changes may be made toFIG.2. For example, the gNB102could include any number of each component shown inFIG.2. As a particular example, an access point could include a number of interfaces235, and the controller/processor225could support routing functions to route data between different network addresses. As another particular example, while shown as including a single instance of TX processing circuitry215and a single instance of RX processing circuitry220, the gNB102could include multiple instances of each (such as one per RF transceiver).
Also, various components inFIG.2could be combined, further subdivided, or omitted and additional components could be added according to particular needs. FIG.3illustrates an example UE116according to embodiments of the present disclosure. The embodiment of the UE116illustrated inFIG.3is for illustration only, and the UEs111-115ofFIG.1could have the same or similar configuration. However, UEs come in a wide variety of configurations, andFIG.3does not limit the scope of this disclosure to any particular implementation of a UE. As shown inFIG.3, the UE116includes an antenna305, a radio frequency (RF) transceiver310, TX processing circuitry315, a microphone320, and RX processing circuitry325. The UE116also includes a speaker330, a processor340, an input/output (I/O) interface (IF)345, a touchscreen350, a display355, and a memory360. The memory360includes an operating system (OS)361and one or more applications362. The RF transceiver310receives, from the antenna305, an incoming RF signal transmitted by a gNB of the network100. The RF transceiver310down-converts the incoming RF signal to generate an intermediate frequency (IF) or baseband signal. The IF or baseband signal is sent to the RX processing circuitry325, which generates a processed baseband signal by filtering, decoding, and/or digitizing the baseband or IF signal. The RX processing circuitry325transmits the processed baseband signal to the speaker330(such as for voice data) or to the processor340for further processing (such as for web browsing data). The TX processing circuitry315receives analog or digital voice data from the microphone320or other outgoing baseband data (such as web data, e-mail, or interactive video game data) from the processor340. The TX processing circuitry315encodes, multiplexes, and/or digitizes the outgoing baseband data to generate a processed baseband or IF signal. The RF transceiver310receives the outgoing processed baseband or IF signal from the TX processing circuitry315and up-converts the baseband or IF signal to an RF signal that is transmitted via the antenna305. The processor340can include one or more processors or other processing devices and execute the OS361stored in the memory360in order to control the overall operation of the UE116. For example, the processor340could control the reception of forward channel signals and the transmission of reverse channel signals by the RF transceiver310, the RX processing circuitry325, and the TX processing circuitry315in accordance with well-known principles. In some embodiments, the processor340includes at least one microprocessor or microcontroller. The processor340is also capable of executing other processes and programs resident in the memory360, such as processes for beam management. The processor340can move data into or out of the memory360as required by an executing process. In some embodiments, the processor340is configured to execute the applications362based on the OS361or in response to signals received from gNBs or an operator. The processor340is also coupled to the I/O interface345, which provides the UE116with the ability to connect to other devices, such as laptop computers and handheld computers. The I/O interface345is the communication path between these accessories and the processor340. The processor340is also coupled to the touchscreen350and the display355. The operator of the UE116can use the touchscreen350to enter data into the UE116. 
The display355may be a liquid crystal display, light emitting diode display, or other display capable of rendering text and/or at least limited graphics, such as from web sites. The memory360is coupled to the processor340. Part of the memory360could include a random access memory (RAM), and another part of the memory360could include a Flash memory or other read-only memory (ROM).

AlthoughFIG.3illustrates one example of UE116, various changes may be made toFIG.3. For example, various components inFIG.3could be combined, further subdivided, or omitted and additional components could be added according to particular needs. As a particular example, the processor340could be divided into multiple processors, such as one or more central processing units (CPUs) and one or more graphics processing units (GPUs). Also, whileFIG.3illustrates the UE116configured as a mobile telephone or smartphone, UEs could be configured to operate as other types of mobile or stationary devices.

To meet the demand for wireless data traffic, which has increased since the deployment of 4G communication systems, and to enable various vertical applications, efforts have been made to develop and deploy an improved 5G/NR or pre-5G/NR communication system. Therefore, the 5G/NR or pre-5G/NR communication system is also called a “beyond 4G network” or a “post LTE system.” The 5G/NR communication system is considered to be implemented in higher frequency (mmWave) bands, e.g., 28 GHz or 60 GHz bands, so as to accomplish higher data rates, or in lower frequency bands, such as 6 GHz, to enable robust coverage and mobility support. Aspects of the present disclosure may also be applied to deployments of 5G communication systems, 6G, or even later releases, which may use terahertz (THz) bands. To decrease propagation loss of the radio waves and increase the transmission distance, beamforming, massive multiple-input multiple-output (MIMO), full dimensional MIMO (FD-MIMO), array antenna, analog beamforming, and large scale antenna techniques are discussed in 5G/NR communication systems. In addition, in 5G/NR communication systems, development for system network improvement is under way based on advanced small cells, cloud radio access networks (RANs), ultra-dense networks, device-to-device (D2D) communication, wireless backhaul, moving network, cooperative communication, coordinated multi-points (CoMP), reception-end interference cancellation and the like.

A communication system includes a downlink (DL) that refers to transmissions from a base station or one or more transmission points to UEs and an uplink (UL) that refers to transmissions from UEs to a base station or to one or more reception points. A time unit for DL signaling or for UL signaling on a cell is referred to as a slot and can include one or more symbols. A symbol can also serve as an additional time unit. A frequency (or bandwidth (BW)) unit is referred to as a resource block (RB). One RB includes a number of sub-carriers (SCs). For example, a slot can have a duration of 0.5 milliseconds or 1 millisecond, include 14 symbols, and an RB can include 12 SCs with inter-SC spacing of 15 kHz or 30 kHz, and so on.

DL signals include data signals conveying information content, control signals conveying DL control information (DCI), and reference signals (RS) that are also known as pilot signals. A gNB transmits data information or DCI through respective physical DL shared channels (PDSCHs) or physical DL control channels (PDCCHs).
A PDSCH or a PDCCH can be transmitted over a variable number of slot symbols including one slot symbol. For brevity, a DCI format scheduling a PDSCH reception by a UE is referred to as a DL DCI format and a DCI format scheduling a physical uplink shared channel (PUSCH) transmission from a UE is referred to as a UL DCI format.

A gNB transmits one or more of multiple types of RS including channel state information RS (CSI-RS) and demodulation RS (DMRS). A CSI-RS is primarily intended for UEs to perform measurements and provide channel state information (CSI) to a gNB. For channel measurement, non-zero power CSI-RS (NZP CSI-RS) resources are used. For interference measurement reports (IMRs), CSI interference measurement (CSI-IM) resources associated with a zero power CSI-RS (ZP CSI-RS) configuration are used. A CSI process consists of NZP CSI-RS and CSI-IM resources. A UE can determine CSI-RS transmission parameters through DL control signaling or higher layer signaling, such as RRC signaling, from a gNB. Transmission instances of a CSI-RS can be indicated by DL control signaling or be configured by higher layer signaling. A DMRS is transmitted only in the BW of a respective PDCCH or PDSCH and a UE can use the DMRS to demodulate data or control information.

FIG.4andFIG.5illustrate example wireless transmit and receive paths according to this disclosure. In the following description, a transmit path400may be described as being implemented in a gNB (such as gNB102), while a receive path500may be described as being implemented in a UE (such as UE116). However, it may be understood that the receive path500can be implemented in a gNB and that the transmit path400can be implemented in a UE. In some embodiments, the receive path500is configured to support the codebook design and structure for systems having 2D antenna arrays as described in embodiments of the present disclosure.

The transmit path400as illustrated inFIG.4includes a channel coding and modulation block405, a serial-to-parallel (S-to-P) block410, a size N inverse fast Fourier transform (IFFT) block415, a parallel-to-serial (P-to-S) block420, an add cyclic prefix block425, and an up-converter (UC)430. The receive path500as illustrated inFIG.5includes a down-converter (DC)555, a remove cyclic prefix block560, a serial-to-parallel (S-to-P) block565, a size N fast Fourier transform (FFT) block570, a parallel-to-serial (P-to-S) block575, and a channel decoding and demodulation block580.

As illustrated inFIG.4, the channel coding and modulation block405receives a set of information bits, applies coding (such as a low-density parity check (LDPC) coding), and modulates the input bits (such as with quadrature phase shift keying (QPSK) or quadrature amplitude modulation (QAM)) to generate a sequence of frequency-domain modulation symbols. The serial-to-parallel block410converts (such as de-multiplexes) the serial modulated symbols to parallel data in order to generate N parallel symbol streams, where N is the IFFT/FFT size used in the gNB102and the UE116. The size N IFFT block415performs an IFFT operation on the N parallel symbol streams to generate time-domain output signals. The parallel-to-serial block420converts (such as multiplexes) the parallel time-domain output symbols from the size N IFFT block415in order to generate a serial time-domain signal. The add cyclic prefix block425inserts a cyclic prefix into the time-domain signal.
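To make the baseband chain of blocks410-425concrete, here is a minimal Python/numpy sketch of OFDM symbol construction (serial-to-parallel conversion, size N IFFT, parallel-to-serial conversion, and cyclic prefix insertion). It is a simplified illustration under assumed parameters, not the actual implementation of transmit path400.

  import numpy as np

  def ofdm_modulate(symbols: np.ndarray, n_fft: int, cp_len: int) -> np.ndarray:
      """Serial-to-parallel, size-N IFFT, parallel-to-serial, and cyclic prefix
      insertion (blocks 410-425) for frequency-domain modulation symbols."""
      assert len(symbols) % n_fft == 0, "pad the symbol stream to a multiple of n_fft"
      out = []
      for blk in symbols.reshape(-1, n_fft):       # serial-to-parallel (block 410)
          t = np.fft.ifft(blk) * np.sqrt(n_fft)    # size-N IFFT (block 415), unit-power scaling
          t = np.concatenate([t[-cp_len:], t])     # cyclic prefix insertion (block 425)
          out.append(t)                            # parallel-to-serial (block 420)
      return np.concatenate(out)

  # Example: 64 QPSK symbols on a size-64 IFFT with a 16-sample cyclic prefix.
  bits = np.random.randint(0, 2, 128)
  qpsk = ((1 - 2.0 * bits[0::2]) + 1j * (1 - 2.0 * bits[1::2])) / np.sqrt(2)
  tx = ofdm_modulate(qpsk, n_fft=64, cp_len=16)    # 80 complex baseband samples

The receive path500mirrors these steps in reverse order: cyclic prefix removal, a size N FFT, and demodulation.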
The up-converter430modulates (such as up-converts) the output of the add cyclic prefix block425to an RF frequency for transmission via a wireless channel. The signal may also be filtered at baseband before conversion to the RF frequency.

A transmitted RF signal from the gNB102arrives at the UE116after passing through the wireless channel, and reverse operations to those at the gNB102are performed at the UE116. As illustrated inFIG.5, the down-converter555down-converts the received signal to a baseband frequency, and the remove cyclic prefix block560removes the cyclic prefix to generate a serial time-domain baseband signal. The serial-to-parallel block565converts the time-domain baseband signal to parallel time domain signals. The size N FFT block570performs an FFT algorithm to generate N parallel frequency-domain signals. The parallel-to-serial block575converts the parallel frequency-domain signals to a sequence of modulated data symbols. The channel decoding and demodulation block580demodulates and decodes the modulated symbols to recover the original input data stream.

Each of the gNBs101-103may implement a transmit path400as illustrated inFIG.4for transmitting in the downlink to UEs111-116and may implement a receive path500as illustrated inFIG.5for receiving in the uplink from UEs111-116. Similarly, each of UEs111-116may implement the transmit path400for transmitting in the uplink to gNBs101-103and may implement the receive path500for receiving in the downlink from gNBs101-103.

Each of the components inFIG.4andFIG.5can be implemented using only hardware or using a combination of hardware and software/firmware. As a particular example, at least some of the components inFIG.4andFIG.5may be implemented in software, while other components may be implemented by configurable hardware or a mixture of software and configurable hardware. For instance, the FFT block570and the IFFT block415may be implemented as configurable software algorithms, where the value of size N may be modified according to the implementation.

Furthermore, although described as using FFT and IFFT, this is by way of illustration only and should not be construed to limit the scope of this disclosure. Other types of transforms, such as discrete Fourier transform (DFT) and inverse discrete Fourier transform (IDFT) functions, can be used. It may be appreciated that the value of the variable N may be any integer number (such as 1, 2, 3, 4, or the like) for DFT and IDFT functions, while the value of the variable N may be any integer number that is a power of two (such as 1, 2, 4, 8, 16, or the like) for FFT and IFFT functions.

AlthoughFIG.4andFIG.5illustrate examples of wireless transmit and receive paths, various changes may be made toFIG.4andFIG.5. For example, various components inFIG.4andFIG.5can be combined, further subdivided, or omitted and additional components can be added according to particular needs. Also,FIG.4andFIG.5are meant to illustrate examples of the types of transmit and receive paths that can be used in a wireless network. Any other suitable architectures can be used to support wireless communications in a wireless network.

The present disclosure focuses on the mechanism and methodology for indexing of SS/PBCH blocks (SSBs) on the unlicensed spectrum, which includes two sets of indexing, and the potential impact of the indexing of SS/PBCH blocks when utilizing the indexing methods on the unlicensed spectrum.
The present disclosure focuses on indexing of SS/PBCH blocks on unlicensed spectrum, wherein the unlicensed spectrum can refer to spectrum operated in a shared channel access manner. For an operation with shared spectrum channel access, an SS/PBCH block can be associated with at least one transmission opportunity, in order to resist the negative impact of listen-before-talk (LBT) on the channel access opportunities. An illustration of the multiple transmission opportunities for an SS/PBCH block in a window is shown inFIG.6, wherein the interval between neighboring allowed candidate SS/PBCH block locations in the window (e.g., Q in the figure) can be known to the UE (e.g., either by configuration or by fixed assumption, depending on the application scenario) and the candidate SS/PBCH blocks with the interval of Q are quasi co-located (QCLed).

FIG.6illustrates example multiple transmission opportunities for SS/PBCH block600according to embodiments of the present disclosure. An embodiment of the multiple transmission opportunities for SS/PBCH block600shown inFIG.6is for illustration only. One or more of the components illustrated inFIG.6can be implemented in specialized circuitry configured to perform the noted functions or one or more of the components can be implemented by one or more processors executing instructions to perform the noted functions.

In one embodiment, two sets of indexing for SS/PBCH blocks can be supported, wherein the first set of indexing represents the index of a candidate SS/PBCH block within a time period (e.g., a half frame), and the window is confined within the time period with a maximum duration equal to the time period; and the second set of indexing represents the index of SS/PBCH blocks within a group of Q SS/PBCH blocks, wherein no further QCL assumption is applicable within the group of Q SS/PBCH blocks.

In one example, the first set of indexing (e.g., each index in the set is termed as “first index of SS/PBCH block” in the disclosure) is denoted as I_SSB1, where 0 ≤ I_SSB1 ≤ L̄max − 1, and L̄max is the maximum number of candidate SS/PBCH blocks in the time period (e.g., a half frame). For one instance, for operation with shared spectrum channel access, L̄max = 20 for 30 kHz SCS, and L̄max = 10 for 15 kHz SCS. For another instance, for operation without shared spectrum channel access, L̄max = Lmax, where Lmax is the maximum number of SS/PBCH blocks to be transmitted in the time period (e.g., a half frame).

In another example, the second set of indexing (e.g., each index in the set is termed as “second index of SS/PBCH block” in the disclosure) is denoted as I_SSB2, where 0 ≤ I_SSB2 ≤ Q − 1, and Q is the QCL assumption parameter defined with a unit of the number of candidate SS/PBCH blocks. For instance, the value of Q is provided to the UE for a given cell, wherein the value could be indicated in system information for a serving cell and indicated in system information and/or an RRC parameter for a neighboring cell.

In one example, the first set of indexing refers to “index of candidate SS/PBCH block in a half frame” or “index of candidate SS/PBCH block per half frame” or “index of candidate SS/PBCH block,” or “candidate SS/PBCH block index,” and the second set of indexing refers to “SS/PBCH block index” or “index of SS/PBCH block,” or “index of QCLed SS/PBCH block group.”

In one example, the first index of SS/PBCH block and the second index of SS/PBCH block have a mapping relationship.
For one example, for an operation with shared channel spectrum, for a given first index I_SSB1, the corresponding second index can be determined as I_SSB2 = I_SSB1 mod Q, wherein Q is the QCL parameter. In one variant to this example, the corresponding second index can be determined as I_SSB2 = I_DMRS mod Q, wherein I_DMRS is the index of the DM-RS sequence of PBCH in the corresponding SS/PBCH block, determined by I_DMRS = I_SSB1 mod L_max, and L_max is the maximum number of SS/PBCH blocks per half frame (e.g., L_max = 8 for a carrier frequency range from 3 GHz to 7 GHz). Note that the variant to this example gives the same value of I_SSB2 when L_max is divisible by Q. For another example, for an operation without shared channel spectrum, for a given first index I_SSB1, the corresponding second index can be determined as I_SSB2 = I_SSB1, i.e., the two indices are the same.

FIG.7illustrates a flowchart of a method700for determining the first set of indexing of SS/PBCH blocks based on a given index from the second set of indexing of SS/PBCH blocks according to embodiments of the present disclosure. An embodiment of the method700shown inFIG.7is for illustration only. One or more of the components illustrated inFIG.7can be implemented in specialized circuitry configured to perform the noted functions or one or more of the components can be implemented by one or more processors executing instructions to perform the noted functions.

For yet another example, for an operation with shared channel spectrum, for a given second index I_SSB2, the corresponding first index can be determined as a set of indexes (or a subset, depending on whether further information on the down-selection is provided), wherein the set of indexes is given by I_SSB2 + k*Q, where k is from {0, 1, 2, . . . } such that I_SSB2 + k*Q ≤ L̄max − 1, wherein Q is the QCL parameter. For yet another example, for an operation without shared channel spectrum, for a given second index I_SSB2, the corresponding first index can be determined as I_SSB1 = I_SSB2, i.e., the two indices are the same.

In one example, the determined set of indexes for the first indexing of SS/PBCH blocks can be different based on a different value and/or use case of Q. For example, if Q is configured for a given cell (e.g., either a serving cell or a neighboring cell), then the determined set of indexes for the first indexing of SS/PBCH blocks can be separately determined for that given cell. For another example, Q is assumed as the configured QCL parameter from the serving cell, unless there is an explicit indication of the Q for a neighboring cell (e.g., for neighboring cell radio resource management (RRM) measurement).

In another example, for an operation with shared channel spectrum, for a given second index I_SSB2, the determined set of indexes for the first indexing of SS/PBCH blocks can be further down-selected if further information is provided to the UE. In one example, the further information on the down-selection can be the transmission window for SS/PBCH blocks for a serving cell, such that the set of indexes for the first indexing correspond to SS/PBCH blocks with candidate index I_SSB2 + k*Q confined within the transmission window, e.g., I_SSB2 + k*Q ≤ N_SSB − 1, wherein N_SSB corresponds to the number of candidate SS/PBCH blocks within the configured transmission window, and Q is indicated to the UE for the serving cell.
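Since the two mappings above are purely arithmetic, they can be summarized in a short Python sketch (the function names are illustrative only; the down-selection variants discussed next would further filter the returned set):

  def second_index(i_ssb1: int, q: int) -> int:
      """Map a candidate SS/PBCH block index (first index) to its QCL group
      index (second index): I_SSB2 = I_SSB1 mod Q."""
      return i_ssb1 % q

  def first_index_set(i_ssb2: int, q: int, l_max_bar: int, n_ssb: int | None = None) -> list:
      """Enumerate candidate indexes I_SSB2 + k*Q <= l_max_bar - 1 mapping to a
      given second index; if a window of n_ssb candidate positions is known,
      keep only candidates confined within it (I_SSB2 + k*Q <= n_ssb - 1)."""
      upper = l_max_bar if n_ssb is None else min(l_max_bar, n_ssb)
      return list(range(i_ssb2, upper, q))

  # Example: 30 kHz SCS with shared spectrum channel access (l_max_bar = 20)
  # and Q = 4: candidate index 13 belongs to QCL group 13 mod 4 = 1, whose full
  # candidate set is [1, 5, 9, 13, 17]; a 10-candidate window trims it to [1, 5, 9].
  assert second_index(13, 4) == 1
  assert first_index_set(1, 4, 20) == [1, 5, 9, 13, 17]
  assert first_index_set(1, 4, 20, n_ssb=10) == [1, 5, 9]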
In another example, the further information on the down-selection can be the measurement window for SS/PBCH blocks, such that the set of indexes for the first indexing correspond to candidate SS/PBCH blocks with index I_SSB2 + k*Q confined within the measurement window, e.g., I_SSB2 + k*Q ≤ N_SSB′ − 1, wherein N_SSB′ is the number of candidate SS/PBCH blocks within the configured measurement window, and Q is indicated to the UE for the given cell to be measured.

In yet another example, the further information on the down-selection can be the channel occupancy indicated to the UE (e.g., by a group common physical downlink control channel (GC-PDCCH)), such that the set of indexes for the first indexing correspond to SS/PBCH blocks with candidate index I_SSB2 + k*Q confined within the channel occupancy.

In yet another example, the further information on the down-selection can be an indication of whether candidate SS/PBCH block(s) are transmitted or not (e.g., by a DCI format), such that the set of indexes for the first indexing correspond to candidate SS/PBCH blocks with index I_SSB2 + k*Q indicated to be transmitted. In yet another example, the further information on the down-selection can be an indication of whether candidate SS/PBCH block(s) are transmitted or not (e.g., by a DCI format), such that the set of indexes for the first indexing correspond to candidate SS/PBCH blocks with index I_SSB2 + k*Q not indicated to be not transmitted.

In yet another example, the set of indexes for the first indexing can be determined based on a combination of the above examples, when the corresponding further information on the down-selection is provided to the UE. An example flowchart for determining the first set of indexing of SS/PBCH blocks based on a given index from the second set of indexing of SS/PBCH blocks is shown inFIG.7.

FIG.8illustrates a flowchart of a method800for determining the second set of indexing of SS/PBCH blocks based on a given index from the first set of indexing of SS/PBCH blocks according to embodiments of the present disclosure. An embodiment of the method800shown inFIG.8is for illustration only. One or more of the components illustrated inFIG.8can be implemented in specialized circuitry configured to perform the noted functions or one or more of the components can be implemented by one or more processors executing instructions to perform the noted functions.

In one embodiment, the first set of indexing for SS/PBCH blocks can be utilized for timing determination, and the corresponding signal/channel generation. For one example, the 3 least significant bits (LSBs) of the first index of SS/PBCH block can be carried by the DM-RS sequence of PBCH in the corresponding SS/PBCH block, wherein the first index of SS/PBCH block is the candidate SS/PBCH block index. For example, the UE may assume the reference-signal sequence r(m) for an SS/PBCH block is defined by

  r(m) = (1/√2)·(1 − 2·c(2m)) + j·(1/√2)·(1 − 2·c(2m+1))

where the scrambling sequence generator may be initialized at the start of each SS/PBCH block occasion with

  c_init = 2^11·(ī_SSB + 1)·(⌊N_ID^cell/4⌋ + 1) + 2^6·(ī_SSB + 1) + (N_ID^cell mod 4)

where ī_SSB is the 3 LSBs of the first index of SS/PBCH block (i.e., the candidate SS/PBCH block index) for L̄max ≥ 8 (e.g., for operation with shared spectrum channel access, L̄max = 10 or 20), and a combination of the half frame indicator and the 2 LSBs of the first index of SS/PBCH block (i.e., the candidate SS/PBCH block index) for L̄max = 4.
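As a numeric illustration of the initialization formula above, the following Python sketch computes c_init from the 3 LSBs of the candidate SS/PBCH block index and the physical cell ID; it is a direct transcription of the formula, with an illustrative function name.

  def pbch_dmrs_c_init(i_ssb_lsb3: int, n_id_cell: int) -> int:
      """c_init = 2^11*(i_ssb+1)*(floor(N_ID_cell/4)+1) + 2^6*(i_ssb+1)
      + (N_ID_cell mod 4), with i_ssb the 3 LSBs of the candidate index."""
      assert 0 <= i_ssb_lsb3 <= 7 and 0 <= n_id_cell <= 1007
      return ((1 << 11) * (i_ssb_lsb3 + 1) * (n_id_cell // 4 + 1)
              + (1 << 6) * (i_ssb_lsb3 + 1)
              + n_id_cell % 4)

  # Example: candidate index 13 has 3 LSBs 13 mod 8 = 5; with N_ID_cell = 123,
  # c_init = 2048*6*31 + 64*6 + 3 = 381315.
  assert pbch_dmrs_c_init(13 % 8, 123) == 381315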
For another example, when L̄max ≥ 8, the bits other than the 3 LSBs of the first index of SS/PBCH block can be carried by the payload of PBCH in the corresponding SS/PBCH block, wherein the first index of SS/PBCH block is the candidate SS/PBCH block index. TABLE 1 shows the generation of the payload of PBCH.

TABLE 1. The generation of the payload of PBCH

  if L̄max = 10
      ā_(Ā+6) is reserved.
      ā_(Ā+7) is the MSB of the first index of SS/PBCH block, wherein the first index of SS/PBCH block is the candidate SS/PBCH block index.
  else if L̄max = 20
      ā_(Ā+6), ā_(Ā+7) are the 2 MSBs of the first index of SS/PBCH block, wherein the first index of SS/PBCH block is the candidate SS/PBCH block index.
  else if L̄max = 64
      ā_(Ā+5), ā_(Ā+6), ā_(Ā+7) are the 3 MSBs of the first index of SS/PBCH block, wherein the first index of SS/PBCH block is the candidate SS/PBCH block index.
  end if

For yet another example, the scrambling sequence of PBCH, after rate matching and before modulation, is based on the 3 LSBs of the first index of SS/PBCH block (e.g., for operation with shared spectrum channel access, L̄max = 10 or 20), wherein the first index of SS/PBCH block is the candidate SS/PBCH block index. For example, the UE may assume the block of bits b(0), . . . , b(M_bit − 1), where M_bit is the number of bits transmitted on the physical broadcast channel, are scrambled prior to modulation, resulting in a block of scrambled bits b̃(0), . . . , b̃(M_bit − 1) according to

  b̃(i) = (b(i) + c(i + ν·M_bit)) mod 2

where the scrambling sequence may be initialized with c_init = N_ID^cell at the start of each SS/PBCH block, and ν is the 3 LSBs of the first index of SS/PBCH block when L̄max ≥ 8 (e.g., for operation with shared spectrum channel access, L̄max = 10 or 20), and the 2 LSBs of the first index of SS/PBCH block when L̄max = 4, wherein the first index of SS/PBCH block is the candidate SS/PBCH block index.

For yet another example, for an operation with shared spectrum channel access, for each SS/PBCH block with the first index of SS/PBCH block, wherein the first index of SS/PBCH block is the candidate SS/PBCH block index, there can be associated slot(s) containing Type0-PDCCH monitoring occasions. For example, for a first index of SS/PBCH block (i.e., candidate SS/PBCH block index) ī, where 0 ≤ ī ≤ L̄max − 1 (e.g., for operation with shared spectrum channel access, L̄max = 10 or 20), two consecutive slots starting from slot n0 include the associated Type0-PDCCH monitoring occasions. The UE determines the index of slot n0 as

  n0 = (O·2^μ + ⌊ī·M⌋) mod N_slot^frame,μ

that is in a frame with system frame number (SFN) SFN_C satisfying SFN_C mod 2 = 0 if ⌊(O·2^μ + ⌊ī·M⌋)/N_slot^frame,μ⌋ mod 2 = 0, or in a frame with SFN_C satisfying SFN_C mod 2 = 1 if ⌊(O·2^μ + ⌊ī·M⌋)/N_slot^frame,μ⌋ mod 2 = 1.
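Read literally, the slot determination above reduces to a few integer operations, as the following Python sketch shows (O, M, the numerology μ, and the slots-per-frame count N_slot^frame,μ are taken as inputs; the function name and the example parameter values are illustrative assumptions):

  def type0_pdcch_slot(i_bar: int, o: int, m: float, mu: int, n_slot_frame: int) -> tuple:
      """For candidate SS/PBCH block index i_bar, return (n0, sfn_parity):
      the starting slot of the two consecutive Type0-PDCCH monitoring slots
      and the required parity of the system frame number SFN_C."""
      x = o * (2 ** mu) + int(i_bar * m)      # O*2^mu + floor(i_bar*M)
      n0 = x % n_slot_frame                   # slot index within the frame
      sfn_parity = (x // n_slot_frame) % 2    # 0: even SFN_C; 1: odd SFN_C
      return n0, sfn_parity

  # Example with mu = 1 (30 kHz SCS, 20 slots per frame), O = 0, M = 2:
  # candidate index 3 maps to slot 6 in even frames, while candidate index 13
  # wraps to slot 6 of the following (odd) frame.
  assert type0_pdcch_slot(3, o=0, m=2, mu=1, n_slot_frame=20) == (6, 0)
  assert type0_pdcch_slot(13, o=0, m=2, mu=1, n_slot_frame=20) == (6, 1)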
TABLE 2
Scrambling of PBCH payload
while i < A
    if a_i corresponds to any one of the bits belonging to the first index of SS/PBCH block (i.e., candidate SS/PBCH block index), the half frame index, and the 2nd and 3rd least significant bits of the system frame number
        s_i = 0;
    else
        s_i = c(j + νM);
        j = j + 1;
    end if
    i = i + 1;
end while

The scrambling sequence c(i) is initialized with c_init = N_ID^cell at the start of each SFN satisfying mod(SFN, 8) = 0; and M = A − 3 for L_max ≤ 8 (e.g., L_max = 4 or L_max = 8), M = A − 4 for 8 < L_max ≤ 16 (e.g., L_max = 10, which can be for operation with shared spectrum channel access), M = A − 5 for 16 < L_max ≤ 32 (e.g., L_max = 20, which can be for operation with shared spectrum channel access), and M = A − 6 for L_max = 64, where L_max is the number of candidate SS/PBCH blocks in a half frame.

In one embodiment, the second set of indexing for SS/PBCH blocks can be utilized for procedures related to the QCL assumption, and/or for determining the candidate SS/PBCH blocks corresponding to the second index of SS/PBCH block, wherein the candidate SS/PBCH blocks can be used for determining the potential transmission of SS/PBCH blocks, in order to perform at least one of PDSCH resource allocation, RACH occasion (RO) validation, PDCCH validation, or physical uplink control channel (PUCCH) validation.

For one example, the second index of SS/PBCH block can be utilized for determining the resources associated with a physical random access channel (PRACH). For one example, the PRACH occasions are mapped consecutively per corresponding second index of SS/PBCH block, wherein the second index of SS/PBCH block can be an SS/PBCH block index. The indexing of the PRACH occasion indicated by the mask index value is reset per mapping cycle of consecutive PRACH occasions per the second index of SS/PBCH block. The UE selects for a PRACH transmission the PRACH occasion indicated by the PRACH mask index value for the indicated second index of SS/PBCH block in the first available mapping cycle. For another example, for a PRACH transmission initiated by a PDCCH order, the field in DCI format 1_0, utilized for indicating the SS/PBCH blocks that may be used to determine the RACH occasion for the PRACH transmission, can refer to the second index of SS/PBCH block, wherein the second index of SS/PBCH block can be an SS/PBCH block index. For yet another example, for a PRACH transmission triggered by higher layers, the index of resources provided by ssb-ResourceList can refer to the second index of SS/PBCH block, wherein the second index of SS/PBCH block can be an SS/PBCH block index.

In one example, the indicated second index of SS/PBCH block could correspond to at least one candidate SS/PBCH block, wherein the set of first indexes of the at least one candidate SS/PBCH block can be determined according to the approaches specified in this disclosure. All of the at least one candidate SS/PBCH blocks are associated with the PRACH occasion(s) specified in this example. In another example, when determining the set of first indexes of the at least one candidate SS/PBCH block, the value of the QCL parameter (Q) can be the one configured for a serving cell. In yet another example, the second index of SS/PBCH block can be utilized for determining the resources for radio link monitoring, wherein the second index of SS/PBCH block can be an SS/PBCH block index.
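For illustration only, the scrambling procedure of TABLE 2 can be sketched in non-normative Python as follows; the predicate is_unscrambled(i) stands in for "a_i belongs to the candidate SS/PBCH block index bits, the half frame index, or the 2nd/3rd LSBs of the SFN", whose exact positions depend on the payload layout and are assumed given:

    # Sketch: PBCH payload scrambling per TABLE 2.
    def scramble_pbch_payload(a, c, nu, M, is_unscrambled):
        A = len(a)
        s = [0] * A
        i, j = 0, 0
        while i < A:
            if is_unscrambled(i):
                s[i] = 0              # timing-related bits are left unscrambled
            else:
                s[i] = c[j + nu * M]
                j += 1
            i += 1
        return [(a[k] + s[k]) % 2 for k in range(A)]

    # Toy example: an 8-bit payload whose last two bits carry timing information.
    out = scramble_pbch_payload([1, 0, 1, 1, 0, 0, 1, 0], c=[0, 1] * 32,
                                nu=1, M=4, is_unscrambled=lambda i: i >= 6)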
For example, for operation with shared spectrum channel access, the UE is expected to perform radio link management (RLM) using the associated SS/PBCH block when the second index of SS/PBCH block is provided by RadioLinkMonitoringRS, wherein the second index of SS/PBCH block can be an SS/PBCH block index. In one example, the provided second index of SS/PBCH block could correspond to at least one candidate SS/PBCH block, wherein the set of first indexes of the at least one candidate SS/PBCH block can be determined according to the approaches specified in this disclosure. The UE can perform RLM based on the at least one candidate SS/PBCH block. For another example, for operation with shared spectrum channel access, when a UE is provided a second index of SS/PBCH block (i.e., SS/PBCH block index) by ssb-Index, the UE is expected to perform radio link monitoring using SS/PBCH block(s) in a discovery burst transmission window and with first index(es) of SS/PBCH block (i.e., candidate SS/PBCH block indexes) corresponding to the second index of SS/PBCH block (i.e., SS/PBCH block index) provided by ssb-Index. In another example, when determining the set of first indexes of the at least one candidate SS/PBCH block, the value of the QCL parameter (Q) can be the one configured for a serving cell.

For yet another approach, the second index of SS/PBCH block can be utilized for link recovery procedures, wherein the second index of SS/PBCH block can be an SS/PBCH block index. For example, a UE can be provided, for each BWP of a serving cell, a set q0 of periodic CSI-RS resource configuration indexes by failureDetectionResources and a set q1 of periodic CSI-RS resource configuration indexes and/or second indexes of SS/PBCH blocks by candidateBeamRSList for radio link quality measurements on the BWP of the serving cell, wherein the second index of SS/PBCH block can be an SS/PBCH block index. In one example, the second index of SS/PBCH block could correspond to at least one candidate SS/PBCH block, wherein the set of first indexes of the at least one candidate SS/PBCH block can be determined according to the approaches specified in this disclosure. The UE can perform link recovery based on the at least one candidate SS/PBCH block. In another example, when determining the set of first indexes of the at least one candidate SS/PBCH block, the value of the QCL parameter (Q) can be the one configured for a serving cell.

For yet another example, the second index of SS/PBCH block can be utilized for indexing the RS for UL power control, wherein the second index of SS/PBCH block can be an SS/PBCH block index. For example, for any of PUSCH, PUCCH, or sounding reference signal (SRS), the set of RS resource indexes can include one or both of a set of second indexes of SS/PBCH block, each provided by ssb-Index when a value of a corresponding RS ID maps to a second index of SS/PBCH block, wherein the second index of SS/PBCH block can be an SS/PBCH block index. In one example, the second index of SS/PBCH block could correspond to at least one candidate SS/PBCH block, wherein the set of first indexes of the at least one candidate SS/PBCH block can be determined according to the approaches specified in this disclosure. The UE can perform UL power control based on the at least one candidate SS/PBCH block. In another example, when determining the set of first indexes of the at least one candidate SS/PBCH block, the value of the QCL parameter (Q) can be the one configured for a serving cell.
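For illustration only, the mapping used repeatedly above, from a configured second index (SS/PBCH block index, e.g., from ssb-Index) to the corresponding candidate SS/PBCH block indexes within a window, can be sketched in non-normative Python; the window length and Q are assumed known:

    # Sketch: all first (candidate) indexes mapping to a given second index.
    def candidate_indexes(second_index, Q, num_candidates):
        return [i for i in range(num_candidates) if i % Q == second_index]

    # Example: Q = 4 and 20 candidate positions in the window.
    print(candidate_indexes(second_index=1, Q=4, num_candidates=20))
    # -> [1, 5, 9, 13, 17]; RLM or link recovery can then use these occasions.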
For yet another example, the second index of SS/PBCH block can be utilized for indexing the RS for UL spatial relation information, wherein the second index of SS/PBCH block can be an SS/PBCH block index. For example, for any of PUSCH, PUCCH, or SRS, the set of RS resource indexes associated with the configuration of the spatial setting for UL transmission can be the second index of SS/PBCH block. In one example, the second index of SS/PBCH block could correspond to at least one candidate SS/PBCH block, wherein the set of first indexes of the at least one candidate SS/PBCH block can be determined according to the approaches specified in this disclosure. The UE can determine spatial relation information based on the at least one candidate SS/PBCH block. In another example, when determining the set of first indexes of the at least one candidate SS/PBCH block, the value of the QCL parameter (Q) can be the one configured for a serving cell.

In yet another example, the second index of SS/PBCH block can be utilized for beam failure recovery, wherein the second index of SS/PBCH block can be an SS/PBCH block index. For example, when configuring the index of the SS/PBCH block resource for beam failure recovery purposes, e.g., BFR-SSB-Resource, the second index of SS/PBCH block can be used, wherein the second index of SS/PBCH block can be an SS/PBCH block index. In one example, the second index of SS/PBCH block could correspond to at least one candidate SS/PBCH block, wherein the set of first indexes of the at least one candidate SS/PBCH block can be determined according to the approaches specified in this disclosure. In another example, when determining the set of first indexes of the at least one candidate SS/PBCH block, the value of the QCL parameter (Q) can be the one configured for a serving cell.

In yet another example, the second index of SS/PBCH block can be utilized for determining the QCL assumption using an SS/PBCH block as the source RS, wherein the second index of SS/PBCH block can be an SS/PBCH block index. For one example, if the RS configured in TCI-State for determining the QCL assumption is an SS/PBCH block, the second index of SS/PBCH block can be utilized, wherein the second index of SS/PBCH block can be an SS/PBCH block index. For another example, if the RS configured for SRS for determining the QCL assumption is an SS/PBCH block, the second index of SS/PBCH block can be utilized, wherein the second index of SS/PBCH block can be an SS/PBCH block index. For yet another example, if the RS configured for CSI-RS, for measurement purposes, for determining the QCL assumption is an SS/PBCH block, the second index of SS/PBCH block can be utilized, wherein the second index of SS/PBCH block can be an SS/PBCH block index. In one example, the second index of SS/PBCH block could correspond to at least one candidate SS/PBCH block, wherein the set of first indexes of the at least one candidate SS/PBCH block can be determined according to the approaches specified in this disclosure. In another example, when determining the set of first indexes of the at least one candidate SS/PBCH block, the value of the QCL parameter (Q) can be the one configured for a serving cell.

In yet another example, the second index of SS/PBCH block can be utilized for indexing the RS for RRM measurement, wherein the second index of SS/PBCH block can be an SS/PBCH block index. For one example, when determining the ssb-Index-RSRP, the second index of SS/PBCH block can be utilized, wherein the second index of SS/PBCH block can be an SS/PBCH block index.
For another example, when reporting the measurement results for SS/PBCH block, e.g., in ResultsPerSSB-Index, the second index of SS/PBCH block can be utilized. For yet another example, for determining the SS/PBCH block to be measured, e.g., ssb-ToMeasure, associated with an SSB-based measurement timing configuration (SMTC), the second index of SS/PBCH block can be utilized. In one example, the indicated second index of SS/PBCH block could correspond to at least one candidate SS/PBCH block, wherein the set of first indexes of the at least one candidate SS/PBCH block can be determined according to the approaches specified in this disclosure. For one instance, the i-th bit from the left of the bitmap of ssb-ToMeasure indicates a second index of SS/PBCH block (i.e., SS/PBCH block index) of i − 1. For each indicated second index of SS/PBCH block (i.e., SS/PBCH block index) provided by ssb-ToMeasure, the UE can derive a set of SS/PBCH blocks within the associated SMTC window and with the first index of SS/PBCH block (i.e., candidate SS/PBCH block index) corresponding to the second index of SS/PBCH block (i.e., SS/PBCH block index); a bitmap expansion sketch is given below. The UE can perform RRM measurement according to all the derived sets of SS/PBCH blocks corresponding to all the indicated second index(es) of SS/PBCH block (i.e., SS/PBCH block index) provided by ssb-ToMeasure. In another example, when determining the set of first indexes of the at least one candidate SS/PBCH block, the value of the QCL parameter (Q) can be the one configured for a serving cell for serving cell measurement, and the one configured for a neighboring cell for neighboring cell measurement.

For yet another example, the second index of SS/PBCH block can be utilized for indexing potentially transmitted SS/PBCH blocks in a burst, wherein the second index of SS/PBCH block can be an SS/PBCH block index. For example, the indexing associated with ssb-PositionsInBurst in system information block 1 (SIB1) and/or ssb-PositionsInBurst in ServingCellConfigCommon can refer to the second index of SS/PBCH block (i.e., SS/PBCH block index), which can be further utilized for determining the monitoring behavior of a PDCCH candidate, and/or RO validation, and/or resource allocation for PDSCH, and/or validation for UL signals/channels. In one example, the second index of SS/PBCH block (i.e., SS/PBCH block index) could correspond to at least one candidate SS/PBCH block, wherein the set of first indexes of the at least one candidate SS/PBCH block can be determined according to the approaches specified in this disclosure. The UE can index the potentially transmitted SS/PBCH blocks in a burst based on the at least one candidate SS/PBCH block. For one instance of this example, the i-th bit from the left of the bitmap of ssb-PositionsInBurst indicates a second index of SS/PBCH block (i.e., SS/PBCH block index) of i − 1, and a UE can further derive that SS/PBCH blocks within the transmission window and with a first index of SS/PBCH block (i.e., candidate SS/PBCH block index) corresponding to a second index of SS/PBCH block (i.e., SS/PBCH block index) indicated by ssb-PositionsInBurst can be potentially transmitted. In another example, when determining the set of first indexes of the at least one candidate SS/PBCH block, the value of the QCL parameter (Q) can be the one configured for a serving cell (e.g., provided by a higher layer parameter for a serving cell).
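For illustration only, the bitmap expansion shared by the ssb-ToMeasure and ssb-PositionsInBurst examples above can be sketched in non-normative Python; the bitmap is taken as a string of '0'/'1' characters and the window length and Q are assumed known:

    # Sketch: expand a left-to-right SSB bitmap into candidate index sets.
    def expand_ssb_bitmap(bitmap, Q, window_candidates):
        targets = {}
        for pos, bit in enumerate(bitmap):   # pos 0 is the leftmost bit
            if bit == '1':
                second_index = pos           # i-th bit from left -> index i - 1
                targets[second_index] = [c for c in range(window_candidates)
                                         if c % Q == second_index]
        return targets

    # Example: blocks 0 and 2 indicated, Q = 4, 20 candidates in the window.
    print(expand_ssb_bitmap("1010", Q=4, window_candidates=20))
    # -> {0: [0, 4, 8, 12, 16], 2: [2, 6, 10, 14, 18]}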
In yet another example, for an operation with shared spectrum channel access, an SS/PBCH block symbol is a symbol corresponding to an SS/PBCH block in a discovery burst transmission window and with the first index of SS/PBCH block (i.e., candidate SS/PBCH block index) corresponding to the second index of SS/PBCH block (i.e., SS/PBCH block index) indicated to a UE by ssb-PositionsInBurst in SIB1, or by ssb-PositionsInBurst in ServingCellConfigCommon.

In yet another example, for an operation with shared spectrum channel access, if the UE has received ssb-PositionsInBurst in SIB1 and has not received ssb-PositionsInBurst in ServingCellConfigCommon for a serving cell, and if the UE does not monitor PDCCH candidates in a Type0-PDCCH CSS set, and at least one resource element (RE) for a PDCCH candidate overlaps with at least one RE of an SS/PBCH block within a discovery burst transmission window and with the first index of SS/PBCH block (i.e., candidate SS/PBCH block index) corresponding to the second index of SS/PBCH block (i.e., SS/PBCH block index) provided by ssb-PositionsInBurst in SIB1, the UE is not required to monitor the PDCCH candidate.

In yet another example, for an operation with shared spectrum channel access, if a UE has received ssb-PositionsInBurst in ServingCellConfigCommon for a serving cell, and if the UE does not monitor PDCCH candidates in a Type0-PDCCH CSS set, and at least one RE for a PDCCH candidate overlaps with at least one RE of an SS/PBCH block within a discovery burst transmission window and with the first index of SS/PBCH block (i.e., candidate SS/PBCH block index) corresponding to the second index of SS/PBCH block (i.e., SS/PBCH block index) provided by ssb-PositionsInBurst in ServingCellConfigCommon, the UE is not required to monitor the PDCCH candidate.

In yet another example, for an operation on a single carrier in unpaired spectrum and with shared spectrum channel access, for a set of symbols of a slot that corresponds to SS/PBCH block(s) within a discovery burst transmission window and with the first index of SS/PBCH block (i.e., candidate SS/PBCH block index) corresponding to the second index of SS/PBCH block (i.e., SS/PBCH block index) indicated to a UE by ssb-PositionsInBurst in SIB1 or ssb-PositionsInBurst in ServingCellConfigCommon, for reception of SS/PBCH blocks, the UE does not transmit PUSCH, PUCCH, or PRACH in the slot if a transmission would overlap with any symbol from the set of symbols, and the UE does not transmit SRS in the set of symbols of the slot. The UE does not expect the set of symbols of the slot to be indicated as uplink by tdd-UL-DL-ConfigurationCommon, or tdd-UL-DL-ConfigurationDedicated, when provided to the UE.

In yet another example, for an operation with shared spectrum channel access, for a set of symbols of a slot that corresponds to SS/PBCH block(s) within a discovery burst transmission window and with the first index of SS/PBCH block (i.e., candidate SS/PBCH block index) corresponding to the second index of SS/PBCH block (i.e., SS/PBCH block index) indicated to a UE by ssb-PositionsInBurst in SIB1, or by ssb-PositionsInBurst in ServingCellConfigCommon, for reception of SS/PBCH blocks, the UE does not expect to detect a DCI format 2_0 with an SFI-index field value indicating the set of symbols of the slot as uplink.
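For illustration only, the symbol-overlap validations above can be sketched in non-normative Python; the RE-level check is reduced here to a symbol-level check, the mapping first_symbol_of(candidate) from candidate index to starting symbol is an assumed input (it depends on the applicable SS/PBCH pattern), and an SS/PBCH block is taken to span 4 symbols:

    # Sketch: symbols occupied by potentially transmitted SS/PBCH blocks.
    def potential_ssb_symbols(indicated_second_indexes, Q, window_candidates,
                              first_symbol_of):
        symbols = set()
        for c in range(window_candidates):
            if c % Q in indicated_second_indexes:
                start = first_symbol_of(c)          # pattern-dependent mapping
                symbols.update(range(start, start + 4))
        return symbols

    # Sketch: a PDCCH candidate (or UL transmission) colliding with these
    # symbols need not be monitored (or is dropped) per the examples above.
    def collides(scheduled_symbols, ssb_symbols):
        return bool(set(scheduled_symbols) & ssb_symbols)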
In yet another example, for an operation with shared spectrum channel access, when receiving the PDSCH scheduled with system information-radio network temporary identifier (SI-RNTI) and the system information indicator in DCI set to 1, random access-RNTI (RA-RNTI), paging-RNTI (P-RNTI), or temporary cell-RNTI (TC-RNTI), the UE assumes potential SS/PBCH block transmission according to ssb-PositionsInBurst, and if the PDSCH resource allocation overlaps with physical resource blocks (PRBs) containing potential SS/PBCH block transmission resources, the UE may assume that the PRBs containing potential SS/PBCH block transmission resources are not available for PDSCH in the OFDM symbols where an SS/PBCH block is potentially transmitted. The potential SS/PBCH block transmission is derived from the SS/PBCH block(s) within a discovery burst transmission window and with the first index of SS/PBCH block (i.e., candidate SS/PBCH block index) corresponding to the second index of SS/PBCH block (i.e., SS/PBCH block index) indicated to a UE by ssb-PositionsInBurst.

In yet another example, for an operation with shared spectrum channel access, when receiving a PDSCH scheduled by PDCCH with cyclic redundancy check (CRC) scrambled by C-RNTI, modulation coding scheme cell-RNTI (MCS-C-RNTI), or configured scheduling-RNTI (CS-RNTI), or PDSCHs with semi-persistent scheduling (SPS), the REs corresponding to the configured or dynamically indicated resources are not available for PDSCH. Furthermore, the UE assumes potential SS/PBCH block transmission according to ssb-PositionsInBurst, and if the PDSCH resource allocation overlaps with PRBs containing potential SS/PBCH block transmission resources, the UE may assume that the PRBs containing potential SS/PBCH block transmission resources are not available for PDSCH in the OFDM symbols where an SS/PBCH block is potentially transmitted. The potential SS/PBCH block transmission is derived from the SS/PBCH block(s) within a discovery burst transmission window and with the first index of SS/PBCH block (i.e., candidate SS/PBCH block index) corresponding to the second index of SS/PBCH block (i.e., SS/PBCH block index) indicated to a UE by ssb-PositionsInBurst.

The present disclosure focuses on the mechanism and methodology for radio link monitoring on an unlicensed spectrum. The details of this disclosure include the following components: candidate RS location determination for RLM; an in-sync (IS) and out-of-sync (OOS) (IS/OOS) evaluation rule; and a UE procedure for RLM. The present disclosure focuses on radio link monitoring on unlicensed spectrum, wherein the unlicensed spectrum can refer to spectrum operated in a shared channel access manner.

In one embodiment, for a serving cell, a UE can be configured with at least one index of a resource for radio link monitoring (e.g., denoted as RLM-RS), and the UE can determine the resources for RLM based on the index of the resource. In one example, for a serving cell, a UE can be configured with at least one index of an RLM-RS resource (e.g., from the higher layer parameter RadioLinkMonitoringRS), wherein the RLM-RS resource could be, for example, either an SS/PBCH block or a CSI-RS, and the UE can be configured with a QCL information parameter (e.g., from a master information block (MIB), or SIBx, or a higher layer parameter); then the UE can determine a set of resources for RLM based on the configured at least one index of the RLM-RS resource as well as the QCL information parameter.
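For illustration only, the PDSCH availability assumption above can be sketched in non-normative Python; the potential SS/PBCH placements are assumed given as (prb_start, prb_end, symbol set) tuples derived from ssb-PositionsInBurst, and the names are illustrative:

    # Sketch: (PRB, symbol) pairs of a PDSCH allocation that remain usable
    # after excluding potential SS/PBCH block transmission resources.
    def available_res(pdsch_prbs, pdsch_symbols, potential_ssbs):
        for prb in sorted(pdsch_prbs):
            for sym in sorted(pdsch_symbols):
                blocked = any(lo <= prb <= hi and sym in syms
                              for lo, hi, syms in potential_ssbs)
                if not blocked:
                    yield prb, sym

    # Example: 10 PRBs over symbols 0..13 around one potential SS/PBCH block
    # occupying PRBs 4..7 in symbols 2..5.
    usable = list(available_res(set(range(10)), set(range(14)),
                                [(4, 7, {2, 3, 4, 5})]))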
For this example, denote the time-domain location of one configured RLM-RS resource as i, and denote the configured QCL information parameter as Q; then the set of time-domain locations of resources for RLM is determined as i + Q*k, wherein k = 0, 1, . . . , such that the corresponding time-domain location of the RLM-RS is within the RLM measurement window. In one example, the RLM measurement window is the same as the transmission window for discovery signals and channels (DSCH), wherein the DSCH includes SS/PBCH blocks and/or configurable CSI-RS.

FIG. 9 illustrates an example determination of RLM resources 900 based on CO according to embodiments of the present disclosure. An embodiment of the determination of RLM resources 900 shown in FIG. 9 is for illustration only. One or more of the components illustrated in FIG. 9 can be implemented in specialized circuitry configured to perform the noted functions, or one or more of the components can be implemented by one or more processors executing instructions to perform the noted functions.

For one example, when the configured RLM-RS is an SS/PBCH block, denote the index of one configured SS/PBCH block for RLM as i_SSB, and denote the configured QCL information parameter as Q_SSB; then the set of resources for RLM is determined as i_SSB + Q_SSB*k, wherein k = 0, 1, . . . , such that the SS/PBCH block location with index i_SSB + Q_SSB*k is within the RLM measurement window. For an SS/PBCH block with candidate location index j_SSB in the RLM measurement window, the UE determines the SS/PBCH block as part of the resources for RLM if (j_SSB mod Q_SSB) = i_SSB.

For another example, when the configured RLM-RS is a CSI-RS, denote the time-domain location of one configured CSI-RS resource for RLM as i_CSI, and denote the configured QCL information parameter as Q_CSI; then the set of time-domain locations of the resources for RLM is determined as i_CSI + Q_CSI*k, wherein k = 0, 1, . . . , such that the corresponding time-domain location of the CSI-RS is within the RLM measurement window.

In one example, the configured QCL parameter for an RLM-RS resource as an SS/PBCH block (Q_SSB) and the configured QCL parameter for an RLM-RS resource as a CSI-RS (Q_CSI) can be separately configured. In one example, the configured QCL parameter for an RLM-RS resource as a CSI-RS (Q_CSI) can be determined based on the configured QCL parameter for an RLM-RS resource as an SS/PBCH block (Q_SSB). For example, Q_CSI = Q_SSB/2, in units of slots. As one further consideration, this one-to-one mapping only applies when the CSI-RS is QCLed with an SS/PBCH block.

In one example, if the UE is configured with channel occupancy information (CO) from the serving cell (e.g., from GC-PDCCH), the UE can further down-select the value of k, such that the time-domain locations of resources for RLM with index i + Q*k are within the RLM measurement window and within the CO at the same time. For example, when the configured RLM-RS is an SS/PBCH block, for an SS/PBCH block with candidate location index j_SSB in the RLM measurement window and within the CO, the UE determines the SS/PBCH block as part of the resources for RLM if (j_SSB mod Q_SSB) = i_SSB. In one example, all the symbols corresponding to the SS/PBCH blocks with index i_SSB + Q_SSB*k are within the RLM measurement window and within the CO at the same time. In another embodiment, all the symbols containing the SSS (e.g., the third symbol in the corresponding SS/PBCH block) of the SS/PBCH blocks with index i_SSB + Q_SSB*k are within the RLM measurement window and within the CO at the same time.
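For illustration only, the RLM resource determination just described can be sketched in non-normative Python; the window length and optional CO set are assumed known, and the names are illustrative:

    # Sketch: RLM occasions i + Q*k within the window, optionally within CO.
    def rlm_occasions(i, Q, window_len, co=None):
        occasions = []
        k = 0
        while i + Q * k <= window_len - 1:
            idx = i + Q * k
            if co is None or idx in co:     # optional CO down-selection
                occasions.append(idx)
            k += 1
        return occasions

    # Equivalent membership test for a candidate location j in the window:
    def is_rlm_resource(j, i, Q):
        return j % Q == i                   # (j mod Q) == i

    print(rlm_occasions(i=1, Q=4, window_len=20, co={1, 5, 9}))  # -> [1, 5, 9]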
FIG. 10 illustrates an example determination of RLM resources 1000 based on a bitmap according to embodiments of the present disclosure. An embodiment of the determination of RLM resources 1000 shown in FIG. 10 is for illustration only. One or more of the components illustrated in FIG. 10 can be implemented in specialized circuitry configured to perform the noted functions, or one or more of the components can be implemented by one or more processors executing instructions to perform the noted functions.

In one example, for a serving cell, a UE can be configured with at least one index of an RLM-RS resource using a bitmap; then the UE can determine a set of resources for RLM based on the configured bitmap. In another example, if the UE is configured with CO from the serving cell (e.g., from GC-PDCCH), the UE can further down-select the time-domain locations of the RLM-RS resources corresponding to the bitmap such that the corresponding time-domain locations of the RLM-RS resources are within the RLM measurement window and within the CO at the same time. In yet another example, when the configured RLM-RS is an SS/PBCH block, all the symbols corresponding to the SS/PBCH blocks are within the RLM measurement window and within the CO at the same time. In yet another example, when the configured RLM-RS is an SS/PBCH block, all the symbols containing a secondary synchronization signal (SSS) (e.g., the third symbol in the corresponding SS/PBCH block) of the SS/PBCH blocks are within the RLM measurement window and within the CO at the same time.

In one example, RLM-RS(s) within the configured RLM measurement window are used for IS/OOS evaluation. In one example, a UE does not expect to be configured with an RLM-RS outside the RLM measurement window. In another example, there could be RLM-RS(s) outside the configured RLM measurement window, and the RLM-RS(s) outside the configured RLM measurement window can be used for IS evaluation, but not for OOS evaluation. In yet another example, if the UE is configured with CO from the serving cell (e.g., from GC-PDCCH), RLM-RS(s) within the configured RLM measurement window and within the CO at the same time are used for IS/OOS evaluation. In one example, a UE does not expect to be configured with an RLM-RS outside the RLM measurement window or outside a configured CO. In another example, there could be RLM-RS(s) outside the configured RLM measurement window or outside a configured CO, and the RLM-RS(s) outside the configured RLM measurement window or outside a configured CO can be used for IS evaluation, but not for OOS evaluation.

In yet another example, if the UE determines more than one time-domain location for RLM-RS resources, as detailed in this disclosure, the UE can choose one of the RLM-RS resources for IS/OOS evaluation. In one example, the UE can choose any of the RLM-RS resources by the UE's implementation. In another example, the UE can choose the RLM-RS resource that the UE detects first in the time domain and stop performing RLM measurement within the same RLM measurement window. In yet another example, if the UE determines more than one time-domain location for RLM-RS resources, as detailed in this disclosure, the UE can choose more than one (e.g., including all) of them for IS/OOS evaluation. In one example, the UE can determine IS if any of the more than one time-domain locations for RLM-RS resources is evaluated as IS. In another example, the UE can determine OOS if all of the more than one time-domain locations for RLM-RS resources are evaluated as OOS.
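For illustration only, the IS/OOS combining rule of the last example can be sketched in non-normative Python; the per-location evaluation results are assumed given:

    # Sketch: in-sync if any location is IS; out-of-sync only if all are OOS.
    def combine_is_oos(per_location_is):
        return "IS" if any(per_location_is) else "OOS"

    print(combine_is_oos([False, True, False]))  # -> "IS"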
FIG. 11 illustrates a flowchart of a UE procedure 1100 for RLM measurement for operation with shared spectrum channel access according to embodiments of the present disclosure. An embodiment of the UE procedure 1100 shown in FIG. 11 is for illustration only. One or more of the components illustrated in FIG. 11 can be implemented in specialized circuitry configured to perform the noted functions, or one or more of the components can be implemented by one or more processors executing instructions to perform the noted functions.

As illustrated in FIG. 11, a UE first determines a QCL assumption parameter from a configuration by a serving cell at step 1101, and determines an index of an RLM-RS resource from a configuration by the serving cell at step 1102, wherein the RLM-RS can be, for example, at least one of an SS/PBCH block or a CSI-RS resource. The UE then determines a set of time-domain locations for RLM at step 1103, corresponding to the configured index of the RLM-RS resource, within the RLM measurement window and/or the channel occupancy if known to the UE. The UE performs IS/OOS evaluation based on the set of time-domain locations for RLM at step 1104, according to the descriptions in this disclosure, and reports the evaluation result at step 1106.

FIG. 12 illustrates a flowchart of a method 1200 for indexing of SS/PBCH blocks on unlicensed spectrum according to embodiments of the present disclosure, as may be performed by a UE (e.g., 111-116 as illustrated in FIG. 1). An embodiment of the method 1200 shown in FIG. 12 is for illustration only. One or more of the components illustrated in FIG. 12 can be implemented in specialized circuitry configured to perform the noted functions, or one or more of the components can be implemented by one or more processors executing instructions to perform the noted functions.

As illustrated in FIG. 12, the method 1200 begins at step 1202. In step 1202, the UE receives a synchronization signal and physical broadcast channel (SS/PBCH) block. Subsequently, in step 1204, the UE determines whether shared spectrum channel access is enabled. Next, in step 1206, the UE determines a first index of the SS/PBCH block as a candidate SS/PBCH block index (I_SSB1) based on a number of candidate SS/PBCH blocks in a half frame. Finally, in step 1208, the UE determines a second index of the SS/PBCH block as an SS/PBCH block index (I_SSB2) based on a QCL parameter (Q) indicated by a PBCH in the SS/PBCH block, wherein the SS/PBCH block index (I_SSB2) is determined as: I_SSB2 = I_SSB1 mod Q based on a determination that the shared spectrum channel access is enabled, where mod is a modulo operation; or I_SSB2 = I_SSB1 based on the determination that the shared spectrum channel access is not enabled.

In one embodiment, I_SSB1 is determined such that 0 ≤ I_SSB1 ≤ L_max − 1, where L_max is the number of candidate SS/PBCH blocks in the half frame, and I_SSB2 is determined such that 0 ≤ I_SSB2 ≤ Q − 1, where Q is the QCL parameter indicated by the PBCH included in the SS/PBCH block.

In one embodiment, the UE receives, based on the first index determined as the candidate SS/PBCH block index (I_SSB1), a demodulation reference signal (DM-RS) of the PBCH in the SS/PBCH block and a scrambling sequence of the PBCH where: if L_max = 4, two LSBs of the candidate SS/PBCH block index (I_SSB1) are used, and if L_max ≥ 8, three LSBs of the candidate SS/PBCH block index (I_SSB1) are used.
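For illustration only, the indexing rule of steps 1206 and 1208 can be expressed as the following non-normative Python sketch; the function name is illustrative:

    # Sketch: second index from candidate index per method 1200.
    def second_ssb_index(i_ssb1, Q, shared_spectrum):
        return i_ssb1 % Q if shared_spectrum else i_ssb1

    assert second_ssb_index(13, Q=8, shared_spectrum=True) == 5
    assert second_ssb_index(13, Q=8, shared_spectrum=False) == 13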
In one embodiment, the UE receives, based on the first index determined as the candidate SS/PBCH block index (I_SSB1), a payload of the PBCH in the SS/PBCH block and a number A of scrambled bits in the payload of the PBCH, where: if L_max = 10, ā_(Ā+6) is reserved, ā_(Ā+7) is a most significant bit (MSB) of the candidate SS/PBCH block index (I_SSB1), and A = M − 4; and if L_max = 20, ā_(Ā+6) and ā_(Ā+7) are two MSBs of the candidate SS/PBCH block index (I_SSB1), and A = M − 5, where M is a number of bits in the payload of the PBCH.

In one embodiment, the UE performs, based on the determination that the shared spectrum channel access is enabled, radio link monitoring based on at least one SS/PBCH block in a discovery burst transmission window and the candidate SS/PBCH block index (I_SSB1) corresponding to the SS/PBCH block index (I_SSB2) indicated by a higher layer parameter (ssb-Index).

In one embodiment, the UE performs, based on a determination that the shared spectrum channel access is enabled, uplink power control based on at least one SS/PBCH block with the candidate SS/PBCH block index (I_SSB1) corresponding to the SS/PBCH block index (I_SSB2) indicated by a higher layer parameter (ssb-Index).

In one embodiment, the UE performs, based on a determination that the shared spectrum channel access is enabled, an operation of PDCCH candidate validation based on at least one SS/PBCH block with the candidate SS/PBCH block index (I_SSB1) corresponding to the SS/PBCH block index (I_SSB2) indicated by a higher layer parameter (ssb-PositionsInBurst).

The above flowcharts illustrate example methods that can be implemented in accordance with the principles of the present disclosure, and various changes could be made to the methods illustrated in the flowcharts herein. For example, while shown as a series of steps, various steps in each figure could overlap, occur in parallel, occur in a different order, or occur multiple times. In another example, steps may be omitted or replaced by other steps.

Although the present disclosure has been described with exemplary embodiments, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims. None of the description in this application should be read as implying that any particular element, step, or function is an essential element that must be included in the claims scope. The scope of patented subject matter is defined by the claims.
71,109
11943635
DETAILED DESCRIPTION

FIG. 1 is a drawing of an exemplary communications system 100 in accordance with an exemplary embodiment. Exemplary communications system 100 includes a test server 108, a database 109, an IP edge device 110, a base station 112, e.g., a WiFi base station, a wireless extender 114, e.g., a WiFi extender, and a mobile handset 116, e.g., a wireless test tool, e.g., a WiFi test tool, or a mobile device, e.g., a smartphone, wireless tablet, or wireless notepad, including a wireless, e.g., WiFi, test application (APP). The test server 108 and the database 109 are located in a cloud system 104. The database 109 includes stored configuration information 111, e.g., configuration information corresponding to base station 112, wireless extender 114, and/or mobile handset 116, and/or configuration information corresponding to links, e.g., a front haul link between the wireless extender and the mobile handset and a backhaul link between the base station and the wireless extender. Exemplary configuration information includes, e.g., operating frequency, information identifying an IEEE standard being used, bandwidth information, bit rate information, modulation and coding scheme (MCS) information, and/or spatial stream information.

The IP edge device 110, the base station 112, and the wireless extender 114 are located at a customer premises site 102, e.g., a residential or business customer site. The mobile handset 116, which is held by user 118, e.g., a technician, is currently located within the customer premises 102, e.g., near an edge of the customer premises 102. Base station 112 has a wireless coverage area represented by circle 113. Wireless extender 114 has a wireless coverage area represented by dotted circle 115. The wireless extender 114 is placed to extend the wireless coverage area for the customer premises beyond the wireless coverage area of the base station 112. The final determined placement of the wireless extender 114 at the customer premises 102 is based on testing performed using the mobile handset 116, test server 108, base station 112 and wireless extender 114. In various embodiments, a determined transmit power level of the wireless extender 114 and/or a determined transmit power level of the base station 112 is based on testing performed using the mobile handset 116, test server 108, base station 112 and wireless extender 114. Although only one wireless extender 114 is shown in FIG. 1, it should be appreciated that customer premises 102 may, and sometimes does, include multiple wireless extenders 114, e.g., so that wireless coverage may be available throughout the entire customer premises 102.

System 100 includes a connection 120 between base station 112 and test server 108, which traverses IP edge device 110, e.g., a router, and Internet 106. The legend indicates that heavy solid line 122 represents a front haul link which is between wireless extender 114 and mobile handset 116. The front haul link 122 is sometimes referred to as the first link. The legend indicates that heavy dashed line 124 represents a back haul link which is between base station 112 and wireless extender 114. The back haul link 124 is sometimes referred to as the second link. The legend indicates that heavy dotted line 126 represents an end to end connection between test server 108 and mobile handset 116. End to end connection 126 traverses the Internet 106, IP edge device 110, base station 112, and wireless extender 114. End to end connection 126 includes backhaul link (second link) 124 and front haul link (first link) 122.
Mobile handset 116, e.g., including a graphical user interface (GUI), serves as an input device for user 118, e.g., a technician, to issue commands to perform various tests, and as a display device, e.g., to display test results and/or recommendations to the user. Exemplary input test commands include, e.g., start system test, start first link testing, perform a rate test on the first link, perform a SLA link achievability (SLAM) determination for the first link, start second link testing, perform a rate test on the second link, perform a SLA link achievability (SLAM) determination for the second link, start end to end connection test, and perform a rate test for the end to end connection. Exemplary displayed test results include, e.g., an achieved rate for the first link, a SLAM determination for the first link, a determined optimal transmit power level for the first link, an indication that the first link has been verified, an achieved rate for the second link, a SLAM determination for the second link, a determined optimal transmit power level for the second link, an indication that the second link has been verified, a determined rate for the end to end connection, and an indication that the end to end connection test has passed or failed. Exemplary recommendations include, e.g., a recommendation to move the wireless extender closer to the base station, a recommendation to move the extender to a particular location at the customer premises, a recommendation to proceed with the testing, a recommendation to repeat a test, a recommendation to add additional wireless extenders to the customer premises, etc. Other exemplary displayed information communicated to the user of mobile handset 116 includes, e.g., test progress, channel change information, an indication that Dynamic Frequency Selection (DFS) channels have been blacklisted or whitelisted, and an indication that a problem has been detected or is suspected with backend systems. Mobile handset 116 also determines an achieved rate for a rate test performed for the first link and reports the results to the test server 108.

Test server 108 commands the base station 112 and the wireless extender 114 to perform various tests and/or evaluations, e.g., in response to a received request from mobile handset 116 and/or in accordance with steps of an automated testing method. Test server 108 further receives results from mobile handset 116, wireless extender 114, and base station 112. Test server 108 evaluates the received results and makes determinations, e.g., did a particular test pass or fail, what action should be taken, etc. Test server 108 further sends command signals to the wireless extender 114 and base station 112 to implement determined actions, e.g., change a transmission power level, remove high throughput traffic, change channels, etc.

Wireless extender 114, under the control of test server 108, performs operations, e.g., initiates a first link rate test and sends signals used in the first link rate test, changes a transmission power level for the first link, performs a SLAM determination for the first link, changes channels for the first link, measures the rate for the second link and reports the achieved rate for the second link. Base station 112, under the control of test server 108, performs operations, e.g., initiates a second link rate test and sends signals used in the second link rate test, changes a transmission power level for the second link, performs a SLAM determination for the second link, and changes channels for the second link.
FIG. 2, comprising the combination of FIG. 2A, FIG. 2B, FIG. 2C, FIG. 2D, FIG. 2E, FIG. 2F, FIG. 2G, FIG. 2H and FIG. 2I, is a flowchart 200 of an exemplary system analysis method, e.g., a method of operating a communications system to perform system analysis and system configuration in accordance with an exemplary embodiment. The exemplary system is, e.g., system 100 of FIG. 1. Operation starts in step 202 in which the system is powered on and initialized. Operation proceeds from step 202 to step 204. In step 204 a mobile handset, e.g., mobile handset 116, receives user input to initiate the system analysis method. For example, mobile handset 116 detects that user 118 has depressed start system analysis button 1320 on the graphical user interface 1300 of display 1254, e.g., a touchscreen display, of mobile handset 116. Operation proceeds from step 204 to step 206. In step 206 the mobile handset sends a system analysis test initiate signal to a test server, e.g., test server 108 of cloud system 104. Operation proceeds from step 206 to step 208. In step 208 the test server receives the system analysis initiate signal and performs configuration operations, e.g., configuring the system for the system analysis. Operation proceeds from step 208 to step 210.

In step 210 the mobile handset receives user input requesting analysis of a first link, said first link being between a wireless extender, e.g., wireless extender 114, and the mobile handset. For example, the mobile handset 116 detects that user 118 has depressed wireless extender to mobile handset (first link) button 1312 on the first link portion 1304 of the graphical user interface 1300 of display 1254 of mobile handset 116. The first link is, e.g., front haul link 122 between wireless extender 114 and mobile handset 116. Operation proceeds from step 210 to step 212. In step 212 the mobile handset sends a run first link analysis command signal to the test server. Operation proceeds from step 212 to step 218. In step 218 the test server receives said run first link analysis command signal sent from the mobile handset and performs configuration operations for the first link analysis. Operation proceeds from step 218 to step 220.

In step 220 the mobile handset receives user input requesting a speed test on the first link. For example, the mobile handset 116 detects that user 118 has depressed speed test (first link) button 1314 on the first link portion 1304 of the graphical user interface 1300 of display 1254 of mobile handset 116. Operation proceeds from step 220 to step 222. In step 222 the mobile handset sends a run speed test command signal to the test server to request a speed test on the first link. Operation proceeds from step 222 to step 224. In step 224 the test server receives the run speed test command signal from the mobile handset. Operation proceeds from step 224 to step 226. In step 226 the test server sends a run speed test execute command signal to the wireless extender to initiate a speed test on the first link. Operation proceeds from step 226 to step 228. In step 228 the wireless extender receives said run speed test execute command signal. Operation proceeds from step 228 to step 230. In step 230 the wireless extender starts the speed test and initiates traffic, e.g., speed test traffic, downstream to the mobile handset. Operation proceeds from step 230, via connecting node A 232, to step 234. In step 234 the mobile handset: receives traffic, e.g.
speed test traffic, processes the received traffic, e.g., performing speed test measurements, determines that the speed test has concluded, determines an achieved speed for the speed test for the first link, reports the achieved speed for the speed test for the first link to the test server, and displays, e.g., on display 1254, the achieved speed for the speed test for the first link to the user of the mobile handset. Operation proceeds from step 234 to step 236. In step 236 the test server receives the reported achieved speed for the first link. Operation proceeds from step 236 to step 238. In step 238 the test server determines if the achieved speed for the first link is greater than or equal to the expected speed tier for the first link. Operation proceeds from step 238 to step 240. In step 240 if the achieved speed for the first link is greater than or equal to the expected speed tier, then operation proceeds from step 240 to step 242; otherwise operation proceeds from step 240 to step 244. In step 242 the test server determines that the first link has been verified. Operation proceeds from step 242 to step 246, in which the test server sends the mobile handset a message indicating that the first link has been verified. Operation proceeds from step 246 to step 248. In step 248 the mobile handset: i) receives the message indicating that the first link has been verified; and ii) presents to the user of the mobile handset a notification that the first link has been verified. Operation proceeds from step 248, via connecting node D 250, to step 296.

Returning to step 244, in step 244 the test server sends a SLA link achievability method (SLAM) determination execute command to the wireless extender commanding the wireless extender to perform SLAM on the first link. Operation proceeds from step 244 to step 252. In step 252 the wireless extender receives the SLAM execute command. Operation proceeds from step 252 to step 254. In step 254 the wireless extender performs a SLAM determination for the first link, e.g., the wireless extender performs the method of FIG. 3. Step 254 includes step 256, in which the wireless extender determines one of: i) the physical link does not support the speed tier or ii) the physical link does support the speed tier. Operation proceeds from step 254 to step 258. In step 258 the wireless extender sends the SLAM determination for the first link to the test server. Operation proceeds from step 258 to step 260. In step 260 the test server: i) receives the SLAM determination for the first link and ii) sends the SLAM determination for the first link to the mobile handset. Operation proceeds from step 260 to step 262. In step 262 the mobile handset: i) receives the SLAM determination for the first link and ii) displays the SLAM determination for the first link to the user of the mobile handset. Operation proceeds from step 262, via connecting node B 264, to step 266.

In step 266 if the determination is that the physical link does support the speed tier, then operation proceeds from step 266 to step 268; otherwise operation proceeds from step 266 to step 290. In step 268, the test server determines if there is traffic on the network. Operation proceeds from step 268 to step 270. In step 270, if the determination of step 268 is that there is traffic on the network, then operation proceeds from step 270 to step 280; otherwise, operation proceeds from step 270 to step 272. In step 272, the system is operated to change the operating channel for the first link. Step 272 includes steps 274, 275, 276, 277 and 278.
In step 274 the test server sends a channel change command to the wireless extender to change the operating channel for the first link. Operation proceeds from step 274 to step 275. In step 275 the wireless extender: i) receives said channel change command; ii) performs a channel scan; iii) selects a more desirable channel to operate on; and iv) changes to the selected channel. Operation proceeds from step 275 to step 276. In step 276 the wireless extender sends channel change information to the test server. Operation proceeds from step 276 to step 277. In step 277 the test server: i) receives said channel change information; ii) stores said channel change information; and, in some embodiments, sends said channel change information to the mobile handset. Operation proceeds from step 277 to step 278. In step 278 the mobile handset displays the status of the channel change to the user of the mobile handset. Operation proceeds from step 272, via connecting node C 288, to step 222, in which the mobile handset sends a run speed test command signal to the test server to run a speed test on the first link, the first link now using a different operating channel.

Returning to step 280, in step 280 the system is operated to remove high throughput traffic contributors from the network. Step 280 includes steps 282, 284 and 286. In step 282 the test server sends a command to the wireless extender to remove high throughput contributors from the network. Operation proceeds from step 282 to step 284. In step 284 the wireless extender receives said command and, in response to the command, the wireless extender removes high throughput contributors from the network. Operation proceeds from step 284 to step 286. In step 286 the mobile handset displays the progress of the removal of the high throughput contributors to the user of the mobile handset. Operation proceeds from step 280, via connecting node C 288, to step 222, in which the mobile handset sends a run speed test command signal to the test server to request a speed test on the first link, with the traffic on the network having been reduced by the removal of the high throughput contributors.

Returning to step 290, in step 290 the test server sends a recommendation to the mobile handset to relocate the wireless extender toward the base station. Operation proceeds from step 290 to step 292. In step 292 the mobile handset receives the recommendation to relocate the wireless extender, and in response, the mobile handset presents the recommendation to the user of the mobile handset to relocate the wireless extender toward the base station. In some embodiments, the recommendation includes coordinates of a new recommended position of the wireless extender. In some embodiments, the recommendation includes a recommended distance to move the wireless extender and a recommended direction to move the wireless extender. Operation proceeds from step 292, via connecting node E 294, to step 210, in which the mobile handset receives user input requesting analysis of the first link, following the repositioning of the wireless extender, by the user of the mobile handset, in accordance with the recommendation.

Returning to step 296, in step 296 the system is operated to perform an optimal power level determination method for the first link, e.g., the system is operated to perform the method of FIG. 4. Step 296 includes step 298 and step 300. In step 298 the test server is operated to control the wireless extender to set transmit power, e.g., for its transmissions over the first link, to an optimal level based on speed test results corresponding to speed tests for the first link which are performed at different wireless extender transmit power levels.
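For illustration only, the first link branching described above (speed verification, SLAM determination, high throughput traffic removal, channel change, and relocation) can be summarized by the following non-normative Python sketch; the function and argument names are hypothetical and not part of the described embodiments:

    # Sketch: first link analysis decision logic (steps 238 through 296).
    def analyze_first_link(achieved_speed, speed_tier, slam_supports_tier,
                           network_has_traffic):
        if achieved_speed >= speed_tier:
            return "first link verified"                              # steps 240-248
        if not slam_supports_tier:
            return "recommend relocating extender toward base station"  # step 290
        if network_has_traffic:
            return "remove high throughput contributors, then retest"   # step 280
        return "change operating channel, then retest"                  # step 272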
Operation proceeds from step 298 to step 300. In step 300 the test server blacklists or whitelists DFS channels based on the determined optimal level and a DFS maximum transmit power level. Operation proceeds from step 296 to step 302. In step 302 the mobile handset receives user input requesting analysis of a second link, said second link being between said base station and said wireless extender. Operation proceeds from step 302 to step 304. In step 304 the mobile handset sends a run second link analysis command signal to the test server. Operation proceeds from step 304 to step 306, in which the test server receives the run second link analysis command signal. Operation proceeds from step 306 to step 308. In step 308 the mobile handset receives user input requesting a speed test on the second link. Operation proceeds from step 308 to step 310. In step 310 the mobile handset sends a run speed test command signal to the test server to request a speed test on the second link. Operation proceeds from step 310 to step 312. In step 312 the test server receives the command signal requesting the speed test on the second link. Operation proceeds from step 312 to step 314. In step 314 the test server sends a run speed test execute command signal to the base station to initiate a speed test on the second link. Operation proceeds from step 314 to step 316. In step 316, the base station receives said run speed test execute command signal, and in step 318 the base station starts the speed test for the second link and initiates traffic, e.g., speed test traffic, downstream to the wireless extender. Operation proceeds from step 318 to step 320.

In step 320 the wireless extender is operated to: receive traffic, determine speed on the second link based on received traffic, determine that the speed test has concluded, determine an achieved speed for the speed test for the second link, and report the achieved speed for the speed test for the second link to the test server. Operation proceeds from step 320 to step 322. In step 322 the test server receives the reported achieved speed for the second link. Operation proceeds from step 322, via connecting node F 324, to step 326. In step 326 the test server sends the reported determined achieved speed for the second link to the mobile handset. Operation proceeds from step 326 to step 328. In step 328 the mobile handset is operated to: i) receive the reported determined achieved speed for the speed test for the second link and ii) display the achieved speed for the second link to the user of the mobile handset. Operation proceeds from step 328 to step 330.

In step 330 the test server determines if the achieved speed for the second link is greater than or equal to the expected speed tier for the second link. Operation proceeds from step 330 to step 332. In step 332 if the achieved speed for the second link is greater than or equal to the expected speed tier for the second link, then operation proceeds from step 332 to step 334; otherwise, operation proceeds from step 332 to step 342. In step 334, the test server determines that the second link has been verified. Operation proceeds from step 334 to step 336. In step 336 the test server sends the mobile handset a message indicating that the second link has been verified. Operation proceeds from step 336 to step 338. In step 338 the mobile handset is operated to: i) receive the message indicating that the second link has been verified and ii) present to the user of the mobile handset a notification that the second link has been verified. Operation proceeds from step 338, via connecting node J 340, to step 392.
Returning to step 342, in step 342 the test server sends a SLA link achievability method (SLAM) determination execute command to the base station commanding the base station to perform SLAM on the second link. Operation proceeds from step 342 to step 344. In step 344 the base station receives the SLAM execute command. Operation proceeds from step 344 to step 346. In step 346 the base station performs a SLAM determination for the second link, e.g., the base station performs the method of FIG. 5. Step 346 includes step 348, in which the base station determines one of: i) the physical link does not support the speed tier or ii) the physical link does support the speed tier. Operation proceeds from step 346 to step 350. In step 350 the base station sends the SLAM determination for the second link to the test server. Operation proceeds from step 350 to step 352. In step 352 the test server: i) receives the SLAM determination for the second link and ii) sends the SLAM determination for the second link to the mobile handset. Operation proceeds from step 352 to step 354. In step 354 the mobile handset: i) receives the SLAM determination for the second link and ii) displays the SLAM determination for the second link to the user of the mobile handset. Operation proceeds from step 354, via connecting node B1 356, to step 358.

In step 358 if the determination is that the physical link does support the speed tier, then operation proceeds from step 358 to step 360; otherwise, operation proceeds from step 358 to step 386. In step 360, the test server determines if there is traffic on the network. Operation proceeds from step 360 to step 362. In step 362, if the determination of step 360 is that there is traffic on the network, then operation proceeds from step 362 to step 376; otherwise, operation proceeds from step 362 to step 364. In step 364, the system is operated to change the operating channel for the second link. Step 364 includes steps 366, 368, 370, 372 and 374.

In step 366 the test server sends a channel change command to the base station to change the operating channel for the second link. Operation proceeds from step 366 to step 368. In step 368 the base station: i) receives said channel change command; ii) performs a channel scan; iii) selects a more desirable channel to operate on; and iv) changes to the selected channel. Operation proceeds from step 368 to step 370. In step 370 the base station sends channel change information to the test server. Operation proceeds from step 370 to step 372. In step 372 the test server: i) receives said channel change information; ii) stores said channel change information; and sends said channel change information to the mobile handset. Operation proceeds from step 372 to step 374. In step 374 the mobile handset displays the status of the channel change to the user of the mobile handset. Operation proceeds from step 374, via connecting node H 384, to step 310, in which the mobile handset sends a run speed test command signal to the test server to run a speed test on the second link, the second link now using a different operating channel.

Returning to step 376, in step 376 the system is operated to remove high throughput traffic contributors from the network. Step 376 includes steps 378, 380 and 382. In step 378 the test server sends a command to the base station to remove high throughput contributors from the network. Operation proceeds from step 378 to step 380. In step 380 the base station receives said command and, in response to the command, the base station removes high throughput contributors from the network. Operation proceeds from step 380 to step 382.
In step382the mobile handset displays the progress of the removal of the high throughput contributors to the user of the mobile handset. Operation proceeds from step376, via connecting node H384, to step308, in which the mobile handset sends a run speed test command signal to the test server to request a speed test on the second link, with the traffic on the network having been reduced by the removal of the high throughput contributors. Returning to step386, in step386the test server sends a recommendation to the mobile handset to relocate the wireless extender toward the base station. Operation proceeds from step386to step388. In step388the mobile handset receives the recommendation to relocate the wireless extender, and in response, the mobile handset presents the recommendation to the user of the mobile handset to relocate the wireless extender toward the base station. In some embodiments, the recommendation includes coordinates of a new recommended position of the wireless extender. In some embodiments, the recommendation includes a recommended distance to move the wireless extender and a recommended direction to move the wireless extender. Operation proceeds from step388, via connecting node G390, to step210, in which the mobile handset receives user input requesting analysis of the first link, following the repositioning of the wireless extender, by the user of the mobile handset, in accordance with the recommendation. Returning to step392, in step392the system is operated to perform an optimal power level determination method for the second link, e.g., the system is operated to perform the method ofFIG.6. Step392includes step394and step396. In step394the test server is operated to control the base station to set transmit power, e.g., for its transmissions over the second link, to an optimal level based on speed test results corresponding to speed tests for the second link which are performed at different base station transmit power levels. Operation proceeds from step394to step396. In step396the test server blacklists or whitelists DFS channels based on the determined optimal level for the second link and a DFS maximum transmit power level. Operation proceeds from step392to step398. In step398the mobile handset receives user input requesting an end to end connection analysis, said end to end connection being between said test server and said mobile handset, said end to end connection including said first and second links. Operation proceeds from step398to step400. In step400the mobile handset sends a run end to end analysis command signal to the test server. Operation proceeds from step400to step402. In step402the test server receives the run end to end analysis command signal. Operation proceeds from step402to step404. In step404the mobile handset receives user input requesting a speed test on the end to end connection. Operation proceeds from step404to step406. In step406the mobile handset sends a run speed test command signal to the test server commanding the test server to run an end to end speed test. Operation proceeds from step406to step408. In step408the test server receives the end to end speed test command signal, and in step410the test server starts the speed test and initiates traffic, e.g., speed test traffic, downstream directed to the mobile handset. Operation proceeds from step410to step412. 
In step412the mobile handset is operated to: receive traffic, determine a speed for the received traffic, determine that the speed test has concluded, determine an achieved speed for the end to end connection, report the achieved speed for the speed test for the end to end connection to the test server, and display the achieved speed for the speed test for the end to end connection to the user of the mobile handset. Operation proceeds from step412to step414. In step414the test server receives the reported determined achieved speed for the end to end connection. Operation proceeds from step414to step416. In step416the test server determines if the speed test passed or failed based on the reported determined achieved speed for the end to end connection and an end to end pass/fail threshold value. Operation proceeds from step416, via connecting node K418, to step420. In step420, if the speed test for the end to end connection passed, then operation proceeds from step420to step450; otherwise, operation proceeds from step420to step422. In step422the test server decides to run a speed test between the test server and the base station. Operation proceeds from step422to step424. In step424the test server initiates the speed test between the test server and the base station, and the test server sends traffic, e.g., speed test traffic, to the base station. Operation proceeds from step424to step426. In step426the base station is operated to: receive traffic, determine a speed based on received traffic, determine that the speed test has concluded, and determine an achieved speed for the speed test for the test server to base station connection. Operation proceeds from step426, via connecting node L428to step430. In step430the base station reports the determined achieved speed for the connection between the test server and the base station. Operation proceeds from step430to step432. In step432the test server receives the reported achieved speed for the connection between the test server and the base station. Operation proceeds from step432to step434. In step434the test server reports the achieved speed for the connection between the test server and the base station to the mobile handset. Operation proceeds from step434to step436. In step436the mobile handset receives the reported achieved speed for the connection between the test server and the base station and displays the achieved speed for the connection between the test server and the base station to the user of the mobile handset. Operation proceeds from step436to step438. In step438the test server determines if the speed test for the connection between the test server and the base station passed or failed based on the reported achieved speed and a pass/fail threshold value for the connection between the test server and the base station. Operation proceeds from step438to step440. In step440, if the speed test between the test server and the base station passed, then operation proceeds from step440to step446; otherwise, operation proceeds from step440to step442, in which the test server is operated to contact internal backend systems to mitigate. Operation proceeds from step442, via connecting node M444, to end step356. Returning to step446, in step446the test server is operated to re-check variables. Operation proceeds from step446, via connecting node N448, to step210, in which the mobile handset receives user input requesting analysis of the first link, following completion of the variable re-check. 
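The triage that follows a failed end to end speed test, steps416through446, can be summarized by the following illustrative Python sketch; the function name, the run_server_to_base_station_test callable, and the returned action strings are assumptions introduced for illustration only.

def triage_end_to_end(e2e_passed: bool, run_server_to_base_station_test) -> str:
    # Step 420: a passing end to end speed test completes the analysis.
    if e2e_passed:
        return "system analysis complete"  # step 450
    # Steps 422-440: on failure, isolate the test server to base station
    # segment with its own speed test and pass/fail threshold value.
    if run_server_to_base_station_test():
        return "re-check variables, then re-run first link analysis"  # steps 446, 448
    return "contact internal backend systems to mitigate"  # step 442

Passing the segment test as a callable reflects that it is only initiated after the end to end test has failed.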
Returning to step450, in step450the test server determines that system analysis is complete. Operation proceeds from step450to step452. In step452the test server sends a message to the mobile handset communicating that system analysis is complete. Operation proceeds from step452to step454. In step454, the mobile handset receives the message communicating that system analysis is complete, and in step456, the mobile handset displays an indication to the user of the mobile handset that the system analysis is complete. Operation proceeds from step456to end step356. FIG.3, comprising the combination ofFIG.3AandFIG.3B, is a flowchart500of an exemplary method of operating a wireless extender to perform a SLA link achievability method (SLAM) in accordance with an exemplary embodiment. Operation starts in step502and proceeds to step503. In step503the wireless extender retrieves, e.g., from memory within the wireless extender and/or from an external database, e.g., database109in cloud system104, configuration information corresponding to the first link, e.g., operating frequency for the first link, information indicating the IEEE standard being used for the first link, bandwidth for the first link, modulation and coding scheme (MCS) for the first link, and/or number of spatial streams (SS) for the first link. In step503the wireless extender further retrieves SLA information corresponding to the customer premises at which the wireless extender is to be located, e.g., the bit rate, e.g., Mbps rate, that the speed tier is to support. Operation proceeds from step503to step504. In step504the wireless extender determines if the operating frequency for the first link is 2.4 GHz or 5 GHz. If the wireless extender determines that the operating frequency for the first link is 2.4 GHz, then operation proceeds from step504to step508; however, if the wireless extender determines that the operating frequency for the first link is 5 GHz, then operation proceeds from step504to step506. In step506the wireless extender determines if the IEEE standard being used is 802.11n or 802.11ac. If the determination is that the IEEE standard is 802.11n, then operation proceeds from step506to step508. However, if the determination is that the IEEE standard is 802.11ac, then operation proceeds from step506, via connecting node C520to step522. In step508, the wireless extender determines if the bandwidth for the first link is 20 MHz or 40 MHz. If the wireless extender determines that the bandwidth for the first link is 20 MHz, then operation proceeds from step508to step510; however, if the wireless extender determines that the bandwidth for the first link is 40 MHz, then operation proceeds from step508to step514. In step510the wireless extender determines if the speed tier for the first link is to support 30 Mbps or 400 Mbps, e.g., based on the SLA for the customer premises. If the wireless extender determines that the speed tier is to support a bit rate of 30 Mbps, then operation proceeds from step510to step512; however, if the wireless extender determines that the speed tier is to support 400 Mbps, then operation proceeds from step510, via connecting node A534to step536. In step512, the wireless extender determines if the modulation and coding scheme (MCS) is within the set of {0-4, 8-10, 16-17, 22-24} or within the set of {5-7, 11-15, 18-21, 25-31}. 
If the wireless extender determines that the MCS is one of {0-4, 8-10, 16-17, 22-24}, then operation proceeds from step512, via connecting node A534to step536; however, if the wireless extender determines that the MCS is one of {5-7, 11-15, 18-21, 25-31}, then operation proceeds from step512, via connecting node B538to step540. In step514the wireless extender determines if the speed tier is to support 30 Mbps or 400 Mbps, e.g., based on the SLA for the customer premises. If the wireless extender determines that the speed tier is to support a bit rate of 30 Mbps, then operation proceeds from step514to step516; however, if the wireless extender determines that the speed tier is to support 400 Mbps, then operation proceeds from step514to step518. In step516, the wireless extender determines if the modulation and coding scheme (MCS) is within the set of {0-2, 8, 16} or within the set of {3-7, 9-15, 17-23, 24-31}. If the wireless extender determines that the MCS is one of {0-2, 8, 16}, then operation proceeds from step516, via connecting node A534to step536; however, if the wireless extender determines that the MCS is one of {3-7, 9-15, 17-23, 24-31}, then operation proceeds from step516, via connecting node B538to step540. In step518, the wireless extender determines if the modulation and coding scheme (MCS) is within the set of {0-29} or within the set of {30-31}. If the wireless extender determines that the MCS is one of {0-29}, then operation proceeds from step518, via connecting node A534to step536; however, if the wireless extender determines that the MCS is one of {30-31}, then operation proceeds from step518, via connecting node B538to step540. Returning to step522, in step522, the wireless extender determines if the bandwidth for the first link is 80 MHz or 160 MHz. If the wireless extender determines that the bandwidth for the first link is 80 MHz, then operation proceeds from step522to step524; however, if the wireless extender determines that the bandwidth for the first link is 160 MHz, then operation proceeds from step522to step526. In step524the wireless extender determines if the speed tier is to support 30 Mbps or 400 Mbps, e.g., based on the SLA for the customer premises. If the wireless extender determines that the speed tier is to support a bit rate of 30 Mbps, then operation proceeds from step524to step528; however, if the wireless extender determines that the speed tier is to support 400 Mbps, then operation proceeds from step524to step530. In step528, the wireless extender determines if the number of spatial streams (SS) and the modulation and coding scheme (MCS) is within the set of {SS=1|MCS=0} or within the set of {SS=1|MCS=1-9, SS=2|MCS=0-9, SS=3|MCS=0-9, SS=4|MCS=0-9}. If the wireless extender determines that the SS and MCS is one of {SS=1|MCS=0}, then operation proceeds from step528, via connecting node A534to step536; however, if the wireless extender determines that the SS and MCS is one of {SS=1|MCS=1-9, SS=2|MCS=0-9, SS=3|MCS=0-9, SS=4|MCS=0-9}, then operation proceeds from step528, via connecting node B538to step540. In step530, the wireless extender determines if the number of spatial streams (SS) and the modulation and coding scheme (MCS) is within the set of {SS=1|MCS=0-9, SS=2|MCS=0-4, SS=3|MCS=0-3, SS=4|MCS=0-2} or within the set of {SS=2|MCS=5-9, SS=3|MCS=4-9, SS=4|MCS=3-9}. 
If the wireless extender determines that the SS and MCS is one of {SS=1|MCS=0-9, SS=2|MCS=0-4, SS=3|MCS=0-3, SS=4|MCS=0-2}, then operation proceeds from step530, via connecting node A534to step536; however, if the wireless extender determines that the SS and MCS is one of {SS=2|MCS=5-9, SS=3|MCS=4-9, SS=4|MCS=3-9}, then operation proceeds from step530, via connecting node B538to step540. In step526the wireless extender determines if the speed tier is to support 30 Mbps or 400 Mbps, e.g., based on the SLA for the customer premises. If the wireless extender determines that the speed tier is to support a bit rate of 30 Mbps, then operation proceeds from step526, via connecting node B538to step540; however, if the wireless extender determines that the speed tier is to support 400 Mbps, then operation proceeds from step526to step532. In step532, the wireless extender determines if the number of spatial streams (SS) and the modulation and coding scheme (MCS) is within the set of {SS=1|MCS=0-4, SS=2|MCS=0-2, SS=3|MCS=0-1, SS=4|MCS=0} or within the set of {SS=1|MCS=5-9, SS=2|MCS=3-9, SS=3|MCS=2-9, SS=4|MCS=1-9}. If the wireless extender determines that the SS and MCS is one of {SS=1|MCS=0-4, SS=2|MCS=0-2, SS=3|MCS=0-1, SS=4|MCS=0}, then operation proceeds from step532, via connecting node A534to step536; however, if the wireless extender determines that the SS and MCS is one of {SS=1|MCS=5-9, SS=2|MCS=3-9, SS=3|MCS=2-9, SS=4|MCS=1-9}, then operation proceeds from step532, via connecting node B538to step540. In step536the wireless extender determines that the physical link does not support the speed tier. Alternatively, in step540, the wireless extender determines that the physical link does support the speed tier. Operation proceeds from step536or step540to return step542, and the determination whether or not the physical link supports the speed tier is reported from the wireless extender to the test server. FIG.4, comprising the combination ofFIG.4AandFIG.4B, is a flowchart600of an exemplary method of performing a first link optimal power level determination, in accordance with an exemplary embodiment. Operation starts in step602and proceeds to step604. In step604the system is operated to run a speed test between the wireless extender and the mobile handset and to evaluate the results. Step604includes steps606,608,610,611,612and614. In step606the test server sends a run speed test execute command signal to the wireless extender to initiate a speed test on the first link. Operation proceeds from step606to step608. In step608the wireless extender receives the run speed test execute command signal and in step610the wireless extender starts the speed test and initiates traffic, e.g., speed test traffic, downstream to the mobile handset. Operation proceeds from step610to step611. In step611the mobile handset is operated to: receive traffic, determine a speed based on received traffic, determine the speed test has concluded, determine an achieved speed for the speed test for the first link, report the achieved speed for the speed test for the first link to the test server, and display the achieved speed for the speed test for the first link to the user of the mobile handset. Operation proceeds from step611to step612. In step612the test server receives the reported determined achieved speed for the first link and in step614the test server determines if the speed test for the first link passed or failed based on the reported determined achieved speed. Operation proceeds from step604to step616. 
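Before continuing with the remaining steps of flowchart600, it may be noted that the determinations of flowchart500, described above, amount to set-membership tests over the retrieved configuration parameters. The following Python sketch illustrates those tests; the function names and the encoding of the parameters are assumptions for illustration only, the speed tier is assumed to be one of the two recited values (30 or 400 Mbps), and SS is assumed to be in the range 1 to 4.

def link_supports_tier_11n(bandwidth_mhz: int, tier_mbps: int, mcs: int) -> bool:
    # 2.4 GHz, or 5 GHz with 802.11n (steps 508-518).
    if bandwidth_mhz == 20:
        if tier_mbps == 400:
            return False  # step 510: 400 Mbps is not achievable over 20 MHz
        # Step 512: MCS values for which 30 Mbps is not achievable.
        not_achievable = set(range(0, 5)) | set(range(8, 11)) | {16, 17} | set(range(22, 25))
        return mcs not in not_achievable
    if tier_mbps == 30:
        return mcs not in {0, 1, 2, 8, 16}  # step 516
    return mcs in {30, 31}  # step 518

def link_supports_tier_11ac(bandwidth_mhz: int, tier_mbps: int, ss: int, mcs: int) -> bool:
    # 5 GHz with 802.11ac (steps 522-532).
    if bandwidth_mhz == 80:
        if tier_mbps == 30:
            return not (ss == 1 and mcs == 0)  # step 528
        highest_failing_mcs = {1: 9, 2: 4, 3: 3, 4: 2}  # step 530
        return mcs > highest_failing_mcs[ss]
    if tier_mbps == 30:
        return True  # step 526: 30 Mbps is achievable over 160 MHz
    highest_failing_mcs = {1: 4, 2: 2, 3: 1, 4: 0}  # step 532
    return mcs > highest_failing_mcs[ss]

A True return corresponds to the "does support" determination of step540and a False return corresponds to the "does not support" determination of step536; the same sets are used by the base station in flowchart700for the second link.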
In step616, if the speed test, performed in step604, passed, then operation proceeds from step616to step617; otherwise, operation proceeds from step616, via connecting node A624to step626. In step617, the system is operated to reduce the transmit power of the wireless extender over the first link by 2 dB. Step617includes steps618,620and622. In step618the test server sends a command to the wireless extender commanding the wireless extender to reduce its transmit power for the first link by 2 dB. Operation proceeds from step618to step620. In step620the wireless extender receives the command. Operation proceeds from step620to step622. In step622the wireless extender reduces its transmit power for the first link by 2 dB. Operation proceeds from step617to the input of step604, in which the system is operated to run another speed test between the wireless extender and the mobile handset, at reduced power with respect to the last speed test, and to evaluate the results. Returning to step626, in step626the test server determines that the first link transmit power margin has been assessed. Operation proceeds from step626to step628. In step628the system is operated to increase the transmit power of the wireless extender over the first link by 2 dB. Step628includes step630and step632. In step630the test server sends a command to the wireless extender, said command commanding the wireless extender to increase its transmit power for the first link (front haul) by 2 dB. Operation proceeds from step630to step632. In step632the wireless extender receives the command and increases transmit power for the first link (front haul) by 2 dB in response to the received command. Operation proceeds from step628to step634. In step634the test server determines that the current setting of the transmit power of the wireless extender over the first link is the optimal power setting. Operation proceeds from step634to step636, in which the test server determines if the current setting of the transmit power of the wireless extender for the first link is greater than the DFS maximum transmit power. Operation proceeds from step636to step638. In step638, if the transmit power of the wireless extender for the first link is greater than the DFS maximum transmit power, then operation proceeds from step638to step640, in which the test server blacklists DFS channels. In step638, if the transmit power of the wireless extender for the first link is not greater than the DFS maximum transmit power, then operation proceeds from step638to step642, in which the test server whitelists DFS channels. Operation proceeds from step640or step642to step644. In step644the test server stores, e.g., in a database in a cloud system, the determined optimal transmit power setting for the first link and information indicating whether the DFS channels have been blacklisted or whitelisted. Operation proceeds from step644to step646. In step646the test server sends a message communicating the determined optimal transmit power setting for the first link and information indicating whether the DFS channels have been blacklisted or whitelisted to the mobile handset. Operation proceeds from step646to step648. In step648the mobile handset receives the message and presents the determined optimal transmit power for the first link and the information indicating if the DFS channels have been blacklisted or whitelisted to the user of the mobile handset. Operation proceeds from step648to return step650. 
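The power level determination of flowchart600reduces to a simple search: lower the transmit power in 2 dB steps while the first link speed test keeps passing, then, once it fails, step back up 2 dB and treat that setting as optimal. A minimal Python sketch follows; the extender object with a set_tx_power method, the run_speed_test callable, the channels object, and the DFS limit value are assumptions introduced for illustration, not elements of the exemplary embodiment.

DFS_MAX_TX_POWER_DBM = 24.0  # assumed illustrative value of the DFS maximum

def find_optimal_tx_power(extender, start_dbm: float, run_speed_test) -> float:
    # Steps 604-622: back the power off by 2 dB after each passing speed test.
    power = start_dbm
    extender.set_tx_power(power)
    while run_speed_test():
        power -= 2.0
        extender.set_tx_power(power)
    # Steps 626-634: the margin has been assessed; restore 2 dB and treat
    # the resulting setting as the optimal power setting.
    power += 2.0
    extender.set_tx_power(power)
    return power

def update_dfs_channels(optimal_dbm: float, channels) -> None:
    # Steps 636-642: blacklist DFS channels if the optimal power exceeds the
    # DFS maximum transmit power; otherwise whitelist them.
    if optimal_dbm > DFS_MAX_TX_POWER_DBM:
        channels.blacklist_dfs()
    else:
        channels.whitelist_dfs()

The same loop structure is used in flowchart800for the second link, with the base station in place of the wireless extender.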
FIG.5, comprising the combination ofFIG.5AandFIG.5B, is a flowchart700of an exemplary method of operating a base station to perform a SLA link achievability method (SLAM) in accordance with an exemplary embodiment. Operation starts in step702and proceeds to step703. In step703the base station retrieves, e.g., from memory within the base station and/or from an external database, e.g., database109in cloud system104, configuration information corresponding to the second link, e.g., operating frequency for the second link, information indicating the IEEE standard being used for the second link, bandwidth for the second link, modulation and coding scheme (MCS) for the second link, and/or number of spatial streams (SS) for the second link. In step703the base station further retrieves SLA information corresponding to the customer premises at which the base station is located, e.g., the bit rate, e.g., Mbps rate, that the speed tier is to support. Operation proceeds from step703to step704. In step704the base station determines if the operating frequency for the second link is 2.4 GHz or 5 GHz. If the base station determines that the operating frequency for the second link is 2.4 GHz, then operation proceeds from step704to step708; however, if the base station determines that the operating frequency for the second link is 5 GHz, then operation proceeds from step704to step706. In step706the base station determines if the IEEE standard being used is 802.11n or 802.11ac. If the determination is that the IEEE standard is 802.11n, then operation proceeds from step706to step708. However, if the determination is that the IEEE standard is 802.11ac, then operation proceeds from step706, via connecting node C720to step722. In step708, the base station determines if the bandwidth for the second link is 20 MHz or 40 MHz. If the base station determines that the bandwidth for the second link is 20 MHz, then operation proceeds from step708to step710; however, if the base station determines that the bandwidth for the second link is 40 MHz, then operation proceeds from step708to step714. In step710the base station determines if the speed tier for the second link is to support 30 Mbps or 400 Mbps, e.g., based on the SLA for the customer premises. If the base station determines that the speed tier is to support a bit rate of 30 Mbps, then operation proceeds from step710to step712; however, if the base station determines that the speed tier is to support 400 Mbps, then operation proceeds from step710, via connecting node A734to step736. In step712, the base station determines if the modulation and coding scheme (MCS) is within the set of {0-4, 8-10, 16-17, 22-24} or within the set of {5-7, 11-15, 18-21, 25-31}. If the base station determines that the MCS is one of {0-4, 8-10, 16-17, 22-24}, then operation proceeds from step712, via connecting node A734to step736; however, if the base station determines that the MCS is one of {5-7, 11-15, 18-21, 25-31}, then operation proceeds from step712, via connecting node B738to step740. In step714the base station determines if the speed tier is to support 30 Mbps or 400 Mbps, e.g., based on the SLA for the customer premises. If the base station determines that the speed tier is to support a bit rate of 30 Mbps, then operation proceeds from step714to step716; however, if the base station determines that the speed tier is to support 400 Mbps, then operation proceeds from step714to step718. 
In step716, the base station determines if the modulation and coding scheme (MCS) is within the set of {0-2, 8, 16} or within the set of {3-7, 9-15, 17-23, 24-31}. If the base station determines that the MCS is one of {0-2, 8, 16}, then operation proceeds from step716, via connecting node A734to step736; however, if the base station determines that the MCS is one of {3-7, 9-15, 17-23, 24-31}, then operation proceeds from step716, via connecting node B738to step740. In step718, the base station determines if the modulation and coding scheme (MCS) is within the set of {0-29} or within the set of {30-31}. If the base station determines that the MCS is one of {0-29}, then operation proceeds from step718, via connecting node A734to step736; however, if the base station determines that the MCS is one of {30-31}, then operation proceeds from step718, via connecting node B738to step740. Returning to step722, in step722, the base station determines if the bandwidth for the second link is 80 MHz or 160 MHz. If the base station determines that the bandwidth for the second link is 80 MHz, then operation proceeds from step722to step724; however, if the base station determines that the bandwidth for the second link is 160 MHz, then operation proceeds from step722to step726. In step724the base station determines if the speed tier is to support 30 Mbps or 400 Mbps, e.g., based on the SLA for the customer premises. If the base station determines that the speed tier is to support a bit rate of 30 Mbps, then operation proceeds from step724to step728; however, if the base station determines that the speed tier is to support 400 Mbps, then operation proceeds from step724to step730. In step728, the base station determines if the number of spatial streams (SS) and the modulation and coding scheme (MCS) is within the set of {SS=1|MCS=0} or within the set of {SS=1|MCS=1-9, SS=2|MCS=0-9, SS=3|MCS=0-9, SS=4|MCS=0-9}. If the base station determines that the SS and MCS is one of {SS=1|MCS=0}, then operation proceeds from step728, via connecting node A734to step736; however, if the base station determines that the SS and MCS is one of {SS=1|MCS=1-9, SS=2|MCS=0-9, SS=3|MCS=0-9, SS=4|MCS=0-9}, then operation proceeds from step728, via connecting node B738to step740. In step730, the base station determines if the number of spatial streams (SS) and the modulation and coding scheme (MCS) is within the set of {SS=1|MCS=0-9, SS=2|MCS=0-4, SS=3|MCS=0-3, SS=4|MCS=0-2} or within the set of {SS=2|MCS=5-9, SS=3|MCS=4-9, SS=4|MCS=3-9}. If the base station determines that the SS and MCS is one of {SS=1|MCS=0-9, SS=2|MCS=0-4, SS=3|MCS=0-3, SS=4|MCS=0-2}, then operation proceeds from step730, via connecting node A734to step736; however, if the base station determines that the SS and MCS is one of {SS=2|MCS=5-9, SS=3|MCS=4-9, SS=4|MCS=3-9}, then operation proceeds from step730, via connecting node B738to step740. In step726the base station determines if the speed tier is to support 30 Mbps or 400 Mbps, e.g., based on the SLA for the customer premises. If the base station determines that the speed tier is to support a bit rate of 30 Mbps, then operation proceeds from step726, via connecting node B738to step740; however, if the base station determines that the speed tier is to support 400 Mbps, then operation proceeds from step726to step732. 
In step732, the base station determines if the number of spatial streams (SS) and the modulation and coding scheme (MCS) is within the set of {SS=1|MCS=0-4, SS=2|MCS=0-2, SS=3|MCS=0-1, SS=4|MCS=0} or within the set of {SS=1|MCS=5-9, SS=2|MCS=3-9, SS=3|MCS=2-9, SS=4|MCS=1-9}. If the base station determines that the SS and MCS is one of {SS=1|MCS=0-4, SS=2|MCS=0-2, SS=3|MCS=0-1, SS=4|MCS=0}, then operation proceeds from step732, via connecting node A734to step736; however, if the base station determines that the SS and MCS is one of {SS=1|MCS=5-9, SS=2|MCS=3-9, SS=3|MCS=2-9, SS=4|MCS=1-9}, then operation proceeds from step732, via connecting node B738to step740. In step736the base station determines that the physical link does not support the speed tier. Alternatively, in step740, the base station determines that the physical link does support the speed tier. Operation proceeds from step736or step740to return step742, and the determination whether or not the physical link supports the speed tier is reported from the base station to the test server. FIG.6, comprising the combination ofFIG.6AandFIG.6B, is a flowchart800of an exemplary method of performing a second link optimal power level determination, in accordance with an exemplary embodiment. Operation starts in step802and proceeds to step804. In step804the system is operated to run a speed test between the base station and the wireless extender and to evaluate the results. Step804includes steps806,808,810,811,812and814. In step806the test server sends a run speed test execute command signal to the base station to initiate a speed test on the second link. Operation proceeds from step806to step808. In step808the base station receives the run speed test execute command signal and in step810the base station starts the speed test and initiates traffic, e.g., speed test traffic, downstream to the wireless extender. Operation proceeds from step810to step811. In step811the wireless extender is operated to: receive traffic, determine a speed based on received traffic, determine the speed test has concluded, determine an achieved speed for the speed test for the second link, and report the achieved speed for the speed test for the second link to the test server. Operation proceeds from step811to step812. In step812the test server receives the reported determined achieved speed for the second link and in step814the test server determines if the speed test for the second link passed or failed based on the reported determined achieved speed. Operation proceeds from step804to step816. In step816, if the speed test, performed in step804, passed, then operation proceeds from step816to step817; otherwise, operation proceeds from step816, via connecting node A824to step826. In step817, the system is operated to reduce the transmit power of the base station over the second link by 2 dB. Step817includes steps818,820and822. In step818the test server sends a command to the base station commanding the base station to reduce its transmit power for the second link by 2 dB. Operation proceeds from step818to step820. In step820the base station receives the command. Operation proceeds from step820to step822. In step822the base station reduces its transmit power for the second link by 2 dB. Operation proceeds from step817to the input of step804, in which the system is operated to run another speed test between the base station and the wireless extender, at reduced power with respect to the last speed test, and to evaluate the results. Returning to step826, in step826the test server determines that the second link transmit power margin has been assessed. 
Operation proceeds from step826to step828. In step828the system is operated to increase the transmit power of the base station over the second link by 2 dB. Step828includes step830and step832. In step830the test server sends a command to the base station, said command commanding the base station to increase its transmit power for the second link (back haul) by 2 dB. Operation proceeds from step830to step832. In step832the base station receives the command and increases transmit power for the second link (back haul) by 2 dB in response to the received command. Operation proceeds from step828to step834. In step834the test server determines that the current setting of the transmit power of the base station over the second link is the optimal power setting. Operation proceeds from step834to step836, in which the test server determines if the current setting of the transmit power of the base station for the second link is greater than the DFS maximum transmit power. Operation proceeds from step836to step838. In step838, if the transmit power of the base station for the second link is greater than the DFS maximum transmit power, then operation proceeds from step838to step840, in which the test server blacklists DFS channels. In step838, if the transmit power of the base station for the second link is not greater than the DFS maximum transmit power, then operation proceeds from step838to step842, in which the test server whitelists DFS channels. Operation proceeds from step840or step842to step844. In step844the test server stores, e.g., in a database in a cloud system, the determined optimal transmit power setting for the second link and information indicating whether the DFS channels have been blacklisted or whitelisted. Operation proceeds from step844to step846. In step846the test server sends a message communicating the determined optimal transmit power setting for the second link and information indicating whether the DFS channels have been blacklisted or whitelisted to the mobile handset. Operation proceeds from step846to step848. In step848the mobile handset receives the message and presents the determined optimal transmit power for the second link and the information indicating if the DFS channels have been blacklisted or whitelisted to the user of the mobile handset. Operation proceeds from step848to return step850. FIG.7is a drawing of an exemplary test server900in accordance with an exemplary embodiment. Exemplary test server900is, e.g., test server108of cloud system104of communications system100ofFIG.1. Exemplary test server900includes a network interface902, e.g., a wired or optical interface902, a processor904, e.g., a CPU, memory906, an assembly of hardware components908, e.g., an assembly of circuits, and an I/O interface910coupled together via a bus909over which the various elements (902,904,906,908,910) may interchange data and information. Test server900further includes a speaker920, a display922, e.g., a touchscreen display, switches924, a keypad926, and a mouse928, coupled to I/O interface910. Network interface902includes a receiver910and a transmitter912, which couple the network interface to other network nodes and/or the Internet. In some embodiments, the receiver910and transmitter912are included as part of a transceiver908. Memory906includes an assembly of components914, e.g., an assembly of software components, data/information916and a wireless, e.g., WiFi, test application (APP)918. In some embodiments, the wireless test app918is included as part of assembly of components914. 
FIG.8is a drawing of an exemplary wireless extender1000, e.g., a WiFi extender, in accordance with an exemplary embodiment. Exemplary wireless extender1000is, e.g., wireless extender114of communications system100ofFIG.1. Exemplary wireless extender1000includes a network interface1002, e.g., a wired or optical interface, wireless interface1004, a processor1006, e.g., a CPU, an assembly of hardware components1008, e.g., an assembly of circuits, an I/O interface1010, and memory1012coupled together via a bus1009over which the various elements (1002,1004,1006,1008,1010,1012) may interchange data and information. Wireless extender1000further includes a speaker1054, a display1056, e.g., a touchscreen display, switches1058, a keypad1060, and a mouse1062, coupled to I/O interface1010. Network interface1002includes a receiver1022and a transmitter1024, which couple the network interface1002to other network nodes and/or the Internet. In some embodiments, the receiver1022and transmitter1024are included as part of a transceiver1020. Wireless interface1004includes a 1st wireless interface1026, e.g., a front haul WiFi interface, and a second wireless interface1028, e.g., a backhaul wireless interface. 1st wireless interface1026includes a wireless receiver1030coupled to one or more receive antennas (receive antenna11032, . . . , receive antenna M11034) via which the wireless extender1000may receive wireless signals from a mobile handset, e.g., mobile handset116, and other user devices, e.g., other mobile devices, e.g., other mobile devices which may not include testing capability. 1st wireless interface1026includes a wireless transmitter1036coupled to one or more transmit antennas (transmit antenna11038, . . . , transmit antenna N11040) via which the wireless extender1000may transmit wireless signals to a mobile handset, e.g., mobile handset116, and other user devices, e.g., other mobile devices, e.g., other mobile devices which may not include testing capability. In some embodiments, the wireless receiver1030and the wireless transmitter1036are included as part of a transceiver1026. 2nd wireless interface1028includes a wireless receiver1042coupled to one or more receive antennas (receive antenna11044, . . . , receive antenna M21046) via which the wireless extender1000may receive wireless signals from a base station, e.g., base station112. 2nd wireless interface1028includes a wireless transmitter1048coupled to one or more transmit antennas (transmit antenna11050, . . . , transmit antenna N21052) via which the wireless extender1000may transmit wireless signals to a base station, e.g., base station112. In some embodiments, the wireless receiver1042and the wireless transmitter1048are included as part of a transceiver1028. In some embodiments, the same antenna or antennas may be, and sometimes are, used by receiver1030and transmitter1036. In some embodiments, the same antenna or antennas may be, and sometimes are, used by receiver1042and transmitter1048. In some embodiments, an antenna used by 1st wireless interface1026may be, and sometimes is, used by second wireless interface1028. Memory1012includes an assembly of components1014, e.g., an assembly of software components, data/information1016and a wireless, e.g., WiFi, test application (APP)1018. In some embodiments, the wireless test app1018is included as part of assembly of components1014. FIG.9is a drawing of an exemplary base station1100, e.g., a WiFi base station, in accordance with an exemplary embodiment. 
Exemplary base station1100is, e.g., base station112of communications system100ofFIG.1. Exemplary base station1100includes a network interface1105, e.g., a wired or optical interface, wireless interface1104, a processor1106, e.g., a CPU, an assembly of hardware components1108, e.g., an assembly of circuits, an I/O interface1110, and memory1112coupled together via a bus1109over which the various elements (1105,1104,1106,1108,1110,1112) may interchange data and information. Base station1100further includes a speaker1152, a display1154, e.g., a touchscreen display, switches1156, a keypad and/or keyboard1158, and a mouse1159, coupled to I/O interface1110. Network interface1105includes a receiver1178and a transmitter1180, which couple the network interface1105to other network nodes and/or the Internet. In some embodiments, the receiver1178and transmitter1180are included as part of a transceiver1184. Wireless interface1104includes a wireless receiver1138coupled to one or more receive antennas (receive antenna11139, . . . , receive antenna M1141) via which the base station1100may receive wireless signals from wireless extenders, e.g., wireless extender114, and/or user equipment devices. Wireless interface1104further includes a wireless transmitter1140coupled to one or more transmit antennas (transmit antenna11143, . . . , transmit antenna N1145) via which the base station1100may transmit wireless signals to wireless extenders, e.g., wireless extender114, and/or user equipment devices. In some embodiments, the wireless receiver1138and the wireless transmitter1140are included as part of a transceiver1124. In some embodiments, the same antenna or antennas may be, and sometimes are, used by receiver1138and transmitter1140. Memory1112includes an assembly of components1114, e.g., an assembly of software components, data/information1116and a wireless, e.g., WiFi, test application (APP)1118. In some embodiments, the wireless test app1118is included as part of assembly of components1114. 
FIG.10is a drawing of an exemplary mobile handset1200, e.g., a mobile wireless test tool, e.g., a mobile WiFi test tool or a mobile device, e.g., a smart phone, wireless tablet or wireless notepad, with a wireless, e.g., WiFi, test application (APP), in accordance with an exemplary embodiment. Exemplary mobile handset1200is, e.g., mobile handset116of communications system100ofFIG.1. Exemplary mobile handset1200includes a network interface1205, e.g., a wired or optical interface, wireless interfaces1204, a processor1206, e.g., a CPU, an assembly of hardware components1208, e.g., an assembly of circuits, an I/O interface1210, and memory1212coupled together via a bus1209over which the various elements (1205,1204,1206,1208,1210,1212) may interchange data and information. Mobile handset1200further includes a microphone1250, a camera1251, a speaker1252, a display1254, e.g., a touchscreen display, switches1256, a keypad1258, and a mouse1259, coupled to I/O interface1210. Network interface1205includes a receiver1278and a transmitter1280, which couple the network interface1205to other network nodes and/or the Internet. In some embodiments, the receiver1278and transmitter1280are included as part of a transceiver1284. Wireless interfaces1204include a WiFi interface1224and a cellular interface1225. WiFi interface1224includes a wireless receiver1238coupled to one or more receive antennas (receive antenna11239, . . . , receive antenna M11241) via which the mobile handset1200may receive WiFi wireless signals from a wireless extender or a WiFi base station. WiFi interface1224further includes a wireless transmitter1240coupled to one or more transmit antennas (transmit antenna11243, . . . , transmit antenna N11245) via which the mobile handset1200may transmit wireless WiFi signals to a wireless extender or WiFi base station. In some embodiments, the wireless receiver1238and the wireless transmitter1240are included as part of a transceiver. Cellular interface1225includes a wireless cellular receiver1268coupled to one or more receive antennas (receive antenna11269, . . . , receive antenna M11271) via which the mobile handset1200may receive cellular wireless signals from a cellular base station. Cellular interface1225further includes a cellular wireless transmitter1270coupled to one or more transmit antennas (transmit antenna11273, . . . , transmit antenna N11275) via which the mobile handset1200may transmit wireless cellular signals to a cellular base station. In some embodiments, the wireless receiver1268and the wireless transmitter1270are included as part of a transceiver. In some embodiments, the same antenna or antennas may be, and sometimes are, used by receiver1238and transmitter1240. In some embodiments, the same antenna or antennas may be, and sometimes are, used by receiver1268and transmitter1270. 
In some embodiments, an antenna used by WiFi wireless interface1224may be, and sometimes is, used by the cellular wireless interface1225. Memory1212includes an assembly of components1214, e.g., an assembly of software components, data/information1216and a wireless, e.g., WiFi, test application (APP)1218. In some embodiments, the wireless test app1218is included as part of assembly of components1214. FIG.11is a drawing of an exemplary graphical user interface (GUI)1300included in a mobile handset, in accordance with an exemplary embodiment. GUI1300is, e.g., displayed on touch screen display1254of mobile handset1200. GUI1300displays control buttons and information to the user, e.g., test technician, of mobile handset1200and receives input from the user. Exemplary GUI1300includes a start system analysis button1302, a first link (front haul) analysis region1304, a second link (back haul) analysis region1306, an end-to-end connection analysis region1308, and a notification area or window1310. First link analysis region1304includes a wireless extender to mobile handset (first link) test initiate button1312, a speed test (first link) test button1314, a SLAM (first link) test button1316and a re-run analysis (first link) test button1318. Second link analysis region1306includes a base station to wireless extender (second link) test initiate button1320, a speed test (second link) test button1322, a SLAM (second link) test button1324and a re-run analysis (second link) test button1326. End-to-end connection analysis region1308includes an end-to-end connection (test server to mobile handset) test initiate button1328, a speed test (end-to-end connection) test button1330, and a re-run analysis (end-to-end connection) test button1332. Notification region or window1310is used to display notifications to the user of mobile handset1200, e.g., a link or connection has been verified, a link or connection has failed verification, and/or recommendations to the user of mobile handset1200, e.g., move the wireless extender to a new location, e.g., closer to the base station, restart a particular test, etc. In some embodiments, test results and/or test process information are reported in the region (1304,1306,1308) of the GUI1300corresponding to the link or connection (first link, second link, or end-to-end connection) being tested. In other embodiments, the test results and/or test process information are reported in notification area1310along with information identifying the particular link or connection undergoing test. FIG.12, comprising the combination ofFIG.12A,FIG.12B,FIG.12C,FIG.12DandFIG.12E, is a drawing of an assembly of components1400, comprising Part A1401, Part B1403, Part C1405, Part D1407and Part E1409, in accordance with an exemplary embodiment. Exemplary assembly of components1400may be, and sometimes is, included in a test server, e.g., test server108ofFIG.1or test server900ofFIG.7, in accordance with an exemplary embodiment. Assembly of components1400can be, and in some embodiments is, used in test server108and/or test server900. The components in the assembly of components1400can, and in some embodiments are, implemented fully in hardware within the processor904, e.g., as individual circuits. The components in the assembly of components1400can, and in some embodiments are, implemented fully in hardware within the assembly of hardware components908, e.g., as individual circuits corresponding to the different components. 
In other embodiments some of the components are implemented, e.g., as circuits, within the processor904with other components being implemented, e.g., as circuits within assembly of components908, external to and coupled to the processor904. As should be appreciated, the level of integration of components on the processor and/or with some components being external to the processor may be one of design choice. Alternatively, rather than being implemented as circuits, all or some of the components may be implemented in software and stored in the memory906of the test server900, with the components controlling operation of test server900to implement the functions corresponding to the components when the components are executed by a processor, e.g., processor904. In some such embodiments, the assembly of components1400is included in the memory906as assembly of components914. In still other embodiments, various components in assembly of components1400are implemented as a combination of hardware and software, e.g., with another circuit external to the processor providing input to the processor904which then under software control operates to perform a portion of a component's function. While processor904is shown in theFIG.7embodiment as a single processor, e.g., computer, it should be appreciated that the processor904may be implemented as one or more processors, e.g., computers. When implemented in software the components include code, which when executed by the processor904, configure the processor904to implement the function corresponding to the component. In embodiments where the assembly of components1400is stored in the memory906, the memory906is a computer program product comprising a computer readable medium comprising code, e.g., individual code for each component, for causing at least one computer, e.g., processor904, to implement the functions to which the components correspond. Completely hardware based or completely software based components may be used. However, it should be appreciated that any combination of software and hardware, e.g., circuit implemented components, may be used to implement the functions. As should be appreciated, the components illustrated inFIG.12control and/or configure the test server900or elements therein such as the processor904, to perform the functions of corresponding steps illustrated and/or described in the method of one or more of the flowcharts, signaling diagrams and/or described with respect to any of the Figures. Thus the assembly of components1400includes various components that perform functions of corresponding one or more described and/or illustrated steps of an exemplary method, e.g., steps of the method of flowchart200ofFIG.2, steps of the method of flowchart500ofFIG.3, steps of the flowchart600ofFIG.4, steps of the method of flowchart700ofFIG.5, steps of the flowchart800ofFIG.6, and/or described or shown with respect to any of the other figures, e.g., steps which are performed by a test server. 
Assembly of components1400includes a component1402configured to operate the test server to receive a system analysis initiate signal and to perform a configuration operation in response to the received signal, a component1404configured to operate the test server to receive a run first link analysis command signal sent from the mobile handset, a component1406configured to operate the test server to receive a run speed test command signal from the mobile handset, e.g., requesting a speed test on the first link, a component1408configured to operate the test server to send a run speed test command signal to a wireless extender to initiate a speed test on the first link, a component1410configured to operate the test server to receive a reported determined achieved speed for the first link, a component1412configured to operate the test server to determine if the achieved speed for the first link is greater than or equal to the expected speed tier for the first link, a component1414configured to control operation as a function of the determination if the achieved speed for the first link is greater than or equal to the expected speed tier for the first link, a component1416configured to operate the test server to determine that the first link has been verified, a component1418configured to operate the test server to send a SLA link achievability method (SLAM) determination execute command to the wireless extender commanding the wireless extender to perform SLAM on the first link, a component1420configured to operate the test server to send the mobile handset a message indicating that the first link has been verified, a component1422configured to operate the test server to: i) receive the SLAM determination for the first link and ii) send the SLAM determination for the first link to the mobile handset, a component1424configured to control operation as a function of the determination if the physical link, e.g., first link, supports the speed tier, a component1426configured to operate the test server to determine if there is traffic on the network, e.g., over the first link, a component1428configured to control operation as a function of the determination if there is traffic on the network, e.g., on the first link, and a component1430configured to operate the test server to send a recommendation to the mobile handset to relocate the wireless extender toward the base station. Assembly of components1400further includes a component1431configured to operate the test server to send a channel change command to the wireless extender to change the operating channel for the first link, a component1432configured to operate the test server to: i) receive said channel change information, ii) store said channel change information, and, in some embodiments, iii) send said channel change information to the mobile handset, and a component1434configured to operate the test server to send a command to the wireless extender to remove high throughput contributors from the network. Assembly of components1400further includes a component1436configured to operate the test server to perform steps of an optimal power level determination method for the first link, e.g., perform steps of the method ofFIG.4. 
Component1436includes a component1438configured to operate the test server to control the wireless extender to set transmit power, e.g., for first link transmission, to an optimal level based on speed test results at different wireless extender transmit power levels, and a component1440configured to operate the test server to blacklist or whitelist DFS channels based on a determined optimal level and a DFS maximum transmit power level. Assembly of components1400further includes a component1442configured to operate the test server to receive a run second link analysis command signal, a component1444configured to operate the test server to receive a run speed test command signal from the mobile handset requesting a speed test on the second link, a component1446configured to operate the test server to send a run speed test execute command signal to a base station to initiate a speed test on the second link, a component1448configured to operate the test server to receive a reported determined achieved speed for the second link, a component1450configured to operate the test server to send the reported determined achieved speed for the second link to the mobile handset, a component1452configured to operate the test server to determine if the achieved speed for the second link is greater than or equal to the expected speed tier for the second link, a component1453configured to control operation as a function of the determination if the achieved speed for the second link is greater than or equal to the expected speed tier for the second link, a component1454configured to operate the test server to determine that the second link has been verified, a component1455configured to operate the test server to send a SLA link achievability method (SLAM) determination execute command to the base station commanding the base station to perform SLAM on the second link, a component1456configured to operate the test server to send the mobile handset a message indicating that the second link has been verified, a component1457configured to operate the test server to: i) receive the SLAM determination for the second link and ii) send the SLAM determination for the second link to the mobile handset, a component1458configured to control operation as a function of the determination whether the physical link (second link) supports the speed tier, a component1459configured to operate the test server to determine if there is traffic on the network, e.g., over the second link, a component1460configured to control operation as a function of the determination if there is traffic on the network, e.g., over the second link, a component1461configured to operate the test server to send a channel change command to the base station to change the operating channel for the second link, a component1462configured to operate the test server to: i) receive said channel change information, ii) store said channel change information and iii) send said channel change information to the mobile handset, a component1463configured to operate the test server to send a recommendation to the mobile handset to relocate the wireless extender toward the base station, and a component1464configured to operate the test server to send a command to the base station to remove high throughput contributors from the network. Assembly of components1400further includes a component1465configured to operate the test server to perform steps of an optimal power level determination method for the second link, e.g., perform steps of the method ofFIG.6. 
Component1465includes a component1466configured to operate the test server to control the base station to set transmit power, e.g., for the second link, to an optimal level based on speed test results at different base station transmit power levels and a component1467configured to operate the test server to blacklist or whitelist DFS channels based on the determined optimal level and a DFS maximum transmit power level. Assembly of components1400further includes a component1468configured to operate the test server to receive a run end-to-end analysis command signal, a component1469configured to operate the test server to receive an end-to-end speed test command signal, a component1470configured to operate the test server to start the end-to-end speed test and initiate traffic, e.g., speed test traffic, downstream directed to the mobile handset, a component1471configured to operate the test server to receive the reported determined achieved speed for the end-to-end connection, a component1472configured to operate the test server to determine if the speed test passed or failed based on the reported determined achieved speed for the end-to-end connection and an end-to-end pass/fail threshold, and a component1473configured to control operation as a function of the determination of whether the end-to-end speed test passed or failed. Assembly of components1400further includes a component1474configured to operate the test server to decide to run a speed test between the test server and the base station, e.g., in response to a determination that the end-to-end speed test failed, a component1475configured to operate the test server to initiate the speed test between the test server and the base station and to send traffic, e.g., speed test traffic, to the base station, a component1476configured to operate the test server to receive a reported achieved speed for the speed test for the connection between the test server and the base station, a component1477configured to operate the test server to report the achieved speed for the connection between the test server and the base station to the mobile handset, a component1478configured to operate the test server to determine if the speed test for the connection between the test server and the base station passed or failed based on the reported achieved speed and a pass/fail threshold value for the connection between the test server and the base station, a component1479configured to control operation as a function of the determination if the speed test for the connection between the test server and the base station passed or failed, a component1480configured to operate the test server to contact internal backend systems to mitigate, and a component1481configured to operate the test server to re-check values. FIG.13is a drawing of an assembly of components1500in accordance with an exemplary embodiment. Exemplary assembly of components1500may be, and sometimes is, included in a wireless extender, e.g., a WiFi extender, in accordance with an exemplary embodiment. Assembly of components1500can be, and in some embodiments is, used in wireless extender114, e.g., a WiFi extender, ofFIG.1and/or wireless extender1000, e.g., a WiFi extender, ofFIG.8. The components in the assembly of components1500can, and in some embodiments are, implemented fully in hardware within the processor1006, e.g., as individual circuits.
The components in the assembly of components1500can, and in some embodiments are, implemented fully in hardware within the assembly of hardware components1008, e.g., as individual circuits corresponding to the different components. In other embodiments some of the components are implemented, e.g., as circuits, within the processor1006with other components being implemented, e.g., as circuits within assembly of components1008, external to and coupled to the processor1006. As should be appreciated the level of integration of components on the processor and/or with some components being external to the processor may be one of design choice. Alternatively, rather than being implemented as circuits, all or some of the components may be implemented in software and stored in the memory1012of the wireless extender1000, with the components controlling operation of wireless extender1000to implement the functions corresponding to the components when the components are executed by a processor, e.g., processor1006. In some such embodiments, the assembly of components1500is included in the memory1012as assembly of components1014. In still other embodiments, various components in assembly of components1500are implemented as a combination of hardware and software, e.g., with another circuit external to the processor providing input to the processor1006which then under software control operates to perform a portion of a component's function. While processor1006is shown in theFIG.8embodiment as a single processor, e.g., computer, it should be appreciated that the processor1006may be implemented as one or more processors, e.g., computers. When implemented in software the components include code, which when executed by the processor1006, configure the processor1006to implement the function corresponding to the component. In embodiments where the assembly of components1500is stored in the memory1012, the memory1012is a computer program product comprising a computer readable medium comprising code, e.g., individual code for each component, for causing at least one computer, e.g., processor1006, to implement the functions to which the components correspond. Completely hardware based or completely software based components may be used. However, it should be appreciated that any combination of software and hardware, e.g., circuit implemented components, may be used to implement the functions. As should be appreciated, the components illustrated inFIG.13control and/or configure the wireless extender1000or elements therein such as the processor1006, to perform the functions of corresponding steps illustrated and/or described in the method of one or more of the flowcharts, signaling diagrams and/or described with respect to any of the Figures. Thus the assembly of components1500includes various components that perform functions of corresponding one or more described and/or illustrated steps of an exemplary method, e.g., steps of the method of flowchart200ofFIG.2, steps of the method of flowchart500ofFIG.3, steps of the flowchart600ofFIG.4, steps of the method of flowchart700ofFIG.5, steps of the flowchart800ofFIG.6, and/or described or shown with respect to any of the other figures, e.g., steps which are performed by a wireless extender. 
Assembly of components1500includes a component1502configured to operate the wireless extender to receive a run speed test execute command signal, a component1504configured to operate the wireless extender to start the speed test and initiate traffic, e.g., speed test traffic, downstream to the mobile handset, a component1506configured to operate the wireless extender to receive the SLAM execute command, and a component1508configured to operate the wireless extender to perform a SLAM determination for the first link, e.g., to perform the method ofFIG.3. Component1508includes a component1510configured to operate the wireless extender to determine one of: i) the physical link does not support the speed tier or ii) the physical link does support the speed tier. Assembly of components1500further includes a component1512configured to operate the wireless extender to send the SLAM determination for the first link to the test server, a component1514configured to operate the wireless extender to i) receive a channel change command, ii) perform a channel scan, iii) select a more desirable channel to operate on, and iv) change to the selected channel, a component1516configured to operate the wireless extender to send channel change information to the test server, a component1518configured to operate the wireless extender to receive a command to remove high throughput traffic contributors from the network, e.g., with regard to the first link, and to remove high throughput contributors from the network, e.g., with regard to the first link. Assembly of components1500further includes a component1520configured to operate the wireless extender to perform steps of an optimal power level determination method for the first link, e.g., perform steps of the method ofFIG.4which are performed by the wireless extender, and a component1522configured to operate the wireless extender to perform steps of an optimal power level determination method for the second link, e.g., perform steps of the method ofFIG.6which are performed by the wireless extender. Assembly of components1500further includes a component1946configured to operate the wireless extender to transmit to the mobile handset using the first link transmit power level. FIG.14is a drawing of an assembly of components1600in accordance with an exemplary embodiment. Exemplary assembly of components1600may be, and sometimes is, included in a base station, e.g., a WiFi base station, in accordance with an exemplary embodiment. Assembly of components1600can be, and in some embodiments is, used in base station112, e.g., a WiFi base station, ofFIG.1and/or base station1100, e.g., a WiFi base station, ofFIG.9. The components in the assembly of components1600can, and in some embodiments are, implemented fully in hardware within the processor1106, e.g., as individual circuits. The components in the assembly of components1600can, and in some embodiments are, implemented fully in hardware within the assembly of hardware components1108, e.g., as individual circuits corresponding to the different components. In other embodiments some of the components are implemented, e.g., as circuits, within the processor1106with other components being implemented, e.g., as circuits within assembly of components1108, external to and coupled to the processor1106. As should be appreciated the level of integration of components on the processor and/or with some components being external to the processor may be one of design choice.
Alternatively, rather than being implemented as circuits, all or some of the components may be implemented in software and stored in the memory1112of the base station1100, with the components controlling operation of base station1100to implement the functions corresponding to the components when the components are executed by a processor, e.g., processor1106. In some such embodiments, the assembly of components1600is included in the memory1112as assembly of components1114. In still other embodiments, various components in assembly of components1600are implemented as a combination of hardware and software, e.g., with another circuit external to the processor providing input to the processor1106which then under software control operates to perform a portion of a component's function. While processor1106is shown in theFIG.9embodiment as a single processor, e.g., computer, it should be appreciated that the processor1106may be implemented as one or more processors, e.g., computers. When implemented in software the components include code, which when executed by the processor1106, configure the processor1106to implement the function corresponding to the component. In embodiments where the assembly of components1600is stored in the memory1112, the memory1112is a computer program product comprising a computer readable medium comprising code, e.g., individual code for each component, for causing at least one computer, e.g., processor1106, to implement the functions to which the components correspond. Completely hardware based or completely software based components may be used. However, it should be appreciated that any combination of software and hardware, e.g., circuit implemented components, may be used to implement the functions. As should be appreciated, the components illustrated inFIG.14control and/or configure the base station1100or elements therein such as the processor1106, to perform the functions of corresponding steps illustrated and/or described in the method of one or more of the flowcharts, signaling diagrams and/or described with respect to any of the Figures. Thus the assembly of components1600includes various components that perform functions of corresponding one or more described and/or illustrated steps of an exemplary method, e.g., steps of the method of flowchart200ofFIG.2, steps of the method of flowchart500ofFIG.3, steps of the flowchart600ofFIG.4, steps of the method of flowchart700ofFIG.5, steps of the flowchart800ofFIG.6, and/or described or shown with respect to any of the other figures, e.g., steps which are performed by a base station. Assembly of components1600includes a component1602configured to operate the base station to receive a run speed test execute command signal, a component1604configured to operate the base station to start the speed test and initiate traffic, e.g., speed test traffic, downstream to the wireless extender, a component1606configured to operate the base station to receive the SLAM execute command, and a component1608configured to operate the base station to perform a SLAM determination for the second link, e.g., to perform the method ofFIG.5. Component1608includes a component1610configured to operate the base station to determine one of: i) the physical link does not support the speed tier or ii) the physical link does support the speed tier. 
Assembly of components1600further includes a component1612configured to operate the base station to send the SLAM determination for the second link to the test server, a component1614configured to operate the base station to i) receive a channel change command for the second link, ii) perform a channel scan, iii) select a more desirable channel to operate on, and iv) change to the selected channel, a component1616configured to operate the base station to send channel change information to the test server, a component1618configured to operate the base station to receive a command to remove high throughput traffic contributors from the network, e.g., with regard to the second link, and to remove high throughput contributors from the network, e.g., with regard to the second link, in response to the received command. Assembly of components1600further includes a component1620configured to operate the base station to perform steps of an optimal power level determination method for the first link, e.g., perform steps of the method ofFIG.4which are performed by the base station, and a component1622configured to operate the base station to perform steps of an optimal power level determination method for the second link, e.g., perform steps of the method ofFIG.6which are performed by the base station, a component1624configured to operate the base station to: receive traffic, determine that the speed test between the test server and the base station has concluded, and determine an achieved speed for the speed test for the test server to base station connection, and a component1624configured to operate the base station to report the determined achieved speed for the connection between the test server and the base station to the test server. Assembly of components1600further includes a component1990configured to operate the base station to transmit to the wireless extender using the second link transmit power level. FIG.15, comprising the combination ofFIG.15A,FIG.15BandFIG.15C, is a drawing of an assembly of components1700, comprising Part A1701, Part B1703and Part C1705, in accordance with an exemplary embodiment. Exemplary assembly of components1700may be, and sometimes is, included in a mobile handset, in accordance with an exemplary embodiment. Assembly of components1700can be, and in some embodiments is, used in mobile handset116ofFIG.1and/or mobile handset1200ofFIG.10. The components in the assembly of components1700can, and in some embodiments are, implemented fully in hardware within the processor1206, e.g., as individual circuits. The components in the assembly of components1700can, and in some embodiments are, implemented fully in hardware within the assembly of hardware components1208, e.g., as individual circuits corresponding to the different components. In other embodiments some of the components are implemented, e.g., as circuits, within the processor1206with other components being implemented, e.g., as circuits within assembly of components1208, external to and coupled to the processor1206. As should be appreciated the level of integration of components on the processor and/or with some components being external to the processor may be one of design choice.
Alternatively, rather than being implemented as circuits, all or some of the components may be implemented in software and stored in the memory1212of the mobile handset1200, with the components controlling operation of mobile handset1200to implement the functions corresponding to the components when the components are executed by a processor, e.g., processor1206. In some such embodiments, the assembly of components1700is included in the memory1212as assembly of components1214. In still other embodiments, various components in assembly of components1700are implemented as a combination of hardware and software, e.g., with another circuit external to the processor providing input to the processor1206which then under software control operates to perform a portion of a component's function. While processor1206is shown in theFIG.10embodiment as a single processor, e.g., computer, it should be appreciated that the processor1206may be implemented as one or more processors, e.g., computers. When implemented in software the components include code, which when executed by the processor1206, configure the processor1206to implement the function corresponding to the component. In embodiments where the assembly of components1700is stored in the memory1212, the memory1212is a computer program product comprising a computer readable medium comprising code, e.g., individual code for each component, for causing at least one computer, e.g., processor1206, to implement the functions to which the components correspond. Completely hardware based or completely software based components may be used. However, it should be appreciated that any combination of software and hardware, e.g., circuit implemented components, may be used to implement the functions. As should be appreciated, the components illustrated inFIG.15control and/or configure the mobile handset1200or elements therein such as the processor1206, to perform the functions of corresponding steps illustrated and/or described in the method of one or more of the flowcharts, signaling diagrams and/or described with respect to any of the Figures. Thus the assembly of components1700includes various components that perform functions of corresponding one or more described and/or illustrated steps of an exemplary method, e.g., steps of the method of flowchart200ofFIG.2, steps of the method of flowchart500ofFIG.3, steps of the flowchart600ofFIG.4, steps of the method of flowchart700ofFIG.5, steps of the flowchart800ofFIG.6, and/or described or shown with respect to any of the other figures, e.g., steps which are performed by a mobile handset. Assembly of components1700includes a component1702configured to operate a mobile handset to receive user input to initiate a system analysis method, a component1704configured to operate the mobile handset to send a system analysis test initiate signal to a test server, a component1706configured to operate a mobile handset to receive user input requesting analysis of a first link, said first link being between a wireless extender and the mobile handset, a component1708configured to operate the mobile handset to send a run first link analysis command signal to the test server, a component1710configured to operate the mobile handset to receive user input requesting a speed test on the first link, a component1712configured to operate the mobile handset to send a run speed test command signal to the test server to request a speed test on the first link.
Assembly of components1700further includes a component1714configured to operate the mobile handset to: receive traffic, perform speed test measurements based on the received traffic, determine that the speed test has concluded, determine an achieved speed for the speed test for the first link, report the achieved speed for the speed test for the first link to the test server, and display the achieved speed for the speed test for the first link to the user of the mobile handset, a component1716configured to operate the mobile handset to: i) receive a message indicating that the first link has been verified and ii) present the user of the mobile handset a notification that the first link has been verified, a component1718configured to operate the mobile handset to: i) receive a SLAM determination for the first link and ii) display the SLAM determination for the first link to the user of the mobile handset, a component1720configured to operate the mobile handset to: i) receive a recommendation to relocate the wireless extender, e.g., toward the base station, and ii) present the recommendation to relocate the wireless extender, e.g., toward the base station, to the user of the mobile handset. Assembly of components1700further includes a component1722configured to operate the mobile handset to display the status of the channel change to the user of the mobile handset, a component1724configured to operate the mobile handset to display the progress of the removal of high throughput contributors, e.g., with regard to the first link, to the user of the mobile handset, and a component1726configured to operate the mobile handset to perform steps of an optimal power level determination method for the first link, e.g., perform the steps of the method ofFIG.4, which are performed by the mobile handset. Assembly of components1700further includes a component1728configured to operate the mobile handset to receive user input requesting analysis of a second link, said second link being between a base station and a wireless extender, a component1730configured to operate the mobile handset to send a run second link analysis command signal to the test server, a component1732configured to operate the mobile handset to receive user input requesting a speed test on the second link, a component1734configured to operate the mobile handset to send a run speed test command signal to the test server to request a speed test on the second link. Assembly of components1700further includes a component1736configured to operate the mobile handset to: i) receive the determined achieved speed for the second link, and ii) display the achieved speed for the speed test for the second link to the user of the mobile handset, a component1738configured to operate the mobile handset to: i) receive a message indicating that the second link has been verified and ii) present the user of the mobile handset a notification that the second link has been verified, a component1740configured to operate the mobile handset to: i) receive a SLAM determination for the second link and ii) display the SLAM determination for the second link to the user of the mobile handset, a component1742configured to operate the mobile handset to: i) receive a recommendation to relocate the wireless extender, e.g., toward the base station, and ii) present the recommendation to relocate the wireless extender, e.g., toward the base station, to the user of the mobile handset.
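To illustrate the handset-side throughput measurement that component1714, described above, encapsulates (receive speed test traffic, time it, compute an achieved speed, and report it), the following is a minimal sketch in Python. The port number, measurement window, and the reporting and display calls are hypothetical placeholders for whatever transport and reporting mechanism a particular test app defines; this is a sketch under those assumptions, not a definitive implementation.

    import socket
    import time

    # Hypothetical parameters; a real test app would define its own
    # transport, port, and reporting mechanism.
    SPEED_TEST_PORT = 5201          # port on which speed test traffic arrives
    MEASUREMENT_WINDOW_SEC = 10.0   # duration over which throughput is averaged

    def measure_achieved_speed(listen_port=SPEED_TEST_PORT,
                               window_sec=MEASUREMENT_WINDOW_SEC):
        """Receive speed test traffic and return the achieved speed in Mbps."""
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("", listen_port))
        server.listen(1)
        conn, _addr = server.accept()   # sender connects and transmits traffic
        conn.settimeout(1.0)
        total_bytes = 0
        start = time.monotonic()
        # Accumulate received bytes until the window elapses or the sender
        # closes the connection (speed test concluded).
        while time.monotonic() - start < window_sec:
            try:
                chunk = conn.recv(65536)
            except socket.timeout:
                continue
            if not chunk:
                break                   # sender closed: test concluded
            total_bytes += len(chunk)
        elapsed = time.monotonic() - start
        conn.close()
        server.close()
        return (total_bytes * 8) / (elapsed * 1e6)   # bits per second -> Mbps

The achieved speed returned by such a routine would then be reported to the test server and displayed to the user, as component1714describes.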
Assembly of components1700further includes a component1744configured to operate the mobile handset to receive channel change information corresponding to the second link and display the status of the channel change to the user of the mobile handset, a component1746configured to operate the mobile handset to display the progress of the removal of high throughput contributors, e.g., with regard to the second link, to the user of the mobile handset, and a component1748configured to operate the mobile handset to perform steps of an optimal power level determination method for the second link, e.g., perform the steps of the method ofFIG.6, which are performed by the mobile handset. Assembly of components1700further includes a component1750configured to operate the mobile handset to receive user input requesting an end-to-end connection analysis, said end-to-end connection analysis being between said test server and said mobile handset, said end-to-end connection including said first and second links, a component1752configured to operate the mobile handset to send a run end-to-end connection analysis command signal to the test server, a component1754configured to operate the mobile handset to receive user input requesting a speed test on the end-to-end connection, a component1756configured to operate the mobile handset to send a run speed test command signal to the test server commanding the test server to run an end-to-end speed test. Assembly of components1700further includes a component1758configured to operate the mobile handset to: receive traffic as part of the end-to-end speed test, determine a speed based on the received traffic, determine that the speed test for the end-to-end connection has concluded, determine an achieved speed for the speed test for the end-to-end connection, report the determined achieved speed for the speed test for the end-to-end connection to the test server, and display the achieved speed for the speed test for the end-to-end connection to the user of the mobile handset. Assembly of components1700further includes a component1760configured to operate the mobile handset to receive a message communicating system analysis is complete, a component1762configured to operate the mobile handset to display an indication to the user of the mobile handset that the system analysis is complete, and a component1764configured to operate the mobile handset to receive the reported achieved speed for the speed test for the connection between the test server and the base station and to display the achieved speed for the connection between the test server and the base station to the user of the mobile handset. FIG.16, comprising the combination ofFIG.16A,FIG.16BandFIG.16C, is a flowchart1800of an exemplary method of implementing a communications system in accordance with an exemplary embodiment. Operation of the exemplary method starts in step1802in which the communications system is powered on and initialized. Operation proceeds from step1802to step1804. In step1804a test server, e.g., test server108, sends a command to a wireless extender, e.g., wireless extender114, at a first customer premises, e.g., customer premises102, to perform a speed test on a first link between said wireless extender and a mobile handset, e.g., mobile handset116, said speed test determining an achieved data rate, e.g., speed, for the first link. Operation proceeds from step1804to step1806.
In step1806the test server determines if the achieved rate for the first link determined by the speed test on the first link between the wireless extender and the mobile handset supports a minimum expected communications data rate, e.g., speed in bits per second, for a first speed tier, said first speed tier being a wireless communications speed level to be supported by the first link. For example, in step1806the test server determines if the achieved speed for the first link is greater than or equal to the expected minimum data rate for the first speed tier. Step1806includes steps1810and1812, one of which is performed during an iteration of step1806. In step1810the test server determines that the first link does not support the minimum expected rate for the first speed tier. In step1812the test server determines that the first link does support the minimum expected rate for the first speed tier. Operation proceeds from step1806to step1814. In step1814the test server takes action with respect to the first link based on whether or not the first link supports the minimum expected communications data rate for the first speed tier. Step1814includes steps1816,1818,1820and1828. In step1816, if the determination is that the first link does not support the minimum expected rate for the first speed tier, then operation proceeds from step1816to step1818. In step1816, if the determination is that the first link does support the minimum expected rate for the first speed tier, then operation proceeds from step1816to step1820. In step1818the test server takes remedial action. Step1818includes steps1822,1824and1826. One or more or all of steps1822,1824and1826are performed during an iteration of step1818. In step1822the test server sends a command to the wireless extender to remove traffic from the first link. In step1824, the test server signals the wireless extender to change the channel used for the first link. For example, the channel used for the first link is changed by changing frequencies, bandwidth, modulation and coding scheme, number of spatial streams, transmission times, tone hopping patterns and/or codes used to implement the channel being used for the first link. In step1826the test server sends a message to the mobile handset to cause the mobile handset to display a message in a display to the user of the mobile handset to move the wireless extender closer to the base station. Operation proceeds from step1818to step1828. In step1828the test server initiates retesting of the first link to check that the first link supports the minimum expected data rate. Returning to step1820, in step1820the test server determines a first link transmit power level, e.g., an extender to mobile handset transmit power level, to be used on the first link. Step1820includes steps1830and1832. In step1830the test server determines a transmit power level at which the first link fails to satisfy the minimum expected communications data rate for the first speed tier. Operation proceeds from step1830to step1832.
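Before turning to step1832, the tier verification and remediation loop of steps1806through1828above can be summarized in a short sketch. The run_speed_test and apply_remedy callables are hypothetical stand-ins for the speed test of step1804and the remedial actions of steps1822,1824and1826; for simplicity the sketch applies one remedy per retry, whereas the method described above may apply one or more remedial actions per iteration of step1818.

    from enum import Enum, auto

    class Remedy(Enum):
        REMOVE_TRAFFIC = auto()      # step1822: remove traffic from the link
        CHANGE_CHANNEL = auto()      # step1824: change the channel for the link
        RELOCATE_EXTENDER = auto()   # step1826: ask the user to move the extender

    def verify_link(run_speed_test, apply_remedy, min_tier_rate_mbps):
        """Test the link against the speed tier; on failure apply the next
        remedial action and retest (step1828), until the link is verified
        or the remedies are exhausted."""
        remedies = iter(Remedy)
        while True:
            achieved = run_speed_test()            # steps1804/1828: (re)test
            if achieved >= min_tier_rate_mbps:     # step1812: tier supported
                return True
            try:
                remedy = next(remedies)            # step1818: remedial action
            except StopIteration:
                return False                       # remedies exhausted
            apply_remedy(remedy)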
In step1832the test server sets the first link transmit power level to a power level above the power level at which the first link fails to satisfy the minimum first tier speed level, e.g., to a power level a predetermined amount, e.g., 2 dB, above the determined highest transmit power level at which the first link fails to satisfy the minimum expected communications rate, thus resulting in the transmit power being set slightly above, but near, the power level at which the first tier data rate will fail to be satisfied. Operation proceeds from step1820, via connecting node A1834to step1836. In step1836the test server identifies dynamic frequency selection (DFS) channels which have a maximum permitted transmit power below the first link transmit power level. Operation proceeds from step1836to step1838. In step1838DFS channels having a maximum permitted transmit power level below the first link transmit power level are added, e.g., by the test server, to a first DFS channel blacklist stored in memory, said first DFS channel blacklist listing DFS channels which are not to be used by the first link. Operation proceeds from step1838to step1840. In step1840the test server identifies dynamic frequency selection (DFS) channels which have a maximum permitted transmit power equal to or above the first link transmit power level. Operation proceeds from step1840to step1842. In step1842DFS channels having a maximum permitted transmit power level equal to or above the first link transmit power level are added, e.g., by the test server, to a first DFS channel whitelist stored in memory, said first DFS channel whitelist listing DFS channels which are available for use by the first link. Operation proceeds from step1842to step1844. In step1844the test server communicates the determined first link transmit power level and one or both of the first link DFS channel blacklist and first link DFS channel whitelist to the wireless extender for use in configuring the first link. Operation proceeds from step1844to step1846. In step1846the wireless extender transmits to the mobile handset using the first link transmit power level. Operation proceeds from step1846, via connecting node B1848, to step1850. In step1850the test server sends a command to a base station, e.g., base station112, at a first customer premises, e.g., customer premises102, to perform a speed test on a second link between said base station and said wireless extender, said speed test determining an achieved data rate, e.g., speed, for the second link. Operation proceeds from step1850to step1852. In step1852the test server determines if the achieved rate for the second link determined by the speed test on the second link between the base station and the wireless extender supports a minimum expected communications data rate for said first speed tier. Step1852includes steps1854and1856, one of which is performed during an iteration of step1852. In step1854the test server determines that the second link does not support the minimum expected rate for the first speed tier. In step1856the test server determines that the second link does support the minimum expected rate for the first speed tier. Operation proceeds from step1852to step1858. In step1858the test server takes action with respect to the second link based on whether or not the second link supports the minimum expected communications data rate for the first speed tier. Step1858includes steps1860,1862,1872and1864, discussed below.
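The DFS channel classification of steps1836through1842above amounts to partitioning the DFS channel set around the determined link transmit power level. A minimal sketch follows; the channel numbers and power limits in the example are illustrative only, not regulatory values.

    from dataclasses import dataclass

    @dataclass
    class DfsChannel:
        number: int              # channel number, e.g., 52, 100, 120
        max_tx_power_dbm: float  # maximum permitted transmit power

    def classify_dfs_channels(channels, link_tx_power_dbm):
        """Steps1836-1842: blacklist DFS channels whose maximum permitted
        transmit power is below the determined link transmit power level;
        whitelist those at or above it."""
        blacklist = [c.number for c in channels
                     if c.max_tx_power_dbm < link_tx_power_dbm]
        whitelist = [c.number for c in channels
                     if c.max_tx_power_dbm >= link_tx_power_dbm]
        return blacklist, whitelist

    # Illustrative example (power limits are made up for the sketch):
    channels = [DfsChannel(52, 23.0), DfsChannel(100, 20.0), DfsChannel(120, 27.0)]
    blacklist, whitelist = classify_dfs_channels(channels, link_tx_power_dbm=22.0)
    # blacklist == [100]; whitelist == [52, 120].  Per step1844, the lists and
    # the transmit power level are then communicated to the wireless extender.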
In step1860, if the determination is that the second link does not support the minimum expected rate for the first speed tier, then operation proceeds from step1860to step1862. In step1860, if the determination is that the second link does support the minimum expected rate for the first speed tier, then operation proceeds from step1860to step1864. In step1862the test server takes remedial action with respect to the second link. Step1862includes steps1866,1868and1870. One or more or all of steps1866,1868and1870are performed during an iteration of step1862. In step1866the test server sends a command to the base station to remove traffic from the second link. In step1868, the test server signals the base station to change the channel used for the second link. In step1870the test server sends a message to the mobile handset to cause the mobile handset to display a message in a display to the user of the mobile handset to move the wireless extender closer to the base station. Operation proceeds from step1866or1868to step1872. In step1872the test server initiates retesting of the second link in an attempt to verify that the second link supports the minimum expected communications data rate. Operation proceeds from step1870to step1873, in which retesting of both the first and second links is performed to determine if they support the minimum expected communications data rate after the wireless extender has been moved. Returning to step1864, in step1864the test server determines a second link transmit power level, e.g., a base station to extender transmit power level, to be used on the second link. Step1864includes steps1874and1876. In step1874the test server determines a transmit power level at which the second link fails to satisfy the minimum expected communications data rate for the first speed tier. Operation proceeds from step1874to step1876. In step1876the test server sets the second link transmit power level to a power level above the power level at which the second link fails to satisfy the minimum expected communications data rate for the first tier speed level, e.g., the test server sets the power level to a power level a predetermined amount, e.g., 2 dB, above the determined transmit power at which the second link fails to satisfy the minimum expected communications rate, thus resulting in the transmit power level being set slightly above, but near, the power level at which the first tier data rate will fail to be satisfied. Operation proceeds from step1864, via connecting node C1878to step1880. In step1880the test server identifies dynamic frequency selection (DFS) channels which have a maximum permitted transmit power below the second link transmit power level. Operation proceeds from step1880to step1882. In step1882DFS channels having a maximum permitted transmit power level below the second link transmit power level are added, e.g., by the test server, to a second DFS channel blacklist stored in memory, said second DFS channel blacklist listing DFS channels which are not to be used by the second link. Operation proceeds from step1882to step1884. In step1884the test server identifies dynamic frequency selection (DFS) channels which have a maximum permitted transmit power equal to or above the second link transmit power level. Operation proceeds from step1884to step1886.
In step1886DFS channels having a maximum permitted transmit power level equal to or above the second link transmit power level are added, e.g., by the test server, to a second DFS channel whitelist stored in memory, said second DFS channel whitelist listing DFS channels which are available for use by the second link. Operation proceeds from step1886to step1888. In step1888the test server communicates the determined second link transmit power level and one or both of the second link DFS channel blacklist and second link DFS channel whitelist to the base station for use in configuring the second link. Operation proceeds from step1888to step1890. In step1890the base station transmits to the wireless extender using the second link transmit power level. FIG.17, comprising the combination ofFIG.17A,FIG.17B,FIG.17CandFIG.17D, is a drawing of an assembly of components1900, comprising the combination of Part A1901, Part B1903, Part C1905and Part D1907, which may be included in a test server, in accordance with an exemplary embodiment. Exemplary assembly of components1900may be, and sometimes is, included in a test server, e.g., test server108or test server ofFIG.1or test server900ofFIG.7, in accordance with an exemplary embodiment. Assembly of components1900can be, and in some embodiments is, used in test server108and/or test server900. The components in the assembly of components1900can, and in some embodiments are, implemented fully in hardware within the processor904, e.g., as individual circuits. The components in the assembly of components1900can, and in some embodiments are, implemented fully in hardware within the assembly of hardware components908, e.g., as individual circuits corresponding to the different components. In other embodiments some of the components are implemented, e.g., as circuits, within the processor904with other components being implemented, e.g., as circuits within assembly of components908, external to and coupled to the processor904. As should be appreciated the level of integration of components on the processor and/or with some components being external to the processor may be one of design choice. Alternatively, rather than being implemented as circuits, all or some of the components may be implemented in software and stored in the memory906of the test server900, with the components controlling operation of test server900to implement the functions corresponding to the components when the components are executed by a processor, e.g., processor904. In some such embodiments, the assembly of components1900is included in the memory906as assembly of components914. In still other embodiments, various components in assembly of components1900are implemented as a combination of hardware and software, e.g., with another circuit external to the processor providing input to the processor904which then under software control operates to perform a portion of a component's function. While processor904is shown in theFIG.7embodiment as a single processor, e.g., computer, it should be appreciated that the processor904may be implemented as one or more processors, e.g., computers. When implemented in software the components include code, which when executed by the processor904, configure the processor904to implement the function corresponding to the component.
In embodiments where the assembly of components1900is stored in the memory906, the memory906is a computer program product comprising a computer readable medium comprising code, e.g., individual code for each component, for causing at least one computer, e.g., processor904, to implement the functions to which the components correspond. Completely hardware based or completely software based components may be used. However, it should be appreciated that any combination of software and hardware, e.g., circuit implemented components, may be used to implement the functions. As should be appreciated, the components illustrated inFIG.17control and/or configure the test server900or elements therein such as the processor904, to perform the functions of corresponding steps illustrated and/or described in the method of one or more of the flowcharts, signaling diagrams and/or described with respect to any of the Figures. Thus the assembly of components1900includes various components that perform functions of corresponding one or more described and/or illustrated steps of an exemplary method, e.g., steps of the method of flowchart1800ofFIG.16, steps of the method of flowchart200ofFIG.2, steps of the method of flowchart500ofFIG.3, steps of the flowchart600ofFIG.4, steps of the method of flowchart700ofFIG.5, steps of the flowchart800ofFIG.6, and/or described or shown with respect to any of the other figures, e.g., steps which are performed by a test server. In some embodiments a test server, e.g., test server108or test server ofFIG.1and/or test server900ofFIG.7includes assembly of components1900ofFIG.17and assembly of components1400ofFIG.12. Assembly of components1900includes a component1904configured to operate a test server to send a command to a wireless extender at a first customer premises to perform a speed test on a first link between the wireless extender and a mobile handset, said speed test determining an achieved data rate for the first link, and a component1906configured to operate the test server to determine if the achieved rate for the first link determined by the speed test on the first link between the wireless extender and the mobile handset supports a minimum expected communications data rate for a first speed tier, said first speed tier being a wireless communications speed level to be supported by the first link. Component1906includes a component1910configured to determine that the first link does not support the minimum expected rate for the first speed tier, and a component1912configured to determine that the first link does support the minimum expected rate for the first speed tier. Assembly of components1900further includes a component1914configured to take action with respect to the first link based on whether or not the first link supports the minimum expected communications data rate for the first speed tier. Component1914includes a component1916configured to control operation as a function of the determination whether or not the first link supports the minimum expected rate for the first speed tier, a component1918configured to operate the test server to take remedial action, e.g., in response to a determination that the first link does not support the minimum expected rate for the first speed tier, and a component1920configured to determine a first link transmit power level to be used on the first link, e.g., in response to a determination that the first link does support the minimum expected rate for the first speed tier.
Component1918includes a component1922configured to send a command to the wireless extender to remove traffic from the first link, a component1924configured to signal the wireless extender to change the channel used for the first link, and a component1926configured to send a message to the mobile handset to cause the mobile handset to display a message or indication in a display to the user of the mobile handset to move the wireless extender closer to the base station. Assembly of components1900further includes a component1928configured to operate the test server to initiate the retesting of the first link to check that the first link supports the minimum expected communications data rate, e.g., following the taking of remedial action, e.g., by one or more of components1922,1924and/or1926. Component1920includes a component1930configured to determine a transmit power level at which the first link fails to satisfy the minimum expected communications data rate for the first speed tier and a component1932configured to set the first link transmit power level to a power level above the power level at which the first link fails to satisfy the minimum expected speed tier level, e.g., 2 dB above. Assembly of components1900further includes a component1936configured to identify dynamic frequency selection (DFS) channels which have a maximum permitted transmit power below the first link transmit power level, a component1938configured to add DFS channels having a maximum permitted transmit power level below the first link transmit power level to a first DFS channel blacklist stored in memory, said first DFS channel blacklist listing DFS channels which are not to be used by said first link, a component1940configured to identify dynamic frequency selection (DFS) channels which have a maximum permitted transmit power equal to or above the first link transmit power level, and a component1942configured to add DFS channels having a maximum permitted transmit power level equal to or above the first link transmit power level to a first DFS channel whitelist stored in memory, said first DFS channel whitelist listing DFS channels which are available for use by said first link. Assembly of components1900further includes a component1944configured to operate the test server to communicate the determined first link transmit power and one or both of the first link DFS channel blacklist and the first link DFS channel whitelist to the wireless extender for use in configuring the wireless extender. Assembly of components1900further includes a component1950configured to operate the test server to send a command to a base station at a first customer premises to perform a speed test on a second link between the base station and the wireless extender, said speed test determining an achieved data rate for the second link, and a component1952configured to operate the test server to determine if the achieved rate for the second link determined by the speed test on the second link between the base station and the wireless extender supports the minimum expected communications data rate for a first speed tier. Component1952includes a component1954configured to determine that the second link does not support the minimum expected rate for the first speed tier, and a component1956configured to determine that the second link does support the minimum expected rate for the first speed tier.
Assembly of components1900further includes a component1958configured to take action with respect to the second link based on whether or not the second link supports the minimum expected communications data rate for the first speed tier. Component1958includes a component1960configured to control operation as a function of the determination whether or not the second link supports the minimum expected rate for the first speed tier, a component1962configured to operate the test server to take remedial action, e.g., in response to a determination that the second link does not support the minimum expected rate for the first speed tier, and a component1964configured to determine a second link transmit power level to be used on the second link, e.g., in response to a determination that the second link does support the minimum expected rate for the first speed tier. Component1962includes a component1966configured to send a command to the base station to remove traffic from the second link, a component1968configured to signal the base station to change the channel used for the second link, and a component1970configured to send a message to the mobile handset to cause the mobile handset to display a message or indication in a display to the user of the mobile handset to move the wireless extender closer to the base station. Assembly of components1900further includes a component1972configured to operate the test server to initiate the retesting of the second link to check that the second link supports the minimum expected communications data rate, e.g., following the taking of remedial action, e.g., where the remedial action is one or both of: sending a command to the base station to remove traffic from the second link or signaling the base station to change the channel used for the second link, and a component1973configured to retest both the first and second links, e.g., sequentially, to determine if they support the minimum expected data rate for the first speed tier, after the wireless extender has been moved, e.g., moved closer to the base station in response to the remedial action of component1970. Component1964includes a component1974configured to determine a transmit power level at which the second link fails to satisfy the minimum expected communications data rate for the first speed tier and a component1976configured to set the second link transmit power level to a power level above the power level at which the second link fails to satisfy the minimum expected speed tier level, e.g., 2 dB above. Assembly of components1900further includes a component1980configured to identify dynamic frequency selection (DFS) channels which have a maximum permitted transmit power below the second link transmit power level, a component1982configured to add DFS channels having a maximum permitted transmit power level below the second link transmit power level to a second DFS channel blacklist stored in memory, said second DFS channel blacklist listing DFS channels which are not to be used by said second link, a component1984configured to identify dynamic frequency selection (DFS) channels which have a maximum permitted transmit power equal to or above the second link transmit power level, and a component1986configured to add DFS channels having a maximum permitted transmit power level equal to or above the second link transmit power level to a second DFS channel whitelist stored in memory, said second DFS channel whitelist listing DFS channels which are available for use by said second link.
Assembly of components1900further includes a component1988configured to operate the test server to communicate the determined second link transmit power and one or both of the second link DFS channel blacklist and the second DFS channel whitelist to the base station for use in configuring the base station. Various aspects and/or features of some embodiments of the present invention are discussed below. The RSSI level, typically used for making a wireless extender placement determination, is calculated from a beacon transmitted by the base station at full power/low MCS. However, it may be desirable for data to be transmitted from the base station to the wireless extender at a higher MCS. In order to use the highest modulation rate possible, the PA will typically need to back off its power level from the full power level. This is why RSSI, based on a beacon signal at full power/low MCS, is not enough to know range, if different MCS levels are to be used for data transmission. Typically different wireless transmission devices, e.g., different wireless WiFi extenders or different WiFi base stations, have different maximum rates and/or have different fall off characteristics. The rate vs range performance will vary depending upon the following: vendor/model/chipset; RSSI level; RF interference level (SINR) for WiFi extender; RF interference level (SINR) for WiFi base station; physical path loss, e.g., physical path loss due to walls, furniture, etc.; and physical distance between the WiFi extender and the WiFi base station. There is no single "distance" vs. speed relationship that works in a real house. Various embodiments, in accordance with the present invention, are directed to extender placement, e.g., WiFi extender placement, at a customer premises, e.g., a home site or a business site, including a base station, e.g., a WiFi base station. In some embodiments of the present invention, the following steps are automated: 1) independent link analysis leveraging agents, e.g., test apps, at the wireless extender, at the base station, and at a test server, e.g., a test server in a cloud system; 2) use of power control to find the margin of each radio link, e.g., front haul radio link between the wireless extender and a mobile handset, and a back haul radio link between the wireless base station and the wireless extender, and blacklisting/whitelisting DFS channels; and 3) providing service level agreement (SLA) speed tier link analysis. In some embodiments of the present invention the methods and/or apparatus, in accordance with the present invention, are used to find the sweet spot, e.g., ideal location, to locate a wireless extender, e.g., a WiFi extender, at the customer premises, said sweet spot being close enough for high enough throughput, e.g., to satisfy the SLA, and far enough to cover the whole home, or a large portion of it, and to isolate two or more access points and their clients, e.g., access points corresponding to adjacent homes in the neighborhood. In various embodiments, in accordance with a feature of some embodiments of the present invention, transmission power is adjusted at the wireless extender, e.g., WiFi extender, and/or at the base station, e.g., WiFi base station, based on determined margins which were determined during installation testing, e.g., automated installation testing.
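The margin determination referred to in the preceding paragraph, described for the first and second links in steps1830-1832and1874-1876above, can be sketched as a backoff loop. The set_tx_power and run_rate_test hooks are hypothetical device-control functions, and the starting power, floor, step size, and 2 dB margin are example values chosen for the sketch.

    def find_minimum_tx_power(set_tx_power, run_rate_test, min_tier_rate_mbps,
                              start_dbm=23.0, floor_dbm=0.0,
                              step_db=1.0, margin_db=2.0):
        """Back the transmit power off in steps, rate testing at each level,
        until the speed tier is no longer satisfied; then settle a fixed
        margin (e.g., 2 dB) above that failing level."""
        power = start_dbm
        failing_level = None
        while power >= floor_dbm:
            set_tx_power(power)
            if run_rate_test() < min_tier_rate_mbps:
                failing_level = power      # highest level at which the link fails
                break
            power -= step_db               # back off and test again
        if failing_level is None:
            final = floor_dbm              # tier satisfied even at the floor
        else:
            final = min(failing_level + margin_db, start_dbm)
        set_tx_power(final)
        return final

The power level returned by such a routine is what would then drive the DFS channel blacklisting/whitelisting described elsewhere herein.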
In some embodiments, power control can be, and sometimes is, used to lower the transmission power levels while maintaining a high throughput region for the wireless extender, e.g., WiFi extender, and/or base station, e.g., WiFi base station. In accordance with a feature of some embodiments of the present invention, the system analysis used to determine wireless extender placement and determine configuration information, e.g., configuration information of the wireless extender and base station, uses software link-analysis agents, e.g., test apps, on a wireless extender, e.g., WiFi extender, a base station, e.g., WiFi base station, a test server, e.g., a test server in a cloud system, and/or a mobile handset, e.g., a wireless, e.g., WiFi, test tool or a user equipment device, e.g., a mobile user customer device with WiFi. In various embodiments, the software link analysis agents, e.g., test apps, support one or more or all of: RSSI measurements, rate testing over wireless links, SLA evaluation for links, traffic reduction on a link, channel changing for a link, power margin measurements for a link, transmission power level setting, whitelisting/blacklisting of DFS channels in view of a determined transmission power level for a link, reporting and displaying of results and/or recommendations, e.g., a recommendation to relocate a wireless extender, and processing of user input, e.g., test operator input to start an automated test. In some embodiments, a technician, located at an edge of the home, presses a button, e.g., on a GUI interface of a mobile handset, which is a WiFi test tool, to start automated ecosystem analysis. This phase of testing checks to see if the WiFi extender coverage is adequate to the edge of the home, independent of backhaul (WiFi base station to WiFi extender) performance, e.g., this phase of automated testing measures the link (front haul link (WiFi extender to mobile handset)) from the wireless extender to a possible client location and determines if the desired rate, e.g., in accordance with the SLA, has been achieved during the rate testing. If problems are detected, e.g., the desired data rate corresponding to the rate tier in the SLA for the customer premises is not achieved, a remediation action is performed including: i) changing channels for the link, ii) reducing traffic on the link, or iii) notifying the operator of the wireless handset to move the location of the wireless extender, e.g., closer to the base station. Following the remediation action, the wireless link between the wireless extender and the mobile handset is retested, and if the speed test passes, then the power margin is assessed, e.g., via a sequence of power backoffs and rate tests until failure occurs, the transmission power level for the wireless extender is set to a determined minimal acceptable level for the link, and DFS channels are whitelisted and/or blacklisted based on the determined transmission power setting for the link. Then, the automated system proceeds to check if the location of the wireless extender supports adequate backhaul for the link between the WiFi base station and the wireless extender. In this phase of the testing the link between the WiFi base station and the WiFi extender is tested independent of the internet speed. This phase of automated testing measures the link (back haul link (WiFi base station to WiFi extender)) and determines if the desired rate has been achieved during the rate testing.
If problems are detected, e.g., the desired data rate is not achieved, a remediation action is performed including: i) changing channels for the link, ii) reducing traffic on the link, or iii) notifying the operator of the wireless handset to move the location of the wireless extender, e.g., closer to the base station. Following the remediation action, retesting is performed. If the remediation calls for relocation of the wireless extender, then the front haul link testing is repeated, followed by back haul link re-testing. However, if the remediation calls for changing channels or reducing traffic on the link between the base station and wireless extender, then following the remediation, the link between the wireless base station and the wireless extender is retested to see if it satisfies the speed requirements. If the speed test passes, then the power margin for the link between the base station and wireless extender is assessed, e.g., via a sequence of power backoffs and rate tests until failure occurs, the transmission power level for the base station is set to a determined minimal acceptable level for the link, and DFS channels are whitelisted and/or blacklisted based on the determined transmission power setting for the link between the base station and wireless extender. Once the testing determines that the front haul link (wireless extender to mobile handset) and back haul link (wireless base station to wireless extender) are acceptable in terms of data rate and have been configured for transmission power levels, e.g. optimal transmission power levels, then an end-to-end performance test, e.g., an end-to-end rate test, over an end-to-end connection path, is performed from the test server to the mobile handset, to verify that operation is acceptable, said end-to-end connection path including: i) a link between the test server and the base station, which traverses the Internet and, in some embodiments, a cable modem/PON, ii) the wireless link between the base station and the wireless extender, and iii) the wireless link between the wireless extender and the mobile handset. Current systems lack the ability to isolate each link between the elements in the network. Various embodiments implemented in accordance with one or more features of the present invention allow a technician to detect flaws in the system with a higher accuracy than known techniques. In a residential/SMB deployment, there are three (3) independent links: i) a WiFi extender to edge-of-network (EON) device link, sometimes referred to as a front haul link; ii) a WiFi extender back to WiFi base station wired or wireless link, sometimes referred to as a backhaul link; and iii) a WiFi base station to internet cable/optical wireline link. In some embodiments, in accordance with the present invention, the methods and apparatus can, and sometimes do, detect, with regard to the WiFi base station to internet connection, one or more of: provisioned speeds are improperly configured, issues with infrastructure, and issues with backend network capability. In some embodiments, in accordance with the present invention, the methods and apparatus can, and sometimes do, detect, with regard to the WiFi base station to WiFi extender connection, one or more of: issues with in-band interference, issues with backhaul signal strength, and issues with channel conditions.
In some embodiments, in accordance with the present invention, the methods and apparatus can, and sometimes do, detect, with regard to the WiFi extender to EON device connection, one or more of: issues with in-band interference, issues with overall coverage, and issues with channel conditions. Current art troubleshooting techniques only evaluate two channel characteristics: throughput and signal strength. In accordance with some embodiments of the present invention, additional analytics can be, and sometimes are, gathered by the WiFi base station and WiFi extender that can help identify problematic channel conditions. Various features of the current invention are directed to physical layer analysis, e.g., performing a SLAM determination to determine whether or not a physical link supports a speed tier. RSSI can be, and sometimes is, evaluated to determine the measured power of an RF signal. RSSI measurements can be, and sometimes are, used to detect: i) that link endpoints are too far from each other, and ii) destructive interference. Problems with RSSI may result in: i) lower data rates, ii) lower MCS rates, and/or iii) lower SS. Frequency can be, and sometimes is, evaluated to determine what frequency is being used for a link, e.g., 2.4 GHz or 5 GHz. Different frequencies of operation have different characteristics. Frequency measurements and/or signal measurements at different frequencies, e.g., different frequencies of interest, can be, and sometimes are, used to detect: i) in-band destructive interference, ii) in-band congestion, and/or iii) device limitations. Various frequency related effects include: i) 2.4 GHz has a lower rate but increased signal penetration; ii) 5 GHz has a higher theoretical data rate but decreased signal penetration; iii) 2.4 GHz will operate using 802.11n; and iv) 5 GHz will operate using either 802.11n or 802.11ac. The standard in use can be, and sometimes is, evaluated, e.g., whether the IEEE standard is 802.11n or 802.11ac. Different standards may correspond to different operating frequencies, and there may be, and sometimes are, device limitations with regard to which standards are supported. 802.11n has a lower data rate than 802.11ac. The modulation and coding scheme (MCS) information can be, and sometimes is, evaluated, e.g., to determine the primary MCS rate. Problems with MCS can cause RSSI issues and/or destructive interference. Lower MCS rates have lower data rates. Spatial stream information can be, and sometimes is, evaluated, e.g., to determine the primary number of SS. Problems with SS can cause RSSI issues and/or destructive interference. A lower number of SS results in lower data rates. Bandwidth can be, and sometimes is, evaluated, e.g., to determine the primary bandwidth. Problems with bandwidth can cause co-channel destructive interference. Reduced bandwidth is related to lower rates. In various embodiments, the following parameters: frequency, standard, MCS, spatial stream, and bandwidth, corresponding to a device, e.g., a WiFi extender or a WiFi base station, and/or a physical link, e.g., a physical link between the WiFi extender and an EON device or between a WiFi base station and a WiFi extender, are evaluated to determine if the physical link supports the bit rate of the speed tier based on the SLA corresponding to the customer premises. In some embodiments implemented in accordance with the present invention, DFS channels are whitelisted and/or blacklisted.
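The whitelisting/blacklisting decision mentioned above, and elaborated in the paragraph that follows, can be captured in a minimal sketch, assuming each DFS channel carries a regulatory maximum permitted transmit power: a channel is whitelisted for a link only if that maximum is at or above the transmit power determined for the link. The channel numbers and power limits in the usage example are placeholders, not regulatory data.

```python
# Minimal sketch of per-link DFS channel classification: a DFS channel is
# available (whitelisted) only if its maximum permitted transmit power is at
# or above the transmit power level determined for the link; otherwise it is
# blacklisted so clients are not stranded by a power reduction.

def classify_dfs_channels(dfs_channel_limits, link_tx_power_dbm):
    """Split DFS channels into (whitelist, blacklist) for one link.

    dfs_channel_limits: mapping of channel number -> max permitted power (dBm).
    link_tx_power_dbm: transmit power determined for the link (dBm).
    """
    whitelist, blacklist = [], []
    for channel, max_power_dbm in sorted(dfs_channel_limits.items()):
        if max_power_dbm >= link_tx_power_dbm:
            whitelist.append(channel)  # enough headroom: available for use
        else:
            blacklist.append(channel)  # insufficient headroom: do not use
    return whitelist, blacklist

# Usage with made-up limits:
#   classify_dfs_channels({52: 20.0, 100: 17.0, 132: 24.0}, 18.0)
#   -> ([52, 132], [100])
```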
If the exemplary method, in accordance with the present invention, determines that the power needed to support DFS channels is available, the DFS channels will be whitelisted and available for use. If the exemplary method, in accordance with the present invention, determines that the power needed to support DFS channels is not available, the DFS channels will be blacklisted and not available for use. This technique avoids stranding clients due to power level reductions in DFS channels. Methods and apparatus, in accordance with some embodiments of the present invention, allow a service provider, who manages WiFi base stations and/or WiFi extenders, to reduce the amount of truck rolls to a customer premises, thereby reducing overall costs. Methods and apparatus, in accordance with some embodiments of the present invention, provide the technician and/or customer with a better understanding of extender placement. Methods and apparatus, in accordance with some embodiments of the present invention, may, and sometimes do, one or more of the following: i) reduce the number of extenders needed at a customer premises; ii) provide an understanding of speed balance between the links; iii) provide an understanding of power range for maximum throughput; iv) provide SLA and margin for the placement; v) enable DFS channels when coverage allows; vi) provide one step independent link analysis; and vii) facilitate robustness of each link for each speed tier. Dynamic Frequency Selection (DFS) is a spectrum-sharing mechanism that allows wireless LANs (WLANs) to coexist with radar systems. It automatically selects a frequency that does not interfere with certain radar systems while operating in the 5 GHz band. DFS is a feature of ETSI BRAN HIPERLAN/2 and IEEE Standard 802.11h. Numbered List of Exemplary Method Embodiments: Method Embodiment 1 A method of implementing a communications system, the method comprising: operating (1804) a test server (108) to send a command to a wireless extender (114) at a first customer premises (102) to perform a speed test on a first link between said wireless extender and a mobile handset (116), said speed test determining an achieved data rate (e.g., speed) for the first link; operating (1806) the test server (108) to determine if the achieved data rate for the first link (122) determined by the speed test on the first link between said wireless extender and said mobile handset supports a minimum expected communications data rate (e.g., speed in bits per second) for a first speed tier (e.g., determine if the achieved speed for the first link is greater than or equal to the expected minimum data rate for the first speed tier), said first speed tier being a wireless communications speed level to be supported by said first link; and taking action (1814) with respect to the first link based on whether or not the first link supports the minimum expected communications data rate for the first speed tier.
Method Embodiment 2 The method of Method Embodiment 1, wherein the first link is determined not to support the minimum expected communications data rate for the first speed tier; and wherein said step of taking action (1814) with respect to the first link includes: in response to determining that the first link does not support the first speed tier, operating the test server to i) take (1818) remedial action (e.g., change the channel used on the first link, eliminate traffic on the link or initiate moving of the extender closer to the base station) and ii) initiate (1828) retesting (a step which loops back after some change) of the first link to check that the first link supports the minimum expected communications data rate. Method Embodiment 3 The method of Method Embodiment 2, wherein operating (1818) the test server to take remedial action includes one or more of: sending (1822) a command to the wireless extender to remove traffic from the first link; or signaling (1824) the wireless extender to change the channel used for the first link (e.g., by changing frequencies, bandwidth, speed-tier, modulation and coding scheme, number of spatial streams, transmission times, tone hopping patterns and/or codes used to implement the channel being used for the first link). Method Embodiment 4 The method of Method Embodiment 2, wherein operating the test server to take remedial action includes one or more of: sending (1822) a command to the wireless extender to remove traffic from the first link; signaling (1824) the wireless extender to change the channel (e.g., by changing frequencies, bandwidth, speed-tier, modulation and coding scheme, number of spatial streams, transmission times, tone hopping patterns and/or codes used to implement the channel being used for the first link); or sending (1826) a message to said mobile handset to cause the mobile handset to display a message, on a display of the mobile handset, instructing the user of the handset to move the extender closer to the base station. Method Embodiment 5 The method of Method Embodiment 1, wherein the first link is determined to support the minimum expected communications data rate for the first speed tier; and wherein said step of taking action (1814) includes: in response to determining that the first link supports the first speed tier, operating (1820) the test server to determine a first link transmit power level (e.g. an extender to mobile handset transmit power level) to be used on the first link. Method Embodiment 6 The method of Method Embodiment 5, wherein operating (1820) the test server to determine a first link transmit power level (e.g. an extender to mobile handset transmit power level) to be used on the first link includes: determining (1830) a transmit power level at which the first link fails to satisfy the minimum expected communications data rate for the first speed tier; and setting (1832) the first link transmit power level to a power level above the determined power level at which the first link fails to satisfy the minimum first tier speed level (e.g. to a power level a predetermined amount, e.g., 2 dB, above the determined highest transmit power level at which the first link first fails to satisfy the first minimum expected communications rate, thus resulting in the transmit power being set slightly above the power level where the first data rate will be satisfied but near the level at which the first data rate will fail to be satisfied).
Method Embodiment 7 The method of Method Embodiment 5, further comprising: operating (1836) the test server to identify Dynamic Frequency Selection (DFS) channels which have a maximum permitted transmit power below the first link transmit power level; and adding (1838) DFS channels having a maximum permitted transmit power below the first link transmit power level to a first DFS channel blacklist stored in memory, said first DFS channel blacklist listing DFS channels which are not to be used by said first link. Method Embodiment 8 The method of Method Embodiment 7, further comprising: operating (1840) the test server to identify DFS channels which have a maximum permitted transmit power equal to or above the first link transmit power level; and adding (1842) identified DFS channels having a maximum permitted transmit power equal to or above the first link transmit power level to a first link DFS channel whitelist stored in memory, said first link DFS channel whitelist listing DFS channels which are available for use by said first link. Method Embodiment 9 The method of Method Embodiment 8, further comprising: operating (1844) the test server to communicate the determined first link transmit power and one or both of the first link DFS channel blacklist and the first link DFS channel whitelist to the wireless extender for use in configuring the first link. Method Embodiment 10 The method of Method Embodiment 9, further comprising: operating (1846) the wireless extender to transmit to the mobile handset using said first link transmit power level. Method Embodiment 11 The method of Method Embodiment 1, further comprising: operating (1850) the test server (108) to send a command to a base station at the first customer premises (102) to perform a speed test on a second link extending between said base station and said wireless extender, said speed test determining an achieved data rate (e.g., speed) for the second link; operating (1852) the test server (108) to determine if the achieved data rate for the second link (122) determined by the speed test on the second link between said base station and said wireless extender supports the minimum expected communications data rate (e.g., the speed indicated by the test results in bits per second is greater than or equal to the minimum expected communications data rate as expressed in bits per second) for the first speed tier; and taking (1858) action based on whether or not the second link supports the minimum expected communications data rate. Method Embodiment 12 The method of Method Embodiment 11, wherein the second link is determined not to have been verified to support the first speed tier; and wherein said step of taking (1858) action with respect to the second link includes: in response to determining that the second link does not support the first speed tier, operating the test server to i) take (1862) remedial action with respect to the second link (e.g., change the channel used on the second link, eliminate traffic on the second link or initiate moving of the extender closer to the base station) and ii) initiate (1872) retesting (a step which loops back after some change) of the second link in an attempt to verify that the second link supports the minimum expected communications data rate.
Method Embodiment 13 The method of Method Embodiment 11, wherein operating (1862) the test server to take remedial action with respect to the second link includes one or more of: sending (1866) a command to the base station to remove traffic from the second link; or signaling (1868) the base station to change the channel used for the second link (e.g., by changing frequencies, bandwidth, speed-tier, modulation and coding scheme, number of spatial streams, transmission times, tone hopping patterns and/or codes used to implement the channel being used for the second link). Method Embodiment 14 The method of Method Embodiment 12, wherein operating (1862) the test server to take remedial action includes one or more of: sending (1866) a command to the base station to remove traffic from the second link; signaling (1868) the base station to change the channel (e.g., by changing frequencies, bandwidth, speed-tier, modulation and coding scheme, number of spatial streams, transmission times, tone hopping patterns and/or codes used to implement the channel being used for the second link); or sending (1870) a message to said mobile handset to cause the mobile handset to display a message, on the display of the mobile handset, instructing the user of the handset to move the extender closer to the base station. Method Embodiment 15 The method of Method Embodiment 14, further comprising: retesting (1873) both the first link and the second link to determine if they support the minimum expected communications data rate after the extender has been moved. Method Embodiment 16 The method of Method Embodiment 11, wherein the second link is determined to support the minimum expected communications data rate; and wherein said step of taking action (1858) includes: operating (1864) the test server to determine a second link transmit power level (e.g. a base station to extender transmit power level) to be used on the second link. Method Embodiment 17 The method of Method Embodiment 16, wherein operating (1864) the test server to determine a second link transmit power level to be used on the second link includes: determining (1874) a transmit power level at which the second link fails to support the minimum expected communications data rate for the first speed tier; and setting (1876) the second link transmit power level to a power level above the determined power level at which the second link fails to support the minimum expected communications data rate (e.g. to a power level a predetermined amount, e.g., 2 dB, above the determined transmit power level at which the second link first fails to satisfy the first minimum expected communications rate, thus resulting in the transmit power for the second link being set slightly above the power level where the first data rate will be satisfied but near the level at which the first data rate will fail to be satisfied on the second link). Method Embodiment 18 The method of Method Embodiment 16, further comprising: operating (1880) the test server to identify Dynamic Frequency Selection (DFS) channels which have a maximum permitted transmit power below the second link transmit power level; and adding (1882) DFS channels having a maximum permitted transmit power below the second link transmit power level to a second DFS channel blacklist stored in memory, said second DFS channel blacklist listing DFS channels which are not to be used by said second link.
Method Embodiment 19 The method of Method Embodiment 18, further comprising: operating (1884) the test server to identify DFS channels which have a maximum permitted transmit power equal to or above the second link transmit power level; and adding (1886) identified DFS channels having a maximum permitted transmit power equal to or above the second link transmit power level to a second link DFS channel whitelist stored in memory, said second link DFS channel whitelist listing DFS channels which are available for use by said second link. Method Embodiment 20 The method of Method Embodiment 19, further comprising: operating (1888) the test server to send said second link transmit power level and one or both of said second link DFS channel blacklist and said second link DFS channel whitelist to said base station for use in configuring said second link. Method Embodiment 21 The method of Method Embodiment 20, further comprising: operating (1890) the base station to transmit to said wireless extender using said second link transmit power level. Method Embodiment 22 The method of Method Embodiment 1, wherein said wireless extender is a WiFi wireless extender. Method Embodiment 23 The method of Method Embodiment 11, wherein said wireless extender is a WiFi wireless extender and wherein said base station is a WiFi base station. Numbered List of Exemplary System Embodiments: System Embodiment 1 A communications system comprising: a test server including a first processor, said first processor being configured to: operate (1804) the test server (108) to send a command to a wireless extender (114) at a first customer premises (102) to perform a speed test on a first link between said wireless extender and a mobile handset (116), said speed test determining an achieved data rate (e.g., speed) for the first link; determine if the achieved data rate for the first link (122) determined by the speed test on the first link between said wireless extender and said mobile handset supports a minimum expected communications data rate (e.g., speed in bits per second) for a first speed tier (e.g., determine if the achieved speed for the first link is greater than or equal to the expected minimum data rate for the first speed tier), said first speed tier being a wireless communications speed level to be supported by said first link; and take action (1814) with respect to the first link based on whether or not the first link supports the minimum expected communications data rate for the first speed tier. System Embodiment 2 The communications system of System Embodiment 1, wherein said first processor is configured to: i) take (1818) remedial action (e.g., change the channel used on the first link, eliminate traffic on the link or initiate moving of the extender closer to the base station) and ii) initiate (1828) retesting (a step which loops back after some change) of the first link to check that the first link supports the minimum expected communications data rate, in response to determining that the first link does not support the first speed tier, as part of being configured to take action (1814) with respect to the first link.
System Embodiment 3 The communications system of System Embodiment 2, wherein said first processor is configured to operate the test server to perform one or more of: i) sending (1822) a command to the wireless extender to remove traffic from the first link; or ii) signaling (1824) the wireless extender to change the channel used for the first link (e.g., by changing frequencies, bandwidth, speed-tier, modulation and coding scheme, number of spatial streams, transmission times, tone hopping patterns and/or codes used to implement the channel being used for the first link), as part of being configured to operate (1818) the test server to take remedial action. System Embodiment 4 The communications system of System Embodiment 2, wherein said first processor is configured to operate the test server to perform one or more of: sending (1822) a command to the wireless extender to remove traffic from the first link; signaling (1824) the wireless extender to change the channel (e.g., by changing frequencies, bandwidth, speed-tier, modulation and coding scheme, number of spatial streams, transmission times, tone hopping patterns and/or codes used to implement the channel being used for the first link); or sending (1826) a message to said mobile handset to cause the mobile handset to display a message, on a display of the mobile handset, instructing the user of the handset to move the extender closer to the base station, as part of being configured to operate the test server to take remedial action. System Embodiment 5 The communications system of System Embodiment 1, wherein said first processor is configured to determine a first link transmit power level (e.g. an extender to mobile handset transmit power level) to be used on the first link, in response to determining that the first link supports the first speed tier, as part of being configured to take action. System Embodiment 6 The communications system of System Embodiment 5, wherein said first processor is configured to: determine (1830) a transmit power level at which the first link fails to satisfy the minimum expected communications data rate for the first speed tier; and set (1832) the first link transmit power level to a power level above the determined power level at which the first link fails to satisfy the minimum first tier speed level (e.g. to a power level a predetermined amount, e.g., 2 dB, above the determined highest transmit power level at which the first link first fails to satisfy the first minimum expected communications rate, thus resulting in the transmit power being set slightly above the power level where the first data rate will be satisfied but near the level at which the first data rate will fail to be satisfied), as part of being configured to operate (1820) the test server to determine a first link transmit power level (e.g. an extender to mobile handset transmit power level) to be used on the first link. System Embodiment 7 The communications system of System Embodiment 5, wherein said first processor is further configured to: operate (1836) the test server to identify Dynamic Frequency Selection (DFS) channels which have a maximum permitted transmit power below the first link transmit power level; and add (1838) DFS channels having a maximum permitted transmit power below the first link transmit power level to a first DFS channel blacklist stored in memory, said first DFS channel blacklist listing DFS channels which are not to be used by said first link.
System Embodiment 8 The communications system of System Embodiment 7, wherein said first processor is further configured to: identify (1840) DFS channels which have a maximum permitted transmit power equal to or above the first link transmit power level; and add (1842) identified DFS channels having a maximum permitted transmit power equal to or above the first link transmit power level to a first link DFS channel whitelist stored in memory, said first link DFS channel whitelist listing DFS channels which are available for use by said first link. System Embodiment 9 The communications system of System Embodiment 8, wherein said first processor is further configured to: operate (1844) the test server to communicate the determined first link transmit power and one or both of the first link DFS channel blacklist and first link DFS channel whitelist to the wireless extender for use in configuring the first link. System Embodiment 10 The communications system of System Embodiment 9, further comprising: said wireless extender, said wireless extender including a second processor, and wherein said second processor is configured to operate (1846) the wireless extender to transmit to the mobile handset using said first link transmit power level. System Embodiment 11 The communications system of System Embodiment 1, wherein said first processor is further configured to: operate (1850) the test server (108) to send a command to a base station at the first customer premises (102) to perform a speed test on a second link extending between said base station and said wireless extender, said speed test determining an achieved data rate (e.g., speed) for the second link; determine (1852) if the achieved data rate for the second link (122) determined by the speed test on the second link between said base station and said wireless extender supports the minimum expected communications data rate (e.g., the speed indicated by the test results in bits per second is greater than or equal to the minimum expected communications data rate as expressed in bits per second) for the first speed tier; and take (1858) action based on whether or not the second link supports the minimum expected communications data rate. System Embodiment 12 The communications system of System Embodiment 11, wherein said first processor is configured to: i) take (1862) remedial action with respect to the second link (e.g., change the channel used on the second link, eliminate traffic on the second link or initiate moving of the extender closer to the base station) and ii) initiate (1872) retesting (a step which loops back after some change) of the second link in an attempt to verify that the second link supports the minimum expected communications data rate, in response to determining that the second link does not support the first speed tier, as part of being configured to take action. System Embodiment 13 The communications system of System Embodiment 11, wherein said first processor is configured to operate the test server to perform one or more of: sending (1866) a command to the base station to remove traffic from the second link; or signaling (1868) the base station to change the channel used for the second link (e.g., by changing frequencies, bandwidth, speed-tier, modulation and coding scheme, number of spatial streams, transmission times, tone hopping patterns and/or codes used to implement the channel being used for the second link), as part of being configured to operate (1862) the test server to take remedial action with respect to the second link.
System Embodiment 14 The communications system of System Embodiment 12, wherein said first processor is configured to operate the test server to perform one or more of: sending (1866) a command to the base station to remove traffic from the second link; signaling (1868) the base station to change the channel (e.g., by changing frequencies, bandwidth, speed-tier, modulation and coding scheme, number of spatial streams, transmission times, tone hopping patterns and/or codes used to implement the channel being used for the second link); or sending (1870) a message to said mobile handset to cause the mobile handset to display a message, on the display of the mobile handset, instructing the user of the handset to move the extender closer to the base station, as part of being configured to operate (1862) the test server to take remedial action. System Embodiment 15 The communications system of System Embodiment 14, wherein said first processor is further configured to control retesting (1873) of both the first link and the second link to determine if they support the minimum expected communications data rate after the extender has been moved. System Embodiment 16 The communications system of System Embodiment 11, wherein said first processor is configured to: determine a second link transmit power level (e.g. a base station to extender transmit power level) to be used on the second link, in response to a determination that the second link supports the minimum expected communications data rate, as part of being configured to take action with regard to the second link. System Embodiment 17 The communications system of System Embodiment 16, wherein said first processor is configured to: determine (1874) a transmit power level at which the second link fails to support the minimum expected communications data rate for the first speed tier; and set (1876) the second link transmit power level to a power level above the determined power level at which the second link fails to support the minimum expected communications data rate (e.g. to a power level a predetermined amount, e.g., 2 dB, above the determined transmit power level at which the second link first fails to satisfy the first minimum expected communications rate, thus resulting in the transmit power for the second link being set slightly above the power level where the first data rate will be satisfied but near the level at which the first data rate will fail to be satisfied on the second link), as part of being configured to determine a second link transmit power level to be used on the second link. System Embodiment 18 The communications system of System Embodiment 16, wherein said first processor is further configured to: identify (1880) Dynamic Frequency Selection (DFS) channels which have a maximum permitted transmit power below the second link transmit power level; and add (1882) DFS channels having a maximum permitted transmit power below the second link transmit power level to a second DFS channel blacklist stored in memory, said second DFS channel blacklist listing DFS channels which are not to be used by said second link.
System Embodiment 19 The communications system of System Embodiment 18, wherein said first processor is further configured to: identify DFS channels which have a maximum permitted transmit power equal to or above the second link transmit power level; and add (1886) identified DFS channels having a maximum permitted transmit power equal to or above the second link transmit power level to a second link DFS channel whitelist stored in memory, said second link DFS channel whitelist listing DFS channels which are available for use by said second link. System Embodiment 20 The communications system of System Embodiment 19, wherein said first processor is further configured to: operate (1888) the test server to send said second link transmit power level and one or both of said second link DFS channel blacklist and said second link DFS channel whitelist to said base station for use in configuring said second link. System Embodiment 21 The communications system of System Embodiment 20, further comprising a base station including a second processor, said second processor being configured to operate (1890) the base station to transmit to said wireless extender using said second link transmit power level. System Embodiment 22 The system of System Embodiment 1, wherein said wireless extender is a WiFi wireless extender. System Embodiment 23 The system of System Embodiment 11, wherein said wireless extender is a WiFi wireless extender and wherein said base station is a WiFi base station. The techniques of various embodiments may be implemented using software, hardware and/or a combination of software and hardware. Various embodiments are directed to apparatus, e.g., test servers, wireless extenders such as WiFi extenders, base stations such as WiFi base stations, mobile handsets, user equipment devices, IP edge devices, servers, network nodes, and/or network equipment devices. Various embodiments are also directed to methods, e.g., methods of controlling and/or operating test servers, wireless extenders, base stations, mobile handsets, UE devices, IP edge devices, servers, network nodes, etc. Various embodiments are also directed to a machine, e.g., computer, readable medium, e.g., ROM, RAM, CDs, hard discs, etc., which includes machine readable instructions for controlling a machine to implement one or more steps of a method. The computer readable medium is, e.g., a non-transitory computer readable medium. It is understood that the specific order or hierarchy of steps in the processes and methods disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes and methods may be rearranged while remaining within the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented. In some embodiments, one or more processors are used to carry out one or more steps of each of the described methods. In various embodiments each of the steps or elements of a method is implemented using one or more processors. In some embodiments, each of the elements or steps is implemented using hardware circuitry.
In various embodiments devices, servers, nodes and/or elements described herein are implemented using one or more components to perform the steps corresponding to one or more methods, for example, message reception, signal processing, sending, comparing, determining and/or transmission steps. Thus, in some embodiments various features are implemented using components or, in some embodiments, logic such as, for example, logic circuits. Such components may be implemented using software, hardware or a combination of software and hardware. Many of the above described methods or method steps can be implemented using machine executable instructions, such as software, included in a machine readable medium such as a memory device, e.g., RAM, floppy disk, etc., to control a machine, e.g., a general purpose computer with or without additional hardware, to implement all or portions of the above described methods, e.g., in one or more devices, servers, nodes and/or elements. Accordingly, among other things, various embodiments are directed to a machine-readable medium, e.g., a non-transitory computer readable medium, including machine executable instructions for causing a machine, e.g., processor and associated hardware, to perform one or more of the steps of the above-described method(s). Some embodiments are directed to a device, e.g., a controller, including a processor configured to implement one, multiple or all of the steps of one or more methods of the invention. In some embodiments, the processor or processors, e.g., CPUs, of one or more devices, e.g., communications nodes such as a test server, wireless extender, base station, or mobile handset device, are configured to perform the steps of the methods described as being performed by the test server, wireless extender, base station, or mobile handset device. The configuration of the processor may be achieved by using one or more components, e.g., software components, to control processor configuration and/or by including hardware in the processor, e.g., hardware components, to perform the recited steps and/or control processor configuration. Accordingly, some but not all embodiments are directed to a device, e.g., the test server, wireless extender, base station, or mobile handset device, with a processor which includes a component corresponding to each of the steps of the various described methods performed by the device in which the processor is included. In some but not all embodiments a device, e.g., the test server, wireless extender, base station, or mobile handset device, includes a controller corresponding to each of the steps of the various described methods performed by the device in which the controller is included. The components may be implemented using software and/or hardware. Some embodiments are directed to a computer program product comprising a computer-readable medium, e.g., a non-transitory computer-readable medium, comprising code for causing a computer, or multiple computers, to implement various functions, steps, acts and/or operations, e.g. one or more steps described above. Depending on the embodiment, the computer program product can, and sometimes does, include different code for each step to be performed. Thus, the computer program product may, and sometimes does, include code for each individual step of a method, e.g., a method of controlling a test server, wireless extender, base station, or mobile handset device.
The code may be in the form of machine, e.g., computer, executable instructions stored on a computer-readable medium, e.g., a non-transitory computer-readable medium, such as a RAM (Random Access Memory), ROM (Read Only Memory) or other type of storage device. In addition to being directed to a computer program product, some embodiments are directed to a processor configured to implement one or more of the various functions, steps, acts and/or operations of one or more methods described above. Accordingly, some embodiments are directed to a processor, e.g., CPU, configured to implement some or all of the steps of the methods described herein. The processor may be for use in, e.g., a communications device such as a test server, a wireless extender, a base station, a mobile handset device or other device described in the present application. Numerous additional variations on the methods and apparatus of the various embodiments described above will be apparent to those skilled in the art in view of the above description. Such variations are to be considered within the scope of the invention. Numerous additional embodiments, within the scope of the present invention, will be apparent to those of ordinary skill in the art in view of the above description and the claims which follow.
DETAILED DESCRIPTION FIG.1is a schematic overview depicting a wireless communications network100wherein embodiments herein may be implemented. The wireless communications network100comprises one or more RANs and one or more CNs. The wireless communications network100may use 5G NR but may further use a number of other different technologies, such as Long Term Evolution (LTE), LTE-Advanced, Wideband Code Division Multiple Access (WCDMA), Global System for Mobile communications/enhanced Data rate for GSM Evolution (GSM/EDGE), Worldwide Interoperability for Microwave Access (WiMax), or Ultra Mobile Broadband (UMB), just to mention a few possible implementations. Network nodes such as a network node110operate in the wireless communications network100, providing radio coverage by means of antenna beams, referred to as beams herein. The network node110provides a number of beams115, also referred to as antenna beams, and may use these beams for communicating with e.g. one or more User Equipments, UEs120, see below. The network node110is a radio node such as e.g. a base station or a UE. The network node110provides radio coverage over a geographical area by means of the antenna beams. The geographical area may be referred to as a cell, a service area, a beam or a group of beams. The network node110may in this case be a transmission and reception point, e.g. a radio access network node such as a base station, e.g. a radio base station such as a NodeB, an evolved Node B (eNB, eNode B), an NR Node B (gNB), a base transceiver station, a radio remote unit, an Access Point Base Station, a base station router, a transmission arrangement of a radio base station, a stand-alone access point, a Wireless Local Area Network (WLAN) access point, an Access Point Station (AP STA), an access controller, a UE acting as an access point or a peer in a Device to Device (D2D) communication, or any other network unit capable of communicating with a UE within the cell served by the network node110depending e.g. on the radio access technology and terminology used. User Equipments operate in the wireless communications network100, such as a UE120. The UE120provides radio coverage by means of antenna beams126, also referred to as beams herein. The UE120may e.g. be an NR device, a mobile station, a wireless terminal, an NB-IoT device, an eMTC device, a CAT-M device, a WiFi device, an LTE device or a non-access point (non-AP) STA, i.e. a STA that communicates via a base station such as e.g. the network node110, and one or more Access Networks (AN), e.g. RAN, to one or more core networks (CN). It should be understood by those skilled in the art that UE is a non-limiting term which means any UE, terminal, wireless communication terminal, user equipment, D2D terminal, or node, e.g. smart phone, laptop, mobile phone, sensor, relay, mobile tablet or even a small base station communicating within a cell. Thus, the network node110provides a number of beams which may be used for transmissions between the network node110and respective one or more UEs120. The methods according to embodiments herein are performed by the network node110. As an alternative, a Distributed Node (DN) and functionality, e.g. comprised in a cloud130as shown inFIG.1, may be used for performing or partly performing the methods. In order to provide beam management with low signaling overhead and adequate quality in the wireless communications network100, the network node110herein applies a method for beam management according to embodiments such as e.g.
a beam failure avoidance and management method or scheme. In an example scenario, the beam management method tracks the time evolution of quality values measured by the UE120, such as e.g. measurement reports made on only the serving beam itself. The measurement reports are related to beam quality. The network node110first finds the best beam and selects it as a serving beam using a traditional search sweep over multiple downlink reference signals transmitted in a set of beam candidates. After the selection of the serving beam, a rule or metric is created, also referred to as constructed, based on the reported quality value of the beam. This rule may determine when to obtain, e.g. calculate, other or new beam candidates. This may in some embodiments be performed by comparing new reported quality values to the previously reported ones. As long as the rule, such as a condition of the rule or metric, is fulfilled, the network node110transmits periodic reference signals only for the serving beam. This may typically be the case for extended time periods, e.g. from hundreds of milliseconds up to hours. In this way signalling overhead is minimized. Example embodiments of a method performed by a network node110for beam management in a wireless communications network100will now be described with reference to a flowchart depicted inFIG.2. Dashed boxes represent optional method steps. The method comprises the following actions, which actions may be taken in any suitable order. Action201 In order to apply the beam management method the network node110first needs to select, also referred to as choose, a serving beam to be used in an upcoming data communication with the UE120. A serving beam when used herein is the beam through which the data communication between the network node110and the UE120will take place. The serving beam is to be selected among candidate beams transmitted by the network node110. A candidate beam when used herein is a beam which the network node110instructs the UE120to measure and report a quality value for, so that the network node110can decide if this beam should be used as a new serving beam. Thus, the network node110receives from the UE120, one or more quality values measured by the UE120on first reference signals transmitted by the network node110in respective one or more beams comprised in a first set of candidate beams. The set of candidate beams may e.g. be the four beams with the highest quality values resulting from a beam sweep. A beam sweep when used herein is an evaluation of the candidate beams transmitted from the network node110based on measurement results on reference signals. Action202 The network node110has now received quality values for a first set of candidate beams among which a serving beam with good quality will be selected to be used for the upcoming data communication with the UE120. The network node110accordingly selects, for the upcoming data communication with the UE120, a serving beam out of the first set of candidate beams based on the received quality values. The beam selected as serving beam may e.g. be the optimal beam with regard to the received quality values, e.g. the beam having the highest quality value. Action203 In order to determine whether quality values of the serving beam that will be reported later on still are adequate, the network node110creates a rule to follow. Thus, the network node110creates a rule related to signal quality. According to some embodiments, the rule defines a quality value based on the first quality value.
The defined quality value may then be a value which may e.g. be a bit above or a bit below the first quality value or the same as the first quality value. According to some other embodiments, the rule defines a reference quality value or an absolute quality value. A reference quality value may e.g. be a value based on the quality value for the serving beam but time filtered so that it changes over time to adapt to minor changes in the reported quality values for the serving beam. An absolute quality value is a pre-set value which is considered an adequate beam quality. The value may e.g. be in dB. Action204 According to embodiments herein, the network node110will continue to use the serving beam as long as the quality of the serving beam is adequate. In order to determine or evaluate whether the quality of the serving beam is not deteriorating, the network node110occasionally needs to know the current quality value of the serving beam. Thus, the network node110obtains one or more second quality values related to respective subsequent messages transmitted to the UE120in the selected serving beam. According to some embodiments, the messages may be reference signals and the second quality values are measured by the UE120on respective subsequent reference signals transmitted in the selected serving beam. The subsequent messages may e.g. be messages comprising data or reference signals. In this way the network node110will have information regarding the current quality of the serving beam. Action205 According to embodiments herein, the network node110will continue to use the serving beam as the beam for communication with the UE120until the quality of the serving beam degrades below a minimally adequate level. Thus, as long as the rule is fulfilled with respect to the second quality value, the network node110transmits one or more further subsequent messages only in the serving beam. In this way signaling overhead is decreased since there is less signaling dedicated to beam management. At the same time quality measurements are performed and reported to the network node110to ensure that the signal quality is sufficient for the communication. According to some embodiments, the rule is decided to be fulfilled based on a comparison of the second quality value with a quality value defined by the rule. As mentioned above, in some embodiments, the rule defines the quality value based on the first quality value. In these embodiments, the rule may be fulfilled as long as the difference between the second quality value and the first quality value is below a first threshold value and above a second threshold value. The first threshold value may e.g. be 0 dB or more, e.g. anything between 0 and 6 dB. A typical example value may be 3 dB. The second threshold value may e.g. be 0 dB or less, e.g. anything between 0 and −6 dB. An example value may be −3 dB. As further mentioned above, in some embodiments, the rule defines a reference quality value. The rule is then fulfilled as long as the difference between the second quality value and the reference quality value is below a third threshold value and above a fourth threshold value. The third threshold value may e.g. be 0 dB or more, e.g. anything between 0 and 6 dB. An example value may be 3 dB. The fourth threshold value may e.g. be 0 dB or less, e.g. anything between 0 and −6 dB. An example value may be −3 dB. As yet further mentioned above, in some embodiments, the rule defines an absolute quality value.
The rule is then fulfilled as long as the second quality value is above the absolute quality value. According to some of these embodiments, the message is any one out of a PDCCH message and a PDSCH message. Furthermore, the respective second quality value is based on whether or not it is considered that the UE120has received the message from the network node110. Thus, the second quality value may e.g. be based on whether or not the UE120has received a message transmitted from the network node110. E.g. when the UE120has not received the message it is an indication that the second quality value is poor and the rule is not fulfilled, and when the UE120has received the message it is an indication that the second quality value is good and the rule is then fulfilled. According to some of these embodiments, the respective second quality value is initially set to the same value as the absolute quality value. It is thereafter increased by a predetermined step when it is considered that the UE120has received the message and decreased by a predetermined step when it is considered that the UE120has not received the message. Thus, the second quality value increases when the message is considered to have been received, thereby increasing the quality value measure of the serving beam, and thus decreasing the likelihood of the network node110changing serving beams. In an analogous manner the second quality value decreases when the message is not considered to have been received. This decreases the quality value measure of the serving beam and thereby increases the likelihood of the network node110changing serving beams. As an example, the UE120is considered to have been able to receive a DL assignment on PDCCH if a Hybrid Automatic Repeat Request (HARQ) Acknowledgment (ACK) or Negative Acknowledgment (NACK) is received when expected and considered to not have been able to receive the DL assignment if a HARQ ACK or NACK is not received when expected. According to some other of these embodiments, the respective second quality value is initially set to the same value as the absolute quality value. The absolute quality value is thereafter decreased by a predetermined step when it is considered that the UE120has received the message and increased by a predetermined step when it is considered that the UE120has not received the message. Thus, in these embodiments the absolute quality value is altered in response to whether the message is considered to have been received or not. If the message is considered to have been received the absolute quality value is decreased, reflecting that the quality measure of the serving beam is considered to be better than before. Inversely, if the message is not considered to have been received, the absolute quality value is increased. The probability of the second quality value being lower than the absolute quality value is then increased, reflecting the fact that the quality measure of the serving beam is considered to be worse than before. Action206 The network node110may determine to generate a second set of beam candidates based on whether or not the rule is fulfilled. Thus, if the rule is not fulfilled a second set of beam candidates is obtained. As an example, a new serving beam, e.g. exhibiting a higher quality than the current serving beam, can then be selected from the candidate beams. Embodiments herein such as mentioned above will now be further described and exemplified. The text below is applicable to and may be combined with any suitable embodiment described above.
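Before turning to the example scenario, the three rule variants described in Action205 can be summarized in a minimal sketch, assuming quality values expressed in dB. The threshold defaults (3 dB and −3 dB) follow the example values given above, while the function and parameter names are illustrative assumptions.

```python
# Minimal sketch of the three rule variants from Action 205; quality values
# and thresholds are in dB. Defaults mirror the example values in the text.

def rule_fulfilled_first_value(t_second, t_first, thr1=3.0, thr2=-3.0):
    """Rule based on the first reported quality value T0 for the serving beam:
    fulfilled while thr2 < (TX - T0) < thr1."""
    diff = t_second - t_first
    return thr2 < diff < thr1

def rule_fulfilled_reference(t_second, t_ref, thr3=3.0, thr4=-3.0):
    """Rule based on a reference quality value, e.g. a time-filtered combination
    of earlier reports: fulfilled while thr4 < (TX - Tref) < thr3."""
    diff = t_second - t_ref
    return thr4 < diff < thr3

def rule_fulfilled_absolute(t_second, absolute_quality):
    """Rule based on a pre-set absolute quality value: fulfilled while TX is above it."""
    return t_second > absolute_quality

# While the applicable rule returns True, reference signals are transmitted
# only in the serving beam; when it returns False, a sweep over a second set
# of candidate beams is triggered (Action 206).
```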
To perform beam management in the wireless communications network100, according to an example scenario which e.g. may comprise operating an analog beamforming scheme, the network node110will first select the serving beam to be used in the upcoming data communication with the UE120. This relates to Action201above. Thus, in order to select a serving beam, quality values for a set of candidate beams transmitted by the network node110are evaluated by the UE120in a beam sweep and the highest one to four values are reported to the network node110. Thus, the network node110receives from the UE120, one or more quality values measured by the UE120on first reference signals transmitted by the network node110in respective one or more beams comprised in a first set of candidate beams. This relates to Action202above. The network node110then selects a serving beam out of the first set of candidate beams based on the received quality values to be used for the upcoming data communication with the UE120. The network node110may e.g. select the candidate beam with the highest quality value, in this example scenario called beam A, to be used as a serving beam for the upcoming data communication with the UE120. The network node110then creates a rule related to signal quality. This relates to Action203above. The network node110may further set a reference quality value T0, which may also for brevity be referred to as only quality value T0. The reference quality value T0may be defined by the rule. The reference quality value T0may be based on or pertain to the quality value for the beam reported by the UE120before initiating downlink data transmissions using this beam, i.e. the first quality value. The long-term channel properties of the UE120may change with mobility, i.e. during movement of the UE120. To assess the beam quality over time, the network node110may transmit one or more subsequent messages only in beam A, i.e. only in the serving beam. The subsequent messages may for instance be reference signals. The reference signals may in turn be CSI-RS-BM. The UE120may then be configured to transmit, or report, one or more second quality values TXrelated to the subsequent messages transmitted to the UE120. This relates to Action204above. Thus, the UE120may report the corresponding quality values, also referred to as quality measurements, continuously as a response to the transmitted messages from the network node110. The quality values may according to one embodiment be reported RSRP values. The following relates to Action205above. As described above, in some further embodiments, the rule defines a quality value T0based on the first quality value, i.e. the quality value for the beam reported by the UE120before initiating downlink data transmissions using the beam as a serving beam. The rule may then be fulfilled as long as the difference between the second quality value and the first quality value is below a first threshold value ΔT0and above a second threshold value ΔT1. Thus, the network node110compares the reported second quality values TXto the stored quality value T0. The rule is then not fulfilled if the difference exceeds or falls below one of the two thresholds ΔT0and ΔT1, i.e. TX−T0>ΔT0or TX−T0<ΔT1. The network node110may then trigger a beam sweep as a response to the rule not being fulfilled.FIG.3shows an illustration of this embodiment with a single threshold. An example of a UE passing through a spatial direction of a beam is shown along a path comprising a number of measurement points P0-P5.
A quality metric, defined at P0, is above a threshold at measurement points P1 to P3. This is used to indicate that tracking the serving beam is sufficient, i.e. the quality of the beam is adequate. At P4, the metric is less than the threshold. This means that the beam quality has become poor and the beam should not be used any more. The beam quality is therefore inadequate and a set of 8 candidate beams is evaluated. The evaluation results in a change to another serving beam, where the threshold is reset. The threshold being reset means that the state of the procedure is set to Action203above, where the rule may be updated based on the reported quality value for the new serving beam. According to some of these embodiments, the rule defines a reference quality value. The reference quality value may initially be based on the first quality value, i.e. the quality value for the beam reported by the UE120before initiating downlink data transmissions using the beam as a serving beam. The reference quality value may then be time filtered and change over time such that minor changes in the reported second quality values TXwill not trigger a beam sweep. Time filtered, when used herein, means that several reported quality values for the serving beam measured and reported by the UE120at different times are combined according to a selected mathematical function. The rule may then be fulfilled as long as the difference between the second quality values TXand the reference quality value is below a third threshold value and above a fourth threshold value. As further mentioned above, in some embodiments the rule defines an absolute quality value Δ. According to some embodiments, the rule may then be fulfilled as long as the second quality value is above the absolute quality value. Thus, in these embodiments the network node110compares the reported second quality values TXto the absolute quality value Δ. If the second quality values TXare lower than the absolute quality value Δ, i.e. if TX<Δ, the rule is not fulfilled. The network node110may then e.g. trigger a beam sweep. According to some of these embodiments the second quality value TXis based on whether or not it is considered that the UE120has received the messages from the network node110. The reported second quality values TXmay then be updated based on whether the UE120is considered to have been able to receive a message from the network node110or not. The message may for example be a PDCCH message. The UE120is e.g. considered to have been able to receive a DL assignment on PDCCH if a HARQ ACK or NACK is received when expected by the network node110. Similarly, the UE120may be considered to not have been able to receive the DL assignment if a HARQ ACK or NACK is not received when expected by the network node110. As another example, the UE120is considered to have been able to receive an UL grant if an UL transmission is detected when expected by the network node110. Similarly, the UE120may be considered to not have been able to receive an UL grant if an UL transmission is not detected when expected by the network node110. After initializing or setting the second quality value TXequal to the absolute quality value Δ, the second quality value TXmay thereafter be increased by a certain predetermined step Δpdcchup when it is considered that the UE120has been able to receive the message and decreased by a certain predetermined step Δpdcchdown when it is considered that the UE120has not been able to receive the message.
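One conceivable realization of the time filtering mentioned above is an exponentially weighted moving average over the reported values. The sketch below assumes this particular filter and an illustrative smoothing factor; neither is mandated by the embodiments, which only require some selected mathematical combining function.

```python
# Hedged sketch: time-filtered reference quality value (EWMA assumed).
class FilteredReference:
    def __init__(self, first_quality: float, alpha: float = 0.2):
        self.alpha = alpha                # assumed smoothing factor
        self.ref = first_quality          # initially the first quality value

    def update(self, tx: float) -> float:
        # Combine reports measured at different times so that minor
        # fluctuations in TX move the reference only gradually.
        self.ref = (1.0 - self.alpha) * self.ref + self.alpha * tx
        return self.ref
```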
According to some other of these embodiments, the second quality value TXis set or initialized equal to the absolute quality value Δ. The absolute quality value Δ is then updated based on whether the UE120is considered to have been able to receive a message from the network node110or not. In this case, the absolute quality value Δ is decreased in case of a successfully received message, e.g. a successful PDCCH reception. Analogously, the absolute quality value Δ is increased in case of an unsuccessfully received message, e.g. a failed PDCCH reception. According to some other of these embodiments the message failure rate is measured. The message failure rate may e.g. be calculated by dividing the number of messages that failed to be received by the total number of messages received. As an example, the PDCCH failure rate may be measured. This would be calculated by dividing the number of failed PDCCH receptions by the total number of PDCCH receptions. The second quality value TXor the absolute quality value Δ may then be decreased, respectively increased, if the failure rate is larger than a certain threshold. For example, if the PDCCH failure rate is measured, the failure rate threshold may be a PDCCH BLER operation point that a link adaptation is targeting. As another example, the PDSCH failure rate is measured. This would be calculated by dividing the number of failed PDSCH receptions by the total number of PDSCH receptions. The failure rate threshold would then be the PDSCH BLER operation point that the link adaptation is targeting. If the failure rate is greater than the threshold, the second quality value TXis decreased by a certain step. Alternatively, the absolute quality value Δ is increased by a certain step. Link adaptation includes selection of modulation and code rate. The selection is often done based on reported channel quality from the UE120. Since there may be uncertainties in the reported quality, due to for example measurement errors, delays between measuring the channel quality in the UE120and receiving the report in the network node110, and infrequently sent reports, the network node110often uses an outer loop which increases or decreases the reported channel quality with certain step sizes depending on whether an HARQ ACK or an HARQ NACK is received. The outer loop is configured with step sizes such that a certain HARQ BLER, e.g. 10%, should be achieved on average. According to some other of these embodiments the second quality value TXmay be updated based on whether a HARQ ACK or a HARQ NACK is sent from the UE120. If the network node110receives a HARQ NACK from the UE120, the second quality value TXmay be decreased by a certain step Δpdschdown. Analogously, the second quality value TXmay be increased by a certain step Δpdschup, if the network node110receives a HARQ ACK from the UE120. According to some other of these embodiments, the absolute quality value Δ is decreased in case of a successful PDSCH reception. Analogously, the absolute quality value Δ is increased in case of a failed PDSCH reception.

To perform the method actions above for beam management in a wireless communications network100, the network node110may comprise the arrangement depicted inFIGS.4aand4b. The network node110may comprise an input and output interface400configured to communicate e.g. with the UE120. The input and output interface400may comprise a wireless receiver (not shown) and a wireless transmitter (not shown). The network node110is configured to, e.g.
by means of a receiving unit410in the network node110, receive from the UE120, one or more first quality values measured by the UE120on first reference signals transmitted by the network node110in respective one or more beams comprised in a first set of candidate beams. The network node110is further configured to, e.g. by means of a selecting unit420in the network node, select for an upcoming data communication with the UE120, a serving beam out of the first set of candidate beams based on the received quality values. The network node110is further configured to, e.g. by means of a creating unit430in the network node, create a rule related to signal quality. According to some embodiments the rule is adapted to define a quality value based on the first quality value. According to some other embodiments the rule is adapted to define a reference quality value or an absolute quality value. The network node110is further configured to, e.g. by means of an obtaining unit440in the network node, obtain one or more second quality values related to respective subsequent messages transmitted to the UE120in the selected serving beam. In some embodiments, the messages are adapted to be reference signals. Furthermore, the second quality values may be adapted to be measured by the UE120on subsequent reference signals transmitted in the selected serving beam. The network node110is further configured to, e.g. by means of a transmitting unit450in the network node110, as long as the rule is fulfilled with respect to the second quality value, transmit one or more further subsequent messages only in the serving beam. According to some embodiments, the rule is to be decided to be fulfilled based on a comparison of the second quality value with a quality value defined by the rule. The network node110may further be configured to, e.g. by means of a determining unit460in the network node110, determine to generate a second set of beam candidates based on whether or not the rule is fulfilled. As mentioned above, in some embodiments, the rule is adapted to define a quality value based on the first quality value. In these embodiments, the rule may be adapted to be fulfilled as long as the difference between the second quality value and the first quality value is below a first threshold value and above a second threshold value. As further mentioned above, in some embodiments the rule is adapted to define a reference quality value. The rule may then be adapted to be fulfilled as long as the difference between the second quality value and the reference quality value is below a third threshold value and above a fourth threshold value. As yet further mentioned above, in some embodiments the rule is adapted to define an absolute quality value. The rule is then adapted to be fulfilled as long as the second quality value is above the absolute quality value. According to some of these embodiments, the message is any one out of a PDCCH message and a PDSCH message. Furthermore, the respective second quality value is then based on whether or not it is considered that the UE120has received the message from the network node110. According to some of these embodiments the respective second quality value is initially set to the same value as the absolute quality value. The second quality value is then increased by a predetermined step when it is considered that the UE120has received the message, and decreased by a predetermined step when it is considered that the UE120has not received the message.
According to some other of these embodiments the respective second quality value is initially set to the same value as the absolute quality value. The absolute quality value is then decreased by a predetermined step when it is considered that the UE120has received the message, and increased by a predetermined step when it is considered that the UE120has not received the message.

The embodiments herein may be implemented through a respective processor or one or more processors, such as a processor470of a processing circuitry in the network node110depicted inFIG.4a, together with a respective computer program code for performing the functions and actions of the embodiments herein. The program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into the network node110. One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick. The computer program code may furthermore be provided as pure program code on a server and downloaded to the network node110. The network node110may further comprise a memory480comprising one or more memory units. The memory comprises instructions executable by the processor470. The memory480is arranged to be used to store e.g. information about the one or more first quality values measured by the UE120on the first reference signals, the rule or rules created by the network node110, the one or more second quality values, the one or more subsequent messages, the reference quality value, the absolute quality value, the step size of the predetermined steps and applications to perform the methods herein when being executed in the network node110. Those skilled in the art will also appreciate that the units in network node110mentioned above may refer to a combination of analog and digital circuits, and/or one or more processors configured with software and/or firmware, e.g. stored in the network node110, that when executed by the respective one or more processors, such as the processors described above, perform the methods herein. One or more of these processors, as well as the other digital hardware, may be included in a single Application-Specific Integrated Circuit (ASIC), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a system-on-a-chip (SoC).

Further Extensions and Variations

With reference toFIG.5, in accordance with an embodiment, a communication system includes a telecommunication network3210such as the wireless communications network100, e.g. an IoT network, or a WLAN, such as a 3GPP-type cellular network, which comprises an access network3211, such as a radio access network, and a core network3214. The access network3211comprises a plurality of base stations3212a,3212b,3212c, such as the network node110,130, access nodes, AP STAs, NBs, eNBs, gNBs or other types of wireless access points, each defining a corresponding coverage area3213a,3213b,3213c. Each base station3212a,3212b,3212cis connectable to the core network3214over a wired or wireless connection3215. A first user equipment (UE), e.g. the wireless device120such as a Non-AP STA3291, located in coverage area3213cis configured to wirelessly connect to, or be paged by, the corresponding base station3212c. A second UE3292, e.g.
the wireless device122such as a Non-AP STA in coverage area3213ais wirelessly connectable to the corresponding base station3212a. While a plurality of UEs3291,3292are illustrated in this example, the disclosed embodiments are equally applicable to a situation where a sole UE is in the coverage area or where a sole UE is connecting to the corresponding base station3212. The telecommunication network3210is itself connected to a host computer3230, which may be embodied in the hardware and/or software of a standalone server, a cloud-implemented server, a distributed server or as processing resources in a server farm. The host computer3230may be under the ownership or control of a service provider, or may be operated by the service provider or on behalf of the service provider. The connections3221,3222between the telecommunication network3210and the host computer3230may extend directly from the core network3214to the host computer3230or may go via an optional intermediate network3220. The intermediate network3220may be one of, or a combination of more than one of, a public, private or hosted network; the intermediate network3220, if any, may be a backbone network or the Internet; in particular, the intermediate network3220may comprise two or more sub-networks (not shown). The communication system ofFIG.5as a whole enables connectivity between one of the connected UEs3291,3292and the host computer3230. The connectivity may be described as an over-the-top (OTT) connection3250. The host computer3230and the connected UEs3291,3292are configured to communicate data and/or signalling via the OTT connection3250, using the access network3211, the core network3214, any intermediate network3220and possible further infrastructure (not shown) as intermediaries. The OTT connection3250may be transparent in the sense that the participating communication devices through which the OTT connection3250passes are unaware of routing of uplink and downlink communications. For example, a base station3212may not or need not be informed about the past routing of an incoming downlink communication with data originating from a host computer3230to be forwarded (e.g., handed over) to a connected UE3291. Similarly, the base station3212need not be aware of the future routing of an outgoing uplink communication originating from the UE3291towards the host computer3230. Example implementations, in accordance with an embodiment, of the UE, base station and host computer discussed in the preceding paragraphs will now be described with reference toFIG.6. In a communication system3300, a host computer3310comprises hardware3315including a communication interface3316configured to set up and maintain a wired or wireless connection with an interface of a different communication device of the communication system3300. The host computer3310further comprises processing circuitry3318, which may have storage and/or processing capabilities. In particular, the processing circuitry3318may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. The host computer3310further comprises software3311, which is stored in or accessible by the host computer3310and executable by the processing circuitry3318. The software3311includes a host application3312. 
The host application3312may be operable to provide a service to a remote user, such as a UE3330connecting via an OTT connection3350terminating at the UE3330and the host computer3310. In providing the service to the remote user, the host application3312may provide user data which is transmitted using the OTT connection3350. The communication system3300further includes a base station3320provided in a telecommunication system and comprising hardware3325enabling it to communicate with the host computer3310and with the UE3330. The hardware3325may include a communication interface3326for setting up and maintaining a wired or wireless connection with an interface of a different communication device of the communication system3300, as well as a radio interface3327for setting up and maintaining at least a wireless connection3370with a UE3330located in a coverage area (not shown) served by the base station3320. The communication interface3326may be configured to facilitate a connection3360to the host computer3310. The connection3360may be direct or it may pass through a core network (not shown inFIG.6) of the telecommunication system and/or through one or more intermediate networks outside the telecommunication system. In the embodiment shown, the hardware3325of the base station3320further includes processing circuitry3328, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. The base station3320further has software3321stored internally or accessible via an external connection. The communication system3300further includes the UE3330already referred to. Its hardware3335may include a radio interface3337configured to set up and maintain a wireless connection3370with a base station serving a coverage area in which the UE3330is currently located. The hardware3335of the UE3330further includes processing circuitry3338, which may comprise one or more programmable processors, application-specific integrated circuits, field programmable gate arrays or combinations of these (not shown) adapted to execute instructions. The UE3330further comprises software3331, which is stored in or accessible by the UE3330and executable by the processing circuitry3338. The software3331includes a client application3332. The client application3332may be operable to provide a service to a human or non-human user via the UE3330, with the support of the host computer3310. In the host computer3310, an executing host application3312may communicate with the executing client application3332via the OTT connection3350terminating at the UE3330and the host computer3310. In providing the service to the user, the client application3332may receive request data from the host application3312and provide user data in response to the request data. The OTT connection3350may transfer both the request data and the user data. The client application3332may interact with the user to generate the user data that it provides. It is noted that the host computer3310, base station3320and UE3330illustrated inFIG.6may be identical to the host computer3230, one of the base stations3212a,3212b,3212cand one of the UEs3291,3292ofFIG.5, respectively. This is to say, the inner workings of these entities may be as shown inFIG.6and independently, the surrounding network topology may be that ofFIG.5.
InFIG.6, the OTT connection3350has been drawn abstractly to illustrate the communication between the host computer3310and the user equipment3330via the base station3320, without explicit reference to any intermediary devices and the precise routing of messages via these devices. Network infrastructure may determine the routing, which it may be configured to hide from the UE3330or from the service provider operating the host computer3310, or both. While the OTT connection3350is active, the network infrastructure may further take decisions by which it dynamically changes the routing (e.g., on the basis of load balancing considerations or reconfiguration of the network). The wireless connection3370between the UE3330and the base station3320is in accordance with the teachings of the embodiments described throughout this disclosure. One or more of the various embodiments improve the performance of OTT services provided to the UE3330using the OTT connection3350, in which the wireless connection3370forms the last segment. More precisely, the teachings of these embodiments may improve the data rate, latency and power consumption of the applicable RAN, and thereby provide corresponding benefits for the OTT service, e.g. reduced user waiting time, relaxed restrictions on file size, better responsiveness and extended battery lifetime. A measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve. There may further be an optional network functionality for reconfiguring the OTT connection3350between the host computer3310and UE3330, in response to variations in the measurement results. The measurement procedure and/or the network functionality for reconfiguring the OTT connection3350may be implemented in the software3311of the host computer3310or in the software3331of the UE3330, or both. In embodiments, sensors (not shown) may be deployed in or in association with communication devices through which the OTT connection3350passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software3311,3331may compute or estimate the monitored quantities. The reconfiguring of the OTT connection3350may include changes to message format, retransmission settings, preferred routing etc.; the reconfiguring need not affect the base station3320, and it may be unknown or imperceptible to the base station3320. Such procedures and functionalities may be known and practiced in the art. In certain embodiments, measurements may involve proprietary UE signalling facilitating the host computer's3310measurements of throughput, propagation times, latency and the like. The measurements may be implemented in that the software3311,3331causes messages to be transmitted, in particular empty or ‘dummy’ messages, using the OTT connection3350while it monitors propagation times, errors etc.

FIG.7is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station such as the network node110, and a UE such as the wireless device120, which may be those described with reference toFIG.5andFIG.6. For simplicity of the present disclosure, only drawing references toFIG.7will be included in this section. In a first action3410of the method, the host computer provides user data.
In an optional subaction3411of the first action3410, the host computer provides the user data by executing a host application. In a second action3420, the host computer initiates a transmission carrying the user data to the UE. In an optional third action3430, the base station transmits to the UE the user data which was carried in the transmission that the host computer initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In an optional fourth action3440, the UE executes a client application associated with the host application executed by the host computer.

FIG.8is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station such as an AP STA, and a UE such as a Non-AP STA, which may be those described with reference toFIG.5andFIG.6. For simplicity of the present disclosure, only drawing references toFIG.8will be included in this section. In a first action3510of the method, the host computer provides user data. In an optional subaction (not shown) the host computer provides the user data by executing a host application. In a second action3520, the host computer initiates a transmission carrying the user data to the UE. The transmission may pass via the base station, in accordance with the teachings of the embodiments described throughout this disclosure. In an optional third action3530, the UE receives the user data carried in the transmission.

FIG.9is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station such as an AP STA, and a UE such as a Non-AP STA, which may be those described with reference toFIG.5andFIG.6. For simplicity of the present disclosure, only drawing references toFIG.9will be included in this section. In an optional first action3610of the method, the UE receives input data provided by the host computer. Additionally or alternatively, in an optional second action3620, the UE provides user data. In an optional subaction3621of the second action3620, the UE provides the user data by executing a client application. In a further optional subaction3611of the first action3610, the UE executes a client application which provides the user data in reaction to the received input data provided by the host computer. In providing the user data, the executed client application may further consider user input received from the user. Regardless of the specific manner in which the user data was provided, the UE initiates, in an optional third action3630, transmission of the user data to the host computer. In a fourth action3640of the method, the host computer receives the user data transmitted from the UE, in accordance with the teachings of the embodiments described throughout this disclosure.

FIG.10is a flowchart illustrating a method implemented in a communication system, in accordance with one embodiment. The communication system includes a host computer, a base station such as an AP STA, and a UE such as a Non-AP STA, which may be those described with reference toFIG.5andFIG.6. For simplicity of the present disclosure, only drawing references toFIG.10will be included in this section. In an optional first action3710of the method, in accordance with the teachings of the embodiments described throughout this disclosure, the base station receives user data from the UE.
In an optional second action3720, the base station initiates transmission of the received user data to the host computer. In a third action3730, the host computer receives the user data carried in the transmission initiated by the base station. When using the word “comprise” or “comprising” it shall be interpreted as non-limiting, i.e. meaning “consist at least of”. The embodiments herein are not limited to the above described preferred embodiments. Various alternatives, modifications and equivalents may be used.
DESCRIPTION OF EXEMPLARY EMBODIMENTS

FIG.1illustrates an exemplary private subnetwork of LTE base stations (hereinafter subnetwork100) according to the disclosure. Subnetwork100comprises a connection aggregator (hereinafter S1-Conn110); an operation and maintenance module120; a plurality of internal baseband processors (or internal eNodeBs125), each of which has a corresponding supervisor module130, and each of which has one or more corresponding cells135. Each internal eNodeB125is coupled to the S1-Conn110by a respective internal S1 connection140, which is a standard S1 connection as would be implemented between a conventional eNodeB and a conventional MME (Mobility Management Entity) as defined in the LTE specification. Each supervisor module130may be coupled to the operation and maintenance module120by a conventional IP connection145. S1-Conn110may be coupled to one or more MMEs150via a corresponding external S1 connection155. Each external S1 connection155may be identical to each internal S1 connection140in that they each are standard S1 connections as defined in the LTE specification. Also illustrated inFIG.1is an external eNodeB160having at least one corresponding cell165. External eNodeB160may be coupled to one or more of the illustrated MMEs150via an S1 connection170. Further illustrated is a UE170, which may be in communication with one or more cells135/165. Subnetwork100may be deployed or integrated in, for example, a dense urban environment or a large venue, such as a stadium, airport, shopping center, university campus, etc. Each internal eNodeB125may correspond to a macro cell, a small cell, a femto cell, or a Distributed Antenna System (DAS). Each internal eNodeB125may have any number of cells135. Each individual internal eNodeB125may be individually implemented as a pure software-based virtual baseband processor that may be instantiated and de-instantiated as needed, or each may be individually implemented as a hardware-based baseband processor that is deployed in dedicated hardware in close proximity to its corresponding RF and antenna components, or any combination of the above. Although an LTE-specific term is used to refer to a given eNodeB125, it may actually be implemented according to a different or legacy RAT technology, as long as it communicates with S1-Conn110via an S1 interface. As used herein, the terms baseband processor and eNodeB may be interchangeable. S1-Conn110and operation and maintenance module120(and potentially one or more of the internal eNodeBs125) may be implemented in software that runs on conventional server hardware that may be located in a single location (e.g., one or more racks) within or near the venue where subnetwork100is deployed, or otherwise distributed. There may be an advantage to having the internal eNodeBs125be pure software-based virtual baseband processors in that they can take best advantage of the ability of the subnetwork100to dynamically instantiate and de-instantiate internal eNodeBs125as traffic demand within the venue fluctuates. Further, having each internal eNodeB125implemented purely in software enables each internal eNodeB125to be instrumented with code to enable interaction with its corresponding supervisor module130and easier configuration and maintenance from operation and maintenance module120. However, it will be understood that hardware-based internal eNodeBs125may be activated/de-activated in place of instantiation/de-instantiation of a virtual internal eNodeB125.
FIG.2illustrates an exemplary process for configuring a subnetwork100according to the disclosure. In step205, S1-Conn110establishes an S1 interface with each of the MMEs150. In doing so, S1-Conn110issues an S1 SETUP REQUEST message to each of the MMEs150, which includes its own 20 bit eNB ID (the virtual subnetwork baseband processor identifier) and all of the E-CGIs corresponding to each of the constituent cells of all of the internal eNodeBs125. In response, each MME150may send a subsequent S1 SETUP RESPONSE message to S1-Conn110, thereby establishing an external S1 connection155between the S1-Conn110and each MME150. In step210, each internal eNodeB125starts up according to its nominal function. Each internal eNodeB125has the same 20 bit identifier, and a number of allocated 8 bit subidentifiers for each possible cell135that might correspond to that particular internal eNodeB125. This information may be stored in a configuration file within each internal eNodeB125and may be supplied by its corresponding supervisor module130. Alternatively, configuration information for each internal eNodeB125may be stored in a distributed data source. Examples of such distributed data sources may include systems like consul and etcd. Given that all of the internal eNodeBs125have the same 20 bit identifier, in order to uniquely identify each internal eNodeB125, each one may select an 8 bit cell identifier of one of its cells135(for example, its first cell135) and append it to its own 20 bit identifier, making it a 28 bit eNodeB identifier. This internal 28 bit eNodeB identifier may be referred to herein as an "internal identifier". The internal identifier may have the same format as that used conventionally with Home eNodeBs (HeNB). Once it has started up, in step215, each internal eNodeB125sets up an S1 connection with S1-Conn110, using its individual internal 28 bit eNodeB identifier. An example of how an internal eNodeB125may establish an S1 connection with an MME150is described in 3GPP TS 36.413. In doing so, the given internal eNodeB125functions as if it is establishing an S1 connection with each MME150. However, S1-Conn110intercepts each S1 SETUP REQUEST from each internal eNodeB125. S1-Conn110uses this information to establish an S1 interface with each internal eNodeB125and subsequently generates and issues an S1 SETUP RESPONSE message to each of the internal eNodeBs125. In doing so, each of the internal eNodeBs125"thinks" that it has established an S1 interface with a single MME that has a lot of capabilities (actually the collective capabilities of the MMEs150), but what it has actually done is establish an internal S1 connection140with the S1-Conn110. In step220, each internal eNodeB125sends an initiation message that would otherwise indicate to one or more MMEs150that it is functioning. This initiation message would include its own identity and the cell identities of its corresponding cells135. In an exemplary embodiment of process200, each internal eNodeB125sends a PWSRestartIndication message, which is intercepted by S1-Conn110.
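As an illustrative aside on step210, the 28 bit internal identifier can be viewed as a simple bit concatenation. The sketch below assumes the 20 bit eNB ID occupies the high-order bits, which is consistent with the conventional HeNB ID layout but is an assumption here, as are all names.

```python
# Hedged sketch: building the 28-bit internal eNodeB identifier by
# appending an 8-bit cell identifier to the shared 20-bit eNB ID.
def internal_enb_id(enb_id_20: int, cell_id_8: int) -> int:
    assert 0 <= enb_id_20 < (1 << 20), "eNB ID must fit in 20 bits"
    assert 0 <= cell_id_8 < (1 << 8), "cell ID must fit in 8 bits"
    return (enb_id_20 << 8) | cell_id_8   # assumed layout: eNB ID in high bits

# Example: two internal eNodeBs sharing eNB ID 0x12345 remain distinguishable
# because each appends the 8-bit ID of its first cell.
print(hex(internal_enb_id(0x12345, 0x01)))   # 0x1234501
print(hex(internal_enb_id(0x12345, 0x09)))   # 0x1234509
```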
The PWSRestartIndication message, an example of which is described in 3GPP TS 36.413, includes the following information: the E-CGI (Enhanced Cell Global ID) of each cell corresponding to the sending internal eNodeB125, the Global eNB ID for the sending internal eNodeB125(which is its aforementioned internal 28 bit eNodeB identifier), the TAI (Tracking Area Identifier) list for the internal eNodeB's125corresponding cells, and the Emergency Area ID list for the internal eNodeB's125corresponding cells. It will be understood that the described functions performed by each internal eNodeB125may correspond to a sequence of computer instructions stored on a machine readable memory allocated to or associated with each corresponding internal eNodeB125, and executed either by a dedicated processor embedded within the corresponding eNodeB125or by a server processor or virtual machine spawned in a cloud computing environment running on server hardware located within the venue of subnetwork100or elsewhere. The same is true for the S1-Conn110and the operation and maintenance module120. These components may comprise computer instructions that may be stored in non-volatile memory and executed on server compute hardware that may be located in, near, or distributed around the venue corresponding to subnetwork100. Each of these components may be implemented in C, C++, Java, one or more scripting languages, or any combination thereof, depending on the given subcomponent within each of these components. In step225, S1-Conn110intercepts each PWSRestartIndication message from each internal eNodeB125, and in step230, creates a mapping of the following for each internal eNodeB125: its internal 28 bit eNodeB identifier, its constituent cell IDs (E-CGIs), and the rest of the information provided in its corresponding PWSRestartIndication message. Further to step230, S1-Conn110assigns itself the 20 bit eNodeB ID common to all of the internal eNodeBs125, extracts the constituent cell IDs (E-CGIs) and further information gathered from each corresponding PWSRestartIndication message and populates this information in a new "repackaged" PWSRestartIndication message. The 20 bit eNodeB ID assigned to the S1-Conn110may be referred to as a virtual subnetwork baseband processor identifier. In step235, S1-Conn110sends its own PWSRestartIndication message assembled in step230to each corresponding MME150via its respective external S1 connection155. Accordingly, each MME150will behave as though it is interacting with a single "giant" eNodeB with a potentially large number of aggregated cells135(potentially as many as 256 cells), even though each MME150is interacting exclusively with S1-Conn110. Further, each internal eNodeB125will behave as though it is interacting with any of the MMEs150, even though it is interacting exclusively with S1-Conn110. In order to accomplish this, S1-Conn110intercepts each subsequent message, bidirectionally, between a given MME150and an internal eNodeB125, and also between the MME150and a given UE170. S1-Conn110remaps the cell IDs and other required information using, for example, lookup tables stored in memory allocated to S1-Conn110, repackages the given message with the remapped information, and sends the repackaged message to its destination. For the purposes herein, the internal eNodeB125that is the destination of an incoming message from a given MME150may be referred to as a message destination baseband processor, and one possible organization of the lookup tables is sketched below.
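The per-eNodeB mapping of step230and the remapping just described might be organized as follows; all class and field names here are invented for the sketch and are not taken from 3GPP TS 36.413.

```python
# Hedged sketch of the S1-Conn identifier mapping built in step 230.
from dataclasses import dataclass

@dataclass
class EnbRecord:
    internal_id: int            # 28-bit internal eNodeB identifier
    cell_ecgis: list[str]       # E-CGIs of the constituent cells
    tai_list: list[str]         # Tracking Area Identifiers
    emergency_area_ids: list[str]

class S1ConnMapping:
    def __init__(self) -> None:
        self._by_internal_id: dict[int, EnbRecord] = {}
        self._enb_by_ecgi: dict[str, int] = {}

    def register(self, rec: EnbRecord) -> None:
        """Record one intercepted PWSRestartIndication (step 230)."""
        self._by_internal_id[rec.internal_id] = rec
        for ecgi in rec.cell_ecgis:
            self._enb_by_ecgi[ecgi] = rec.internal_id

    def destination_enb(self, ecgi: str) -> int:
        """Resolve the message destination baseband processor for a cell."""
        return self._enb_by_ecgi[ecgi]

    def all_ecgis(self) -> list[str]:
        """Cell list for the repackaged, aggregate PWSRestartIndication."""
        return list(self._enb_by_ecgi)
```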
Advantages of this include the following. First, any given (non-Home) eNodeB has a 20 bit identifier and may have allocated to it as many as 256 cells, given that the cell ID for each eNodeB is an 8 bit identifier. However, given the practical limitations in computational power, any given eNodeB typically has no more than a dozen cells. The disclosed subnetwork100enables a given eNodeB (in this case, the S1-Conn110acting like a "giant" eNodeB) to make use of all 8 bits of cell IDs. This is because each internal eNodeB125has allocated to it (either in dedicated hardware or provisioned cloud computing resources) sufficient memory and computational resources to handle a typical number of cells commonly used. Second, given that the external network (e.g., from the MMEs150outward) is only aware of a single "giant" eNodeB encompassed by the functions of S1-Conn110, the number of internal eNodeBs125(and subsequent number of cells135) may be dynamically adjusted according to traffic demand. This may be extremely useful for venues, such as stadiums, that may be filled to capacity one day a week and quiet the rest of the time. In this case, internal eNodeBs125, each with a plurality of corresponding cells135, may be created and allocated to handle changes in traffic demand, such that all of these changes are hidden from the outer network. It will be understood that the described functions performed by the S1-Conn110likewise correspond to a sequence of computer instructions stored on a machine readable memory allocated to or associated with the S1-Conn110, and executed either by a dedicated processor or by a server processor or virtual machine spawned in a cloud computing environment running on server hardware located within the venue of subnetwork100or elsewhere.

FIG.3illustrates an exemplary process300by which a UE170establishes a connection with an internal eNodeB125. In step305, the UE170and the given internal eNodeB125exchange appropriate conventional signals to establish a connection. For example, UE170may transmit an RRC Connection Request to the internal eNodeB125, which may in turn respond with an RRC Connection Setup message, etc. The result is that UE170is connected to internal eNodeB125and that internal eNodeB125has established an internal identifier corresponding to that UE. In subprocess310, internal eNodeB125establishes a default bearer with an MME150via S1-Conn110. As illustrated inFIG.3, subprocess310comprises several steps that are added to the default bearer establishment procedures specified in 3GPP TS 24.301, for example. Steps315,320, and325describe modifications/enhancements to the conventional procedures described in the 3GPP technical specifications. In step315, S1-Conn110intercepts the default bearer establishment messages sent by internal eNodeB125, which include a UE ID generated by the internal eNodeB125. In step320, the S1-Conn110takes the UE ID generated by the internal eNodeB125and replaces it with a unique UE ID generated by the S1-Conn110. This is necessary because each of the internal eNodeBs125generates UE IDs without any awareness of the UE IDs generated by any of the other internal eNodeBs125. There is a significant chance of two eNodeBs125generating duplicate UE IDs. Given this possibility, S1-Conn110replaces the UE ID generated by the internal eNodeB125with a unique value, repackages the message, and transmits the message to the appropriate MME150.
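A minimal sketch of the collision-avoiding UE ID substitution of steps320and325follows; the allocation scheme and all names are assumptions made for illustration, as the embodiments only require that the substituted IDs be unique.

```python
# Hedged sketch: S1-Conn allocating unique MME-facing UE IDs so that two
# internal eNodeBs that pick the same local UE ID cannot collide.
import itertools

class UeIdRemapper:
    def __init__(self) -> None:
        self._next = itertools.count(1)                  # assumed allocator
        self._to_mme: dict[tuple[int, int], int] = {}    # (eNB id, local UE id) -> unique id
        self._to_enb: dict[int, tuple[int, int]] = {}    # unique id -> (eNB id, local UE id)

    def uplink(self, enb_id: int, local_ue_id: int) -> int:
        """Replace an eNodeB-generated UE ID before relaying toward the MME."""
        key = (enb_id, local_ue_id)
        if key not in self._to_mme:
            unique = next(self._next)
            self._to_mme[key] = unique
            self._to_enb[unique] = key
        return self._to_mme[key]

    def downlink(self, unique_ue_id: int) -> tuple[int, int]:
        """Recover the destination eNodeB and its local UE ID (step 325)."""
        return self._to_enb[unique_ue_id]
```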
In step325, S1-Conn110intercepts default bearer establishment messages from the MME150to the internal eNodeB125, remaps the UE ID, and transmits the repackaged message to the internal eNodeB125. The objective is that the given internal eNodeB125is not aware that it is not interacting directly with MME150, and that the MME150is not aware that it is not interacting directly with internal eNodeB125. In the former case, S1-Conn110is acting as the MME150for the internal eNodeB125, and in the latter case, S1-Conn110is acting as the eNodeB that interacts with the MME150(and the UE170). In subprocess330, internal eNodeB125establishes a dedicated bearer with an MME150via S1-Conn110. As illustrated inFIG.3, subprocess330comprises several steps that are added to the dedicated bearer establishment procedures specified in 3GPP TS 24.301. The steps required for establishing a dedicated bearer may be substantially identical to steps320and325described above. The result is that there is at least one dedicated bearer established between UE170and MME150, whereby S1-Conn110is serving as an unseen intermediary between internal eNodeB125and MME150.

FIG.4illustrates an exemplary process400for establishing an X2 connection between two internal eNodeBs125. In step405, UE170communicates to its currently connected source internal eNodeB125that it has a strong signal from another internal eNodeB125. UE170does so by transmitting a measurement report to the source internal eNodeB125, which identifies the neighboring internal eNodeBs125and cells135from which UE170is receiving a strong signal. Step405may be a conventional process, an example of which is described in 3GPP TS 36.300. From this information, a target internal eNodeB125is identified and recommended for handover. In step410, the source internal eNodeB125retrieves its own internal 28 bit identifier from internal memory. Recalling from step210, each internal eNodeB has as a default the same 20 bit eNodeB identifier. In order to prevent collisions within subnetwork100, each internal eNodeB's supervisor module130instructs its respective internal eNodeB125to select the 8 bit identifier of one of its cells (e.g., the first cell) and append its own 20 bit identifier with the 8 bit identifier of its cell, creating a false Home eNodeB (HeNB) internal identifier for itself, referred to herein as an internal eNodeB identifier. Further to step410, the source internal eNodeB125retrieves the E-CGI for the target cell identified by the UE (via the measurement report) and uses that 28 bit cell identifier to address the target eNodeB. In step415, the source internal eNodeB125sends an eNBConfigurationTransfer command, which is conventionally sent to one of the MMEs150. In the eNBConfigurationTransfer command, the source internal eNodeB125identifies itself with its internal eNodeB identifier and identifies the target with the internal eNodeB identifier of the target internal eNodeB125. In step420, S1-Conn110intercepts the eNBConfigurationTransfer command transmitted in step415. In step425, S1-Conn110extracts the internal eNodeB identifier of the source internal eNodeB125and the internal eNodeB identifier of the target internal eNodeB125(as well as other information in the eNBConfigurationTransfer command) and constructs an MMEConfigurationTransfer command with this information. In step430, S1-Conn110sends the MMEConfigurationTransfer command to the target internal eNodeB125.
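Purely as an illustration of steps415-430, the command translation performed by S1-Conn110might look like the following. The message classes and field names are invented for the sketch and are not the 3GPP TS 36.413 information elements.

```python
# Hedged sketch: translating an intercepted eNBConfigurationTransfer into
# an MMEConfigurationTransfer routed to the target internal eNodeB.
from dataclasses import dataclass

@dataclass
class EnbConfigurationTransfer:       # assumed minimal representation
    source_internal_id: int           # 28-bit internal eNodeB identifier
    target_internal_id: int
    payload: dict                     # e.g. X2 transport addresses

@dataclass
class MmeConfigurationTransfer:
    source_internal_id: int
    target_internal_id: int
    payload: dict

def translate(cmd: EnbConfigurationTransfer) -> tuple[int, MmeConfigurationTransfer]:
    """Return (destination internal eNodeB id, repackaged command); the MME
    is never involved, since S1-Conn plays its role inside the subnetwork."""
    out = MmeConfigurationTransfer(cmd.source_internal_id,
                                   cmd.target_internal_id,
                                   cmd.payload)
    return cmd.target_internal_id, out
```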
With the configuration transfer complete, the source internal eNodeB125and target eNodeB125may establish an X2 connection between them. In performing the steps of process400, S1-Conn110is acting as the MME150such that neither the source internal eNodeB125nor the target eNodeB125is aware that it is not communicating directly with MME150. Further, MME150was not at any point involved in the process. This is because MME150sees the S1-Conn110as a "giant" eNodeB, and thus there would be no X2 connection, given only one eNodeB.

FIG.5illustrates an exemplary process500for executing an X2 handover between two internal eNodeBs125. In step505, the UE170identifies a target cell135and target internal eNodeB125and notifies the source internal eNodeB125, to which the UE170is currently connected. This process may be substantially similar to step405of process400. In step510, the source internal eNodeB125forwards any data packets (downlink and potentially uplink) corresponding to UE170to the target internal eNodeB125over the X2 connection that was established in process400. In step515, the target internal eNodeB125sends a Path Switch Request message to the relevant MME150. The Path Switch Request includes the TAI (Tracking Area Identity) of the target cell135of the target internal eNodeB125as well as the target cell's E-CGI. S1-Conn110relays this message to the relevant MME150. In step520, the target internal eNodeB125sends a Release Resource message to the source internal eNodeB125over their mutual X2 connection, thus completing the handover process of a UE170between two internal eNodeBs125within subnetwork100in a way that is hidden from the outer network.

FIG.6illustrates an exemplary process600for executing an S1 handover from an internal eNodeB125to an external eNodeB160. This is for the situation in which UE170is moving out of range of the internal eNodeBs125of subnetwork100. The steps of process600may be incorporated into the S1-based handover process. In step605, the UE170identifies a target cell165and target external eNodeB160and notifies the source internal eNodeB125, to which the UE170is currently connected. This process may be substantially similar to step405of process400and step505of process500. In step610, the source internal eNodeB125sends a Handover Required message to the relevant MME150. In doing so, the source internal eNodeB125uses its internal eNodeB identifier in the message. In step615, S1-Conn110intercepts the Handover Required message, repackages the message with its own 20 bit virtual subnetwork baseband processor identifier and the E-CGI of the cell currently connecting UE170with the source internal eNodeB125, and sends the message to the relevant MME150. In step620, MME150sends a Handover Command to S1-Conn110. It will be understood that MME150behaves as though it were interacting with a conventional eNodeB. In step625, S1-Conn110receives the Handover Command from MME150, remaps the eNB ID to the internal eNodeB identifier of the source internal eNodeB125, and sends the repackaged Handover Command to the source internal eNodeB125. Subsequently, in step630, the source internal eNodeB125sends the Handover Command to UE170. If any of the E-RABs (Evolved Radio Access Bearers) corresponding to UE170are configured for PDCP (Packet Data Convergence Protocol) preservation, the source internal eNodeB125may send an eNB Status Transfer message to the relevant MME150.
S1-Conn110may intercept this message, remap the information in the message to specify the virtual subnetwork baseband processor identifier, repackage the message, and transmit it to the relevant MME150(the source MME). In step635, the source MME150sends a UE Context Release Command to the S1-Conn110. In step640, the S1-Conn110in turn remaps the eNB ID to the internal eNodeB identifier of the source internal eNodeB125and transmits the message to the source internal eNodeB125. In step645, the source internal eNodeB125sends a UE Context Release Complete message to the source MME150. In step650, the S1-Conn110intercepts the UE Context Release Complete message, remaps the information to reflect the virtual subnetwork baseband processor identifier, repackages the message, and transmits it to the source MME150. It will be understood that there are many steps to the conventional process of an S1-based handover, as described in 3GPP TS 23.401, that occur (for example) between steps615and620, and between steps620and635. These steps occur in the outer network (e.g., between the MMEs150, the S-GW and P-GW (not shown), and the external eNodeB160). It is understood that these external steps are known and fully described in the referenced 3GPP documentation. Accordingly, to the outer network, the S1-based handoff disclosed in process600involves a handoff between the "giant" eNodeB represented by S1-Conn110and the external eNodeB160. The inner workings of subnetwork100are hidden from the outer network.

FIG.7illustrates an exemplary process700for reconfiguring subnetwork100based on an increase or decrease in traffic demand. This enables the subnetwork100to expand and contract based on demand while hiding the changes to the subnetwork from the outer network. In step705, the operation and maintenance module120may, in conjunction with the S1-Conn110, make an assessment of current traffic usage and demand. This may involve analyzing historical usage data as well as extrapolating near-future demand. For example, if subnetwork100is deployed in a stadium, operation and maintenance module120may have stored in accessible memory a calendar of upcoming events so that it can anticipate periods of high and low demand. For deployments such as in a dense urban setting, operation and maintenance module120may have accumulated historical data on demand based on time of day, day of week, holidays, and days with special events. Given this, operation and maintenance module120may be able to perform appropriate analytics to estimate current and near-future demand, and take action accordingly to provide for the provisioning of cloud-based computing capacity for virtual internal eNodeBs125, or to power up/down hardware-based internal eNodeBs125. Additionally, the virtual eNodeBs125may employ 3GPP-specified mechanisms for assessing (i.e., determining) demand, including setting one or more configurable thresholds and comparing actual demand to said thresholds. The eNodeBs125can then send the results of the comparisons to the operation and maintenance module120. The operation and maintenance module120can then further determine whether demand has dropped below a low threshold (e.g., 5% of configured maximum capacity) or whether demand has gone above a high threshold (e.g., 95% of configured maximum capacity). Alternatively, each of the eNodeBs can make the above described further determinations and send an alarm signal, or the like, to the operation and maintenance module120if either of the thresholds has been exceeded, as sketched below.
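A toy version of that threshold comparison follows; the 5% and 95% figures are the example values from the text, while the names and the scalar utilization metric are assumptions made for illustration.

```python
# Hedged sketch of the demand assessment that selects subpath 701 or 702.
LOW_THRESHOLD = 0.05    # example low-water mark (5% of configured maximum)
HIGH_THRESHOLD = 0.95   # example high-water mark (95% of configured maximum)

def scale_decision(current_demand: float, max_capacity: float) -> str:
    """Return 'add' (subpath 701), 'remove' (subpath 702), or 'hold'."""
    utilization = current_demand / max_capacity
    if utilization > HIGH_THRESHOLD:
        return "add"      # bring up new internal eNodeBs
    if utilization < LOW_THRESHOLD:
        return "remove"   # hand off UEs, then shut eNodeBs down
    return "hold"

print(scale_decision(current_demand=980.0, max_capacity=1000.0))  # 'add'
```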
The threshold-based assessment mechanism may use the standard PM-Stat (Performance Measurement) files that are generated every 15 minutes and transmitted to the core network via a northbound interface (not shown) that is also specified by 3GPP. It will be understood that such variations are possible and within the scope of the disclosure. Depending on the result of the assessing done in step705, process700may either take no action (not shown inFIG.7); it may take subprocess path701, in which operation and maintenance module120may increase the capacity of subnetwork100by adding one or more internal eNodeBs125; or it may take subprocess path702, in which operation and maintenance module120may reduce capacity by removing one or more internal eNodeBs125. Regarding subpath701, if in the assessing in step705the operation and maintenance module120determines that additional capacity is required, operation and maintenance module120may proceed to step710and execute instructions to identify where within subnetwork100one or more additional internal eNodeBs125are needed. This may include determining the location of the internal eNodeBs125with the greatest demand and determining the availability of remote radio unit and antenna hardware in the vicinity, for example. In step715, operation and maintenance module120executes instructions to bring up one or more new internal eNodeBs125. In doing so, operation and maintenance module120may execute instructions to have local server hardware instantiate one or more software-based virtual baseband processors, and/or to power up one or more dormant hardware-based base stations. In step720, operation and maintenance module120may issue instructions to S1-Conn110to command the currently-running high-demand internal eNodeBs125to handoff UE connections to the recently-introduced new internal eNodeBs125. This may alternatively be done whereby the operation and maintenance module120issues instructions to the appropriate supervisor modules130, via IP connection145, to have the corresponding internal eNodeBs125execute UE connection handoffs. With the new eNodeBs125up and running, it is necessary to update the identifier mapping information within S1-Conn110. Accordingly, in step725, the newly online internal eNodeBs125may each issue a PWSRestartIndication message (or similar initiation message), indicating its internal eNodeB identifier and constituent cell IDs. In step730, S1-Conn110intercepts the one or more PWSRestartIndication messages, one from each newly online internal eNodeB125, extracts the internal eNodeB identifier and corresponding cell IDs, and adds this information to the pre-existing mapping that S1-Conn110stores in its memory. In step735, S1-Conn110may issue its own PWSRestartIndication to one or more MMEs150, similarly to step235in process200. In this case, the outer network is not aware of the addition of new internal eNodeBs125. Instead, it is only aware of a single "giant" eNodeB that has one or more additional cells. Regarding subpath702, if in the assessing in step705the operation and maintenance module120determines that subnetwork100has excess capacity, operation and maintenance module120may proceed to step750and execute instructions to identify which internal eNodeBs125within subnetwork100are to be shut down. This may include determining the location of the internal eNodeBs125with insufficient demand and the internal identifiers of neighboring eNodeBs125that might be available for handoff.
In step755, operation and maintenance module120may execute instructions to command the internal eNodeBs125designated for shutdown to handoff UE connections to neighboring eNodeBs that are otherwise capable of servicing these UEs170. As with step720, this may happen in one or more ways: whereby operation and maintenance module120issues instructions to S1-Conn110to command the handoffs, or whereby operation and maintenance module120issues instructions to the relevant supervisor modules130to implement the handoffs. It will be understood that such variations are possible and within the scope of the disclosure. In step760, operation and maintenance module120may shut down the internal eNodeBs125designated in step750. In the case of software-based virtual internal eNodeBs125, this may involve terminating the corresponding virtual machine running on the subnetwork's server hardware. Alternatively (or additionally) this may involve powering down appropriate hardware-based base stations. Operation and maintenance module120may do this by issuing commands to the relevant supervisor modules130. In step765, S1-Conn110executes instructions to remove the terminated internal eNodeB identifiers and corresponding cell IDs from its memory. Subprocess702then proceeds to step735. In step735, S1-Conn110issues a new PWSRestartIndication with the revised list of cell IDs (minus the cell IDs corresponding to the terminated internal eNodeBs125). The ability of the S1-Conn110to intercept, re-map information within, repackage, and transmit messages between the internal eNodeBs125and the MMEs150enables other capabilities. For example, S1-Conn110may identify patterns in messages from the internal eNodeBs125and derive position information for one or more of the UEs170connected to them.

FIG.8illustrates two exemplary processes800by which S1-Conn110handles position-related information in accordance with the LTE Positioning Protocol Annex (LPPa) between an E-SMLC (Evolved Serving Mobile Location Center)801and an internal eNodeB125and a UE170, respectively. The E-SMLC801may be coupled to subnetwork100via one of the MMEs150. The connection between MME150and E-SMLC801may be over an SLs interface, as specified in 3GPP TS 23.271. Details regarding LPPa may be found in 3GPP TS 36.455. Through process800, the E-SMLC801interacts with the eNodeB in accordance with LPPa procedures, with the exception of the intervention of the S1-Conn110, which as described above, makes the E-SMLC function as though it is interacting with a single "giant" eNodeB that is actually the S1-Conn110acting as a proxy for the internal eNodeBs125within subnetwork100. In step805, E-SMLC801issues a request/command to the eNodeB emulated by S1-Conn110. In this case, the E-SMLC801is not aware of the internal eNodeBs125of subnetwork100, and only interacts with the S1-Conn110. The request/command may include, for example, E-CID (Enhanced Cell ID) MEASUREMENT INITIATION REQUEST, E-CID MEASUREMENT TERMINATION COMMAND, OTDOA (Observed Time Difference of Arrival) INFORMATION REQUEST, etc. Note that in these interactions, S1-Conn110will report a predetermined location that may or may not be the actual location of the instantiation of S1-Conn110. For example, if subnetwork100is deployed in a venue such as a stadium or an airport, the position reported by S1-Conn110may be the location of the venue's security office, or its main entrance, etc. Alternatively, S1-Conn110may return a list of locations, one for each cell135within subnetwork100.
In step810, S1-Conn110receives and processes the request/command, and in step820, the S1-Conn110packages a response and transmits it to the E-SMLC801. FIG.9illustrates an exemplary process900by which S1-Conn110may selectively intercept requests from multiple UEs170connecting to one or more eNodeBs125of subnetwork100and take actions to intervene and notify relevant people/entities associated with the venue of anomalous behavior among connected or connecting UEs170. In step905, a plurality of UEs170issue messages to initiate a call, either by VoIP or CS fallback to a 3G/2G cell (not shown). These calls may be initiated via one internal eNodeB125, or two or more neighboring internal eNodeBs125. In step910, S1-Conn110intercepts the call initiation messages. In the case of VoIP, the S1-Conn110retrieves the QCI (QoS Class Identifier) from each message. If the QCI is equal to 1, the bearer to be established is identified as corresponding to a voice call. Alternatively, if the QCI is equal to 5, then the message corresponds to IMS (IP Multimedia Subsystem) signaling used to establish and release a VoIP connection. As with any message, S1-Conn110remaps the eNodeB cell ID to its virtual subnetwork baseband processor identifier, repackages the message, and transmits it to the intended MME150. With each recognized VoIP call initiation, S1-Conn110may execute instructions to log relevant information corresponding to the call initiation (e.g., UE identifier, internal eNodeB identifier, 28-bit cell identifier (ECGI), S-TMSI (SAE-Temporary Mobile Subscriber Identity), time of receipt of message, etc.). In step915, S1-Conn110stores information related to the call establishment events of step910. Further to step915, S1-Conn110may execute instructions to identify patterns, including the history of call patterns, as a function of time. In the course of executing these instructions, S1-Conn110may identify an anomaly in call establishment, such as a sudden surge in call establishment messages from UEs170connected to a given internal cell135, or multiple adjacent cells135of a single eNodeB125, or an isolated instance of numerous UEs170within a single cell135simultaneously initiating calls. As used herein, “simultaneously” may imply events within a single narrow time window, such as within 1 second, 5 seconds, etc., at the location(s) corresponding to the antenna(s) of relevant cell(s)135. In this case, S1-Conn110may store a plurality of identifiers, each corresponding to a UE170identified within the cluster. In step920, the S1-Conn110may command the relevant internal eNodeBs125to provide the most recent Timing Advance values corresponding to each UE identified in step915. Subsequently, in step925, the relevant internal eNodeBs125may provide the requested Timing Advance information corresponding to each UE170identified in step915. In step930, once the S1-Conn110has received these values, it may execute instructions to determine if the Timing Advance values are sufficiently clustered to indicate whether the call establishment procedures executed by the relevant UEs may be in response to an event in their common location. It will be understood that, in doing so, S1-Conn110may execute instructions corresponding to one or more known clustering algorithms. If the clustering calculated in step930indicates a possible event, S1-Conn110may transmit instructions to neighboring internal eNodeBs125to determine Timing Advance values for each of the relevant UEs170and provide them to the S1-Conn110.
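The QCI-based classification and surge detection of steps910-915lend themselves to a sliding-window sketch. In the Python fragment below, the window size, surge threshold, and field names are illustrative assumptions, not values taken from the patent:

```python
import time
from collections import defaultdict, deque

# Steps 910-915 as a sketch: classify intercepted bearer messages by QCI,
# log VoIP call initiations per cell, and flag a near-simultaneous surge.

WINDOW_SECONDS = 5     # "simultaneously": events inside one narrow time window
SURGE_THRESHOLD = 20   # assumed count of initiations that qualifies as anomalous

call_log = defaultdict(deque)   # ECGI -> deque of (timestamp, UE identifier)

def classify_qci(qci):
    if qci == 1:
        return "voice-bearer"   # QCI 1: bearer for a voice call
    if qci == 5:
        return "ims-signaling"  # QCI 5: IMS signaling that sets up/releases VoIP
    return "other"

def log_call_initiation(ecgi, ue_id, qci, now=None):
    """Log a VoIP initiation; return the clustered UE identifiers on a surge."""
    if classify_qci(qci) != "voice-bearer":
        return []
    now = time.time() if now is None else now
    window = call_log[ecgi]
    window.append((now, ue_id))
    while window and now - window[0][0] > WINDOW_SECONDS:
        window.popleft()        # discard events older than the time window
    if len(window) >= SURGE_THRESHOLD:
        return [uid for _, uid in window]   # step 915: store these identifiers
    return []

# Twenty UEs initiating calls in the same cell within one second trips the check.
for i in range(20):
    flagged = log_call_initiation("ecgi-1", f"ue-{i}", qci=1, now=100.0 + i * 0.05)
print(len(flagged))   # 20
```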
S1-Conn110may then determine the position of the cluster of UEs170based on triangulation. In step935, if S1-Conn110identifies a clustering of Timing Advance values for the relevant UEs170(regardless of whether it executes instructions to determine a cluster position via triangulation), S1-Conn110may execute instructions to send a message to a predetermined entity within the venue of subnetwork100. An example of a predetermined entity may include a customer office, such as a security office.
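A rough sketch of the clustering and triangulation can be built from the fact that one LTE Timing Advance step corresponds to roughly 78.12 m of range. Below, a simple spread test stands in for the unspecified "known clustering algorithms", and a closed-form three-circle fix stands in for the triangulation; all names and thresholds are illustrative assumptions:

```python
import math

TA_STEP_METERS = 78.12   # one LTE Timing Advance step ~ 78.12 m of range

def is_clustered(ta_values, max_spread_m=150.0):
    # Crude stand-in for a real clustering algorithm: the UEs are treated as
    # co-located if the spread of their implied ranges from the cell is small.
    ranges = [ta * TA_STEP_METERS for ta in ta_values]
    return max(ranges) - min(ranges) <= max_spread_m

def trilaterate(anchors, ranges):
    """Position fix from three (x, y) cell antenna positions and ranges."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    # Subtract the first circle equation from the other two: a linear system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

anchors = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]   # three cell antennas
true_pos = (300.0, 400.0)                               # the (unknown) cluster
ranges = [math.dist(true_pos, a) for a in anchors]
print(is_clustered([5, 6, 5]))       # True: UEs at similar range from the cell
print(trilaterate(anchors, ranges))  # ~ (300.0, 400.0)
```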
34,909
11943638
DETAILED DESCRIPTION The embodiments set forth below represent information to enable those skilled in the art to practice the embodiments and illustrate the best mode of practicing the embodiments. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the disclosure and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure. Radio Node: As used herein, a “radio node” is either a radio access node or a wireless device. Radio Access Node: As used herein, a “radio access node” or “radio network node” is any node in a radio access network of a cellular communications network that operates to wirelessly transmit and/or receive signals. Some examples of a radio access node include, but are not limited to, a base station (e.g., a New Radio (NR) base station (gNB) in a Third Generation Partnership Project (3GPP) Fifth Generation (5G) NR network or an enhanced or evolved Node B (eNB) in a 3GPP Long Term Evolution (LTE) network), a high-power or macro base station, a low-power base station (e.g., a micro base station, a pico base station, a home eNB, or the like), and a relay node. Core Network Node: As used herein, a “core network node” is any type of node in a core network. Some examples of a core network node include, e.g., an Access and Mobility Management Function (AMF), a Mobility Management Entity (MME), a Packet Data Network Gateway (P-GW), a Service Capability Exposure Function (SCEF), or the like. Wireless Device: As used herein, a “wireless device” is any type of device that has access to (i.e., is served by) a cellular communications network by wirelessly transmitting and/or receiving signals to a radio access node(s). Some examples of a wireless device include, but are not limited to, a User Equipment device (UE) in a 3GPP network and a Machine Type Communication (MTC) device. Network Node: As used herein, a “network node” is any node that is either part of the radio access network or the core network of a cellular communications network/system. Note that the description given herein focuses on a 3GPP cellular communications system and, as such, 3GPP terminology or terminology similar to 3GPP terminology is oftentimes used. However, the concepts disclosed herein are not limited to a 3GPP system. Note that, in the description herein, reference may be made to the term “cell;” however, particularly with respect to 5G NR concepts, beams may be used instead of cells and, as such, it is important to note that the concepts described herein are equally applicable to both cells and beams. FIG.1illustrates one example of a wireless communication system10in which embodiments of the present disclosure may be implemented. In this example, the wireless communication system10is a cellular communications network and, in particular, is a 3GPP New Radio (NR) cellular communications network. As such, 3GPP NR terminology is oftentimes used. Note, however, that the concepts disclosed herein are not limited to 3GPP NR and may be used in other types of wireless communication systems that utilize network slicing. As illustrated, the wireless communication system10includes a number of wireless devices12(i.e., wireless communication devices12or UEs12).
In addition, the wireless communication system10includes a RAN that includes a number of radio access nodes14(e.g., eNBs or NR base stations (gNBs)) serving corresponding coverage areas or cells16. The radio access nodes14are also referred to herein as RAN nodes14. The radio access nodes14are connected to a core network17, which includes a number of core network nodes or entities, as will be appreciated by one of skill in the art. For a 5G NR network, the core network nodes include, for example, an Access and Mobility Management Function(s) (AMF(s))18, a Network Slice Selection Function(s) (NSSF(s))19, or similar core network entities, etc., as will be appreciated by one of skill in the art. In some embodiments, operator policies regarding time (service hours) and location constraints (service area) for a network slice are configured in an NSSF19and provided to an AMF(s)18at setup of the network and whenever changed (e.g., due to elapse of time). This, in turn, enables the AMF18to provide, per Tracking Area (TA), the intersection of operator policies and the inherent support in the AMF18to the RAN nodes14at N2 setup and at later changes through a configuration update procedure. In a similar way but in the opposite direction, in some embodiments, the AMF18will provide the NSSF19, per TA, the intersection of the slice support in the AMF18and in the RAN node14. This will enable the NSSF19to select target AMF(s)18based on what slices are supported by AMF18and RAN node14, per TA. Some embodiments of the present disclosure incorporate any one or more of the following proposals:
Proposal 1: Let NSSF19, at setup of the N22 interface, provide AMF18with what slices are permitted, per TA, according to operator policies.
Proposal 2: Let the RAN node14, at setup of the N2 interface, provide AMF18with RAN node supported slices per TA.
Proposal 3: Let AMF18provide the RAN node14with core network supported slices per TA, at setup of the N2 interface and whenever the core network support changes.
Proposal 4: Let the AMF18provided core network support of slices, per TA, be determined by the intersection of what slices the AMF18supports and what slices are permitted, per TA, according to operator policies provided by the NSSF19to AMF18at setup of the N22 interface or whenever the operator policies change.
Proposal 5: Let the AMF18, at setup of the N22 interface, provide the NSSF19with the AMF18and RAN node(s)14supported slices per TA.
Proposal 6: Let the AMF18and RAN node(s)14provided support of slices, per TA, be determined by the intersection of what slices the AMF18supports and what slices are supported per RAN node(s)14and TA.
Whenever new RAN nodes14are deployed, the network slices supported and permitted by the network are assessed and provided to the RAN nodes14. With the information provided and with the exchange of information over the Xn interface, a source RAN node14will be able to determine if Xn based handover shall be performed or if N2 based handover shall be used. When N2 based handover is used and in the case of intra-AMF handover, the AMF18will also be able to conclude what slices, in use by the UE12, are supported in a target TA. If a slice(s) is not supported in the target TA, then AMF18, during handover, indicates to the corresponding Session Management Function(s) (SMF(s)) that the slice is not supported in the target TA.
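The per-TA relationships running through these proposals reduce to set intersections, as sketched below in Python; the S-NSSAI names and the per-TA modelling of AMF support are illustrative assumptions, not 3GPP data structures:

```python
# Slice support and permission per Tracking Area as plain set intersections.

def per_ta_intersection(a, b):
    """Intersect two {TA: set of S-NSSAIs} maps, tracking area by tracking area."""
    return {ta: a[ta] & b[ta] for ta in a.keys() & b.keys()}

ran_supported = {"TA1": {"eMBB", "URLLC"}, "TA2": {"eMBB"}}
amf_supported = {"TA1": {"eMBB", "URLLC", "mMTC"}, "TA2": {"eMBB", "URLLC"}}
nssf_permitted = {"TA1": {"eMBB"}, "TA2": {"eMBB", "URLLC"}}   # operator policy

# Proposals 4 and 6: what may actually be used per TA is the intersection of
# RAN support, AMF support, and the operator-permitted slices.
ran_amf = per_ta_intersection(ran_supported, amf_supported)
usable = per_ta_intersection(ran_amf, nssf_permitted)
print(usable)   # TA1 -> {'eMBB'}, TA2 -> {'eMBB'} (dict order may vary)
```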
The SMF(s) may then release the Protocol Data Unit (PDU) session(s) as defined in 3GPP TS 23.501 clause 5.15.5.2.2 (see above) or the SMF may trigger deactivation of User Plane (UP) resources for the PDU session. FIGS.2A and2Billustrate the operation of the wireless communication system10ofFIG.1in accordance with embodiments in which the information related to the slices supported and permitted in a new TA is distributed when a new slice, and in particular a new AMF18and RAN node14, is instantiated. A new AMF18is instantiated and, at deployment of a new RAN node14, the RAN node14sends an N2 setup request, propagating into an N22 setup request.
Step100A: It is assumed that a new AMF18is instantiated in the network. The AMF configuration includes the AMF Identifier (ID), i.e. a Globally Unique AMF Identifier (GUAMI), and the network slice instances, i.e. the Single Network Slice Selection Assistance Information (S-NSSAIs), supported in the AMF18.
Step100B: Similarly, the NSSF19is configured with one or more operator (i.e., business) policies for which network slices (i.e., the S-NSSAIs) are permitted in a TA. These policies are network policies that apply to all subscribers.
Step100C: A first RAN node14is instantiated for the new TA. The RAN node14is configured with a TA Identity(s) (TAI(s)) and the network slices (i.e., the S-NSSAIs) it is intended to support. These network slices are referred to herein as the network slices supported by the RAN node14.
Step102: The RAN node14discovers the AMF(s)18supporting the TAs configured in the RAN node14(e.g., through a Domain Name System (DNS) lookup; in this example, just one AMF18). The RAN node14initiates the setup of the N2 interface, i.e. the signaling interface between the AMF18and the RAN node14, through an N2 setup request message (or equivalent). In this message, the RAN node14reports the supported network slices (i.e., S-NSSAIs) and the TAs it is configured with. This message is sent to all AMFs18that the RAN node14has connectivity to. Network Slice Selection Assistance Information (NSSAI) support could either be homogeneous over all configured TAs or configured individually per TA.
Step104: Triggered by the RAN request, the AMF18checks which network slices (i.e., S-NSSAIs) are supported by both the AMF18and the RAN node14in each TA. This is done as an intersection of the RAN supported S-NSSAIs per TA and the supported S-NSSAIs configured in the AMF18, and it results in the “RAN/AMF Supported S-NSSAI(s) per TA.”
Step106: The AMF18can now, in principle, serve at least one TA, and needs to establish the connectivity to the NSSF19for assessing which network slices (i.e., S-NSSAIs) are permitted in the TA. Note that “permitted” is to be distinguished from “supported”. As used herein, a network slice is “permitted” for a TA if the network policies allow the network slice to be used in the TA. The AMF18sends a N22 setup request (or equivalent) including the AMF ID and the lists of {RAN/AMF supported S-NSSAI(s) per TA}.
Step108: The NSSF19determines an intersection of the RAN/AMF supported S-NSSAI(s) per TA and the operator business policies of the permitted S-NSSAI(s) per TA. The resulting “Supported and Permitted S-NSSAI(s) per TA and AMF ID” is stored in the NSSF19to be used in a procedure to identify the best suited AMF18for serving a UE12requesting a specific set of S-NSSAIs.
Step110: The NSSF19provides the list of permitted S-NSSAI(s) per TA to the AMF in a N22 setup response message.
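The setup handshake of steps102-112can be compressed into a toy walk-through. In the sketch below, the configuration values and message shapes are assumed for illustration only:

```python
# Steps 102-112 as a toy data flow: N2 setup request -> AMF intersection ->
# N22 exchange with the NSSF -> N2 setup response back to the RAN node.

AMF_SUPPORTED = {"eMBB", "URLLC"}                            # step 100A config
NSSF_POLICY = {"TA1": {"eMBB"}, "TA2": {"eMBB", "URLLC"}}    # step 100B policies

def n2_setup(ran_supported_per_ta):
    # Step 104: "RAN/AMF Supported S-NSSAI(s) per TA".
    ran_amf = {ta: s & AMF_SUPPORTED for ta, s in ran_supported_per_ta.items()}
    # Steps 106-110: N22 setup; the NSSF stores ran_amf intersected with its
    # policy and returns the operator-permitted slices per TA.
    permitted = {ta: NSSF_POLICY.get(ta, set()) for ta in ran_amf}
    # Step 112A: the AMF intersects its own support with the permitted slices;
    # step 112B: the result rides back in the N2 setup response.
    return {ta: AMF_SUPPORTED & permitted[ta] for ta in ran_amf}

print(n2_setup({"TA1": {"eMBB", "URLLC", "mMTC"}, "TA2": {"URLLC"}}))
# TA1 -> {'eMBB'}; TA2 -> {'eMBB', 'URLLC'} per step 112A (the RAN node may
# then intersect this with its own support, leaving {'URLLC'} usable in TA2)
```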
In some alternative embodiments, the NSSF19provides the list of supported and permitted S-NSSAI(s) per TA and AMF ID to the AMF18.
Step112: The AMF18stores the lists of permitted S-NSSAI(s) per TA to be used in the mobility procedures to compare it with the list of S-NSSAIs active in a UE12at handover. The AMF18performs an intersection of the AMF supported S-NSSAI(s) per TA and the operator business policies of the permitted S-NSSAI(s) per TA as received from NSSF19(step112A). Note that, for the alternative embodiments mentioned in the preceding paragraph, step112A would not be needed. The resulting list of supported and permitted S-NSSAI(s) per TA is provided in a N2 setup response to the RAN node14(step112B). Note that, in some alternative embodiments, the AMF18provides (in step112B) a list of the AMF supported and NSSF permitted network slices to the RAN node14, and the RAN node determines an intersection of the RAN node supported network slices and the AMF supported and NSSF permitted network slices to determine the list of supported and permitted network slices (i.e., S-NSSAI(s)) per TA.
Step114: The (R)AN node14stores the list of supported and permitted S-NSSAI(s) per TA and, when suitable, for instance when it has received similar information from all AMFs it is connected to, the RAN node14exchanges the information of the supported and permitted S-NSSAI(s) per TA with all “neighboring” RAN nodes it has contact with through an Xn interface, if any. This allows the “neighboring” RAN nodes to assess whether a handover for a UE12should be performed as Xn handover (in case all active slices are supported by the target RAN node) or as a N2 handover.
FIGS.3A and3Billustrate embodiments in which network slice/TA support is changed in the RAN node14. In particular, upon occurrence of a change of supported TAs and/or S-NSSAI in RAN node14, a N2 configuration update request is initiated by the RAN node14.
Step200: A precondition is that RAN nodes14, AMF18, and NSSF19are configured and have exchanged the supported and permitted S-NSSAI per TA.
Step202: A change of configuration in the RAN node14on the supported S-NSSAI and/or TAs is made. The RAN node14triggers the update of the configuration information.
Steps204-216: Follow steps102-114described inFIGS.2A and2B, but based on N2 configuration update request/response and N22 configuration update request/response as an alternative to the previously described setup messages.
FIGS.4A and4Billustrate embodiments in which network slice/TA support is changed in the AMF18. In particular, upon occurrence of a change of supported NSSAIs in AMF18, the AMF18sends a configuration update command towards the RAN node14to initiate an N2 configuration update.
Step300: A precondition is that RAN nodes14, AMF18, and NSSF19are configured and have exchanged the supported and permitted S-NSSAI per TA.
Step302: A change of configuration in the AMF18on the supported S-NSSAI is made.
Step304: The AMF18triggers a request for an update of the supported and permitted S-NSSAI per TA in the RAN node14through a N2 configuration update command.
Steps306-318: Follow steps204-216described inFIGS.3A and3B.
FIGS.5A and5Billustrate embodiments in which there is a change in business policies. In particular, upon occurrence of a change of business policies for certain TAs, the NSSF19triggers a change of configuration in the impacted AMF(s)18. This may trigger additional actions impacting the connected devices to handle the changes in the slice serving areas.
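Once neighboring nodes have exchanged their supported and permitted slices per TA, the source node's handover choice reduces to a containment test. A brief sketch with assumed types and names:

```python
# The source RAN node's Xn-vs-N2 decision after the per-TA slice exchange.

def choose_handover(ue_active_slices, target_ta, target_supported_per_ta):
    target = target_supported_per_ta.get(target_ta, set())
    # Xn handover only if the target TA supports every slice active in the UE;
    # otherwise use N2 handover so the AMF can tell the SMF(s) to release or
    # deactivate the PDU sessions of the unsupported slices.
    return "Xn" if ue_active_slices <= target else "N2"

neighbor_support = {"TA2": {"eMBB", "URLLC"}, "TA3": {"eMBB"}}
print(choose_handover({"eMBB", "URLLC"}, "TA2", neighbor_support))  # Xn
print(choose_handover({"eMBB", "URLLC"}, "TA3", neighbor_support))  # N2
```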
Step400: A precondition is that RAN nodes14, AMF18, and NSSF19are configured and have exchanged the supported and permitted S-NSSAI per TA.
Step402: A change of the business policies in the NSSF19for certain S-NSSAIs and TAs occurs. The NSSF19identifies the impacted AMF(s)18. The change of the business policies may be due to time-based business policies.
Step404: The NSSF19triggers a request for an update of the permitted S-NSSAI per TA through a N22 slice config update request.
Step406: The AMF18acknowledges the reception of the update request with an N22 slice config update ack.
Steps408-422: Follow steps304-318described inFIGS.4A and4B.
FIGS.6A and6Billustrate an embodiment in which a new AMF18is deployed. The RAN node14detects a new AMF18. For example, the RAN node14performs a DNS look-up based on the TAI(s) configured in the RAN node14to thereby detect any new AMF(s)18. In some alternative embodiments, the new AMF18triggers the RAN node14to check for new AMFs. As another alternative, the new AMF18reports its existence to the RAN node14.
Step500: A precondition is that RAN nodes14, AMF18, and NSSF19are configured and have exchanged the supported and permitted S-NSSAI per TA.
Step502: A new AMF18is deployed. The new AMF18has a set of configured supported S-NSSAI per TA.
Step504: The RAN nodes14detect connectivity to a new AMF18.
Steps506-518: Follow steps102-114inFIGS.2A and2B. Note that, in some embodiments, the RAN node14need not signal to all AMFs18because it should be sufficient for the RAN node14to signal to the new AMF18.
FIG.7is a schematic block diagram of the wireless communication device12, or UE12, according to some embodiments of the present disclosure. As illustrated, the wireless communication device12includes circuitry20comprising one or more processors22(e.g., Central Processing Units (CPUs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), Digital Signal Processors (DSPs), and/or the like) and memory24. The wireless communication device12also includes one or more transceivers26each including one or more transmitters28and one or more receivers30coupled to one or more antennas32. In some embodiments, the functionality of the wireless communication device12described herein may be implemented in hardware (e.g., via hardware within the circuitry20and/or within the processor(s)22) or be implemented in a combination of hardware and software (e.g., fully or partially implemented in software that is, e.g., stored in the memory24and executed by the processor(s)22). In some embodiments, a computer program including instructions which, when executed by the at least one processor22, causes the at least one processor22to carry out at least some of the functionality of the wireless communication device12according to any of the embodiments described herein is provided. In some embodiments, a carrier containing the aforementioned computer program product is provided. The carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage medium (e.g., a non-transitory computer readable medium such as memory). FIG.8is a schematic block diagram of the wireless communication device12, or UE, according to some other embodiments of the present disclosure. The wireless communication device12includes one or more modules34, each of which is implemented in software. The module(s)34provide the functionality of the wireless communication device12described herein.
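The DNS-based detection of a new AMF in steps502-506above can be sketched as a polling loop; the discover_amfs() stub below stands in for the actual DNS (or NRF) query and is purely an assumption:

```python
# Step 504 under the DNS-based alternative: re-resolve the AMFs serving the
# node's TAIs and open N2 toward any newly discovered one.

known_amfs = set()

def discover_amfs(tais):
    # Placeholder result; a real RAN node would issue a lookup per TAI.
    return {"amf-1.operator.example", "amf-2.operator.example"}

def poll_for_new_amfs(tais):
    global known_amfs
    found = discover_amfs(tais)
    for amf in sorted(found - known_amfs):
        # Steps 506-518 then follow the N2/N22 setup of steps 102-114.
        print(f"new AMF detected: {amf} -> sending N2 setup request")
    known_amfs = found

poll_for_new_amfs(["TAI-1"])   # first poll reports both AMFs as new
poll_for_new_amfs(["TAI-1"])   # second poll is quiet: nothing new appeared
```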
FIG.9is a schematic block diagram of a network node36(e.g., the RAN node14such as, for example, an eNB or gNB or a core network node such as the AMF18or NSSF19) according to some embodiments of the present disclosure. As illustrated, the network node36includes a control system38that includes circuitry comprising one or more processors40(e.g., CPUs, ASICs, DSPs, FPGAs, and/or the like) and memory42. The control system38also includes a network interface44. In embodiments in which the network node36is a radio access node14, the network node36also includes one or more radio units46that each include one or more transmitters48and one or more receivers50coupled to one or more antennas52. In some embodiments, the functionality of the network node36(e.g., the functionality of the RAN node14such as, for example, an eNB or gNB or a core network node such as the AMF18or NSSF19) described above (e.g., with respect toFIGS.2A and2B,3A and3B,4A and4B,5A and5B, and6A and6B) may be fully or partially implemented in software that is, e.g., stored in the memory42and executed by the processor(s)40. FIG.10is a schematic block diagram that illustrates a virtualized embodiment of the network node36(e.g., the RAN node14such as, for example, an eNB or gNB or a core network node such as the AMF18or NSSF19) according to some embodiments of the present disclosure. As used herein, a “virtualized” network node36is a network node36in which at least a portion of the functionality of the network node36is implemented as a virtual component (e.g., via a virtual machine(s) executing on a physical processing node(s) in a network(s)). As illustrated, the network node36optionally includes the control system38, as described with respect toFIG.9. In addition, if the network node36is the radio access node14, the network node36also includes the one or more radio units46, as described with respect toFIG.9. The control system38(if present) is connected to one or more processing nodes54coupled to or included as part of a network(s)56via the network interface44. Alternatively, if the control system38is not present, the one or more radio units46(if present) are connected to the one or more processing nodes54via a network interface(s). Alternatively, all of the functionality of the network node36described herein may be implemented in the processing nodes54. Each processing node54includes one or more processors58(e.g., CPUs, ASICs, DSPs, FPGAs, and/or the like), memory60, and a network interface62. In this example, functions64of the network node36described herein (e.g., the functions of the RAN node14such as, for example, an eNB or gNB or a core network node such as the AMF18or NSSF19described above with respect to, e.g.,FIGS.2A and2B,3A and3B,4A and4B,5A and5B, and6A and6B) are implemented at the one or more processing nodes54or distributed across the control system38(if present) and the one or more processing nodes54in any desired manner. In some particular embodiments, some or all of the functions64of the network node36described herein are implemented as virtual components executed by one or more virtual machines implemented in a virtual environment(s) hosted by the processing node(s)54. As will be appreciated by one of ordinary skill in the art, additional signaling or communication between the processing node(s)54and the control system38(if present) or alternatively the radio unit(s)46(if present) is used in order to carry out at least some of the desired functions.
Notably, in some embodiments, the control system38may not be included, in which case the radio unit(s)46(if present) communicates directly with the processing node(s)54via an appropriate network interface(s). In some particular embodiments, higher layer functionality (e.g., layer3and up and possibly some of layer2of the protocol stack) of the network node36may be implemented at the processing node(s)54as virtual components (i.e., implemented “in the cloud”) whereas lower layer functionality (e.g., layer1and possibly some of layer2of the protocol stack) may be implemented in the radio unit(s)46and possibly the control system38. In some embodiments, a computer program including instructions which, when executed by the at least one processor40,58, causes the at least one processor40,58to carry out the functionality of the network node36or a processing node54according to any of the embodiments described herein is provided. In some embodiments, a carrier containing the aforementioned computer program product is provided. The carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage medium (e.g., a non-transitory computer readable medium such as the memory42,60). FIG.11is a schematic block diagram of the network node36according to some other embodiments of the present disclosure. The network node36includes one or more modules66, each of which is implemented in software. The module(s)66provide the functionality of the network node36described herein (e.g., the functions of the RAN node14such as, for example, an eNB or gNB or a core network node such as the AMF18or NSSF19described above with respect to, e.g.,FIGS.2A and2B,3A and3B,4A and4B,5A and5B, and6A and6B). EXAMPLE EMBODIMENTS While not being limited thereto, some example embodiments described above may be summarized in the following manner:1. A method of operation of a radio access node14in a cellular communications network10, comprising:sending102,204,306,410,506, to a mobility function entity18in a core network of the cellular communications network10, first information indicative of one or more tracking areas supported by the radio access node14and one or more network slices supported by the radio access node14; andreceiving112B,214B,316B,420B,516B, from the mobility function entity18, second information indicative of one or more network slices that are:(a) supported by the radio access node14and the mobility function entity18and (b) permitted by one or more network policies, for each of at least one of the one or more tracking areas supported by the radio access node14, or(a) supported by the mobility function entity (18) and (b) permitted by one or more network policies, for each of at least one of the one or more tracking areas supported by the radio access node14.2. The method of embodiment 1 wherein sending102,204,306,410,506the first information comprises sending102the first information to the mobility function entity at a time of deployment of the radio access node and instantiation of the mobility function entity.3. The method of embodiment 1 or 2 wherein:sending102,204,306,410,506the first information comprises sending102a N2 setup request comprising the first information; andreceiving112B,214B,316B,420B,516B the second information comprises receiving112B a N2 setup response comprising the second information.4.
The method of embodiment 1 wherein sending102,204,306,410,506the first information comprises sending204the first information upon occurrence of a change in the one or more tracking areas supported by the radio access node and/or a change in the one or more network slices supported by the radio access node.5. The method of embodiment 1 or 4 wherein:sending102,204,306,410,506the first information comprises sending204a N2 configuration update request comprising the first information; andreceiving112B,214B,316B,420B,516B the second information comprises receiving214B a N2 configuration update response comprising the second information.6. The method of embodiment 1 wherein sending102,204,306,410,506the first information comprises sending306,410the first information upon receiving a command from the mobility function entity.7. The method of embodiment 1 or 6 further comprising:receiving304,408a N2 configuration update command from the mobility function entity;wherein:sending102,204,306,410,506the first information comprises sending306,410a N2 configuration update request comprising the first information upon receiving the N2 configuration update command; andreceiving112B,214B,316B,420B,516B the second information comprises receiving316B,420B a N2 configuration update response comprising the second information.8. The method of embodiment 1 further comprising:detecting504connectivity to the mobility function entity18as a new mobility function entity;wherein sending102,204,306,410,506the first information comprises sending506a N2 setup request to the mobility function entity18upon detecting504connectivity to the mobility function entity18.9. The method of any one of embodiments 1 to 8 further comprising exchanging114,216,318,422,518the second information with one or more other radio access nodes.10. A radio access node14for a cellular communications network10, the radio access node14adapted to perform the method of any one of embodiments 1 to 9.11. A radio access node14for a cellular communications network10, comprising:a network interface44,62;at least one processor40,58; andmemory42,60comprising instructions executable by the at least one processor40,58whereby the radio access node14is operable to perform the method of any one of embodiments 1 to 9.12. A radio access node14for a cellular communications network10, comprising:one or more modules66operable to perform the method of any one of embodiments 1 to 9.13. A method of operation of a mobility function entity18in a core network17of a cellular communications network10, comprising:receiving102,204,306,410,506, from a radio access node14of the cellular communications network10, first information indicative of one or more tracking areas supported by the radio access node14and one or more network slices supported by the radio access node14;determining112A,214A,316A,420A,516A one or more network slices that are: (a) supported by the radio access node14and the mobility function entity18and (b) permitted by one or more network policies, for each of at least one of the one or more tracking areas supported by the radio access node14;sending112B,214B,316B,420B,516B, to the radio access node14, second information indicative of the one or more network slices that are: (a) supported by the radio access node14and the mobility function entity18and (b) permitted by one or more network policies, for each of the at least one of the one or more tracking areas supported by the radio access node14.14.
The method of embodiment 13 further comprising:obtaining106,110;208,212;310,314;414,418;510,514third information indicative of one or more network slices that are permitted by the one or more network policies for each of the at least one of the one or more tracking areas supported by the radio access node;wherein determining112A,214A,316A,420A,516A the one or more network slices that are: (a) supported by the radio access node and the mobility function entity and (b) permitted by one or more network policies, for each of the at least one of the one or more tracking areas supported by the radio access node, comprises:for each of the at least one of the one or more tracking areas supported by the radio access node, determining112A,214A,316A,420A,516A the one or more network slices for the tracking area that are: (a) supported by the radio access node and the mobility function entity and (b) permitted by the one or more network policies based on the third information.15. The method of embodiment 13 further comprising:determining104,206,308,412,508, for each of at least one of the one or more tracking areas supported by the radio access node, one or more network slices that are supported by both the radio access node and the mobility function entity for the tracking area; andsending106,208,310,414,510, to a network slice selection function entity19, third information that is indicative of the one or more network slices that are supported by both the radio access node14and the mobility function entity18for each of at least one of the one or more tracking areas supported by the radio access node.16. The method of embodiment 15 wherein determining112A,214A,316A,420A,516A the one or more network slices that are: (a) supported by the radio access node and the mobility function entity and (b) permitted by one or more network policies, for each of the at least one of the one or more tracking areas supported by the radio access node, comprises:receiving110,212,314,418,514, from the network slice selection function entity19, fourth information indicative of one or more network slices that are permitted by the one or more network policies for each of the at least one of the one or more tracking areas supported by the radio access node; andfor each of the at least one of the one or more tracking areas supported by the radio access node, determining112A,214A,316A,420A,516A the one or more network slices for the tracking area that are: (a) supported by the radio access node and the mobility function entity and (b) permitted by the one or more network policies based on the fourth information.17. The method of any one of embodiments 13 to 16 wherein receiving102,204,306,410,506the first information comprises receiving102the first information at a time of deployment of the radio access node and instantiation of the mobility function entity.18. The method of any one of embodiments 13 to 17 wherein:receiving102,204,306,410,506the first information comprises receiving102a N2 setup request comprising the first information; andsending112B,214B,316B,420B,516B the second information comprises sending112B a N2 setup response comprising the second information.19. The method of any one of embodiments 13 to 17 wherein receiving102,204,306,410,506the first information comprises receiving204the first information upon occurrence of a change in the one or more tracking areas supported by the radio access node and/or a change in the one or more network slices supported by the radio access node.20.
The method of any one of embodiments 13 to 17 or 19 wherein:receiving102,204,306,410,506the first information comprises receiving204a N2 configuration update request comprising the first information; andsending112B,214B,316B,420B,516B the second information comprises sending214B a N2 configuration update response comprising the second information.21. The method of any one of embodiments 13 to 17 further comprising:sending304, to the radio access node, an update command upon occurrence of a change in the one or more network slices supported by the mobility function entity; andwherein receiving102,204,306,410,506the first information comprises receiving306the first information in response to sending the update command.22. The method of embodiment 21 wherein the update command is an N2 configuration update command, andreceiving306the first information comprises receiving306a N2 configuration update request comprising the first information upon receiving the N2 configuration update command; andsending316B the second information comprises sending316B a N2 configuration update response comprising the second information.23. The method of any one of embodiments 13 to 17 further comprising:receiving404, from a network slice selection function entity, an update request;upon receiving404the update request, sending408, to the radio access node, an update command; andwherein receiving102,204,306,410,506the first information comprises receiving410the first information in response to sending the update command.24. The method of embodiment 23 wherein the update request is an N22 slice configuration update request, the update command is an N2 configuration update command, andreceiving410the first information comprises receiving410a N2 configuration update request comprising the first information upon receiving the N2 configuration update command; andsending420B the second information comprises sending420B a N2 configuration update response comprising the second information.25. A network node that implements a mobility function entity for a core network of a cellular communications system, the network node adapted to perform the method of any one of embodiments 13 to 24.26. A network node36that implements a mobility function entity18for a core network17of a cellular communications system10, comprising:at least one processor40,58; andmemory42,60comprising instructions executable by the at least one processor40,58whereby the network node is operable to perform the method of any one of embodiments 13 to 24.27. A network node36that implements a mobility function entity18for a core network17of a cellular communications system10, comprising:one or more modules66operable to perform the method of any one of embodiments 13 to 24.28. A method of operation of a mobility function entity18in a core network17of a cellular communications network10, comprising:determining112A,214A,316A,420A,516A one or more network slices that are: (a) supported by the mobility function entity18and (b) permitted by one or more network policies, for each of at least one of one or more tracking areas supported by the radio access node14; andsending112B,214B,316B,420B,516B, to the radio access node14, second information indicative of the one or more network slices that are: (a) supported by the mobility function entity18and (b) permitted by one or more network policies, for each of the at least one of the one or more tracking areas supported by the radio access node14.
The following acronyms are used throughout this disclosure.
3GPP Third Generation Partnership Project
5G Fifth Generation
AMF Access and Mobility Management Function
DNS Domain Name System
GUAMI Globally Unique AMF Identifier
ID Identifier
NSSAI Network Slice Selection Assistance Information
NSSF Network Slice Selection Function
PDU Protocol Data Unit
PLMN Public Land Mobile Network
RAN Radio Access Network
SMF Session Management Function
S-NSSAI Single Network Slice Selection Assistance Information
TA Tracking Area
TAI Tracking Area Identity
TS Technical Specification
UE User Equipment
UP User Plane
Those skilled in the art will recognize improvements and modifications to the embodiments of the present disclosure. All such improvements and modifications are considered within the scope of the concepts disclosed herein.
35,582
11943639
DETAILED DESCRIPTION FIGS.1through15, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device. FIG.1is a block diagram illustrating an electronic device101in a network environment100according to various embodiments. Referring toFIG.1, the electronic device101in the network environment100may communicate with an electronic device102via a first network198(e.g., a short-range wireless communication network), or an electronic device104or a server108via a second network199(e.g., a long-range wireless communication network). According to an embodiment, the electronic device101may communicate with the electronic device104via the server108. According to an embodiment, the electronic device101may include a processor120, memory130, an input module150, a sound output module155, a display module160, an audio module170, a sensor module176, an interface177, a connecting terminal178, a haptic module179, a camera module180, a power management module188, a battery189, a communication module190, a subscriber identification module (SIM)196, or an antenna module197. In some embodiments, at least one of the components (e.g., the connecting terminal178) may be omitted from the electronic device101, or one or more other components may be added in the electronic device101. In some embodiments, some of the components (e.g., the sensor module176, the camera module180, or the antenna module197) may be implemented as a single component (e.g., the display module160). The processor120may execute, for example, software (e.g., a program140) to control at least one other component (e.g., a hardware or software component) of the electronic device101coupled with the processor120, and may perform various data processing or computation. According to one embodiment, as at least part of the data processing or computation, the processor120may store a command or data received from another component (e.g., the sensor module176or the communication module190) in volatile memory132, process the command or the data stored in the volatile memory132, and store resulting data in non-volatile memory134. According to an embodiment, the processor120may include a main processor121(e.g., a central processing unit (CPU) or an application processor (AP)), or an auxiliary processor123(e.g., a graphics processing unit (GPU), a neural processing unit (NPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor121. For example, when the electronic device101includes the main processor121and the auxiliary processor123, the auxiliary processor123may be adapted to consume less power than the main processor121, or to be specific to a specified function. The auxiliary processor123may be implemented as separate from, or as part of the main processor121. 
The auxiliary processor123may control, for example, at least some of functions or states related to at least one component (e.g., the display module160, the sensor module176, or the communication module190) among the components of the electronic device101, instead of the main processor121while the main processor121is in an inactive (e.g., sleep) state, or together with the main processor121while the main processor121is in an active (e.g., executing an application) state. According to an embodiment, the auxiliary processor123(e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera module180or the communication module190) functionally related to the auxiliary processor123. According to an embodiment, the auxiliary processor123(e.g., the neural processing unit) may include a hardware structure specified for artificial intelligence model processing. An artificial intelligence model may be generated by machine learning. Such learning may be performed, e.g., by the electronic device101where the artificial intelligence is performed or via a separate server (e.g., the server108). Learning algorithms may include, but are not limited to, e.g., supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning. The artificial intelligence model may include a plurality of artificial neural network layers. The artificial neural network may be a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), a deep Q-network, or a combination of two or more thereof, but is not limited thereto. The artificial intelligence model may, additionally or alternatively, include a software structure other than the hardware structure. The memory130may store various data used by at least one component (e.g., the processor120or the sensor module176) of the electronic device101. The various data may include, for example, software (e.g., the program140) and input data or output data for a command related thereto. The memory130may include the volatile memory132or the non-volatile memory134. The program140may be stored in the memory130as software, and may include, for example, an operating system (OS)142, middleware144, or an application146. The input module150may receive a command or data to be used by another component (e.g., the processor120) of the electronic device101, from the outside (e.g., a user) of the electronic device101. The input module150may include, for example, a microphone, a mouse, a keyboard, a key (e.g., a button), or a digital pen (e.g., a stylus pen). The sound output module155may output sound signals to the outside of the electronic device101. The sound output module155may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing recordings. The receiver may be used for receiving incoming calls. According to an embodiment, the receiver may be implemented as separate from, or as part of the speaker. The display module160may visually provide information to the outside (e.g., a user) of the electronic device101. The display module160may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector.
According to an embodiment, the display module160may include a touch sensor adapted to detect a touch, or a pressure sensor adapted to measure the intensity of force incurred by the touch. The audio module170may convert a sound into an electrical signal and vice versa. According to an embodiment, the audio module170may obtain the sound via the input module150, or output the sound via the sound output module155or an external electronic device (e.g., an electronic device102(e.g., a speaker or a headphone)) directly or wirelessly coupled with the electronic device101. The sensor module176may detect an operational state (e.g., power or temperature) of the electronic device101or an environmental state (e.g., a state of a user) external to the electronic device101, and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module176may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The interface177may support one or more specified protocols to be used for the electronic device101to be coupled with the external electronic device (e.g., the electronic device102) directly or wirelessly. According to an embodiment, the interface177may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface. A connecting terminal178may include a connector via which the electronic device101may be physically connected with the external electronic device (e.g., the electronic device102). According to an embodiment, the connecting terminal178may include, for example, a HDMI connector, a USB connector, a SD card connector, or an audio connector (e.g., a headphone connector). The haptic module179may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or electrical stimulus which may be recognized by a user via his tactile sensation or kinesthetic sensation. According to an embodiment, the haptic module179may include, for example, a motor, a piezoelectric element, or an electric stimulator. The camera module180may capture a still image or moving images. According to an embodiment, the camera module180may include one or more lenses, image sensors, image signal processors, or flashes. The power management module188may manage power supplied to the electronic device101. According to one embodiment, the power management module188may be implemented as at least part of, for example, a power management integrated circuit (PMIC). The battery189may supply power to at least one component of the electronic device101. According to an embodiment, the battery189may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell. The communication module190may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device101and the external electronic device (e.g., the electronic device102, the electronic device104, or the server108) and performing communication via the established communication channel. 
The communication module190may include one or more communication processors that are operable independently from the processor120(e.g., the application processor (AP)) and support a direct (e.g., wired) communication or a wireless communication. According to an embodiment, the communication module190may include a wireless communication module192(e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module194(e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device104via the first network198(e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or the second network199(e.g., a long-range communication network, such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network (e.g., LAN or wide area network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multi components (e.g., multi chips) separate from each other. The wireless communication module192may identify or authenticate the electronic device101in a communication network, such as the first network198or the second network199, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module196. The wireless communication module192may support a 5G network, after a 4G network, and next-generation communication technology, e.g., new radio (NR) access technology. The NR access technology may support enhanced mobile broadband (eMBB), massive machine type communications (mMTC), or ultra-reliable and low-latency communications (URLLC). The wireless communication module192may support a high-frequency band (e.g., the mmWave band) to achieve, e.g., a high data transmission rate. The wireless communication module192may support various technologies for securing performance on a high-frequency band, such as, e.g., beamforming, massive multiple-input and multiple-output (massive MIMO), full dimensional MIMO (FD-MIMO), array antenna, analog beamforming, or large scale antenna. The wireless communication module192may support various requirements specified in the electronic device101, an external electronic device (e.g., the electronic device104), or a network system (e.g., the second network199). According to an embodiment, the wireless communication module192may support a peak data rate (e.g., 20 Gbps or more) for implementing eMBB, loss coverage (e.g., 164 dB or less) for implementing mMTC, or U-plane latency (e.g., 0.5 ms or less for each of downlink (DL) and uplink (UL), or a round trip of 1 ms or less) for implementing URLLC. The antenna module197may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device101. According to an embodiment, the antenna module197may include an antenna including a radiating element composed of a conductive material or a conductive pattern formed in or on a substrate (e.g., a printed circuit board (PCB)). According to an embodiment, the antenna module197may include a plurality of antennas (e.g., array antennas).
In such a case, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network198or the second network199, may be selected, for example, by the communication module190from the plurality of antennas. The signal or the power may then be transmitted or received between the communication module190and the external electronic device via the selected at least one antenna. According to an embodiment, another component (e.g., a radio frequency integrated circuit (RFIC)) other than the radiating element may be additionally formed as part of the antenna module197. According to various embodiments, the antenna module197may form a mmWave antenna module. According to an embodiment, the mmWave antenna module may include a printed circuit board, an RFIC disposed on a first surface (e.g., the bottom surface) of the printed circuit board, or adjacent to the first surface and capable of supporting a designated high-frequency band (e.g., the mmWave band), and a plurality of antennas (e.g., array antennas) disposed on a second surface (e.g., the top or a side surface) of the printed circuit board, or adjacent to the second surface and capable of transmitting or receiving signals of the designated high-frequency band. At least some of the above-described components may be coupled mutually and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, general purpose input and output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)). According to an embodiment, commands or data may be transmitted or received between the electronic device101and the external electronic device104via the server108coupled with the second network199. Each of the electronic devices102or104may be a device of the same type as, or a different type from, the electronic device101. According to an embodiment, all or some of operations to be executed at the electronic device101may be executed at one or more of the external electronic devices102,104, or108. For example, if the electronic device101should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device101, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device101. The electronic device101may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, a cloud computing, distributed computing, mobile edge computing (MEC), or client-server computing technology may be used, for example. The electronic device101may provide ultra low-latency services using, e.g., distributed computing or mobile edge computing. In another embodiment, the external electronic device104may include an internet-of-things (IoT) device. The server108may be an intelligent server using machine learning and/or a neural network. According to an embodiment, the external electronic device104or the server108may be included in the second network199.
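The offloading pattern described above amounts to request, await the outcome, and reply with or without further processing. The following generic sketch uses made-up helper names; nothing in it is a real platform API:

```python
# The request/outcome offload pattern: ask an external device or server to
# perform part of a function and fold the outcome into the local reply.

def run_locally(task):
    return f"local result for {task}"

def request_remote(task, peer):
    # Stand-in for sending the request to another device or a server;
    # returning None models an unreachable peer.
    return f"{peer} result for {task}" if peer else None

def perform(task, low_latency_required=False):
    # Ultra-low-latency services favor an edge (MEC) peer over a distant cloud.
    peer = "edge-server" if low_latency_required else "cloud-server"
    outcome = request_remote(task, peer)
    if outcome is None:
        outcome = run_locally(task)   # fall back to executing the function itself
    return f"reply({outcome})"        # returned with or without further processing

print(perform("image-classification", low_latency_required=True))
```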
The electronic device101may be applied to intelligent services (e.g., smart home, smart city, smart car, or healthcare) based on 5G communication technology or IoT-related technology. FIG.2is a diagram illustrating an example of a system for providing augmented reality according to various embodiments. Referring toFIG.2, a system for providing augmented reality may include an electronic device101, an external electronic device205(e.g., the electronic devices102and104inFIG.1), and/or a server108. For example, the electronic device101may be a portable terminal such as a smart phone. For example, the external electronic device205may include an AR device for providing an augmented reality image, such as AR glasses. For example, the server108may include a cloud server. According to various embodiments, the electronic device101may transmit and/or receive data to and/or from the external electronic device205through a first network (e.g., the first network198inFIG.1) and/or a second network (e.g., the second network199inFIG.1). For example, the electronic device101may receive sensing data and/or image data from the external electronic device205. For example, transmission and/or reception of data between the electronic device101and the external electronic device205may be performed through a Bluetooth communication scheme or a WiFi direct communication scheme. For example, transmission and/or reception of data between the electronic device101and the external electronic device205may be performed through a WiFi communication scheme through an access point (AP). When the electronic device101and the external electronic device205are connected through the WiFi direct communication scheme or the WiFi communication scheme, a wireless communication protocol defined in the IEEE 802.11 wireless local area network (WLAN) standard may be used. In addition to the above-described examples, the electronic device101may be connected to the external electronic device205through the second network199, such as via a cellular communication scheme. According to various embodiments, the electronic device101may transmit and/or receive data to and/or from the server108through a network (e.g., the first network198and/or the second network199inFIG.1). For example, the electronic device101may receive, from the server108, a variety of information that may be used to generate augmented reality image data. For example, transmission and/or reception of data between the electronic device101and the server108may be performed through a connection201bvia a cellular communication scheme using a base station201aand/or a connection203bthrough a WiFi communication scheme using an access point (AP)203a. In the disclosure, the case where the electronic device101or the external electronic device205performs a specific operation may indicate, for example, that a processor included in the electronic device101or the external electronic device205performs a specific operation or controls other hardware (e.g., the wireless communication module192) to perform a specific operation. Alternatively, the case where the electronic device101or the external electronic device205performs a specific operation may indicate, for example, that a processor performs a specific operation or controls other hardware (e.g., the wireless communication module192) to perform a specific operation as at least one command stored in a memory included in the electronic device101or the external electronic device205is executed.
FIG.3illustrates an example of an external electronic device205according to various embodiments. Referring toFIG.3, according to various embodiments, the external electronic device205may be AR glasses301. According to various embodiments, the AR glasses301may include a pair of display devices350and a pair of housings310. The pair of display devices350may be respectively fixed to the pair of housings310in the form of a frame. A pair of wearing members320may extend parallel to each other from the pair of housings310. The AR glasses301may be a head-mounted electronic device. Although the AR glasses301is illustrated as a head-mounted wearable electronic device, this is only an example, and it can be easily understood by those skilled in the art that there is no limitation on the form of implementation of the AR glasses301. According to various embodiments, the AR glasses301may include a spacing adjustment structure340for adjusting the distance between the pair of housings310, and a circuit board360and a battery370disposed in the wearing member320. As another example, a light output device380(e.g., a projector), a light refraction module390(e.g., a prism), or a display module (not shown) may be included in the wearing member320of the electronic device101. According to various embodiments, the display device350may include a display module, a projector, a sensor equipped with a touch circuit, and the like, and a display of the display module may be a transparent or translucent display. As another example, the display device350may include a window member (e.g., a transparent member), and the window member may include a light control member disposed on at least a portion of the window member. The light control member may be translucent glass or a member capable of adjusting the transmittance of light as the color concentration is adjusted. As another example, the display device350may include a lens including a waveguide, a reflective lens, and the like, and each lens may transmit light output from the light output device to the user's eye. According to various embodiments, the pair of housings310may be in the form of a frame that at least partially surrounds the edges of each of the display devices350, and may serve as a rim in the general structure of glasses, including sunglasses. According to various embodiments, the circuit boards360may be disposed in the respective wearing members320, and circuit lines connecting the circuit boards may be disposed inside or outside the pair of housings310. The pair of wearing members320may serve as temples in the general structure of glasses. For example, the pair of housings310may be positioned on the user's face such that the display devices350correspond to the user's eyes, and the pair of wearing members320may be worn on the user's respective ears on both sides of the user's head. According to various embodiments, the pair of wearing members320may be utilized to dispose the circuit board360, the battery370, the light output device380, the light refraction module390, and the like. For example, each of the pair of wearing members320may have a housing structure capable of accommodating the circuit board360, the battery370, the light output device380or the light refraction module390. As another example, the electronic device101may have the circuit board360, the battery370, the light output device380, and the light refraction module390in the pair of wearing members320, respectively.
As another example, the circuit board360, the battery370, the light output device380, or the light refraction module390may be variously disposed in consideration of the weight distribution and comfort of wearing of the electronic device101. According to an embodiment, a plurality of circuit boards360may be configured, one of which may be provided as a board including a driving circuit of the display device350, a processor for processing image information and the like, and a communication module for performing communication with the electronic device101. The processor may output an image using a projector. For example, the processor may receive data for displaying content from the electronic device101through the communication module. The processor may display content on at least a portion of the display device350on the basis of the received data. The processor may identify the position to which an image is to be output based at least on the relative position of the electronic device101with respect to the AR glasses301. Alternatively, the processor may receive information on the display position along with data for displaying content through the communication module. The processor may also identify the position to which an image is to be output on the basis of the relative position of the electronic device101with respect to the AR glasses301and information on the received display position. A configuration in which the processor identifies the relative position of the electronic device101with respect to the AR glasses301and a configuration in which the processor identifies the display position of content in various ways will be described later in more detail. The processor may display content at the identified position on the display device350. For example, the content may be displayed at a position where the user recognizes the content as being displayed in the vicinity of the display device160of the electronic device101. According to various embodiments, the processor of the AR glasses may be implemented to be at least partially the same as the processor120of the electronic device101inFIG.1. The communication module of the AR glasses may be implemented to be at least partially the same as the communication module of the electronic device101inFIG.1. The communication module of the AR glasses may transmit/receive data to/from the communication module190of the electronic device101through at least one of the first network198or the second network199. According to various embodiments, another one of the circuit boards360may be provided as a circuit board on which an interface with a user, a communication module for providing connection with another electronic device or a commercial communication network, various connectors, and sensor modules are mounted. As another example, a microphone and a speaker phone for input/output of sound may also be disposed on one of the circuit boards360, or may be disposed adjacent to one of the circuit boards360. However, the circuit arrangement of the circuit boards360and their functions are not limited thereto, and may be variously modified as necessary. According to an embodiment, the circuit boards360may be respectively disposed in any one of the wearing members320. 
As another example, the sensor module may include a proximity sensor, an illuminance sensor, a gyro sensor, a camera module, an eye tracker, a geomagnetic sensor, an accelerometer, and the like, and various sensors constituting the sensor module are not necessarily installed on one of the circuit boards360. For example, the camera module may be disposed at an appropriate position on the pair of housings310to be close to the user's gaze. The sensor module may detect information on the surrounding environment required to configure an optimal usage environment while monitoring the usage environment of the AR glasses301. For example, the processor may analyze an image of an external landscape obtained through the camera module and identify a relative position of the electronic device101with respect to the AR glasses301based at least on the analysis result. According to an embodiment, one or more batteries370may be disposed to supply power to the circuit board360, the display module, or the like, and may be disposed in at least one of the pair of wearing members320or may be disposed in each wearing member320. According to an embodiment, a plurality of light output devices380and a plurality of light refraction modules390may be disposed, and may be disposed in at least one of the pair of wearing members320, or may be disposed in each wearing member320. The light emitted from the light output device380may reach the display device350passing through the light refraction module390. The AR glasses301using the light output device380may be a wave guide type or a reflective mirror type. For example, in the wave guide type, light emitted from a side light output device such as a projector is reflected onto a grating area formed in the display device using a wave guide such as a prism and is then transmitted to the user's eyes. As another example, in the reflective mirror type, light emitted from a light output device may be directly reflected onto a display device in front of the user's eyes to provide visual information to the user's eyes. According to an embodiment, the circuit boards360disposed on the respective housings310may be connected to each other through circuit wires (not shown). The circuit wires may provide a transmission/reception path for various control signals and data between circuit boards. The circuit wire may be configured using a coaxial cable, and may have various other types of transmission line structures such as a flexible printed circuit board (FPCB) and the like. According to an embodiment, the AR glasses301may include an input device including physical keys or a touch pad. For example, an input module such as a power key or a touch pad requires direct contact with a user and thus may be exposed to the outside of the AR glasses301. FIG.4is a diagram illustrating a method of scheduling a service period for a wireless channel of an electronic device (e.g., the electronic device101inFIG.1) and an external electronic device (e.g., the external electronic device205inFIG.2) according to various embodiments. InFIG.4, the service periods (SPs)401a,401b, and401care shown. The electronic device101and the external electronic device205may operate in a wake-up state during the service periods401a,401b, and401cand enter a sleep state within the remaining periods. Data for generating and/or reproducing one or more augmented reality image frames may be transmitted and/or received during each of the service periods401a,401b, and401c. 
Although not shown, the service period may be repeated at a predetermined interval. According to various embodiments, the duration of each of the service periods401a,401b, and401cmay be determined by the amount of data to be transmitted from the electronic device101to the external electronic device205(hereinafter referred to as a “first data amount”), the amount of data to be transmitted from the external electronic device205to the electronic device101(hereinafter referred to as a “second data amount”), and/or a network bandwidth. For example, information on the first data amount and/or the second data amount may be identified from an augmented reality-related application executed in the electronic device101and the external electronic device205. For example, information on the network bandwidth may be identified on the basis of information on the communication scheme for a connection between the electronic device101and the external electronic device205and/or the quality of a signal (e.g., the intensity of a received signal). For example, if the electronic device101and the external electronic device205are connected in a WiFi communication scheme or a WiFi direct communication scheme (hereinafter referred to as a “WiFi communication scheme”), a transmittable data rate set may be determined according to WLAN standards. For example, if the electronic device101and the external electronic device205use a bandwidth of 160 MHz based on the IEEE 802.11ax standard and a multi-input multi-output (MIMO) scheme of two spatial streams, the peak data rate may be 2.4 Gbps, and any one data rate among a set of supported data rates defined in the IEEE 802.11ax standard may be selected on the basis of the intensity of a received signal. For example, if the intensity of a received signal is sufficiently high, a data rate of 2.4 Gbps may be selected from among the supported data rate set defined in the IEEE 802.11ax standard for communication, and the network bandwidth may be determined to be 1.8 Gbps, which corresponds to 75% of the selected data rate of 2.4 Gbps, in consideration of the overhead included in the packet to be transmitted. For example, the duration of each of the service periods401a,401b, and401cmay be determined to be greater than or equal to a value obtained by dividing a sum of the data amounts transmitted and/or received per unit time between the electronic device101and the external electronic device205by the network bandwidth. For example, if the sum of the first data amount and the second data amount transmitted and/or received per unit time is 1.8 Mbits, and if the network bandwidth is 1.8 Gbps, the duration of each of the service periods401a,401b, and401cmay be determined to be 1 ms or more, which is obtained by dividing the sum of data amounts (i.e., 1.8 Mbits) by the network bandwidth (i.e., 1.8 Gbps). As another example, the duration of each of the service periods401a,401b, and401cmay be determined to be 2 ms, which is double the value determined in the previous example, in order to guarantee sufficient retransmission time in consideration of variables such as a network overhead, interference, and/or possibility of retransmission. However, the duration of each service period according to various embodiments of the disclosure is not limited thereto. According to various embodiments, the interval of the service periods401a,401b, and401cmay be determined on the basis of a refresh rate of the external electronic device205.
For example, if the external electronic device205reproduces an augmented reality image at a refresh rate of 60 fps and outputs the same through a display device (e.g., the display device350inFIG.3), the interval of the service periods401a,401b, and401cmay be determined to be about 16.6 ms or less, which is the reciprocal of the refresh rate. In the disclosure, the service period scheduled to enable the electronic device101and/or the external electronic device205to transmit and/or receive data will be referred to as a “target-wake-time (TWT) service period (SP)”, the duration of the period in which the electronic device101and/or the external electronic device205operate in a wake-up state in order to transmit and/or receive data during the scheduled service period will be referred to as a “TWT wake duration”, and the interval of the scheduled service period (e.g., the duration of the time between the starting time of one TWT service period and the starting time of the next TWT service period) will be referred to as a “TWT wake interval”. FIG.5is a diagram illustrating a method of transmitting and/or receiving data between an electronic device101and an external electronic device205according to various embodiments. Referring toFIG.5, in the case of TWT setup between the electronic device101and the external electronic device205, the external electronic device205may operate as a device (e.g., a TWT requesting STA) that requests the TWT setup, and the electronic device101, which is a computing host, may operate as a device (e.g., a TWT responding STA) that responds to the TWT setup. Unlike the illustrated example, the external electronic device205may operate as a device (e.g., a TWT responding STA) that responds to the TWT setup, and the electronic device101may operate as a device (e.g., a TWT requesting STA) that requests the TWT setup. According to various embodiments, the external electronic device205may transmit a message requesting TWT setup (e.g., a TWT request frame501) to the electronic device101. According to various embodiments, if the message requesting TWT setup (e.g., the TWT request frame501) is received, the electronic device101may transmit a response message (e.g., a TWT response frame503) including information on the parameters for the TWT service period. For example, the parameters for the TWT service period may include at least one of a target wake time505, a TWT wake duration507, and/or a TWT wake interval509. For example, the target wake time505may be a parameter indicating the time at which the TWT service period starts. For example, the TWT wake duration507may be a parameter indicating the duration of the TWT service period. For example, the TWT wake interval509may be a parameter indicating the interval at which the TWT service period repeatedly starts. According to various embodiments, the external electronic device205that requested TWT setup may receive the response message and identify a set TWT service period on the basis of the parameters included in the received response message. According to various embodiments, when the TWT service period starts, the electronic device101may transmit a trigger frame511to the external electronic device205. For example, the trigger frame511may be a control frame that requests (e.g., triggers) the uplink (UL) operation (e.g., transmission of uplink traffic) of the external electronic device205.
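As a non-limiting illustration of the arithmetic above, the following sketch computes a TWT wake duration and a TWT wake interval from the worked numbers (1.8 Mbits of traffic per unit time, a 1.8 Gbps usable bandwidth, a 60 fps refresh rate); the function names, the factor-of-two retransmission margin, and the even split of the traffic between the first and second data amounts are assumptions made only for this example.

```python
# Illustrative sketch only; names and the 2x margin are assumptions.

def twt_wake_duration_s(first_data_bits: float,
                        second_data_bits: float,
                        network_bandwidth_bps: float,
                        retransmission_margin: float = 2.0) -> float:
    """Service-period duration: total traffic per unit time divided by the
    usable network bandwidth, widened for overhead and retransmissions."""
    return retransmission_margin * (first_data_bits + second_data_bits) / network_bandwidth_bps

def twt_wake_interval_s(refresh_rate_fps: float) -> float:
    """Service-period interval: at most the reciprocal of the refresh rate,
    so that each augmented reality frame can be served in time."""
    return 1.0 / refresh_rate_fps

print(twt_wake_duration_s(0.9e6, 0.9e6, 1.8e9))  # 0.002 s, i.e., 2 ms
print(twt_wake_interval_s(60.0))                 # ~0.0166 s, i.e., ~16.6 ms
```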
According to various embodiments, when the TWT service period starts, the external electronic device205may transmit a power saving (PS)-poll frame513to the electronic device101in order to notify the electronic device101that the external electronic device205is in a wake-up state. According to an embodiment, the PS-poll frame513transmitted to the electronic device101may be replaced by a quality-of-service (QoS) null frame. For example, the PS-poll frame513may be a control frame that requests the electronic device101to transmit buffered data frames after the external electronic device205switches from a doze mode to a wake-up mode in order to receive the data frames buffered in the electronic device101. According to various embodiments, if the PS-poll frame513is received, the electronic device101may transmit an ACK message515indicating reception of the PS-poll frame513and transmit downlink (DL) data517to the external electronic device205. According to various embodiments, if the downlink data517is received, the external electronic device205may transmit an ACK message519indicating reception of the downlink data517to the electronic device101. For example, the ACK message519may include information indicating at least one data frame received from the electronic device101through the downlink. According to various embodiments, if the ACK message519is received, the electronic device101may identify that at least one data frame among one or more data frames transmitted to the external electronic device205was received by the external electronic device205. According to various embodiments, the electronic device101may transmit a trigger frame521to the external electronic device205after the ACK message519is received. For example, the trigger frame521may be a control frame that requests (e.g., triggers) the uplink operation of the external electronic device205. According to various embodiments, if the trigger frame521is received, the external electronic device205may transmit uplink data523to the electronic device101. According to various embodiments, if the uplink data523is received, the electronic device101may transmit an ACK message525indicating reception of the uplink data523to the external electronic device205. For example, the ACK message525may include information indicating at least one data frame received from the external electronic device205through the uplink. According to various embodiments, the external electronic device205may switch to the doze state if the set TWT service period elapses. Thereafter, the external electronic device205may switch to the wake-up mode according to the determined TWT wake interval and perform the above-described transmission and/or reception of messages and/or data with the electronic device101. According to various embodiments, the electronic device101may not transmit the trigger frame511if there is no more downlink data to be transmitted. According to various embodiments, the external electronic device205may not transmit the PS-poll frame513if there is no more uplink data to be transmitted. Unlike the above description, when the first TWT service period starts after the TWT setup, the trigger frame511and/or the PS-poll frame513may not be transmitted. For example, in the case of TWT setup, only one of the trigger frame511and the PS-poll frame513may be transmitted, or neither of them may be transmitted, according to the value of a sub-field “Trigger” and the value of a sub-field “Flow Type” exchanged between the electronic device101and the external electronic device205.
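For readability, the frame exchange of one TWT service period described above may be traced as follows; this merely enumerates the sequence of frames with their reference numerals and is not an implementation of the IEEE 802.11 protocol.

```python
# Plain trace of the in-service-period exchange; not a protocol implementation.
EXCHANGE_SEQUENCE = (
    ("electronic device 101 -> external device 205", "trigger frame 511"),
    ("external device 205 -> electronic device 101", "PS-poll frame 513 (or QoS null frame)"),
    ("electronic device 101 -> external device 205", "ACK message 515"),
    ("electronic device 101 -> external device 205", "downlink (DL) data 517"),
    ("external device 205 -> electronic device 101", "ACK message 519 (lists received DL frames)"),
    ("electronic device 101 -> external device 205", "trigger frame 521"),
    ("external device 205 -> electronic device 101", "uplink (UL) data 523"),
    ("electronic device 101 -> external device 205", "ACK message 525 (lists received UL frames)"),
)
for direction, frame in EXCHANGE_SEQUENCE:
    print(f"{direction}: {frame}")
```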
For example, if the value of the sub-field “Trigger” is configured as 0, the trigger frame511may not be transmitted when the first TWT service period starts after the TWT setup. In this case, the external electronic device205may perform the uplink operation even if the trigger frame511is not received. Alternatively, if the value of the sub-field “Trigger” is configured as 1, the trigger frame511may be transmitted when the first TWT service period starts after the TWT setup. For example, if the value of the sub-field “Flow Type” is configured as 0, the PS-poll frame513may be transmitted when the TWT service period starts after the TWT setup. Alternatively, if the value of the sub-field “Flow Type” is configured as 1, the PS-poll frame513may not be transmitted when the TWT service period starts after the TWT setup. Unlike the illustrated example, configuration may be made such that the external electronic device205transmits the uplink data523and then the electronic device101transmits the downlink data517during each TWT service period, and the sequence of transmission of the downlink data517and the uplink data523may be configured for each TWT service period, respectively. FIG.6is a diagram illustrating a method in which an electronic device (e.g., the electronic device101inFIG.1) and/or an external electronic device (e.g., the external electronic device205inFIG.2) determine the time at which a TWT service period starts according to various embodiments. According to various embodiments, the electronic device101and/or the external electronic device205may monitor the wireless channel for a specified time. For example, the specified time may be double the determined TWT wake interval (e.g., the determined interval) or more. According to various embodiments, the electronic device101and/or the external electronic device205may detect a packet transmitted through the wireless channel for transmitting and/or receiving data and periodically identify whether or not there is an occupiable channel section (e.g., a section that is not wirelessly occupied by other external electronic devices). For example, referring toFIG.6, the electronic device101and/or the external electronic device205may identify one or more sections601a,601b,601c,601d,601e, and601fthat are wirelessly occupied by one or more other external electronic devices as a result of monitoring the wireless channel. The electronic device101and/or the external electronic device205may identify sections that are not wirelessly occupied by other external electronic devices (e.g., clear channels) during the specified time during which the monitoring is performed, on the basis of the one or more identified sections601a,601b,601c,601d,601e, and601fthat are wirelessly occupied by one or more other external electronic devices. For example, the electronic device101and/or the external electronic device205may identify that sections having a length greater than or equal to the determined TWT wake duration (e.g., the determined duration), among the sections (e.g., clear channels) that are not wirelessly occupied by other external electronic devices, are periodically repeated at the determined TWT wake interval (e.g., the determined interval). The electronic device101and/or the external electronic device205may stop monitoring the wireless channel and identify an occupiable channel section603on the basis of the identified result.
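A minimal sketch of the sub-field logic above, assuming only the two values described in the text; the function name and the returned strings are illustrative.

```python
# Illustrative sketch of the "Trigger" and "Flow Type" sub-field logic.

def frames_at_sp_start(trigger: int, flow_type: int) -> list:
    """Which control frames open a TWT service period after setup."""
    frames = []
    if trigger == 1:
        frames.append("trigger frame 511")   # uplink operation is solicited
    if flow_type == 0:
        frames.append("PS-poll frame 513")   # wake-up is announced
    return frames                            # may be empty

print(frames_at_sp_start(1, 0))  # ['trigger frame 511', 'PS-poll frame 513']
print(frames_at_sp_start(0, 1))  # []: unannounced, untriggered service period
```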
According to various embodiments, the electronic device101and/or the external electronic device205may determine the time at which the TWT service period starts on the basis of the identified occupiable channel section603. For example, the electronic device101and/or the external electronic device205may determine the time t1, which is one determined TWT wake interval (e.g., the determined interval) after the starting time t0 of the last section603ain the identified occupiable channel section603, as the time at which the TWT service period starts. According to various embodiments, the external electronic device205may transmit, to the electronic device101, a request message (e.g., the TWT request frame501inFIG.5) including a parameter (e.g., the target wake time505inFIG.5) indicating the determined time at which the TWT service period starts. According to various embodiments, the electronic device101may transmit, to the external electronic device205, a response message (e.g., the TWT response frame503inFIG.5) including the parameter (e.g., the target wake time505inFIG.5) indicating the determined time at which the TWT service period starts. FIG.7is a diagram illustrating an example in which latency increases due to retransmission of a data frame according to various embodiments. Hereinafter, the case where a data frame transmitted by an electronic device (e.g., the electronic device101inFIG.1) is retransmitted will be described. According to various embodiments, the electronic device101may transmit a data frame to an external electronic device (e.g., the external electronic device205inFIG.2) according to a set TWT service period. Referring toFIG.7, a data frame (e.g., F1) may be transmitted within a TWT service period (e.g., a 1stservice period). If the entire data frame (e.g., F1) is transmitted within the TWT service period (e.g., the 1stservice period), and if the entire transmitted data frame (e.g., F1) is received by the external electronic device205, the electronic device101may perform an operation of transmitting the next data frame (e.g., F2) within the next service period (e.g., a 2ndservice period). If at least a portion of the data frame (e.g., F2) is not transmitted within the service period (e.g., the 2ndservice period), or if at least a portion of the transmitted data frame (e.g., F2) is not received by the external electronic device205due to issues such as wireless channel interference, congestion, and/or low signal quality (e.g., failure of transmission of the data frame (e.g., F2)), the electronic device101may perform an operation of retransmitting the data frame (e.g., F2) that could not be transmitted normally to the external electronic device205during the next service period (e.g., a 3rdservice period). Accordingly, latency of the TWT wake interval may occur. If failure of transmission of the data frame (e.g., F2) also occurs during the service period (e.g., the 3rdservice period), the electronic device101may again perform the operation of retransmitting the data frame (e.g., F2) during the next service period (e.g., a 4thservice period). Even if the entire data frame (e.g., F2) is transmitted within the service period (e.g., the 4thservice period) and is thus received by the external electronic device205, total latency may be double the TWT wake interval. Accordingly, there may be a latency problem in transmission of the data frames that are to be transmitted subsequent to the data frame (e.g., F2) for which the above-described failure of transmission occurs.
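The monitoring step of FIG.6 may be sketched as follows under simplifying assumptions: occupied sections are given as (start, end) pairs in seconds, the scan granularity is an arbitrary choice, and the helper returns t1, one TWT wake interval after the starting time t0 of the last idle occurrence; none of these names reflect a real API.

```python
# Illustrative sketch; `busy` lists sections occupied by other devices.

def find_target_wake_time(busy, duration, interval, n_periods=2):
    """Return t1 (a candidate target wake time) or None if no occupiable
    channel section is found in the monitored span."""
    def idle(a, b):
        return all(b <= s or a >= e for s, e in busy)

    step = duration / 10.0                      # scan granularity (arbitrary)
    t = 0.0
    while t + duration <= interval:
        starts = [t + k * interval for k in range(n_periods)]
        if all(idle(s, s + duration) for s in starts):
            t0 = starts[-1]                     # start of the last idle section
            return t0 + interval                # t1, one wake interval later
        t += step
    return None                                 # keep monitoring

# A periodic 2 ms hole repeating every 16.6 ms within the busy sections:
busy = [(0.0, 0.0055), (0.0095, 0.0223), (0.0261, 0.0332)]
print(find_target_wake_time(busy, duration=2e-3, interval=16.6e-3))  # ~0.039
```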
FIG.8is a diagram illustrating a method in which an electronic device101and/or an external electronic device205control latency due to failure of transmission according to various embodiments. A description will be made below with reference toFIG.5as well. According to various embodiments, transmission and/or reception of data between the electronic device101and the external electronic device205may be performed within a TWT service period. Referring toFIG.8, the electronic device101may transmit downlink data517to the external electronic device205within a TWT service period (e.g., the 1stservice period). According to various embodiments, if the downlink data517is received, the external electronic device205may transmit, to the electronic device101, an ACK message519indicating reception of the downlink data517. According to various embodiments, if the ACK message519is received, the electronic device101may identify at least one data frame received by the external electronic device205, among one or more data frames transmitted to the external electronic device205, on the basis of information included in the received ACK message519. According to various embodiments, if all of the one or more transmitted data frames are identified to have been received by the external electronic device205, the electronic device101may transmit a trigger frame (e.g., the trigger frame521inFIG.5) to the external electronic device205, thereby controlling the external electronic device205to perform an uplink operation. According to various embodiments, if the trigger frame521is received, the external electronic device205may transmit uplink data523to the electronic device101. According to various embodiments, if the uplink data523is received, the electronic device101may transmit, to the external electronic device205, an ACK message525indicating reception of the uplink data523. According to various embodiments, if the ACK message525is received, the external electronic device205may identify whether or not there are one or more missing frames at the time T1 at which the ACK message525is received. For example, if the ACK message525is received, the external electronic device205may identify at least one data frame received by the electronic device101, among one or more data frames transmitted to the electronic device101, on the basis of information included in the received ACK message525. For example, frame numbers may be configured for the respective data frames transmitted from the external electronic device205to the electronic device101, and then a plurality of data frames having the configured frame numbers may be transmitted to the electronic device101. According to an embodiment, the electronic device101may include, in the ACK message525, information on the number of at least one data frame transmitted from the external electronic device205and received by the electronic device101. According to an embodiment, the electronic device101may transmit, to the external electronic device205, the ACK message525including information on the number of at least one data frame received by the electronic device101. According to an embodiment, the external electronic device205may parse the ACK message525transmitted from the electronic device101to identify the frame number included in the ACK message525, and compare the same with the frame numbers of the one or more data frames transmitted to the electronic device101, thereby identifying at least one data frame received by the electronic device101.
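The frame-number bookkeeping above amounts to a set difference, sketched below; the ACK representation (a list of received frame numbers, or None when no ACK arrives) is an assumption made for illustration.

```python
# Illustrative sketch of missing-frame identification from an ACK.

def identify_missing_frames(sent_numbers, acked_numbers):
    """Frames that were sent but are absent from the ACK are missing;
    if no ACK was received at all, every sent frame is missing."""
    if acked_numbers is None:                 # ACK message not received
        return set(sent_numbers)
    return set(sent_numbers) - set(acked_numbers)

print(identify_missing_frames([1, 2, 3, 4], [1, 2, 4]))  # {3}
print(identify_missing_frames([1, 2, 3, 4], None))       # {1, 2, 3, 4}
```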
According to various embodiments, if it is identified that at least some of the one or more transmitted data frames have not been received by the electronic device101, the external electronic device205may determine at least some data frames, which have not been received by the electronic device101, to be missing frames. For example, if the ACK message525is not received from the electronic device101, the external electronic device205may identify that none of the one or more data frames transmitted to the electronic device101have been received by the electronic device101, and determine one or more transmitted data frames to be missing frames. According to various embodiments, if it is identified that there are one or more missing frames, the external electronic device205may retransmit one or more missing frames801in the uplink data523to the electronic device101within the TWT service period (e.g., the 1stservice period). According to various embodiments, the external electronic device205may identify whether or not one or more missing frames801are able to be transmitted within the remaining period (e.g., before the expiration of the TWT service period (e.g., the 1stservice period) after the time T1) of the TWT service period (e.g., the 1stservice period), and then retransmit one or more missing frames801. For example, the external electronic device205may identify the time required to transmit all of the one or more missing frames801on the basis of a network bandwidth or a bit rate, and, if the time required to transmit all of the one or more missing frames801is less than or equal to the remaining period of the TWT service period (e.g., the 1stservice period), determine that the one or more missing frames801are able to be transmitted, and transmit the one or more missing frames801within the remaining period of the corresponding TWT service period (e.g., the 1stservice period). For example, in the external electronic device205, if the size of the one or more missing frames801is 0.09 Mbits, and if the network bandwidth or the bit rate is 1.8 Gbps, the time required to transmit all of the one or more missing frames801may be 0.05 ms. In this case, if the remaining period of the TWT service period (e.g., the 1stservice period) is 0.05 ms or more, it may be determined that the one or more missing frames801are able to be transmitted. If it is identified that the time required to transmit all of the one or more missing frames801exceeds the remaining period of the TWT service period (e.g., the 1stservice period) (e.g., if the remaining period of the TWT service period (e.g., the 1stservice period) is less than 0.05 ms), the external electronic device205may determine that the one or more missing frames801are unable to be transmitted. If it is determined that one or more missing frames801are unable to be transmitted within the remaining period of the TWT service period (e.g., the 1stservice period), the external electronic device205may retransmit the one or more missing frames801in the next TWT service period, or may adjust the target wake time of the TWT service period and then transmit the one or more missing frames801during a new TWT service period, which will be described later in more detail with reference to the drawings. According to various embodiments, if at least one missing frame801is received, the electronic device101may transmit, to the external electronic device205, an ACK message803indicating reception of at least some of one or more missing frames801.
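The feasibility test above reduces to one division and one comparison, sketched here with the numbers from the example (0.09 Mbits of missing frames over a 1.8 Gbps bandwidth); the names are illustrative.

```python
# Illustrative sketch of the in-period retransmission feasibility check.

def can_retransmit_within_sp(missing_bits, bandwidth_bps, remaining_sp_s):
    """True if the missing frames fit into the remaining part of the
    current TWT service period at the given bandwidth or bit rate."""
    return missing_bits / bandwidth_bps <= remaining_sp_s

print(can_retransmit_within_sp(0.09e6, 1.8e9, 0.05e-3))  # True (exactly 0.05 ms)
print(can_retransmit_within_sp(0.09e6, 1.8e9, 0.04e-3))  # False
```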
According to various embodiments, if the ACK message803is received, the external electronic device205may identify whether or not one or more missing frames801have been received by the electronic device101at a time T2 at which the ACK message803is received. For example, the ACK message803may include information indicating at least some missing frames received by the electronic device101among the one or more missing frames transmitted to the electronic device101. According to various embodiments, the external electronic device205may identify whether or not all of the one or more missing frames have been received by the electronic device101on the basis of information included in the received ACK message803. According to various embodiments, if it is identified that all of the one or more missing frames have been received by the electronic device101, the external electronic device205may switch to a doze state and then perform an operation corresponding to the next TWT service period. According to various embodiments, if it is identified that at least one of the one or more missing frames has not been received by the electronic device101, the external electronic device205may identify whether or not at least one missing frame, which has not been received by the electronic device101, is able to be transmitted within the remaining period (e.g., before the expiration of the TWT service period (e.g., the 1stservice period) after the time T2) of the TWT service period (e.g., the 1st service period), and retransmit at least one missing frame. If it is identified that at least one missing frame is unable to be transmitted within the remaining period of the TWT service period (e.g., the 1stservice period), the external electronic device205may retransmit the at least one missing frame in the next TWT service period (including both the TWT service period closest to the current TWT service period and a TWT service period subsequent thereto in time), or may adjust the target wake time of the TWT service period and then transmit the at least one missing frame during a new TWT service period. According to various embodiments, in the case of retransmitting at least one missing frame in the next TWT service period (e.g., the TWT service period closest to the current TWT service period in time), when the next TWT service period starts, the external electronic device205may further perform the operation of identifying whether or not the missing frames are able to be transmitted during the next TWT service period. Although it has been described inFIG.8that the external electronic device205receives the entire downlink data517of the electronic device101, there may be one or more missing frames with respect to the downlink data517, and in this case, the electronic device101may perform the operations of the external electronic device205in the same manner. For example, according to an embodiment, the electronic device101may transmit the downlink data517to the external electronic device205. According to an embodiment, the electronic device101may receive, from the external electronic device205, information (e.g., the ACK message) on at least one data frame received by the external electronic device205. According to an embodiment, the electronic device101may identify a missing data frame of the downlink data517on the basis of the information (e.g., the ACK message) on at least one data frame received by the external electronic device205. 
According to an embodiment, the electronic device101may determine whether or not the missing data frame is able to be transmitted to the external electronic device205within the TWT service period. If it is determined that the missing data frame is able to be transmitted to the external electronic device205within the TWT service period, according to an embodiment, the electronic device101may transmit the missing data frame to the external electronic device205during the TWT service period. If there are one or more missing frames for the downlink data517, the electronic device101may delay transmission of a trigger frame (e.g., the trigger frame521inFIG.5) until retransmission of one or more missing frames is completed. Although it has been described inFIG.8that the external electronic device205transmits the uplink data523after the electronic device101transmits the downlink data517, the electronic device101may transmit the downlink data517after the external electronic device205transmits the uplink data523. FIG.9Ais a diagram illustrating a method in which an electronic device101and/or an external electronic device205control latency due to failure of transmission according to various embodiments.FIG.9Bis a diagram illustrating a method in which an electronic device101and/or an external electronic device205control latency due to failure of transmission according to various embodiments. Hereinafter, a description will be made with reference toFIG.5as well. Descriptions that duplicate those already made with reference toFIG.8are omitted below. According to various embodiments, transmission and/or reception of data between the electronic device101and the external electronic device205may be performed during a TWT service period. Referring toFIG.9A, the electronic device101may transmit downlink data517to the external electronic device205within the TWT service period (e.g., the 2nd service period). According to various embodiments, if the downlink data517is received, the external electronic device205may transmit, to the electronic device101, an ACK message519indicating reception of the downlink data517. According to various embodiments, if the ACK message519is received, the electronic device101may identify whether or not there are one or more missing frames at a time T3 at which the ACK message519is received. For example, if the ACK message519is received, the electronic device101may identify at least one data frame received by the external electronic device205, among one or more data frames transmitted to the external electronic device205, on the basis of information included in the received ACK message519(e.g., information on the number of the received data frame). If it is identified that at least some of the one or more transmitted data frames have not been received by the external electronic device205, the electronic device101may determine at least some of the data frames, which have not been received by the external electronic device205, as missing frames. For example, if the ACK message519is not received from the external electronic device205, the electronic device101may identify that none of the one or more data frames transmitted to the external electronic device205have been received by the external electronic device205, and determine one or more transmitted data frames as missing frames.
According to various embodiments, if it is identified that there are one or more missing frames, the electronic device101may retransmit one or more missing frames901in the downlink data517to the external electronic device205within the TWT service period (e.g., the 2ndservice period). According to various embodiments, the electronic device101may identify whether or not one or more missing frames901are able to be transmitted within the remaining period (e.g., before the expiration of the TWT service period (e.g., the 2ndservice period) after the time T3) of the TWT service period (e.g., the 2ndservice period), and then retransmit one or more missing frames901. If it is identified that one or more missing frames901are unable to be transmitted within the remaining period of the TWT service period (e.g., the 2ndservice period), the electronic device101may retransmit the one or more missing frames901in the next scheduled TWT service period, or may adjust the target wake time of the TWT service period and then transmit the one or more missing frames901during a new TWT service period. According to various embodiments, if at least one missing frame901is received, the external electronic device205may transmit, to the electronic device101, an ACK message903indicating reception of at least some of the one or more missing frames901. According to various embodiments, if the ACK message903is received, the electronic device101may identify whether or not one or more missing frames901have been received by the external electronic device205. For example, the ACK message903may include information indicating at least some missing frames received by the external electronic device205, among one or more missing frames transmitted to the external electronic device205. According to various embodiments, the electronic device101may identify whether or not all of one or more missing frames901have been received by the external electronic device205on the basis of information included in the received ACK message903. According to various embodiments, if it is identified that all of the one or more missing frames901have been received by the external electronic device205, the electronic device101may transmit a trigger frame (e.g., the trigger frame521inFIG.5) to the external electronic device205, thereby controlling the external electronic device205to perform an uplink operation. According to various embodiments, if the trigger frame521is received, the external electronic device205may identify whether or not uplink data (e.g., the uplink data523inFIG.5) is able to be transmitted within the remaining period (e.g., before the expiration of the TWT service period (e.g., the 2ndservice period) after a time T4) of the TWT service period (e.g., the 2nd service period) at the time T4 at which the trigger frame521is received. The external electronic device205may identify the time required to transmit uplink data (e.g., the uplink data523inFIG.5) on the basis of a network bandwidth or a bit rate, and, if it is identified that the time required to transmit uplink data (e.g., the uplink data523inFIG.5) exceeds the remaining period of the TWT service period (e.g., the 2nd service period), may determine that the uplink data (e.g., the uplink data523inFIG.5) is unable to be transmitted. According to various embodiments, the external electronic device205may determine the uplink data (e.g., the uplink data523inFIG.5) that failed to be transmitted to be a missing frame.
If it is identified that the uplink data (e.g., the uplink data523inFIG.5) is unable to be transmitted, in order to adjust the target wake time of the next TWT service period, the external electronic device205may transmit, to the electronic device101, a message (e.g., a TWT information frame905) including information indicating the starting time of the next TWT service period. For example, the information indicating the starting time of the next TWT service period may include information (e.g., next TWT information) indicating the time at which a new TWT service period (e.g., an added service period) starts from the end time of the TWT service period (e.g., the 2ndservice period) during which the message (e.g., the TWT information frame905) was transmitted (hereinafter, a starting time value of the next TWT service period) (e.g., the next TWT909) (e.g., 2 ms from the end time of the TWT service period (e.g., the 2ndservice period)). For example, the starting time value of the next TWT service period may be determined within the time range obtained by excluding the TWT wake duration of the current TWT service period and the TWT wake duration of the newly added TWT service period (e.g., the added service period) from the TWT wake interval. As another example, the starting time value of the next TWT service period may be configured to be equal to the duration of the TWT service period, may be pre-configured, or may be configured to correspond to an integer multiple of the duration of the TWT service period. As another example, the starting time value of the next TWT service period may be determined within a range such that the new added TWT service period (e.g., the added service period) does not overlap the TWT service period of another external electronic device. For example, if the electronic device101is communicating or is to communicate with another external electronic device during the determined time interval (e.g., the interval between the end time of the TWT service period (e.g., the 2ndservice period) and the next TWT909), the external electronic device205may wait until the communication with another external electronic device is terminated and then initiate the new TWT service period (e.g., the added service period). According to various embodiments, the external electronic device205may identify that the electronic device101is communicating or is to communicate with another external electronic device before the starting time of the next determined TWT service period by receiving, from the electronic device101, information (e.g., a trigger frame, RTS (request-to-send), and/or CTS (clear-to-send)) indicating that the electronic device101is communicating or is to communicate with another external electronic device. If the message (e.g., the TWT information frame905) is received, the electronic device101may transmit, to the external electronic device205, an ACK message907indicating reception of the message (e.g., the TWT information frame905). According to various embodiments, if the time required to transmit uplink data (e.g., the uplink data523inFIG.5) is identified to be less than or equal to the remaining period of the TWT service period (e.g., the 2ndservice period) on the basis of the network bandwidth, the external electronic device205may determine that the uplink data (e.g., the uplink data523inFIG.5) is able to be transmitted and transmit the uplink data (e.g., the uplink data523inFIG.5) within the remaining period of the TWT service period (e.g., the 2ndservice period).
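Under the range constraint described above, a candidate next-TWT value (the next TWT909) can be validated as sketched below; the parameter names are illustrative assumptions.

```python
# Illustrative sketch of the range check for the "next TWT" value 909.

def next_twt_is_valid(next_twt_s, wake_interval_s,
                      current_sp_duration_s, added_sp_duration_s):
    """The added service period, starting next_twt_s after the current
    service period ends, must end before the next scheduled period starts."""
    budget_s = wake_interval_s - current_sp_duration_s - added_sp_duration_s
    return 0.0 <= next_twt_s <= budget_s

# 2 ms after the end of the 2nd service period, with 2 ms service periods
# and a 16.6 ms wake interval, leaves 12.6 ms of slack:
print(next_twt_is_valid(2e-3, 16.6e-3, 2e-3, 2e-3))  # True
```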
According to various embodiments, the electronic device101may identify a new TWT service period (e.g., the added service period) on the basis of the message (e.g., the TWT information frame905) including information indicating the starting time of the next TWT service period. According to various embodiments, the new TWT service period (e.g., the added service period) may have the TWT wake interval and/or the TWT wake duration corresponding to the TWT service period initially configured by the electronic device101and/or the external electronic device205. According to an embodiment, if the electronic device101transmits the TWT response frame503inFIG.5in order to set up a new TWT service period (e.g., the added service period), at least one of the TWT wake interval or the TWT wake duration of the new TWT service period (e.g., the added service period) may be different from the TWT wake interval and/or the TWT wake duration corresponding to the initially set TWT service period. According to various embodiments, the external electronic device205may receive, from the electronic device101, a trigger frame (not shown) that requests (e.g., triggers) an uplink operation during the new TWT service period (e.g., the added service period). According to various embodiments, if the trigger frame (not shown) is received, the external electronic device205may transmit the uplink data523, which failed to be transmitted during the prior TWT service period (e.g., the 2ndservice period), to the electronic device101within the new TWT service period (e.g., the added service period). According to various embodiments, the external electronic device205may receive the ACK message525indicating reception of the uplink data523from the electronic device101. According to various embodiments, the external electronic device205may identify whether or not there is a missing frame at a time T5 at which the ACK message525is received, and, if it is identified that there are one or more missing frames, may retransmit the one or more missing frames911to the electronic device101. According to various embodiments, the external electronic device205may receive an ACK message913indicating reception of the one or more missing frames911from the electronic device101. According to various embodiments, the external electronic device205may identify whether or not all of the one or more missing frames911have been received by the electronic device101at a time T6 at which the ACK message913is received. According to various embodiments, if it is identified that all of the one or more missing frames911have been received by the electronic device101, in order to re-adjust the target wake time of the next TWT service period, the external electronic device205may transmit, to the electronic device101, a message (e.g., a TWT information frame915) including information indicating the starting time of the next TWT service period. For example, information indicating the starting time of the next TWT service period may include information (e.g., next TWT information) indicating the time at which the initially scheduled TWT service period (e.g., the 3rdservice period) starts from the end time of the TWT service period (e.g., the added service period) at which the message (e.g., the TWT information frame915) is transmitted (hereinafter, a starting time value of the next TWT service period) (e.g., the next TWT917). 
For example, the starting time value of the next TWT service period (e.g., the next TWT917) may be the time obtained by excluding the TWT wake duration and the time used for retransmission of the uplink data523(e.g., the sum of the time of the TWT wake duration of the new TWT service period (e.g., the added service period) and the starting time value of the next TWT service period (e.g., the next TWT909)) from the TWT wake interval. According to various embodiments, if an ACK message919indicating reception of the message (e.g., the TWT information frame915) is received after transmitting the message (e.g., TWT information frame915) to the electronic device101, the external electronic device205may switch to a doze state until the initially scheduled TWT service period (e.g., the 3rdservice period) starts. According to various embodiments, the electronic device101and the external electronic device205may perform transmission and/or reception of data during the previously scheduled TWT service period (e.g., the 3rdservice period) determined on the basis of the starting time value of the next TWT service period (e.g., the next TWT917). According to the method described above, although latency occurs corresponding to the starting time value (e.g., the next TWT909) of the next TWT service period in transmission of the uplink data523, subsequent data is able to be transmitted and/or received during the previously scheduled TWT service period (e.g., the 3rdservice period), so no latency occurs in transmission and/or reception of the subsequent data. FIG.9Bis a diagram obtained by simplifyingFIG.9Adescribed above. According to various embodiments, the electronic device101or the external electronic device205may transmit a data frame (e.g., F1921) during a TWT service period (e.g., a 1stservice period). According to various embodiments, if it is identified that the entire data frame (e.g., F1921) has been received by a counterpart device during the TWT service period (e.g., the 1stservice period), the electronic device101or the external electronic device205may perform an operation of transmitting the next data frame (e.g., F2923) during the next service period (e.g., a 2ndservice period). If at least a portion of the data frame (e.g., F2923) fails to be transmitted within the service period (e.g., the 2ndservice period) or if at least a portion of the transmitted data frame (e.g., F2923) is not received by the counterpart device due to issues such as wireless channel interference, congestion, and/or low signal quality (e.g., failure of transmission of all/some of the data frame (e.g., F2923)), the electronic device101or the external electronic device205may transmit, to the counterpart device, a message (e.g., the TWT information frame905) including information indicating the next TWT909. According to various embodiments, the electronic device101or the external electronic device205may transmit, to the counterpart device, the missing frame925of the data frame (e.g., F2923) during a new service period (e.g., an added service period) determined on the basis of the next TWT909. According to various embodiments, if it is identified that the entire missing frame925has been received by the counterpart device, the electronic device101or the external electronic device205may transmit, to the counterpart device, a message (e.g., the TWT information frame915) including information indicating the next TWT917.
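The two next-TWT values above can be made concrete with a small computation; assuming the 2 ms wake duration, the 16.6 ms wake interval, and the 2 ms next TWT909 used in the earlier examples, the re-adjusted next TWT917 follows directly from the exclusion rule stated above.

```python
# Illustrative sketch of the re-adjustment arithmetic for the next TWT 917.

def next_twt_917_s(wake_interval_s, wake_duration_s,
                   added_sp_duration_s, next_twt_909_s):
    """Exclude from the wake interval the wake duration plus the time spent
    on the retransmission detour (offset 909 plus the added period)."""
    return wake_interval_s - wake_duration_s - (added_sp_duration_s + next_twt_909_s)

# The originally scheduled 3rd service period then starts 10.6 ms after
# the added service period ends:
print(next_twt_917_s(16.6e-3, 2e-3, 2e-3, 2e-3))  # ~0.0106
```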
According to various embodiments, the electronic device101or the external electronic device205may transmit corresponding data frames (e.g., F3927and F4929) to the counterpart device during previously scheduled TWT service periods (e.g., a 3rdservice period and a 4thservice period) determined on the basis of the starting time value (e.g., the next TWT917) of the next TWT service period. According to the method described above, the electronic device101and/or the external electronic device205may transmit a corresponding data frame during each TWT service period after initial TWT setup, identify whether or not retransmission of the missing frame is possible within the remaining TWT service period if a missing frame occurs during each TWT service period, and reschedule the next TWT service period if retransmission of the missing frame is not possible within the remaining TWT service period, thereby minimizing the latency caused by the occurrence of the missing frame. FIG.10Ais a flowchart1000aillustrating a method in which an electronic device101and/or an external electronic device205control latency due to failure of transmission according to various embodiments. According to various embodiments, the electronic device101or the external electronic device205may determine periods for transmitting and/or receiving data frames in operation1010a. For example, the electronic device101or the external electronic device205may determine at least one parameter of the periods (e.g., TWT service periods) for transmitting and/or receiving data frames on the basis of a first data amount, a second data amount, and a network bandwidth, thereby determining the periods in operation1010a. According to various embodiments, the electronic device101or the external electronic device205may transmit and/or receive the data frames according to the determined periods in operation1030a. For example, the electronic device101or the external electronic device205may transmit and/or receive at least one data frame during a first period among the determined periods. According to various embodiments, in operation1050a, the electronic device101or the external electronic device205may identify the existence of missing frames among the at least one data frame transmitted and/or received during the first period. For example, the electronic device101or the external electronic device205may transmit at least one data frame to the counterpart device during the first period, and, if an ACK message (e.g., the ACK message519or the ACK message525inFIG.5) is not received from the counterpart device, may determine at least one transmitted data frame to be a missing frame. As another example, in the case where the electronic device101or the external electronic device205fails to transmit a data frame to the counterpart device, the electronic device101or the external electronic device205may determine the data frame that failed to be transmitted to be a missing frame. According to various embodiments, if there is a missing frame, in operation1070a, the electronic device101or the external electronic device205may transmit and/or receive the missing frame during a second period, which is different from the determined periods. For example, the electronic device101or the external electronic device205may further determine a period for transmitting and/or receiving the data frame in order to retransmit the missing frame.
The electronic device101or the external electronic device205may transmit, to the counterpart device, a message (e.g., the TWT information frame) including information indicating the starting time of the next period of the first period for transmitting and/or receiving the data frame, thereby determining the second period (e.g., the added service period). For example, the second period may be different from the periods determined in operation1010a, and may start prior to the starting time of the period that follows the first period among the determined periods. The electronic device101or the external electronic device205may transmit and/or receive the missing frame according to the determined second period. FIG.10Bis a flowchart1000billustrating a method in which an electronic device101and/or an external electronic device205control latency due to failure of transmission according to various embodiments. According to various embodiments, the electronic device101or the external electronic device205may set up a TWT service period in operation1010b. According to various embodiments, the electronic device101or the external electronic device205may determine at least one parameter of the TWT service period. For example, the electronic device101or the external electronic device205may determine the duration of the TWT service period (e.g., the TWT wake duration) on the basis of a first data amount, a second data amount, and a network bandwidth. For example, the electronic device101or the external electronic device205may determine the interval of the TWT service period (e.g., the TWT wake interval) on the basis of a refresh rate. For example, the electronic device101or the external electronic device205may determine the time at which the TWT service period starts (e.g., the target wake time) on the basis of a result of monitoring the wireless channel. According to various embodiments, the electronic device101or the external electronic device205may transmit at least one determined parameter to the counterpart device. For example, the electronic device101may transmit, to the external electronic device205, a response message (e.g., the TWT response frame503inFIG.5) including information on the at least one determined parameter. For example, the external electronic device205may transmit, to the electronic device101, a request message (e.g., the TWT request frame501inFIG.5) including information on the at least one determined parameter and then receive, from the electronic device101, a response message (e.g., the TWT response frame503inFIG.5) indicating approval or rejection of the at least one determined parameter. According to various embodiments, the electronic device101or the external electronic device205may transmit and/or receive a data frame according to the set TWT service period in operation1020b. According to various embodiments, in operation1030b, the electronic device101or the external electronic device205may identify whether or not there is a missing frame among at least one data frame transmitted and/or received during at least one TWT service period. For example, the electronic device101or the external electronic device205may transmit the data frame to the counterpart device every TWT service period and receive an ACK message (e.g., the ACK message519or the ACK message525inFIG.5) from the counterpart device.
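The parameter determination described above for operation1010bmay be sketched as follows (an assumed reading; the formulas, names, and example values are hypothetical and are not claimed as the implementation):

```python
# Illustrative sketch: wake duration from the data amounts and network
# bandwidth, wake interval from the refresh rate (one period per refresh).
def twt_parameters(first_data_bits: int,
                   second_data_bits: int,
                   bandwidth_bps: float,
                   refresh_rate_hz: float) -> tuple[float, float]:
    # Time needed to exchange both data amounts at the given bandwidth.
    wake_duration_s = (first_data_bits + second_data_bits) / bandwidth_bps
    # One service period per display refresh cycle.
    wake_interval_s = 1.0 / refresh_rate_hz
    return wake_duration_s, wake_interval_s

# Hypothetical example: 1 Mbit + 0.2 Mbit at 600 Mbps, 60 Hz refresh rate.
duration, interval = twt_parameters(1_000_000, 200_000, 600e6, 60.0)
print(f"{duration * 1e3:.1f} ms every {interval * 1e3:.1f} ms")  # 2.0 ms every 16.7 ms
```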
The electronic device101or the external electronic device205may identify whether or not there is a missing frame among the transmitted data frames on the basis of the ACK message received from the counterpart device. As another example, if the electronic device101or the external electronic device205fails to transmit a data frame to the counterpart device, the electronic device101or the external electronic device205may determine the data frame that fails to be transmitted to be a missing frame. According to various embodiments, if it is identified that there is no missing frame among the at least one data frame transmitted and/or received during at least one TWT service period, the electronic device101or the external electronic device205may reperform operation1020b, thereby transmitting and/or receiving a corresponding next data frame during the next TWT service period. According to various embodiments, if it is identified that there is a missing frame among the at least one data frame transmitted and/or received during at least one TWT service period, the electronic device101or the external electronic device205may identify whether or not the missing frame is able to be transmitted and/or received within the TWT service period in operation1040b. For example, the electronic device101or the external electronic device205may identify the time required to transmit the identified missing frame and identify whether or not the missing frame is able to be retransmitted within the remaining period of the TWT service period. According to various embodiments, if it is identified that the missing frame is able to be transmitted and/or received within the TWT service period, the electronic device101or the external electronic device205may transmit and/or receive the missing frame within the remaining period of the TWT service period in operation1050b. According to various embodiments, if it is identified that the missing frame is unable to be transmitted and/or received within the TWT service period, the electronic device101or the external electronic device205may adjust the target wake time of the TWT service period in operation1060b. For example, the electronic device101or the external electronic device205may transmit, to the counterpart device, a message (e.g., the TWT information frame) including information indicating the starting time of the next TWT service period (e.g., the added service period) in order to retransmit the missing frame of the transmitted data frame. According to various embodiments, the electronic device101or the external electronic device205may transmit and/or receive the missing frame within the TWT service period (e.g., the added service period) determined based on the adjusted target wake time in operation1070b. According to various embodiments, in operation1080b, the electronic device101or the external electronic device205may identify whether or not transmission and/or reception of the missing frame is successful as a result of operation1050bor operation1070b. For example, the electronic device101or the external electronic device205may receive an ACK message indicating reception of the missing frame from the counterpart device after transmitting the missing frame within the remaining period of the TWT service period in operation1050b. 
For example, the electronic device101or the external electronic device205may receive an ACK message (e.g., the ACK message913inFIG.9) indicating reception of the missing frame from the counterpart device after transmitting the missing frame during the TWT service period determined based on the target wake time adjusted in operation1070b. The electronic device101or the external electronic device205may identify whether or not the counterpart device has received all the missing frames based on the received ACK message. According to various embodiments, if it is identified that transmission and/or reception of the missing frame is not successful, the electronic device101or the external electronic device205may reperform operation1040b, thereby identifying whether or not the missing frame is able to be transmitted within the TWT service period (e.g., within the remaining period of the corresponding TWT service period). According to various embodiments, if it is identified that transmission and/or reception of the missing frame is successful, the electronic device101or the external electronic device205, in operation1090b, may transmit and/or receive the next data frame according to the next TWT service period. For example, the electronic device101or the external electronic device205may transmit, to the counterpart device, a message (e.g., the TWT information frame) including information indicating the starting time of the next TWT service period in order to re-adjust the target wake time of the next TWT service period. The electronic device101or the external electronic device205may transmit and/or receive a corresponding next TWT data frame during the initially scheduled next TWT service period based on the re-adjusted target wake time. According to various embodiments, the electronic device101or the external electronic device205may reperform operation1040bafter operation1090b. FIG.10Cis a flowchart1000cillustrating a method in which an electronic device101and/or an external electronic device205control latency due to failure of transmission according to various embodiments. A duplicate of a description that has been made inFIG.10A or10Bwill be omitted below. According to various embodiments, the electronic device101or the external electronic device205may set up a TWT service period in operation1005c. According to various embodiments, the electronic device101or the external electronic device205may determine at least one parameter of the TWT service period. According to various embodiments, the electronic device101or the external electronic device205may transmit and/or receive a data frame according to the set TWT service period in operation1010c. According to various embodiments, in operation1015c, the electronic device101or the external electronic device205may identify whether or not a trigger frame, RTS (request-to-send), and/or CTS (clear-to-send) of another external electronic device are detected. For example, the RTS may include information indicating that a transmitting device is to transmit data to a receiving device. For example, the CTS may include information transmitted from a receiving device to a transmitting device so as to indicate that the receiving device is in a state capable of receiving data. According to various embodiments, the detected trigger frame, RTS, or CTS may include information indicating a network allocation vector (NAV) value.
For example, the NAV value may indicate information on the time for which the wireless channel is occupied by the first external electronic device that transmits the trigger frame, the RTS, or the CTS and the second external electronic device that transmits and/or receives data to and/or from the first external electronic device, and may provide a function of restricting access of devices, which are different from the first and second external electronic devices, to a wireless medium until the NAV value becomes 0. According to various embodiments, if the trigger frame, RTS, and/or CTS of another external electronic device are not detected, the electronic device101or the external electronic device205may reperform operation1010c, thereby transmitting and/or receiving a corresponding next data frame during the next TWT service period. According to various embodiments, if it is identified that the trigger frame, RTS, and/or CTS of another external electronic device are detected, the electronic device101or the external electronic device205, in operation1020c, may identify the remaining period excluding the period corresponding to the NAV value of another external electronic device from the TWT service period. For example, the electronic device101or the external electronic device205may identify the time during which access of the electronic device101and the external electronic device205to the wireless medium is restricted in the TWT service period in which the trigger frame, RTS, and/or CTS of another external electronic device are detected on the basis of the NAV value, and may identify the time value obtained by subtracting the time during which access to the wireless medium is restricted from the duration of the TWT service period. According to various embodiments, in operation1025c, the electronic device101or the external electronic device205may identify whether or not the missing frame is able to be transmitted and/or received within the remaining period of the TWT service period. For example, the electronic device101or the external electronic device205may identify the time value obtained by subtracting the time during which access to the wireless medium is restricted from the duration of the TWT service period and identify whether or not the missing frame is able to be transmitted and/or received during the time corresponding to the time value. For example, the electronic device101or the external electronic device205may identify the time for transmitting and/or receiving the missing data as the value obtained by dividing the amount of missing data to be transmitted and/or received by a network bandwidth and, if the time value, obtained by subtracting the time during which access to the wireless medium is restricted from the duration of the TWT service period, is greater than the time for transmitting and/or receiving the missing data, determine that the missing frame is able to be transmitted and/or received. According to various embodiments, if it is identified that the missing frame is able to be transmitted and/or received within the remaining period of the TWT service period, in operation1030c, the electronic device101or the external electronic device205may transmit and/or receive the missing frame within the remaining period of the TWT service period.
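The check of operations1020cand1025cdescribed above may be sketched as follows (a minimal illustration using the quantities named in the description; the function and parameter names are hypothetical):

```python
# Subtract the NAV-restricted time from the service period duration
# (operation1020c) and check whether the time needed to retransmit the
# missing data fits in what remains (operation1025c).
def can_retransmit_within_sp(sp_duration_s: float,
                             nav_restricted_s: float,
                             missing_bits: int,
                             bandwidth_bps: float) -> bool:
    remaining_s = sp_duration_s - nav_restricted_s   # remaining period
    required_s = missing_bits / bandwidth_bps        # time for missing data
    return remaining_s > required_s
```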
According to various embodiments, if it is identified that the missing frame is unable to be transmitted and/or received within the remaining period of the TWT service period, the electronic device101or the external electronic device205may adjust the target wake time of the TWT service period in operation1035c. For example, the electronic device101or the external electronic device205may transmit, to the counterpart device, a message (e.g., the TWT information frame) including information indicating the starting time of the next TWT service period (e.g., the added service period) in order to retransmit the missing frame of the transmitted data frame. According to various embodiments, the electronic device101or the external electronic device205may transmit and/or receive the missing frame during the TWT service period (e.g., the added service period) determined based on the adjusted target wake time in operation1040c. According to various embodiments, in operation1045c, the electronic device101or the external electronic device205may identify whether or not transmission and/or reception of the missing frame is successful as a result of performing operation1030cor operation1040c. For example, the electronic device101or the external electronic device205may receive an ACK message indicating reception of the missing frame from the counterpart device. The electronic device101or the external electronic device205may identify whether or not the counterpart device has received the entire missing frame on the basis of the received ACK message. According to various embodiments, if it is identified that transmission and/or reception of the missing frame is not successful, the electronic device101or the external electronic device205may perform operation1025cagain. According to various embodiments, if it is identified that transmission and/or reception of the missing frame is successful, the electronic device101or the external electronic device205, in operation1050c, may transmit and/or receive the next data frame corresponding to the next TWT service period (e.g., the next TWT service period that is initially scheduled). According to various embodiments, the electronic device101or the external electronic device205may reperform operation1015cafter operation1050c. FIG.10Dis a flowchart1000dillustrating a method in which an electronic device101and/or an external electronic device205control latency due to failure of transmission according to various embodiments. A duplicate of a description that has been made with reference toFIG.10A,10B, or10C will be omitted below. According to various embodiments, the electronic device101or the external electronic device205may set up a TWT service period in operation1010d. According to various embodiments, the electronic device101or the external electronic device205may determine at least one parameter of the TWT service period. According to various embodiments, the electronic device101or the external electronic device205may transmit and/or receive a data frame according to the set TWT service period in operation1020d. According to various embodiments, the electronic device101or the external electronic device205may identify whether or not the wireless channel is capable of being occupied in operation1030d. According to various embodiments, the electronic device101or the external electronic device205may perform clear channel assessment (CCA), thereby identifying the wireless channel section occupied by another external electronic device.
According to various embodiments, if it is identified that the wireless channel is capable of being occupied to transmit the data frame during the set TWT service period as a result of performing CCA, operation1020dmay be reperformed so that the corresponding data frame may be transmitted and/or received during the TWT service period. According to various embodiments, if the corresponding data frame is unable to be transmitted during the set TWT service period due to occupation of the wireless channel by another external electronic device as a result of performing CCA, occupation of the wireless channel may be identified to be impossible. According to various embodiments, if it is identified that occupation of the wireless channel is impossible, the electronic device101or the external electronic device205may adjust the target wake time of the TWT service period in operation1040d. For example, the electronic device101or the external electronic device205may transmit, to the counterpart device, a message (e.g., the TWT information frame) including information indicating the starting time of the next TWT service period (e.g., the added service period) in order to retransmit the missing frame of the transmitted data frame. According to various embodiments, the electronic device101or the external electronic device205may transmit and/or receive the missing frame during the TWT service period (e.g., the added service period) determined based on the adjusted target wake time in operation1050d. According to various embodiments, the electronic device101or the external electronic device205may identify whether or not transmission and/or reception of the missing frame is successful in operation1060d. For example, the electronic device101or the external electronic device205may receive an ACK message indicating reception of the missing frame from the counterpart device. The electronic device101or the external electronic device205may identify whether or not the counterpart device has received the entire missing frame on the basis of the received ACK message. According to various embodiments, if it is identified that transmission and/or reception of the missing frame is not successful, the electronic device101or the external electronic device205may reperform operation1040d. According to various embodiments, if it is identified that transmission and/or reception of the missing frame is successful, the electronic device101or the external electronic device205may transmit and/or receive the next data frame according to the next TWT service period (e.g., the next TWT service period that is initially scheduled) in operation1070d. FIG.11Ais a diagram illustrating a method in which an electronic device101and/or an external electronic device205control latency due to failure of transmission by shifting a TWT service period according to various embodiments. Hereinafter, a description will be made with reference toFIGS.9A and9Bas well. A duplicate of a description that has been made inFIG.9A or9Bwill be omitted below. 
Referring toFIG.9Bas well, if at least a portion of the data frame (e.g., F3927) fails to be transmitted within a service period (e.g., a 3rdservice period) or if at least a portion of the transmitted data frame (e.g., F3927) is not received by the counterpart device due to issues such as wireless channel interference, congestion, and/or low signal quality (e.g., failure of transmission of all/some of the data frame (e.g., F3927)), the electronic device101or the external electronic device205may transmit, to the counterpart device, a message (e.g., a TWT information frame1101) including information indicating the next TWT1103. Thereafter, the electronic device101or the external electronic device205may transmit, to the counterpart device, a missing frame1105of the data frame (e.g., F3927) during a new service period (e.g., a 2ndadded service period) determined on the basis of the next TWT1103. According to various embodiments, the electronic device101or the external electronic device205may identify that failure of transmission of all/some of the data frame continuously occurs during the consecutive TWT service periods (e.g., a 2ndservice period and a 3rdservice period). As another example, the electronic device101or the external electronic device205may identify that the number of TWT service periods, in which failure of transmission of all/a portion of the data frame occurred, is greater than or equal to a threshold number, among a predetermined number of consecutive TWT service periods. According to various embodiments, if it is identified that failure of transmission of all/a portion of the data frame continuously occurs, the electronic device101or the external electronic device205may maintain the target wake time of the next TWT service period without re-adjusting the same. For example, referring toFIG.9Aas well, after identifying that the entire missing frame1105is received by the counterpart device, the electronic device101or the external electronic device205may not transmit, to the counterpart device, a message (e.g., the TWT information frame915inFIG.9A) for re-adjusting the target wake time of the next TWT service period, thereby maintaining the target wake time of the next TWT service period. In this case, the next TWT service period (e.g., a 4thservice period) may start after the TWT wake interval, which is determined at the time of initial setup, from the new TWT service period (e.g., the 2ndadded service period) in which the missing frame1105is transmitted. Referring toFIGS.9B and11A, if the message (e.g., the TWT information frame915inFIG.9A) is transmitted (in the case shown inFIG.9A), the next TWT service period may be determined to be the TWT service period (e.g., a 4thservice period (initial))1107scheduled at the time of initial setup, whereas if the message (e.g., the TWT information frame915inFIG.9A) is not transmitted (in the case shown inFIG.11A), the next TWT service period may be determined to be the TWT service period (e.g., a 4thservice period (shifted))1109shifted from the TWT service period (e.g., the 4thservice period (initial))1107scheduled at the time of initial setup. According to various embodiments, the electronic device101and the external electronic device205may transmit and/or receive a data frame (e.g., F4929) during the shifted TWT service period (e.g., the 4thservice period (shifted))1109.
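The contrast betweenFIG.9A(initial schedule kept) andFIG.11A(schedule shifted) may be sketched as follows (illustrative only; the names are hypothetical and the decision flag stands in for whether the TWT information frame915is transmitted):

```python
# If the TWT information frame is sent, the initially scheduled start is
# kept; otherwise the next service period follows the added service period
# by the initially determined TWT wake interval (the shifted schedule).
def next_sp_start(initial_sp_start_s: float,
                  added_sp_start_s: float,
                  wake_interval_s: float,
                  info_frame_sent: bool) -> float:
    if info_frame_sent:
        return initial_sp_start_s                  # 4th service period (initial)
    return added_sp_start_s + wake_interval_s      # 4th service period (shifted)
```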
According to the method described above, in the case where it is identified that the missing frame frequently occurs after the initial TWT setup, the electronic device101and/or the external electronic device205may determine that the missing frame may frequently occur afterwards if transmission and/or reception of the data frame continues based on the TWT service period according to the initial TWT setup. The electronic device101and/or the external electronic device205may perform rescheduling based on the new TWT service period in which the last transmission and/or reception of the missing frame was successful, instead of the TWT service period scheduled at the time of initial setup, thereby reducing the possibility of the missing frame occurring and thus minimizing the latency due to the occurrence of the missing frame. FIG.11Bis a diagram illustrating a method in which an electronic device101and/or an external electronic device205control latency due to failure of transmission by adjusting a TWT wake duration according to various embodiments. Hereinafter, a description will be made with reference toFIGS.9A,9B, and11Aas well. A duplicate of a description that has been made inFIG.9A,9B, or11A will be omitted below. Referring toFIG.11Aas well, according to various embodiments, failure of transmission of all/a portion of the data frame may occur continuously in consecutive TWT service periods (e.g., the 2ndservice period and the 3rdservice period). According to various embodiments, if it is identified that failure of transmission of all/a portion of the data frame continuously occurs in the consecutive TWT service periods (e.g., the 2ndservice period and the 3rdservice period), the electronic device101or the external electronic device205may transmit, to the counterpart device, a message (e.g., a TWT request frame and/or a TWT response frame) for TWT reset (e.g., a TWT setup1111). For example, the message (e.g., the TWT request frame and/or the TWT response frame) may include information on at least one parameter (e.g., a TWT wake duration, a TWT wake interval, and a target wake time) of the TWT service period. According to various embodiments, the electronic device101or the external electronic device205may transmit a message (e.g., the TWT request frame and/or TWT response frame) including information on a TWT wake duration1113bthat is different from the TWT wake duration determined at the time of initial setup. According to various embodiments, the electronic device101or the external electronic device205may determine the TWT wake duration1113b, which is different from the TWT wake duration determined at the time of initial setup, on the basis of the additional time required for transmission of the data frame that failed to be transmitted during the previous TWT service period. For example, in the case where the TWT wake duration determined at the time of initial setup is 2 ms, if a time of 4 ms is taken for transmission of the data frame that failed to be transmitted during the previous TWT service period due to channel congestion and retransmission of the missing frame, the TWT wake duration1113bmay be determined to be the 4 ms taken for transmission of the data frame that failed to be transmitted during the previous TWT service period and retransmission of the missing frame.
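The TWT wake duration reset ofFIG.11Bmay be sketched as follows (a minimal illustration of the 2 ms to 4 ms example above; the never-shrink rule expressed by max() is an assumption, not a claimed behavior):

```python
# Grow the wake duration to the time actually needed in the previous
# service period (transmission of the failed data frame plus retransmission
# of the missing frame), as in the 2 ms -> 4 ms example.
def reset_wake_duration(initial_duration_s: float,
                        measured_required_s: float) -> float:
    return max(initial_duration_s, measured_required_s)

assert reset_wake_duration(0.002, 0.004) == 0.004  # 2 ms -> 4 ms
```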
According to various embodiments, the message (e.g., the TWT request frame and/or the TWT response frame) including information on the different TWT wake duration1113bmay be received by the counterpart device, and the TWT service period of the electronic device101and the external electronic device205may be rescheduled. For example, referring toFIG.11B, the TWT service period (e.g., a 4thservice period or a 5thservice period) of the electronic device101and the external electronic device205may have the TWT wake duration1113bthat is changed from the initially configured TWT wake duration1113a. According to various embodiments, the rescheduled TWT service period (e.g., the 4thservice period) may start at the starting time of the TWT service period scheduled at the time of initial setup (e.g., at the time after the lapse of the target wake time1115determined at the time of initial setup from the new service period (e.g., the 2ndadded service period)). According to various embodiments, the electronic device101and the external electronic device205may transmit and/or receive a corresponding next data frame (e.g., F4929or F51117) during each TWT service period (e.g., the 4thservice period or the 5thservice period) having the reset TWT wake duration. According to various embodiments, the electronic device101or the external electronic device205may change the TWT wake duration back to the TWT wake duration1113aof the initial setup, if the channel congestion is reduced (e.g., a reduction in the frequency of the occurrence of the missing frames) or if the data frame is able to be transmitted within the TWT wake duration determined at the time of initial setup, after the TWT reset. FIG.11Cis a diagram illustrating a method in which an electronic device101and/or an external electronic device205control latency due to failure of transmission by adjusting a TWT wake duration and a TWT wake interval according to various embodiments. Hereinafter, a description will be made with reference toFIGS.11A and11Bas well. A duplicate of a description that has been made inFIG.11A or11Bwill be omitted below. Referring toFIG.11Bas well, according to various embodiments, if it is identified that failure of transmission of all/a portion of the data frame continuously occurs in consecutive TWT service periods (e.g., the 2ndservice period and the 3rdservice period), the electronic device101or the external electronic device205may transmit, to the counterpart device, a message for TWT reset (e.g., the TWT request frame and/or the TWT response frame). For example, the message (e.g., the TWT request frame and/or the TWT response frame) may include information on at least one parameter (e.g., a TWT wake duration, a TWT wake interval, and a target wake time) of the TWT service period. According to various embodiments, the electronic device101or the external electronic device205may transmit a message (e.g., the TWT request frame and/or the TWT response frame) including information on a TWT wake duration1113b, which is different from the TWT wake duration determined at the time of initial setup, and a TWT wake interval1119b, which is different from the TWT wake interval determined at the time of initial setup. According to various embodiments, the electronic device101or the external electronic device205may determine the TWT wake duration1113band the TWT wake interval1119bon the basis of the additional time required for transmission of the data frame that failed to be transmitted during the previous TWT service period.
For example, in the case where the TWT wake duration determined at the time of initial setup is 2 ms, if a time of 4 ms is taken for transmission of the data frame that failed to be transmitted during the previous TWT service period due to the channel congestion and retransmission of the missing frame, the TWT wake interval1119bmay be determined to be a value of ½ times the TWT wake interval1119adetermined at the time of initial setup. According to various embodiments, the message (e.g., the TWT request frame and/or the TWT response frame) may be received by the counterpart device, and the TWT service period of the electronic device101and the external electronic device205may be rescheduled. For example, referring toFIG.11Bas well, the TWT service period (e.g., the 4thservice period or the 5thservice period) of the electronic device101and the external electronic device205may have the TWT wake duration1113bchanged from the TWT wake duration1113aof the initial setup and the TWT wake interval1119bchanged from the TWT wake interval1119aof the initial setup. According to various embodiments, the rescheduled TWT service period (e.g., the 4thservice period) may start at the starting time of the TWT service period scheduled at the time of initial setup (e.g., at the time after the lapse of the target wake time1115determined at the time of initial setup from the new service period (e.g., the 2ndadded service period)). According to various embodiments, the electronic device101and the external electronic device205may transmit and/or receive a corresponding next data frame (e.g., F4929or F51117) during each TWT service period (e.g., the 4thservice period or the 5thservice period) having the reset TWT wake duration and the reset TWT wake interval. According to various embodiments, if the channel congestion is reduced (e.g., a reduction in the frequency of occurrence of the missing frames) after the TWT reset, the electronic device101or the external electronic device205may change them back to the initially set TWT wake duration1113aand/or TWT wake interval1119a. FIG.12Ais a flowchart1200aillustrating a method in which an electronic device101and/or an external electronic device205control latency due to failure of transmission by rescheduling a TWT service period according to various embodiments. Hereinafter, a description will be made with reference toFIG.11A,11B, or11C as well. According to various embodiments, the electronic device101or the external electronic device205may set up a TWT service period in operation1210a. According to various embodiments, the electronic device101or the external electronic device205may determine one or more parameters of the TWT service period. According to various embodiments, the electronic device101or the external electronic device205may transmit and/or receive a data frame according to the set TWT service period in operation1220a. According to various embodiments, the electronic device101or the external electronic device205may identify whether or not missing frames occur consecutively in operation1230a. For example, the electronic device101or the external electronic device205may identify whether or not failure of transmission of all/a portion of the data frame occurs in consecutive TWT service periods.
As another example, the electronic device101or the external electronic device205may identify that the number of TWT service periods in which failure of transmission of all/a portion of the data frame occurs, among a predetermined number of consecutive TWT service periods, is equal to or greater than a threshold number. According to various embodiments, if it is identified that missing frames do not occur consecutively, the electronic device101or the external electronic device205may reperform operation1220a, thereby transmitting and/or receiving a corresponding next data frame during the next TWT service period. According to various embodiments, when it is identified that there are consecutive missing frames, the electronic device101or the external electronic device205may change at least one parameter of the TWT service period in operation1240a. For example, the electronic device101or the external electronic device205may change at least one of a TWT wake duration or a TWT wake interval of the TWT service period, thereby determining (e.g., rescheduling) the TWT service period. The electronic device101or the external electronic device205may transmit a message (e.g., the TWT request frame and/or the TWT response frame) including information on the changed parameter to the counterpart device. According to various embodiments, the electronic device101or the external electronic device205may not transmit a message (e.g., the TWT information frame915inFIG.9) for re-adjusting the target wake time of the TWT service period, thereby shifting the next TWT service period. According to various embodiments, the electronic device101or the external electronic device205may change the target wake time of the TWT service period to a target wake time, which is different from the target wake time determined at the time of initial setup, thereby rescheduling the TWT service period. According to various embodiments, in operation1250a, the electronic device101or the external electronic device205may transmit and/or receive the next data frame according to the TWT service period determined on the basis of at least one changed parameter. According to various embodiments, the electronic device101or the external electronic device205may reperform operation1230aafter performing operation1250a. FIG.12Bis a flowchart1200billustrating a method in which an electronic device101and/or an external electronic device205control latency due to failure of transmission by changing a wireless channel according to various embodiments. Hereinafter, a description will be made with reference toFIG.12Aas well. According to various embodiments, the electronic device101or the external electronic device205may set up a TWT service period in operation1210b. According to various embodiments, the electronic device101or the external electronic device205may determine one or more parameters of the TWT service period. According to various embodiments, the electronic device101or the external electronic device205may transmit and/or receive a data frame according to the set TWT service period in operation1220b. According to various embodiments, the electronic device101or the external electronic device205may identify whether or not missing frames occur consecutively in operation1230b.
According to various embodiments, if it is identified that missing frames do not occur consecutively, the electronic device101or the external electronic device205may reperform operation1220b, thereby transmitting and/or receiving a corresponding next data frame during the next TWT service period. According to various embodiments, if it is identified that missing frames consecutively occur, the electronic device101or the external electronic device205may determine at least one parameter of the TWT service period to be changed in operation1240b. For example, the electronic device101or the external electronic device205may determine at least one of a TWT wake duration or a TWT wake interval of the TWT service period. According to various embodiments, the electronic device101or the external electronic device205may identify channel utilization (CU) for a specified time in operation1250b. For example, the electronic device101or the external electronic device205may identify the channel utilization by monitoring the wireless channel for a specified time. For example, the channel utilization may indicate the ratio of the time for which a wireless channel is occupied by another external electronic device to a specified time. According to various embodiments, in operation1260b, the electronic device101or the external electronic device205may identify whether or not the wireless channel is capable of being occupied according to at least one determined parameter on the basis of a result of identifying the channel utilization. For example, in the case where the TWT wake interval determined in operation1240bis 16.6 ms, if the channel utilization is identified to be 20%, the electronic device101or the external electronic device205may identify the channel occupiable time to be 13.28 ms, corresponding to 80% of 16.6 ms, which is the TWT wake interval. In the case where the TWT wake duration determined in operation1240bis 4 ms, since the determined TWT wake duration is less than the channel occupiable time of 13.28 ms, the electronic device101or the external electronic device205may identify that the wireless channel is capable of being occupied according to the determined TWT wake interval and TWT wake duration. In the case where the channel utilization is identified to be 80%, the channel occupiable time is 3.32 ms, which is 20% of 16.6 ms, and is less than 4 ms, which is the TWT wake duration determined in operation1240b, and in this case, the electronic device101or the external electronic device205may identify that the wireless channel is not occupiable according to the determined TWT wake interval and TWT wake duration. According to various embodiments, if it is identified that the wireless channel is capable of being occupied according to at least one determined parameter on the basis of a result of identifying the channel utilization, in operation1270b, the electronic device101or the external electronic device205may transmit and/or receive the next data frame according to the TWT service period determined on the basis of at least one changed parameter. According to various embodiments, the electronic device101or the external electronic device205may reperform operation1230bafter performing operation1270b.
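The occupancy check of operation1260bmay be sketched as follows (a minimal illustration using the numeric examples above; the decision rule and names are an assumed reading of the description):

```python
# The occupiable time is the fraction of the TWT wake interval not used by
# other devices; the determined wake duration must fit within it.
def channel_occupiable(wake_interval_s: float,
                       wake_duration_s: float,
                       channel_utilization: float) -> bool:
    occupiable_s = wake_interval_s * (1.0 - channel_utilization)
    return wake_duration_s <= occupiable_s

# Examples from the description (16.6 ms interval, 4 ms duration):
assert channel_occupiable(0.0166, 0.004, 0.20)      # 13.28 ms occupiable
assert not channel_occupiable(0.0166, 0.004, 0.80)  # only 3.32 ms occupiable
```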
According to various embodiments, if it is identified that the wireless channel is not occupiable according to at least one determined parameter on the basis of a result of identifying the channel utilization, in operation1280b, the electronic device101or the external electronic device205may change the wireless channel for transmitting and/or receiving the data frame. When it is determined to change the wireless channel, the electronic device101or the external electronic device205may exchange, with the counterpart device, information on the channel to be changed and the time at which the changed channel is to be applied through out-of-band (OOB) communication (e.g., Bluetooth low energy (BLE) communication or 2.4 GHz-band WiFi communication). According to various embodiments, the electronic device101or the external electronic device205may reperform operation1210bafter performing operation1280b. FIG.13Ais a flowchart1300aillustrating a method in which an electronic device101resets a TWT service period on the basis of quality of service (QoS) according to various embodiments.FIG.13Bis a flowchart1300billustrating a method in which an external electronic device205resets a TWT service period on the basis of quality of service according to various embodiments. Referring toFIG.13A, according to various embodiments, the electronic device101may determine one or more parameters of a TWT service period in operation1310a. According to various embodiments, the electronic device101may identify channel utilization for a specified time in operation1320a. According to various embodiments, if it is identified that the wireless channel is capable of being occupied according to one or more determined parameters on the basis of a result of identifying the channel utilization, the electronic device101may transmit, to the external electronic device205, a response message (e.g., the TWT response frame) including information on the one or more determined parameters in operation1330a. Accordingly, the TWT service period may be set up between the electronic device101and the external electronic device205. According to various embodiments, the electronic device101may periodically identify parameters related to quality of service (QoS) regarding at least one data frame in operation1340a. For example, the parameter related to quality of service may include at least one of the end-to-end latency of the application or an error rate of the image frame. Alternatively, the parameter related to quality of service (QoS) may include at least one parameter among a delay, a packet loss, a delay variation, connectivity, a bandwidth or throughput, or reliability or availability. Alternatively, the parameter related to quality of service (QoS) may include at least one parameter among a QoS class identifier (QCI), a guaranteed bit rate (GBR), a maximum bit rate (MBR), or an allocation and retention priority (ARP). According to various embodiments, the electronic device101may identify the end-to-end latency of an application in the application layer for generating the data frame (e.g., image data) transmitted to the external electronic device205.
For example, after the electronic device101periodically transmits a specified packet to the external electronic device205, if a response to the transmitted packet is received from the external electronic device205, the electronic device101may identify the difference between the time of triggering transmission of the specified packet for identifying the end-to-end latency and the time at which a response to the transmitted packet is received, using the application, thereby identifying the end-to-end latency of the application. As another example, the electronic device101may also identify the end-to-end latency between the electronic device101and a server (e.g., the server108inFIG.1), and may determine, as the parameter related to quality of service in operation1340a, the latency obtained by adding the identified end-to-end latency between the electronic device101and the server108to the identified end-to-end latency of the application. Alternatively, according to various embodiments, the electronic device101may identify the delay or packet loss parameter among the QoS parameters in order to identify the end-to-end latency of the application or the error rate of the image frame. If the delay or packet loss parameter exceeds a predetermined threshold value (that is, if the parameter does not satisfy the QoS), the electronic device101according to various embodiments may determine that end-to-end latency has occurred in the application or that the error rate of the image frame is high. According to various embodiments, the electronic device101may change or maintain one or more parameters of the TWT service period in operation1350a. For example, the electronic device101may change the TWT wake duration and/or the TWT wake interval of the TWT service period on the basis of the periodically identified parameter related to quality of service, which will be described in more detail with reference to the following drawings. According to various embodiments, the electronic device101may transmit a response message (e.g., the TWT response frame) including information on the one or more changed or maintained parameters to the external electronic device205in operation1360a. Accordingly, the TWT service period may be reset between the electronic device101and the external electronic device205. Referring toFIG.13B, according to various embodiments, the external electronic device205may determine one or more parameters of the TWT service period in operation1310b. According to various embodiments, the external electronic device205may identify channel utilization for a specified time in operation1320b. According to various embodiments, if it is identified that the wireless channel is capable of being occupied according to one or more determined parameters on the basis of the result of identifying the channel utilization, in operation1330b, the external electronic device205may transmit, to the electronic device101, a request message (e.g., the TWT request frame) including information on the one or more determined parameters. According to various embodiments, the electronic device101may receive the request message (e.g., the TWT request frame) and transmit a response message (e.g., the TWT response frame) to the external electronic device205in operation1340b.
According to various embodiments, the electronic device101may identify one or more parameters of the TWT service period included in the request message (e.g., the TWT request frame), determine whether or not to approve or reject the same, and transmit a response message (e.g., the TWT response frame) indicating approval or rejection thereof. Accordingly, the TWT service period may be set up between the electronic device101and the external electronic device205. According to various embodiments, the external electronic device205may periodically identify the parameter related to quality of service (QoS) regarding at least one data frame in operation1350b. According to various embodiments, the external electronic device205may identify the end-to-end latency of an application in the application layer for generating the data frame (e.g., sensing data) transmitted to the electronic device101. For example, if the external electronic device205periodically transmits a specified packet to the electronic device101and receives a response to the transmitted packet from the electronic device101, the external electronic device205may identify the difference between the time of triggering transmission of the specified packet for identifying the end-to-end latency and the time at which a response to the transmitted packet is received, using the application, thereby identifying the end-to-end latency of the application. As another example, the external electronic device205may also identify the end-to-end latency between the external electronic device205and the server108, and may determine, as the parameter related to quality of service in operation1350b, the latency obtained by adding the identified end-to-end latency between the external electronic device205and the server108to the identified end-to-end latency of the application. According to various embodiments, the external electronic device205may change or maintain one or more parameters of the TWT service period in operation1360b. For example, the external electronic device205may change the TWT wake duration and/or the TWT wake interval of the TWT service period on the basis of the periodically identified parameter related to quality of service, which will be described in more detail with reference to the following drawings. According to various embodiments, the external electronic device205may transmit a request message (e.g., the TWT request frame) including information on the one or more changed or maintained parameters to the electronic device101in operation1370b. According to various embodiments, the electronic device101may receive the request message (e.g., the TWT request frame), and transmit a response message (e.g., the TWT response frame) to the external electronic device205in operation1380b. According to various embodiments, the electronic device101may identify one or more parameters of the TWT service period included in the request message (e.g., the TWT request frame), determine whether or not to approve or reject the same, and transmit a response message (e.g., the TWT response frame) indicating approval or rejection thereof. Accordingly, the TWT service period between the electronic device101and the external electronic device205may be reset. FIG.14Ais a flowchart1400aillustrating a method in which an electronic device101or an external electronic device205changes a parameter of a TWT service period on the basis of quality of service according to various embodiments.
According to various embodiments, the electronic device101or the external electronic device205may determine one or more parameters of a TWT service period in operation1410a. According to various embodiments, the electronic device101or the external electronic device205may identify quality of service of at least one data frame in operation1430a. According to various embodiments, the electronic device101or the external electronic device205may compare the identified quality of service of at least one data frame with at least one threshold value in operation1450a. For example, the threshold value may be determined on the basis of the end-to-end latency required by the application for generating the data frame transmitted to the counterpart device (hereinafter referred to as “required latency”), and may be determined as a single value or two or more values. According to various embodiments, the electronic device101or the external electronic device205may change or maintain one or more parameters of the TWT service period on the basis of the comparison result in operation1470a. For example, if the end-to-end latency identified inFIG.13A or13Bis equal to or greater than the required latency value by a certain ratio, the electronic device101or the external electronic device205may reduce the TWT wake interval of the TWT service period or increase the TWT wake duration thereof. If the identified end-to-end latency is less than the required latency value by a certain ratio, the electronic device101or the external electronic device205may increase the TWT wake interval of the TWT service period or reduce the TWT wake duration thereof. If the identified end-to-end latency is neither greater than nor less than the required latency value by the certain ratio, the electronic device101or the external electronic device205may maintain at least one of the TWT wake interval or the TWT wake duration of the TWT service period. For example, the electronic device101or the external electronic device205may change the duration (SP duration) and/or the interval of the TWT service period in stages. For example, the electronic device101or the external electronic device205may determine a parameter set including the duration and/or interval of the TWT service period on the basis of a refresh rate, periodically identify quality of service, move down by one stage whenever quality of service is identified to be good, move up by one stage whenever quality of service is identified to be bad, and determine the parameters corresponding to the current stage as the parameters to be applied to the TWT service period. Table 1 is an example of a parameter set of the TWT service period when the refresh rate is 60 Hz.

TABLE 1
Stage    SP duration    Interval
1        2 ms           16.6 ms
2        4 ms           16.6 ms
3        6 ms           16.6 ms
4        8 ms           16.6 ms
5        4 ms           8.3 ms
6        6 ms           8.3 ms
7        TWT tear down

Referring to Table 1, “TWT tear down” in stage7may indicate that the scheduling operation according to the TWT service period ends if quality of service continues to be bad, and the electronic device101and/or the external electronic device205may operate in a normal mode (e.g., state) in stage7. FIG.14Bis a flowchart1400billustrating a method in which an electronic device101or an external electronic device205changes a parameter of a TWT service period on the basis of quality of service according to various embodiments. According to various embodiments, the electronic device101or the external electronic device205may determine one or more parameters of a TWT service period in operation1405b.
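The staged adjustment over the parameter set of Table 1 may be sketched as follows (values taken from Table 1; the step logic is an assumed reading of the description, not the claimed implementation):

```python
# Stage -> (SP duration in seconds, interval in seconds) per Table 1.
# Stage 7 is TWT tear down: the devices operate in the normal mode.
SP_STAGES = {
    1: (0.002, 0.0166), 2: (0.004, 0.0166), 3: (0.006, 0.0166),
    4: (0.008, 0.0166), 5: (0.004, 0.0083), 6: (0.006, 0.0083),
    7: None,
}

def step_stage(stage: int, qos_is_good: bool) -> int:
    """Move down one stage when quality of service is good, up one when bad."""
    return max(1, stage - 1) if qos_is_good else min(7, stage + 1)
```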
According to various embodiments, the electronic device101or the external electronic device205may identify quality of service of at least one data frame in operation1410b. According to various embodiments, in operation1415b, the electronic device101or the external electronic device205may compare the identified quality of service for at least one data frame with at least one threshold value. For example, the threshold values may include three values. For example, a first threshold value may be 70% of the required latency value, a second threshold value may be 90% of the required latency value, and a third threshold value may be 150% of the required latency value. For example, if it is identified that the end-to-end latency identified inFIG.13A or13Bis less than the first threshold value, quality of service (QoS) may be determined to be “very good”. For example, if it is identified that the identified end-to-end latency is greater than or equal to the first threshold value and less than the second threshold value, quality of service may be determined to be “good”. For example, if it is identified that the identified end-to-end latency is greater than or equal to the second threshold value and less than the third threshold value, quality of service may be determined to be “bad”. For example, if it is identified that the identified end-to-end latency is greater than or equal to the third threshold value, quality of service may be determined to be “very bad”. The number and ratios of the threshold values are provided by way of example, and are not necessarily limited to the above description. According to various embodiments, if it is identified that quality of service is “very good” as a result of the comparison, in operation1420b, the electronic device101or the external electronic device205may reduce the duration (e.g., the TWT wake duration) of the TWT service period or increase the interval (e.g., the TWT wake interval) of the TWT service period. According to various embodiments, in operation1425b, the electronic device101or the external electronic device205may identify whether or not a change to the determined duration (e.g., the TWT wake duration) and interval (e.g., the TWT wake interval) of the TWT service period is possible. For example, the electronic device101or the external electronic device205may identify channel utilization for a specified time and identify whether or not the wireless channel is capable of being occupied according to the determined duration (e.g., the TWT wake duration) or interval (e.g., the TWT wake interval) of the TWT service period. According to various embodiments, if it is identified that a change to the determined duration (e.g., the TWT wake duration) and interval (e.g., the TWT wake interval) of the TWT service period is possible, the electronic device101or the external electronic device205, in operation1430b, may reset the TWT service period on the basis of the determined duration (e.g., the TWT wake duration) and interval (e.g., the TWT wake interval) of the TWT service period. According to various embodiments, if it is identified that a change to the determined duration (e.g., the TWT wake duration) and interval (e.g., the TWT wake interval) of the TWT service period is impossible, the electronic device101or the external electronic device205may perform operation1435b.
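The threshold comparison of operation1415bmay be sketched as follows, using the example ratios given above (70%, 90%, and 150% of the required latency; the function name is hypothetical):

```python
# Classify quality of service from the identified end-to-end latency
# relative to the required latency, per the example thresholds above.
def classify_qos(latency_s: float, required_latency_s: float) -> str:
    if latency_s < 0.7 * required_latency_s:
        return "very good"
    if latency_s < 0.9 * required_latency_s:
        return "good"
    if latency_s < 1.5 * required_latency_s:
        return "bad"
    return "very bad"
```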
According to various embodiments, if it is identified that quality of service is “good” as a result of the comparison, the electronic device101or the external electronic device205may maintain the duration of the TWT service period and the interval of the TWT service period in operation1435b.

According to various embodiments, if it is identified that quality of service is “bad” as a result of the comparison, the electronic device101or the external electronic device205may increase the duration (e.g., the TWT wake duration) of the TWT service period or reduce the interval (e.g., the TWT wake interval) of the TWT service period in operation1440b.

According to various embodiments, in operation1445b, the electronic device101or the external electronic device205may identify whether or not a change to the determined duration (e.g., the TWT wake duration) and interval (e.g., the TWT wake interval) of the TWT service period is possible. For example, the electronic device101or the external electronic device205may identify channel utilization for a specified time and identify whether or not the wireless channel is capable of being occupied according to the determined duration (e.g., the TWT wake duration) and interval (e.g., the TWT wake interval) of the TWT service period.

According to various embodiments, if it is identified that a change to the determined duration (e.g., the TWT wake duration) and interval (e.g., the TWT wake interval) of the TWT service period is possible, the electronic device101or the external electronic device205may perform operation1430b.

According to various embodiments, if it is identified that a change to the determined duration (e.g., the TWT wake duration) and interval (e.g., the TWT wake interval) of the TWT service period is impossible, the electronic device101or the external electronic device205may perform operation1450b.

According to various embodiments, if it is identified that quality of service is “very bad” as a result of the comparison, the electronic device101or the external electronic device205may change the wireless channel for transmitting and/or receiving the data frame in operation1450b.

According to various embodiments, the electronic device101or the external electronic device205may change the duration (e.g., the TWT wake duration) and/or interval (e.g., the TWT wake interval) of the TWT service period in stages. For example, referring to Table 1, the electronic device101or the external electronic device205may periodically identify quality of service. If it is identified that quality of service continues to be “very good”, the electronic device101or the external electronic device205may reduce the stage by 1. If it is identified that quality of service is “good”, the electronic device101or the external electronic device205may maintain the current stage. If it is identified that quality of service is “bad”, the electronic device101or the external electronic device205may increase the stage by 1. If quality of service continues to be “bad” and stage7is thus reached, the electronic device101and the external electronic device205may terminate the scheduling operation according to the TWT service period and operate in the normal state. Alternatively, if quality of service continues to be “bad” and stage7is thus reached, the electronic device101and the external electronic device205may change the wireless channel for transmitting and/or receiving the data frame. If it is identified that quality of service is “very bad”, the electronic device101or the external electronic device205may change the wireless channel for transmitting and/or receiving the data frame.
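By way of illustration, the threshold comparison of operation1415band the staged adaptation of Table 1 can be sketched together as follows. This is a minimal sketch, not the patented implementation: the identifiers (classify_qos, next_stage, TWT_STAGES) are invented, and the 70%/90%/150% ratios and 60 Hz stage set are simply the example values given above.

```python
# Stage set of Table 1 (60 Hz refresh rate) as (SP duration ms, interval ms);
# list index 0 corresponds to stage 1, and stage 7 is the tear-down stage.
TWT_STAGES = [
    (2.0, 16.6),  # stage 1
    (4.0, 16.6),  # stage 2
    (6.0, 16.6),  # stage 3
    (8.0, 16.6),  # stage 4
    (4.0, 8.3),   # stage 5
    (6.0, 8.3),   # stage 6
    None,         # stage 7: TWT tear down, fall back to the normal mode
]

def classify_qos(latency_ms: float, required_ms: float) -> str:
    """Map measured end-to-end latency to a QoS class via the example thresholds."""
    if latency_ms < 0.7 * required_ms:   # first threshold: 70% of required latency
        return "very good"
    if latency_ms < 0.9 * required_ms:   # second threshold: 90%
        return "good"
    if latency_ms < 1.5 * required_ms:   # third threshold: 150%
        return "bad"
    return "very bad"                    # would also trigger a channel change

def next_stage(stage: int, qos: str) -> int:
    """Step one stage down when QoS is very good and one stage up when it is bad."""
    if qos == "very good":
        return max(stage - 1, 0)
    if qos == "bad":
        return min(stage + 1, len(TWT_STAGES) - 1)  # reaching stage 7 tears TWT down
    return stage  # "good": keep the current parameters
```

In operation, the stage would be re-evaluated at each periodic QoS check, and reaching the tear-down stage would end TWT scheduling as described for stage7above.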
FIG.15is a flowchart1500illustrating a method in which an electronic device101and/or an external electronic device205control latency due to failure of transmission according to various embodiments.

According to various embodiments, the electronic device101or the external electronic device205may set up a TWT service period in operation1505.

According to various embodiments, the electronic device101or the external electronic device205may transmit and/or receive a data frame according to the set TWT service period in operation1510.

According to various embodiments, in operation1515, the electronic device101or the external electronic device205may identify whether or not there is a missing frame among at least one data frame transmitted and/or received during at least one TWT service period.

According to various embodiments, if it is identified that there is no missing frame among the at least one data frame transmitted and/or received during the at least one TWT service period, the electronic device101or the external electronic device205may perform operation1510again, thereby transmitting and/or receiving a corresponding next data frame during the next TWT service period.

According to various embodiments, if it is identified that there is a missing frame among the at least one data frame transmitted and/or received during the at least one TWT service period, the electronic device101or the external electronic device205may identify whether or not the missing frame is able to be transmitted and/or received within the TWT service period in operation1520.

According to various embodiments, if it is identified that the missing frame is able to be transmitted and/or received within the TWT service period, the electronic device101or the external electronic device205may transmit and/or receive the missing frame within the remaining period of the TWT service period in operation1525.

According to various embodiments, if it is identified that the missing frame is unable to be transmitted and/or received within the TWT service period, the electronic device101or the external electronic device205may adjust the target wake time of the TWT service period in operation1530.

According to various embodiments, in operation1535, the electronic device101or the external electronic device205may transmit and/or receive the missing frame within a TWT service period (e.g., the added service period) determined based on the adjusted target wake time.

According to various embodiments, in operation1540, the electronic device101or the external electronic device205may identify whether or not missing frames occur consecutively.

According to various embodiments, if it is identified that missing frames do not occur consecutively, the electronic device101or the external electronic device205may perform operation1510again and transmit and/or receive a corresponding next data frame during the next TWT service period.

According to various embodiments, if it is identified that missing frames occur consecutively, the electronic device101or the external electronic device205may identify quality of service of at least one data frame in operation1545. The electronic device101or the external electronic device205may periodically identify quality of service, thereby identifying the parameters related to quality of service regarding the at least one data frame.
The electronic device101or the external electronic device205may determine the TWT wake duration and/or the TWT wake interval of the TWT service period to be changed based on the parameters related to quality of service.

According to various embodiments, the electronic device101or the external electronic device205may reset the TWT service period in operation1550. The electronic device101or the external electronic device205may reset the TWT service period between the electronic device101and the external electronic device205on the basis of the determined TWT wake duration and/or TWT wake interval of the TWT service period.

According to various embodiments, an electronic device may include a communication circuit operably coupled with an external electronic device and at least one processor, wherein the at least one processor may be configured to: determine one or more target-wake-time (TWT) parameters of at least one TWT service period based on at least one of an amount of data transmitted to the external electronic device, an amount of data received from the external electronic device, or a bandwidth, wherein at least one data frame is transmitted or received between the electronic device and the external electronic device during the at least one TWT service period; identify quality of service (QoS) for the at least one data frame transmitted or received during the at least one TWT service period; change at least one TWT parameter among the one or more TWT parameters based on the identified QoS; and control the communication circuit to transmit or receive at least one next data frame during a next TWT service period based on the changed at least one TWT parameter.

According to various embodiments, the at least one processor may be configured to identify the end-to-end latency of the at least one data frame and identify the QoS for the at least one data frame based on the identified end-to-end latency of the at least one data frame.

According to various embodiments, the QoS may include the end-to-end latency of the at least one data frame, and the at least one processor may be further configured to compare the end-to-end latency of the at least one data frame with at least one threshold value and determine at least one TWT parameter for the next TWT service period based on a result of the comparison of the identified end-to-end latency of the at least one data frame with the at least one threshold value.

According to various embodiments, the at least one threshold value may be determined based on a required end-to-end latency for an application that generates at least a portion of the at least one data frame.

According to various embodiments, the at least one processor may be further configured to determine at least one TWT parameter for the next TWT service period and, based on determining the at least one TWT parameter for the next TWT service period, control the communication circuit to transmit a TWT response frame to the external electronic device, and the TWT response frame may include information about the changed at least one TWT parameter.

According to various embodiments, the at least one TWT parameter may include at least one of a TWT wake duration and a TWT wake interval of a TWT service period.
According to various embodiments, the QoS may include the end-to-end latency of the at least one data frame, and the at least one processor may be further configured to, in response to determining that the end-to-end latency of the at least one data frame is less than a first threshold value, reduce a TWT wake duration of the next TWT service period or increase a TWT wake interval of the next TWT service period.

According to various embodiments, the QoS may include the end-to-end latency of the at least one data frame, and the at least one processor may be further configured to, in response to determining that the end-to-end latency of the at least one data frame is greater than a second threshold value, increase the duration of the next TWT service period or reduce the interval of the next TWT service period.

According to various embodiments, the QoS may include the end-to-end latency of the at least one data frame, and the at least one processor may be further configured to change a channel for transmitting or receiving the at least one next data frame in response to identifying that the end-to-end latency of the at least one data frame is greater than a third threshold value.

According to various embodiments, the at least one processor may be further configured to identify channel utilization of a channel through which the at least one data frame is transmitted or received, identify whether the at least one next data frame is able to be transmitted or received according to the changed at least one TWT parameter based on the identified channel utilization, and change a channel for transmitting and/or receiving the at least one next data frame in response to identifying that the at least one next data frame is unable to be transmitted or received according to the changed at least one TWT parameter.

According to various embodiments, the at least one processor may be further configured to, in response to identifying that the at least one next data frame is unable to be transmitted or received according to the changed at least one TWT parameter, control the communication circuit to transmit, to the external electronic device, information on the channel to be changed through at least one of a communication scheme or a channel that is different from the communication scheme or channel through which the at least one data frame is transmitted or received.

According to various embodiments, the at least one processor may be further configured to identify whether a missing frame exists during a first TWT service period among the at least one TWT service period, and, in response to identifying that the missing frame exists during the first TWT service period, control the communication circuit to transmit and/or receive the missing frame to or from the external electronic device during the first TWT service period or a second TWT service period different from the at least one TWT service period, and the second TWT service period may be determined to be a period added prior to the starting time of the next TWT service period of the first TWT service period among the at least one TWT service period, based on TWT information transmitted during the first TWT service period among the at least one TWT service period.
According to various embodiments, the at least one processor may be further configured to control the communication circuit to transmit a first data frame during the first TWT service period, and, after transmitting the first data frame, in response to a response message not being received from the external electronic device or in response to identifying, from the response message received from the external electronic device, that at least a portion of the first data frame is not received by the external electronic device, determine that the missing frame exists.

According to various embodiments, the at least one processor may be further configured to identify whether the missing frame is able to be transmitted to the external electronic device within the first TWT service period, and, in response to identifying that the missing frame is unable to be transmitted to the external electronic device within the first TWT service period, control the communication circuit to transmit a TWT information frame including the TWT information to the external electronic device in the next TWT service period.

According to various embodiments, the TWT information may include information indicating the starting time of the second TWT service period.

According to various embodiments, a method for controlling an electronic device may include: determining one or more TWT parameters of at least one TWT service period based on at least one of an amount of data transmitted to an external electronic device connected to the electronic device, an amount of data received from the external electronic device connected to the electronic device, or a bandwidth, wherein at least one data frame is transmitted or received between the electronic device and the external electronic device during the at least one TWT service period; identifying a quality of service (QoS) for the at least one data frame transmitted and/or received during the at least one TWT service period; changing at least one TWT parameter among the one or more TWT parameters on the basis of the identified QoS; and transmitting or receiving at least one next data frame during the next TWT service period based on the changed at least one TWT parameter.

According to various embodiments, an electronic device may include a communication circuit and at least one processor, wherein the at least one processor may be configured to: determine one or more periods for transmitting and/or receiving data frames between the electronic device and an external electronic device based on at least one of an amount of data transmitted to and received from an external electronic device, which is operably connected through the communication circuit, or a bandwidth; identify whether a missing frame exists among one or more data frames transmitted and/or received during a first period of the determined one or more periods; and, in response to identifying that the missing frame exists, control the communication circuit to transmit and/or receive the missing frame to and/or from the external electronic device during a second period, which is different from the determined periods, and wherein the second period may be determined to be a period prior to a starting time of the next period of the first period, among the one or more determined periods, based on information transmitted during the first period.
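As a rough illustration of the second-period mechanism recited above (and of operations1520to1535ofFIG.15), the following self-contained Python sketch decides whether missing frames can still be retransmitted in the current service period or whether an added service period must be scheduled before the next regular one. The ServicePeriod fields, the airtime-based test and all names are assumptions made for the example, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ServicePeriod:
    start_ms: float
    duration_ms: float
    interval_ms: float

    @property
    def end_ms(self) -> float:
        return self.start_ms + self.duration_ms

    @property
    def next_start_ms(self) -> float:
        # Start of the next regularly scheduled service period.
        return self.start_ms + self.interval_ms

def recovery_period(sp: ServicePeriod, missing_airtime_ms: float,
                    now_ms: float) -> ServicePeriod:
    """Return the service period in which missing frames would be resent."""
    if now_ms + missing_airtime_ms <= sp.end_ms:
        # Enough time remains: retransmit within the current period
        # (cf. operation 1525).
        return sp
    # Otherwise adjust the target wake time so that an added service period
    # begins before the next regularly scheduled one (cf. operations
    # 1530/1535). Placed as late as possible here; a real scheduler would
    # also check that the gap between periods is large enough.
    added_start = max(sp.end_ms, sp.next_start_ms - missing_airtime_ms)
    return ServicePeriod(start_ms=added_start,
                         duration_ms=missing_airtime_ms,
                         interval_ms=sp.interval_ms)
```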
According to various embodiments, the at least one processor may be further configured to control the communication circuit to transmit a first data frame during the first period, and, after transmitting the first data frame, in response to identifying that a response message is not received from the external electronic device or in response to identifying, from the response message received from the external electronic device, that at least a portion of the first data frame has not been received by the external electronic device, determine that there is a missing frame.

According to various embodiments, the at least one processor may be further configured to identify whether the missing frame is able to be transmitted to the external electronic device within the first TWT service period, and, in response to identifying that the missing frame is unable to be transmitted to the external electronic device within the first TWT service period, control the communication circuit to transmit a TWT information frame including information for configuring a second period to the external electronic device in the next TWT service period.

According to various embodiments, the information for configuring the second period may include information indicating the starting time of the second period.

The electronic device according to various embodiments may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to an embodiment of the disclosure, the electronic devices are not limited to those described above.

It should be appreciated that various embodiments of the disclosure and the terms used therein are not intended to limit the technological features set forth herein to particular embodiments and include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the description of the drawings, similar reference numerals may be used to refer to similar or related elements. It is to be understood that a singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as “A or B,” “at least one of A and B,” “at least one of A or B,” “A, B, or C,” “at least one of A, B, and C,” and “at least one of A, B, or C,” may include any one of, or all possible combinations of, the items enumerated together in a corresponding one of the phrases. As used herein, such terms as “1st” and “2nd,” or “first” and “second” may be used to simply distinguish a corresponding component from another, and do not limit the components in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term “operatively” or “communicatively”, as “coupled with,” “coupled to,” “connected with,” or “connected to” another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element.

As used in connection with various embodiments of the disclosure, the term “module” may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, “logic,” “logic block,” “part,” or “circuitry”.
A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment, the module may be implemented in a form of an application-specific integrated circuit (ASIC).

Various embodiments as set forth herein may be implemented as software (e.g., the program140) including one or more instructions that are stored in a storage medium (e.g., internal memory136or external memory138) that is readable by a machine (e.g., the electronic device101). For example, a processor (e.g., the processor120) of the machine (e.g., the electronic device101) may invoke at least one of the one or more instructions stored in the storage medium, and execute it. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include a code generated by a compiler or a code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Here, the term “non-transitory” simply means that the storage medium is a tangible device, and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium.

According to an embodiment, a method according to various embodiments of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., Play Store™), or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server.

According to various embodiments, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities, and some of the multiple entities may be separately disposed in different components. According to various embodiments, one or more of the above-described components or operations may be omitted, or one or more other components or operations may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In such a case, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. According to various embodiments, operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added.

Although the present disclosure has been described with various embodiments, various changes and modifications may be suggested to one skilled in the art.
It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims.
DETAILED DESCRIPTION

In the following description, for purposes of explanation and not limitation, specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details.

While, for example, the following description focuses on exemplary NN implementations, the present disclosure is not limited in this regard. For example, any machine learning approach that is based on a generator-discriminator approach could be implemented to fulfil the same purpose. Moreover, while the present disclosure will be explained in the context of exemplary types of CM parameters and RAN characteristic parameters, it will be readily apparent that other parameter types may be used as well. Additionally, the present disclosure is not limited to recommending cell-level configurations, although some of the following embodiments will be discussed in this specific context.

Those skilled in the art will further appreciate that the steps, services and functions explained herein may be implemented using individual hardware circuits, using software functioning in conjunction with a programmed microprocessor or general purpose computer, using one or more Application Specific Integrated Circuits (ASICs) and/or using one or more Digital Signal Processors (DSPs). It will also be appreciated that when the present disclosure is described in terms of a method, it may also be embodied in one or more processors and one or more memories coupled to the one or more processors, wherein the one or more memories store one or more computer programs that perform the steps, services and functions disclosed herein when executed by one or more processors.

In the following description of exemplary embodiments, the same reference numerals denote the same or similar components.

The present disclosure provides, inter alia, a machine learning-based recommendation approach for RAN configurations (as defined, for example, by a combination of multiple CM parameter values that optimize RAN operation). In general, machine learning-based recommendation approaches provide, or recommend, items to users based on predicted ratings of the users on the items, and the prediction is done by analyzing known relationships between the users and the items. Such recommendation processes are based on the insight that two or more users who have given similar ratings on one or more items tend to also give a similar rating to an unrated or new item. For this reason, ratings can be predicted for relationships between a user and unrated items based on the ratings of other users with similar rating preferences.

Recommendation approaches have previously been applied for item recommendation to customers in online shopping, article recommendation to news subscribers, book recommendation to readers, and so on. The task is to recommend new items (e.g., books) to users by predicting the users' ratings on unrated items. A common approach is collaboration-based, which gives ratings based on user/item interactions. The interaction can be represented by a user/item matrix as shown inFIG.1, in which the number in a given field represents the rating of a user on an item. As an example, user User1has rated item Item2with a rating of “4”. Matrices of the type illustrated inFIG.1are usually sparse as a user usually only interacts with a small subset of all items.
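To make the user/item interaction structure concrete, the following small Python example builds a toy rating matrix of the kind shown inFIG.1, with NaN entries marking unobserved user/item pairs. The dimensions and all values apart from the User1/Item2 rating mentioned above are invented for illustration.

```python
import numpy as np

# 4 users (rows) x 4 items (columns); NaN marks unobserved interactions.
ratings = np.full((4, 4), np.nan)
ratings[0, 1] = 4.0   # User1 rated Item2 with "4", as in the FIG.1 example
ratings[1, 2] = 3.0   # invented additional observations
ratings[3, 0] = 5.0

observed = ~np.isnan(ratings)
print(f"sparsity: {1.0 - observed.mean():.0%}")  # most pairs are unobserved
```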
Techniques have thus been developed to cope with the sparse nature of the data, such as matrix factorization with deep learning. Evidently, recommendation accuracy improves as more rating data becomes available.

In some implementations of the present disclosure, it is suggested to model CM parameter optimization as a recommendation problem. In the scenario illustrated inFIG.1, for example, different types of RANs or RAN portions (e.g., RAN cells) could take the role of users, CM parameter values could take the role of items, and performance indicator (e.g., key performance indicator, KPI) values could take the role of item ratings. The different types of RANs or RAN portions could be defined using different RAN characteristic parameter values. Eventually, the task will be to find the CM parameter values giving the best performance for a certain type of cell (or other RAN portion or a RAN as a whole).

Since in live mobile networks most cells only experience very few configuration changes, the typical user/item (cell type/CM parameter) matrix for CM recommendation will be very sparse. The performance of a recommender system based on such a sparse matrix is expected to be poor, especially for unobserved CM parameter settings. One may thus think of “enriching” data structures of the type illustrated inFIG.1, or similar data structures, with synthetic data that have been generated based on the available non-synthetic, or authentic, data.

Most machine learning (ML) algorithms (e.g., for a user recommendation problem) require a large amount of data to work properly. In this context, modelling-based approaches such as generative adversarial networks (GANs) can be used for synthetic data generation. A GAN can be used to create supplementary synthetic data that belongs to the same (or at least similar) distribution as the true non-synthetic data. GAN technology is exemplarily discussed in Goodfellow et al., “Generative Adversarial Nets” (https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf).

A GAN, as usable in some implementations of the present disclosure, contains a generator network usually receiving a noise input z as input and outputting synthetic data in the form of a vector {tilde over (x)}, and a discriminator network that tries to estimate the probability whether a given input of either non-synthetic data x or synthetic data {tilde over (x)} belongs to a true distribution p(x) or a synthetic data distribution p({tilde over (x)}). These two networks compete against each other during training, the discriminator trying to detect the output of the generator as synthetic and the generator focusing on fooling the discriminator into classifying the synthetic data as non-synthetic. The generator network typically never gets to see any true (i.e., non-synthetic) data, but it updates its weights based on the output of the discriminator. At first, both networks will perform very poorly, but because they are adversaries playing a zero-sum game, each trying to “outsmart” the other, they are forced to improve whenever their opponent improves, until at the end (in theory) the generator exactly reproduces the true data distribution and the discriminator is guessing at random, unable to find any difference.

Conditional GANs (cGAN) enhance the original GAN by adding conditional values c as side information to the generator for generating data that satisfy the conditional values. cGAN technology is exemplarily discussed in M.
Mirza et al., “Conditional Generative Adversarial Nets” (https://arxiv.org/abs/1411.1784) and T. Miyato et al. “cGANs with projection discriminator” (published as a conference paper at ICLR 2018, see https://openreview.net/forum?id=ByS1VpgRZ). The conditional values c are also added to the discriminator input to distinguish whether the generated data satisfies the conditional values. The following embodiments focus on cGAN technology, although GAN technology could be used as well.

J. Yoon et al. in “GAIN: Missing Data Imputation using Generative Adversarial Nets” (http://proceedings.mlr.press/v80/yoon18a/yoon18a.pdf) propose an approach to derive missing data using a GAN, wherein the generator observes some components of a vector of authentic (i.e., non-synthetic) data, derives the missing components conditioned on what is actually observed, and outputs a completed vector. The discriminator then takes the completed vector and attempts to determine which components were actually observed and which were derived.

Traditional GAN approaches such as described by J. Yoon et al. mainly focus on continuous data generation, such as for example pictures and audio signals. However, the data of interest in the present context (i.e., CM parameter values together with RAN characteristic values on, e.g., a cell level) inherently has a discrete property. For example, a certain CM parameter may only have few integral candidates to select from (such as 0 and 1; 1, 2, 3 and 4; 10, 100, 1000; etc.). Applying the original GAN or cGAN framework to this kind of discrete data generation will typically not be feasible because the generator needs to draw discrete sample values and this sampling step makes the loss function of the generator undifferentiable.

J. Wang et al. in “Irgan: A minimax game for unifying generative and discriminative information retrieval models” (https://arxiv.org/pdf/1705.10513.pdf) propose using the mini-max game framework to train Information Retrieval (IR) systems, including recommender systems. The requirement underlying the present disclosure is, however, different in that the focus is to generate unobserved (“synthetic”) data to improve performance of a recommender system, whereas J. Wang et al. intend to select relevant items from a given item pool.

In the present context that will be described in greater detail below, a modified GAN or cGAN is proposed that uses reinforcement techniques to train the generator network in order to overcome the problem of non-differentiable discrete values.

In the following, embodiments of generating synthetic data for RAN configuration recommendation will be presented. FIGS.2A and2Billustrate two embodiments of a synthetic data generation apparatus200.

In the embodiment illustrated inFIG.2A, the synthetic data generation apparatus200comprises a processor202and a memory204coupled to the processor202. The synthetic data generation apparatus200further comprises an optional input interface206and an optional output interface208. The memory204stores program code that controls operations executed by the processor202. The processor202is adapted to obtain, via the input interface206, a noise input. The processor202is also configured to generate, using a trained generative machine learning model, synthetic data from the noise input. The processor202is still further configured to output, via the output interface208, the synthetic data.

FIG.2Bshows an embodiment in which the synthetic data generation apparatus200is implemented in a modular configuration.
As shown inFIG.2B, the synthetic data generation apparatus200comprises an obtaining module210configured to obtain noise input, a generation module212configured to generate synthetic data from the noise input, and an outputting module214configured to output the synthetic data.

FIGS.3A and3Billustrate two embodiments of a training apparatus300. In the embodiment illustrated inFIG.3A, the training apparatus300comprises a processor302and a memory304coupled to the processor302. The training apparatus300further comprises an optional input interface306. The memory304stores program code that controls operations executed by the processor302. The processor302is adapted to obtain, via the input interface306, a noise input, non-synthetic data comprising non-synthetic CM parameter values, non-synthetic RAN characteristic parameter values as well as non-synthetic performance indicator values. The processor302is further configured to generate, using a generative machine learning model, synthetic data from the noise input, the synthetic data comprising at least one of one or more synthetic CM parameter values, one or more synthetic RAN characteristic parameter values and one or more synthetic performance indicator values. The processor302is also configured to distinguish between an input of either the synthetic data or the non-synthetic data, using a discriminative machine learning model, by classifying the input into a first predicted class of either a class of non-synthetic data or a class of synthetic data. The processor302is further configured to update the discriminative machine learning model by minimizing an error based on a deviation between the first predicted class and a true class associated with the input. The processor302is also configured to update the generative machine learning model by maximizing the error.

FIG.3Bshows an embodiment in which the training apparatus300is implemented in a modular configuration. As shown inFIG.3B, the training apparatus300comprises a first obtaining module308configured to obtain the noise input, a second obtaining module310configured to obtain the non-synthetic data, a third obtaining module312configured to obtain an input of either the synthetic data or the non-synthetic data and a true class, a generating module314configured to generate the synthetic data, a classifying module316configured to classify the input into the first predicted class, a first updating module318configured to update the discriminative machine learning model, and a second updating module320configured to update the generative machine learning model.

FIG.4illustrates in a flow diagram400a method embodiment of the present disclosure. The method embodiment ofFIG.4may be performed by any of the synthetic data generation apparatus embodiments ofFIGS.2A and2B. InFIG.4the synthetic data generation apparatus200initially obtains in step S402a noise input. It will in the following be assumed that the noise input is received in the form of a vector (i.e., as a noise vector).

In step S404, the synthetic data generation apparatus200generates, using a trained generative machine learning model, synthetic data from the noise vector. Step S404can be performed by a generator of a cGAN. The synthetic data comprises at least one of one or more synthetic CM parameter values, one or more synthetic RAN characteristic parameter values and one or more synthetic performance indicator values.
The generative machine learning model has been trained together with a discriminative machine learning model as adversaries based on non-synthetic data associating non-synthetic configuration management, CM, parameter values, non-synthetic RAN characteristic parameter values and non-synthetic performance indicator values. Each non-synthetic performance indicator value indicates a performance for a given RAN configuration as defined by one or more of the non-synthetic CM parameter values and a given RAN characteristic as defined by one or more of the non-synthetic RAN characteristic parameter values. The synthetic data, in the same form as the non-synthetic data, comprise at least one of one or more synthetic CM parameter values, one or more synthetic RAN characteristic parameter values and one or more synthetic performance indicator values. As understood here, the term “in the same form” may refer to the same data format as the non-synthetic data input.

Further, in step S406, the synthetic data generation apparatus200outputs the synthetic data as an input for a machine learning process. The machine learning process will recommend RAN configurations (e.g., in terms of CM parameter values).

FIG.5illustrates in a flow diagram500a further method embodiment of the present disclosure. The method embodiment ofFIG.5may be performed by any of the training apparatus embodiments ofFIGS.3A and3Bto train a generative machine learning model that outputs synthetic data as an input for a machine learning process that recommends RAN configurations. Here, the generative machine learning model is trained together with a discriminative machine learning model as adversaries.

InFIG.5, the training apparatus300initially obtains in step S502a noise input in the form of a noise vector z. The training apparatus300also obtains, in step S504, non-synthetic data comprising non-synthetic CM parameter values, non-synthetic RAN characteristic parameter values and non-synthetic performance indicator values. In more detail, the non-synthetic data associate CM parameter values, RAN characteristic parameter values and performance indicator values, wherein each performance indicator value indicates a performance for a given RAN configuration as defined by one or more of the non-synthetic CM parameter values and a given RAN characteristic as defined by one or more of the non-synthetic RAN characteristic parameter values.

In step S506, the training apparatus300generates, using a generative machine learning model, synthetic data from the noise vector. The synthetic data comprises, in the same form as the non-synthetic data, at least one of one or more synthetic CM parameter values, one or more synthetic RAN characteristic parameter values and one or more synthetic performance indicator values. Step S506can be performed by a generator of a cGAN.

In step S508, the training apparatus300obtains an input of either the synthetic data or the non-synthetic data and a corresponding true class and then, in step S510, classifies the input, using the discriminative machine learning model, into a first predicted class of either a class of synthetic data or a class of non-synthetic data. Also in step S510, the training apparatus300updates the discriminative machine learning model by minimizing an error based on a deviation between the first predicted class and the true class and, in step S512, updates the generative machine learning model by maximizing the error of the discriminative machine learning model.
In more detail, step S512may comprise maximizing a probability of the synthetic data being predicted as a class of non-synthetic data by the discriminative machine learning model.

FIG.6illustrates in a flow chart600a more detailed method embodiment of the present disclosure that can be based on the general aspects discussed above with reference toFIGS.1to5. The embodiment will be described in the exemplary context of a cell-level implementation. It will be appreciated that the present disclosure could also be implemented on other RAN levels.

In an initial step S602data preparation takes place. Step S602may precede steps S502and S504inFIG.5or may be performed during these steps. Data preparation includes obtaining authentic (i.e., non-synthetic) training data for the training apparatus300and optional pre-processing of the training data. Values (including settings) for CM parameters, cell characteristic parameters and cell-level performance indicators are obtained from one or multiple live mobile networks or otherwise (e.g., from a test environment or by simulation).

In a matrix representation derived fromFIG.1, cell types as defined by values of one or more cell characteristic parameters (including cell characteristic parameter combinations) may represent users, CM parameter configurations as defined by values of CM parameters (including CM parameter combinations) may represent items, and KPI or other performance values may represent ratings. As an example, an individual cell type (“user”) may be represented by a first value for a first cell characteristic parameter and a second value for a second cell characteristic parameter. In a similar manner, an individual CM parameter configuration (“item”) may be represented by a first value for a first CM parameter and a second value for a second CM parameter. In the context of the present disclosure, a value can be indicative of a discrete value (e.g.,10) or of a range of values (e.g., the starting or end point of a range of values). Data pre-processing such as dimension reduction and embedding techniques may also be performed in step S602as needed.

Then, in step S604, the model of the training apparatus300(e.g., a cGAN or similar NN-based model) is trained with the (optionally pre-processed) training data obtained in step S602. Step S604may correspond to the procedure illustrated in the flow diagram ofFIG.5. After the training is completed, the trained cGAN is used as the synthetic data generation apparatus200.

In step S606, which may correspond to the procedure illustrated in the flow diagram ofFIG.4, the trained synthetic data generation apparatus200is operated to generate synthetic data. As explained above, the synthetic data include one or more synthetic CM parameter values, one or more synthetic cell characteristic parameter values, and/or one or more synthetic performance indicator values. Optionally, a classification of the synthetic data might take place, using a discriminator of the cGAN, in order to determine whether the synthetic data can be regarded as realistic or not and, therefore, to determine whether the synthetic data can actually be used as input for the machine learning process.

In step S608, the non-synthetic data for training the cGAN are enhanced with the synthetic data. The result may correspond to an extended matrix representation (similar toFIG.1), in which not only the non-synthetic training data obtained in step S602are included, but also the synthetic data generated in step S606.
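For readers who prefer code, the adversarial procedure ofFIG.5(as used in step S604above) can be sketched as follows. PyTorch is used purely as an illustrative framework (the disclosure does not prescribe one), the network sizes and learning rates are arbitrary, and the reinforcement-based handling of discrete generator outputs discussed earlier is omitted for brevity.

```python
import torch
import torch.nn as nn

Z_DIM, C_DIM, X_DIM = 8, 1, 16  # illustrative noise, condition, data sizes

# Generator G maps (z, c) to a synthetic vector x~; discriminator D scores
# (x, c) pairs. Both are deliberately tiny stand-ins for the DNNs of FIG. 8.
G = nn.Sequential(nn.Linear(Z_DIM + C_DIM, 64), nn.ReLU(), nn.Linear(64, X_DIM))
D = nn.Sequential(nn.Linear(X_DIM + C_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(x: torch.Tensor, c: torch.Tensor) -> None:
    """One adversarial update with non-synthetic samples x and conditions c."""
    z = torch.randn(x.size(0), Z_DIM)
    x_fake = G(torch.cat([z, c], dim=1))

    # Discriminator update: minimize the classification error between the
    # predicted class and the true class (non-synthetic vs. synthetic).
    d_real = D(torch.cat([x, c], dim=1))
    d_fake = D(torch.cat([x_fake.detach(), c], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: maximize the probability that synthetic data is
    # classified as non-synthetic, i.e., maximize the discriminator's error.
    d_fake = D(torch.cat([G(torch.cat([z, c], dim=1)), c], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

A full implementation along the lines of the disclosure would additionally sample discrete CM values from the generator output and, as noted above, train the generator with a policy gradient to cope with the non-differentiable sampling step.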
In step S610, both the generated synthetic data and the non-synthetic data are used to train the recommender, i.e., the actual ML process. The concrete details of the recommender algorithm are of minor importance. For instance, a collaborative filtering algorithm could be applied. To this end, matrix factorization techniques may be applied by the ML process. Moreover, still in step S610, the trained recommender system (i.e., potential candidates for CM parameter value recommendations) may be evaluated on some or all of the non-synthetic data obtained in step S602. The synthetic data might be excluded from the evaluation process so as not to falsify the evaluation result.

In a further step S612, the trained ML process recommends CM parameter values (including parameter settings) for a given RAN. The CM parameter settings may be recommended such that one or more KPIs of the given RAN are improved (e.g., optimized). In a still further step not shown inFIG.6, the RAN (e.g., one or more cells thereof) may be configured in accordance with the recommended CM parameter values.

In the following, the various steps illustrated inFIG.6will be discussed in greater detail with reference toFIGS.7to11. In the following discussion, the synthetic data generation apparatus200applies a modified cGAN to generate synthetic data, which will be used as additional training data to train a CM parameter recommender system such as a ML process.

In a more detailed implementation of step S602, a set of authentic data samples, x, and labels of the samples (i.e., conditional values), c, are obtained and prepared as input for cGAN training. In one variant, x is a vector comprising or consisting of CM parameter values and cell characteristic parameter values, and c corresponds to one or more performance indicator values associated with the CM parameter values and the cell characteristic parameter values. The performance indicator values have been observed (e.g., measured) from one or more live mobile networks. Alternatively, or in addition, the performance indicator values are observed (e.g., measured) in a testing (e.g., laboratory) environment.

Examples of CM parameters that are configurable during RAN operation include physical uplink control channel (PUCCH) power boost, service specific discontinuous reception (DRX), and so on. In one variant, dimension reduction techniques, such as principal component analysis (PCA), are applied to one or more CM parameters to reduce the dimension of x. In another variant, embedding is applied to some CM parameters to convert categorical parameter settings (e.g., activated/deactivated or on/off) to numerical values (e.g., 0/1 or 1/2).

Examples of cell characteristic parameters (as representations of RAN characteristic parameters) include path loss statistics, cell load statistics, inter-site distances, uplink interference statistics, and so on. These parameters may be measured, or calculated from measurements, to obtain associated cell characteristic parameter values. The cell characteristic parameters are used to classify, or categorize, cells into cell types (i.e., to define cell types). Cell types may be defined based on a single cell characteristic parameter, for example by assigning a dedicated value of that cell characteristic parameter or a dedicated range of values to a specific cell type.
Of course, different cell types can also be defined based on multiple cell characteristic parameters, wherein each cell type will then be associated with a dedicated value or dedicated range of values of each individual cell characteristic parameter.

Examples of performance indicators include aggregated user experience measures, user radio throughput statistics, signal to interference and noise ratio (SINR) statistics (e.g., cell average uplink SINR) and so on. The performance indicators may take the form of key performance indicators (KPIs). The performance indicators may be obtained on the basis of measurements.

FIG.7exemplifies a CM recommendation matrix700that could form the basis for cGAN training. In this example, users and items in a conventional recommendation matrix (seeFIG.1) are represented by cell types and RAN configurations (here: CM parameter value combinations of two CM parameters), respectively. The ratings are represented by cell average uplink SINR measured in dB. Empty spaces mean that such CM combinations have not been seen for the corresponding cell type. For ease of explanation, the example only includes two CM parameters which have two configurable values: 1 and 2. Each value combination of these two CM parameters corresponds to a possible RAN (or more precisely: cell) configuration. In other examples, only one CM parameter or more than two parameters may need to be configured per cell, and the number of matrix columns may respectively decrease or increase. In the example ofFIG.7, different cell types may be defined by different cell load statistics value ranges (or values of other/additional cell characteristic parameters).

With the increase of the number of configurable CM parameters in 5G and other mobile networks, and the extension of value ranges, any resulting matrix representation will become very sparsely populated since only limited CM parameter combinations will be configured in live mobile networks. Even with state-of-the-art matrix factorization methods, such as deep matrix factorization, recommendation performance is expected to be poor given the resulting sparsely populated matrix. Therefore, in step S604, the cGAN will be trained to generate synthetic data to supplement the available training samples as exemplarily shown inFIG.7.

FIG.8illustrates an exemplary cGAN-based synthetic data generation apparatus800comprising a generator802(or “G”) and a discriminator804(or “D”). Each of the generator802and discriminator804is built from one or multiple NNs, such as deep NNs (DNN). In more detail, the generator802is built from a single DNN806. Moreover, the discriminator804is exemplarily built from three NNs808,810,812, wherein the NN808is configured as a DNN. This type of discriminator design achieves fast and stable learning (see T. Miyato et al. “cGANs with Projection Discriminator”, published as a conference paper at ICLR 2018; https://openreview.net/forum?id=ByS1VpgRZ). It will be appreciated that many alternative NN realizations of the generator802and discriminator804are feasible, including building same from a single NN (e.g., a single DNN).

DNN806of generator802has two input parameters, noise input received in the form of a vector z and conditional values c, and one output of the synthetic data {tilde over (x)}.
DNN808of discriminator804illustrates two inputs of the non-synthetic data x and the synthetic data {tilde over (x)}; however, a single input of either the non-synthetic data x or the synthetic data {tilde over (x)} would be conceivable depending on the implementation. Another input of the conditional values c is fed into the discriminator804, in particular NN812, in order to determine whether the input of DNN808matches with the conditional values c.

The noise vector z follows a certain distribution, for example a normal distribution. Moreover, the noise vector z may be a latent space representation of x. The dimension of z is a hyper-parameter to tune; normally it is smaller than the length of x. The conditional values c are a scalar value or a vector and constitute the conditional information added to both the cGAN generator802and the cGAN discriminator804as input.

The generator output {tilde over (x)} is a vector in the form of a concatenation of synthetic CM parameter values, synthetic cell characteristic parameter values and synthetic performance indicator values with the same length and in the same form as x. The goal of the generator802is to “fool” the discriminator804with synthetic data. The discriminator804has two tasks: 1) determine if {tilde over (x)} is realistic or not, and 2) determine if the ({tilde over (x)}, c) pair is matched or not. The discriminator804might output a first probability p{tilde over (x)}indicative of its confidence whether {tilde over (x)} is realistic or not and a second probability pcindicative of its confidence whether the ({tilde over (x)}, c) pair is matched or not.

The generator802and discriminator804may be trained in step S604using an adversarial process with stochastic gradient ascent. In this context, a policy gradient can be employed to generate discrete data using reinforcement techniques, see J. Wang et al. “Irgan: A minimax game for unifying generative and discriminative information retrieval models”, https://arxiv.org/pdf/1705.10513.pdf. The underlying cGAN training algorithm is illustrated inFIG.9, where G stands for the generator802and D stands for the discriminator804.

After the cGAN-based synthetic data generation apparatus800has been trained in step S604, it can generate, in step S606, synthetic data {tilde over (x)} given z and c as input, as shown inFIG.10. A first condition (e.g., threshold decision) can be defined for p{tilde over (x)}to determine whether or not the trained cGAN-based synthetic data generation apparatus800considers {tilde over (x)} as realistic and therefore similar to the authentic data, and/or a second condition (e.g., threshold decision) can be defined for pcto determine whether or not the trained cGAN-based synthetic data generation apparatus800considers the ({tilde over (x)}, c) pair as matching or not.

The synthetic data output by the cGAN-based synthetic data generation apparatus800is used to enhance, or supplement, a training data set to train the ML process that recommends cell configurations (see steps S608and S610). In this context,FIG.11illustrates the matrix example ofFIG.7with supplemented synthetic data (indicated in bold and italic font). As can be seen from the supplemented matrix1100, one or more of synthetic cell configurations (i.e., CM parameter values; new column), synthetic cell types (i.e., cell characteristic parameter values or value ranges; new row) and synthetic performance indicator values can generally be supplemented.
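The generation step S606with the confidence conditions on p{tilde over (x)} and pccan be sketched as below, continuing the PyTorch-style example from above. The assumption that a single discriminator head returns both probabilities jointly, and the 0.5 thresholds, are illustrative choices made for this sketch, not values taken from the disclosure.

```python
import torch

Z_DIM = 8  # noise dimension, as in the training sketch above

@torch.no_grad()
def generate_synthetic(G, D, c: torch.Tensor, n: int,
                       thr_real: float = 0.5, thr_match: float = 0.5):
    """Draw n candidate vectors x~ for condition c and keep confident ones."""
    z = torch.randn(n, Z_DIM)                      # noise input z
    cond = c.expand(n, -1)                         # repeat conditional values c
    x_tilde = G(torch.cat([z, cond], dim=1))       # synthetic data x~
    logits = D(torch.cat([x_tilde, cond], dim=1))  # assumed two-output head
    p_real, p_match = torch.sigmoid(logits).unbind(dim=1)
    keep = (p_real > thr_real) & (p_match > thr_match)
    return x_tilde[keep]
```

Samples failing either condition would be discarded rather than added to the training matrix, which is one way the empty spaces mentioned next can remain.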
Some spaces are still left empty, for example because of associated low confidence measures. When an output of the ML process is evaluated, only authentic data is used so that the evaluation is not biased by the synthetic data. Evaluation may include analyzing candidates for possible CM parameter value recommendations by the ML process and discarding some or all of the candidates based on a quality criterion.

Then, in step S612, the trained ML process is used to recommend CM parameter values for a specific cell configuration. In one variant, a CM parameter value combination giving the best predicted rating (e.g., KPI) value is recommended.

In the above embodiment, synthetic data comprising CM parameter values, cell characteristic parameter values and KPI values is generated using a modified cGAN. The synthetic data is used to enhance a training data set to train the ML process. The result is a more accurate recommendation of a cell configuration for a given RAN.

The proposed technique provides a solution to a key challenge when building a per-RAN or per-cell CM recommender system. By generating synthetic data from non-synthetic data, the recommender system can give recommendations on a wider range of CM parameter settings with higher accuracy. Since the synthetic data can be generated offline using an ML model, it reduces risk, time and cost compared with trying the new settings in a live mobile network. The resulting CM recommender system is an important function for automated network optimization and zero-touch network management.
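As a concluding illustration, the recommender training of step S610can be approximated with a deliberately simple matrix factorization in place of the deep matrix factorization mentioned above. The matrix values (for example, uplink SINR in dB, as inFIG.7) and all hyper-parameters are invented for the example.

```python
import numpy as np

def factorize(R, rank=2, steps=2000, lr=0.01, reg=0.1, seed=0):
    """Learn cell-type and configuration factors from observed entries of R."""
    rng = np.random.default_rng(seed)
    n_types, n_cfgs = R.shape
    U = rng.normal(scale=0.1, size=(n_types, rank))  # cell-type factors
    V = rng.normal(scale=0.1, size=(n_cfgs, rank))   # CM-configuration factors
    obs = ~np.isnan(R)
    for _ in range(steps):
        E = np.where(obs, R - U @ V.T, 0.0)          # error on observed cells only
        U += lr * (E @ V - reg * U)                  # gradient steps with
        V += lr * (E.T @ U - reg * V)                # L2 regularization
    return U @ V.T                                   # predicted "ratings"

# Toy enriched matrix: rows are cell types, columns are CM configurations.
R = np.array([[np.nan, 20.0,  np.nan],
              [18.0,   np.nan, 25.0],
              [np.nan, 22.0,  np.nan]])
pred = factorize(R)
best_cfg = int(np.argmax(pred[0]))  # step S612: best configuration for type 0
print(f"recommended CM configuration for cell type 0: column {best_cfg}")
```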
DETAILED DESCRIPTION

The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

In wireless communication systems, the path loss of a signal (i.e., the reduction in power density (attenuation) of an electromagnetic wave as it propagates through space) can be undesirably high and range may be limited. Beamforming is a technique that may be used to direct or concentrate the wireless signal in a desired direction to mitigate path loss and/or extend communication range. For a beamformed transmission, the amplitude and phase of each antenna in an array of antennas may be precoded, or controlled, to create a desired (i.e., directional) pattern of constructive and destructive interference in a wavefront for a transmission. A beam may provide more energy in a certain direction to the receiver.

A base station may transmit one or more beam reference signals by sweeping in all directions so that a user equipment (UE) may identify a best “coarse” beam. Furthermore, the base station may transmit a beam refinement request signal so that the UE may track “fine” beams. If a “coarse” beam identified by the UE changes, the UE may inform the base station so that the base station may train one or more new “fine” beams for the UE.

In some examples, when the UE can no longer “see” or loses the current beam, this is referred to as a beam failure. The UE may determine that the current beam experiences a beam failure when the signal quality or strength of the beam is below a predetermined threshold or not detected at all. In a beam failure recovery process, the UE may transmit a beam failure recovery request to the base station. The beam failure recovery request may indicate a new beam (e.g., the best “coarse” beam) detected by the UE from a set of beams that are periodically transmitted by the base station. The base station and UE may use the new beam to replace the current beam to maintain communication.

Various aspects of the disclosure are directed towards determining which of a plurality of uplink resources a UE should select for transmitting a beam failure recovery request. In some examples, these resources are selected according to a particular network configuration transmitted to the UE. Moreover, the disclosed aspects include aspects directed towards various network-based configurations of a UE, which provide the UE with rules for selecting particular uplink resources to utilize for transmitting a beam failure recovery request.

The various concepts presented throughout this disclosure may be implemented across a broad variety of telecommunication systems, network architectures, and communication standards. Referring now toFIG.1, as an illustrative example without limitation, various aspects of the present disclosure are illustrated with reference to a wireless communication system100. The wireless communication system100includes three interacting domains: a core network102, a radio access network (RAN)104, and a user equipment (UE)106.
By virtue of the wireless communication system100, the UE106may be enabled to carry out data communication with an external data network110, such as (but not limited to) the Internet.

The RAN104may implement any suitable wireless communication technology or technologies to provide radio access to the UE106. As one example, the RAN104may operate according to 3rd Generation Partnership Project (3GPP) New Radio (NR) specifications, often referred to as 5G. As another example, the RAN104may operate under a hybrid of 5G NR and Evolved Universal Terrestrial Radio Access Network (eUTRAN) standards, often referred to as LTE. The 3GPP refers to this hybrid RAN as a next-generation RAN, or NG-RAN. Of course, many other examples may be utilized within the scope of the present disclosure.

As illustrated, the RAN104includes a plurality of base stations108. Broadly, a base station is a network element in a radio access network responsible for radio transmission and reception in one or more cells to or from a UE. In different technologies, standards, or contexts, a base station may variously be referred to by those skilled in the art as a base transceiver station (BTS), a radio base station, a radio transceiver, a transceiver function, a basic service set (BSS), an extended service set (ESS), an access point (AP), a Node B (NB), an eNode B (eNB), a gNode B (gNB), or some other suitable terminology.

The radio access network104is further illustrated supporting wireless communication for multiple mobile apparatuses. A mobile apparatus may be referred to as user equipment (UE) in 3GPP standards. And in some cases, a mobile apparatus may also be referred to as a mobile station (MS), a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communications device, a remote device, a mobile subscriber station, an access terminal (AT), a mobile terminal, a wireless terminal, a remote terminal, a handset, a terminal, a user agent, a mobile client, a client, or some other suitable terminology. A UE can be an apparatus that provides a user with access to network services. Within the present document, a “mobile” apparatus need not necessarily have a capability to move, and may be stationary. The term mobile apparatus or mobile device broadly refers to a diverse array of devices and technologies. UEs may include a number of hardware structural components sized, shaped, and arranged to help in communication; such components can include antennas, antenna arrays, RF chains, amplifiers, one or more processors, etc. electrically coupled to each other. For example, some non-limiting examples of a mobile apparatus include a mobile, a cellular (cell) phone, a smart phone, a session initiation protocol (SIP) phone, a laptop, a personal computer (PC), a notebook, a netbook, a smartbook, a tablet, a personal digital assistant (PDA), and a broad array of embedded systems, e.g., corresponding to an “Internet of things” (IoT). A mobile apparatus may additionally be an automotive or other transportation vehicle, a remote sensor or actuator, a robot or robotics device, a satellite radio, a global positioning system (GPS) device, an object tracking device, a drone, a multi-copter, a quad-copter, a remote control device, a consumer and/or wearable device, such as eyewear, a wearable camera, a virtual reality device, a smart watch, a health or fitness tracker, a digital audio player (e.g., MP3 player), a camera, a game console, etc.
A mobile apparatus may additionally be a digital home or smart home device such as a home audio, video, and/or multimedia device, an appliance, a vending machine, intelligent lighting, a home security system, a smart meter, etc. A mobile apparatus may additionally be a smart energy device, a security device, a solar panel or solar array, a municipal infrastructure device controlling electric power (e.g., a smart grid), lighting, water, etc.; an industrial automation and enterprise device; a logistics controller; agricultural equipment; military defense equipment, vehicles, aircraft, ships, and weaponry, etc. Still further, a mobile apparatus may provide for connected medicine or telemedicine support, e.g., health care at a distance. Telehealth devices may include telehealth monitoring devices and telehealth administration devices, whose communication may be given preferential treatment or prioritized access over other types of information, e.g., in terms of prioritized access for transport of critical service data, and/or relevant QoS for transport of critical service data.

Wireless communication between a RAN104and a UE106may be described as utilizing an air interface. Transmissions over the air interface from a base station (e.g., base station108) to one or more UEs (e.g., UE106) may be referred to as downlink (DL) transmissions. In accordance with certain aspects of the present disclosure, the term downlink may refer to a point-to-multipoint transmission originating at a scheduling entity (described further below; e.g., base station108). Another way to describe this scheme may be to use the term broadcast channel multiplexing. Transmissions from a UE (e.g., UE106) to a base station (e.g., base station108) may be referred to as uplink (UL) transmissions. In accordance with further aspects of the present disclosure, the term uplink may refer to a point-to-point transmission originating at a scheduled entity (described further below; e.g., UE106).

In some examples, access to the air interface may be scheduled. A scheduling entity (e.g., a base station108) can allocate resources for communication among some or all devices and equipment within its service area or cell. Within the present disclosure and in some scenarios, as discussed further below, a scheduling entity may be responsible for scheduling, assigning, reconfiguring, and releasing resources for one or more scheduled entities. That is, for scheduled communication, UEs106, which may be scheduled entities, may utilize resources allocated by the scheduling entity108. Base stations108are not the only entities that may function as scheduling entities. That is, in some examples, a UE may function as a scheduling entity, scheduling resources for one or more scheduled entities (e.g., one or more other UEs).

As illustrated inFIG.1, a scheduling entity108may broadcast downlink traffic112to one or more scheduled entities106. Broadly, the scheduling entity108is a node or device responsible for scheduling traffic in a wireless communication network, including the downlink traffic112and, in some examples, uplink traffic116from one or more scheduled entities106to the scheduling entity108. On the other hand, the scheduled entity106is a node or device that receives downlink control information114, including but not limited to scheduling information (e.g., a grant), synchronization or timing information, or other control information from another entity in the wireless communication network such as the scheduling entity108.
In general, base stations108may include a backhaul interface for communication with a backhaul portion120of the wireless communication system. The backhaul120may provide a link between a base station108and the core network102. Further, in some examples, a backhaul network may provide interconnection between the respective base stations108. Various types of backhaul interfaces may be employed, such as a direct physical connection, a virtual network, or the like using any suitable transport network.

The core network102may be a part of the wireless communication system100, and may be independent of the radio access technology used in the RAN104. In some examples, the core network102may be configured according to 5G standards (e.g., a 5G Core Network designed to support throughput, latency, and mobility requirements of different service categories with the introduction of a Services Based Architecture (SBA) and Control and User Plane Separation (CUPS)). In other examples, the core network102may be configured according to a 4G evolved packet core (EPC), or any other suitable standard or configuration.

Referring now toFIG.2, by way of example and without limitation, a schematic illustration of a RAN200is provided. In some examples, the RAN200may be the same as the RAN104described above and illustrated inFIG.1. The geographic area covered by the RAN200may be divided into cellular regions (cells) that can be uniquely identified by a user equipment (UE) based on an identification broadcasted from one access point or base station.FIG.2illustrates macrocells202,204, and206, and a small cell208, each of which may include one or more sectors (not shown). A sector is a sub-area of a cell. All sectors within one cell are served by the same base station. A radio link within a sector can be identified by a single logical identification belonging to that sector. In a cell that is divided into sectors, the multiple sectors within a cell can be formed by groups of antennas with each antenna responsible for communication with UEs in a portion of the cell.

InFIG.2, two base stations210and212are shown in cells202and204; and a third base station214is shown controlling a remote radio head (RRH)216in cell206. That is, a base station can have an integrated antenna or can be connected to an antenna or RRH by feeder cables. In the illustrated example, the cells202,204, and206may be referred to as macrocells, as the base stations210,212, and214support cells having a large size. Further, a base station218is shown in the small cell208(e.g., a microcell, picocell, femtocell, home base station, home Node B, home eNode B, etc.) which may overlap with one or more macrocells. In this example, the cell208may be referred to as a small cell, as the base station218supports a cell having a relatively small size. Cell sizing can be done according to system design as well as component constraints.

The radio access network200may include any number of wireless base stations, nodes, and cells. As one example, a relay node may be deployed to extend the size or coverage area of a given cell. The base stations210,212,214,218provide wireless access points to a core network for any number of mobile apparatuses. In some examples, the base stations210,212,214, and/or218may be the same as the base station/scheduling entity108described above and illustrated inFIG.1. FIG.2further includes a quadcopter or drone220, which may be configured to function as a base station.
That is, in some examples, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a mobile base station such as the quadcopter220. Though not shown, the drone220may also be other types of vehicles, including but not limited to, high altitude crafts, aerial-based vehicles, land-based vehicles, or water-going vehicles.

Within the RAN200, the cells may include UEs that may be in communication with one or more sectors of each cell. Further, each base station210,212,214,218, and220may be configured to provide an access point to a core network102(seeFIG.1) for all the UEs in the respective cells. For example, UEs222and224may be in communication with base station210; UEs226and228may be in communication with base station212; UEs230and232may be in communication with base station214by way of RRH216; UE234may be in communication with base station218; and UE236may be in communication with mobile base station220. In some examples, the UEs222,224,226,228,230,232,234,236,238,240, and/or242may be the same as the UE/scheduled entity106described above and illustrated inFIG.1. In some examples, a mobile network node (e.g., quadcopter220) may be configured to function as a UE. For example, the quadcopter220may operate within cell202by communicating with base station210.

In a further aspect of the RAN200, sidelink signals may be used between UEs without necessarily relying on scheduling or control information from a base station. For example, two or more UEs (e.g., UEs226and228) may communicate with each other using peer to peer (P2P) or sidelink signals227without relaying that communication through a base station (e.g., base station212). In a further example, UE238is illustrated communicating with UEs240and242. Here, the UE238may function as a scheduling entity or a primary sidelink device, and UEs240and242may function as a scheduled entity or a non-primary (e.g., secondary) sidelink device. In still another example, a UE may function as a scheduling entity in a device-to-device (D2D), peer-to-peer (P2P), or vehicle-to-vehicle (V2V) network, and/or in a mesh network. In a mesh network example, UEs240and242may optionally communicate directly with one another in addition to communicating with the scheduling entity238. Thus, in a wireless communication system with scheduled access to time-frequency resources and having a cellular configuration, a P2P configuration, or a mesh configuration, a scheduling entity and one or more scheduled entities may communicate utilizing the scheduled resources.

In the radio access network200, the ability for a UE to communicate while moving, independent of its location, is referred to as mobility. The various physical channels between the UE and the radio access network are generally set up, maintained, and released under the control of an access and mobility management function (AMF, not illustrated, part of the core network102inFIG.1). Mobility features may also include a security context management function (SCMF) that manages the security context for both the control plane and the user plane functionality, and a security anchor function (SEAF) that performs authentication. In various aspects of the disclosure, a radio access network200may utilize DL-based mobility or UL-based mobility to enable mobility and handovers (i.e., the transfer of a UE's connection from one radio channel to another).
In a network configured for DL-based mobility, during a call with a scheduling entity, or at any other time, a UE may monitor various parameters of the signal from its serving cell as well as various parameters of neighboring cells. Depending on the quality of these parameters, the UE may maintain communication with one or more of the neighboring cells. During this time, if the UE moves from one cell to another, or if signal quality from a neighboring cell exceeds that from the serving cell for a given amount of time, the UE may undertake a handoff or handover from the serving cell to the neighboring (target) cell. For example, UE224(illustrated as a vehicle, although any suitable form of UE may be used) may move from the geographic area corresponding to its serving cell202to the geographic area corresponding to a neighbor cell206. When the signal strength or quality from the neighbor cell206exceeds that of its serving cell202for a given amount of time, the UE224may transmit a reporting message to its serving base station210indicating this condition. In response, the UE224may receive a handover command, and the UE may undergo a handover to the cell206.

In a network configured for UL-based mobility, UL reference signals from each UE may be utilized by the network to select a serving cell for each UE. In some examples, the base stations210,212, and214/216may broadcast unified synchronization signals (e.g., unified Primary Synchronization Signals (PSSs), unified Secondary Synchronization Signals (SSSs) and unified Physical Broadcast Channels (PBCH)). The UEs222,224,226,228,230, and232may receive the unified synchronization signals, derive the carrier frequency and slot timing from the synchronization signals, and in response to deriving timing, transmit an uplink pilot or reference signal. The uplink pilot signal transmitted by a UE (e.g., UE224) may be concurrently received by two or more cells (e.g., base stations210and214/216) within the radio access network200. Each of the cells may measure a strength of the pilot signal, and the radio access network (e.g., one or more of the base stations210and214/216and/or a central node within the core network) may determine a serving cell for the UE224. As the UE224moves through the radio access network200, the network may continue to monitor the uplink pilot signal transmitted by the UE224. When the signal strength or quality of the pilot signal measured by a neighboring cell exceeds that of the signal strength or quality measured by the serving cell, the network200may handover the UE224from the serving cell to the neighboring cell, with or without informing the UE224.

Although the synchronization signal transmitted by the base stations210,212, and214/216may be unified, the synchronization signal may not identify a particular cell, but rather may identify a zone of multiple cells operating on the same frequency and/or with the same timing. The use of zones in 5G networks or other next generation communication networks enables the uplink-based mobility framework and improves the efficiency of both the UE and the network, since the number of mobility messages that need to be exchanged between the UE and the network may be reduced.

In various implementations, the air interface in the radio access network200may utilize licensed spectrum, unlicensed spectrum, or shared spectrum. Licensed spectrum provides for exclusive use of a portion of the spectrum, generally by virtue of a mobile network operator purchasing a license from a government regulatory body.
Unlicensed spectrum provides for shared use of a portion of the spectrum without need for a government-granted license. While compliance with some technical rules is generally still required to access unlicensed spectrum, generally, any operator or device may gain access. Shared spectrum may fall between licensed and unlicensed spectrum, wherein technical rules or limitations may be required to access the spectrum, but the spectrum may still be shared by multiple operators and/or multiple RATs. For example, the holder of a license for a portion of licensed spectrum may provide licensed shared access (LSA) to share that spectrum with other parties, e.g., with suitable licensee-determined conditions to gain access.

The air interface in the radio access network200may utilize one or more duplexing algorithms. Duplex refers to a point-to-point communication link where both endpoints can communicate with one another in both directions. Full duplex means both endpoints can simultaneously communicate with one another. Half duplex means only one endpoint can send information to the other at a time. In a wireless link, a full duplex channel generally relies on physical isolation of a transmitter and receiver, and suitable interference cancellation technologies. Full duplex emulation is frequently implemented for wireless links by utilizing frequency division duplex (FDD) or time division duplex (TDD). In FDD, transmissions in different directions operate at different carrier frequencies. In TDD, transmissions in different directions on a given channel are separated from one another using time division multiplexing. That is, at some times the channel is dedicated for transmissions in one direction, while at other times the channel is dedicated for transmissions in the other direction, where the direction may change very rapidly, e.g., several times per slot.

In some aspects of the disclosure, the scheduling entity and/or scheduled entity may be configured for beamforming and/or multiple-input multiple-output (MIMO) technology.FIG.3illustrates an example of a wireless communication system300supporting MIMO. In a MIMO system, a transmitter302includes multiple transmit antennas304(e.g., N transmit antennas) and a receiver306includes multiple receive antennas308(e.g., M receive antennas). Thus, there are N×M signal paths310from the transmit antennas304to the receive antennas308. Each of the transmitter302and the receiver306may be implemented, for example, within a scheduling entity108, a scheduled entity106, or any other suitable wireless communication device.

The use of such multiple antenna technology enables the wireless communication system to exploit the spatial domain to support spatial multiplexing, beamforming, and transmit diversity. Spatial multiplexing may be used to transmit different streams of data, also referred to as layers, simultaneously on the same time-frequency resource. The data streams may be transmitted to a single UE to increase the data rate or to multiple UEs to increase the overall system capacity, the latter being referred to as multi-user MIMO (MU-MIMO). This is achieved by spatially precoding each data stream (i.e., multiplying the data streams with different weighting and phase shifting) and then transmitting each spatially precoded stream through multiple transmit antennas on the downlink. The spatially precoded data streams arrive at the UE(s) with different spatial signatures, which enables each of the UE(s) to recover the one or more data streams destined for that UE.
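The spatial multiplexing just described can be sketched numerically. The following Python fragment is a simplified, noise-free illustration rather than the disclosed method: two layers are spatially precoded, passed through an assumed 2×2 channel, and separated at the receiver by inverting the effective channel. The channel coefficients and the unitary precoder are arbitrary assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two independent layers of QPSK symbols (illustrative data streams).
    bits = rng.integers(0, 2, size=(2, 8))
    layers = (1 - 2 * bits[:, ::2]) + 1j * (1 - 2 * bits[:, 1::2])

    # Assumed 2x2 channel between two transmit and two receive antennas.
    H = np.array([[1.0 + 0.2j, 0.4 - 0.1j],
                  [0.3 + 0.5j, 0.9 - 0.3j]])

    # Simple unitary precoder: weights and phase-shifts the layers.
    P = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    received = H @ (P @ layers)            # streams overlap in time-frequency

    # Zero-forcing receiver: invert the effective channel to separate the
    # layers, exploiting their distinct spatial signatures.
    recovered = np.linalg.inv(H @ P) @ received
    print(np.allclose(recovered, layers))  # True in this noise-free sketch

A practical receiver would contend with noise and imperfect channel estimates, but the sketch shows why distinct spatial signatures make the overlapping streams separable.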
On the uplink, each UE transmits a spatially precoded data stream, which enables the base station to identify the source of each spatially precoded data stream. The number of data streams or layers corresponds to the rank of the transmission. In general, the rank of the MIMO system300is limited by the number of transmit or receive antennas304or308, whichever is lower. In addition, the channel conditions at the UE, as well as other considerations, such as the available resources at the base station, may also affect the transmission rank. For example, the rank (and therefore, the number of data streams) assigned to a particular UE on the downlink may be determined based on the rank indicator (RI) transmitted from the UE to the base station. The RI may be determined based on the antenna configuration (e.g., the number of transmit and receive antennas) and a measured signal-to-interference-and-noise ratio (SINR) on each of the receive antennas. The RI may indicate, for example, the number of layers that may be supported under the current channel conditions. The base station may use the RI, along with resource information (e.g., the available resources and amount of data to be scheduled for the UE), to assign a transmission rank to the UE.

In Time Division Duplex (TDD) systems, the UL and DL are reciprocal, in that each uses different time slots of the same frequency bandwidth. Therefore, in TDD systems, the base station may assign the rank for DL MIMO transmissions based on UL SINR measurements (e.g., based on a Sounding Reference Signal (SRS) transmitted from the UE or other pilot signal). Based on the assigned rank, the base station may then transmit the CSI-RS with separate C-RS sequences for each layer to provide for multi-layer channel estimation. From the CSI-RS, the UE may measure the channel quality across layers and resource blocks and feed back the CQI and RI values to the base station for use in updating the rank and assigning REs for future downlink transmissions.

In the simplest case, as shown inFIG.3, a rank-2 spatial multiplexing transmission on a 2×2 MIMO antenna configuration will transmit one data stream from each transmit antenna304. Each data stream reaches each receive antenna308along a different signal path310. The receiver306may then reconstruct the data streams using the received signals from each receive antenna308.

In order for transmissions over the radio access network200to obtain a low block error rate (BLER) while still achieving very high data rates, channel coding may be used. That is, wireless communication may generally utilize a suitable error correcting block code. In a typical block code, an information message or sequence is split up into code blocks (CBs), and an encoder (e.g., a CODEC) at the transmitting device then mathematically adds redundancy to the information message. Exploitation of this redundancy in the encoded information message can improve the reliability of the message, enabling correction for any bit errors that may occur due to the noise.

According to 5G NR specifications, user data is coded using quasi-cyclic low-density parity check (LDPC) with two different base graphs. One base graph can be used for large code blocks and/or high code rates, and another base graph can be used otherwise. Of course, other use cases may be implemented with differing types of base graph combinations. Control information and the physical broadcast channel (PBCH) are coded using Polar coding, based on nested sequences.
For these channels, puncturing, shortening, and repetition are used for rate matching. However, those of ordinary skill in the art will understand that aspects of the present disclosure may be implemented utilizing any suitable channel code. Various implementations of scheduling entities108and scheduled entities106may include suitable hardware and capabilities (e.g., an encoder, a decoder, and/or a CODEC) to utilize one or more of these channel codes for wireless communication.

The air interface in the radio access network200may utilize one or more multiplexing and multiple access algorithms to enable simultaneous communication of the various devices. For example, 5G NR specifications provide multiple access for UL transmissions from UEs222and224to base station210, and for multiplexing for DL transmissions from base station210to one or more UEs222and224, utilizing orthogonal frequency division multiplexing (OFDM) with a cyclic prefix (CP). In addition, for UL transmissions, 5G NR specifications provide support for discrete Fourier transform-spread-OFDM (DFT-s-OFDM) with a CP (also referred to as single-carrier FDMA (SC-FDMA)). However, within the scope of the present disclosure, multiplexing and multiple access are not limited to the above schemes, and may be provided utilizing time division multiple access (TDMA), code division multiple access (CDMA), frequency division multiple access (FDMA), sparse code multiple access (SCMA), resource spread multiple access (RSMA), or other suitable multiple access schemes. Further, multiplexing DL transmissions from the base station210to UEs222and224may be provided utilizing time division multiplexing (TDM), code division multiplexing (CDM), frequency division multiplexing (FDM), orthogonal frequency division multiplexing (OFDM), sparse code multiplexing (SCM), or other suitable multiplexing schemes.

Various aspects of the present disclosure will be described with reference to an OFDM waveform, schematically illustrated inFIG.4. It should be understood by those of ordinary skill in the art that the various aspects of the present disclosure may be applied to a DFT-s-OFDMA waveform in substantially the same way as described herein below. That is, while some examples of the present disclosure may focus on an OFDM link for clarity, it should be understood that the same principles may be applied as well to DFT-s-OFDMA waveforms.

Within the present disclosure, a frame generally refers to a logical segment of transmission over a particular time interval. As one example configuration, a frame can refer to a duration of 10 ms for wireless transmissions, with each frame consisting of 10 subframes of 1 ms each. On a given carrier, there may be one set of frames in the UL, and another set of frames in the DL.

Referring now toFIG.4, an expanded view of an exemplary DL subframe402is illustrated, showing an OFDM resource grid404. However, as those skilled in the art will readily appreciate, the PHY transmission structure for any particular application may vary from the example described here, depending on any number of factors. Here, time is in the horizontal direction with units of OFDM symbols; and frequency is in the vertical direction with units of subcarriers or tones. The resource grid404may be used to schematically represent time-frequency resources for a given antenna port. That is, in a MIMO implementation with multiple antenna ports available, a corresponding multiple number of resource grids404may be available for communication.
The resource grid404is divided into multiple resource elements (REs)406. An RE, which is 1 subcarrier×1 symbol, is the smallest discrete part of the time-frequency grid, and contains a single complex value representing data from a physical channel or signal. Depending on the modulation utilized in a particular implementation, each RE may represent one or more bits of information. In some examples, a block of REs may be referred to as a physical resource block (PRB) or more simply a resource block (RB)408, which contains any suitable number of consecutive subcarriers in the frequency domain. In one example, an RB may include 12 subcarriers, a number independent of the numerology used. In some examples, depending on the numerology, an RB may include any suitable number of consecutive OFDM symbols in the time domain. Within the present disclosure, it is assumed that a single RB such as the RB408entirely corresponds to a single direction of communication (either transmission or reception for a given device).

A UE generally utilizes only a subset of the resource grid404. An RB may be the smallest unit of resources that can be allocated to a UE. Thus, the more RBs scheduled for a UE, and the higher the modulation scheme chosen for the air interface, the higher the data rate for the UE. In this illustration, the RB408is shown as occupying less than the entire bandwidth of the subframe402, with some subcarriers illustrated above and below the RB408. In a given implementation, the subframe402may have a bandwidth corresponding to any number of one or more RBs408. Further, in this illustration, the RB408is shown as occupying less than the entire duration of the subframe402, although this is merely one possible example.

Each 1 ms subframe402may consist of one or multiple adjacent slots. In the example shown inFIG.4, one subframe402includes four slots410, as an illustrative example. In some examples, a slot may be defined according to a specified number of OFDM symbols with a given cyclic prefix (CP) length. For example, a slot may include 7 or 14 OFDM symbols with a nominal CP. Additional examples may include mini-slots having a shorter duration (e.g., one or two OFDM symbols). These mini-slots may in some cases be transmitted occupying resources scheduled for ongoing slot transmissions for the same or for different UEs.

An expanded view of one of the slots410illustrates the slot410including a control region412and a data region414. In general, the control region412may carry control channels (e.g., PDCCH), and the data region414may carry data channels (e.g., PDSCH or PUSCH). Of course, a slot may contain all DL, all UL, or at least one DL portion and at least one UL portion. The simple structure illustrated inFIG.4is merely exemplary in nature, and different slot structures may be utilized, and may include one or more of each of the control region(s) and data region(s).

Although not illustrated inFIG.4, the various REs406within a RB408may be scheduled to carry one or more physical channels, including control channels, shared channels, data channels, etc. Other REs406within the RB408may also carry pilots or reference signals, including but not limited to a demodulation reference signal (DMRS), a control reference signal (CRS), or a sounding reference signal (SRS). These pilots or reference signals may provide for a receiving device to perform channel estimation of the corresponding channel, which may enable coherent demodulation/detection of the control and/or data channels within the RB408.
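Because the data rate scales with the number of scheduled RBs and the modulation order, the relationship lends itself to simple arithmetic. The short Python sketch below is an overhead-free simplification assuming 12 subcarriers per RB and a 14-symbol slot; real systems subtract control, reference-signal, and coding overhead, so these figures are upper bounds for illustration only.

    SUBCARRIERS_PER_RB = 12   # per the example above
    SYMBOLS_PER_SLOT = 14     # one assumed slot format with nominal CP

    def raw_bits_per_slot(num_rbs, bits_per_re):
        # bits_per_re: 2 for QPSK, 4 for 16-QAM, 6 for 64-QAM, 8 for 256-QAM.
        num_res = num_rbs * SUBCARRIERS_PER_RB * SYMBOLS_PER_SLOT
        return num_res * bits_per_re

    # More RBs and a higher-order modulation both raise the raw rate:
    print(raw_bits_per_slot(num_rbs=10, bits_per_re=2))    # 3360 bits
    print(raw_bits_per_slot(num_rbs=100, bits_per_re=6))   # 100800 bits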
In a DL transmission, the transmitting device (e.g., the scheduling entity108) may allocate one or more REs406(e.g., within a control region412) to carry DL control information114including one or more DL control channels, such as a PBCH; a PSS; a SSS; a physical control format indicator channel (PCFICH); a physical hybrid automatic repeat request (HARQ) indicator channel (PHICH); and/or a physical downlink control channel (PDCCH), etc., to one or more scheduled entities106. The PCFICH provides information to assist a receiving device in receiving and decoding the PDCCH. The PDCCH carries downlink control information (DCI) including but not limited to power control commands, scheduling information, a grant, and/or an assignment of REs for DL and UL transmissions. The PHICH carries HARQ feedback transmissions such as an acknowledgment (ACK) or negative acknowledgment (NACK). HARQ is a technique well-known to those of ordinary skill in the art, wherein the integrity of packet transmissions may be checked at the receiving side for accuracy, e.g., utilizing any suitable integrity checking mechanism, such as a checksum or a cyclic redundancy check (CRC). If the integrity of the transmission is confirmed, an ACK may be transmitted, whereas if not confirmed, a NACK may be transmitted. In response to a NACK, the transmitting device may send a HARQ retransmission, which may implement chase combining, incremental redundancy, etc.

In an UL transmission, the transmitting device (e.g., the scheduled entity106) may utilize one or more REs406to carry UL control information118including one or more UL control channels, such as a physical uplink control channel (PUCCH), to the scheduling entity108. UL control information may include a variety of packet types and categories, including pilots, reference signals, and information configured to enable or assist in decoding uplink data transmissions. In some examples, the control information118may include a scheduling request (SR), e.g., a request for the scheduling entity108to schedule uplink transmissions. Here, in response to the SR transmitted on the control channel118, the scheduling entity108may transmit downlink control information114that may schedule resources for uplink packet transmissions. UL control information may also include HARQ feedback, channel state feedback (CSF), or any other suitable UL control information.

In addition to control information, one or more REs406(e.g., within the data region414) may be allocated for user data or traffic data. This data traffic may be carried on one or more traffic channels, such as, for a DL transmission, a physical downlink shared channel (PDSCH), or for an UL transmission, a physical uplink shared channel (PUSCH). In some examples, one or more REs406within the data region414may be configured to carry system information blocks (SIBs), carrying information that may enable access to a given cell.

The channels or carriers described above and illustrated inFIGS.1and4are not necessarily all the channels or carriers that may be utilized between a scheduling entity108and scheduled entities106, and those of ordinary skill in the art will recognize that other channels or carriers may be utilized in addition to those illustrated, such as other traffic, control, and feedback channels. These physical channels described above are generally multiplexed and mapped to transport channels for handling at the medium access control (MAC) layer. Transport channels carry blocks of information called transport blocks (TB).
The transport block size (TBS), which may correspond to a number of bits of information, may be a controlled parameter, based on the modulation and coding scheme (MCS) and the number of RBs in a given transmission.

Exemplary Beamform Recovery Request Implementations

A unique challenge in some wireless systems is that of high path loss. New techniques such as hybrid beamforming (analog and digital), which are not present in 3G and 4G systems, have been contemplated to address this issue. Hybrid beamforming permits multi-beam operation for users, which can enhance the link budget/signal to noise ratio (SNR).

In a particular aspect of the disclosure, it is contemplated that a base station (e.g., eNB) and a user equipment (UE) communicate over active beams. Active beams are base station and UE beam pairs that carry data and control channels such as the Physical Downlink Shared Channel (PDSCH), Physical Downlink Control Channel (PDCCH), Physical Uplink Shared Channel (PUSCH), and Physical Uplink Control Channel (PUCCH). In multi-beam operation, base station and UE active beam pairs may be misaligned (i.e., resulting in beam failure) due to beam switch failure or signal blockage. In such a scenario, the base station and UE cannot communicate over active beams (control or data). A UE may detect beam/link failure by monitoring a subset of reference beam(s) (or signals) that are quasi-co-located (QCLed) with the demodulation reference signal (DMRS) of a control channel. Upon detection of beam/link failure, the UE will ascertain uplink (UL) resources (time, frequency and beam) to reconnect with the serving cell. In multi-beam operation, UL resources should be configured so that the network can create a receive beam in those directions.

FIGS.5A through5Gare diagrams illustrating exemplary communications between a base station (BS)504and a UE502using beamformed signals according to some aspects of the disclosure. The base station504may be any of the base stations or scheduling entities illustrated inFIGS.1and2, and the UE502may be any of the UEs or scheduled entities illustrated inFIGS.1and2. It should be noted that while some beams are illustrated as adjacent to one another, such an arrangement may be different in different aspects. In some examples, beams transmitted during a same symbol or time may not be adjacent to one another. In some examples, the BS504may transmit more or fewer beams distributed in all directions (e.g., 360 degrees). In one example, a beam set may contain eight different beams. For example,FIG.5Aillustrates eight beams521,522,523,524,525,526,527,528for eight directions.

In some aspects of the disclosure, the base station (BS)504may be configured to transmit at least one of the beams521,522,523,524,525,526,527,528toward the UE502. For example, the BS504can sweep or transmit in eight directions using eight ports (e.g., antenna ports) during a synchronization slot. The BS504may transmit a beam reference signal (BRS) for each beam in the different beam directions during the synchronization slot. The receiver can use the BRS to identify the beam by performing received power measurements on the BRS.

Referring toFIG.5B, the BS504may transmit a first set of beams521,523,525,527in four directions. For example, the BS504may transmit a BRS in a synchronization slot of each of the transmitted beams521,523,525,527. In one example, these beams521,523,525,527transmitted in four directions may be odd-indexed beams for the four directions out of the possible eight directions for the beam set.
For example, the BS504may be capable of transmitting beams521,523,525,527in directions adjacent to other beams522,524,526,528that the BS504is configured to transmit. In this example, a configuration in which the BS504transmits beams521,523,525,527for the four directions may be considered a “coarse” beam set, which enables the UE502to identify a beam corresponding to a general direction from which a signal from BS504is most strongly detected. A “fine” beam set can then be used, as discussed with reference toFIG.5Dbelow, to identify the particular beam from BS504that is most strongly detected by the UE502.

InFIG.5C, the UE502may determine or select a beam or beam index that is strongest (e.g., strongest signal) or preferable in the coarse beam set. For example, the UE502may determine that the beam525carrying a BRS is strongest or preferable. The UE502may select a beam by measuring values for a received power or received quality associated with each of the first set of coarse beams521,523,525,527, comparing respective values to one another, and selecting the beam that corresponds to the greatest, highest, or best value. The selected beam may correspond to a beam index at the BS504. The UE502may transmit an indication560of this beam index to the BS504. In one example, the indication560may include a request to transmit a beam refinement reference signal (BRRS). One of ordinary skill would appreciate that the BRRS may be referred to by different terminology without departing from the present disclosure, such as a beam refinement signal, a beam tracking signal, or another term.

In various aspects of the disclosure, the UE502may determine a resource (e.g., time, frequency, and/or preamble) that corresponds to the selected beam or beam index. For example, a resource may include one of a radio frame, a subframe, a slot, a symbol, a subcarrier region, a preamble, a sequence, or an RE. Each resource may correspond to a value, for example, a radio frame index, a subframe index, slot index, a symbol index, or a subcarrier region. In one example, the UE502may have stored therein or may have access to a mapping or table (e.g., a lookup table) that indicates a respective resource (e.g., a value or index) to which the beam index corresponds. For example, the UE502may determine the beam index and then access a lookup table to determine a resource index or region that corresponds to the determined beam index. In one example, the resource may be included in the PUCCH. In one example, the resource may be included in a slot associated with a random access channel (RACH). For example, the resource may be included in a bandwidth or carrier reserved for RACH transmission or Physical Random Access Channel (PRACH).

The BS504may receive the indication560, which may include a request for beam tracking (e.g., a request for a BRRS). Based on the indication560, the BS504may determine the index corresponding to the selected beam525. In one example, the indication560may be carried on a resource corresponding to the index of the selected beam525. In one aspect of the disclosure, the BS504may have stored therein or may have access to a mapping or table (e.g., a lookup table) that indicates a respective resource (e.g., a value or index) to which the beam index corresponds.
For example, the BS504may determine the resource on which the indication560is received and then access a lookup table to determine a beam index (e.g., the index corresponding to the selected beam525) or resource region that corresponds to the determined beam index.

InFIG.5D, the BS504may transmit a second set of beams based on the index included in the indication560. For example, the UE502may indicate that a first beam525is strongest or preferable and, in response, the BS504may transmit a second set of beams524,525,526to the UE502based on the indicated beam index. In an aspect of the disclosure, the second set of beams524,525,526transmitted based on the indicated beam index may be closer (e.g., spatially and/or directionally) to the selected beam525than those other beams521,523,527of the first set of beams. The second set of beams524,525,526transmitted based on the indicated beam index may be considered a “fine” beam set. The separation between two adjacent beams in the fine beam set is smaller than that of the coarse beam set. In one example, a BRRS may be transmitted in each of the beams524,525,526of the fine beam set. In one example, the beams524,525,526of the fine beam set may be adjacent beams. Based on one or more BRRSs received in the beams524,525,526of the fine beam set, the UE502may transmit a second indication565to the BS504to indicate a best, preferred, or selected “fine” beam or refined beam. In one example, the second indication565may use two (2) bits to indicate the selected beam. For example, the UE502may transmit an indication565that indicates an index corresponding to the selected beam525. The BS504may then transmit to the UE502using the selected beam525.

Referring toFIG.5E, the BS504may transmit a BRS in a plurality of directions during a synchronization slot. In one example, the BS504may transmit the BRS continuously, e.g., even after the UE502has communicated the indication565of a selected beam525as described above. For example, the BS504may transmit simultaneously or sweep beams521,523,525,527that each include a BRS (e.g., a “coarse” beam set). The BRS may be transmitted periodically or in a predetermined interval.

Referring toFIG.5F, the quality of the selected beam525may deteriorate due to various reasons such that the UE502may no longer be able to see or communicate using the selected beam525. Based on the BRS that is transmitted in the synchronization slot (e.g., continuously or periodically transmitted), the UE502may determine or find a new beam523on which to communicate with the BS504. For example, the UE502may determine that the beam523carrying a BRS is strongest, best, or preferable. The UE502may select a beam by measuring values for a received power or received quality associated with each of the set of coarse beams521,523,525,527, comparing respective values to one another, and selecting the beam that corresponds to the greatest or best value. The selected beam may correspond to a beam index at the BS504. The UE502may transmit a request570indicating this beam index to the BS504. In one example, the request570may include a beam failure recovery signal.

In various aspects of the disclosure, the UE502may determine a resource that corresponds to the selected beam index for transmitting the beam failure recovery signal. A resource may include one of a radio frame, a subframe, a slot, a symbol, a subcarrier region, or a preamble. Each resource may correspond to a value, for example, a radio frame index, a subframe index, a symbol index, or a subcarrier region.
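The beam-index-to-resource lookup described above can be sketched as a simple table. The Python fragment below is purely illustrative: the beam indices follow FIGS.5A through5F, but the resource indices, the RSRP figures, and the one-to-one mapping are hypothetical assumptions, not a configuration defined by the disclosure.

    # Hypothetical mapping from BS beam index to an UL resource index
    # (e.g., a subcarrier region or preamble) carrying the request.
    BEAM_TO_RESOURCE = {521: 0, 522: 1, 523: 2, 524: 3,
                        525: 4, 526: 5, 527: 6, 528: 7}
    RESOURCE_TO_BEAM = {r: b for b, r in BEAM_TO_RESOURCE.items()}

    def select_beam(measurements):
        # UE side: pick the beam whose reference signal measures strongest.
        return max(measurements, key=measurements.get)

    # Hypothetical per-beam received powers (dBm) measured on the BRS,
    # after the old beam 525 has degraded.
    rsrp = {521: -101.0, 523: -95.5, 525: -99.1, 527: -104.2}
    best = select_beam(rsrp)                   # 523, the new strongest beam
    resource = BEAM_TO_RESOURCE[best]          # UE looks up where to transmit

    # BS side: infer the indicated beam from the resource that carried it.
    assert RESOURCE_TO_BEAM[resource] == best

Because the mapping is shared, the resource on which the request arrives implicitly conveys the selected beam index, which is the inference the BS performs in the passages that follow.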
In one aspect of the disclosure, the UE may also transmit a beam adjustment request (BAR) to request the BS504to transmit a BRRS. In one aspect of the disclosure, the UE502may have stored therein or may have access to a mapping or table (e.g., a lookup table) that indicates a respective resource (e.g., a value or index) to which the beam index corresponds. For example, the UE502may determine the beam index and then access a lookup table to determine a resource index or region that corresponds to the determined beam index.

In one aspect of the disclosure, the resource for transmitting the beam failure recovery request (e.g., request570) may be included in resources associated with PRACH. In one example, the resource may be included in a bandwidth or carrier reserved for RACH transmission in PRACH. In one example, the resource for transmitting the beam failure recovery request may be a resource orthogonal to the resources of PRACH transmissions. In another example, the resource for transmitting the beam failure recovery request may be a contention-based RACH resource.

With respect toFIG.5G, the BS504may receive the request570with a beam failure recovery request from the UE502. The BS504may be configured to determine a beam index (e.g., a beam among the set of beams illustrated inFIG.5E) based on at least one of the request and/or the resource carrying the request. For example, the request570may be carried on a resource determined to correspond to the index of the selected beam523. In one example, the BS504may have stored therein or may have access to a mapping or table (e.g., a lookup table) that indicates a respective resource (e.g., a value or index) to which the beam index corresponds. For example, the BS504may determine the resource on which the request570is received and then access a lookup table to determine a beam index (e.g., the index corresponding to the selected beam523) or resource region that corresponds to the determined beam index. In an example, an uplink beam during reception of the request570may be one of the first set of beams521,523,525,527.

In an aspect of the disclosure, the BS504may be configured to transmit a second set of beams522,523,524based on at least one of the request570and/or the resource on which the request570is carried. In an example, the BS504may be configured to determine, from the request570and/or the at least one resource carrying the request570, a range of indexes. In an example, the BS504determines the beam index based on at least one subcarrier of the at least one resource on which the request570is carried. In an aspect of the disclosure, the BS504determines, from within the range of indexes, the beam index based on a strength of a signal (e.g., reference signal) in different receive chains of the BS504through which the request570is received. For example, the BS504may receive the request570through a plurality of receive chains of the BS504. The BS504may determine a signal strength of the request570for each receive chain through which the request570is received. The BS504may determine that each receive chain is associated with at least one beam index (e.g., the beam index for beam523), and so the BS504may determine the beam index that corresponds to the receive chain in which the highest or strongest signal strength of the request570is detected.

In an aspect of the disclosure, the BS504may transmit, to the UE502, an instruction to perform beam refinement.
In one example, the instruction to perform beam refinement may be based on the selected beam523indicated to the BS504by the UE502. In one example, the BS504may transmit one or more BRRSs in one or more synchronization slots of the second set of beams522,523,524. The UE502may measure the BRRS in the scheduled slot(s) to determine the best beam of the BS504, such as by measuring a respective value for a received power and/or received quality of each beam of the second set of beams522,523,524, and comparing the measured values to one another to determine the highest values corresponding to a strongest beam of the second set of beams522,523,524.

While the above described beam failure recovery processes are described with the UE transmitting the beam failure recovery request, without departing from the scope of the present disclosure, similar processes may be used by the base station to transmit a beam failure recovery request.

In general, it should be appreciated that aspects disclosed herein are in accordance with various agreements reached by the wireless communication industry. For instance, aspects disclosed herein are in accordance with a first agreement directed towards a UE beam failure recovery mechanism, which includes having the UE perform 1) a beam failure detection; 2) a new candidate beam identification; 3) a beam failure recovery request transmission; and 4) a monitoring of a gNB response to the beam failure recovery request.

With respect to beam failure detection, an agreement was reached that a UE shall monitor a beam failure detection reference signal (RS) to assess if a beam failure trigger condition has been met. It was further agreed that such beam failure detection RS at least includes a periodic channel state information reference signal (CSI-RS) for beam management (e.g., a synchronization signal block (SS-block) within the serving cell can be considered, if the SS-block is also used in beam management as well). Trigger conditions for declaring beam failure were left for further study.

With respect to new candidate beam identification, an agreement was reached that the UE shall monitor a beam identification RS to find a new candidate beam. To this end, it was further agreed that such beam identification RS shall include a periodic CSI-RS for beam management, if it is configured by the network. If an SS-block is also used in beam management, the beam identification RS shall include a periodic CSI-RS and SS-blocks within the serving cell.

With respect to beam failure recovery request transmissions, an agreement was reached that information carried by a beam failure recovery request includes at least one of 1) explicit/implicit information identifying the UE and new gNB transmission beam information; 2) explicit/implicit information identifying the UE and whether or not a new candidate beam exists; or 3) for further study, information indicating a UE beam failure, additional information (e.g., new beam quality). This agreement further specifies that beam failure recovery request transmissions may comprise a down-selection between the following options: PRACH, PUCCH, a PRACH-like channel (e.g., having a different parameter for the preamble sequence from PRACH). This agreement also specifies that a beam failure recovery request resource/signal may be additionally used for a scheduling request.
With respect to the monitoring of a gNB response to a beam failure recovery request, an agreement was reached that the UE shall monitor a control channel search space to receive a gNB's response to a beam failure recovery request. To this end, it was left for further study whether the control channel search space can be the same or different from the current control channel search space associated with the serving BPLs. It was also left for further study how a UE would react if the gNB does not receive a beam failure recovery request transmission.

In a second agreement, the wireless communication industry identified various channels that may be used for beam failure recovery request transmissions. For instance, an agreement was reached to support beam failure recovery request transmissions via a non-contention based channel based on PRACH, which uses a resource orthogonal to resources of other PRACH transmissions, at least for the frequency division multiplexing (FDM) case. Other ways of achieving orthogonality, e.g., CDM/TDM with other PRACH resources, were left for further study. Also left for further study was whether or not to have a different sequence and/or format than those of PRACH for other purposes, and to what extent the retransmission behavior on this PRACH resource is similar to a regular RACH procedure. In this second agreement, support using PUCCH for beam failure recovery request transmission was also contemplated. Here, it was left for further study whether PUCCH is with beam sweeping or not, wherein it was noted that this may or may not impact PUCCH design. In this second agreement, it was also left for further study whether contention-based PRACH resources may be used as a supplement to contention-free beam failure recovery resources (e.g., from the traditional RACH resource pool, whether a 4-step RACH procedure is used, etc.), wherein it was noted that contention-based PRACH resources may be used, e.g., if a new candidate beam does not have resources for a contention-free PRACH-like transmission.

In a third agreement, the wireless communication industry agreed that, in order to receive a gNB response to a beam failure recovery request, a UE shall monitor the New Radio (NR) PDCCH with the assumption that the corresponding PDCCH DMRS is spatially QCLed with the reference signal of the UE-identified candidate beam(s). For further study was whether the candidate beam(s) is/are identified from a preconfigured set or not. It was also agreed that detection of a gNB's response to a beam failure recovery request during a supported time window would be provided. Here, various details were left for further study including: whether the time window is configured or pre-determined; the number of monitoring occasions within the time window; and the size/location of the time window. In this third agreement, it was also agreed that, if there is no response detected within the window, the UE may perform a re-transmission of the request. Moreover, if a gNB response is not detected after a certain number of transmission(s), it was agreed that the UE shall notify higher layer entities, wherein the number of transmission(s) was left for further study as well as possibly including the use of a timer.

In a fourth agreement, the wireless communication industry agreed that the certain number of beam failure recovery request transmissions is network configurable by using any of various parameters.
For instance, such parameters used by the network may include: the number of transmissions; whether the number is solely based on a timer; or a combination of a network-defined number of transmissions and a timer. It was left for further study whether the beam failure recovery procedure is influenced by the radio link failure (RLF) event.

In a fifth agreement, the wireless communication industry agreed that, in case of an unsuccessful recovery from beam failure, the UE shall send an indication to higher layers, and refrain from further beam failure recovery. Such indication may include an indication of the relationship between the RLF and the unsuccessful beam failure recovery, if any (e.g. whether the beam failure recovery procedure influences or is influenced by the RLF event).

In a sixth agreement, the wireless communication industry agreed that a beam failure is declared only when all serving control channels fail. When a subset of serving control channels fail, it was agreed that this event should also be handled.

In a seventh agreement, the wireless communication industry agreed that, in addition to the periodic CSI-RS, the SS-block within the serving cell can be used for new candidate beam identification. To this end, it was further agreed that the following options can be configured for new candidate beam identification: 1) CSI-RS only, wherein an SS block will not be configured for new candidate beam identification; 2) SS block only, wherein the CSI-RS will not be configured for new candidate beam identification; or 3) CSI-RS+SS block.

Referring next toFIG.6, an exemplary beam recovery and scheduling request block in the RACH slot is illustrated in accordance with an aspect of the disclosure. 5G NR supports frequency division multiplexing of the beam recovery region and the RACH region.FIG.6thus shows a possible scenario of frequency division multiplexing of the beam recovery region and the RACH region. If beam correspondence is available at the base station (BS), the BS may use a similar set of beams between transmitting downlink (DL) synchronization (SYNC) signals and receiving uplink (UL) RACH signals. If a UE loses its current working beam, it maps a good DL SYNC resource to the corresponding symbol index of the RACH slot. Namely, it selects one out of N subcarrier regions of the scheduling request (SR)/beam recovery request region and transmits in the selected symbol of the RACH slot.

In an aspect of the disclosure, it is contemplated that UEs can select a PRACH type signal to transmit a beam recovery request to a gNB. Table 1 below shows a possible numerology of the beam recovery request channel.

TABLE 1 - Beam Recovery Request Numerology in Multi-beam Scenario

Slot duration (us): 125
Subcarrier spacing (kHz): 30
Sequence length: 139
Number of subcarrier regions in 50 MHz BW: 10
Symbol duration (us): 33.33
Number of cyclic shifts per subcarrier region: ~100

It is contemplated that a BS can allow a much higher number of cyclic shifts to receive beam recovery requests in these slots. For example, if the delay spread is approximately 300 ns, the BS can allow approximately 100 orthogonal resources in each subcarrier region of the beam recovery request region because the sequence duration of the beam recovery request is 33.33 us. In a particular example where 50 MHz is proposed for minimum bandwidth in a multi-beam scenario, since each beam recovery request region takes 4.32 MHz, there can be as many as 10 different subcarrier regions to transmit a beam recovery request.
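The arithmetic behind Table 1 can be checked with a few lines of Python. This is a back-of-the-envelope sketch only: the guard-time handling behind the ~100 figure and the guard bands inside the 4.32 MHz region are simplified assumptions.

    sequence_duration_us = 33.33   # one symbol at 30 kHz subcarrier spacing
    delay_spread_us = 0.3          # assumed ~300 ns delay spread

    # Cyclic shifts stay orthogonal when separated by at least the delay
    # spread, so roughly one shift per delay-spread interval fits:
    print(int(sequence_duration_us / delay_spread_us))   # 111, i.e., ~100

    region_bw_mhz = 4.32           # 139 subcarriers x 30 kHz, plus guard
    total_bw_mhz = 50.0
    # 11 regions fit arithmetically; the description budgets 10 of them.
    print(int(total_bw_mhz / region_bw_mhz))             # 11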
Some of these subcarrier regions may be used for a RACH message 1 (Msg1) preamble transmission, and the BS can use some others for UL data transmission. For example, if a gNB uses six subcarrier regions to communicate scheduling requests or beam recovery requests, six hundred orthogonal resources could fit into these regions to convey the beam recovery request. Here, each UE could be allotted two different resources to transmit the SR or beam recovery request, for example. In a first embodiment of the disclosure, it is thus contemplated that NR supports a RACH type sequence with a higher number of cyclic shifts to convey a beam recovery request to the gNB through the non-contention based channel which is frequency division multiplexed with RACH. An exemplary beam failure recovery request procedure in accordance with the disclosure has a number of features. In multi-beam operation, a UE detects a failure of an active PDCCH beam by monitoring a DL reference signal that is QCLed with a control channel. When a beam failure event occurs, the network cannot reach the UE. Upon detection of a failure event, the UE selects a beam from the candidate beam set to transmit a beam failure recovery request to the gNB. As previously stated, in NR, the following channels are supported for the transmission of beam failure recovery requests: (1) a non-contention based channel based on PRACH, which uses a resource orthogonal to resources of other PRACH transmissions, at least for the FDM case; (2) PUCCH; and (3) contention-based PRACH resources as a supplement to contention-free beam failure recovery resources. While contention-based PRACH incurs additional delay due to a four-step RACH procedure, it may serve as a supplement to contention-free resources, especially when the number of UEs in the system is large. Further, if the network does not configure any resources for beam failure recovery, then the UE may fall back to a contention-based PRACH to re-establish connection on the serving cell. In a second embodiment of the disclosure, it is thus contemplated that NR shall support contention-based PRACH resources for the transmission of beam failure recovery requests. Since NR supports multiple channels for the transmission of beam failure recovery requests, it is further contemplated that a non-contention based channel based on PRACH or contention-based PRACH resources be used as default. Namely, in a third embodiment of the disclosure, it is contemplated that NR shall support the configuration of a non-contention based channel based on PRACH or contention-based PRACH resources for transmission of beam failure recovery requests as default. Additionally, it is contemplated that the network may also configure PUCCH for the transmission of beam failure recovery requests. However, if all the active control beam(s) fail, then the UE cannot find a suitable beam to transmit a beam failure recovery request to the gNB in those directions. Therefore, the network may configure a beam swept PUCCH that is QCLed with either NR-SS or CSI-RS. In a fourth embodiment of the disclosure, it is thus contemplated that, in addition to a non-contention based channel based on PRACH or contention-based PRACH, a base station can configure beam swept PUCCH that are QCLed with either NR-SS or CSI-RS for the transmission of beam failure recovery requests. It is also contemplated that, because a network may configure PUCCH in addition to a non-contention based channel based on PRACH and contention based
PRACH, a priority rule may be used by the UE to send the request. Since the network configures dedicated PUCCH resources, the UE may be configured to access these before trying others. In a fifth embodiment of the disclosure, it is thus contemplated that, if the gNB configures beam swept PUCCH resources in addition to non-contention based channel based on PRACH and contention based PRACH resources, then the UE prioritizes PUCCH over the others. Furthermore, it is contemplated that it may be beneficial for the beam-swept PUCCH to carry multiple bits to allow the UE to: 1) provide information on multiple candidate beam(s) to facilitate the gNB configuring multiple beam pair links; 2) send a scheduling request in addition to the beam failure recovery request; and 3) request additional training on the downlink over the newly identified candidate beams. Accordingly, in a sixth embodiment of the disclosure, it is thus contemplated that NR shall support multi-bit PUCCH to convey additional information during the transmission of beam failure recovery requests. Next, two scenarios for transmitting beam recovery requests over beam recovery request regions are considered: UL synchronized and UL out-of-sync. With UL synchronization, the time alignment (TA) timer (that specifies the length of time the UE is considered uplink time aligned with the TRP) is still valid. Based on the latest NR agreements, if the UE receives a beam failure indication from the physical layer, it could send a beam recovery request using beam-swept PUCCH or a non-contention based channel based on PRACH, and the gNB will monitor these regions for beam recovery requests. In the UL synchronized case, a UE can send a single beam failure recovery request over PUCCH or a non-contention based channel based on PRACH over the UE-selected candidate beam and wait for the response in the response window. Accordingly, in a seventh embodiment of the disclosure, it is thus contemplated that the UE shall transmit one beam failure recovery request over PUCCH or a non-contention based channel based on PRACH over a UE-selected candidate beam before the end of the monitored response window. In an eighth embodiment of the disclosure, it is contemplated that the UE shall assume a single response to the beam failure recovery request message before the end of a monitored response window. After the UE sends a beam failure recovery request, it may need to know whether this request has been successfully received by the gNB. Thus, a set of UE monitoring mechanisms should be introduced. Like the response window of RACH, the network could configure a response window for the UE to monitor a response to its recovery request. In a ninth embodiment of the disclosure, it is thus contemplated that the network can configure a response window where the UE monitors the response to the beam failure recovery request transmission. It is possible that the gNB fails to detect the request because of poor signal quality. Therefore, the UE may not receive a response within the response window. For robust operation, a retransmission mechanism for the beam recovery request should be supported. Specifically, if the UE does not receive a response within the response window, it will send an indicator to L2 and the MAC will trigger the retransmission of the request. In a tenth embodiment of the disclosure, it is thus contemplated that, if the UE does not receive a response within the response window, then the UE can (re)transmit the beam failure recovery request.
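As a hedged illustration of the monitoring and retransmission behavior just described, together with the attempt limits m1 and m2 introduced in the following paragraph, the logic might be sketched as follows; the callables and default values are hypothetical stand-ins, not standardized interfaces.

def recover_from_beam_failure(send_request, response_received, notify_higher_layers, m1=4, m2=8):
    # m1: maximum attempts over beam-swept PUCCH; m2: maximum attempts over
    # non-contention based or contention based PRACH (both limits assumed here).
    for _ in range(m1):
        send_request("beam_swept_pucch")   # one request over the UE-selected candidate beam
        if response_received():            # gNB response detected within the response window
            return True                    # recovery request acknowledged
    for _ in range(m2):
        send_request("prach")              # fall back to PRACH based recovery resources
        if response_received():
            return True
    notify_higher_layers()                 # unsuccessful recovery: indicate to higher layers
    return False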
If the UE has retransmitted too many times but still could not get a response within the response window, it may indicate that the UE is in a poor radio condition or has lost synchronization with the gNB. In this case, it will be radio resource inefficient if the UE continues the retransmission of the request. Therefore, the network may need to configure a maximum number of attempts for the beam failure recovery request transmissions (similar to RACH attempts in LTE). In an eleventh embodiment of the disclosure, it is thus contemplated that the network can configure the UE with a maximum number of attempts for the purpose of beam failure recovery request (re)transmissions: 1) the network can configure the UE to try a maximum of m1 attempts over beam-swept PUCCH (similar to the SR procedure in LTE); or 2) the network shall configure the UE to try a maximum of m2 attempts over non-contention based channel based on PRACH and contention based PRACH resources (similar to the regular RACH procedure).
Exemplary Scheduling Entity Design
FIG.7is a block diagram illustrating an example of a hardware implementation for a scheduling entity700employing a processing system714. For example, the scheduling entity700may be a base station (e.g., eNB, gNB) as illustrated in any one or more ofFIGS.1,2, and/or5A-5G. The scheduling entity700may be implemented with a processing system714that includes one or more processors704. Examples of processors704include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. In various examples, the scheduling entity700may be configured to perform any one or more of the functions described herein. That is, the processor704, as utilized in a scheduling entity700, may be used to implement any one or more of the processes and procedures described below and illustrated inFIGS.5A-5G, as well as the process illustrated inFIG.9. In this example, the processing system714may be implemented with a bus architecture, represented generally by the bus702. The bus702may include any number of interconnecting buses and bridges depending on the specific application of the processing system714and the overall design constraints. The bus702communicatively couples together various circuits including one or more processors (represented generally by the processor704), a memory705, and computer-readable media (represented generally by the computer-readable medium706). The bus702may also link various other circuits such as timing sources, peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further. A bus interface708provides an interface between the bus702and a transceiver710. The transceiver710provides a communication interface or means for communicating with various other apparatus over a transmission medium. Depending upon the nature of the apparatus, a user interface712(e.g., keypad, display, speaker, microphone, joystick, touchscreen) may also be provided. In some aspects of the disclosure, the processor704may include a beam failure circuitry740configured for various functions, including, for example, to determine beam failure conditions associated with detecting a beam failure.
For instance, beam failure circuitry740may include logic circuitry coupled to a memory component (e.g., memory705and/or computer-readable medium706), wherein the beam failure circuitry740may be configured to define and/or retrieve any of a plurality of parameters associated with detecting a beam failure (e.g., such parameters may be defined via user interface712). As illustrated, the processor704may also include network configuration circuitry742configured for various functions. For instance, network configuration circuitry742may be configured to ascertain a network configuration for a scheduled entity. For instance, network configuration circuitry742may include logic circuitry coupled to a memory component (e.g., memory705and/or computer-readable medium706), wherein the network configuration circuitry742may be configured to ascertain a network configuration based on any of a plurality of parameters (e.g., such parameters may be defined via user interface712). In a particular embodiment, it is contemplated that the network configuration may include the aforementioned parameters associated with the beam failure conditions, as well as parameters associated with determining one or more beam failure recovery resources to utilize to transmit a beam failure recovery request. The processor704may further include transmission circuitry744configured for various functions, including, for example, to transmit the network configuration to the scheduled entity. Here, it should be appreciated that transmission circuitry744may include logic circuitry coupled to transceiver710, wherein such logic circuitry may be configured to determine if and when to transmit the network configuration to one or more scheduled entities via transceiver710. Various other aspects for scheduling entity700are also contemplated. For instance, the transmission circuitry744may be configured to transmit the network configuration via radio resource control (RRC) signaling (e.g., the configuration may be enabled/disabled using Layers 1 and 2). The transmission circuitry744may also be configured to transmit the network configuration to a plurality of scheduled entities, and the network configuration circuitry742may be configured to ascertain a different network configuration for different scheduled entities. Such configuration may be traffic dependent; for example, to reduce beam recovery delay, the scheduling entity700may configure a subset of scheduled entities with more frequent uplink (UL) resources. It is also contemplated that such configuration may include configuring scheduled entities that have a high signal-to-noise ratio (SNR) to use any beam on the UL. Referring back to the remaining components of scheduling entity700, it should be appreciated that the processor704is responsible for managing the bus702and general processing, including the execution of software stored on the computer-readable medium706. The software, when executed by the processor704, causes the processing system714to perform the various functions described below for any particular apparatus. The computer-readable medium706and the memory705may also be used for storing data that is manipulated by the processor704when executing software. One or more processors704in the processing system may execute software.
Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The software may reside on a computer-readable medium706. The computer-readable medium706may be a non-transitory computer-readable medium. A non-transitory computer-readable medium includes, by way of example, a magnetic storage device (e.g., hard disk, floppy disk, magnetic strip), an optical disk (e.g., a compact disc (CD) or a digital versatile disc (DVD)), a smart card, a flash memory device (e.g., a card, a stick, or a key drive), a random access memory (RAM), a read only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), a register, a removable disk, and any other suitable medium for storing software and/or instructions that may be accessed and read by a computer. The computer-readable medium706may reside in the processing system714, external to the processing system714, or distributed across multiple entities including the processing system714. The computer-readable medium706may be embodied in a computer program product. By way of example, a computer program product may include a computer-readable medium in packaging materials. Those skilled in the art will recognize how best to implement the described functionality presented throughout this disclosure depending on the particular application and the overall design constraints imposed on the overall system. In one or more examples, the computer-readable storage medium706may include beam failure software752configured for various functions, including, for example, to determine beam failure conditions associated with detecting a beam failure. As illustrated, the computer-readable storage medium706may also include network configuration software754configured for various functions. For instance, the network configuration software754may be configured to ascertain a network configuration for a scheduled entity. Here, it is contemplated that the network configuration may include the aforementioned parameters associated with the beam failure conditions, as well as parameters associated with determining one or more beam failure recovery resources to utilize to transmit a beam failure recovery request. The computer-readable storage medium706may further include transmission software756configured for various functions, including, for example, to transmit the network configuration to the scheduled entity. Various other aspects for computer-readable storage medium706are also contemplated. For instance, the transmission software756may be configured to transmit the network configuration via radio resource control (RRC) signaling (e.g., the configuration may be enabled/disabled using Layers 1 and 2). The transmission software756may also be configured to transmit the network configuration to a plurality of scheduled entities, and the network configuration software754may be configured to ascertain a different network configuration for different scheduled entities. 
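The kind of network configuration such software might ascertain can be pictured, purely for illustration, as a simple container; the field names and default values below are assumptions rather than fields of any standardized information element, and the individual parameters are discussed in the paragraphs that follow.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BeamFailureRecoveryConfig:
    # Beam failure condition parameters (illustrative values).
    failure_timer_ms: int = 100            # timer to facilitate beam failure detection
    candidate_rsrp_dbm: float = -110.0     # received power threshold for a candidate beam
    response_window_ms: int = 8            # window for monitoring the gNB response
    # Recovery resource parameters (illustrative values).
    sfn: int = 0                           # system frame number anchor for recovery resources
    sfi: int = 0                           # sub-frame indicator
    periodicity_slots: int = 20            # periodicity of the recovery resources
    res_per_beam: List[int] = field(default_factory=list)  # REs configured per uplink beam
    qcl_source: str = "CSI-RS"             # QCL relation: NR-SS, MRS, or CSI-RS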
In a particular configuration, it is also contemplated that the scheduling entity700includes means for determining beam failure conditions associated with detecting a beam failure; means for ascertaining a network configuration for a scheduled entity; and means for transmitting the network configuration to the scheduled entity. In one aspect, the aforementioned means may be the processor(s)704configured to perform the functions recited by the aforementioned means. In another aspect, the aforementioned means may be a circuit or any apparatus configured to perform the functions recited by the aforementioned means. Of course, in the above examples, the circuitry included in the processor704is merely provided as an example, and other means for carrying out the described functions may be included within various aspects of the present disclosure, including but not limited to the instructions stored in the computer-readable storage medium706, or any other suitable apparatus or means described herein and utilizing, for example, the processes and/or algorithms described in relation toFIG.9. Referring next toFIG.8, exemplary sub-components of network configuration circuitry742and network configuration software754are provided. As illustrated, network configuration circuitry742may comprise parameter sub-circuitry800and priority sub-circuitry810; whereas network configuration software754may comprise parameter instructions805and priority instructions815. In a particular implementation, it is contemplated that parameter sub-circuitry800and/or parameter instructions805are configured to determine at least one parameter to include in the network configuration. For instance, it is contemplated that the network configuration may specify at least one of a system frame number (SFN), a sub-frame indicator (SFI), a periodicity, or resource elements (REs) associated with the beam failure recovery resources. In a particular example, the number of REs configured per uplink beam may vary depending on the number of users in the beam. In another example, the network may configure more frequency or time resources in certain beams for larger payloads. In yet another example, it is contemplated that these resources may be in a region other than the random access channel (RACH). In a further aspect of the disclosure, it is contemplated that the network configuration may specify at least one of a quasi-collocation (QCL) or time relation between downlink beams and the beam failure recovery resources. For instance, it is contemplated that downlink beams may be based on one or more of a new radio synchronous signal (NR-SS), a mobility reference signal (MRS), or a channel state information reference signal (CSI-RS). In another aspect of the disclosure, the network configuration may specify link quality conditions in which the scheduled entity is to perform a forward handover or conditional handover to another cell. For instance, such handover may be performed if an estimated link quality corresponding to a hypothetical PDCCH BLER based on all or a subset of configured X RLM-RS resource(s) is below a Q_out threshold. It is further contemplated that parameter sub-circuitry800and/or parameter instructions805may be configured to determine various other parameters to include in the network configuration. For instance, parameter sub-circuitry800and/or parameter instructions805may be configured to have the network configuration include a timer parameter to facilitate a detection of the beam failure.
In another embodiment, parameter sub-circuitry800and/or parameter instructions805may be configured to have the network configuration include a candidate beam threshold parameter to facilitate beam failure recovery, wherein the candidate beam threshold parameter corresponds to a received power threshold associated with a candidate beam. In yet another embodiment, parameter sub-circuitry800and/or parameter instructions805may be configured to have the network configuration include a time window parameter to facilitate beam failure recovery, wherein the time window parameter corresponds to a time window for monitoring a response to the beam failure recovery request. It is also contemplated that priority sub-circuitry810and/or priority instructions815may be configured to determine a priority to include in the network configuration. Here, such priority may facilitate a scheduled entity's determination of one or more beam failure recovery resources to utilize to transmit the beam failure recovery request. For instance, a first priority may be given to a non-contention based channel based on the physical random access channel (PRACH), which uses a resource orthogonal to resources of other PRACH transmissions (FDM/TDM/CDM). For this example, if beams in first priority channels are not suitable, the scheduled entity may find a suitable beam in second priority uplink (UL) resources, which may be in a contention-free region. And finally, as a lesser priority, the scheduled entity may select a contention-based channel for the transmission of the beam failure recovery request. With respect to the particular priority included in the network configuration transmitted to the scheduled entity, it should be appreciated that such priority scheme may be based on any of various parameters. For instance, such priority may comprise selecting the beam failure recovery resources according to which of dedicated, contention-free, or common resources is first available. The priority may also comprise an exception if one or more beams belonging to a different priority are deemed to have a quality above a network configured threshold. Moreover, the scheduling entity700may configure the scheduled entity to (or the scheduled entity may be configured to autonomously) break a priority rule if one or more beams belonging to different priorities becomes significantly better than the other beams by an offset, or above a network configured threshold. In another aspect, priority sub-circuitry810and/or priority instructions815may be configured to have the network configuration specify a priority of using beams within the resources for beam failure recovery. The network configuration may also specify a threshold number of attempts for selecting a particular channel to transmit the beam failure recovery request (i.e., after which the scheduled entity is allowed to select any channel or those in the next priority for a beam failure recovery request transmission). Similarly, the network configuration may specify a threshold amount of time for selecting a particular channel to transmit the beam failure recovery request (i.e., after the expiry of the timer, the scheduled entity is allowed to select any channel or those in the next priority for a beam failure recovery request transmission).
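A minimal sketch of the attempt and time thresholds just described might look as follows, assuming hypothetical channel names and per-channel budgets; none of these identifiers come from a specification.

def allowed_channel(priority_order, attempts, elapsed_ms, max_attempts, max_time_ms):
    # priority_order: channels from highest to lowest priority, e.g.
    # ("cf_prach", "cf_uplink", "cb_prach"); attempts and elapsed_ms map each
    # channel to its transmission count and elapsed time; thresholds are
    # network configured per channel.
    for channel in priority_order:
        if (attempts.get(channel, 0) < max_attempts[channel]
                and elapsed_ms.get(channel, 0) < max_time_ms[channel]):
            return channel           # keep using this channel until its budget is spent
    return priority_order[-1]        # budgets exhausted: the next/any channel may be selected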
The network configuration may also specify a threshold amount of time between retransmissions of the beam failure recovery request (i.e., after each transmission, the scheduled entity shall back off based on a time pattern specified or provided by the network, for instance). Similarly, the network configuration may specify that the scheduled entity should slow down the (re)transmissions of such requests. InFIG.9, a flow chart is provided, which illustrates an exemplary scheduling entity process according to some aspects of the disclosure. As described below, some or all illustrated features may be omitted in a particular implementation within the scope of the present disclosure, and some illustrated features may not be required for implementation of all embodiments. In some examples, the process900may be carried out by the scheduling entity700illustrated inFIG.7. In some examples, the process900may be carried out by any suitable apparatus or means for carrying out the functions or algorithm described below. Process900begins at block910with the determining of beam failure conditions associated with detecting a beam failure, and continues at block920with the ascertaining of a network configuration for a scheduled entity which includes parameters associated with the beam failure conditions and parameters associated with determining one or more beam failure recovery resources. Process900then concludes at block930with the transmitting of the network configuration to the scheduled entity.
Exemplary Scheduled Entity Design
FIG.10is a conceptual diagram illustrating an example of a hardware implementation for an exemplary scheduled entity1000employing a processing system1014. In accordance with various aspects of the disclosure, an element, or any portion of an element, or any combination of elements may be implemented with a processing system1014that includes one or more processors1004. For example, the scheduled entity1000may be a user equipment (UE) as illustrated in any one or more ofFIGS.1,2, and/or5A-5G. The processing system1014may be substantially the same as the processing system714illustrated inFIG.7, including a bus interface1008, a bus1002, memory1005, a processor1004, and a computer-readable medium1006. Furthermore, the scheduled entity1000may include a user interface1012and a transceiver1010substantially similar to those described above inFIG.7. That is, the processor1004, as utilized in a scheduled entity1000, may be used to implement any one or more of the processes described below and illustrated in the various figures. In some aspects of the disclosure, the processor1004may include a detection circuitry1040configured for various functions, including, for example, to detect a beam failure of a beam used for communication between devices. For instance, detection circuitry1040may include sensors coupled to transceiver1010, wherein such sensors may be configured to detect when the signal quality or strength of a beam is below a predetermined threshold or when the beam is not detected at all. As illustrated, the processor1004may also include determination circuitry1042configured for various functions. For instance, the determination circuitry1042may be configured to determine one or more beam failure recovery resources to utilize to transmit a beam failure recovery request, wherein the beam failure recovery resources are determined based at least partially on a network configuration of the scheduled entity1000.
For instance, determination circuitry1042may include logic circuitry coupled to a memory component (e.g., memory1005and/or computer-readable medium1006), wherein the logic circuitry may be configured to determine one or more beam failure recovery resources based at least partially on a network configuration stored in memory1005and/or computer-readable medium1006. Here, it should be appreciated that determination circuitry1042may also include various other components (e.g., a timer, a counter, etc.) to facilitate additional aspects disclosed herein. The processor1004may further include transmission circuitry1044configured for various functions, including, for example, to transmit the beam failure recovery request via the beam failure recovery resources determined according to the network configuration. To this end, it should be appreciated that transmission circuitry1044may include logic circuitry coupled to transceiver1010, wherein such logic circuitry may be configured to determine if and when to transmit the beam failure recovery request via transceiver1010in accordance with the network configuration. Various other aspects for scheduled entity1000are also contemplated. For instance, scheduled entity1000may be configured to receive the network configuration via radio resource control (RRC) signaling. Within such embodiments, the configuration may be enabled/disabled using Layers 1 and 2. Referring back to the remaining components of scheduled entity1000, similar to processor704, processor1004is responsible for managing the bus1002and general processing, including the execution of software stored on the computer-readable medium1006. The software, when executed by the processor1004, causes the processing system1014to perform the various functions described below for any particular apparatus. The computer-readable medium1006and the memory1005may also be used for storing data that is manipulated by the processor1004when executing software. One or more processors1004in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The software may reside on a computer-readable medium1006. Similar to computer-readable medium706, computer-readable medium1006may be a non-transitory computer-readable medium comprising characteristics that are substantially similar. The computer-readable medium1006may reside in the processing system1014, external to the processing system1014, or distributed across multiple entities including the processing system1014. It should also be appreciated that, similar to computer-readable medium706, computer-readable medium1006may be embodied in a computer program product comprising characteristics that are substantially similar. In one or more examples, the computer-readable storage medium1006may include detection software1052configured for various functions, including, for example, to detect a beam failure of a beam used for communication between devices. As illustrated, the computer-readable storage medium1006may also include determination software1054configured for various functions.
For instance, the determination software1054may be configured to determine one or more beam failure recovery resources to utilize to transmit a beam failure recovery request, wherein the beam failure recovery resources are determined based at least partially on a network configuration of the scheduled entity1000. The computer-readable storage medium1006may further include transmission software1056configured for various functions, including, for example, to transmit the beam failure recovery request via the beam failure recovery resources determined according to the network configuration. In a particular configuration, it is also contemplated that the scheduled entity1000includes means for detecting a beam failure of a beam used for communication between devices; means for determining one or more beam failure recovery resources to utilize to transmit a beam failure recovery request; and means for transmitting the beam failure recovery request via the beam failure recovery resources. In one aspect, the aforementioned means may be the processor(s)1004configured to perform the functions recited by the aforementioned means. In another aspect, the aforementioned means may be a circuit or any apparatus configured to perform the functions recited by the aforementioned means. Of course, in the above examples, the circuitry included in the processor1004is merely provided as an example, and other means for carrying out the described functions may be included within various aspects of the present disclosure, including but not limited to the instructions stored in the computer-readable storage medium1006, or any other suitable apparatus or means described herein and utilizing, for example, the processes and/or algorithms described in relation toFIG.12. Referring next toFIG.11, exemplary sub-components of determination circuitry1042and determination software1054are provided. As illustrated, determination circuitry1042may comprise parameter sub-circuitry1100and priority sub-circuitry1110; whereas determination software1054may comprise parameter instructions1105and priority instructions1115. In a particular implementation, it is contemplated that the network configuration may specify any of various parameters associated with the beam failure recovery resources. For instance, it is contemplated that parameter sub-circuitry1100and/or parameter instructions1105are configured to determine at least one of a system frame number (SFN), a sub-frame indicator (SFI), a periodicity, or resource elements (REs) associated with the beam failure recovery resources based on parameters indicated in the network configuration. In a particular example, the number of REs configured per uplink beam may vary depending on the number of users in the beam. In another example, the network may configure more frequency or time resources in certain beams for larger payloads. In yet another example, it is contemplated that these resources may be in a region other than the random access channel (RACH). In a further aspect of the disclosure, it is contemplated that the network configuration may specify at least one of a quasi-collocation (QCL) or time relation between downlink beams and the beam failure recovery resources. For instance, it is contemplated that downlink beams may be based on one or more of a new radio synchronous signal (NR-SS), a mobility reference signal (MRS), or a channel state information reference signal (CSI-RS). 
In another aspect of the disclosure, the network configuration may specify link quality conditions in which the scheduled entity1000is to perform a forward handover or conditional handover to another cell. For instance, such handover may be performed if an estimated link quality corresponding to a hypothetical PDCCH BLER based on all or a subset of configured X RLM-RS resource(s) is below a Q_out threshold. It is further contemplated that parameter sub-circuitry1100and/or parameter instructions1105may be configured to determine various other parameters included in the network configuration. For instance, parameter sub-circuitry1100and/or parameter instructions1105may be configured to determine a timer parameter to facilitate a detection of the beam failure. In another embodiment, parameter sub-circuitry1100and/or parameter instructions1105may be configured to determine a candidate beam threshold parameter to facilitate beam failure recovery, wherein the candidate beam threshold parameter corresponds to a received power threshold associated with a candidate beam. In yet another embodiment, parameter sub-circuitry1100and/or parameter instructions1105may be configured to determine a time window parameter to facilitate beam failure recovery, wherein the time window parameter corresponds to a time window for monitoring a response to the beam failure recovery request. It is also contemplated that priority sub-circuitry1110and/or priority instructions1115may be configured to determine a priority associated with determining the one or more beam failure recovery resources to utilize to transmit the beam failure recovery request, wherein the priority sub-circuitry1110and/or priority instructions1115may be configured to determine the priority based on a priority indicated in the network configuration. For instance, a first priority may be given to a non-contention based channel based on the physical random access channel (PRACH), which uses a resource orthogonal to resources of other PRACH transmissions (FDM/TDM/CDM). For this example, if beams in first priority channels are not suitable, the scheduled entity1000may find a suitable beam in second priority uplink (UL) resources, which may be in a contention-free region. And finally, as a lesser priority, the scheduled entity1000may select a contention-based channel for the transmission of the beam failure recovery request. With respect to the particular priority included in the network configuration received by the scheduled entity1000, it should be appreciated that such priority may be based on any of various parameters. For instance, such priority may comprise selecting the beam failure recovery resources according to which of dedicated, contention-free, or common resources is first available. The priority may also comprise an exception if one or more beams belonging to a different priority are deemed to have a quality above a network configured threshold. Moreover, the scheduled entity1000may be configured to (or the scheduled entity may be configured to autonomously) break a priority rule if one or more beams belonging to different priorities becomes significantly better than the other beams by an offset, or above a network configured threshold. In another aspect, the priority of using beams within the resources for beam failure recovery may be specified by the network configuration.
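A hedged sketch of this priority-with-exception rule, assuming a generic quality metric (e.g., RSRP in dB) and illustrative offset and threshold values, might read as follows; all names here are hypothetical.

def select_recovery_beam(beams_by_priority, quality_db, offset_db=3.0, threshold_db=-100.0):
    # beams_by_priority: list of beam-id lists, highest priority first (the highest
    # priority set is assumed non-empty); quality_db maps a beam id to its measured
    # quality. A lower-priority beam "breaks" the priority rule if it beats the current
    # best beam by offset_db, or exceeds the network configured threshold_db.
    best = max(beams_by_priority[0], key=lambda b: quality_db[b])
    for beams in beams_by_priority[1:]:
        for b in beams:
            if quality_db[b] >= quality_db[best] + offset_db or quality_db[b] >= threshold_db:
                best = b
    return best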
The network configuration may also specify a threshold number of attempts for selecting a particular channel to transmit the beam failure recovery request (i.e., after which the scheduled entity1000is allowed to select any channel or those in the next priority for a beam failure recovery request transmission). Similarly, the network configuration may specify a threshold amount of time for selecting a particular channel to transmit the beam failure recovery request (i.e., after the expiry of the timer, the scheduled entity1000is allowed to select any channel or those in the next priority for a beam failure recovery request transmission). The network configuration may also specify a threshold amount of time between retransmissions of the beam failure recovery request (i.e., after each transmission, the scheduled entity1000shall back off based on a time pattern specified or provided by the network, for instance). Similarly, the network configuration may specify that the scheduled entity1000should slow down the (re)transmissions of such requests. InFIG.12, a flow chart is provided, which illustrates an exemplary scheduled entity process according to some aspects of the disclosure. As described below, some or all illustrated features may be omitted in a particular implementation within the scope of the present disclosure, and some illustrated features may not be required for implementation of all embodiments. In some examples, the process1200may be carried out by the scheduled entity1000illustrated inFIG.10. In some examples, the process1200may be carried out by any suitable apparatus or means for carrying out the functions or algorithm described below. Process1200begins at block1210with the detecting of a beam failure of a beam used for communication between devices, and continues at block1220with the determining of one or more beam failure recovery resources to utilize to transmit a beam failure recovery request. Here, the beam failure recovery resources are determined at block1220based at least partially on a network configuration of the scheduled entity. Process1200then concludes at block1230with the transmitting of the beam failure recovery request via the one or more beam failure recovery resources determined according to the network configuration at block1220. Several aspects of a wireless communication network have been presented with reference to an exemplary implementation. As those skilled in the art will readily appreciate, various aspects described throughout this disclosure may be extended to other telecommunication systems, network architectures and communication standards. By way of example, various aspects may be implemented within other systems defined by 3GPP, such as Long-Term Evolution (LTE), the Evolved Packet System (EPS), the Universal Mobile Telecommunication System (UMTS), and/or the Global System for Mobile (GSM). Various aspects may also be extended to systems defined by the 3rd Generation Partnership Project 2 (3GPP2), such as CDMA2000 and/or Evolution-Data Optimized (EV-DO). Other examples may be implemented within systems employing IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Ultra-Wideband (UWB), Bluetooth, and/or other suitable systems. The actual telecommunication standard, network architecture, and/or communication standard employed will depend on the specific application and the overall design constraints imposed on the system.
Within the present disclosure, the word “exemplary” is used to mean “serving as an example, instance, or illustration.” Any implementation or aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term “aspects” does not require that all aspects of the disclosure include the discussed feature, advantage or mode of operation. The term “coupled” is used herein to refer to the direct or indirect coupling between two objects. For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered coupled to one another—even if they do not directly physically touch each other. For instance, a first object may be coupled to a second object even though the first object is never directly physically in contact with the second object. The terms “circuit” and “circuitry” are used broadly, and are intended to include both hardware implementations of electrical devices and conductors that, when connected and configured, enable the performance of the functions described in the present disclosure, without limitation as to the type of electronic circuits, as well as software implementations of information and instructions that, when executed by a processor, enable the performance of the functions described in the present disclosure. One or more of the components, steps, features and/or functions illustrated inFIGS.1-12may be rearranged and/or combined into a single component, step, feature or function or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from novel features disclosed herein. The apparatus, devices, and/or components illustrated inFIGS.1-12may be configured to perform one or more of the methods, features, or steps described herein. The novel algorithms described herein may also be efficiently implemented in software and/or embedded in hardware. It is to be understood that the specific order or hierarchy of steps in the methods disclosed is an illustration of exemplary processes. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the methods may be rearranged. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented unless specifically recited therein. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. A phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a; b; c; a and b; a and c; b and c; and a, b and c.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.”
Throughout the drawings, like reference numerals will be understood to refer to like parts, components and structures.
DETAILED DESCRIPTION
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness. The terms and words used in the following description and claims are not limited to the bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the disclosure is provided for illustration purposes only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents. It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces. In embodiments of the disclosure to be described below, a hardware approach will be described as an example. However, since the embodiments of the disclosure include a technology using both hardware and software, the embodiments of the disclosure do not exclude a software-based approach. Embodiments of the disclosure provide an apparatus and a method for performing beam failure recovery (BFR) in a wireless communication system. More specifically, the disclosure provides a technique for providing uplink information and prioritizing a logical channel for the BFR in the wireless communication system. Terms indicating signals, terms indicating messages, terms indicating channels, terms indicating control information, terms indicating network entities, and terms indicating components of a device, which are used in the following descriptions, are for the sake of explanation. Accordingly, the disclosure is not limited to the terms to be described, and may use other terms having technically identical meaning. The disclosure provides embodiments using terms used in some communication standards (e.g., 3rd Generation Partnership Project (3GPP)) by way of example. Embodiments of the disclosure may be easily used in other communication systems. FIG.1illustrates a wireless communication system according to an embodiment of the disclosure. Referring toFIG.1, a base station110, a terminal120, and a terminal130are illustrated as some of the nodes which use a radio channel in the wireless communication system. WhileFIG.1depicts a single base station, another base station the same as or similar to the base station110may be further included. The base station110is a network infrastructure which provides radio access to the terminals120and130. The base station110has coverage defined as a geographical area, based on a signal transmission distance.
The base station110may be referred to as an access point (AP), an eNodeB (eNB), a 5th generation node (5G node), a 5G NodeB (gNodeB, gNB), a wireless point, a transmission/reception point (TRP), or other term having a technically equivalent meaning. The terminal120and the terminal130each are a device used by a user, and communicate with the base station110over a radio channel. In some cases, at least one of the terminal120and the terminal130may operate without the user's involvement. For example, at least one of the terminal120and the terminal130performs machine type communication (MTC) and may not be carried by the user. The terminal120and the terminal130each may be referred to as a user equipment (UE), a mobile station, a subscriber station, a remote terminal, a wireless terminal, a user device, or other term having a technically equivalent meaning. The base station110, the terminal120, and the terminal130may transmit and receive radio signals in a millimeter wave (mmWave) band (e.g., 28 GHz, 30 GHz, 38 GHz, 60 GHz). To improve channel gain, the base station110, the terminal120, and the terminal130may conduct beamforming. Herein, the beamforming may include transmit beamforming and receive beamforming. For example, the base station110, the terminal120, and the terminal130may apply directivity to a transmit signal or a receive signal. For doing so, the base station110and the terminals120and130may select serving beams112,113,121, and131through beam search or beam management. After the serving beams112,113,121, and131are selected, communications may be performed using resources which are quasi co-located (QCL) with resources which carry the serving beams112,113,121, and131. If large-scale properties of a channel which carries a symbol on a first antenna port may be inferred from a channel which carries a symbol on a second antenna port, the first antenna port and the second antenna port may be said to be QCLed. For example, the large-scale properties may include at least one of delay spread, Doppler spread, Doppler shift, average gain, average delay, and spatial receive parameter. FIG.2illustrates a base station in a wireless communication system according to an embodiment of the disclosure.FIG.2depicts a configuration of the base station110. A term such as ‘portion’ or ‘˜er’ indicates a unit for processing at least one function or operation, and may be implemented using hardware, software, or a combination of hardware and software. Referring toFIG.2, the base station includes a wireless communication unit210, a backhaul communication unit220, a storage unit230, and a control unit240. The wireless communication unit210may transmit and receive signals over a radio channel. For example, the wireless communication unit210performs a conversion function between a baseband signal and a bit string according to a physical layer standard of the system. For example, in data transmission, the wireless communication unit210generates complex symbols by encoding and modulating a transmit bit string. In addition, in data reception, the wireless communication unit210restores a receive bit string by demodulating and decoding a baseband signal. The wireless communication unit210up-converts the baseband signal to a radio frequency (RF) band signal, transmits it via an antenna, and down-converts an RF band signal received via an antenna to a baseband signal.
For doing so, the wireless communication unit210may include a transmit filter, a receive filter, an amplifier, a mixer, an oscillator, a digital to analog convertor (DAC), an analog to digital convertor (ADC), and the like. In addition, the wireless communication unit210may include a plurality of transmit and receive paths. Further, the wireless communication unit210may include at least one antenna array including a plurality of antenna elements. In view of hardware, the wireless communication unit210may include a digital unit and an analog unit, and the analog unit may include a plurality of sub-units according to an operating power and an operating frequency. The digital unit may include at least one processor (e.g., a digital signal processor (DSP)). As such, the wireless communication unit210transmits and receives the signals. Hence, whole or part of the wireless communication unit210may be referred to as a transmitter, a receiver, or a transceiver. In the following, the transmission and the reception over the radio channel embrace the above-stated processing of the wireless communication unit210. The backhaul communication unit220provides an interface for communicating with other nodes in the network. For example, the backhaul communication unit220converts a bit string transmitted from the base station to another node, for example, to another access node, another base station, an upper node, or a core network, to a physical signal, and converts a physical signal received from the other node to a bit string. The storage unit230stores a basic program for operating the base station, an application program, and data, such as configuration information. The storage unit230may include a volatile memory, a non-volatile memory, or a combination of a volatile memory and a non-volatile memory. The storage unit230provides the stored data in response to a request of the control unit240. The control unit240controls general operations of the base station. For example, the control unit240transmits and receives signals through the wireless communication unit210or the backhaul communication unit220. In addition, the control unit240records and reads data in and from the storage unit230. The control unit240may execute functions of a protocol stack requested by a communication standard. According to another embodiment of the disclosure, the protocol stack may be included in the wireless communication unit210. For doing so, the control unit240may include at least one processor. According to various embodiments of the disclosure, the control unit240may control the base station to carry out operations to be explained according to various embodiments. FIG.3illustrates a configuration of a terminal in a wireless communication system according to an embodiment of the disclosure.FIG.3depicts a configuration of the terminal120. A term such as ‘portion’ or ‘˜er’ indicates a unit for processing at least one function or operation, and may be implemented using hardware, software, or a combination of hardware and software. Referring toFIG.3, the terminal includes a communication unit310, a storage unit320, and a control unit330. The communication unit310may transmit and receive signals over a radio channel. For example, the communication unit310performs a conversion function between a baseband signal and a bit string according to a physical layer standard of the system. For example, in data transmission, the communication unit310generates complex symbols by encoding and modulating a transmit bit string.
Moreover, in data reception, the communication unit310restores a receive bit string by demodulating and decoding a baseband signal. In addition, the communication unit310up-converts the baseband signal to an RF band signal, transmits it via an antenna, and down-converts an RF band signal received via the antenna to a baseband signal. For example, the communication unit310may include a transmit filter, a receive filter, an amplifier, a mixer, an oscillator, a DAC, an ADC, and the like. Furthermore, the communication unit310may include a plurality of transmit and receive paths. Further, the communication unit310may include at least one antenna array including a plurality of antenna elements. In view of the hardware, the communication unit310may include a digital circuit and an analog circuit (e.g., an RF integrated circuit (RFIC)). Herein, the digital circuit and the analog circuit may be implemented as a single package. In addition, the communication unit310may include a plurality of RF chains. Further, the communication unit310may perform the beamforming. As such, the communication unit310transmits and receives the signals. Hence, whole or part of the communication unit310may be referred to as a transmitter, a receiver, or a transceiver. Hereafter, the transmission and the reception over the radio channel embrace the above-stated processing of the communication unit310. The storage unit320stores a basic program for operating the terminal, an application program, and data, such as configuration information. The storage unit320may include a volatile memory, a non-volatile memory, or a combination of a volatile memory and a non-volatile memory. The storage unit320provides the stored data according to a request of the control unit330. The control unit330controls general operations of the terminal. For example, the control unit330transmits and receives signals through the communication unit310. In addition, the control unit330records and reads data in and from the storage unit320. The control unit330may execute functions of a protocol stack required by a communication standard. For doing so, the control unit330may include at least one processor or microprocessor, or may be part of a processor. Part of the communication unit310and the control unit330may be referred to as a communication processor (CP). According to various embodiments of the disclosure, the control unit330may control the terminal to carry out operations to be explained according to various embodiments. FIGS.4A to4Cillustrate a configuration of a communication unit in a wireless communication system according to various embodiments of the disclosure.FIGS.4A,4B, and4Cdepict a detailed configuration of the wireless communication unit210ofFIG.2or the communication unit310ofFIG.3. More specifically,FIGS.4A,4B, and4Cdepict components for performing the beamforming, as part of the wireless communication unit210ofFIG.2or the communication unit310ofFIG.3. Referring toFIG.4A, the wireless communication unit210or the communication unit310includes an encoder and modulator402, a digital beamformer404, a plurality of transmit paths406-1through406-N, and an analog beamformer408. The encoder and modulator402performs channel encoding. For the channel encoding, at least one of a low density parity check (LDPC) code, a convolution code, and a polar code may be used. The encoder and modulator402generates modulation symbols through constellation mapping. The digital beamformer404beamforms a digital signal (e.g., the modulation symbols).
For doing so, the digital beamformer404multiplies the modulation symbols by beamforming weights. Herein, the beamforming weights are used to change an amplitude and a phase of the signal, and may be referred to as a precoding matrix or a precoder. The digital beamformer404outputs the digital-beamformed modulation symbols to the transmit paths406-1through406-N. In doing so, according to massive multiple-input multiple-output (MIMO) transmission, the modulation symbols may be multiplexed or the same modulation symbols may be fed to the transmit paths406-1through406-N. The transmit paths406-1through406-N convert the digital-beamformed digital signals to analog signals. For doing so, the transmit paths406-1through406-N each may include an inverse fast Fourier transform (IFFT) operator, a cyclic prefix (CP) adder, a DAC, and an up-converter. The CP adder is used for orthogonal frequency division multiplexing (OFDM), and may be excluded if another physical layer scheme (e.g., filter bank multi-carrier (FBMC)) is applied. That is, the transmit paths406-1through406-N provide independent signal processing for a plurality of streams generated through the digital beamforming. Notably, depending on the implementation, some of the components of the transmit paths406-1through406-N may be used in common. The analog beamformer408beamforms the analog signals. For doing so, the analog beamformer408multiplies the analog signals by the beamforming weights. Herein, the beamforming weights are used to change the amplitude and the phase of the signal. More specifically, the analog beamformer408may be configured as shown inFIG.4BorFIG.4C, according to a connection structure between the transmit paths406-1through406-N and the antennas. Referring toFIG.4B, signals inputted to the analog beamformer408are converted in phase/amplitude, amplified, and then transmitted via the antennas. In doing so, signals of each path are transmitted via different antenna sets, that is, antenna arrays. Signals inputted in a first path are converted by phase/amplitude converters412-1-1through412-1-M to signal strings having different or the same phase/amplitude, amplified by amplifiers414-1-1through414-1-M, and then transmitted via the antennas. Likewise, signals inputted in an N-th path are converted by phase/amplitude converters412-N-1through412-N-M to signal strings having different or the same phase/amplitude, amplified by amplifiers414-N-1through414-N-M, and then transmitted via the antennas. Referring toFIG.4C, signals inputted to the analog beamformer408are converted in phase/amplitude, amplified, and then transmitted via antennas. In doing so, signals of each path are transmitted via the same antenna set, that is, the same antenna array. Signals inputted in the first path are converted by the phase/amplitude converters412-1-1through412-1-M to signal strings having different or the same phase/amplitude, and amplified by the amplifiers414-1-1through414-1-M. Next, to transmit via a single antenna array, the amplified signals are summed by adders416-1-1through416-1-M based on the antenna element and then transmitted via the antennas. The independent antenna array is used per transmit path inFIG.4B, and the transmit paths share the single antenna array inFIG.4C. However, according to another embodiment of the disclosure, some transmit paths may use the independent antenna array, and the remaining transmit paths may share one antenna array.
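The weight arithmetic described above can be illustrated briefly. The following Python sketch models the digital beamforming of the digital beamformer404and the per-antenna phase/amplitude conversion of the analog beamformer408; the array sizes, weight values, and function names are illustrative assumptions, not the disclosed hardware.

```python
import numpy as np

def digital_beamform(symbols, precoder):
    """Digital beamforming (cf. digital beamformer 404): multiply the modulation
    symbols by complex beamforming weights (the precoding matrix), yielding one
    digital-beamformed stream per transmit path."""
    # symbols: (num_streams,) complex; precoder: (num_paths, num_streams) complex
    return precoder @ symbols

def analog_beamform(path_sample, phase_shifts, gains):
    """Analog beamforming (cf. converters 412 and amplifiers 414): per-antenna
    phase/amplitude conversion followed by amplification."""
    # phase_shifts: (num_antennas,) radians; gains: (num_antennas,) linear gain
    return gains * np.exp(1j * phase_shifts) * path_sample

# Illustrative sizes (assumptions): 2 streams, 4 transmit paths, 8 antenna elements.
rng = np.random.default_rng(0)
symbols = rng.standard_normal(2) + 1j * rng.standard_normal(2)
precoder = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
per_path = digital_beamform(symbols, precoder)            # one sample per transmit path
antenna_out = analog_beamform(per_path[0], np.linspace(0.0, np.pi, 8), np.ones(8))
```

In this model, the precoder corresponds to the digital weights applied before the transmit paths, and the phase/gain vectors correspond to the analog weights applied per antenna element after them.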
Further, according to yet another embodiment of the disclosure, by applying a switchable structure between the transmit paths and the antenna arrays, a structure which adaptively changes according to a situation may be used. With the advent of smartphones, use of wireless communication networks and portable electronic devices by users is growing exponentially, and, to accomplish a higher data rate, the 5G communication system considers implementation in a super-high frequency (mmWave) band (e.g., a 60 GHz band). In order to mitigate path loss and to increase the propagation distance in the super-high frequency band, beamforming, MIMO, full dimensional MIMO (FD-MIMO), array antenna, analog beam-forming, and large scale antenna techniques are discussed in the 5G communication system. Additionally, to improve the network of the system, the 5G communication system is developing techniques such as evolved small cell, advanced small cell, cloud radio access network (cloud RAN), ultra-dense network, device to device (D2D) communication, wireless backhaul, moving network, cooperative communication, coordinated multi-points (CoMP), and interference cancellation. Besides, the 5G communication system is working on hybrid frequency shift keying (FSK) and quadrature amplitude modulation (QAM) modulation (FQAM) and sliding window superposition coding (SWSC) as advanced coding modulation (ACM), and on filter bank multi carrier (FBMC), non-orthogonal multiple access (NOMA), and sparse code multiple access (SCMA) as advanced access technologies. Now, operations for improving the terminal's performance if beam performance degrades while a terminal and a base station transmit and receive information using multiple beams are explained according to various embodiments of the disclosure. Such beam performance degradation may be referred to as beam failure, and a process for recovering from it may be referred to as beam failure recovery (BFR). A radio resource control (RRC) connected terminal may perform wireless communication with the base station using one or more serving beams. Such a serving beam may be measured/observed and reported in association with a channel state information (CSI)-reference signal (RS) or a synchronization signal (SS) block (SSB). If the CSI-RS based serving beam is used, the network may configure, for the terminal, one or more unique preambles (mapped to different beams) and/or one or more contention free random access (RA) resources for the sake of the serving BFR, and conduct signal transmission. Herein, such configuration information may be transmitted in a specific information element, for example, BFR Config in a downlink RRC/media access control (MAC)/physical (PHY) signal provided from a specific network to the terminal. In doing so, the contention free RA resource may have a beam association which is received at the base station using the same beam direction as the configured unique CSI-RS. If the SSB based serving beam is used, the network may configure, for the terminal, one or more unique preambles (mapped to different beams) and/or one or more contention free RA resources for the sake of the serving BFR, and conduct signal transmission. Herein, such configuration information may be transmitted in a specific information element, for example, in BFR Config in a downlink RRC/MAC/PHY signal provided from a specific network to the terminal. In doing so, each contention free RA resource may have the beam association which is received at the base station using the same beam direction as the configured unique SSB.
If the measured channel quality of any CSI-RS or SSB beam associated with the configured contention free RA resource does not exceed a specific threshold BFCandidateBeamThreshold which is preset by the network, the terminal may perform the existing contention based RA using the contention based RA resource. If the measured channel quality of any CSI-RS or SSB beam associated with the configured contention free RA resource exceeds the specific threshold BFCandidateBeamThreshold which is preset by the network, the terminal may perform the contention free RA by selecting resources as follows.
The terminal may conduct the contention free RA by selecting the resource which may use the beam of the best channel quality.
The terminal may conduct the contention free RA for K-ary resources by selecting the K-ary resources in descending order of beam channel quality. At this time, K may be a value included in a downlink signal (RRC/MAC/PHY) which is configured and provided by the network.
It is noted that the resources for the contention based RA and specific beams (e.g., SSBs) associated therewith may be different from the resources for the contention free RA and specific beams (e.g., CSI-RSs) associated therewith. In this case, the terminal needs to measure only beams associated with the contention based RA resources and determine resources for transmitting the preamble for the contention based RA. As a result, if not using the contention free RA for the BFR (e.g., if the measured beam quality falls below the threshold or the resource is not allocated), the terminal may not provide the network with proper beam information (a CSI-RS identifier (ID) and the measurement value) due to the different beam association although the contention based RA is used. Notably, this may be addressed if the SSB associated with the contention based RA resource has a one-to-one relationship with a candidate CSI-RS which is considered by a specific terminal and it is known to both of the network and the terminal. However, this case may not occur frequently, and it should be noted that, if the contention based RA resource based on the SSB and the contention free RA resource based on the CSI-RS exist together, the correlation between the resources and the beams may be a many-to-one relationship (many CSI-RSs to one SSB, or one CSI-RS to many SSBs), in addition to the one-to-one relationship. Hence, if using the contention based RA, the terminal requires a method for providing accurate candidate beam information to the network. For doing so, the terminal may include its terminal information (e.g., a cell radio network temporary identifier (C-RNTI)) and beam information (e.g., a CSI-RS ID and a CSI-RS measurement) in an uplink message (e.g., Msg3) transmitted from the terminal, in response to a downlink RA preamble response message of the network in the RA procedure. Now, the disclosure provides embodiments for transmitting such information and embodiments for prioritizing a logical channel. FIG.5illustrates a flowchart of a terminal for performing a RA procedure in a wireless communication system according to an embodiment of the disclosure.FIG.5illustrates an operating method of the terminal120. Referring toFIG.5, in operation501the terminal may transmit a contention based RA channel (CB RACH) preamble. In operation503, the terminal may receive an RA response (RAR) in response to the transmitted preamble. In operation505, the terminal may include a C-RNTI MAC control element (MAC-CE) in a message to transmit (e.g., Msg3) using an uplink resource designated by the RAR.
In operation507, the terminal determines whether the ongoing CB RACH transmission is initiated by the BFR. If the CB RACH transmission is initiated by the BFR, the terminal determines whether there is an available CSI-RS resource (i.e., one having a measurement value over a specific threshold) in operation509. For example, the terminal determines that the ongoing CB RACH transmission is initiated by the BFR, and determines whether a suitable CSI-RS resource is available. If the CB RACH transmission is initiated by the BFR and the CSI-RS resource is available, the terminal includes a BFR MAC-CE including an ID of the CSI-RS resource in a subsequent uplink transmission in operation511. For example, the terminal may include the BFR MAC-CE including the ID of the CSI-RS resource in a Msg3 to transmit. In operation513, the terminal performs the uplink transmission. For example, the terminal may transmit the Msg3 using the resource allocated from the RAR. In the embodiment ofFIG.5, the Msg3 may have a structure ofFIG.6, including the C-RNTI MAC-CE and the BFR MAC-CE including the ID of the CSI-RS resource. FIG.6illustrates a Msg3 structure in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.6, the Msg3 may include a MAC subheader602, a C-RNTI MAC CE604, a MAC subheader606, and a BFR MAC CE608. At this time, logical channel prioritization for generating such a MAC CE may be defined as shown inFIG.7. FIG.7illustrates a flowchart of a terminal for generating a MAC CE in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.7, in operation701the terminal first adds a C-RNTI MAC-CE. In operation703, the terminal, if available, adds a BFR MAC CE. In operation705, the terminal adds other MAC CEs, if uplink grant remains. In other words, if capacity remains, the terminal may add other MAC-CEs. FIG.8illustrates a flowchart of a terminal for performing a RA procedure in a wireless communication system according to an embodiment of the disclosure.FIG.8illustrates an operating method of the terminal120. Referring toFIG.8, in operation801the terminal may transmit a CB RACH preamble. In operation803, the terminal may receive an RAR in response to the transmitted preamble. Next, the terminal may include a specific MAC-CE in a Msg3 to transmit using an uplink resource designated by the RAR. For doing so, the terminal determines whether the ongoing CB RACH transmission is initiated by the BFR in operation805. If the CB RACH transmission is initiated by the BFR, the terminal determines whether there is an available CSI-RS resource (i.e., one having a measurement value over a specific threshold) in operation807. For example, the terminal determines whether the ongoing CB RACH transmission is initiated by the BFR, and determines whether a suitable CSI-RS resource is available. If the CB RACH transmission is not initiated by the BFR or the CSI-RS resource is not available, the terminal includes a C-RNTI MAC-CE in the uplink transmission in operation809. By contrast, if the CB RACH transmission is initiated by the BFR and the CSI-RS resource is available, the terminal includes a BFR MAC-CE including the ID of the CSI-RS resource in the uplink transmission in operation811. In operation813, the terminal performs the uplink transmission. For example, the terminal may transmit a Msg3 using the resource allocated from the RAR.
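The Msg3 assembly of FIG.5 and the prioritization of FIG.7 can be summarized in a short sketch. The following Python fragment is a minimal model under assumptions: the tuple-based MAC-CE representation, function names, and threshold value are hypothetical, and a real implementation would operate on encoded MAC subheaders and CEs.

```python
from dataclasses import dataclass, field

@dataclass
class Msg3:
    mac_ces: list = field(default_factory=list)   # ordered list of MAC CEs

def build_msg3_fig5(bfr_initiated: bool, csi_rs_measurements: dict, threshold: float) -> Msg3:
    """Sketch of operations 505 to 511 of FIG. 5 and the prioritization of FIG. 7:
    the C-RNTI MAC-CE is always added first; a BFR MAC-CE carrying the best
    CSI-RS ID is added when the CB RACH was initiated by BFR and a suitable
    CSI-RS resource (measurement above the threshold) exists."""
    msg3 = Msg3()
    msg3.mac_ces.append(("C-RNTI MAC-CE",))                      # operation 505 / FIG. 7 operation 701
    if bfr_initiated:                                            # operation 507
        suitable = {rs: q for rs, q in csi_rs_measurements.items() if q > threshold}
        if suitable:                                             # operation 509
            best_id = max(suitable, key=suitable.get)
            msg3.mac_ces.append(("BFR MAC-CE", best_id))         # operation 511 / FIG. 7 operation 703
    # Other MAC CEs would follow here if uplink grant remains (FIG. 7 operation 705).
    return msg3

# Example: BFR-initiated CB RACH; CSI-RS 7 exceeds the (assumed) quality threshold.
print(build_msg3_fig5(True, {3: -110.0, 7: -95.0}, threshold=-100.0).mac_ces)
```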
In the embodiment ofFIG.8, if all the conditions are satisfied, that is, in operation811, the BFR MAC-CE included in the Msg3 may be defined as shown inFIG.9. FIG.9illustrates a BFR MAC-CE structure included in a Msg3 in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.9, the MAC-CE may include a MAC subheader902and a BFR MAC CE904. In the embodiment ofFIG.8, if at least one condition is not satisfied, that is, in operation809, the C-RNTI MAC-CE included in the Msg3 may be defined as shown inFIG.10. FIG.10illustrates a C-RNTI MAC-CE structure included in a Msg3 in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.10, the MAC-CE may include a MAC subheader1002and a C-RNTI MAC CE1004. Logical channel prioritization for generating such a MAC CE may be defined as shown inFIG.11. FIG.11illustrates a flowchart of a terminal for generating a MAC CE in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.11, in operation1101the terminal first adds one of a C-RNTI MAC-CE or a BFR MAC-CE according to a condition. In operation1103, if uplink grant remains, the terminal adds other MAC CEs. In other words, if capacity remains, the terminal may add other MAC-CEs. FIG.12illustrates a flowchart of a terminal for performing a RA procedure in a wireless communication system according to an embodiment of the disclosure.FIG.12illustrates an operating method of the terminal120. Referring toFIG.12, in operation1201the terminal may transmit a CB RACH preamble. In operation1203, the terminal may receive an RAR in response to the transmitted preamble. Next, the terminal may include a specific MAC-CE in a Msg3 to transmit using an uplink resource designated by the RAR. For doing so, the terminal determines whether the ongoing CB RACH transmission is initiated by the BFR in operation1205. If the CB RACH transmission is initiated by the BFR, the terminal determines whether there is an available CSI-RS resource (i.e., one having a measurement value over a specific threshold) in operation1207. For example, the terminal determines whether the ongoing CB RACH transmission is initiated by the BFR, and determines whether a suitable CSI-RS resource is available. If the CB RACH transmission is not initiated by the BFR, the terminal includes a C-RNTI MAC-CE in the uplink transmission in operation1209. For example, the terminal may add the C-RNTI MAC-CE in the Msg3. If the ongoing CB RACH transmission is initiated by the BFR and no CSI-RS resource is available (i.e., none having the measurement value over the threshold), the terminal includes a BFR MAC-CE including the C-RNTI without CSI-RS information in the uplink transmission in operation1211. For example, the terminal may include the BFR MAC-CE including only the C-RNTI without the CSI-RS information, in the Msg3. If the ongoing CB RACH transmission is initiated by the BFR and a CSI-RS resource is available (i.e., one having the measurement value over the threshold), the terminal includes a BFR MAC-CE including both of the CSI-RS information and the C-RNTI, in the uplink transmission in operation1213. For example, the terminal may include the BFR MAC-CE including both of the CSI-RS information and the C-RNTI, in the Msg3. In operation1215, the terminal performs the uplink transmission. For example, the terminal may transmit the Msg3 using a resource allocated from the RAR.
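The three-way branch of FIG.12 reduces to a small selection function. The sketch below is illustrative only; the tuple-based MAC-CE representation and parameter names are assumptions, and, as noted in the comments, the FIG.16 variant described later maps the same branches onto Type1 and Type2 BFR MAC-CEs.

```python
def select_msg3_mac_ce(bfr_initiated: bool, best_csi_rs_id, c_rnti: int):
    """Sketch of the three-way branch of FIG. 12 (operations 1205 to 1213).
    best_csi_rs_id is the ID of a CSI-RS whose measurement exceeds the
    threshold, or None when no such resource is available."""
    if not bfr_initiated:
        return ("C-RNTI MAC-CE", c_rnti)               # operation 1209
    if best_csi_rs_id is None:
        return ("BFR MAC-CE", c_rnti, None)            # operation 1211: C-RNTI, no CSI-RS info
    return ("BFR MAC-CE", c_rnti, best_csi_rs_id)      # operation 1213: C-RNTI + CSI-RS info

# In the FIG. 16 variant, the last two returns correspond to a Type2 and a
# Type1 BFR MAC-CE, respectively; the branching condition is identical.
assert select_msg3_mac_ce(False, 7, 0x4601)[0] == "C-RNTI MAC-CE"
assert select_msg3_mac_ce(True, None, 0x4601)[2] is None
assert select_msg3_mac_ce(True, 7, 0x4601)[2] == 7
```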
In the embodiment ofFIG.12, the BFR MAC-CE included in the Msg3 while transmitting the RACH due to the BFR may be defined as shown inFIG.13. FIG.13illustrates a BFR MAC-CE structure included in a Msg3 in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.13, the Msg3 may include a MAC subheader1302and a BFR MAC-CE1304. Herein, the Msg3 may include a 1-bit indicator which indicates whether the corresponding BFR MAC-CE includes the CSI-RS information (e.g., ID). In the embodiment ofFIG.12, if any condition is not satisfied, that is, in operation1209, the C-RNTI MAC-CE included in the Msg3 may be defined as shown inFIG.14. FIG.14illustrates a C-RNTI MAC-CE structure included in a Msg3 in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.14, the Msg3 may include a MAC subheader1402and a C-RNTI MAC CE1404. At this time, the logical channel prioritization for generating such a MAC CE may be defined as shown inFIG.15. FIG.15illustrates a flowchart of a terminal for generating a MAC CE in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.15, in operation1501the terminal first adds one of a C-RNTI MAC-CE or a BFR MAC-CE according to a condition. In operation1503, if uplink grant remains, the terminal adds other MAC CEs. In other words, if capacity remains, the terminal may add other MAC-CEs. FIG.16illustrates a flowchart of a terminal for performing a RA procedure in a wireless communication system according to an embodiment of the disclosure.FIG.16illustrates an operating method of the terminal120. Referring toFIG.16, in operation1601the terminal may transmit a CB RACH preamble. In operation1603, the terminal may receive an RAR in response to the transmitted preamble. Next, the terminal may include a specific MAC-CE in a Msg3 to transmit using an uplink resource designated by the RAR. For doing so, the terminal determines whether the ongoing CB RACH transmission is initiated by the BFR in operation1605. If the CB RACH transmission is initiated by the BFR, the terminal determines whether there is an available CSI-RS resource (i.e., one having a measurement value over a specific threshold) in operation1607. For example, the terminal determines whether the ongoing CB RACH transmission is initiated by the BFR, and whether a suitable CSI-RS resource is available. If the CB RACH transmission is not initiated by the BFR, the terminal includes a C-RNTI MAC-CE in the uplink transmission in operation1609. For example, the terminal may include the C-RNTI MAC-CE in the Msg3. If the ongoing CB RACH transmission is initiated by the BFR and no CSI-RS resource is available (i.e., none having the measurement value over the threshold), the terminal includes a Type2 BFR MAC-CE in the uplink transmission in operation1611. For example, the terminal may include the Type2 BFR MAC-CE in the Msg3 to transmit. If the CB RACH transmission is initiated by the BFR and a CSI-RS resource is available (i.e., one having the measurement value over the threshold), the terminal includes a Type1 BFR MAC-CE in the uplink transmission in operation1613. For example, the terminal may include the Type1 BFR MAC-CE in the Msg3. In operation1615, the terminal performs the uplink transmission. For example, the terminal may transmit the Msg3 using a resource allocated from the RAR. In the embodiment ofFIG.16, the Type1 BFR MAC-CE included in the Msg3 while transmitting the RACH may be defined as shown inFIG.17.
FIG.17illustrates a Type1 BFR MAC-CE structure included in a Msg3 in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.17, the Msg3 may include a MAC subheader1702and a Type1 BFR MAC CE1704. Herein, the corresponding BFR MAC-CE may include CSI-RS information (e.g., ID) and a C-RNTI. In the embodiment ofFIG.16, the Type2 BFR MAC-CE included in the Msg3 while transmitting the RACH may be defined as shown inFIG.18. FIG.18illustrates a Type2 BFR MAC-CE structure included in a Msg3 in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.18, the Msg3 may include a MAC subheader1802and a Type2 BFR MAC CE1804. Herein, the corresponding BFR MAC-CE may include a C-RNTI but not CSI-RS information (e.g., ID). In the embodiment ofFIG.16, if any condition is not satisfied, that is, in operation1609, the C-RNTI MAC-CE included in the Msg3 may be defined as shown inFIG.19. FIG.19illustrates a C-RNTI MAC-CE structure included in a Msg3 in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.19, the Msg3 may include a MAC subheader1902and a C-RNTI MAC CE1904. At this time, the logical channel prioritization for generating such a MAC CE may be defined as shown inFIG.20. FIG.20illustrates a flowchart of a terminal for generating a MAC CE in a wireless communication system according to an embodiment of the disclosure. Referring toFIG.20, in operation2001the terminal first adds one of a BFR MAC-CE Type1 or a BFR MAC-CE Type2 according to a condition. In operation2003, if uplink grant remains, the terminal adds other MAC CEs. In other words, if capacity remains, the terminal may add other MAC-CEs. According to embodiments as described above, the terminal may transmit the message including the BFR related information during the RA procedure. In response, the base station may receive the message including the BFR related information. From the message including the BFR related information, the base station may obtain the BFR related information and perform the BFR procedure. According to another embodiment of the disclosure, the network may deliver specific timer information to the terminal in advance using a downlink signal (e.g., an RRC signal, a MAC signal, a PHY signal, and the like). Hence, the terminal may determine whether to perform the BFR. The timer may start if the terminal detects a beam failure problem. The beam failure problem may be detected if one or more indications/incidents occur within a specific time at the MAC/PHY/RRC. The terminal may scan a new candidate beam before the timer expires. If the new candidate beam is scanned before the timer expires, a BFR attempt may be performed. Alternatively, even if the new candidate beam is scanned before the timer expires, the terminal does not perform any BFR attempt, for example, the contention free RA and/or the contention based RA, and may scan a better beam for a defined time. If the new candidate beam is not scanned before the timer expires, the terminal does not perform any BFR attempt, for example, the contention free RA and/or the contention based RA. If the timer expires, the terminal may operate as follows. If the new candidate beam was scanned before the timer expired, the terminal may perform the contention free RA or the contention based RA using the new beam as stated above.
If the new candidate beam is not scanned before the timer expires, the terminal may terminate every BFR attempt, initialize related parameters, and terminate related operations. Alternatively, if the new candidate beam is not scanned before the timer expires, the terminal may terminate every BFR attempt, declare radio link failure (RLF), and initiate a cell reselection procedure. Now, embodiments relating to discontinuous reception (DRX) operation paging occasion (PO) configuration are explained. During the DRX operation, the terminal may monitor one PO per DRX cycle. To determine such a PO, a rule for determining a frame to be used as a reference (hereinafter a ‘reference frame’) and the PO is required. The disclosure provides embodiments for defining the rule based on the number of SSBs and control resource set (CORESET) configuration information. The reference frame may be determined based on Equation 1.

(SFN + offset) mod T = (T div N) * (UE_ID mod N)

offset: 0 for nB >= T; 0 . . . 1 for nB = T/2; 0 . . . 3 for nB = T/4; 0 . . . 7 for nB = T/8; and 0 . . . 15 for nB = T/16  Equation 1

In Equation 1, SFN denotes a system frame number, offset denotes an offset for the reference frame, T denotes the DRX cycle of the terminal, N denotes the minimum value among T and nB, UE_ID denotes identification information of the terminal, and nB denotes a parameter configured by system information. At least one of the parameters may be configured by the system information. For example, T may be determined as the smaller value among a terminal unique DRX value and a default DRX value which is broadcast as the system information, if an upper layer grants. If the terminal unique DRX value is not configured, the terminal may determine T as the default DRX value which is broadcast as the system information. UE_ID may be defined as ‘international mobile subscriber identity (IMSI) mod 1024’. The terminal may determine an index i_s based on Equation 2.

i_s = floor(UE_ID / N) mod Ns

Ns = max(1, nB / T); N = min(T, nB)  Equation 2

In Equation 2, i_s denotes the index for indicating the PO to be monitored by the terminal, UE_ID denotes the identification information of the terminal, N denotes the minimum value among T and nB, T denotes the DRX cycle of the terminal, and nB denotes the parameter configured by the system information. For example, if i_s is 0, the terminal monitors a first PO. If i_s is 1, the terminal monitors a second PO. If i_s is 2, the terminal monitors a third PO. If i_s is 3, the terminal monitors a fourth PO. The network may configure and broadcast a paging search space including Monitoring-periodicity-PDCCH-slot, Monitoring-offset-PDCCH-slot, and Monitoring-symbols-PDCCH-within-slot, where PDCCH denotes a physical downlink control channel, in the system information. The terminal determines a PDCCH monitoring occasion based on Monitoring-periodicity-PDCCH-slot, Monitoring-offset-PDCCH-slot, and Monitoring-symbols-PDCCH-within-slot. If Equation 3 is satisfied, the PDCCH monitoring occasion exists in a slot X in a radio frame Y.

(Y * (number of slots in a radio frame) + X − Monitoring-offset-PDCCH-slot) mod (Monitoring-periodicity-PDCCH-slot) = 0  Equation 3

In Equation 3, Y denotes a radio frame number including the PDCCH monitoring occasion, X denotes a slot number including the PDCCH monitoring occasion, Monitoring-offset-PDCCH-slot denotes an offset for the PDCCH monitoring, and Monitoring-periodicity-PDCCH-slot denotes a period for the PDCCH monitoring.
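Before turning to the start symbol and occasion indexing below, Equations 1 to 3 can be exercised together in a short sketch. The following Python fragment is a direct transcription of the three formulas; the parameter values in the example are illustrative assumptions.

```python
def is_reference_frame(sfn: int, ue_id: int, T: int, nB: int, offset: int = 0) -> bool:
    """Equation 1: (SFN + offset) mod T == (T div N) * (UE_ID mod N)."""
    N = min(T, nB)
    return (sfn + offset) % T == (T // N) * (ue_id % N)

def po_index(ue_id: int, T: int, nB: int) -> int:
    """Equation 2: i_s = floor(UE_ID / N) mod Ns, with Ns = max(1, nB / T)."""
    N = min(T, nB)
    Ns = max(1, nB // T)
    return (ue_id // N) % Ns

def is_monitoring_occasion(Y: int, X: int, slots_per_frame: int,
                           offset_slot: int, periodicity_slot: int) -> bool:
    """Equation 3: whether slot X of radio frame Y carries a PDCCH monitoring occasion."""
    return (Y * slots_per_frame + X - offset_slot) % periodicity_slot == 0

# Illustrative values (assumptions): T = 128 frames, nB = 64, UE_ID = IMSI mod 1024.
ue_id = 123456789 % 1024
assert po_index(ue_id, T=128, nB=64) == 0      # Ns = max(1, 64 // 128) = 1, so i_s is always 0
# FIG. 21 style check: periodicity 4 slots, offset 0, 10 slots per frame.
assert is_monitoring_occasion(Y=0, X=4, slots_per_frame=10, offset_slot=0, periodicity_slot=4)
assert is_monitoring_occasion(Y=1, X=2, slots_per_frame=10, offset_slot=0, periodicity_slot=4)
```

The two final asserts reproduce the occasion positions listed in Table 1 below (SFN 0 slot 4, and SFN 1 slot 2).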
A start symbol of the PDCCH monitoring occasion of the slot X is given as Monitoring-symbols-PDCCH-within-slot. A length (e.g., in symbol units) of the PDCCH monitoring occasion may be given in the CORESET in association with the search space. According to such paging search space configuration, the terminal determines a first PDCCH monitoring occasion. Herein, the PDCCH monitoring occasions are sequentially indexed from zero (0) within a corresponding reference frame. The first PO (e.g., the PO corresponding to i_s=0) consists of the 0th through (S−1)th PDCCH monitoring occasions, where S denotes the number of SSBs. The second PO (e.g., the PO corresponding to i_s=1) consists of the S-th through (2S−1)th PDCCH monitoring occasions. The third PO (e.g., the PO corresponding to i_s=2) consists of the 2S-th through (3S−1)th PDCCH monitoring occasions. Subsequent POs are indexed in the same manner. FIG.21illustrates a configuration of a PDCCH monitoring occasion in a wireless communication system according to an embodiment of the disclosure.FIG.22illustrates a paging occasion (PO) monitored by a terminal in a wireless communication system according to an embodiment of the disclosure. Referring toFIGS.21and22, Monitoring-periodicity-PDCCH-slot is 4, Monitoring-offset-PDCCH-slot is 0, Monitoring-symbols-PDCCH-within-slot is 01000000000000, and the PDCCH CORESET length in symbols is 4. Referring toFIGS.21and22, the reference frame determined by the base station is SFN #0, and the determined i_s is 0. In addition, the number of SSBs is 4 (i.e., S=4). In this case, a first PDCCH monitoring occasion ranges from symbols 2 to 5 in slot 0. A next PDCCH monitoring occasion ranges from symbols 2 to 5 in slot 4. Similarly, subsequent PDCCH monitoring occasions may be determined. Since the index i_s of the terminal is 0, the terminal observes the first PO including the PDCCH monitoring occasions 0, 1, 2, and 3 as shown inFIG.22. In brief, the PDCCH monitoring occasions may be defined as shown in Table 1.

TABLE 1
PDCCH monitor occasion 0 = SFN 0, slot 0, symbols 2 to 5
PDCCH monitor occasion 1 = SFN 0, slot 4, symbols 2 to 5
PDCCH monitor occasion 2 = SFN 0, slot 8, symbols 2 to 5
PDCCH monitor occasion 3 = SFN 1, slot 2, symbols 2 to 5

FIG.23illustrates a configuration of a PDCCH monitoring occasion in a wireless communication system according to an embodiment of the disclosure.FIG.24illustrates a PO monitored by a terminal in a wireless communication system according to an embodiment of the disclosure. Referring toFIGS.23and24, as another example of the paging search space configuration, Monitoring-periodicity-PDCCH-slot is 1, Monitoring-offset-PDCCH-slot is 0, Monitoring-symbols-PDCCH-within-slot is 01000000000000, and the PDCCH CORESET length in symbols is 4. In the embodiment ofFIG.23, the reference frame determined by the base station is SFN #0, and the determined i_s is 0. In addition, the number of SSBs is 6 (i.e., S=6). In this case, a first PDCCH monitoring occasion ranges from symbols 2 to 5 in slot 0. A next PDCCH monitoring occasion ranges from symbols 2 to 5 in slot 1. Similarly, subsequent PDCCH monitoring occasions may be determined. Since the index i_s of the terminal is 0, the terminal observes the first PO including the PDCCH monitoring occasions 0, 1, 2, 3, 4, and 5 as shown inFIG.24. In brief, the PDCCH monitoring occasions may be defined as shown in Table 2.
TABLE 2
PDCCH monitor occasion 0 = SFN 0, slot 0, symbols 2 to 5
PDCCH monitor occasion 1 = SFN 0, slot 1, symbols 2 to 5
PDCCH monitor occasion 2 = SFN 0, slot 2, symbols 2 to 5
PDCCH monitor occasion 3 = SFN 0, slot 3, symbols 2 to 5
PDCCH monitor occasion 4 = SFN 0, slot 4, symbols 2 to 5
PDCCH monitor occasion 5 = SFN 0, slot 5, symbols 2 to 5

Table 3 shows examples of the PO based on the values of i_s and Ns.

TABLE 3
Ns = 1: i_s = 0 corresponds to the 1st PO, i.e., the 0th to (S−1)th PDCCH monitoring occasions; i_s = 1, 2, and 3 are N/A.
Ns = 2: i_s = 0 corresponds to the 1st PO, i.e., the 0th to (S−1)th PDCCH monitoring occasions; i_s = 1 corresponds to the 2nd PO, i.e., the S-th to (2S−1)th PDCCH monitoring occasions; i_s = 2 and 3 are N/A.
Ns = 4: i_s = 0 corresponds to the 1st PO, i.e., the 0th to (S−1)th PDCCH monitoring occasions; i_s = 1 corresponds to the 2nd PO, i.e., the S-th to (2S−1)th PDCCH monitoring occasions; i_s = 2 corresponds to the 3rd PO, i.e., the 2S-th to (3S−1)th PDCCH monitoring occasions; i_s = 3 corresponds to the 4th PO, i.e., the 3S-th to (4S−1)th PDCCH monitoring occasions.

Now, embodiments for including information of on-demand system information reception in the system information are described. Hereafter, a resource allocation method for requesting the on-demand system information, to be recognized by the terminal to receive the on-demand system information, is explained. The terminal may request the on-demand system information from the network using the contention based or contention free RA. The disclosure provides more general contention free terminal operations, and an efficient network configuration method.

On-demand system information (SI) request based on RA preamble (Msg1)

The terminal may request the on-demand system information which is required by the terminal, by transmitting an RA preamble to the network. In doing so, as many RA configuration parameters as the parameter maxSI-Message, which indicates a maximum number of on-demand system information messages provided from the network, may be configured for the terminal, as shown in Table 4. The parameter maxSI-Message may be configured by the network using an RRC message, a MAC message, or a PHY message.

TABLE 4
SI-Request-Config ::= SEQUENCE (SIZE (1..maxSI-Message)) OF SI-Request-Resources

If the list includes only one configuration entry, the corresponding configuration may be commonly used for all of the on-demand system information messages provided from the network. Otherwise, the respective configurations may be sequentially applied one-to-one to the on-demand system information messages in schedulingInfoList. The schedulingInfoList includes a list of system information messages supported by the cell, and may include transmission configuration information, such as periods, mapped system information blocks (SIBs), and on-demand SI message broadcast status. SchedulingInfoList and SI-request-config may be broadcast through SIB1, or may be included in another physical broadcast channel (PBCH) signal. Alternatively, schedulingInfoList and SI-request-config may be included in an RRC signal which is received and configured when the terminal accesses the network, or may be configured using another SIB or another MAC or PHY signal. SI-Request-Resources may be configured based on a preamble index list and an SSB occasion mask index as shown in Table 5.

TABLE 5
SI-Request-Resources ::= SEQUENCE {
ra-PreambleIndexList SEQUENCE (SIZE (1..maximum number of SSB per Rach Occasion)) OF INTEGER (0..63),
ra-ssb-OccasionMaskIndex INTEGER (0..15) OPTIONAL
}

With the configured schedulingInfoList and SI-request-config, the terminal may request the on-demand system information from the network in a manner, to be described, based on ra-preambleindexlist of SI-request-resources and # of SSBs per RACH occasion in the RACH configuration of the system information.
a) If # of SSBs per RACH occasion (N) is smaller than 1, the size of ra-preambleindexlist is 1, and the terminal recognizes one-to-one mapping relationships between the preambles of ra-preambleindexlist and the SSB indexes associated with the RACH occasions.
b) If # of SSBs per RACH occasion (N) is greater than or equal to 1, the size of ra-preambleindexlist is equal to # of SSBs per RACH occasion (N), and the terminal recognizes one-to-N mapping relationships between the preambles of ra-preambleindexlist and the SSB indexes associated with the RACH occasions. At this time, the i-th preamble of ra-preambleindexlist, which is the preamble index list, may be linked to each SSB index as follows: mod (SSB_index, # of preambles in the list) = i−1.
Or,
c) If # of SSBs per RACH occasion (N) is smaller than or equal to 1, the size of ra-preambleindexlist is 1, and the terminal recognizes one-to-one mapping relationships between the preambles of ra-preambleindexlist and the SSB indexes associated with the RACH occasions.
d) If # of SSBs per RACH occasion (N) is greater than 1, the size of ra-preambleindexlist is equal to # of SSBs per RACH occasion (N), and the terminal recognizes one-to-N mapping relationships between the preambles of ra-preambleindexlist and the SSB indexes associated with the RACH occasions. At this time, the i-th preamble of ra-preambleindexlist may be linked to each SSB index as follows: mod (SSB_index, # of preambles in the list) = i−1.
Using such rules, the terminal may know the association between the preambles of ra-preambleindexlist and the SSB indexes as shown in Table 6.

TABLE 6
1st preamble in the list corresponds to SSB Index 0, N, 2N, 3N, and so on.
2nd preamble in the list corresponds to SSB Index 1, N + 1, 2N + 1, 3N + 1, and so on.
3rd preamble in the list corresponds to SSB Index 2, N + 2, 2N + 2, 3N + 2, and so on.
To generalize: the i-th preamble in the list corresponds to SSB index j * N + (i − 1), where j = 0, 1, 2, and so on.

In another embodiment of the disclosure, the terminal may define the number of the preambles in ra-preambleindexlist as N, and apply the association of the SSB index. In another embodiment of the disclosure, the terminal may define the number of messages in maxSI-Message as N, and apply the association of the SSB index. Suppose that SI-Request-Config of Table 4 includes more than one configuration but fewer configurations than maxSI-Message. Provided that the number of on-demand system information messages in schedulinginfolist is N1 and the number of SI-Request-Resources in SI-Request-Config is N2, the corresponding configurations may be applied to the on-demand system information in schedulingInfoList as follows.
Method 1: The terminal may apply the configurations in groups of N1/N2. For example, if N1 is 6 and N2 is 3, the terminal may sequentially group the on-demand system information messages in schedulinginfolist by N1/N2=6/3=2, and apply them to the SI-Request-Resources in SI-Request-Config. If N1/N2 does not divide evenly to an integer, the terminal may round down.
For example, if N1/N2=7/3=2.333, the terminal may apply the on-demand system information messages in groups of two.
Method 2: The terminal may sequentially apply the SI-Request-Resources of the N2-ary SI-Request-Config to N2 on-demand system information messages among the on-demand system information messages in the N1-ary schedulinginfolist, and commonly apply the first or the last SI-Request-Resources of SI-Request-Config to the remaining (N1−N2)-ary on-demand system information messages.
Method 3: The terminal may sequentially apply the SI-request-resources of the N2-ary SI-Request-Config to N2 on-demand system information messages among the on-demand system information messages in the N1-ary schedulinginfolist, and sequentially apply the SI-Request-Resources of the N2-ary SI-Request-Config to the remaining (N1−N2)-ary on-demand system information messages, thus configuring N1 in total. For example, if N1=6 and N2=4, the terminal may sequentially apply the SI-request-resources of SI-request-config to the on-demand system information messages of the first four entries of schedulinginfolist, and sequentially apply the first two SI-request-resources of SI-request-config to the on-demand system information messages of the other two entries of schedulinginfolist.
Method 4: A list of the on-demand system information, sImessageIndexList of schedulinginfolist, to which the corresponding configuration is applied, may be included in SI-request-config, as shown in Table 7.

TABLE 7
SI-Request-Resources ::= SEQUENCE {
ra-PreambleIndexList SEQUENCE (SIZE (1..maximum number of SSB per Rach Occasion)) OF INTEGER (0..63),
ra-ssb-OccasionMaskIndex INTEGER (0..15) OPTIONAL,
sImessageIndexList SEQUENCE (SIZE (1..maxSI-Message)) OF INTEGER (0..maxSI-Message-1)
}

According to an embodiment of the disclosure, the system information message may be received as below. The system information message may include system information of an SIB type other than SIB1. SIB1 provides connections between such SIBs and the system information messages. Each SIB is included in only one system information message. The system information message is transmitted in a specific system information window on the time axis which occurs on a periodic basis. The network may include and transmit a system information transmit window number, for example, Wn, in each system information message. In an embodiment of the disclosure, different system information messages having the same system information transmit window number Wn may be transmitted in the same system information window. According to another embodiment of the disclosure, the system information transmit window number may be provided implicitly. The network and the terminal may know the system information transmit window number n of an n-th system information message in SIB1, and transmit/receive the system information message in a system information transmit window corresponding to n. Periods of the system information message and the system information transmit window may be provided from the network. Such period information may be included in a broadcast message, such as minimum system information (MSI) or SIB1, or may be included in a dedicated downlink signal transmission, such as a response to a terminal's request or a handover command. Using the system information transmit window number, the system information message period, and the system information transmit window length, the terminal may configure a system information receive window for receiving the system information message.
In an advanced system, the terminal does not have to monitor the PDCCH to receive the system information message in the system information receive window. The information sharing between the network and the terminal is as follows.
Step 1: The terminal determines the system information window number Wn of a specific system information message to receive. Wn is transmitted by the network in each system information message. In another embodiment of the disclosure, the system information transmit window number may be provided implicitly. The network and the terminal may know the system information transmit window number n of the n-th system information message in SIB1, and transmit/receive the system information message in the system information transmit window corresponding to n.
Step 2: The terminal determines a positive integer X = (Wn − 1) * w, where w denotes the length of the system information window and is expressed in slots.
Step 3: The terminal determines a start point of the system information receive window. The corresponding start point is a slot N1 in a radio frame N2, and may be determined by (N2 * (number of slots in a radio frame) + N1 + Offset) mod T = X. T denotes the system information message period in slots and is provided in the remaining minimum system information (RMSI) (e.g., SIB1). Offset is provided in the RMSI (e.g., SIB1) in slots. The number of slots in the radio frame is determined in advance by the subcarrier spacing (SCS) used by the corresponding system, and the SCS information may be provided in the MIB/SIB1. The system information receive window lasts for w slots and then terminates.
Step 4: Within the system information receive window, the terminal observes the PDCCH to receive other system information (OSI). The terminal determines a PDCCH observation occasion according to the OSI search space configuration. If the OSI search space is not configured in a designated system information receive window or is not received, the terminal may observe the PDCCH at a corresponding occasion using a PDCCH observation occasion which is configured for the RMSI.
An apparatus and a method according to embodiments of the disclosure effectively achieve the BFR in the system. The methods according to the embodiments described in the claims or the specification of the disclosure may be implemented in software, hardware, or a combination of hardware and software. As for the software, a computer-readable storage medium storing one or more programs (software modules) may be provided. One or more programs stored in the computer-readable storage medium may be configured for execution by one or more processors of an electronic device. One or more programs may include instructions for controlling the electronic device to execute the methods according to the embodiments described in the claims or the specification of the disclosure. Such a program (software module, software) may be stored in a random access memory (RAM), a non-volatile memory including a flash memory, a read only memory (ROM), an electrically erasable programmable ROM (EEPROM), a magnetic disc storage device, a compact disc (CD)-ROM, digital versatile discs (DVDs) or other optical storage devices, and a magnetic cassette. Alternatively, the program may be stored in a memory combining part or all of those recording media. A plurality of memories may be equipped.
The program may be stored in an attachable storage device accessible via a communication network such as the Internet, an intranet, a local area network (LAN), a wide LAN (WLAN), or a storage area network (SAN), or via a communication network combining these networks. The storage device may access the electronic device through an external port. A separate storage device may access the electronic device over the communication network. In the specific embodiments of the disclosure, the elements included in the disclosure are expressed in a singular or plural form. However, the singular or plural expression is appropriately selected according to a proposed situation for the convenience of explanation, and the disclosure is not limited to a single element or a plurality of elements. The elements expressed in the plural form may be configured as a single element, and the elements expressed in the singular form may be configured as a plurality of elements. While the disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents.
60,110
11943643
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS Detailed embodiments and implementations of the claimed subject matters are disclosed herein. However, it shall be understood that the disclosed embodiments and implementations are merely illustrative of the claimed subject matters which may be embodied in various forms. The present disclosure may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments and implementations set forth herein. Rather, these exemplary embodiments and implementations are provided so that the description of the present disclosure is thorough and complete and will fully convey the scope of the present disclosure to those skilled in the art. In the description below, details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the presented embodiments and implementations. Overview Implementations in accordance with the present disclosure relate to various techniques, methods, schemes and/or solutions pertaining to wide bandwidth transmission schemes in wireless communications. According to the present disclosure, a number of possible solutions may be implemented separately or jointly. That is, although these possible solutions may be described below separately, two or more of these possible solutions may be implemented in one combination or another. FIG.1illustrates an example network environment100in which various solutions and schemes in accordance with the present disclosure may be implemented.FIG.2˜FIG.16illustrate examples of implementation of various proposed schemes in network environment100in accordance with the present disclosure. The following description of various proposed schemes is provided with reference toFIG.1˜FIG.16. Referring toFIG.1, network environment100may involve at least a STA110and a STA120communicating wirelessly with each other in a wide-bandwidth BSS130in accordance with one or more IEEE 802.11 standards (e.g., IEEE 802.11be and beyond). Each of STA110(herein interchangeably denoted as “STA1”) and STA120(herein interchangeably denoted as “STA2”) may function as an access point (AP) STA or a non-AP STA. Moreover, each of STA110and STA120may be configured to perform wide bandwidth transmission schemes in wireless communications in accordance with various proposed schemes described below. It is noteworthy that, in the present disclosure, the term “primary channel” refers to a 20 MHz channel where medium access through channel contention is allowed. The term “non-primary channel” refers to a 20 MHz channel which is not a primary channel in an operating channel. The term “primary frequency segment” refers to a frequency segment (e.g., an 80 MHz frequency segment) within an operating bandwidth (e.g., 80 MHz, 160 MHz, 80+80 MHz, 240 MHz, 160+80 MHz, 320 MHz or 160+160 MHz) that contains the primary channel. The term “secondary segment” refers to a frequency segment (e.g., an 80 MHz frequency segment) within the operating bandwidth that does not contain the primary channel. In wide-bandwidth BSS130, an AP (e.g., STA110functioning as an AP STA) may assign or negotiate with a non-AP STA (e.g., STA120functioning as a non-AP STA) regarding which channel(s) are to be used as the PD channel and SIG content channel. The PD channel and SIG content channel may be located in the same bandwidth segment (e.g., an 80 MHz segment) or in different bandwidth segments. The PD channel and SIG content channel of a non-AP STA may be semi-statically assigned or negotiated.
Alternatively, the PD channel and SIG content channel of a non-AP STA may be dynamically assigned by the associated AP. Under a proposed scheme in accordance with the present disclosure, the semi-static PD and SIG content channel assignment may be within a certain period of time such as, for example and without limitation, a target wakeup time (TWT) or a service period (SP). The assignment may be changed by re-assignment or re-negotiation through frame exchange (e.g., via management frame exchange and/or control frame exchange). Under the proposed scheme, the dynamic PD and SIG content channel assignment may be applied when there will be data transmission between an AP (e.g., STA110) and its associated non-AP STA(s) (e.g., STA120) within a TXOP. A control frame or control information may be used by the AP to indicate the position of the PD channel and SIG content channel for a corresponding non-AP STA which is the recipient of the data to be transmitted. The control frame or control information may be sent by the AP before the data transmission for the corresponding non-AP STA in order to assign the PD channel and/or SIG content channel for the non-AP STA to detect and decode the subsequent data within a current TXOP. The dynamic PD and SIG content channel assignment may be valid within the current TXOP. Under a proposed scheme in accordance with the present disclosure with respect to semi-static PD and SIG content channel assignment with various-bandwidth STAs, an AP (e.g., STA110) may be operating in a wide bandwidth (e.g., 320 MHz) with four 80 MHz segments, including a primary 80 MHz segment (herein interchangeably denoted as “P80”), a first secondary 80 MHz segment (herein interchangeably denoted as “S80_1”), a second secondary 80 MHz segment (herein interchangeably denoted as “S80_2”), and a third secondary 80 MHz segment (herein interchangeably denoted as “S80_3”). FIG.2illustrates an example scenario200of semi-static PD and SIG content channel assignment with STAs of various bandwidths under the proposed scheme. Scenario200may involve multiple non-AP STAs with different operating bandwidths such as EHT STA1, EHT STA2, EHT STA3, EHT STA4 and EHT STA5. In scenario200, EHT STA1 may be capable of operating in an 80 MHz bandwidth, EHT STA2 may be capable of operating in a 160 MHz bandwidth, EHT STA3 may be capable of operating in a 160 MHz bandwidth, EHT STA4 may be capable of operating in an 80 MHz bandwidth, and EHT STA5 may be capable of operating in a 160 MHz bandwidth. Under the proposed scheme, non-AP STAs associated with the AP may, by default, be monitoring P80 in which the PD channel and SIG content channel are located. For instance, each of EHT STA1, EHT STA2 and EHT STA3 may be monitoring P80 to detect preamble and decode SIG content in P80. Moreover, the SIG content channel of EHT STA2 and EHT STA3, both being 160 MHz STAs, may also be assigned in S80_1 for EHT STA2 and EHT STA3 to detect preamble in P80 and decode SIG content in S80_1 other than P80. Under the proposed scheme, non-AP STAs may be assigned by or negotiated with the associated AP using explicit signaling in the PD channel and SIG content channel. For instance, EHT STA4, an 80 MHz STA, may be monitoring S80_1 with PD and SIG content channel(s) in S80_1 while EHT STA5, a 160 MHz STA, may be monitoring S80_2 with PD and SIG content channel(s) in S80_2. 
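The semi-static assignments of scenario200can be pictured as a per-STA lookup. The following Python sketch is a minimal model under assumptions: the dictionary layout and field ordering are hypothetical, not a disclosed data structure, and the EDCA helper anticipates the uplink rule stated next.

```python
# Sketch of the semi-static assignments in scenario 200 (FIG. 2). The dictionary
# layout and field ordering are illustrative assumptions, not a disclosed data
# structure.
assignments = {
    # STA:      (operating bandwidth in MHz, PD channel segment, SIG content channel segment)
    "EHT STA1": (80,  "P80",   "P80"),
    "EHT STA2": (160, "P80",   "S80_1"),   # detects preamble in P80, decodes SIG content in S80_1
    "EHT STA3": (160, "P80",   "S80_1"),
    "EHT STA4": (80,  "S80_1", "S80_1"),   # explicitly assigned away from P80
    "EHT STA5": (160, "S80_2", "S80_2"),
}

def may_contend_with_edca(sta: str) -> bool:
    """Per the uplink rule stated next: only STAs whose PD channel is in P80
    perform EDCA; the others may only be triggered for uplink transmissions."""
    return assignments[sta][1] == "P80"

assert may_contend_with_edca("EHT STA1") and not may_contend_with_edca("EHT STA5")
```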
Under the proposed scheme, non-AP STAs having PD channel in P80 may perform enhanced distributed channel access (EDCA) for uplink transmissions, and non-AP STAs having PD channel in segment(s) other than P80 may only be triggered for uplink transmissions. Under a proposed scheme in accordance with the present disclosure with respect to semi-static PD and SIG content channel assignment with mixed types of STAs (e.g., legacy HE STAs and EHT STAs), legacy HE STAs in a BSS may park in the primary 80 MHz segment (P80). That is, legacy HE STAs may park in P80, with PD channel and SIG content channel also located in P80. Under the proposed scheme, an AP (e.g., STA110) may assign or negotiate with its associated EHT STAs the PD channel and SIG content channel. FIG.3illustrates an example scenario300of semi-static PD and SIG content channel assignment with STAs of mixed types under the proposed scheme. Scenario300may involve multiple non-AP STAs of mixed types such as EHT STA1, EHT STA2, EHT STA3 and EHT STA4. In scenario300, each of EHT STA1, EHT STA2 and EHT STA4 may be capable of operating in a 160 MHz bandwidth, and EHT STA3 may be capable of operating in an 80 MHz bandwidth. EHT STA1 and EHT STA2 may be monitoring P80, with PD channel located in P80 and EHT SIG content channel located in S80_1. EHT STA3 may be monitoring S80_1, with PD channel and EHT SIG content channel located in S80_1. EHT STA4 may be monitoring S80_2, with PD channel and EHT SIG content channel located in S80_2. Under the proposed scheme, the HE STAs, EHT STA1 and EHT STA2 may perform EDCA for uplink transmissions when their PD channel is in P80. Moreover, EHT STA3 and EHT STA4 may only be triggered for uplink transmissions when their PD channel is not in P80. In view of the above, with respect to semi-static PD and SIG content channel assignment, a legacy STA (e.g., an IEEE 802.11ax STA) may by default be monitoring the primary 80 MHz segment (P80) with both its PD channel and SIG content channel (e.g., HE SIG content channel) in P80. Additionally, an EHT STA may by default be monitoring P80 with both its PD channel and SIG content channel (e.g., EHT SIG content channel) in P80. Under a proposed scheme in accordance with the present disclosure, when there are legacy STAs coexisting in the system, an AP may announce the existence of legacy STAs. Moreover, the AP may semi-statically assign or negotiate with EHT STAs the PD channel and/or SIG content channel in any segment other than P80 (with PD channel and SIG content channel being in the same segment or different segments). Under the proposed scheme, in response to the AP announcing the existence of legacy STAs, each EHT STA having its SIG content channel in P80 by default may reconfigure its SIG content channel to a segment other than P80 that is within its operating bandwidth without signaling (e.g., without being assigned by or negotiating with the AP). Under the proposed scheme, a non-AP STA with its PD channel in the primary 80 MHz segment may perform EDCA for channel access. FIG.4illustrates an example scenario400of semi-static PD and SIG content channel assignment under the proposed scheme. Scenario400may involve multiple non-AP STAs of mixed types such as EHT STA1, EHT STA2, EHT STA3 and one or more HE STAs which are associated with an AP. In scenario400, the HE STAs may be monitoring P80 and may monitor preamble in P80. EHT STA1 and EHT STA2 may monitor preamble in P80 and each may be configured to monitor its EHT SIG content channel in S80_1.
EHT STA3 may monitor preamble in the assigned or negotiated PD channel in a secondary 160 MHz segment (herein interchangeably denoted as “S160”), and EHT STA3 may be configured to monitor its EHT SIG content channel in S80_2 within the secondary 160 MHz segment. In scenario400, in a TXOP obtained by the AP, the HE STAs, EHT STA1, EHT STA2 and EHT STA3 may perform certain operations. For instance, HE STAs may detect preamble and receive HE downlink (DL) PPDU(s) in P80. EHT STA1 and EHT STA2 may detect a preamble of a PPDU in a primary 20 MHz channel in P80 and then decode the remaining part of the PPDU in S80_1. EHT STA3 may detect a preamble of a PPDU in a 20 MHz channel in the secondary 160 MHz segment and then decode the remaining part of the PPDU in the secondary 160 MHz segment. In scenario400, a management frame exchange may be used for PD and SIG content channel assignment or re-assignment. Under a proposed scheme in accordance with the present disclosure with respect to dynamic SIG content channel assignment, during a TXOP, an AP (e.g., STA110) may assign the PD channel and/or SIG content channel for its associated non-AP STAs participating in transmissions during the TXOP based on at least one of a number of conditions including, for example and without limitation, aggregated PPDU with legacy STA support, dynamic preamble puncturing, and load balancing. Under the proposed scheme, the AP may initiate a TXOP and send a control frame or control information in a data frame to indicate the position of the PD channel and/or SIG content channel for a TXOP responder (e.g., a non-AP STA). The control frame or control information may be sent before data transmission for a specific TXOP responder non-AP STA. The AP may only assign the SIG content channel in case the non-AP STA does not need to switch its operating channel within the TXOP. The AP may assign the PD channel and SIG content channel to a specific TXOP responder non-AP STA in a new position in case the non-AP STA needs to switch its operating channel within the TXOP. For instance, the AP may need to add physical layer (PHY) padding (e.g., packet extension signal extension) to the PPDU carrying the control frame or the control information in order to provide extra channel switching time required for the non-AP STA. Under the proposed scheme, a non-AP STA which is a TXOP responder may follow the PD channel and/or SIG content channel assignment indicated in the control frame or control information. In case the SIG content channel, but not the PD channel, is changed within the current STA's operating channel, then after receiving the control information the STA may decode the SIG content channel in a subsequent PPDU in a new 80 MHz segment containing the assigned SIG content channel. In case both PD channel and SIG content channel are changed to be outside of the current STA's operating channel, then after receiving the control information the STA may switch to the new operating channel during a packet extension/signal extension time of the PPDU containing the control information. The STA may detect a preamble in a subsequent PPDU in the new 80 MHz segment containing the assigned PD channel and may also decode the SIG content channel in the corresponding 80 MHz segment. After the end of the TXOP, the PD channel and/or SIG content channel assignment by the control information may be canceled, and the TXOP responder non-AP STA may resume its original PD channel and/or SIG content channel. 
FIG.5illustrates an example scenario500of dynamic SIG content channel assignment under a proposed scheme in accordance with the present disclosure. Scenario500may involve multiple non-AP STAs of mixed types such as EHT STA1, EHT STA2, EHT STA3 and one or more HE STAs which are associated with an AP. In scenario500, HE PPDU(s) and EHT PPDU(s) may be aggregated in wide bandwidth transmissions. Each of EHT STA1, with a 160 MHz operating bandwidth, and EHT STA2, with a 320 MHz operating bandwidth, may be originally monitoring P80 to monitor preamble in a primary 20 MHz channel in P80, with its EHT SIG content channel originally being in P80. Each of the HE STAs may be monitoring P80 to monitor preamble in the primary 20 MHz channel, with its HE SIG content channel also being in P80. When the AP obtains a TXOP, an HE PPDU may be transmitted in P80 within the TXOP. Additionally, each of EHT STA1 and EHT STA2 may be indicated by the AP to switch its EHT SIG content channel to S80_1 or S80_2, respectively. The AP may transmit an EHT PPDU1 to EHT STA1 and EHT STA2 in one of the secondary 80 MHz segments (e.g., S80_1, S80_2 or S80_3) except for P80 during the TXOP. FIG.6illustrates an example scenario600of dynamic SIG content channel assignment under a proposed scheme in accordance with the present disclosure. Scenario600may involve multiple non-AP STAs of mixed types such as EHT STA1, EHT STA2 and EHT STA3 which are associated with an AP. In scenario600, the AP may balance the SIG content load in wide bandwidth transmissions. Each of EHT STA1, EHT STA2 and EHT STA3 may be originally monitoring P80 to monitor preamble in a primary 20 MHz channel, with its EHT SIG content channel being originally in P80. When the AP obtains a TXOP, EHT STA1, with an 80 MHz operating bandwidth, may perform reception and transmission in P80. Additionally, EHT STA2, with a 160 MHz operating bandwidth, may be indicated by the AP to switch its EHT SIG content channel to S80_1. Accordingly, EHT STA2 may detect preamble in the primary 20 MHz channel and decode its SIG content in S80_1 within the TXOP. EHT STA3, with a 320 MHz operating bandwidth, may be indicated by the AP to switch its EHT SIG content channel to S80_2. Accordingly, EHT STA3 may detect preamble in the primary 20 MHz channel and decode its SIG content channel in S80_2 within the TXOP. FIG.7illustrates an example scenario700of frame exchange for dynamic SIG content channel assignment under a proposed scheme in accordance with the present disclosure. Scenario700may involve multiple non-AP STAs of mixed types such as EHT STA1, EHT STA2, EHT STA3 and one or more HE STAs which are associated with an AP. In scenario700, the AP may obtain a TXOP and send a control frame to indicate the EHT SIG content channel for the intended recipient(s) for the current TXOP. For instance, the control frame may assign S80_1 as the EHT SIG content channel for EHT STA1 and EHT STA2, and the control frame may also assign S80_2 as the EHT SIG content channel for EHT STA3. The control frame may be transmitted at least a short interframe space (SIFS) before a data PPDU is transmitted to the intended recipient(s). The EHT SIG content channels of EHT STA1, EHT STA2 and EHT STA3 may be switched back to P80 after the end of the TXOP. FIG.8illustrates an example scenario800of frame exchange for dynamic SIG content channel assignment under a proposed scheme in accordance with the present disclosure.
Scenario800may involve multiple non-AP STAs of mixed types such as EHT STA1, EHT STA2, EHT STA3 and one or more HE STAs which are associated with an AP. In scenario800, a control frame may be acknowledged first by an intended recipient before the AP transmits a corresponding data PPDU to that recipient. Under the proposed scheme, the control frame may poll the intended recipient(s) to decode the EHT SIG content channel(s) in specific segment(s), and the control frame may allocate resource(s) for each intended recipient to respond with its acknowledgement. Under the proposed scheme, different recipients may have different resource allocations to send acknowledgement in an orthogonal frequency-division multiplexing (OFDM) transmission format. For instance, each of EHT STA1 and EHT STA2 may send its acknowledgement in its allocated resource(s) in S80_1 without overlapping with one another. Under the proposed scheme, different recipients may have the same resource allocation to send acknowledgement in high-throughput (HT) or non-HT duplicated format. For instance, each of EHT STA1 and EHT STA2 may send its acknowledgement in non-HT duplicated format on multiple 20 MHz channels in S80_1. Under the proposed scheme, the AP may transmit data PPDU(s) at least a SIFS after receiving acknowledgement. FIG.9illustrates an example scenario900of dynamic channel switching under a proposed scheme in accordance with the present disclosure. Scenario900may involve multiple non-AP STAs of mixed types such as EHT STA1, EHT STA2, EHT STA3 and one or more HE STAs which are associated with an AP. In scenario900, channel switching may be performed by one or more non-AP STAs in order to aggregate HE PPDU(s) and EHT PPDU(s) in wide bandwidth transmissions. Each of EHT STA1, EHT STA2 and EHT STA3 may be originally monitoring P80 to monitor preamble in a primary 20 MHz channel, with its EHT SIG content channel being originally in P80. Each of the one or more HE STAs may be monitoring P80 to monitor preamble in the primary 20 MHz channel, with its HE SIG content channel also being in P80. When the AP obtains a TXOP, an HE PPDU may be transmitted by the AP in P80 within the TXOP. Additionally, EHT STA1, with an 80 MHz operating bandwidth, may be monitoring P80 originally and then be indicated by the AP to switch to S80_1 within the TXOP. Moreover, EHT STA2, with a 160 MHz operating bandwidth, may be monitoring P80 originally and then be indicated by the AP to move its EHT SIG content channel to S80_1. Accordingly, EHT STA2 may detect preamble in the primary 20 MHz channel and decode SIG content in S80_1 within the TXOP. Furthermore, EHT STA3, with a 160 MHz operating bandwidth, may be monitoring P80 originally and then be indicated by the AP to switch to S80_2 and S80_3 within the TXOP. Accordingly, EHT STA3 may detect preamble in the PD channel assigned in S80_2 and decode SIG content in S80_2 within the TXOP. FIG.10illustrates an example scenario1000of frame exchange for dynamic switching under a proposed scheme in accordance with the present disclosure. Scenario1000may involve multiple non-AP STAs of mixed types such as EHT STA1, EHT STA2, EHT STA3 and one or more HE STAs which are associated with an AP. In scenario1000, the AP may obtain a TXOP and send a control frame (e.g., a multi-user request-to-send (MU-RTS) frame or a power-save poll (PS-poll) frame) to indicate channel switching to intended recipient(s) within the current TXOP.
For instance, the control frame may assign S80_2 as the EHT SIG content channel for EHT STA3, and the control frame may also indicate the operating bandwidth to EHT STA3 (e.g., S80_2 and S80_3). Under the proposed scheme, the control frame may be acknowledged by the intended recipient(s) (e.g., using a clear-to-send (CTS) frame) before the AP sends data PPDU(s) to the intended recipient(s). The PPDU carrying the control frame may add packet extension or signal extension at the end of the PPDU to provide extra switching time (e.g., X microseconds). In scenario1000, EHT STA3 may start operating channel switching from P80 and S80_1 to S80_2 and S80_3 after receiving the PPDU carrying the control frame but before the packet extension or signal extension of the PPDU, and EHT STA3 may send acknowledgement on the new operating channel. Moreover, EHT STA3 may switch its operating channel back to P80 and S80_1 after the end of the TXOP. In scenario1000, each of the one or more HE STAs may acknowledge by transmitting a high-efficiency (HE) trigger-based (TB) acknowledgement, and each of EHT STA1, EHT STA2 and EHT STA3 may acknowledge by transmitting an EHT TB acknowledgement. FIG.11illustrates an example scenario1100of frame exchange for dynamic switching under a proposed scheme in accordance with the present disclosure. Scenario1100may involve multiple non-AP STAs of mixed types such as EHT STA1, EHT STA2, EHT STA3 and one or more HE STAs which are associated with an AP. In scenario1100, a PPDU carrying the control frame may not have packet extension or signal extension added at the end of the PPDU in case the intended recipient(s) have indicated the capability of switching the receiving channel to a different channel while transmitting on the current channel. For instance, EHT STA3 may start switching its receiving channel to a secondary 160 MHz segment after receiving the PPDU carrying the control frame, and EHT STA3 may send an acknowledgement on S80_1 within a current operating channel. Moreover, EHT STA3 may switch its operating channel back to P80 and S80_1 after the end of the TXOP. In scenario1100, each of the one or more HE STAs may acknowledge by transmitting an HE TB acknowledgement, and each of EHT STA1, EHT STA2 and EHT STA3 may acknowledge by transmitting an EHT TB acknowledgement. FIG.12illustrates an example scenario1200of frame exchange for dynamic switching under a proposed scheme in accordance with the present disclosure. Scenario1200may involve multiple non-AP STAs of mixed types such as EHT STA1, EHT STA2, EHT STA3 and one or more HE STAs which are associated with an AP. In scenario1200, the AP may obtain a TXOP and send a control frame to indicate channel switching to intended recipient(s) (e.g., EHT STA3) within the current TXOP. For instance, the control frame may be sent at least a SIFS before the AP sends data PPDU(s) to the STAs. The PPDU carrying the control frame may have packet extension or signal extension added at the end of the PPDU to provide extra switching time (e.g., X microseconds). In scenario1200, EHT STA3 may start operating channel switching to S80_2 and S80_3 after receiving the PPDU carrying the control frame but before the packet extension or signal extension of the PPDU, and EHT STA3 may then decode a subsequent PPDU on a new operating channel in a secondary 160 MHz segment. Moreover, EHT STA3 may switch its operating channel back to the primary 160 MHz segment (e.g., P80 and S80_1, herein interchangeably denoted as “P160”) after the end of the TXOP.
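Whether the PPDU carrying the control frame needs a packet extension or signal extension, per scenarios1000,1100and1200above, turns on whether the operating channel changes and on the recipient's switching capability. The following is a minimal illustrative sketch only; the parameter names (e.g., can_switch_rx_while_tx, switch_time_us) are hypothetical stand-ins for the capability indication and switching time described above.

    def extension_needed(operating_channel_changes: bool,
                         can_switch_rx_while_tx: bool) -> bool:
        if not operating_channel_changes:
            # Only the SIG content channel moves within the current
            # operating channel: no extra switching time is required.
            return False
        # Scenario 1100: a recipient that can switch its receiving channel
        # while still transmitting on the current channel needs no
        # extension and may acknowledge on the old operating channel.
        return not can_switch_rx_while_tx

    def extension_length_us(switch_time_us: float) -> float:
        # Scenarios 1000 and 1200: pad the PPDU by the extra switching
        # time ("X microseconds") required by the recipient.
        return switch_time_us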
FIG.13illustrates an example scenario1300of implicit SIG content channel switching under a proposed scheme in accordance with the present disclosure. Scenario1300may involve multiple non-AP STAs such as EHT STA1, EHT STA2 and EHT STA3 which are associated with an AP. In scenario1300, each of EHT STA1 and EHT STA2 may be originally assigned the PD channel in P80 and EHT SIG content channel in S80_1, and EHT STA3 may be originally assigned the PD channel and SIG content channel in S80_2. When the AP obtains a TXOP, the AP may indicate that S80_1 is punctured. Accordingly, each of EHT STA1 and EHT STA2 may switch its EHT SIG content channel from S80_1 to P80 without signaling. The AP may also send channel puncturing information before the TXOP. In case S80_1 is indicated as being punctured, the EHT SIG content channel of each of EHT STA1 and EHT STA2 may be moved to P80 without EHT STA1 or EHT STA2 signaling to the AP about such a move until the channel puncturing information is changed. Furthermore, in case S80_2 is indicated as being punctured, EHT STA3 may perform channel switching to P80 to monitor preamble without signaling to the AP about such switching until the channel puncturing information is changed. Illustrative Implementations FIG.14illustrates an example system1400having at least an example apparatus1410and an example apparatus1420in accordance with an implementation of the present disclosure. Each of apparatus1410and apparatus1420may perform various functions to implement schemes, techniques, processes and methods described herein pertaining to wide bandwidth transmission schemes in wireless communications, including the various proposed designs, concepts, schemes, systems and methods described above as well as processes described below. For instance, apparatus1410may be implemented in STA110and apparatus1420may be implemented in STA120, or vice versa. Each of apparatus1410and apparatus1420may be a part of an electronic apparatus, which may be a non-AP STA or an AP STA, such as a portable or mobile apparatus, a wearable apparatus, a wireless communication apparatus or a computing apparatus. When implemented in a non-AP STA, each of apparatus1410and apparatus1420may be implemented in a smartphone, a smart watch, a personal digital assistant, a digital camera, or computing equipment such as a tablet computer, a laptop computer or a notebook computer. Each of apparatus1410and apparatus1420may also be a part of a machine type apparatus, which may be an IoT apparatus such as an immobile or a stationary apparatus, a home apparatus, a wired communication apparatus or a computing apparatus. For instance, each of apparatus1410and apparatus1420may be implemented in a smart thermostat, a smart fridge, a smart door lock, a wireless speaker or a home control center. When implemented in or as a network apparatus, apparatus1410and/or apparatus1420may be implemented in a network node, such as an AP in a WLAN. In some implementations, each of apparatus1410and apparatus1420may be implemented in the form of one or more integrated-circuit (IC) chips such as, for example and without limitation, one or more single-core processors, one or more multi-core processors, one or more reduced-instruction set computing (RISC) processors, or one or more complex-instruction-set-computing (CISC) processors. In the various schemes described above, each of apparatus1410and apparatus1420may be implemented in or as a non-AP STA or an AP STA.
Each of apparatus1410and apparatus1420may include at least some of those components shown inFIG.14such as a processor1412and a processor1422, respectively, for example. Each of apparatus1410and apparatus1420may further include one or more other components not pertinent to the proposed scheme of the present disclosure (e.g., internal power supply, display device and/or user interface device), and, thus, such component(s) of apparatus1410and apparatus1420are neither shown inFIG.14nor described below in the interest of simplicity and brevity. In one aspect, processor1412and processor1422may be implemented in the form of one or more single-core processors, one or more multi-core processors, one or more RISC processors or one or more CISC processors. That is, even though a singular term “a processor” is used herein to refer to processor1412and processor1422, processor1412and processor1422may include multiple processors in some implementations and a single processor in other implementations in accordance with the present disclosure. In another aspect, processor1412and processor1422may be implemented in the form of hardware (and, optionally, firmware) with electronic components including, for example and without limitation, one or more transistors, one or more diodes, one or more capacitors, one or more resistors, one or more inductors, one or more memristors and/or one or more varactors that are configured and arranged to achieve specific purposes in accordance with the present disclosure. In other words, in at least some implementations, each of processor1412and processor1422is a special-purpose machine specifically designed, arranged and configured to perform specific tasks including those pertaining to wide bandwidth transmission schemes in wireless communications in accordance with various implementations of the present disclosure. In some implementations, apparatus1410may also include a transceiver1416coupled to processor1412. Transceiver1416may include a transmitter capable of wirelessly transmitting data and a receiver capable of wirelessly receiving data. In some implementations, apparatus1420may also include a transceiver1426coupled to processor1422. Transceiver1426may include a transmitter capable of wirelessly transmitting data and a receiver capable of wirelessly receiving data. In some implementations, apparatus1410may further include a memory1414coupled to processor1412and capable of being accessed by processor1412and storing data therein. In some implementations, apparatus1420may further include a memory1424coupled to processor1422and capable of being accessed by processor1422and storing data therein. Each of memory1414and memory1424may include a type of random-access memory (RAM) such as dynamic RAM (DRAM), static RAM (SRAM), thyristor RAM (T-RAM) and/or zero-capacitor RAM (Z-RAM). Alternatively, or additionally, each of memory1414and memory1424may include a type of read-only memory (ROM) such as mask ROM, programmable ROM (PROM), erasable programmable ROM (EPROM) and/or electrically erasable programmable ROM (EEPROM). Alternatively, or additionally, each of memory1414and memory1424may include a type of non-volatile random-access memory (NVRAM) such as flash memory, solid-state memory, ferroelectric RAM (FeRAM), magnetoresistive RAM (MRAM) and/or phase-change memory. Each of apparatus1410and apparatus1420may be a communication entity capable of communicating with each other using various proposed schemes in accordance with the present disclosure.
For illustrative purposes and without limitation, a description of capabilities of apparatus1410, as STA110(e.g., an AP STA), and apparatus1420, as STA120(e.g., a non-AP STA), is provided below. It is noteworthy that, although the example implementations described below are provided in the context of WLAN, the same may be implemented in other types of networks. It is also noteworthy that, although examples described below are provided in the context of apparatus1410, the examples may also be applicable to apparatus1420or otherwise implemented by apparatus1420. Under a proposed scheme pertaining to wide bandwidth transmission schemes in wireless communications in accordance with the present disclosure, with apparatus1410implemented in or as STA110as an AP and apparatus1420implemented in or as STA120as a non-AP STA (denoted as “STA” in the description below for brevity) which initially monitors an initial PD channel and an initial SIG content channel in a same frequency segment or different frequency segments of a plurality of frequency segments in an operating bandwidth of the AP in network environment100, processor1412of apparatus1410and processor1422of apparatus1420may communicate with each other via transceiver1416and transceiver1426, respectively, with the AP assigning to the STA either or both of a PD channel and a SIG content channel for a TXOP, the PD channel and the SIG content channel assigned by the AP being different from the initial PD channel and the initial SIG content channel, respectively. In case no PD channel is assigned for the TXOP, processor1422, implemented in or as the STA, may monitor the initial PD channel. Additionally, processor1412and processor1422may perform a frame exchange (e.g., involving a DL and/or triggered uplink (UL) transmission) between the AP and the STA during the TXOP such that: (i) the STA monitors a preamble on the PD channel and decodes a SIG content on the SIG content channel; and (ii) after an end of the TXOP, the STA switches to a primary frequency segment of the plurality of frequency segments to monitor the initial PD channel and the initial SIG content channel. In some implementations, the PD channel and the SIG content channel may be on a same frequency segment or different frequency segments of the plurality of frequency segments. In such cases, the PD channel may remain on the primary frequency segment and the SIG content channel may be assigned to a secondary frequency segment of the plurality of frequency segments within the operating bandwidth of the STA. Alternatively, the PD channel may be assigned to a secondary frequency segment of the plurality of frequency segments and the SIG content channel may be assigned to a different frequency segment of the plurality of frequency segments within the operating bandwidth of the STA. Still alternatively, the PD channel and the SIG content channel may be assigned to a same secondary frequency segment of the plurality of frequency segments. In some implementations, in communicating between the AP and the STA, processor1412may perform either or both of a PD channel assignment and a SIG content channel assignment to the STA to have the frame exchange with the STA using a PPDU format different than a format used on the primary frequency segment.
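The placement alternatives enumerated above (PD channel on the primary segment with the SIG content channel on a secondary segment, PD and SIG content channels on different secondary segments, or both on the same secondary segment) may be illustrated, purely as a sketch, by the following check that a hypothetical placement falls within the STA's operating bandwidth; the segment labels are illustrative assumptions.

    def placement_valid(pd_segment: str, sig_segment: str,
                        operating_segments: set) -> bool:
        # The PD channel and SIG content channel may share a frequency
        # segment or not, as long as both lie within the STA's operating
        # bandwidth.
        return (pd_segment in operating_segments
                and sig_segment in operating_segments)

    segments = {"P80", "S80_1", "S80_2", "S80_3"}       # 320 MHz operating bandwidth
    assert placement_valid("P80", "S80_1", segments)    # PD primary, SIG secondary
    assert placement_valid("S80_1", "S80_2", segments)  # different secondary segments
    assert placement_valid("S80_2", "S80_2", segments)  # same secondary segment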
In some implementations, in communicating between the AP and the STA, processor1412may assign different PD channels and/or SIG content channels to different STAs to which different PPDU formats are applied or different frequency segments are assigned. In some implementations, in communicating between the AP and the STA, processor1412may perform either or both of a PD channel assignment and a SIG content channel assignment to the STA to aggregate PPDUs of different formats (e.g., HE PPDU(s) and EHT PPDU(s)) into one transmission on different frequency segments in the operating bandwidth of the AP. Alternatively, in communicating between the AP and the STA, processor1412may perform a dynamic SIG content channel assignment to the STA to balance a SIG content load in the operating bandwidth of the AP. In some implementations, in communicating between the AP and the STA, processor1412may perform either or both of a PD channel assignment and a SIG content channel assignment to the STA to allow segment-specific SIG content in one or more frequency segments in the operating bandwidth of the AP. In some implementations, in communicating between the AP and the STA, processor1412may transmit, via transceiver1416, a control frame or control information to the STA to assign either or both of the PD channel and the SIG content channel to the STA with either or both of the PD channel and the SIG content channel being in a secondary frequency segment of the plurality of frequency segments different than the primary frequency segment. Moreover, processor1422may receive, via transceiver1426, the control frame or the control information from the AP that assigns either or both of the PD channel and the SIG content channel to the STA with either or both of the PD channel and the SIG content channel being in the secondary frequency segment of the plurality of frequency segments different than the primary frequency segment. In some implementations, the STA may receive the control frame or the control information on the primary frequency segment. In some implementations, the control frame may be transmitted by the AP and received by the STA at least a SIFS before the DL transmission. In some implementations, a PPDU carrying the control frame may be padded in a medium access control (MAC) payload or with a packet extension or a signal extension at an end of the PPDU to allow additional switching time. In some implementations, the control frame or the control information may be transmitted at a beginning of the TXOP. In such cases, in communicating between the AP and the STA, processor1422may further perform certain operations. For instance, processor1422may switch either or both of the PD channel and the SIG content channel to the secondary frequency segment. Then, processor1422may perform the frame exchange with the AP. In some implementations, in communicating between the AP and the STA, processor1412may transmit, via transceiver1416, a control frame to the STA to assign either or both of the PD channel and the SIG content channel to the STA with either or both of the PD channel and the SIG content channel being in a secondary frequency segment of the plurality of frequency segments different than the primary frequency segment. Moreover, processor1422may transmit, via transceiver1426, an acknowledgement on at least the secondary frequency segment to the AP responsive to receiving the control frame.
In some implementations, the acknowledgement may be transmitted by the STA at least a SIFS before the DL transmission. In some implementations, the control frame may include a MU-RTS frame or a PS-poll frame. Additionally, the acknowledgement may include a CTS frame or an EHT TB acknowledgement. In some implementations, a PPDU carrying the control frame may be padded with a packet extension or a signal extension at an end of the PPDU to allow additional switching time. In some implementations, in communicating between the AP and the STA, processor1412may transmit, via transceiver1416, channel puncturing information to the STA. Furthermore, processor1422may switch to decode the SIG content (e.g., in the primary frequency segment but not necessarily) from the SIG content channel in the primary frequency segment responsive to the channel puncturing information indicating that the initial SIG content channel is punctured. Under a proposed scheme pertaining to wide bandwidth transmission schemes in wireless communications in accordance with the present disclosure, with apparatus1410implemented in or as STA110as an AP and apparatus1420implemented in or as STA120as a first non-AP STA in network environment100, processor1412of apparatus1410and processor1422of apparatus1420may establish a wireless communication between the AP and the first STA with the first STA initially monitoring a primary frequency segment of a plurality of frequency segments in an operating bandwidth of the AP in a BSS which is associated with a plurality of STAs including the first STA. Moreover, processor1412and processor1422may communicate, via transceiver1416and transceiver1426, respectively, to result in the first STA being assigned either or both of a first PD channel and a first SIG content channel such that the first STA monitors a preamble on the first PD channel and decodes a SIG content on the first SIG content channel during at least a predetermined period of time. In response to a first bandwidth of the first STA being different than a second bandwidth of a second STA of the plurality of STAs, at least one of a second PD channel and a second SIG content channel assigned to the second STA and at least one of the first PD channel and the first SIG content channel may be in different segments of the plurality of frequency segments. In response to a first type of the first STA being different than a second type of the second STA, the first SIG content channel may be in one of the plurality of frequency segments other than the primary frequency segment. In some implementations, in communicating between the AP and the first STA, processor1412may assign or negotiate with the first STA to assign either or both of the first PD channel and the first SIG content channel to the first STA with either or both of the first PD channel and the first SIG content channel being in a secondary frequency segment of the plurality of frequency segments different than the primary frequency segment. Moreover, processor1422may negotiate with or be assigned by the AP either or both of the first PD channel and the first SIG content channel with either or both of the first PD channel and the first SIG content channel being in the secondary frequency segment of the plurality of frequency segments different than the primary frequency segment.
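The ordering of the control-frame exchange described above (control frame, acknowledgement at least a SIFS later, then the data PPDU) can be sketched as follows for illustration only; send, wait_sifs and recv_ack are hypothetical primitives standing in for the AP's transmit and receive machinery, not functions defined by the disclosure.

    def ap_exchange(send, wait_sifs, recv_ack, control_frame, data_ppdu) -> bool:
        send(control_frame)     # e.g., a MU-RTS or PS-poll frame assigning
                                # the PD channel and/or SIG content channel
        wait_sifs()
        if not recv_ack():      # e.g., a CTS frame or an EHT TB acknowledgement
            return False        # no confirmation; do not send the data PPDU
        wait_sifs()             # data follows at least a SIFS after the ack
        send(data_ppdu)
        return True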
In some implementations, in communicating between the AP and the first STA, processor1412and processor1422may communicate via a management frame exchange between the AP and the first STA in either or both of a PD channel and SIG content channel assignment and a re-assignment. In some implementations, the predetermined period of time may include a TWT or an SP. In some implementations, in an event that the first type of the first STA is different than the second type of the second STA, the first STA may be an EHT STA and the second STA may be an HE STA. Illustrative Processes FIG.15illustrates an example process1500in accordance with an implementation of the present disclosure. Process1500may represent an aspect of implementing various proposed designs, concepts, schemes, systems and methods described above. More specifically, process1500may represent an aspect of the proposed concepts and schemes pertaining to wide bandwidth transmission schemes in wireless communications in accordance with the present disclosure. Process1500may include one or more operations, actions, or functions as illustrated by one or more of blocks1510and1520. Although illustrated as discrete blocks, various blocks of process1500may be divided into additional blocks, combined into fewer blocks, or eliminated, depending on the desired implementation. Moreover, the blocks/sub-blocks of process1500may be executed in the order shown inFIG.15or, alternatively, in a different order. Furthermore, one or more of the blocks/sub-blocks of process1500may be executed repeatedly or iteratively. Process1500may be implemented by or in apparatus1410and apparatus1420as well as any variations thereof. Solely for illustrative purposes and without limiting the scope, process1500is described below in the context of apparatus1410implemented in or as STA110and apparatus1420implemented in or as STA120of a wireless network such as a WLAN in network environment100in accordance with one or more of IEEE 802.11 standards. It is noteworthy that, although examples described below are provided in the context of apparatus1410, the examples may also be applicable to apparatus1420or otherwise implemented by apparatus1420. Process1500may begin at block1510. At1510, process1500may involve processor1412of apparatus1410, implemented in STA110as an AP, and processor1422of apparatus1420, implemented in STA120as a non-AP STA (denoted as “STA” in the description below for brevity) which initially monitors an initial PD channel and an initial SIG content channel in a same frequency segment or different frequency segments of a plurality of frequency segments in an operating bandwidth of the AP, communicating with each other via transceiver1416and transceiver1426, respectively, with the AP assigning to the STA either or both of a PD channel and a SIG content channel for a TXOP, the PD channel and the SIG content channel assigned by the AP being different from the initial PD channel and the initial SIG content channel, respectively. In case no PD channel is assigned for the TXOP, process1500may also involve processor1422monitoring the initial PD channel. Process1500may proceed from1510to1520.
At1520, process1500may involve processor1412and processor1422performing a frame exchange between the AP and the STA during the TXOP such that: (i) the STA monitors a preamble on the PD channel and decodes a SIG content on the SIG content channel; and (ii) after an end of the TXOP, the STA switches to a primary frequency segment of the plurality of frequency segments to monitor the initial PD channel and the initial SIG content channel. In some implementations, the PD channel and the SIG content channel may be on a same frequency segment or different frequency segments of the plurality of frequency segments. In such cases, the PD channel may remain on the primary frequency segment and the SIG content channel may be assigned to a secondary frequency segment of the plurality of frequency segments within the operating bandwidth of the STA. Alternatively, the PD channel may be assigned to a secondary frequency segment of the plurality of frequency segments and the SIG content channel may be assigned to a different frequency segment of the plurality of frequency segments within the operating bandwidth of the STA. Still alternatively, the PD channel and the SIG content channel may be assigned to a same secondary frequency segment of the plurality of frequency segments. In some implementations, in communicating between the AP and the STA, process1500may involve processor1412performing either or both of a PD channel assignment and a SIG content channel assignment to the STA to have the frame exchange with the STA using a PPDU format different than a format used on the primary frequency segment. In some implementations, in communicating between the AP and the STA, process1500may involve processor1412assigning different PD channels and/or SIG content channels to different STAs to which different PPDU formats are applied or different frequency segments are assigned. In some implementations, in communicating between the AP and the STA, process1500may involve processor1412performing either or both of a PD channel assignment and a SIG content channel assignment to the STA to aggregate PPDUs of different formats (e.g., HE PPDU(s) and EHT PPDU(s)) into one transmission on different frequency segments in the operating bandwidth of the AP. Alternatively, in communicating between the AP and the STA, process1500may involve processor1412performing a dynamic SIG content channel assignment to the STA to balance a SIG content load in the operating bandwidth of the AP. In some implementations, in communicating between the AP and the STA, process1500may involve processor1412performing either or both of a PD channel assignment and a SIG content channel assignment to the STA to allow segment-specific SIG content in one or more frequency segments in the operating bandwidth of the AP. In some implementations, in communicating between the AP and the STA, process1500may involve processor1412transmitting, via transceiver1416, a control frame or control information to the STA to assign either or both of the PD channel and the SIG content channel to the STA with either or both of the PD channel and the SIG content channel being in a secondary frequency segment of the plurality of frequency segments different than the primary frequency segment.
Moreover, process1500may involve processor1422receiving, via transceiver1426, the control frame or the control information from the AP that assigns either or both of the PD channel and the SIG content channel to the STA with either or both of the PD channel and the SIG content channel being in the secondary frequency segment of the plurality of frequency segments different than the primary frequency segment. In some implementations, the STA may receive the control frame or the control information on the primary frequency segment. In some implementations, the control frame may be transmitted by the AP and received by the STA at least a SIFS before the DL transmission. In some implementations, a PPDU carrying the control frame may be padded in a MAC payload or with a packet extension or a signal extension at an end of the PPDU to allow additional switching time. In some implementations, the control frame or the control information may be transmitted at a beginning of the TXOP. In such cases, in communicating between the AP and the STA, process1500may involve processor1422performing certain operations. For instance, process1500may involve processor1422switching either or both of the PD channel and the SIG content channel to the secondary frequency segment. Then, process1500may involve processor1422performing the frame exchange with the AP. In some implementations, in communicating between the AP and the STA, process1500may involve processor1412transmitting, via transceiver1416, a control frame to the STA to assign either or both of the PD channel and the SIG content channel to the STA with either or both of the PD channel and the SIG content channel being in a secondary frequency segment of the plurality of frequency segments different than the primary frequency segment. Moreover, process1500may involve processor1422transmitting, via transceiver1426, an acknowledgement on at least the secondary frequency segment to the AP responsive to receiving the control frame. In some implementations, the acknowledgement may be transmitted by the STA at least a SIFS before the DL transmission. In some implementations, the control frame may include a MU-RTS frame or a PS-poll frame. Additionally, the acknowledgement may include a CTS frame or an EHT TB acknowledgement. In some implementations, a PPDU carrying the control frame may be padded with a packet extension or a signal extension at an end of the PPDU to allow additional switching time. In some implementations, in communicating between the AP and the STA, process1500may involve processor1412transmitting, via transceiver1416, channel puncturing information to the STA. Furthermore, process1500may involve processor1422switching to decode the SIG content (e.g., in the primary frequency segment but not necessarily) from the SIG content channel in the primary frequency segment responsive to the channel puncturing information indicating that the initial SIG content channel is punctured. FIG.16illustrates an example process1600in accordance with an implementation of the present disclosure. Process1600may represent an aspect of implementing various proposed designs, concepts, schemes, systems and methods described above. More specifically, process1600may represent an aspect of the proposed concepts and schemes pertaining to wide bandwidth transmission schemes in wireless communications in accordance with the present disclosure. Process1600may include one or more operations, actions, or functions as illustrated by one or more of blocks1610and1620.
Although illustrated as discrete blocks, various blocks of process1600may be divided into additional blocks, combined into fewer blocks, or eliminated, depending on the desired implementation. Moreover, the blocks/sub-blocks of process1600may be executed in the order shown inFIG.16or, alternatively, in a different order. Furthermore, one or more of the blocks/sub-blocks of process1600may be executed repeatedly or iteratively. Process1600may be implemented by or in apparatus1410and apparatus1420as well as any variations thereof. Solely for illustrative purposes and without limiting the scope, process1600is described below in the context of apparatus1410implemented in or as STA110and apparatus1420implemented in or as STA120of a wireless network such as a WLAN in network environment100in accordance with one or more of IEEE 802.11 standards. It is noteworthy that, although examples described below are provided in the context of apparatus1410, the examples may also be applicable to apparatus1420or otherwise implemented by apparatus1420. Process1600may begin at block1610. At1610, process1600may involve processor1412of apparatus1410, implemented in STA110as an AP, and processor1422of apparatus1420, implemented in STA120as a first non-AP STA (denoted as “first STA” in the description below for brevity), establishing a wireless communication between the AP and the first STA with the first STA initially monitoring a primary frequency segment of a plurality of frequency segments in an operating bandwidth of the AP in a BSS which is associated with a plurality of STAs including the first STA. Process1600may proceed from1610to1620. At1620, process1600may involve processor1412and processor1422communicating, via transceiver1416and transceiver1426, respectively, to result in the first STA being assigned either or both of a first PD channel and a first SIG content channel such that the first STA monitors a preamble on the first PD channel and decodes a SIG content on the first SIG content channel during at least a predetermined period of time. In response to a first bandwidth of the first STA being different than a second bandwidth of a second STA of the plurality of STAs, at least one of a second PD channel and a second SIG content channel assigned to the second STA and at least one of the first PD channel and the first SIG content channel may be in different segments of the plurality of frequency segments. In response to a first type of the first STA being different than a second type of the second STA, the first SIG content channel may be in one of the plurality of frequency segments other than the primary frequency segment. In some implementations, in communicating between the AP and the first STA, process1600may involve processor1412assigning or negotiating with the first STA to assign either or both of the first PD channel and the first SIG content channel to the first STA with either or both of the first PD channel and the first SIG content channel being in a secondary frequency segment of the plurality of frequency segments different than the primary frequency segment. Moreover, process1600may involve processor1422negotiating with or being assigned by the AP either or both of the first PD channel and the first SIG content channel with either or both of the first PD channel and the first SIG content channel being in the secondary frequency segment of the plurality of frequency segments different than the primary frequency segment.
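The two conditions attached to block1620above (different STA types and different STA bandwidths) can be expressed, purely as an illustration, by the following sketch; the type strings, bandwidth values and segment labels are hypothetical assumptions rather than values defined by the disclosure.

    def first_sig_segment(first_type: str, second_type: str,
                          secondary_segments: list) -> str:
        # Different types (e.g., an EHT STA coexisting with an HE STA):
        # the first STA's SIG content channel is placed in a segment
        # other than the primary one (here, the first secondary segment).
        if first_type != second_type and secondary_segments:
            return secondary_segments[0]
        return "P80"

    def segments_must_differ(first_bw_mhz: int, second_bw_mhz: int) -> bool:
        # Different bandwidths: at least one of the PD/SIG channels of
        # the two STAs lands in different frequency segments.
        return first_bw_mhz != second_bw_mhz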
In some implementations, in communicating between the AP and the first STA, process1600may involve processor1412and processor1422communicating via a management frame exchange between the AP and the first STA in either or both of a PD channel and SIG content channel assignment and a re-assignment. In some implementations, the predetermined period of time may include a TWT or an SP. In some implementations, in an event that the first type of the first STA is different than the second type of the second STA, the first STA may be an EHT STA and the second STA may be an HE STA. Additional Notes The herein-described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely examples, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected”, or “operably coupled”, to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable”, to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components. Further, with respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity. Moreover, it will be understood by those skilled in the art that, in general, terms used herein, and especially in the appended claims, e.g., bodies of the appended claims, are generally intended as “open” terms, e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc. It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations.
However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to implementations containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an,” e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more;” the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number, e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations. Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention, e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc. In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention, e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc. It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.” From the foregoing, it will be appreciated that various implementations of the present disclosure have been described herein for purposes of illustration, and that various modifications may be made without departing from the scope and spirit of the present disclosure. Accordingly, the various implementations disclosed herein are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
60,927
11943644
DETAILED DESCRIPTION The traditional method to test network traffic, which is intrusive to user network service, is referred to as “active probing.” The term “active probing” herein generally refers to testing of a communication network by sending test pattern/data over the network from one communication device to another communication device, and then measuring the response from the sent test pattern. The response data is also referred to herein as “active data” or “active measurement data,” which is data associated with active probing of a communication network. Traditional active probing software such as iperf, netperf, ttcp, etc., is run at application layers, where data transmission application software and data reception application software are used together for accurately measuring performance between the two transmission and reception devices. Traditional active probing is accurate because actual test data is transmitted in the same way as user traffic would be transmitted over the network. Frequent active probing can be annoying to the user because it may delay user traffic. It is possible to run active probing without stopping user traffic, but such a measurement is not accurate because the testing traffic competes with the user traffic, and furthermore active probing can significantly impair the user experience due to lower throughput and/or higher latency. To overcome this and other limitations, methods and systems for measuring performance without impacting customer traffic are described herein. An advanced active probing method, described in (PCT Application No. entitled “Method and System for Performance Measurement of a Communication Link” filed concurrently with this application on Jul. 13, 2012, incorporated by reference herein in its entirety, and co-owned by ASSIA Inc. of Redwood City, California, 94065, USA), can avoid the user traffic issue by considering operational data that account for the user traffic as well as the test traffic. Another mechanism to gauge performance of a communication link and/or communication device is to monitor operational data associated with a communication device. The operational data is generated for several purposes. For example, operational data is sometimes generated as a by-product of normal operation of the communication device. In another example, operational data is generated to provide basic performance or operation information associated with the communication device. Reading or collecting of such operational data is not intrusive to user network service. Monitoring or reading of such communication data (operational data) is sometimes referred to as “passive probing” herein. Usually, operational data of communication devices do not contain the most important and advanced performance metrics such as throughput or latency, but a rough estimation of advanced metrics is possible using operational data. For instance, throughput may be roughly estimated from typical operational data such as packet error counts and PHY-layer constellation information that indicate how many bits are being transmitted per data symbol. Such an estimate, however, might not be accurate because the operational data used might not contain sufficient information about throughput and because the relation between the operational data and throughput is often dependent on noise (including interference) and channel characteristics that quickly change across locations and over time.
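As a deliberately rough illustration of such operational-data-only estimation, throughput can be approximated from constellation-derived bits per symbol scaled by the fraction of packets delivered without error. The following sketch uses hypothetical parameter names and, as noted above, ignores noise and channel characteristics, which is precisely why such an estimate may be inaccurate on its own.

    def rough_throughput_bps(bits_per_symbol: float, symbol_rate_hz: float,
                             error_packets: int, total_packets: int) -> float:
        # PHY bit rate implied by the constellation, scaled by the
        # fraction of packets delivered without error. Noise and channel
        # effects are ignored, so this is only a rough estimate.
        if total_packets == 0:
            return 0.0
        delivery_ratio = 1.0 - error_packets / total_packets
        return bits_per_symbol * symbol_rate_hz * delivery_ratio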
In the embodiments of this disclosure, operational data are used together with active-probing data to get a reliable estimate of performance of a communication link. In one embodiment, while active-probing data is used, the operational data can be collected together with it. With the complete set of active-probing data and operational data, active-probing data results are considered an accurate estimation of performance of the communication link and are used for training operational-data-only estimation algorithms. In one embodiment, once the training is complete and the accuracy of operational-data-only estimation is fully understood, the system is monitored with operational data without frequent active probing that is service intrusive. In one embodiment, active probing is invoked infrequently, or even dynamically, depending on the need for accurate performance estimation and the need for training data for updating the operational-data-only estimator. The embodiments of the disclosure can be used in a few different ways. For example, at a higher level of abstraction, active-probing and operational data may be collected from a large communication network (e.g., 100 or more communication devices forming a network) and analysis can be performed over the entire data to develop passive estimators with a good accuracy. In one embodiment, such passive estimations are performed with any well-known machine learning techniques such as SVM (Support Vector Machine). In another example, at a lower level of abstraction, the passive estimator can be adaptively tuned for each communication link in the communication network. Each environment is unique and the best estimator can be dependent on the environment. In one embodiment, machine learning or any learning is performed for each communication device in the communication system such that the passive estimator provides the best performance for the given environment. In one embodiment, the performance estimation algorithm performs updates as follows. First, an initial step size is defined. If the throughput estimation using passive data is determined to be too low by the active probing data, then this throughput estimation is increased proportional to the step size. If the throughput estimation using passive data is determined to be too high by the active probing data, then this throughput estimation is decreased proportional to the step size. The terms “low” and “high” refer to programmable or predetermined thresholds distinct from one another. If the throughput estimation is decreased and then increased at the next iteration, or if the throughput estimation is increased and then decreased at the next iteration, then the step size is lowered. In one embodiment, the operational data are read from counters (also referred to herein as operational counters associated with the communication device) that increase in count value for successfully delivered packets. The term “successful” herein refers to an indication suggesting safe receipt of a packet by a communication device that is often confirmed by an ACK (acknowledge) message packet. In another embodiment, operational data such as error counts, retransmission counts, modulation, signal strength, etc. are used to estimate the throughput of the communication link. During the process of passive probing, i.e., reading of operational data, customer network service is not interrupted.
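The step-size update described above can be sketched as follows for illustration. The “too low”/“too high” thresholds and the factor by which the step size is lowered on a direction reversal are hypothetical, programmable choices rather than values specified by the disclosure.

    def update_estimate(passive_est: float, active_est: float, step: float,
                        last_direction: int, tolerance: float = 0.05):
        """Return (new_estimate, new_step, direction); direction is +1 for
        an increase, -1 for a decrease, 0 when no correction was needed."""
        if passive_est < active_est * (1.0 - tolerance):    # "too low"
            new_est, direction = passive_est + step, +1     # increase by the step
        elif passive_est > active_est * (1.0 + tolerance):  # "too high"
            new_est, direction = passive_est - step, -1     # decrease by the step
        else:
            return passive_est, step, 0
        if last_direction != 0 and direction != last_direction:
            step *= 0.5   # decreased then increased (or vice versa): lower the step
        return new_est, step, direction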
Operational data is generally user visible or accessible data and is generally used for debugging and basic performance monitoring of communications systems, but generally not for advanced performance estimation because the data was not designed for performance monitoring, does not carry sufficient information related to performance, and there are no known estimation algorithms with high accuracy. Therefore, passive probing alone may not be enough to determine advanced performance of a communication system, and operational data generally includes counter values that are only weakly associated with the current performance of a communication system. The embodiments herein disclose a method and system for improving performance estimation of a communication device by using operational data together with active probing data to train a performance estimation algorithm. In one embodiment, after training the performance estimation algorithm using both active probing data that is accurate and passive probing data that is not intrusive, operational data is monitored regularly and used to accurately update the performance estimation without interrupting customer traffic over the network. In one embodiment, active probing is initiated when there is a need to update the performance estimation algorithm. Thereafter, the performance estimation algorithm is trained via passive operational probing data. In another embodiment, active probing is initiated periodically (i.e., at regular intervals) to check if the performance estimation algorithm that uses passive probing data only is estimating performance with accuracy comparable to that of estimation based on the active probing data. The embodiments herein provide an efficient and nearly non-intrusive method for estimating performance of a communication device, and for managing a network system with little or no interruption to the users of the network. In the following description, numerous details are discussed to provide a more thorough explanation of embodiments of the present disclosure. It will be apparent, however, to one skilled in the art, that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present disclosure. Note that in the corresponding drawings of the embodiments, signals are represented with lines. Some lines may be thicker, to indicate more constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. Such indications are not intended to be limiting. Rather, the lines are used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit or a logical unit. Any represented signal, as dictated by design needs or preferences, may actually comprise one or more signals that may travel in either direction and may be implemented with any suitable type of signal scheme. In the following description and claims, the term “coupled” and its derivatives may be used. The term “coupled” herein refers to two or more elements which are in direct contact (physically, electrically, magnetically, optically, etc.). The term “coupled” herein may also refer to two or more elements that are not in direct contact with each other, but still cooperate or interact with each other.
As used herein, unless otherwise specified, the use of the ordinal adjectives “first,” “second,” “third,” etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner. The terms “substantially,” “approximately,” “nearly,” “about,” “close,” and similar terms refer to a quantity being within +/−20% of a target value. FIG.1is a communication network100which is operable to estimate performance and to improve a communication system performance estimation algorithm, according to one embodiment of the disclosure. In one embodiment, the communication network comprises an optimization center101(e.g., a server) communicatively coupled to one or more communication devices1031-N, where ‘N’ is a positive integer. In one embodiment, communication device1032is coupled to a Customer Premises Equipment (CPE) modem104via a Digital Subscriber Line (DSL) link. In one embodiment, the CPE modem104is coupled to an access point (AP)105. In one embodiment, the AP105is coupled to one or more stations (STAs)1061-M, where ‘M’ is a positive integer. In one embodiment, performance estimation algorithm102is an equation whose input variables are the passive-probing data. In one embodiment, the output of performance estimation algorithm102either increases or decreases in proportion to the passive-probing data. In one embodiment, instructions for updating and/or developing a performance estimation algorithm102are stored on the optimization server101and/or one or more of the communication devices1031-N. While the embodiment ofFIG.1does not show that the other devices104,105, and1061-M include instructions for updating and/or developing a performance estimation algorithm102, in one embodiment any communication device coupled directly or indirectly to the network (wired or wireless) may have instructions for updating and/or developing a performance estimation algorithm102. In one embodiment, the performance estimation algorithm102can be tuned for each communication device according to that device's data and environment. In one embodiment, the resulting performance estimation algorithm102can differ across the communication devices1031-N. In one embodiment, the communication devices1031-N include an access point (AP); a base station; a wireless local area network (LAN) device; a digital subscriber line access multiplexer (DSLAM); a gateway; a performance enhancement device; a Digital Subscriber Line (DSL) CPE (Customer Premises Equipment) modem; an in-home powerline device; a Home Phoneline Network Alliance (HPNA) based device; an in-home coax distribution device; a G.hn (Global Home Networking Standard) compatible device; an in-home metering communication device; an in-home appliance communicatively interfaced with the LAN; a wireless femtocell base station; a wireless WiFi compatible base station; a wireless mobile device repeater; a wireless mobile device base station; nodes within an ad-hoc/mesh network; a set-top box (STB)/set-top unit (STU) customer electronics device; an Internet Protocol (IP) enabled television; an IP enabled media player; an IP enabled gaming console; an Ethernet gateway; a computing device connected to the LAN; an Ethernet connected computer peripheral device; an Ethernet connected router; an Ethernet connected wireless bridge; an Ethernet connected network bridge; and an Ethernet connected network switch.
In one embodiment, the one or more communication devices1031-N are operable to execute active probing to determine active-probing data. In this embodiment, the one or more communication devices1031-N flood traffic on their respective communication links1071-N to the optimization center101. In this embodiment, the response received by the one or more communication devices1031-N from the optimization center101over the communication links1071-N is the active data, which is used to train the respective performance estimation algorithms102in the corresponding one or more communication devices1031-N. In one embodiment, the one or more communication devices1031-N are operable to execute active probing by transmitting active-probing data from one communication device to another communication device. For example, communication device1031transmits active-probing data to communication device1061and/or communication device1032transmits active-probing data to CPE104over a DSL link. In another example, communication device1061transmits active-probing data to optimization center101via communication links including1071. In one embodiment, the one or more communication devices1031-N are further operable to wait for a predetermined time before reading the operational data, including counter values related to user data traffic on the communication links1071-N. In one embodiment, the predetermined time is in the range of 0.001 seconds to 60 seconds. In other embodiments, other waiting periods may be used. In one embodiment, the waiting period is programmable by software or hardware. So as not to obscure the embodiments of the disclosure, communication devices1031,1032,104, and optimization center101are discussed; the same discussion is applicable to the other communication devices. In one embodiment, the communication device1031is further operable to receive a report indicating the amount of data, or the data itself, received by the other communication device (e.g., optimization center101and/or communication device1032). In one embodiment, the one or more communication devices1031-N are operable to read operational data which includes data related to the channel (e.g., links1071-N, links between105and1061-M, links between1031and1061-M, and/or DSL links between1032and104) and its noise condition, data relevant to the current settings of the communication devices1031-N, and counter values related to user data traffic between the communication devices1031-N and another communication device (e.g., optimization center101,105,1061-M,104, etc.), wherein the operational data is relevant to the current settings of the communication device. Examples of such operational data are successful transmit packet counts, successful receive packet counts, ACK packet counts, error packet counts, discarded packet counts, retransmission counts, etc. In one embodiment, the one or more communication devices are operable to execute active probing fewer times than they execute passive probing. For example, active probing is executed at most 5 times per day because it is an intrusive process, while passive probing is executed 1440 times per day (e.g., every minute). In one embodiment, the one or more communication devices1031-N are operable to train their respective performance estimation algorithms102according to the active-probing data and the operational data.
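A crude flooding probe of the kind described above can be sketched with nothing but the Python standard library. This is an illustration, not the disclosure's probe: the host, port, and duration are hypothetical, and a real probe would also collect the receiver's report of delivered bytes rather than counting only what the sender pushed into the socket.

```python
import socket
import time

def flood_probe(host, port, duration=2.0, chunk=64 * 1024):
    """Saturate the link toward (host, port) for `duration` seconds and
    return the offered load in bytes/second (sender-side view only)."""
    payload = b"\x00" * chunk
    sent = 0
    deadline = time.monotonic() + duration
    with socket.create_connection((host, port)) as sock:
        while time.monotonic() < deadline:
            sent += sock.send(payload)  # send() returns bytes accepted
    return sent / duration
```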
In one embodiment, the one or more communication devices1031-N are operable to, prior to executing active probing, read operational data (i.e., perform passive probing) from counter values related to the user data traffic on communication links, for example, links1071-N, links between105and1061-M, links between1031and1061-M, and/or DSL links between1032and104. In one embodiment, the counter values include at least one of packet error counts, packet retransmission counts, successful ACK message counts, etc. In one embodiment, the one or more communication devices1031-N are operable to read operational data (i.e., execute passive probing) during or after executing active probing. The accuracy of the performance estimation algorithm may depend on the characteristics of the user's traffic patterns and the characteristics of the noise and channel environments. In one environment, noise and channel might vary frequently. In another environment, noise and channel might vary very infrequently. In yet another environment, noise and channel might vary frequently but mostly between two states only. In one embodiment, the performance estimation algorithm102for each device is therefore adaptively tuned. In one embodiment, the one or more communication devices1031-N are operable to train the performance estimation algorithm102by updating the performance estimation algorithm102as a function of one or more criteria including at least one of: time of day, time of the week, type of communication device, manufacturer and model of equipment, equipment characteristics, firmware, backbone limitations, user's network usage pattern, radio-frequency (RF) characteristics including at least one of signal power, frequency bands, and mode of operation, environment statistics, or data on the operation of communication devices adjacent to the communication device, wherein the data includes at least one of interference channels and levels. In one embodiment, the one or more communication devices1031-N are operable to compute throughput of the communication devices1031-N using active-probing data for training the performance estimation algorithm. In one embodiment, the one or more communication devices1031-N are operable to transmit the active-probing data and the read operational data over the communication links1071-N to the optimization center101(e.g., a server), where the operational data is related to user data traffic from the one or more communication devices1031-N before, during, and/or after executing active probing. In one embodiment, the optimization center101is operable to train the performance estimation algorithm102for the communication device according to the active-probing data and the operational data read from the one or more communication devices1031-N. In one embodiment, the optimization center101is operable to apply a machine learning algorithm for training the performance estimation algorithm for the communication device. In this embodiment, the accurate active-probing data is used together with passive-probing data for machine learning, and the performance estimation algorithm102, which uses only the passive data as input, is determined accordingly.
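The disclosure names the Support Vector Machine as one suitable learner, so a minimal supervised-training sketch might pair passive features (drawn from the criteria above: counter deltas, error and retransmission counts, signal strength, and so on) with active-probing throughput as the label. The feature selection and numeric values below are illustrative assumptions, and scikit-learn's SVR is used here merely as one concrete SVM regressor.

```python
import numpy as np
from sklearn.svm import SVR

# One row of passive-probing features per active-probe run:
# [counter-delta bytes/s, error count, retransmission count, RSSI dBm]
X = np.array([[1.2e6, 3, 17, -61],
              [4.8e6, 0,  2, -55],
              [0.9e6, 9, 40, -72]])      # illustrative values only
# Label: "ground truth" throughput measured by the active probe (bits/s).
y = np.array([11.0e6, 54.0e6, 6.5e6])

estimator = SVR(kernel="rbf").fit(X, y)  # train on the paired data

# Thereafter, throughput is estimated from passive data alone,
# without interrupting customer traffic:
print(estimator.predict([[2.0e6, 1, 5, -58]]))
```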
For example, the optimization center101(or any other communication device) may apply one or more of: decision tree learning, association rule learning, artificial neural network learning, genetic programming, inductive logic programming, the support vector machine approach, clustering, Bayesian-network-based probabilistic graphical models, reinforcement learning, representation learning, sparse dictionary learning, etc. In other embodiments, other machine learning algorithms may be used. While the embodiments herein describe the machine learning algorithm as applied by the optimization center101, any communication device may have executable instructions and associated hardware to apply and perform machine learning for training the performance estimation algorithm. In one embodiment, after completing the training process for the performance estimation algorithm, the network100can be monitored with operational data (data from passive probing) without any interruption to user traffic. In one embodiment, active probing can be initiated by any communication device infrequently and/or dynamically, depending on the need for accurate performance estimation and the need for training data for updating the operational-data estimator. For example, when the performance of the network falls below a threshold and the performance estimation does not provide accurate data, the communication device1032may invoke active probing to train the performance estimation algorithm so that the network100can be monitored via operational data in the future. FIG.2is a flowchart200for training the performance estimation algorithm, according to one embodiment of the disclosure. Although the blocks in the flowchart with reference toFIG.2are shown in a particular order, the order of the actions can be modified. Thus, the illustrated embodiments can be performed in a different order, and some actions/blocks may be performed in parallel. The flowchart ofFIG.2is illustrated with reference to the embodiments ofFIG.1. So as not to obscure the embodiment of this flowchart, the details of each method step are not reiterated. In one embodiment, the method comprises recording running values of counters related to data traffic on communication links, for example, links1071-N, links between105and1061-M, links between1031and1061-M, and/or DSL links between1032and104. In one embodiment, the running values of the counters include at least one of packet error counts, packet retransmission counts, successful ACK message counts, etc. For example, B1is the total transmitted bytes recorded by the counters. In such an embodiment, the operational counters increase in count value for successfully delivered packets. In one embodiment, the communication device (e.g.,1031or the optimization center101) begins to execute active probing. In such an embodiment, active-probing data is transmitted from the communication device (e.g.,1031,105,1032, or the optimization center101) to another communication device (e.g.,101,1061-M, or104) via the respective communication links (e.g., links1071-N, links between105and1061-M, and/or links between1032and104). In one embodiment, after waiting for ‘t’ seconds (e.g., 0.001 seconds to 60 seconds), the operational counter values are read again; for example, a total of B2 transmitted bytes is now recorded from the operational counters. In one embodiment, throughput is then calculated as throughput=(B2−B1)/t in bytes/second.
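The B1/B2 counter arithmetic above is easy to make concrete. The sketch below assumes a Linux host, where cumulative per-interface byte counters are exposed under /sys/class/net; on an actual CPE or DSL modem the equivalent counters would typically be read via a management protocol instead, so treat the counter source as an assumption of this sketch.

```python
import time

def read_tx_bytes(iface):
    # Cumulative transmit-byte counter (Linux-specific path).
    with open(f"/sys/class/net/{iface}/statistics/tx_bytes") as f:
        return int(f.read())

def passive_throughput(iface, t=1.0):
    b1 = read_tx_bytes(iface)  # B1: counter snapshot
    time.sleep(t)              # wait 't' seconds (0.001 s to 60 s above)
    b2 = read_tx_bytes(iface)  # B2: counter snapshot after the wait
    return (b2 - b1) / t       # throughput = (B2 - B1) / t, bytes/second
```

As the next paragraph notes, this naive figure is only as good as the traffic that happened to flow during the window, which is exactly why it is calibrated against active probing.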
The calculated throughput may not be accurate, owing to bias in the bytes reported by the operational data compared to the actual user data bytes that were sent. Another reason for an inaccurate calculated throughput is that the reported bytes may be much lower than the capacity of the link simply because the user did not use the link heavily enough and did not generate enough traffic to cause the counters to increase at full speed. In one embodiment, such bias and inaccuracy in the calculated throughput are detected by comparing the throughput calculated from operational data with the throughput calculated from active-probing data. In such an embodiment, the method discussed herein can be used to arrive at a more accurate throughput estimation algorithm than the straightforward but inaccurate method of using (B2−B1)/t. At block201, the communication device (e.g., one or more of1031-N,105, and/or the optimization center101) reads operational data associated with the physical or Media Access Control (MAC) layer (e.g., gateway) of the communication device. For example, the communication device1032reads operational data associated with the DSL link between the communication device1032and the CPE104. At block202, the communication device executes active probing. For example, test data is transmitted and received over links1071-N, links between devices105and1061-M, or links between1031and1061-M. In another example, test data is transmitted and received over DSL links between1032and104. In other embodiments, test data from active probing is transmitted and received over other links and other communication devices. At block203, the communication device1032reads operational data again, following the execution of active probing. In this embodiment, the counter values that correspond to the passive data, or operational data, are read again, and their contents (counter values) now represent a snapshot of network performance. On their own, the counter values may not provide an accurate snapshot of network performance in the absence of a performance estimation algorithm trained with active-probing data for the link. At block204, the Optimization Center101uses the counter values (passive data, i.e., operational data) along with the active data determined by executing active probing to train the performance estimation algorithm102. While the embodiments herein are explained using the Optimization Center101for training the performance estimation algorithm102, any other communication device (ofFIG.1) in the network may be used for training the performance estimation algorithm102. In one embodiment, the communication device1032can use the data to train the performance estimation algorithm102. In one embodiment, the Optimization Center101continues to refine the performance estimation algorithm102using the operational data, because the operational data now carries more relevant information after active probing has been executed, which normally generates full traffic (e.g., by flooding the links). In such an embodiment, the execution of active probing can be limited so that data traffic is not interrupted. For example, the performance estimation algorithm102is updated using operational data, which now provides an accurate estimation of the network performance. FIG.3is a flowchart300for training the performance estimation algorithm for a communication device by a server, according to one embodiment of the disclosure. As mentioned before, any one of the communication devices1031-N may serve as the server as well.
Although the blocks in the flowchart with reference toFIG.3are shown in a particular order, the order of the actions can be modified. Thus, the illustrated embodiments can be performed in a different order, and some actions/blocks may be performed in parallel. The flowchart ofFIG.3is illustrated with reference to the embodiments ofFIGS.1-2. The flowchart300is illustrated with reference to activities performed at the server end301and activities performed at the communication device end302. At block303, the communication device1032executes active probing. For example, the communication device1032sends test data over the communication link1072to the server101and then receives the active data from the server101. In another example, the communication device1032sends test data over the DSL link to the CPE104, which behaves like a server, and then receives active data from the CPE104. In the embodiments discussed herein, any of the communication devices may behave as a server to process data (active and/or passive) for updating the performance estimation algorithm. At block304, the communication device1032executes passive probing, i.e., reads operational data. At block305, the communication device1032transmits the operational data to the server end301. For example, the communication device1032transmits operational data over the communication link1072to the server101. In another example, the communication device1032transmits operational data over the DSL link to the CPE104, which behaves like a server. At block306, the operational data is received at the server end301. For example, operational data is received by the server101. In another example, operational data is received over the DSL link by the CPE104, which behaves as a server. At block307, the communication device at the server end301trains the performance estimation algorithm102according to the data received from active probing and/or the operational data (passive-probing data). At block308, the trained algorithm is sent to the communication device1032, which may use the trained algorithm to gauge its own performance. As discussed herein, performance estimation using operational data is not intrusive, as opposed to using traditional network monitoring utilities (NMUs) with active probing. Operational data is generally readily available and can be used for continuous updating or training of the performance estimation algorithm and for evaluating network performance. In one embodiment, accurate NMUs are used intermittently (e.g., once a week) to calibrate, enhance, or fine-tune operational-data-based performance estimation methods. In such an embodiment, operational data is used to continuously monitor the network while NMUs are used intermittently to calibrate the performance estimation methods. The results obtained from the NMUs and the operational data can be combined using a learning-based algorithm. For example, throughput estimates of the network obtained using operational data can be calibrated by active probing of the network using NMU-based techniques. In the situation where each communication link, for example, links1071-N, links between105and1061-M, links between1031and1061-M, and/or DSL links between1032and104, is unique, results from the NMUs and the operational data can be used in link-tailored algorithms.
For example, a particular link, e.g., one of links1071-N, links between105and1061-M, links between1031and1061-M, and/or the DSL link between1032and104, may have very high data traffic which does not allow for frequent calibrations using NMUs, because executing NMUs interferes with user traffic. In such an embodiment, the learning algorithm may combine the occasional result from the NMU and the more frequent results from the operational data (passive data from passive probing) to tune the performance estimation algorithm to suit the particular link's operational data characteristics. In some embodiments, relevant operational data fields may be unavailable, but their absence is accommodated by the occasional per-link calibration using NMU measurements. In one example, patterns in the transmission and reception characteristics may be identified using operational data (i.e., passive-probing data) and confirmed (or calibrated) using NMUs (i.e., active probing). In one embodiment, such patterns in the transmission and reception characteristics may be based on time, traffic, channel, application, etc. These patterns can also be used for performance estimation. In another example, performance estimation or performance evaluation of a network may be performed in real time, using real-time data, by a user of the communication device1032. For example, a user who wants to perform self-diagnosis of the communication device1032may initiate performance estimation, which executes active probing and reads operational data. In another example, a service provider may monitor performance of a network and diagnose a communication link in the network in response to a help request from a customer. FIG.4is a processor-based system400having a machine-readable storage medium404with computer-executable instructions102/404aoperable to estimate and improve a communication system performance estimation algorithm, according to one embodiment of the disclosure. The storage medium and associated computer-executable instructions may be in any of the communication devices and/or servers discussed herein. The computer/machine-readable and executable instructions102/404aare executed by a processor401. Elements of embodiments are provided as a machine-readable medium for storing the computer-executable instructions (e.g., instructions to implement the flowcharts ofFIGS.2-3and other processes discussed in the description). In one embodiment, the processor-based system400further comprises a database402to store data used by the instructions102/404a. In one embodiment, the processor-based system400includes a network interface405to communicate with other devices. In one embodiment, the components of the processor-based system400communicate with one another via a network bus403. The machine-readable storage medium404may include, but is not limited to, flash memory, optical disks, hard disk drives (HDD), Solid State Drives (SSD), CD-ROMs (Compact Disc Read-Only Memory), DVD-ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, or other types of machine-readable media suitable for storing electronic or computer-executable instructions. For example, embodiments of the disclosure may be downloaded as a computer program (e.g., BIOS) which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals via a communication link (e.g., a modem or network connection).
Reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of “an embodiment,” “one embodiment,” or “some embodiments” are not necessarily all referring to the same embodiments. If the specification states a component, feature, structure, or characteristic “may,” “might,” or “could” be included, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to “a” or “an” element, that does not mean there is only one of the elements. If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional element. Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment anywhere the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive. While the disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of such embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. The embodiments of the disclosure are intended to embrace all such alternatives, modifications, and variations as fall within the broad scope of the appended claims. The following examples pertain to further embodiments. Specifics in the examples may be used anywhere in one or more embodiments. All optional features of the apparatus described herein may also be implemented with respect to a method or process. For example, in one embodiment, a method for performance estimation of a communication device comprises: executing active probing to determine active-probing data; reading operational data which includes data related to the channel and its noise condition and counter values related to user data traffic between the communication device and another communication device, wherein the operational data is relevant to the current settings of the communication device; and training a performance estimation algorithm for the communication device according to the active-probing data and the operational data. In one embodiment, the method further comprises: prior to executing active probing, reading operational data. In one embodiment, reading operational data is performed during or after executing active probing. In one embodiment, training the performance estimation algorithm comprises: updating the performance estimation algorithm as a function of one or more criteria including at least one of: time of day, time of the week, type of communication device, manufacturer and model of equipment, equipment characteristics, firmware, backbone limitations, user's network usage pattern, RF characteristics including at least one of: signal power, path loss, noise level, frequency bands and mode of operation, environment statistics, or data on operation of communication devices adjacent to the communication device, wherein the data includes at least one of interference channels and levels.
In one embodiment, executing active probing comprises: transmitting active probing data from the communication device to the other communication device; and waiting for a predetermined time before reading the operational data. In one embodiment, executing active probing comprises: transmitting active probing data from the communication device to the other communication device; and receiving a report indicating amount of data or data received by the other communication device. In one embodiment, executing active probing comprises: transmitting traffic from the communication device to the other communication device; and recording measured data associated with the transmitted traffic. In one embodiment, the method further comprises: computing at least one of throughput of the communication device, connectivity, latency, jitter, or error rate using active probing data for training the performance estimation algorithm. In one embodiment, executing active probing is performed fewer times than executing passive probing. In one embodiment, the method further comprises: transmitting the active probing data and read operational data to a server, before, during and/or after executing active probing. In one embodiment, the server to train the performance estimation algorithm for the communication device according to active probing data and read operational data from the communication device and other communication devices. In one embodiment, the server to apply a machine learning algorithm for training the performance estimation algorithm for the communication device. In one embodiment, the communication device comprises at least one of: an access point (AP); a base station; a wireless local area network (LAN) device; a digital subscriber line access multiplexer (DSLAM); a gateway; a performance enhancement device; a Digital Subscriber Line (DSL) Customer Premises Equipment (CPE) modem; an in-home powerline device; a Home Phoneline Network Alliance (HPNA) based device; an in-home coax distribution device; a G.hn (Global Home Networking Standard) compatible device; an in-home metering communication device; an in-home appliance communicatively interfaced with the LAN; a wireless femtocell base station; a wireless WiFi compatible base station; a wireless mobile device repeater; a wireless mobile device base station; nodes within an ad-hoc/mesh network; a set-top box (STB)/set-top unit (STU) customer electronics device; an Internet Protocol (IP) enabled television; an IP enabled media player; an IP enabled gaming console; an Ethernet gateway; a computing device connected to the LAN; an Ethernet connected computer peripheral device; an Ethernet connected router; an Ethernet connected wireless bridge; an Ethernet connected network bridge; and an Ethernet connected network switch. In another example, in one embodiment there is a machine-readable storage medium for storing machine-executable instructions that when executed cause a processor to perform a method according to the method discussed herein. 
In another example, a system comprises: an optimization center communicatively coupled to one or more communication devices, wherein the one or more communication devices are operable to: execute active probing to determine active probing data; read operational data which includes data related to channel and its noise condition and counter values related to user data traffic between the communication device and another communication device, wherein the operational data is relevant to the current settings of the communication device; and train a performance estimation algorithm for the communication device according to the active probing data and the operational data. In one embodiment, the optimization center is implemented as a server or as a communication device from among the one or more communication devices. In one embodiment, the one or more communication devices are operable to, prior to executing active probing, read operational data. In one embodiment, the one or more communication devices are operable to read operational data during or after executing active probing. In one embodiment, the one or more communication devices are operable to train the performance estimation algorithm by updating the performance estimation algorithm as a function of one or more criteria including at least one of: time of day, time of the week, type of communication device, manufacturer and model of equipment, equipment characteristics, firmware, backbone limitations, user's network usage pattern, RF characteristics including at least one of: signal power, path loss, noise level, frequency bands and mode of operation, environment statistics, or data on operation of communication devices adjacent to the communication device, wherein the data includes at least one of interference channels and levels. In one embodiment, the one or more communication devices are operable to execute active probing by: transmitting active probing data from the communication device to the other communication device; and waiting for a predetermined time before reading the operational data. In one embodiment, the one or more communication devices are operable to execute active probing by: transmitting active probing data from the communication device to the other communication device; and receiving a report indicating amount of data or data received by the other communication device. In one embodiment, the one or more communication devices are operable to execute active probing by: transmitting traffic from the communication device to the other communication device; and recording measured data associated with the transmitted traffic. In one embodiment, the one or more communication devices are operable to compute at least one of throughput of the communication device, connectivity, latency, jitter, or error rate using active probing data for training the performance estimation algorithm. In one embodiment, the one or more communication devices are operable to execute active probing fewer times than to execute passive probing. In one embodiment, the one or more communication devices are operable to: transmit the active probing data and read operational data to a server, before, during and/or after executing active probing. In one embodiment, the server is operable to train the performance estimation algorithm for the communication device according to active probing data and read operational data from the communication device and other communication devices. 
In one embodiment, the server is operable to apply a machine learning algorithm for training the performance estimation algorithm for the communication device. In one embodiment, the communication device comprises at least one of: an access point (AP); a base station; a wireless local area network (LAN) device; a digital subscriber line access multiplexer (DSLAM); a gateway; a performance enhancement device; a Digital Subscriber Line (DSL) Customer Premises Equipment (CPE) modem; an in-home powerline device; a Home Phoneline Network Alliance (HPNA) based device; an in-home coax distribution device; a G.hn (Global Home Networking Standard) compatible device; an in-home metering communication device; an in-home appliance communicatively interfaced with the LAN; a wireless femtocell base station; a wireless WiFi compatible base station; a wireless mobile device repeater; a wireless mobile device base station; nodes within an ad-hoc/mesh network; a set-top box (STB)/set-top unit (STU) customer electronics device; an Internet Protocol (IP) enabled television; an IP enabled media player; an IP enabled gaming console; an Ethernet gateway; a computing device connected to the LAN; an Ethernet connected computer peripheral device; an Ethernet connected router; an Ethernet connected wireless bridge; an Ethernet connected network bridge; and an Ethernet connected network switch. In another example, in one embodiment a method for performance estimation of a communication device, the method comprises: receiving operational data including counter values from the communication device after executing active probing and passive probing, the counter values related to user data traffic from the communication device to another communication device; and training a performance estimation algorithm for the communication device according to the operational data before or after executing active probing. In one embodiment, the method further comprises: prior to executing active probing, receiving operational data. In one embodiment, the operational data is received during or after executing active probing. In one embodiment, training the performance estimation algorithm comprises: updating the performance estimation algorithm as a function of one or more criteria including at least one of: time of day, time of the week, type of communication device, manufacturer and model of equipment, equipment characteristics, firmware, backbone limitations, user's network usage pattern, RF characteristics including at least one of: signal power, path loss, noise level, frequency bands and mode of operation, environment statistics, or data on operation of communication devices adjacent to the communication device, wherein the data includes at least one of interference channels and levels. In one embodiment, executing active probing comprises: transmitting active probing data from the communication device to the other communication device; and waiting for a predetermined time before reading the operational data. In one embodiment, executing active probing comprises: transmitting active probing data from the communication device to the other communication device; and receiving a report indicating amount of data or data received by the other communication device. In one embodiment, the method further comprises: computing at least one of throughput of the communication device, connectivity, latency, jitter, or error rate using active probing data for training the performance estimation algorithm. 
In one embodiment, executing active probing is performed fewer times than executing passive probing. In one embodiment, the method further comprises: receiving the active probing data and read operational data, the operational data related to user data traffic from the communication device before, during and/or after executing active probing. In one embodiment, training the performance estimation algorithm for the communication device is performed according to active probing data and read operational data from the communication device and other communication devices. In one embodiment, training the performance estimation algorithm comprises applying a machine learning algorithm. In one embodiment, the communication device comprises at least one of: an access point (AP); a base station; a wireless local area network (LAN) device; a digital subscriber line access multiplexer (DSLAM); a gateway; a performance enhancement device; a Digital Subscriber Line (DSL) Customer Premises Equipment (CPE) modem; an in-home powerline device; a Home Phoneline Network Alliance (HPNA) based device; an in-home coax distribution device; a G.hn (Global Home Networking Standard) compatible device; an in-home metering communication device; an in-home appliance communicatively interfaced with the LAN; a wireless femtocell base station; a wireless WiFi compatible base station; a wireless mobile device repeater; a wireless mobile device base station; nodes within an ad-hoc/mesh network; a set-top box (STB)/set-top unit (STU) customer electronics device; an Internet Protocol (IP) enabled television; an IP enabled media player; an IP enabled gaming console; an Ethernet gateway; a computing device connected to the LAN; an Ethernet connected computer peripheral device; an Ethernet connected router; an Ethernet connected wireless bridge; an Ethernet connected network bridge; and an Ethernet connected network switch. In yet another example, there is a machine-readable storage medium for storing machine-executable instructions that when executed cause a processor to perform the method discussed herein. An abstract is provided that will allow the reader to ascertain the nature and gist of the technical disclosure. The abstract is submitted with the understanding that it will not be used to limit the scope or meaning of the claims. The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment.
11943645
DESCRIPTION OF EMBODIMENTS FIG.1shows a relationship between a desired signal and an interference signal, and a relationship between an interference band rate and a mapping function.FIG.2shows a configuration example of a wireless communication characteristics evaluation device of the present invention.FIGS.1and2correspond toFIGS.8and9, which illustrate a conventional method. InFIGS.1and2, an interference band rate calculation unit1receives interference power information, such as the power and band of an interference signal, and calculates an interference band rate L representing the rate at which the band of the interference signal overlaps with the band of a desired signal. Note that, defining the interference band rate L to be 1 when the interference signal completely overlaps the whole channel band occupied by the desired signal and 0 when the interference signal does not overlap it at all, L=¾ in the examples ofFIGS.1and8, because the bands that the interference signal overlaps are c2 to c4 among the bands c1 to c4 of the desired signal. A steady noise power mapping unit2specifies interference power P2[dBm] and a mapping function corresponding to the interference band rate L. Since L=¾ in the example ofFIG.1, a mapping function M1(P2) is used to convert the interference power P2[dBm] to steady noise power Ns [dBm]. A real SINR calculation unit3determines a real SINReff from this steady noise power Ns [dBm] and the received power P1[dBm] of the desired signal, and furthermore, a PER determination unit4determines the PER from the real SINReff. Note that it is assumed that the mapping function is created using measurement data in order to enhance calculation accuracy even in the simplified calculation method of the present invention. FIG.3shows an interference calculation flow of the wireless communication characteristics evaluation method of the present invention. InFIG.3, when the interference calculation is started, interference power information such as the power and band of an interference signal is acquired first (S11). Next, an interference band rate L, which is the rate at which the band of the interference signal overlaps the channel band occupied by the desired signal, is calculated (S12). Next, an interference power rate R is calculated from the interference band rate L and the actual interference power Nr (P2in the example ofFIG.1) based on data prepared in advance (details are shown inFIGS.4to6), and furthermore, steady noise power Ns (=R·Nr) is calculated from the interference power rate R and the actual interference power Nr (S13). Note that, in the example ofFIG.2, the interference power Nr is converted to the steady noise power Ns using a mapping function corresponding to the interference band rate L instead of using the interference power rate R. Next, a real SINReff is determined from the received power of the desired signal and the steady noise power Ns calculated at S13(S14). Next, the PER of a packet that the desired signal carries is determined from the real SINReff (S15), and the interference calculation is ended. Note that, since this interference calculation flow can significantly reduce the number of calculations over the whole computer simulation and, furthermore, does not have to repeat the calculation for each channel block, it simplifies the calculation itself and reduces calculation cost.
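As a rough illustration of the S11-S15 flow, the sketch below computes L from band overlap, scales the interference power Nr by an interference power rate R to obtain Ns, and maps the resulting SINReff to a PER. The `rate_table` and `per_curve` callables stand in for the data prepared in advance and the SINR-to-PER mapping; both are assumptions of this sketch, not structures defined by the disclosure.

```python
def dbm_to_mw(p_dbm):
    # Convert a dBm level to linear power in milliwatts.
    return 10 ** (p_dbm / 10)

def interference_step(band, intf_band, p1_dbm, nr_dbm, rate_table, per_curve):
    """One interference event; bands are (low, high) tuples in Hz."""
    # S12: interference band rate L = overlapped fraction of the channel.
    lo, hi = band
    overlap = max(0.0, min(hi, intf_band[1]) - max(lo, intf_band[0]))
    L = overlap / (hi - lo)
    # S13: interference power rate R from pre-prepared data, then
    # steady noise power Ns = R * Nr (computed in linear units, mW).
    R = rate_table(L)
    ns_mw = R * dbm_to_mw(nr_dbm)
    # S14: real SINR_eff from the desired received power P1 and Ns.
    sinr_eff = dbm_to_mw(p1_dbm) / ns_mw
    # S15: PER of the packet carried by the desired signal.
    return per_curve(sinr_eff)
```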
Therefore, the interference calculation flow is advantageous when the calculation is executed each time an interference event occurs or when fixed values are used for the prerequisites of the computer simulation. Three procedures for calculating the interference power rate R, used to calculate the steady noise power Ns from measurement data, will be described below. FIG.4shows a first procedure for calculating the interference power rate R from measurement data. InFIG.4, measurement data is acquired first (S20). Note that the measurement data required for the present procedure is an RSSI, a modulation/demodulation scheme (hereinafter referred to as an MCS), a retransmission rate for each MCS, and a noise factor or an NF (Noise Floor) determined from the noise factor. Next, an SINRa in the state in which there is no interference is calculated (S21). As a method for the calculation, for example, the following two methods are conceivable. (a) The SINRa in the state in which there is no interference is calculated from an RSSI in the state in which there is no interference and the noise factor of a receiving terminal. (b) The used MCS and its retransmission rate are examined from packet capture data acquired in the state in which there is no interference; the PER for that MCS is checked; and the corresponding SINR is determined as the SINRa in the state in which there is no interference. Note that, in order to determine an SINR from a corresponding PER, map functions of the SINR and the PER and the data shown in Non-Patent Literature 2 are used. Otherwise, an average value among SINRs examined over a plurality of MCSs, or an expected value weighted by a frequency rate, is calculated. Next, an SINRb in the state in which there is interference at the interference band rate L is calculated (S22). As a method for the calculation, for example, the following method is conceivable. The used MCS and its retransmission rate are acquired from packet capture data acquired in the state in which there is interference, and the SINR corresponding to the retransmission rate (PER) in the case where transmission is performed with that MCS is determined as the SINRb in the state in which there is interference. Note that, in order to determine an SINR from a corresponding PER, map functions of the SINR and the PER and the data shown in Non-Patent Literature 2 are used. Otherwise, an average value among SINRs examined over a plurality of MCSs, or an expected value weighted by a frequency rate, is calculated. Next, the increased real interference power N is calculated from the two SINRs, the SINRa and the SINRb (S23). For example, if the SINRa without interference and the SINRb with interference are given as follows:

SINRa = RSSI/NF
SINRb = RSSI/(NF+N)

then the real interference power N is:

N = (SINRa/SINRb − 1)·NF

Next, the ratio between the real interference power N and the actual interference power Nr (N/Nr) is calculated and set as the interference power rate R (S24). Otherwise, a mapping function may be created from a plurality of pieces of data. At S13inFIG.3, the steady noise power Ns=R·Nr is calculated from the interference power Nr and the interference power rate R. FIG.5shows a second procedure for calculating the interference power rate R from measurement data. InFIG.5, measurement data is acquired first (S30). Note that the measurement data required for the present procedure is an RSSI, a throughput, and PER-to-MCS data. Next, an SINRa in the state in which there is no interference is calculated (S31). The method for the calculation is the same as S21shown inFIG.4.
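Before continuing to the remaining steps of the second procedure, the arithmetic of the first procedure (S21, S23, S24 of FIG. 4) can be made concrete. The sketch converts dBm quantities to linear milliwatts, since the relations SINRa = RSSI/NF and SINRb = RSSI/(NF+N) hold between linear powers; the argument sinr_b is assumed to have been looked up from the MCS/retransmission-rate data as described above.

```python
def interference_power_rate(rssi_dbm, nf_dbm, sinr_b, nr_dbm):
    """Return R = N / Nr per the first procedure of FIG. 4.

    sinr_b -- linear SINR observed under interference (from S22)
    """
    rssi = 10 ** (rssi_dbm / 10)    # convert dBm to milliwatts
    nf = 10 ** (nf_dbm / 10)
    nr = 10 ** (nr_dbm / 10)
    sinr_a = rssi / nf              # S21: SINR without interference
    n = (sinr_a / sinr_b - 1) * nf  # S23: increased real interference power
    return n / nr                   # S24: interference power rate R
```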
Next, the corresponding MCS is determined from the throughput values in the state in which there is interference at the interference band rate L (S32). Here, the throughput values obtained when transmission is performed with each MCS are kept as data, and the value closest to the measured value, or the closest value among the values higher/lower than the measured value, is selected. Next, an SINR at which transmission can be performed with that MCS is looked up on a datasheet or the like, and that SINR is determined as the SINRb in the state in which there is interference at the interference band rate L (S33). Here, for example, an SINR at which the PER falls below a predetermined value may be specified, or one value among the SINRs at which transmission with the MCS is thought to be possible may be chosen, such as an intermediate value between the SINR at which the PER falls below a predetermined value for the MCS (sinrf) and the SINR at which the PER falls below the predetermined value for the MCS one step higher (sinru). Next, the increased real interference power N is calculated from the two SINRs, the SINRa and the SINRb (S34). This process is the same as that of S23shown inFIG.4. Next, the ratio between the real interference power N and the actual interference power Nr (N/Nr) is calculated and set as the interference power rate R (S35). Otherwise, a mapping function may be created from a plurality of pieces of data. This process is the same as that of S24shown inFIG.4. FIG.6shows a third procedure for calculating the interference power rate R from measurement data. InFIG.6, measurement data is acquired first (S40). Note that the measurement data required for the present procedure is an RSSI, a throughput, and SINR-to-throughput data. Next, an SINRa in the state in which there is no interference is calculated (S41). The method for the calculation is the same as S21shown inFIG.4. Next, the SINR for the throughput value in the state in which there is interference at the interference band rate L is looked up on a datasheet or the like, and that SINR is determined as the SINRb in the state in which there is interference at the interference band rate L (S42). From SINR-to-throughput data or the like obtained by theoretical calculation, computer simulation, or pre-measurement, throughputs and SINRs are mapped and made into a datasheet, and the SINRb in the state in which there is interference is selected from the datasheet.FIG.7shows a graph example of the SINR-to-throughput data.FIG.7plots the relationship between the theoretically calculated throughput and the SINR in the case of IEEE 802.11ac (20 MHz, 1 ss). For each SINR, the MCS having the largest throughput can be mapped one to one, as shown by the thick lines. Next, the increased real interference power N is calculated from the two SINRs, the SINRa and the SINRb (S43). This process is the same as that of S23shown inFIG.4. Next, the ratio between the real interference power N and the actual interference power Nr (N/Nr) is calculated and set as the interference power rate R (S44). Otherwise, a mapping function may be created from a plurality of pieces of data. This process is the same as that of S24shown inFIG.4.

REFERENCE SIGNS LIST
1 Interference band rate calculation unit
2 Steady noise power mapping unit
3 Real SINR calculation unit
4 PER determination unit
11943646
DETAILED DESCRIPTION In the present specification, “A or B” may mean “only A”, “only B”, or “both A and B”. In other words, in the present specification, “A or B” may be interpreted as “A and/or B”. For example, in the present specification, “A, B, or C” may mean “only A”, “only B”, “only C”, or “any combination of A, B, and C”. A slash (/) or comma used in the present specification may mean “and/or”. For example, “A/B” may mean “A and/or B”. Accordingly, “A/B” may mean “only A”, “only B”, or “both A and B”. For example, “A, B, C” may mean “A, B, or C”. In the present specification, “at least one of A and B” may mean “only A”, “only B”, or “both A and B”. In addition, in the present specification, the expression “at least one of A or B” or “at least one of A and/or B” may be interpreted as “at least one of A and B”. In addition, in the present specification, “at least one of A, B, and C” may mean “only A”, “only B”, “only C”, or “any combination of A, B, and C”. In addition, “at least one of A, B, or C” or “at least one of A, B, and/or C” may mean “at least one of A, B, and C”. Technical features described individually in one figure in the present specification may be implemented individually or simultaneously. The following examples of the present specification may be applied to various wireless communication systems. For example, the following examples of the present specification may be applied to a wireless local area network (WLAN) system. For example, the present specification may be applied to the IEEE 802.11ad standard or the IEEE 802.11ay standard. In addition, the present specification may also be applied to the newly proposed WLAN sensing standard, i.e., the IEEE 802.11bf standard. Hereinafter, in order to describe the technical features of the present specification, technical features applicable to the present specification will be described. WLAN sensing technology is a sort of radar technology which can be implemented without a standard, but it is conceived that more powerful performance can be obtained through standardization. The IEEE 802.11bf standard defines the apparatuses/devices participating in WLAN sensing by function, as shown in the following table. According to its function, an apparatus may be classified as an apparatus initiating WLAN sensing or an apparatus participating in the sensing, and as an apparatus transmitting a sensing physical layer protocol data unit (PPDU) or an apparatus receiving the PPDU.

TABLE 1
Terminology          Function
Sensing Initiator    apparatus/device initiating sensing
Sensing Responder    apparatus/device participating in sensing
Sensing Transmitter  apparatus/device transmitting sensing PPDU
Sensing Receiver     apparatus/device receiving sensing PPDU

FIG.1illustrates an example of a WLAN sensing scenario using multiple sensing transmitting apparatuses/devices. FIG.2illustrates an example of a WLAN sensing scenario using multiple sensing receiving apparatuses/devices. FIG.1andFIG.2illustrate sensing scenarios based on the function and deployment of WLAN sensing apparatuses/devices. In an environment assuming one sensing initiation apparatus and multiple sensing participating apparatuses,FIG.1is a scenario using multiple sensing PPDU transmitting apparatuses, andFIG.2is a scenario using multiple sensing PPDU receiving apparatuses.
Assuming that the sensing PPDU receiving apparatus includes a sensing measurement signal processing apparatus, in the case ofFIG.2, a procedure for transmitting (feeding back) a sensing measurement result to the sensing initiation apparatus (STA5) is additionally required. FIG.3illustrates an example of a WLAN sensing procedure. The WLAN sensing procedure is performed as discovery, negotiation, measurement exchange, tear down, or the like between the WLAN sensing initiation apparatus/device and the participating apparatuses/devices. The discovery is a process of identifying the sensing capability of WLAN apparatuses. The negotiation is a process of determining sensing parameters between the sensing initiation apparatus and a participating apparatus. The measurement exchange is a process of transmitting a sensing PPDU and transmitting a sensing measurement result. The tear down is a process of terminating the sensing procedure. FIG.4is an example of classifying WLAN sensing. WLAN sensing may be classified into CSI-based sensing, which uses the channel state information of a signal arriving at a receiver through a channel, and radar-based sensing, which uses a signal received after a transmission signal is reflected by an object. In addition, each sensing technology is classified again into a scheme (coordinated CSI, active radar) in which a sensing transmitter directly participates in the sensing process and a scheme (un-coordinated CSI, passive radar) in which the sensing transmitter does not participate in the sensing process, i.e., there is no dedicated transmitter participating in the sensing process. FIG.5illustrates indoor positioning which uses CSI-based WLAN sensing. InFIG.5, CSI-based WLAN sensing is utilized for indoor positioning. An angle of arrival and a time of arrival are obtained by using CSI, and then converted into orthogonal coordinates to obtain indoor positioning information. FIG.6is an example of implementing a WLAN sensing apparatus/device. InFIG.6, the WLAN sensing apparatus/device is implemented using a MATLAB toolbox, Zynq, and USRP. An IEEE 802.11ax WLAN signal is generated in the MATLAB toolbox, and an RF signal is generated using a Zynq software defined radio (SDR). The signal passing through the channel is received using a USRP SDR, and sensing signal processing is performed in the MATLAB toolbox. Herein, one reference channel (a channel which can be received directly from a sensing transmitter) and one surveillance channel (a channel which is received after reflection by an object) are assumed. As a result of analysis using the WLAN sensing apparatus/device, it is possible to obtain a unique feature capable of identifying a motion or a body action. The IEEE 802.11bf WLAN sensing standardization is in an initial stage of development at present, and it is expected that cooperative sensing technology for improving sensing accuracy will be treated as important in the future. It is expected that a synchronization technology of a sensing signal for cooperative sensing, a CSI management and usage technology, a sensing parameter negotiation and sharing technology, a scheduling technology for CSI generation, or the like will be a core subject for standardization. In addition, it is also expected that a long-distance sensing technology, a low-power sensing technology, a sensing security and privacy protection technology, or the like will be reviewed as a main agenda.
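The four phases of the FIG.3 procedure can be pictured as a simple session skeleton. This is a didactic sketch only: the initiator/responder objects and their methods are hypothetical, and the actual over-the-air frames and state machine are defined by the draft IEEE 802.11bf specification.

```python
from enum import Enum, auto

class SensingPhase(Enum):
    DISCOVERY = auto()    # identify sensing capability of WLAN apparatuses
    NEGOTIATION = auto()  # agree sensing parameters (initiator <-> responder)
    MEASUREMENT = auto()  # exchange sensing PPDUs and measurement results
    TEAR_DOWN = auto()    # terminate the sensing procedure

def run_sensing_session(initiator, responder):
    """Walk one responder through the four phases of FIG. 3."""
    caps = initiator.discover(responder)           # DISCOVERY
    params = initiator.negotiate(responder, caps)  # NEGOTIATION
    result = responder.measure(params)             # MEASUREMENT (PPDU exchange)
    initiator.collect(result)                      # feedback, as in FIG. 2
    initiator.tear_down(responder)                 # TEAR_DOWN
```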
IEEE 802.11bf WLAN sensing is a sort of radar technology using WLAN signals, which exist anywhere at any time. The following table shows typical use cases of IEEE 802.11bf, which may be utilized in a wide range of daily life such as indoor detection, motion recognition, health care, 3D vision, in-vehicle detection, or the like. Since it is mainly used indoors, the operating range is usually within 10 to 20 meters, and the required distance accuracy does not exceed 2 meters.

TABLE 2
Name | Details | Max range (m) | Key Performance Indicator | Range Accuracy (m) | Max Velocity (m/s)/Velocity Accuracy | Angular Accuracy (deg)
Room Sensing | presence detection, counting the number of people in the room | 15 | Number of persons in room | 0.5-2 | 2/0.1 | -
Smart meeting room | presence detection, counting the number of people in the room, localization of active people | 10 | Location of persons in room | 0.5-2 | 1/0.1-0.3 | -
Motion detection | Detection of motion in a room (of Human) | 10 | Motion in a room | - | - | -
Home security | Detection of presence of intruders in a home | 10 | Detection of a person in a room | 0.5-2 | 3/0.1-0.3 | medium
Audio with user tracking | Tracking persons in a room and pointing the sound of an audio system at those people | 6 | Localization of persons to within 0.2 m | 0.2 | 0.5/0.05 | 3
Store Sensing | Counting number of people in a store, their location, speed of movement. Accuracy less important | 20 | Number and location of persons in store | 0.5-2 | 1/0.1-0.3 | 3
Home Appliance Control | Tracking person and motion/gesture detection | 10 | Gesture Detection | <1 | - | -
Gesture recognition - short range (finger movement) | Identification of a gesture from a set of gestures - range <0.5 m | 0.5 | Gesture Detection | - | 7 | 3
Gesture recognition - medium range (hand movement) | Identification of a gesture from a set of gestures - range >0.5 m | 2 | Gesture Detection | - | - | -
Gesture recognition - large range (full body movement) | Identification of a gesture from a set of gestures - range >2 m | 7 | Gesture Detection | 0.2 | 2/0.15 | -
A liveliness detection | Determination whether a close-by object is alive or not | 1 | Liveliness Detection | 0.05 | - | -
Face/Body Recognition | Selection of the identity of a person from a set of known persons | 1 | Identity detection | 0.02 | - | -
Proximity Detection | Detection of object in close proximity of device | 0.5 | Object Detection | 0.02-2 | 1.5/0.2 | none
Home Appliance Control | Gesture Detection | 3 | Gesture Detection | <1 | 3/0.1 | -
Health care - Fall detection | Fall detection - abnormal position detection | 10 | - | 0.2 | 3/0.1 | -
Health care - remote diagnostics | Measurements of breathing rate, heart rate, etc. | 5 | Breathing rate accuracy/Pulse accuracy | 0.5 | 2/0.1 | -
Surveillance/Monitoring of elder people and/or children | Tracking person and presence detection | 10 | Detection and localization of person | 0.2-2 | 3/0.1 | -
Sneeze sensing | Detecting and localizing the target human and sneeze droplet volume | 10 | Detection and localization of person and sneeze droplet volume | 0.2-0.5 | 20/0.1 | -
3D vision | Building a 3D picture of an environment, using multiple STAs | 10 | Accuracy of 3D map (range, angle) | 0.01 | 5/0.12 | -
In-car sensing - detection | Detection of humans in car | 5 | Presence of human in car | 0.1 | 1/0.1 | 3
In-car sensing - detection/aid | Driver sleepiness detection | 3 | Fast detection of driver sleepiness | 0.01 | 1/0.1 | 3

In IEEE 802.11, a technology that is capable of sensing movement (or motion) or gestures of an object (person or object) by using Wi-Fi signals of various bands is being discussed. For example, it is possible to sense the movement (or motion) or gesture of an object (person or object) by using Wi-Fi signals (e.g., 802.11ad or 802.11ay signals) of the 60 GHz band. Additionally, it is also possible to sense the movement (or motion) or gesture of an object (person or object) by using Wi-Fi signals (e.g., 802.11ac, 802.11ax, 802.11be signals) of the sub-7 GHz band.
Hereinafter, technical features of a PPDU according to the 802.11ay standard, which is one of the Wi-Fi signals of the 60 GHz band that may be used for WLAN sensing, will be described in detail.

FIG. 7 briefly illustrates a PPDU structure supported in an 802.11ay WLAN system. As shown in FIG. 7, the PPDU format applicable to the 11ay system may include L-STF, L-CEF, L-Header, EDMG-Header-A, EDMG-STF, EDMG-CEF, EDMG-Header-B, Data, and TRN fields, and the aforementioned fields may be selectively included in accordance with the format of the PPDU (e.g., SU PPDU, MU PPDU, etc.). Herein, the portion including the L-STF, L-CEF, and L-Header fields may be referred to as a non-EDMG portion, and the remaining portion may be referred to as an EDMG portion. Additionally, the L-STF, L-CEF, L-Header, and EDMG-Header-A fields may be referred to as pre-EDMG modulated fields, and the remaining portions may be referred to as EDMG modulated fields.

The EDMG-Header-A field includes information required to demodulate an EDMG PPDU. The definition of the EDMG-Header-A field is the same for the EDMG SC mode PPDU and the EDMG OFDM mode PPDU, but is different from the definition for the EDMG control mode PPDU.

The structure of the EDMG-STF depends on the number of consecutive 2.16 GHz channels through which the EDMG PPDU is transmitted and the index i_STS of the i_STS-th space-time stream. For single space-time stream EDMG PPDU transmission using the EDMG SC mode through one 2.16 GHz channel, the EDMG-STF field does not exist. For EDMG SC transmission, the EDMG-STF field shall be modulated using π/2-BPSK.

The structure of the EDMG-CEF depends on the number of consecutive 2.16 GHz channels through which the EDMG PPDU is transmitted and the number of space-time streams. For single space-time stream EDMG PPDU transmission using the EDMG SC mode through one 2.16 GHz channel, the EDMG-CEF field does not exist. For EDMG SC transmission, the EDMG-CEF field shall be modulated using π/2-BPSK.

The (legacy) preamble part of the PPDU may be used for packet detection, automatic gain control (AGC), frequency offset estimation, synchronization, indication of modulation (SC or OFDM), and channel estimation. The format of the preamble may be common to both an OFDM packet and an SC packet. In this case, the preamble may be constructed of a short training field (STF) and a channel estimation (CE) field located after the STF field.
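As an illustration of the π/2-BPSK modulation required for the EDMG-STF and EDMG-CEF fields above, the following minimal sketch (plain Python; the function name is ours, not part of any standard API) maps bits to BPSK symbols and then rotates the constellation by 90 degrees per symbol index, which is the defining property of π/2-BPSK.

```python
import cmath
import math

def pi2_bpsk_modulate(bits):
    """Map bits to pi/2-BPSK symbols: plain BPSK (0 -> +1, 1 -> -1),
    then rotate the constellation by pi/2 per symbol index."""
    symbols = []
    for n, b in enumerate(bits):
        bpsk = 1.0 if b == 0 else -1.0              # BPSK mapping
        rotation = cmath.exp(1j * math.pi / 2 * n)  # cumulative 90-degree rotation
        symbols.append(bpsk * rotation)
    return symbols

# Consecutive symbols alternate between the real and imaginary axes, which
# limits phase transitions to 90 degrees and lowers the PAPR of the SC waveform.
print([f"{s.real:+.0f}{s.imag:+.0f}j" for s in pi2_bpsk_modulate([0, 0, 0, 0])])
```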
Hereinafter, an example of a sensing frame format that is proposed for performing sensing in the 60 GHz band, or WLAN sensing, will be described in detail. A frame, packet, and/or data unit that is used for performing the sensing proposed in the present specification, or the WLAN sensing, may also be referred to as a sensing frame. The sensing frame may also be referred to by using various other terms, such as sensing measurement frame, sensing operation frame, and/or measurement frame, and so on.

FIG. 8 shows an example of a sensing frame format. A Wi-Fi sensing signal may be transmitted/received for channel estimation between an AP/STA and an STA by using a Wi-Fi signal of 60 GHz. At this point, in order to support backward compatibility with the existing 60 GHz Wi-Fi signals, 802.11ad and 802.11ay, a sensing frame may be configured of the frame format shown in FIG. 8, which includes a non-EDMG preamble portion (i.e., L-STF, L-CEF, L-Header). As shown in FIG. 8, a sensing frame may be configured of L-STF, L-CEF, L-Header, EDMG-Header-A, EDMG-STF, and EDMG-CEF.

That is, since the sensing frame performs sensing on an STA or object by estimating a change in the channel between point-to-point (P2P) or point-to-multipoint (P2MP) links, unlike the conventional EDMG frame, the sensing frame may be configured without including a data field. Since an EDMG frame may be transmitted by using one or more channels of the 60 GHz band (i.e., various channel bandwidths), as shown in FIG. 8, the sensing frame may be configured to include the EDMG-STF and EDMG-CEF fields. An STA/AP may perform accurate channel information measurement in a sensing transmission/reception bandwidth (BW) by using the EDMG-STF and EDMG-CEF fields. Information on the BW that is used for the sensing may be transmitted through EDMG-Header-A. At this point, the corresponding information may be transmitted by using various BWs, as shown in the following table.

TABLE 3
Index | BW
1 | 2.16 GHz
2 | 4.32 GHz
3 | 6.48 GHz
4 | 8.64 GHz
5 | 2.16 + 2.16 GHz (non-contiguous)
6 | 4.32 + 4.32 GHz (non-contiguous)

FIG. 9 shows another example of a sensing frame format. Unlike what is described above, a sensing signal may be transmitted by using only a fixed BW (e.g., 2.16 GHz). In this case, since additional AGC and the like are not needed, the EDMG-STF may be omitted. When performing sensing by using only a predetermined BW, the EDMG-STF may be omitted, thereby configuring a sensing frame format as shown in FIG. 9. Additionally, since only a predetermined BW is used when performing sensing, unlike the conventional format, the EDMG-Header-A may not include a BW field.

FIG. 10 shows yet another example of a sensing frame format. At 60 GHz, an 802.11ay transmission basically transmits a signal by using beamforming. At this point, in order to configure an optimal beam between Tx and Rx, an antenna weight vector (AWV) is configured by using a training (i.e., TRN) field. Since the sensing frame transmits a signal by using a predetermined AWV, it is difficult for the sensing frame to accurately reflect a changed channel situation. Therefore, in order to measure any change in the channel more accurately, the sensing frame may be configured to include the TRN field, as shown in FIG. 10. At this point, the information on the channel may be measured through the TRN field. In FIG. 10, the sensing frame does not include a data field, and, since the sensing frame performs channel measurement for the sensing by using the TRN field, the above-described EDMG-CEF field for performing channel estimation may be omitted. Therefore, the sensing frame format may also be configured as shown in FIG. 11.

FIG. 11 shows yet another example of a sensing frame format.
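As a recap of the 60 GHz sensing frame variants of FIGS. 8 through 11, the sketch below composes the field list of a sensing frame from the two choices just discussed: a fixed bandwidth (which removes the need for the EDMG-STF) and TRN-based measurement (which removes the need for the EDMG-CEF). This is a simplified illustration under our own naming, not a normative field layout.

```python
def sensing_frame_fields(fixed_bw: bool, use_trn: bool) -> list:
    """Illustrative composition of a 60 GHz sensing frame (no Data field)."""
    fields = ["L-STF", "L-CEF", "L-Header", "EDMG-Header-A"]
    if not fixed_bw:
        fields.append("EDMG-STF")   # AGC over variable/aggregated bandwidths
    if use_trn:
        fields.append("TRN")        # channel measured via the training field
    else:
        fields.append("EDMG-CEF")   # classic channel estimation field
    return fields

print(sensing_frame_fields(fixed_bw=False, use_trn=False))  # FIG. 8 variant
print(sensing_frame_fields(fixed_bw=True,  use_trn=False))  # FIG. 9 variant
print(sensing_frame_fields(fixed_bw=False, use_trn=True))   # FIG. 10/11-style variant
```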
Hereinafter, the technical features of a PPDU according to a Wi-Fi signal of the sub-7 GHz band that may be used for WLAN sensing will be described in detail. Hereinafter, an example of a sensing frame format that is proposed for sensing in the sub-7 GHz band, or WLAN sensing, will be described. For example, for the sensing according to the present specification, various PPDUs of the 2.4 GHz, 5 GHz, and 6 GHz bands may be used. For example, PPDUs according to the IEEE 802.11ac, 802.11ax, and/or 802.11be standard(s) may be used as the sensing frame.

FIG. 12 shows another example of a sensing frame format. A sensing frame according to the present specification may use only part of the fields shown in FIG. 12. For example, the Data field shown in FIG. 12 may be omitted. Additionally, or alternatively, the VHT-SIG-B and/or HE-SIG-B field(s) shown in FIG. 12 may be omitted.

FIG. 13 shows another example of a sensing frame format. A sensing frame according to the present specification may use only part of the fields of an Extremely High Throughput (EHT) PPDU shown in FIG. 13. For example, the Data field shown in FIG. 13 may be omitted.

The PPDU of FIG. 13 may represent part or all of a PPDU type that is used in an EHT system. For example, the example of FIG. 13 may be used for both a single-user (SU) mode and a multi-user (MU) mode. In other words, the PPDU of FIG. 13 may be a PPDU for one receiving STA or a PPDU for multiple receiving STAs. When the PPDU of FIG. 13 is used for a trigger-based (TB) mode, the EHT-SIG of FIG. 13 may be omitted. In other words, an STA that has received a Trigger frame for uplink MU (UL-MU) communication may transmit a PPDU from which the EHT-SIG is omitted in the example of FIG. 13.

The subcarrier spacing of the L-LTF, L-STF, L-SIG, RL-SIG, U-SIG, and EHT-SIG fields of FIG. 13 may be determined as 312.5 kHz, and the subcarrier spacing of the EHT-STF, EHT-LTF, and Data fields may be determined as 78.125 kHz. That is, tone indexes (or subcarrier indexes) of the L-STF, L-LTF, L-SIG, RL-SIG, U-SIG, and EHT-SIG fields may be indicated in 312.5 kHz units, and tone indexes (or subcarrier indexes) of the EHT-STF, EHT-LTF, and Data fields may be indicated in 78.125 kHz units.

In the PPDU of FIG. 13, the L-LTF and L-STF may be the same as the corresponding fields of the related art. The L-SIG field of FIG. 13 may, for example, include 24 bits of information. For example, the 24-bit information may include a 4-bit Rate field, 1 reserved bit, a 12-bit Length field, a 1-bit Parity bit, and a 6-bit Tail field. For example, the 12-bit Length field may include information related to a PPDU length or time duration. For example, the value of the 12-bit Length field may be determined based on the type of the PPDU. For example, when the PPDU is a non-HT PPDU, an HT PPDU, a VHT PPDU, or an EHT PPDU, the value of the Length field may be determined as a multiple of 3. For example, when the PPDU is an HE PPDU, the value of the Length field may be determined as "a multiple of 3 + 1" or "a multiple of 3 + 2". In other words, the value of the Length field for a non-HT PPDU, an HT PPDU, a VHT PPDU, or an EHT PPDU may be determined as a multiple of 3, and the value of the Length field for an HE PPDU may be determined as "a multiple of 3 + 1" or "a multiple of 3 + 2".

The transmitting STA may generate an RL-SIG, which is generated identically to the L-SIG. The receiving STA may know that the received PPDU is an HE PPDU or an EHT PPDU based on the presence (or existence) of the RL-SIG.

A Universal SIG (U-SIG) may be inserted after the RL-SIG of FIG. 13. The U-SIG may also be referred to by using various terms, such as a first SIG field, a first SIG, a first-type SIG, a control signal, a control signal field, a first (type) control signal, and so on. The U-SIG may include N-bit information and may also include information for identifying the EHT PPDU type. For example, the U-SIG may be configured based on 2 symbols (e.g., two contiguous OFDM symbols). Each symbol (e.g., OFDM symbol) for the U-SIG may have a duration of 4 μs. Each symbol of the U-SIG may be used for transmitting 26-bit information. For example, each symbol of the U-SIG may be transmitted/received based on 52 data tones and 4 pilot tones. The U-SIG may be configured in 20 MHz units. For example, when an 80 MHz PPDU is configured, the U-SIG may be duplicated. That is, 4 identical U-SIGs may be included in the 80 MHz PPDU. A PPDU that exceeds the 80 MHz bandwidth may include different U-SIGs.
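Recapping the mod-3 rule for the 12-bit L-SIG Length field described above, a first classification step at a receiver can be sketched as follows (an illustrative helper under our own naming; an actual receiver combines this with further checks, such as RL-SIG detection, to finish classifying the PPDU).

```python
def classify_by_lsig_length(length: int) -> str:
    """Apply the mod-3 rule: a multiple of 3 points to a non-HT/HT/VHT/EHT
    PPDU, while a remainder of 1 or 2 points to an HE PPDU."""
    if not 0 <= length < 4096:
        raise ValueError("L-SIG Length is a 12-bit field")
    return "non-HT/HT/VHT/EHT candidate" if length % 3 == 0 else "HE candidate"

print(classify_by_lsig_length(1023))  # 1023 % 3 == 0 -> non-HT/HT/VHT/EHT candidate
print(classify_by_lsig_length(1024))  # remainder 1   -> HE candidate
```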
The EHT-SIG of FIG. 13 may include control information for the receiving STA. For example, the EHT-SIG may include a common field and a user-specific field. The common field may be omitted, and the number of user-specific fields may be determined based on the number of users.

The common field may include RU allocation information. The RU allocation information may mean information related to the location of an RU to which multiple users (i.e., multiple receiving STAs) are allocated. The RU allocation information may be configured in 9-bit units. The user-specific field may include information for decoding at least one RU specified through the common field (e.g., STA ID information that is allocated to the corresponding RU, the MCS index that is applied to the corresponding RU, LDPC/BCC coding type information that is applied to the corresponding RU, and so on).

The EHT-STF of FIG. 13 may be used for enhancing automatic gain control estimation in a multiple input multiple output (MIMO) environment or an OFDMA environment. And, the EHT-LTF of FIG. 13 may be used for estimating a channel in a MIMO environment or an OFDMA environment.

FIG. 14 shows a modified example of a transmitting device and/or receiving device of the present specification. The device of FIG. 14 may be referred to by using various other terms, such as mobile terminal, wireless device, Wireless Transmit/Receive Unit (WTRU), User Equipment (UE), Mobile Station (MS), Mobile Subscriber Unit, or, simply, user, and so on. Additionally, the device of FIG. 14 may also be referred to by using various other terms, such as Base Station, Node-B, Access Point (AP), repeater, router, relay, and so on.

A processor 610 of FIG. 14 may instruct (or indicate) and control operations that are performed by the STA, transmitting STA, receiving STA, AP, non-AP, and/or user-STA according to the present specification. For example, the processor 610 may receive a signal from a transceiver 630, process the received signal (Rx signal), generate a transmission signal (Tx signal), and perform a control operation for transmitting the signal. The illustrated processor, memory, and transceiver may be implemented individually as separate chips, or at least two blocks/functions may be implemented through a single chip.

A memory 620 of FIG. 14 may store a signal that is received (i.e., Rx signal) through the transceiver 630 and may store a signal that is to be transmitted (i.e., Tx signal) through the transceiver 630.

Referring to FIG. 14, a power management module 611 manages power for the processor 610 and/or the transceiver 630. A battery 612 supplies power to the power management module 611. A display 613 outputs a result processed by the processor 610. A keypad 614 receives inputs that are to be used by the processor 610. The keypad 614 may be displayed on the display 613. A SIM card 615 may be an integrated circuit that is used to securely store an international mobile subscriber identity (IMSI) and its related key, which are used to identify and authenticate subscribers on mobile telephony devices, such as mobile phones and computers.

Referring to FIG. 14, a speaker 640 may output a result related to a sound processed by the processor 610. And, a microphone 641 may receive an input related to a sound that is to be used by the processor 610.

Hereinafter, the methods proposed herein are described.
The development of a standard technology for sensing STA or human movements or gestures using Wi-Fi signals operating in the sub-7 GHz band is being considered. The present specification proposes a method for transmitting NDP frames to a non-AP STA to perform WLAN sensing using Wi-Fi signals, and for transmitting NDPA frames, which include information about the NDP frames, to indicate that the transmitted NDPA frames are NDPA frames for sensing (i.e., Sensing NDPA frames).

In order to improve the accuracy and resolution of WLAN sensing, WLAN sensing utilizing signal transmission and reception channels between multiple sensing STAs is considered. The sensing STAs may include STAs and APs. Therefore, in order to efficiently perform WLAN sensing using signal transmission and reception channels between a sensing initiator and multiple sensing responders, channel estimation for each transmission and reception channel may be required. In order to efficiently perform channel measurement for the multiple transmit and receive channels used for sensing, channel estimation using the transmission of null data packet (NDP) frames may be used in the sensing procedure. Information about the transmission of the NDP frames may be included in the NDPA frame.

The channel measurement for the sensing operation may be performed as follows. FIG. 15 illustrates an example of a sensing operation. FIG. 16 illustrates another example of a sensing operation. The examples of FIGS. 15 and 16 are examples of sensing measurements, and in these examples, the initiator may be an AP or a non-AP STA.

Referring to FIG. 15, if n responders are present, the initiator may transmit an NDPA frame to the n responders. After a SIFS elapses from the time of transmitting the NDPA frame, the initiator may transmit an NDP frame to the n responders. Alternatively, referring to FIG. 16, if n responders are present, the initiator may transmit a trigger frame (TF) sensing poll frame to the n responders. Some of the n responders may transmit a response frame to the TF sensing poll frame to the initiator. The initiator may then transmit an NDPA frame and an NDP frame to those responders that transmitted the response frames. Here, the interval between transmissions of the frames may be a SIFS.

As shown in FIG. 15 and/or FIG. 16, for the transmission of the NDP frame, the sensing STA may transmit an NDPA frame to inform about the transmission of the NDP frame. In this case, the NDPA frame may be configured as follows to indicate that the NDP frame transmitted for channel measurement is transmitted for a sensing operation. Hereinafter, technical features applicable to the construction of the NDPA frame proposed herein are described. The following technical features may be applied alone or in combination.

Technical Feature 1. The Subtype field of the Frame Control field may be used to indicate the Sensing NDPA frame.

Technical Feature 1. A. The NDPA frame may be transmitted via a control frame. In this case, a reserved bit of the Subtype field of the control frame may be used for the indication of the Sensing NDPA frame.

Technical Feature 1. A. i. The four bits B4 to B7 of the Frame Control field may be used for the indication of the Sensing NDPA frame. In this case, the reserved values/bits 0000, 0001, or 1111 may be used for the indication of the Sensing NDPA frame.

Technical Feature 1. A. ii. Considering the indication of the NDPA frame, the Frame Control field may be configured as follows.

Technical Feature 1. A. ii. 1.
For example, if the reserved value 0001 is used for the indication of the Sensing NDPA frame, the following table may be defined for the Frame Control field.

TABLE 4
Type value (B3 B2) | Type description | Subtype value (B7 B6 B5 B4) | Subtype description
01 | Control | 0000 | Reserved
01 | Control | 0001 | Sensing NDP Announcement
01 | Control | 0010 | Trigger
01 | Control | 0101 | VHT/HE NDP Announcement

Technical Feature 1. A. ii. 2. The above Table 4 is only an example, and the Sensing NDPA frame (in this specification, the Sensing NDPA frame may be interchangeable with the SNDPA frame) may be indicated via the other reserved values 0000 or 1111.

Technical Feature 1. A. iii. The Sensing NDPA frame can be clearly distinguished from NDPA frames for other purposes because the presence of a Sensing NDPA frame is indicated by a separate Frame Control field value as described above. Accordingly, the NDPA frame can be redesigned for sensing.

Technical Feature 1. A. iii. 1. For example, the Sensing NDPA frame may be configured as follows.

Technical Feature 1. A. iii. 1. A. The Sounding Dialog Token field contained in the Sensing NDPA frame may be redesigned.

Technical Feature 1. A. iii. 1. A. i. Unlike before, bits B0 and B1 do not need to be allocated to the NDP Announce variant field, so they can be used for other information.

Technical Feature 1. A. iii. 1. A. i. 1. The 2-bit field may be used to indicate identification (ID) information about the sensing measurement.

Technical Feature 1. A. iii. 1. A. i. 2. The information about the identification of the sensing measurement may indicate a sensing measurement ID/sensing measurement setup ID/sensing session ID.

Technical Feature 1. A. iii. 1. A. ii. Except for the above B0 and B1, the remaining bits B2 to B7 (6 bits) may be used to indicate multiple measurement instances/bursts in one sensing measurement procedure.

Technical Feature 1. A. iii. 1. A. ii. 1. For example, if a channel measurement is performed using three measurement instances/bursts in one sensing measurement procedure, the six bits may be used to indicate each measurement instance/burst. That is, for the first measurement instance/burst, the B2 to B7 value may be set to 0; for the second measurement instance/burst, the B2 to B7 value may be set to 1; and for the third measurement instance/burst, the B2 to B7 value may be set to 2. The device/STA that receives the measurement instance/burst can use the above information to determine which measurement frame it has received.

Technical Feature 1. A. iii. 1. A. iii. As described above, the measurement ID and the measurement instance/burst ID are indicated using a 2-bit + 6-bit combination. The bits/fields for the above information may be configured differently than described above.

Technical Feature 1. A. iii. 1. A. iii. 1. In one example, the bits/fields for the measurement ID and the measurement instance/burst ID may be configured differently from the above, such as a combination of 3 bits + 5 bits or 4 bits + 4 bits.
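A minimal sketch of the redesigned Sounding Dialog Token of Technical Feature 1. A. iii. 1. A above, assuming the 2-bit + 6-bit split and taking B0 as the least significant bit of the octet (the helper names are ours, for illustration only):

```python
def pack_sounding_dialog_token(measurement_id: int, instance_id: int) -> int:
    """B0-B1 carry the sensing measurement (setup/session) ID and
    B2-B7 carry the measurement instance/burst ID."""
    if not (0 <= measurement_id <= 3 and 0 <= instance_id <= 63):
        raise ValueError("measurement ID is 2 bits, instance ID is 6 bits")
    return measurement_id | (instance_id << 2)

def unpack_sounding_dialog_token(octet: int) -> tuple:
    return octet & 0b11, (octet >> 2) & 0b111111

# Third measurement instance/burst (ID 2) of measurement setup 1:
token = pack_sounding_dialog_token(measurement_id=1, instance_id=2)
print(bin(token), unpack_sounding_dialog_token(token))  # 0b1001 (1, 2)
```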
Technical Feature 1. A. iii. 1. B. Upon receiving the Sounding Dialog Token field transmitted via the Sensing NDPA frame, the sensing STA/AP may include the information contained in the Sounding Dialog Token field together with the measurement information in the reporting frame when transmitting a reporting frame (in this specification, the reporting frame may be interchangeable with the feedback frame) to feed back the measurement information (in this specification, the measurement information may be interchangeable with the feedback information).

Technical Feature 1. A. iii. 1. B. i. The information contained in the Sounding Dialog Token field may be included in the Measurement Report Control field and the Measurement Report field for sensing measurement feedback.

Technical Feature 1. A. iii. 1. B. ii. Using the information contained in the above Sounding Dialog Token field, the sensing STA/AP receiving the feedback information can determine to which measurement and to which measurement instance/burst the feedback information relates.

Technical Feature 1. A. iii. 1. B. iii. Unlike the above, when transmitting feedback information, only part of the information contained in the previously received Sounding Dialog Token field may be transmitted with the feedback information.

Technical Feature 1. A. iii. 1. B. iii. 1. For example, if the Sounding Dialog Token field consists of 2 bits for the measurement ID and 6 bits for the measurement instance/burst ID, then, when the sensing STA/AP transmits a reporting frame/feedback frame containing feedback information, only the 6 bits of the Sounding Dialog Token field for the measurement instance/burst ID may be included in the reporting frame/feedback frame.

Technical Feature 1. A. iii. 1. B. iii. 1. A. For example, only the 6 bits for the measurement instance/burst ID may be included in the Measurement Report Control field and the Measurement Report field.

Technical Feature 1. A. iii. 1. C. The above information, e.g., the Sounding Dialog Token field, may also be used when the sensing STA/AP that transmitted the sensing measurement frame requests post-measurement feedback information from the responder STA/AP.

Technical Feature 1. A. iii. 1. C. i. Here, the information may be included in the feedback request frame and/or the Trigger frame.

Technical Feature 1. A. iii. 1. C. i. 1. For example, the information may be included in the Measurement Report Control field of the feedback request frame.

Technical Feature 1. A. iii. 1. C. ii. The sensing STA/AP may use the above information to request feedback information for desired/intended measurements, or feedback information that has not been received.

Technical Feature 1. B. A reserved Subtype field, i.e., a reserved bit of the Subtype field, may be used for the indication of an enhanced NDPA frame. In this case, an NDPA version field or an NDPA type field may be used in conjunction with the above Subtype field to indicate the Sensing NDPA frame.

Technical Feature 1. B. i. The enhanced NDPA frame may be an NDPA frame that is different from a conventional NDPA frame, i.e., an enhanced NDPA frame may be an NDPA frame supported by a specification beyond the previously defined specifications such as 11ac, 11ax, 11az, 11be, etc.

Technical Feature 1. B. ii. The NDPA frame may be transmitted via a control frame. In this case, a reserved value in the Subtype field of the control frame may be used to indicate the enhanced NDPA frame.

Technical Feature 1. B. iii. Bits B4 through B7 (i.e., the fifth through eighth bits) of the Frame Control field may be used for the indication of the enhanced NDPA frame. In this case, the reserved value 0000, 0001, or 1111 may be used for the indication of the enhanced NDPA frame.

Technical Feature 1. B. iv. When an STA receives an enhanced NDPA frame, the STA may determine that the frame is not a conventional NDPA frame through the Subtype field.

Technical Feature 1. B. iv. 1. For example, if the reserved value 0001 is used for the indication of the enhanced NDPA frame, the following table may be defined for the Frame Control field.
TABLE 5
Type value (B3 B2) | Type description | Subtype value (B7 B6 B5 B4) | Subtype description
01 | Control | 0000 | Reserved
01 | Control | 0001 | Enhanced NDP Announcement
01 | Control | 0010 | Trigger
01 | Control | 0101 | VHT/HE NDP Announcement

Technical Feature 1. B. iv. 2. The above table is an example, and other reserved values for the enhanced NDPA frame, such as 0000 or 1111, may be used for the indication of the enhanced NDPA frame.

Technical Feature 1. B. v. Since the indication of the enhanced NDPA frame is carried out through a separate Frame Control field value as described above, the enhanced NDPA frame can be distinguished from NDPA frames for other purposes. In addition, the enhanced NDPA frame can be designed as follows to distinguish it according to next generation wireless LAN system standards.

Technical Feature 1. B. v. 1. For example, the enhanced NDPA frame may be configured as follows.

Technical Feature 1. B. v. 1. A. The enhanced NDPA frame may include an enhanced NDPA type subfield to indicate the type of the enhanced NDPA frame. FIG. 17 illustrates an example of an enhanced NDPA frame constructed based on Technical Feature 1. B. v. 1. A.

Technical Feature 1. B. v. 1. B. In one example, the length of the enhanced NDPA type subfield may be 3 bits. In this case, the remaining bits may be reserved.

Technical Feature 1. B. v. 1. B. i. In this case, the 3 bits, i.e., the value indicated by the enhanced NDPA type subfield, may be defined/set as follows.

Technical Feature 1. B. v. 1. B. i. 1. If the value is 0, the enhanced NDPA frame may be an NDPA frame for sensing, or an NDPA frame for the 11bf specification.

Technical Feature 1. B. v. 1. B. i. 2. The remaining values (i.e., 1 through 7) may be reserved for specifications of next generation wireless LAN systems.

Technical Feature 2. The Sounding Dialog Token field may be used to indicate the Sensing NDPA frame.

Technical Feature 2. A. The Sounding Dialog Token field included in the NDPA frame may consist of one octet. FIG. 18 shows an example of the Sounding Dialog Token field format. For example, the following table may be defined for the indication of NDPA frames according to the PPDU type.

TABLE 6
Ranging | HE | Description
0 | 0 | VHT NDPA
0 | 1 | HE NDPA
1 | 0 | Ranging NDPA
1 | 1 | EHT NDPA

Technical Feature 2. B. Bits B0 and B1 of the Sounding Dialog Token field for a Sensing NDPA frame may be set the same as for a Ranging NDPA frame. In this case, to distinguish it from the Ranging NDPA frame, the indication of the Sensing NDPA frame may be performed using 1 bit of the Sounding Dialog Token Number field (e.g., B2 of the Sounding Dialog Token field).

Technical Feature 2. B. i. The 1 bit allocated for the indication of the Sensing NDPA frame (e.g., B2 above) may be used exclusively for the indication of the Sensing NDPA frame. Alternatively, the one bit (e.g., B2 above) allocated for the indication of the Sensing NDPA frame may be used for the indication of the Sensing NDPA frame in conjunction with B0 and B1 of the Sounding Dialog Token field.

Technical Feature 2. B. ii. The 1 bit allocated for the indication of the Sensing NDPA frame may be one of the 6 bits (B2 to B7) of the Sounding Dialog Token field. For example, the 1 bit may be set to B2, which is the most significant bit (MSB) of B2 to B7.

Technical Feature 2. B. iii. The Sounding Dialog Token field for the Sensing NDPA frame may be set as follows.

Technical Feature 2. B. iii. 1. Two bits [B0 B1] of the Sounding Dialog Token field may be set to [10], and B2 of the Sounding Dialog Token field may be set to 1.

Technical Feature 2. B. iii. 1. A.
The values of B0 and B1 above are exemplary, and [B0 B1] may be set to [01] or [11].

Technical Feature 2. B. iii. 1. B. The indication of the Sensing NDPA frame may be performed with fixed values for B0 and B1, and with B2 set to 1.

Technical Feature 2. B. iii. 2. The B2 bit may be set as either a Sensing field or a Sensing NDPA field.

Technical Feature 2. B. iii. 3. The Sounding Dialog Token field of the Sensing NDPA frame, considering B0, B1, and B2 above, may be configured as shown in FIG. 19. FIG. 19 illustrates an example format of the Sounding Dialog Token field of a Sensing NDPA frame. The Sounding Dialog Token Number field (B3 through B7) in FIG. 19 may be used as identification information for the multiple sensing measurements transmitted in a sensing measurement. Namely, sensing measurement instances/bursts may be distinguished by this field. When feeding back information measured via a sensing measurement instance/burst, the value indicated by the Sounding Dialog Token Number field received via the Sensing NDPA frame may be included in the feedback frame or the reporting frame. In this case, the value indicated by the Sounding Dialog Token Number field may be used to indicate which feedback frame or reporting frame is for which sensing measurement instance/burst.

Technical Feature 2. B. iii. 4. A. In one example, the Sounding Dialog Token field of a Sensing NDPA frame may be configured as follows.

Technical Feature 2. B. iii. 4. A. i. [B0 B1] may be set to [01].

Technical Feature 2. B. iii. 4. A. ii. B2 may be set to 1.

Technical Feature 2. B. iii. 4. A. iii. B3 through B7, i.e., the Sounding Dialog Token Number field, may indicate a value from 0 to 31.

Technical Feature 2. B. iii. 4. A. iii. 1. When the sensing STA/AP that received the Sounding Dialog Token Number field via an NDPA frame transmits a feedback frame for the transmitted NDP frame, the value indicated by the Sounding Dialog Token Number field may be included in the feedback frame, i.e., the Sounding Dialog Token Number field may be used to distinguish which sensing measurement instance/burst the feedback information is for.

Technical Feature 2. B. iii. 4. A. iv. The value indicated by the Sounding Dialog Token Number field may be replaced by the sensing measurement instance/burst ID.

Technical Feature 2. B. iii. 4. B. With the above information, the sensing STA/AP that receives the feedback information can determine which measurements were used to acquire/generate the received feedback information.

Technical Feature 2. C. Unlike the method where the values of B0 and B1 are fixed and B2 is set to 1 to indicate the Sensing NDPA frame, the indication of the Sensing NDPA frame may be performed through the value of B2 only.

Technical Feature 2. C. i. For example, the indication of the Sensing NDPA frame may be based on B2 being set to 1. In this case, B0 and B1 may be used to indicate the frame format of the NDP frame used for the Sensing NDPA measurement.

Technical Feature 2. D. B7 of the Sounding Dialog Token field may be used as a flag bit for the indication of the Sensing NDPA frame.
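Before turning to the B7 flag-bit variant of Technical Feature 2. D, the FIG. 19 layout of Technical Feature 2. B. iii. 4. A can be sketched as follows (an illustration under assumptions: B0 is taken as the least significant bit of the octet, and the helper names are ours):

```python
def build_sensing_sounding_dialog_token(token_number: int) -> int:
    """FIG. 19 layout: [B0 B1] = [0 1] for the NDPA variant, B2 = 1 flags the
    Sensing NDPA frame, and B3-B7 carry the Sounding Dialog Token Number."""
    if not 0 <= token_number <= 31:
        raise ValueError("the token number is a 5-bit value (0..31)")
    b0, b1, b2 = 0, 1, 1
    return b0 | (b1 << 1) | (b2 << 2) | (token_number << 3)

def is_sensing_ndpa(octet: int) -> bool:
    return (octet >> 2) & 1 == 1          # B2 used as the sensing flag

octet = build_sensing_sounding_dialog_token(token_number=5)
print(bin(octet), is_sensing_ndpa(octet), (octet >> 3) & 0x1F)  # token echoes as 5
```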
Technical Feature 2. D. i. If the indication of the Sensing NDPA frame is carried out via B7 in the Sounding Dialog Token field of the Sensing NDPA frame, the Sounding Dialog Token field may be set as follows.

Technical Feature 2. D. i. 1. Bits B0 and B1 may be configured as follows.

Technical Feature 2. D. i. 1. A. Bits B0 and B1 may be determined by the capabilities of the STA participating in the sensing measurement or by the 802.11 specification/protocol used. For example, if the STA participating in the sensing is an 11ax device, the above [B0 B1], i.e., the NDPA variant field, may be set to [01], a value indicating the HE NDPA frame form (variant).

Technical Feature 2. D. i. 1. B. Bits B0 and B1 may be set to values indicating a specific NDPA frame form. For example, for the Sensing NDPA frame, [B0 B1] may be fixed to [10], indicating the Ranging NDPA frame type.

Technical Feature 2. D. i. 2. The B7 bit may be set as a Sensing NDPA field. For a Sensing NDPA frame, the bit (B7) may be set to 1.

Technical Feature 2. D. i. 3. The Sounding Dialog Token field of the Sensing NDPA frame, considering B0, B1, and B7, may be configured as shown in FIG. 20. FIG. 20 illustrates another example of the format of the Sounding Dialog Token field of the Sensing NDPA frame.

Technical Feature 2. D. i. 3. A. The Sounding Dialog Token Number field (B2 through B6) in FIG. 20 may be used as identification information/an identifier for the multiple sensing measurements transmitted in a sensing measurement, i.e., each of the sensing measurement instances/bursts may be distinguished by this field. If the STA reports back (feeds back) the information measured by the sensing measurement instances/bursts, the STA may include in the feedback frame or reporting frame the value of the Sounding Dialog Token Number field (or Sounding Dialog Token Number) received in the Sensing NDPA frame at the time of feedback. The value may be used to indicate which feedback frame or reporting frame is for which sensing measurement instance/burst.

Technical Feature 3. A Special User Field may be defined/used for the indication of the Sensing NDPA frame. In this specification, the Special User Field may be used interchangeably with a Special STA Information field and/or a Special User Information field.

Technical Feature 3. A. The Sensing NDPA frame may be configured to include the Special User Field.

Technical Feature 3. B. The Sensing NDPA frame may contain the same Sounding Dialog Token field as the Ranging NDPA frame. The field may be one byte in length.

Technical Feature 3. C. The NDP Announce variant field in a Sensing NDPA frame may be set the same as in a Ranging NDPA frame.

Technical Feature 3. C. i. For example, the NDP Announce variant field, i.e., [B0 B1] in the Sounding Dialog Token field, may be set to [10].

Technical Feature 3. C. ii. As an alternative to Technical Feature 3. C. i. above, [B0 B1] in the Sounding Dialog Token field may be set to a value indicating a different variant (e.g., [01], indicating the HE variant).

Technical Feature 3. D. The value indicated by the Sounding Dialog Token Number field in the Sounding Dialog Token field contained in the Sensing NDPA frame may be used as the measurement instance ID.

Technical Feature 3. D. A. A Sounding Dialog Token Number field with a length of 6 bits may be used to represent the measurement instance ID. The name of this field may remain the same or may be changed to Measurement Instance ID.

Technical Feature 3. E. To distinguish between Ranging and Sensing NDPA frames, a Special User Field included in the Sensing NDPA frame may be placed before the user field (in this specification, the user field may be interchangeably replaced by the STA Information field).

Technical Feature 3. E. A. The Special User Field may include a specific association identifier (AID) to distinguish it from other user fields.

Technical Feature 3. E. B. The Special User Field may be configured to contain common information about the sensing measurement.

Technical Feature 3. F.
The indication of the Sensing NDPA frame may be based on the AID value in the Special User Field. The AID value may be set as follows.

Technical Feature 3. F. i. For example, the specific AID of the Special User Field for the sensing indication may be set to one of the following values.

Technical Feature 3. F. i. 1. Among the AID values, one of the reserved values, e.g., 2008 to 2044 or 2047 to 4094, may be used as the specific AID. In one example, the value of the specific AID may be 2007, 2008, 2044, or 2047.

Technical Feature 3. G. The Special User Field may include one or more of the following pieces of information.

Technical Feature 3. G. i. Information about the sensing group ID/measurement setup ID.

Technical Feature 3. G. i. 1. The group ID of the sensing STAs performing the sensing operation may be included in the above Special User Field.

Technical Feature 3. G. i. 2. Identification information for determining the attributes of the sensing measurement instance may be included in the Special User Field.

Technical Feature 3. G. ii. Information about the number of sensing measurement instances/bursts.

Technical Feature 3. G. ii. 1. The information may be used to indicate the number of sensing measurement instances/bursts in which one sensing measurement is performed.

Technical Feature 3. G. iii. Information about the sensing feedback type indication.

Technical Feature 3. G. iii. 1. The information may refer to information about the type of feedback received from the sensing measurement.

Technical Feature 3. G. iii. 1. A. The information may comprise one bit or two bits. The information may be used to indicate the information described below.

Technical Feature 3. G. iii. 1. A. i. Channel state information (CSI).

Technical Feature 3. G. iii. 1. A. ii. Compressed channel state information.

Technical Feature 3. G. iii. 1. A. iii. Channel quality information (CQI).

Technical Feature 3. G. iii. 1. A. iv. Threshold CSI.

Technical Feature 3. G. iv. Feedback information.

Technical Feature 3. G. iv. 1. The information may include information about measurement feedback. The information may comprise the information described below.

Technical Feature 3. G. iv. 1. A. Information about Ng.

Technical Feature 3. G. iv. 1. A. i. The information may be information about the feedback granularity (tone spacing). As an example of the tone spacing, 1, 2, 4, or 8 tones may be used.

Technical Feature 3. G. iv. 1. A. ii. The information may be information about the granularity (tone spacing) of the tones to be measured. As an example of the tone spacing, 1, 2, 4, or 8 tones may be used.

Technical Feature 3. G. iv. 1. B. Compressed matrix information (e.g., Nc, Nr) or compression (quantization) information.

Technical Feature 3. G. iv. 1. B. i. If the measured information, e.g., CSI, is compressed, the information may be information about the size of the matrix with respect to the compression.

Technical Feature 3. G. iv. 1. B. ii. The compression or quantization applied to the measured information may be indicated by the information.

Technical Feature 3. G. iv. 1. B. ii. 1. A compression or quantization level for the measured CSI information may be indicated by the information.

Technical Feature 3. G. iv. 1. B. ii. 2. The indicated value may be defined as one of 0, 1, 2, 3, or 4. In this case, the compression or quantization level may be indicated/set as a power of two.

Technical Feature 3. G. iv. 1. B. ii. 2. A. For example, where the value is 2, the compression or quantization level used for the CSI measurement may be 2^2, i.e., 4.
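The power-of-two interpretation just described can be captured in a one-line helper (illustrative only; the function name is ours):

```python
def quantization_level(indicated_value: int) -> int:
    """Interpret the indicated value v (one of 0..4) as a compression/
    quantization level of 2**v, per the power-of-two rule above."""
    if not 0 <= indicated_value <= 4:
        raise ValueError("the indicated value is one of 0..4")
    return 2 ** indicated_value

print(quantization_level(2))  # -> 4, i.e. the 2^2 level from the example
```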
Technical Feature 3. G. iv. 1. C. Angular information (e.g., Phi, Psi, etc.).

Technical Feature 3. G. iv. 1. C. i. The information may represent bit size information for the angles.

Technical Feature 3. G. iv. 1. D. Information about the size of the codebook.

Technical Feature 3. G. iv. 1. D. i. The information may represent bit size information for the tone information being fed back.

Technical Feature 3. G. iv. 1. D. ii. The information may represent information about the amplitude, angle, or phase of each tone, or of the tones corresponding to the Ng.

Technical Feature 3. G. v. Information about Nss.

Technical Feature 3. G. v. 1. The information may be information about the number of spatial streams.

Technical Feature 3. G. v. 2. The information may indicate the value of Nss for each of NDPA sounding and TF sounding.

Technical Feature 3. G. v. 2. A. The Nss information for the NDPA and the Nss information for the TF may be indicated separately using a plurality of fields, or may be indicated using a single field.

Technical Feature 3. G. v. 2. A. i. If individually indicated using multiple fields, an Nss field for the TF and an Nss field for the NDPA may be configured.

Technical Feature 3. G. v. 2. A. ii. When indicated using a single field, the information may be indicated based on partial bits of the field. For example, the field may consist of 6 or 8 bits. In this case, the Nss information for the NDPA and the Nss information for the TF may each be indicated using 3 or 4 bits of the field, respectively.

Technical Feature 3. G. vi. Information about delayed feedback.

Technical Feature 3. G. vi. 1. For example, the information may be information to indicate whether feedback information can be transmitted or received immediately after receiving the measurement frame.

Technical Feature 3. G. vi. 2. In one example, the information may be information to indicate whether delayed feedback is supported.

Technical Feature 3. G. vii. Information about the sensing measurement order/measurement configuration.

Technical Feature 3. G. vii. 1. Both UL sounding (trigger frame-based sounding, TF) and DL sounding (NDPA-based sounding, NDPA) can be present within one sensing measurement. Therefore, the information may be intended to indicate which of the UL/DL soundings is performed first.

Technical Feature 3. G. vii. 2. The information may be used to indicate whether performing a measurement for a sensing measurement instance/burst is related to DL sounding (NDPA sounding) or UL sounding (TF sounding).

Technical Feature 3. G. vii. 3. The information may be indicated per (sensing) measurement instance/burst. In this case, the information may be used to indicate which sounding, UL sounding or DL sounding, was performed first for each (sensing) measurement instance/burst. Alternatively, the information may be used to indicate whether the sounding for which the measurement was performed was related to UL sounding or DL sounding.

Technical Feature 3. G. vii. 4. A sensing measurement instance/burst may exist for both TF sounding and NDPA sounding. Therefore, the information may be used to indicate which combination of TF sounding/NDPA sounding is configured within one sensing measurement instance. For example, the information may consist of 2 or 3 bits. In this case, the information may indicate the following.

Technical Feature 3. G. vii. 4. A. For example, if the information consists of two bits, the information may be configured as shown in Table 7.
In Table 7, the "TF + NDPA" indication may mean that, within a measurement instance, TF sounding is performed first and NDPA sounding is performed next.

TABLE 7
Value (2 bits) | Configuration of measurement instance
0 | NDPA only
1 | TF only
2 | TF + NDPA
3 | NDPA + TF

Technical Feature 3. G. vii. 4. B. For example, if the information consists of three bits, the following technical features may apply.

Technical Feature 3. G. vii. 4. B. i. In one example, multiple soundings may be performed within one measurement instance. In this case, the information may be configured as shown in Table 8. Table 8 is an example considering repeated transmissions. In another example, the configurations for values 5 and 7 in Table 8 may be varied, such as (TF + NDPA + NDPA + TF, NDPA + TF + TF + NDPA) or (TF + NDPA + TF, NDPA + TF + NDPA).

TABLE 8
Value (3 bits) | Configuration of measurement instance
0 | NDPA only
1 | NDPA + NDPA
2 | TF only
3 | TF + TF
4 | TF + NDPA
5 | TF + NDPA + TF + NDPA
6 | NDPA + TF
7 | NDPA + TF + NDPA + TF
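The measurement-instance configurations of Tables 7 and 8 can be captured in a small lookup (purely illustrative; the names are ours):

```python
# 2-bit variant (Table 7); the 3-bit variant of Table 8 extends it with
# repeated-transmission entries such as value 5 -> TF + NDPA + TF + NDPA.
INSTANCE_CONFIG_2BIT = {
    0: ["NDPA"],          # NDPA sounding only
    1: ["TF"],            # trigger-frame sounding only
    2: ["TF", "NDPA"],    # TF sounding first, then NDPA sounding
    3: ["NDPA", "TF"],    # NDPA sounding first, then TF sounding
}

def sounding_order(value: int) -> str:
    return " + ".join(INSTANCE_CONFIG_2BIT[value])

print(sounding_order(2))  # -> "TF + NDPA"
```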
Technical Feature 3. G. viii. Information on the TF sounding indication.

Technical Feature 3. G. viii. A. The NDPA frame may be used for both trigger-based (TB) and non-trigger-based (NTB) sensing measurements. Therefore, the information may be used to indicate the presence or absence of TF sounding after NDPA sounding.

Technical Feature 3. G. viii. B. The information may consist of one bit. For example, the information may be set to 1 if TB sounding is present, and to 0 if TB sounding is not present.

Technical Feature 3. G. viii. C. For NTB sensing measurements, the information may be set to 0.

Technical Feature 3. G. ix. Information about Long Training Field (LTF) repetition.

Technical Feature 3. G. ix. 1. The information may be used, when transmitting an NDP frame, to indicate whether to repeat the LTF of the NDP frame and the repetition value/number of times.

Technical Feature 3. G. x. Information about inactive or available channels.

Technical Feature 3. G. x. 1. The information may include information about preamble puncturing or inactive channels in the bandwidth for the PPDU. Here, the information may be configured in 20 MHz units.

Technical Feature 3. G. x. 1. A. In one example, the information may be configured with 8 bits or 16 bits, considering the 20 MHz subchannel indication.

Technical Feature 3. G. x. 1. B. In another example, the information may be indicated based on different resolutions depending on the bandwidth, to reduce signaling overhead and to support different puncturing methods.

Technical Feature 3. G. x. 1. B. i. For example, when the bandwidth is 160 MHz or less, the information may be indicated in 20 MHz units. Further, if the bandwidth is 320 MHz, the information may be indicated in 40 MHz units.

Technical Feature 3. G. x. 1. B. ii. The information may comprise a 1-bit field to indicate the resolution bandwidth (e.g., a 1-bit field to indicate whether the subchannel units are 20 MHz or 40 MHz) and an 8-bit field to indicate the availability per subchannel.

Technical Feature 3. G. x. 1. B. ii. 1. For example, the information may be configured identically to the Partial BW Info field in 11be.

Technical Feature 3. G. x. 1. B. ii. 2. For example, the information may be expressed as the feedback bandwidth requested of the sensing STA, i.e., the bandwidth that should be reported after measurement, identically to the 11be Partial BW Info field.

Technical Feature 3. G. x. 2. In another example, the information may be represented as bandwidth information including the puncturing.

Technical Feature 3. H. The sensing parameters listed in the above technical features may be included in the user information field included in the Sensing NDPA frame.

Technical Feature 4. The Sensing NDPA frame may utilize either the Ranging NDPA frame format or the EHT NDPA frame format.

Technical Feature 4. A. The Sensing NDPA frame may utilize any of the previously defined NDPA variants. For the sensing indication, B31 of the user information field may be used as a bit for the sensing indication.

Technical Feature 4. A. i. For example, if the NDPA type is Ranging or EHT, and the value of B31 of the user information field in the NDPA frame is set to a specific value, the specific value may indicate that the NDPA frame is a Sensing NDPA frame.

Technical Feature 4. A. i. 1. For example, the specific value for B31 may be set to 0 or 1.

Technical Feature 4. A. ii. When the value of B31 is set to the specific value, the user information field may be configured differently than the user information field of a conventional Ranging NDPA frame or EHT NDPA frame. For example, when the value of B31 is set to the specific value, the user information field may be reconfigured with sensing parameters, or may additionally include information of the aforementioned technical features.

Each of the foregoing technical features may be used alone or in combination. For example, Technical Feature 3 and Technical Feature 4 may be used in combination for a Sensing NDPA frame. In this case, the following configurations may be further applied.

Configuration 1. [B0 B1] of the Sounding Dialog Token field in the Sensing NDPA frame may be set the same as in the Ranging NDPA frame.

Configuration 1. A. For example, [B0 B1] in this field may be set to [10].

Configuration 2. A Special User Information field or a Special STA Information field may be included before the user information field in the Sensing NDPA frame. The Special User Information field or Special STA Information field may contain a specific AID.

Configuration 2. A. The Special User Field may contain common information about the sensing measurement.

Configuration 2. A. i. The common information may comprise a combination of the information comprising the Special User Information field proposed in Technical Feature 3 above.

Configuration 2. A. i. 1. For example, the Special User Information field may include information about the measurement setup ID, available channel information or punctured/puncturing bandwidth information, the measurement instance sequence, the TF sounding indication, the feedback type, delayed feedback, etc.

Configuration 3. A Sensing NDPA frame may reuse the format of a Ranging NDPA frame. In this case, the reserved bit (B31) contained in the user information field of the Ranging NDPA frame format may be used for the indication of the Sensing NDPA frame.

Configuration 3. A. Through the use of the reserved bit, it is possible to distinguish between Sensing NDPA frames and Ranging NDPA frames. Thus, reuse of the existing frame format is possible.

In Technical Features 1, 2, 3, and 4 above, the Sounding Dialog Token Number field contained in the Sensing NDPA frame may be used to indicate the sensing measurement setup ID. Additionally, the sensing measurement instance ID may be included in a Special User Information field or user information field included in the Sensing NDPA frame.
Alternatively, in Technical Features 1, 2, 3, and 4 above, the Sounding Dialog Token Number field (B2 through B7) included in the Sensing NDPA frame may be used to indicate the sensing measurement instance ID. Additionally, the sensing measurement setup ID may be included in the Special User Information field included in the Sensing NDPA frame.

Technical Feature 5. The indication of the Sensing NDPA frame may be based on the Subtype field in the Frame Control field and the Control Frame Extension value (B11, B10, B9, and B8 in the Frame Control field). The Control Frame Extension value is defined to extend the Subtype field. Thus, the indication of the Sensing NDPA frame may be performed using these values. For example, the Subtype field of a Sensing NDPA frame may be set to 0110. In this case, the Control Frame Extension value may be set to one of the reserved values 1100 to 1111 for the indication of the Sensing NDPA frame. For example, the Control Frame Extension value for the indication of the Sensing NDPA frame may be defined/set to 1100. The above values are exemplary, and any of the reserved values may be used to indicate the Sensing NDPA frame.
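A compact check for this Control Frame Extension indication might look as follows (a sketch under assumptions: B0 is taken as the least significant bit of the Frame Control field, and the reserved extension value 1100 is the one chosen in the example above):

```python
def is_sensing_ndpa_via_cfe(frame_control: int) -> bool:
    """Type 01 (Control) with Subtype 0110 signals a Control Frame Extension;
    the reserved extension value 1100 in B8-B11 is then read as Sensing NDPA."""
    ptype = (frame_control >> 2) & 0b11
    subtype = (frame_control >> 4) & 0b1111
    extension = (frame_control >> 8) & 0b1111
    return ptype == 0b01 and subtype == 0b0110 and extension == 0b1100

fc = (0b1100 << 8) | (0b0110 << 4) | (0b01 << 2)   # build the example value
print(is_sensing_ndpa_via_cfe(fc))                 # -> True
```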
If the Sensing NDPA frame is indicated based on the Subtype field and the Control Frame Extension value as described above, the Sensing NDPA frame may be configured as follows.

Configuration 1. The Sensing NDPA frame may be configured to include a Measurement Info field. The above field name is exemplary, and other names may be used. FIG. 21 shows an example of a Sensing NDPA frame format.

Configuration 1. A. The measurement information field may be configured as follows.

Configuration 1. A. i. The measurement information field may include an NDPA variant subfield.

Configuration 1. A. i. 1. The subfield may be used to indicate the NDP frame format/PHY format used for the sensing measurement.

Configuration 1. A. i. 1. A. For example, the subfield may indicate the PHY PPDU format used for the transmission of the NDP frame. For example, the subfield may include information about the 11ac, 11ax, or 11be standard, or a next generation wireless LAN system standard.

Configuration 1. A. i. 1. A. i. In one example, the subfield may consist of 3 bits. The value indicated by the subfield may indicate the following specifications.

Configuration 1. A. i. 1. A. i. 1. If the subfield indicates 0, the PHY PPDU format may support the 11ac specification.

Configuration 1. A. i. 1. A. i. 2. If the subfield indicates 1, the PHY PPDU format may support the 11ax specification.

Configuration 1. A. i. 1. A. i. 3. If the above subfield indicates 2, the PHY PPDU format may support the 11be specification.

Configuration 1. A. i. 1. A. i. 4. Values 3 through 7 may be values used for next generation WLAN system specifications.

Configuration 1. A. ii. The measurement information field may contain information about the measurement setup ID.

Configuration 1. A. ii. 1. The measurement setup ID may mean the measurement setup ID performed/used for the sensing measurement.

Configuration 1. A. ii. 1. A. The Sounding Dialog Token field may be used to indicate a measurement instance. In this case, the entire field (8 bits) may be used to represent the measurement instance, or some of the bits (e.g., B2 to B7) may be used.

Configuration 2. The Sensing NDPA frame may include a sensing measurement info field. The field may be two bytes in length. The name of the field is exemplary, and other names may be used. FIG. 22 illustrates another example of a Sensing NDPA frame format.

Configuration 2. A. The sensing measurement information field may be configured as follows.

Configuration 2. A. i. The sensing measurement information field may include an NDPA version identifier (3/4-bit) field. The NDPA version identifier may be used to identify the PHY version or PHY PPDU format with which the sensing measurement is performed. For example, if the NDPA version identifier field consists of 3 bits, the value indicated by the field may indicate the following specifications.

Configuration 2. A. i. 1. If the field indicates 0, the PHY PPDU format may support the 11ac or VHT specification.

Configuration 2. A. i. 2. If the field indicates 1, the PHY PPDU format may support the 11ax or HE specification.

Configuration 2. A. i. 3. If the above field indicates 2, the PHY PPDU format may support the 11be or EHT specification.

Configuration 2. A. i. 4. Values 3 through 7 may be the values used for next generation WLAN system specifications (or EHT+).

Configuration 2. A. ii. The sensing measurement information field may include a Measurement Setup ID field.

Configuration 2. A. ii. 1. The measurement setup ID may mean the sensing measurement setup ID with which the sensing measurement is performed. The Measurement Setup ID field may consist of 4, 5, or 6 bits.

Configuration 2. A. iii. The sensing measurement information field may include a field for a Sounding Dialog Token Number or a measurement instance ID. The Sounding Dialog Token Number or measurement instance ID may represent information about the measurement instance for which the measurement is being performed at the time of transmission of the Sensing NDPA frame.

Alternatively, the Type field and the Subtype field of the Frame Control field may be set as follows to indicate the Sensing NDPA frame. For example, the Type field may be set to 11 to indicate an extension. In this case, the Subtype field may be set to one of the reserved values 0010 through 1111. For example, when transmitting a Sensing NDPA frame, the Type field may be set to 11 and the Subtype field may be set to 0010. This is by way of example only, and the value of the Subtype field may be set to any one of the reserved values.

In the following, it is proposed to utilize the Frame Control field not only for the Sensing NDPA frame, but also for indicating NDPA frames that support next generation wireless LAN system standards/protocols, i.e., enhanced NDPA frames. The following methods can be considered for the indication of enhanced NDPA frames.

Method 1. Using the Subtype field within the Frame Control field.

Method 1. A. For the indication of an enhanced NDPA frame, the value of the Subtype field in the Frame Control field may be set to 0110, indicating a Control Frame Extension. Then, for the indication of the enhanced NDPA frame, the Control Frame Extension value may be set to one of the reserved values 1100 to 1111. For example, for the indication of the enhanced NDPA frame, the Control Frame Extension value may be set to 1100. It should be noted that 1100 is an example, and the Control Frame Extension value may be set to any one of the reserved values.

Method 1. B. To indicate that the enhanced NDPA frame is to be used for sensing measurements, or to indicate the frame format of the enhanced NDPA frame used for sensing, the enhanced NDPA frame may be designed as follows.

Method 1. B. i. An enhanced NDPA type field or an extended subtype field to indicate the type or version of the enhanced NDPA frame may be configured after the MAC header of the enhanced NDPA frame.

Method 1. B. i. 1.
Alternatively, the Type field and the Subtype field of the Frame Control field may be set as follows to indicate the Sensing NDPA frame. For example, the Type field may be set to 11 to indicate an extension. In this case, the Subtype field may be set to one of the reserved values 0010 through 1111. For example, when transmitting a Sensing NDPA frame, the Type field may be set to 11 and the Subtype field may be set to 0010. By way of example only, the value of the Subtype field may be set to one of the reserved values. In the following, it is proposed to utilize the Frame Control field not only for the Sensing NDPA frame, but also for indicating NDPA frames that support next-generation wireless LAN system standards/protocols, i.e., enhanced NDPA frames. The following methods can be considered for the indication of enhanced NDPA frames. Method 1. Using the Subtype field within the Frame Control field. Method 1. A. For the indication of an enhanced NDPA frame, the value of the Subtype field in the Frame Control field may be set to 0110 indicating a Control Frame Extension. Then, for the indication of the enhanced NDPA frame, the Control Frame Extension value may be set to one of the reserved values 1100-1111. For example, for the indication of the enhanced NDPA frame, the Control Frame Extension value may be set to 1100. It should be noted that 1100 is an example, and the Control Frame Extension value may be set to any one of the reserved values. Method 1. B. To indicate that the enhanced NDPA frame is to be used for sensing measurements, or to indicate the frame format of the enhanced NDPA frame used for sensing, the enhanced NDPA frame may be designed as follows. Method 1. B. i. An enhanced NDPA type field or an extended subtype field to indicate the type or version of the enhanced NDPA frame may be configured after the MAC header of the enhanced NDPA frame. Method 1. B. i. 1. The enhanced NDPA type field or extended subtype field may consist of three or four bits. In this case, the remaining bits may be reserved or used to indicate other information. Method 1. B. i. 1. A. For example, if the enhanced NDPA type field or extended subtype field consists of three bits, the enhanced NDPA type field or extended subtype field may be configured as follows. Method 1. B. i. 1. A. i. For example, if the value indicated by the enhanced NDPA type field or extended subtype field is 0, the value may indicate that the enhanced NDPA frame is a Sensing NDPA frame, i.e., supports the 11bf specification. Method 1. B. i. 1. A. ii. For example, if the value indicated by the enhanced NDPA type field or the extended subtype field is between 1 and 7, the value may indicate that the enhanced NDPA frame supports a newly defined specification after 11bf or 11be. Method 1. B. i. 1. B. The remaining bits (4 or 5 bits) may be used to indicate a setup ID for sensing measurements. Method 1. B. i. 1. C. In another example, if the enhanced NDPA type field or extended subtype field indicates support for the Sensing NDPA frame or the 11bf specification, the remaining bits may be used to indicate the PHY version or format in which the Sensing NDPA frame is transmitted. For example, if the remaining bits consist of 4 bits, the following methods may be defined. Method 1. B. i. 1. C. i. If the value indicated by the remaining bits is 0, then the PHY version or format in which the Sensing NDPA frame is transmitted may be VHT. Method 1. B. i. 1. C. ii. If the value indicated by the remaining bits is 1, the PHY version or format in which the Sensing NDPA frame is transmitted may be HE. Method 1. B. i. 1. C. iii. If the value indicated by the remaining bits is 2, the PHY version or format in which the Sensing NDPA frame is transmitted may be EHT. Method 1. B. i. 1. C. iv. If the value indicated by the remaining bits is 3 to 15, the PHY version or format in which the Sensing NDPA frame is transmitted may be a next generation WLAN system standard after EHT. Method 1. B. i. 1. D. In another example, the indication may be performed via a 3-bit field. Method 1. B. i. 1. E. In one example, the enhanced NDPA frame may comprise the following. Method 1. B. i. 1. E. i. An enhanced NDPA type subfield may be included in the enhanced NDPA frame to indicate the enhanced NDPA type.FIG.23illustrates an example of an enhanced NDPA frame format. Method 1. B. i. 1. E. ii. The Sounding Dialog Token field inFIG.23may be configured as an NDPA type (variant) field and a Sounding Dialog Token Number field as before. In this case, the NDPA type field may indicate in what format the NDP frame used for the sensing measurement is configured. For example, if the NDP frame utilizes the HE format, the value of the NDPA Type field may be set to a value indicating the HE format. Method 1. B. i. 1. E. iii. In another example, the Sounding Dialog Token field may comprise a measurement setup ID and a measurement instance ID.
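A minimal sketch of the Method 1. B. i. 1 decoding described above: a 3-bit enhanced NDPA type field (0 indicating a Sensing NDPA frame, i.e., 11bf) followed by 4 remaining bits that, for a Sensing NDPA frame, indicate the PHY version of the NDP. The placement of both fields within a single octet is an assumption made for illustration.

PHY_VERSION = {0: "VHT", 1: "HE", 2: "EHT"}  # 3-15: next generation after EHT

def decode_enhanced_ndpa_type_octet(octet: int):
    ndpa_type = octet & 0b111           # 3-bit enhanced NDPA type field
    remaining = (octet >> 3) & 0b1111   # remaining bits
    if ndpa_type == 0:                  # Sensing NDPA frame (11bf)
        return "sensing", PHY_VERSION.get(remaining, "post-EHT next generation")
    return "enhanced NDPA (post-11bf/11be specification)", None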
Method 1. B. ii. The enhanced NDPA frame may include an enhanced NDPA common info field for transmitting common information for the enhanced NDPA frame after the MAC header. In this case, the enhanced NDPA common info field may comprise two bytes.FIG.24illustrates another example of an enhanced NDPA frame format. Method 1. B. ii. 1. The enhanced NDPA common info field of the enhanced NDPA frame may include the following information. Method 1. B. ii. 1. i. Information about the NDPA version identifier (3/4 bits). This information may indicate which NDPA frames are supported by the WLAN system after the 11be specification. For example, if the field for the information about the NDPA version identifier consists of 3 bits, the value of the field for the information about the NDPA version identifier indicating the Sensing NDPA frame may be 000. The remaining values that may be indicated by the field for information about the NDPA version identifier may be used for NDPA frames supported by next generation wireless LAN systems. Method 1. B. ii. 1. ii. Information about the measurement setup ID. This information may be included if the information for the NDPA version identifier described above indicates the 11bf specification or sensing. The information about the measurement setup ID may indicate the setup ID for the sensing measurement. Method 1. B. ii. 1. iii. Subfields/information about the NDPA form/format may be included. The subfield/information may be present if the enhanced NDPA frame is an NDPA frame for sensing. If the enhanced NDPA frame is not an NDPA frame for sensing, the subfield/information may be reserved or used to indicate other information. Method 1. B. ii. 1. iii. 1. The subfield/information may be used to indicate the format of the NDP frame used for the sensing measurement. Method 1. B. ii. 1. iii. 2. For example, the subfield/information may be used to indicate the PHY PPDU format used for the transmission of the NDP frame. Here, the subfield/information may specify 11ac, 11ax, 11be, and next generation WLAN systems. Method 1. B. ii. 1. iii. 2. A. For example, if the subfield consists of 3 bits, the following methods may be defined. Method 1. B. ii. 1. iii. 2. A. i. If the value indicated by the subfield is 0, the PHY version or format in which the NDP frame is transmitted may be 11ac compliant. Method 1. B. ii. 1. iii. 2. A. ii. If the value indicated by the above subfield is 1, then the PHY version or format in which the NDP frame is transmitted may be 11ax compliant. Method 1. B. ii. 1. iii. 2. A. iii. If the value indicated by the above subfield is 2, the PHY version or format in which the NDP frame is transmitted may be 11be compliant. Method 1. B. ii. 1. iii. 2. A. iv. If the value indicated by the above subfield is 3 to 15, the PHY version or format in which the NDP frame is transmitted may be a post-EHT next generation WLAN system standard. Method 1. B. ii. 1. iv. Sounding Dialog Token Number. When the enhanced NDPA frame is an NDPA frame for sensing, the sounding dialog token number may represent information about the measurement instance for which the measurement is performed. Method 2. Method of using the Type field within the Frame Control field. Method 2. A. To indicate an enhanced NDPA frame, the value of the Type field in the Frame Control field may be set to 11. Method 2. B. In this case, the Subtype field in the Frame Control field may be set to a value of one of the reserved values 0010 through 1111. For example, to indicate an enhanced NDPA frame, the Subtype field may be set to 0010. Method 2. C. When the Subtype field is utilized to indicate the enhanced NDPA frame, the enhanced NDPA frame may be constructed as in Method 1 above. FIG.25is a flow diagram of a method performed by a transmitting STA according to some implementations of the present disclosure. The transmitting STA may be a non-AP STA or an AP STA. Referring toFIG.25, the transmitting STA transmits an NDPA frame to the receiving STA (S2510).
Subsequently, the transmitting STA transmits an NDP frame based on the NDPA frame to the receiving STA (S2520). For example, the NDPA frame may be the NDPA frame ofFIG.15or the NDPA frame ofFIG.16. In other words, the NDPA frame may be an NDPA frame that is transmitted to all responders as shown inFIG.15. Alternatively, the NDPA frame may be an NDPA frame transmitted to responders that have transmitted a response frame to the TF sensing poll frame, as shown inFIG.16. In one example, the NDPA frame may be constructed based on Technical Features 3 and 4 above. Specifically, the NDPA frame may include a Special User Information field. Here, the Special User Information field may comprise an AID field. The AID field may indicate a predefined value for sensing. In one example, the predefined value for sensing may be 2007. In one example, the Special User Information field may include common information for sensing. The common information for sensing may refer to Technical Feature 3. G. and/or Configuration 2. A. above. Specifically, the Special User Information field may include a specific field indicating a preamble puncturing subchannel or an inactive subchannel for a PPDU comprising the NDPA frame and/or bandwidth for a PPDU comprising the NDP frame. Here, information about the subchannel indicated by the specific field may be used for reporting/feedback on sensing measurements performed by the receiving STA. For example, if the specific field indicates a subchannel with preamble puncturing and a subchannel without preamble puncturing, the receiving STA may perform the reporting/feedback only on the subchannel without preamble puncturing. In one example, the NDPA frame may include one or more user information fields. Here, the Special User Information field may be located prior to the one or more user information fields within the NDPA frame. B31, i.e., the thirty-second bit, included in the one or more user information fields may be used as a field to indicate whether the NDPA frame is an NDPA frame for sensing. Further, B31and B26included in the one or more user information fields may be used to indicate whether the NDPA frame is an NDPA frame for sensing. For example, if the B31indicates 1 and the B26indicates 0, the NDPA frame may be an NDPA frame for sensing. For example, based on Technical Feature 5 above, if the NDPA frame is an NDPA frame for sensing, the NDPA frame for sensing may include a Control Frame Extension field, i.e., if the NDPA frame is an NDPA frame for sensing, the Type field of the Frame Control field included in the NDPA frame may indicate 1, and the Subtype field of the Frame Control field may indicate 6. In this case, the control frame extension field of the NDPA frame for sensing may indicate a predefined value for sensing.
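A minimal sketch of the transmit-side checks just described: the Special User Information field carries the predefined sensing AID (2007 in the example above), and B31set to 1 with B26set to 0 in a user information field marks the NDPA frame as an NDPA frame for sensing. The assumption that the AID occupies the low-order 11 bits of the 32-bit field is illustrative only.

SENSING_AID = 2007  # predefined value for sensing in the example above

def make_special_user_info(aid: int = SENSING_AID) -> int:
    return aid & 0x7FF  # assumed: AID in the low-order 11 bits

def mark_user_info_for_sensing(user_info: int) -> int:
    user_info |= 1 << 31        # B31 = 1
    user_info &= ~(1 << 26)     # B26 = 0
    return user_info

def is_sensing_user_info(user_info: int) -> bool:
    # Receiver-side mirror of the B31/B26 convention above.
    return bool(user_info & (1 << 31)) and not (user_info & (1 << 26))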
FIG.26is a flow diagram of a method performed by a receiving STA according to some implementations of the present disclosure. The receiving STA may be a non-AP STA or an AP STA. Referring toFIG.26, the receiving STA receives an NDPA frame from the transmitting STA (S2610). Subsequently, the receiving STA receives an NDP frame based on the NDPA frame from the transmitting STA (S2620). The receiving STA performs a sensing measurement based on the NDP frame (S2630). For example, the receiving STA may be the responder ofFIG.15or the responder ofFIG.16. In other words, the receiving STA may be an STA that receives the NDPA frame as inFIG.15. Alternatively, the receiving STA may be a responder that transmitted a response frame to the TF sensing poll frame, as shown inFIG.16. Here, the receiving STA may determine whether the NDPA frame is an NDPA frame for sensing based on various methods/technical features/configurations proposed herein. For example, the NDPA frame may include a Special User Information field. The receiving STA may determine that the NDPA frame is an NDPA frame for sensing based on the Special User Information field. For example, the Special User Information field may include common information for sensing. Specifically, the Special User Information field may include a specific field indicating a preamble puncturing subchannel or an inactive subchannel for the PPDU containing the NDPA frame and/or bandwidth for the PPDU containing the NDP frame. For example, the NDPA frame may include one or more user information fields. Here, the Special User Information field may be located prior to the one or more user information fields within the NDPA frame. B31, i.e., the thirty-second bit, included in the one or more user information fields may be used as a field to indicate whether the NDPA frame is an NDPA frame for sensing. Further, B31and B26included in the one or more user information fields may be used to indicate whether the NDPA frame is an NDPA frame for sensing. For example, if the B31indicates 1 and the B26indicates 0, the receiving STA may determine that the NDPA frame is an NDPA frame for sensing. For example, Technical Feature 5 may be applied to distinguish the NDPA frame for sensing. Specifically, if the NDPA frame is an NDPA frame for sensing, the NDPA frame for sensing may include a Control Frame Extension field, i.e., if the NDPA frame is an NDPA frame for sensing, the Type field of the Frame Control field included in the NDPA frame may indicate 1, and the Subtype field of the Frame Control field may indicate 6. In this case, the control frame extension field of the NDPA frame for sensing may indicate a predefined value for sensing. The examples ofFIGS.25and26are illustrative of some of the various technical features and configurations proposed herein. Again, it will be appreciated that various technical features and/or configurations proposed herein may be applied to NDPA frames for sensing and NDPA frames for specifications for next generation wireless LAN systems. The foregoing technical features of the present specification are applicable to various applications or business models. For example, the foregoing technical features may be applied for wireless communication of a device supporting artificial intelligence (AI). Artificial intelligence refers to a field of study on artificial intelligence or methodologies for creating artificial intelligence, and machine learning refers to a field of study on methodologies for defining and solving various issues in the area of artificial intelligence. Machine learning is also defined as an algorithm for improving the performance of an operation through steady experience of the operation. An artificial neural network (ANN) is a model used in machine learning and may refer to an overall problem-solving model that includes artificial neurons (nodes) forming a network through synaptic connections. The artificial neural network may be defined by a pattern of connection between neurons of different layers, a learning process of updating a model parameter, and an activation function generating an output value.
The artificial neural network may include an input layer, an output layer, and optionally one or more hidden layers. Each layer includes one or more neurons, and the artificial neural network may include synapses that connect neurons. In the artificial neural network, each neuron may output the function value of an activation function applied to input signals received through a synapse, weights, and deviations. A model parameter refers to a parameter determined through learning and includes a weight of synapse connection and a deviation of a neuron. A hyperparameter refers to a parameter to be set before learning in a machine learning algorithm and includes a learning rate, the number of iterations, a mini-batch size, and an initialization function. Learning an artificial neural network may be intended to determine a model parameter for minimizing a loss function. The loss function may be used as an index for determining an optimal model parameter in a process of learning the artificial neural network. Machine learning may be classified into supervised learning, unsupervised learning, and reinforcement learning. Supervised learning refers to a method of training an artificial neural network with a label given for training data, wherein the label may indicate a correct answer (or result value) that the artificial neural network needs to infer when the training data is input to the artificial neural network. Unsupervised learning may refer to a method of training an artificial neural network without a label given for training data. Reinforcement learning may refer to a training method for training an agent defined in an environment to choose an action or a sequence of actions to maximize a cumulative reward in each state. Machine learning implemented with a deep neural network (DNN) including a plurality of hidden layers among artificial neural networks is referred to as deep learning, and deep learning is part of machine learning. Hereinafter, machine learning is construed as including deep learning. The foregoing technical features may be applied to wireless communication of a robot. Robots may refer to machinery that automatically processes or handles a given task by its own ability. In particular, a robot having a function of recognizing an environment and autonomously making a judgment to perform an operation may be referred to as an intelligent robot. Robots may be classified into industrial, medical, household, and military robots and the like according to uses or fields. A robot may include an actuator or a driver including a motor to perform various physical operations, such as moving a robot joint. In addition, a movable robot may include a wheel, a brake, a propeller, and the like in a driver to run on the ground or fly in the air. The foregoing technical features may be applied to a device supporting extended reality. Extended reality collectively refers to virtual reality (VR), augmented reality (AR), and mixed reality (MR). VR technology is a computer graphic technology of providing a real-world object and background only in a CG image, AR technology is a computer graphic technology of providing a virtual CG image on a real object image, and MR technology is a computer graphic technology of providing virtual objects mixed and combined with the real world. MR technology is similar to AR technology in that a real object and a virtual object are displayed together.
However, a virtual object is used as a supplement to a real object in AR technology, whereas a virtual object and a real object are used with equal status in MR technology. XR technology may be applied to a head-mounted display (HMD), a head-up display (HUD), a mobile phone, a tablet PC, a laptop computer, a desktop computer, a TV, digital signage, and the like. A device to which XR technology is applied may be referred to as an XR device.
84,680
11943647
DETAILED DESCRIPTION Cross-link interference (CLI) may arise from opposite transmission directions at different cells, leading to uplink-to-downlink interference and/or downlink-to-uplink interference. Such CLI may be even more pronounced in dense deployment scenarios, e.g., where many cells are deployed in a small geographic footprint. Conventional techniques may define or otherwise provide for a measurement of the CLI, but do not provide any mechanism to perform and/or report such CLI measurements. That is, conventional techniques may define a CLI received signal strength indicator (RSSI) measurement (CLI-RSSI) and/or a sounding reference signal (SRS) reference signal received power (RSRP) measurement (SRS-RSRP) for CLI, but may not provide any mechanism by which such measurements are configured, performed, and/or reported by base station(s) and/or user equipment (UE)(s). Accordingly, aspects of the described techniques provide various mechanisms that can be employed to configure, perform, and report CLI measurement information by UEs. For example, neighboring base stations may coordinate over an Xn/F1 interface to establish or otherwise configure a measurement resource configuration for a CLI signal strength measurement (e.g., CLI-RSSI and/or SRS-RSRP measurements). Each base station may transmit or otherwise provide a measurement configuration signal to their respective UE(s) that carries or otherwise conveys an indication of the measurement resource configuration for the CLI signal strength measurement. One or more of the UEs within the coverage area of a base station may perform the CLI signal strength measurement for one or more UEs associated with one or more intra-frequency neighboring cells (e.g., may measure a transmission from the neighboring UE) according to the measurement resource configuration. In some examples, the CLI signal strength measurement may be performed during an intra-frequency measurement gap, e.g., a layer 3 measurement gap. Each UE may then transmit or otherwise provide a report of the CLI signal strength measurement to each respective base station. The base stations may use the report of the CLI signal strength measurement to mitigate or avoid CLI within their respective coverage areas. Aspects of the disclosure are initially described in the context of a wireless communication system. Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to UE measurement for CLI. FIG.1illustrates an example of a wireless communication system100that supports UE measurement for CLI in accordance with aspects of the present disclosure. The wireless communication system100includes base stations105, UEs115, and a core network130. In some examples, the wireless communication system100may be a Long Term Evolution (LTE) network, an LTE-Advanced (LTE-A) network, an LTE-A Pro network, or a New Radio (NR) network. In some cases, wireless communication system100may support enhanced broadband communications, ultra-reliable (e.g., mission critical) communications, low latency communications, or communications with low-cost and low-complexity devices. Base stations105may wirelessly communicate with UEs115via one or more base station antennas.
Base stations105described herein may include or may be referred to by those skilled in the art as a base transceiver station, a radio base station, an access point, a radio transceiver, a NodeB, an eNodeB (eNB), a next-generation NodeB or giga-NodeB (either of which may be referred to as a gNB), a Home NodeB, a Home eNodeB, or some other suitable terminology. Wireless communication system100may include base stations105of different types (e.g., macro or small cell base stations). The UEs115described herein may be able to communicate with various types of base stations105and network equipment including macro eNBs, small cell eNBs, gNBs, relay base stations, and the like. Each base station105may be associated with a particular geographic coverage area110in which communications with various UEs115are supported. Each base station105may provide communication coverage for a respective geographic coverage area110via communication links125, and communication links125between a base station105and a UE115may utilize one or more carriers. Communication links125shown in wireless communication system100may include uplink transmissions from a UE115to a base station105, or downlink transmissions from a base station105to a UE115. Downlink transmissions may also be called forward link transmissions while uplink transmissions may also be called reverse link transmissions. The geographic coverage area110for a base station105may be divided into sectors making up a portion of the geographic coverage area110, and each sector may be associated with a cell. For example, each base station105may provide communication coverage for a macro cell, a small cell, a hot spot, or other types of cells, or various combinations thereof. In some examples, a base station105may be movable and therefore provide communication coverage for a moving geographic coverage area110. In some examples, different geographic coverage areas110associated with different technologies may overlap, and overlapping geographic coverage areas110associated with different technologies may be supported by the same base station105or by different base stations105. The wireless communication system100may include, for example, a heterogeneous LTE/LTE-A/LTE-A Pro or NR network in which different types of base stations105provide coverage for various geographic coverage areas110. The term “cell” refers to a logical communication entity used for communication with a base station105(e.g., over a carrier), and may be associated with an identifier for distinguishing neighboring cells (e.g., a physical cell identifier (PCID), a virtual cell identifier (VCID)) operating via the same or a different carrier. In some examples, a carrier may support multiple cells, and different cells may be configured according to different protocol types (e.g., machine-type communication (MTC), narrowband Internet-of-Things (NB-IoT), enhanced mobile broadband (eMBB), or others) that may provide access for different types of devices. In some cases, the term “cell” may refer to a portion of a geographic coverage area110(e.g., a sector) over which the logical entity operates.
A UE115may also be a personal electronic device such as a cellular phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a personal computer. In some examples, a UE115may also refer to a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, or an MTC device, or the like, which may be implemented in various articles such as appliances, vehicles, meters, or the like. Some UEs115, such as MTC or IoT devices, may be low cost or low complexity devices, and may provide for automated communication between machines (e.g., via Machine-to-Machine (M2M) communication). M2M communication or MTC may refer to data communication technologies that allow devices to communicate with one another or a base station105without human intervention. In some examples, M2M communication or MTC may include communications from devices that integrate sensors or meters to measure or capture information and relay that information to a central server or application program that can make use of the information or present the information to humans interacting with the program or application. Some UEs115may be designed to collect information or enable automated behavior of machines. Examples of applications for MTC devices include smart metering, inventory monitoring, water level monitoring, equipment monitoring, healthcare monitoring, wildlife monitoring, weather and geological event monitoring, fleet management and tracking, remote security sensing, physical access control, and transaction-based business charging. Some UEs115may be configured to employ operating modes that reduce power consumption, such as half-duplex communications (e.g., a mode that supports one-way communication via transmission or reception, but not transmission and reception simultaneously). In some examples half-duplex communications may be performed at a reduced peak rate. Other power conservation techniques for UEs115include entering a power saving “deep sleep” mode when not engaging in active communications, or operating over a limited bandwidth (e.g., according to narrowband communications). In some cases, UEs115may be designed to support critical functions (e.g., mission critical functions), and a wireless communication system100may be configured to provide ultra-reliable communications for these functions. In some cases, a UE115may also be able to communicate directly with other UEs115(e.g., using a peer-to-peer (P2P) or device-to-device (D2D) protocol). One or more of a group of UEs115utilizing D2D communications may be within the geographic coverage area110of a base station105. Other UEs115in such a group may be outside the geographic coverage area110of a base station105, or be otherwise unable to receive transmissions from a base station105. In some cases, groups of UEs115communicating via D2D communications may utilize a one-to-many (1:M) system in which each UE115transmits to every other UE115in the group. In some cases, a base station105facilitates the scheduling of resources for D2D communications. In other cases, D2D communications are carried out between UEs115without the involvement of a base station105. Base stations105may communicate with the core network130and with one another. For example, base stations105may interface with the core network130through backhaul links132(e.g., via an S1, N2, N3, or other interface). 
Base stations105may communicate with one another over backhaul links134(e.g., via an X2, Xn, or other interface) either directly (e.g., directly between base stations105) or indirectly (e.g., via core network130). The core network130may provide user authentication, access authorization, tracking, Internet Protocol (IP) connectivity, and other access, routing, or mobility functions. The core network130may be an evolved packet core (EPC), which may include at least one mobility management entity (MME), at least one serving gateway (S-GW), and at least one Packet Data Network (PDN) gateway (P-GW). The MME may manage non-access stratum (e.g., control plane) functions such as mobility, authentication, and bearer management for UEs115served by base stations105associated with the EPC. User IP packets may be transferred through the S-GW, which itself may be connected to the P-GW. The P-GW may provide IP address allocation as well as other functions. The P-GW may be connected to the network operator's IP services. The operator's IP services may include access to the Internet, Intranet(s), an IP Multimedia Subsystem (IMS), or a Packet-Switched (PS) Streaming Service. At least some of the network devices, such as a base station105, may include subcomponents such as an access network entity, which may be an example of an access node controller (ANC). Each access network entity may communicate with UEs115through a number of other access network transmission entities, which may be referred to as a radio head, a smart radio head, or a transmission/reception point (TRP). In some configurations, various functions of each access network entity or base station105may be distributed across various network devices (e.g., radio heads and access network controllers) or consolidated into a single network device (e.g., a base station105). Wireless communication system100may operate using one or more frequency bands, typically in the range of 300 megahertz (MHz) to 300 gigahertz (GHz). Generally, the region from 300 MHz to 3 GHz is known as the ultra-high frequency (UHF) region or decimeter band, since the wavelengths range from approximately one decimeter to one meter in length. UHF waves may be blocked or redirected by buildings and environmental features. However, the waves may penetrate structures sufficiently for a macro cell to provide service to UEs115located indoors. Transmission of UHF waves may be associated with smaller antennas and shorter range (e.g., less than 100 km) compared to transmission using the smaller frequencies and longer waves of the high frequency (HF) or very high frequency (VHF) portion of the spectrum below 300 MHz. Wireless communication system100may also operate in a super high frequency (SHF) region using frequency bands from 3 GHz to 30 GHz, also known as the centimeter band. The SHF region includes bands such as the 5 GHz industrial, scientific, and medical (ISM) bands, which may be used opportunistically by devices that may be capable of tolerating interference from other users. Wireless communication system100may also operate in an extremely high frequency (EHF) region of the spectrum (e.g., from 30 GHz to 300 GHz), also known as the millimeter band. In some examples, wireless communication system100may support millimeter wave (mmW) communications between UEs115and base stations105, and EHF antennas of the respective devices may be even smaller and more closely spaced than UHF antennas. In some cases, this may facilitate use of antenna arrays within a UE115.
However, the propagation of EHF transmissions may be subject to even greater atmospheric attenuation and shorter range than SHF or UHF transmissions. Techniques disclosed herein may be employed across transmissions that use one or more different frequency regions, and designated use of bands across these frequency regions may differ by country or regulating body. In some cases, wireless communication system100may utilize both licensed and unlicensed radio frequency spectrum bands. For example, wireless communication system100may employ License Assisted Access (LAA), LTE-Unlicensed (LTE-U) radio access technology, or NR technology in an unlicensed band such as the 5 GHz ISM band. When operating in unlicensed radio frequency spectrum bands, wireless devices such as base stations105and UEs115may employ listen-before-talk (LBT) procedures to ensure a frequency channel is clear before transmitting data. In some cases, operations in unlicensed bands may be based on a carrier aggregation configuration in conjunction with component carriers operating in a licensed band (e.g., LAA). Operations in unlicensed spectrum may include downlink transmissions, uplink transmissions, peer-to-peer transmissions, or a combination of these. Duplexing in unlicensed spectrum may be based on frequency division duplexing (FDD), time division duplexing (TDD), or a combination of both. In some examples, base station105or UE115may be equipped with multiple antennas, which may be used to employ techniques such as transmit diversity, receive diversity, multiple-input multiple-output (MIMO) communications, or beamforming. For example, wireless communication system100may use a transmission scheme between a transmitting device (e.g., a base station105) and a receiving device (e.g., a UE115), where the transmitting device is equipped with multiple antennas and the receiving device is equipped with one or more antennas. MIMO communications may employ multipath signal propagation to increase the spectral efficiency by transmitting or receiving multiple signals via different spatial layers, which may be referred to as spatial multiplexing. The multiple signals may, for example, be transmitted by the transmitting device via different antennas or different combinations of antennas. Likewise, the multiple signals may be received by the receiving device via different antennas or different combinations of antennas. Each of the multiple signals may be referred to as a separate spatial stream, and may carry bits associated with the same data stream (e.g., the same codeword) or different data streams. Different spatial layers may be associated with different antenna ports used for channel measurement and reporting. MIMO techniques include single-user MIMO (SU-MIMO) where multiple spatial layers are transmitted to the same receiving device, and multiple-user MIMO (MU-MIMO) where multiple spatial layers are transmitted to multiple devices. Beamforming, which may also be referred to as spatial filtering, directional transmission, or directional reception, is a signal processing technique that may be used at a transmitting device or a receiving device (e.g., a base station105or a UE115) to shape or steer an antenna beam (e.g., a transmit beam or receive beam) along a spatial path between the transmitting device and the receiving device. 
Beamforming may be achieved by combining the signals communicated via antenna elements of an antenna array such that signals propagating at particular orientations with respect to an antenna array experience constructive interference while others experience destructive interference. The adjustment of signals communicated via the antenna elements may include a transmitting device or a receiving device applying certain amplitude and phase offsets to signals carried via each of the antenna elements associated with the device. The adjustments associated with each of the antenna elements may be defined by a beamforming weight set associated with a particular orientation (e.g., with respect to the antenna array of the transmitting device or receiving device, or with respect to some other orientation). In one example, a base station105may use multiple antennas or antenna arrays to conduct beamforming operations for directional communications with a UE115. For instance, some signals (e.g., synchronization signals, reference signals, beam selection signals, or other control signals) may be transmitted by a base station105multiple times in different directions, which may include a signal being transmitted according to different beamforming weight sets associated with different directions of transmission. Transmissions in different beam directions may be used to identify (e.g., by the base station105or a receiving device, such as a UE115) a beam direction for subsequent transmission and/or reception by the base station105. Some signals, such as data signals associated with a particular receiving device, may be transmitted by a base station105in a single beam direction (e.g., a direction associated with the receiving device, such as a UE115). In some examples, the beam direction associated with transmissions along a single beam direction may be determined based at least in part on a signal that was transmitted in different beam directions. For example, a UE115may receive one or more of the signals transmitted by the base station105in different directions, and the UE115may report to the base station105an indication of the signal it received with a highest signal quality, or an otherwise acceptable signal quality. Although these techniques are described with reference to signals transmitted in one or more directions by a base station105, a UE115may employ similar techniques for transmitting signals multiple times in different directions (e.g., for identifying a beam direction for subsequent transmission or reception by the UE115), or transmitting a signal in a single direction (e.g., for transmitting data to a receiving device). A receiving device (e.g., a UE115, which may be an example of a mmW receiving device) may try multiple receive beams when receiving various signals from the base station105, such as synchronization signals, reference signals, beam selection signals, or other control signals.
For example, a receiving device may try multiple receive directions by receiving via different antenna subarrays, by processing received signals according to different antenna subarrays, by receiving according to different receive beamforming weight sets applied to signals received at a plurality of antenna elements of an antenna array, or by processing received signals according to different receive beamforming weight sets applied to signals received at a plurality of antenna elements of an antenna array, any of which may be referred to as “listening” according to different receive beams or receive directions. In some examples a receiving device may use a single receive beam to receive along a single beam direction (e.g., when receiving a data signal). The single receive beam may be aligned in a beam direction determined based at least in part on listening according to different receive beam directions (e.g., a beam direction determined to have a highest signal strength, highest signal-to-noise ratio, or otherwise acceptable signal quality based at least in part on listening according to multiple beam directions). In some cases, the antennas of a base station105or UE115may be located within one or more antenna arrays, which may support MIMO operations, or transmit or receive beamforming. For example, one or more base station antennas or antenna arrays may be co-located at an antenna assembly, such as an antenna tower. In some cases, antennas or antenna arrays associated with a base station105may be located in diverse geographic locations. A base station105may have an antenna array with a number of rows and columns of antenna ports that the base station105may use to support beamforming of communications with a UE115. Likewise, a UE115may have one or more antenna arrays that may support various MIMO or beamforming operations. In some cases, wireless communication system100may be a packet-based network that operates according to a layered protocol stack. In the user plane, communications at the bearer or Packet Data Convergence Protocol (PDCP) layer may be IP-based. A Radio Link Control (RLC) layer may perform packet segmentation and reassembly to communicate over logical channels. A Medium Access Control (MAC) layer may perform priority handling and multiplexing of logical channels into transport channels. The MAC layer may also use hybrid automatic repeat request (HARQ) to provide retransmission at the MAC layer to improve link efficiency. In the control plane, the Radio Resource Control (RRC) protocol layer may provide establishment, configuration, and maintenance of an RRC connection between a UE115and a base station105or core network130supporting radio bearers for user plane data. At the Physical layer, transport channels may be mapped to physical channels. In some cases, UEs115and base stations105may support retransmissions of data to increase the likelihood that data is received successfully. HARQ feedback is one technique for increasing the likelihood that data is received correctly over a communication link125. HARQ may include a combination of error detection (e.g., using a cyclic redundancy check (CRC)), forward error correction (FEC), and retransmission (e.g., automatic repeat request (ARQ)). HARQ may improve throughput at the MAC layer in poor radio conditions (e.g., signal-to-noise conditions). In some cases, a wireless device may support same-slot HARQ feedback, where the device may provide HARQ feedback in a specific slot for data received in a previous symbol in the slot.
In other cases, the device may provide HARQ feedback in a subsequent slot, or according to some other time interval. Time intervals in LTE or NR may be expressed in multiples of a basic time unit, which may, for example, refer to a sampling period of Ts=1/30,720,000 seconds. Time intervals of a communications resource may be organized according to radio frames each having a duration of 10 milliseconds (ms), where the frame period may be expressed as Tf=307,200 Ts. The radio frames may be identified by a system frame number (SFN) ranging from 0 to 1023. Each frame may include 10 subframes numbered from 0 to 9, and each subframe may have a duration of 1 ms. A subframe may be further divided into 2 slots each having a duration of 0.5 ms, and each slot may contain 6 or 7 modulation symbol periods (e.g., depending on the length of the cyclic prefix prepended to each symbol period). Excluding the cyclic prefix, each symbol period may contain 2048 sampling periods. In some cases, a subframe may be the smallest scheduling unit of the wireless communication system100, and may be referred to as a transmission time interval (TTI). In other cases, a smallest scheduling unit of the wireless communication system100may be shorter than a subframe or may be dynamically selected (e.g., in bursts of shortened TTIs (sTTIs) or in selected component carriers using sTTIs). In some wireless communication systems, a slot may further be divided into multiple mini-slots containing one or more symbols. In some instances, a symbol of a mini-slot or a mini-slot may be the smallest unit of scheduling. Each symbol may vary in duration depending on the subcarrier spacing (SCS) or frequency band of operation, for example. Further, some wireless communication systems may implement slot aggregation in which multiple slots or mini-slots are aggregated together and used for communication between a UE115and a base station105. The term “carrier” refers to a set of radio frequency spectrum resources having a defined physical layer structure for supporting communications over a communication link125. For example, a carrier of a communication link125may include a portion of a radio frequency spectrum band that is operated according to physical layer channels for a given radio access technology. Each physical layer channel may carry user data, control information, or other signaling. A carrier may be associated with a pre-defined frequency channel (e.g., an evolved universal mobile telecommunication system terrestrial radio access (E-UTRA) absolute radio frequency channel number (EARFCN)), and may be positioned according to a channel raster for discovery by UEs115. Carriers may be downlink or uplink (e.g., in an FDD mode), or be configured to carry downlink and uplink communications (e.g., in a TDD mode). In some examples, signal waveforms transmitted over a carrier may be made up of multiple sub-carriers (e.g., using multi-carrier modulation (MCM) techniques such as orthogonal frequency division multiplexing (OFDM) or discrete Fourier transform spread OFDM (DFT-S-OFDM)). The organizational structure of the carriers may be different for different radio access technologies (e.g., LTE, LTE-A, LTE-A Pro, NR). For example, communications over a carrier may be organized according to TTIs or slots, each of which may include user data as well as control information or signaling to support decoding the user data. A carrier may also include dedicated acquisition signaling (e.g., synchronization signals or system information, etc.) 
and control signaling that coordinates operation for the carrier. In some examples (e.g., in a carrier aggregation configuration), a carrier may also have acquisition signaling or control signaling that coordinates operations for other carriers. Physical channels may be multiplexed on a carrier according to various techniques. A physical control channel and a physical data channel may be multiplexed on a downlink carrier, for example, using time division multiplexing (TDM) techniques, frequency division multiplexing (FDM) techniques, or hybrid TDM-FDM techniques. In some examples, control information transmitted in a physical control channel may be distributed between different control regions in a cascaded manner (e.g., between a common control region or common search space and one or more UE-specific control regions or UE-specific search spaces). A carrier may be associated with a particular bandwidth of the radio frequency spectrum, and in some examples the carrier bandwidth may be referred to as a “system bandwidth” of the carrier or the wireless communication system100. For example, the carrier bandwidth may be one of a number of predetermined bandwidths for carriers of a particular radio access technology (e.g., 1.4, 3, 5, 10, 15, 20, 40, or 80 MHz). In some examples, each served UE115may be configured for operating over portions or all of the carrier bandwidth. In other examples, some UEs115may be configured for operation using a narrowband protocol type that is associated with a predefined portion or range (e.g., set of subcarriers or RBs) within a carrier (e.g., “in-band” deployment of a narrowband protocol type). In a system employing MCM techniques, a resource element may consist of one symbol period (e.g., a duration of one modulation symbol) and one subcarrier, where the symbol period and SCS are inversely related. The number of bits carried by each resource element may depend on the modulation scheme (e.g., the order of the modulation scheme). Thus, the more resource elements that a UE115receives and the higher the order of the modulation scheme, the higher the data rate may be for the UE115. In MIMO systems, a wireless communications resource may refer to a combination of a radio frequency spectrum resource, a time resource, and a spatial resource (e.g., spatial layers), and the use of multiple spatial layers may further increase the data rate for communications with a UE115. Devices of the wireless communication system100(e.g., base stations105or UEs115) may have a hardware configuration that supports communications over a particular carrier bandwidth, or may be configurable to support communications over one of a set of carrier bandwidths. In some examples, the wireless communication system100may include base stations105and/or UEs115that support simultaneous communications via carriers associated with more than one different carrier bandwidth. Wireless communication system100may support communication with a UE115on multiple cells or carriers, a feature which may be referred to as carrier aggregation or multi-carrier operation. A UE115may be configured with multiple downlink component carriers and one or more uplink component carriers according to a carrier aggregation configuration. Carrier aggregation may be used with both FDD and TDD component carriers. In some cases, wireless communication system100may utilize enhanced component carriers (eCCs). 
An eCC may be characterized by one or more features including wider carrier or frequency channel bandwidth, shorter symbol duration, shorter TTI duration, or modified control channel configuration. In some cases, an eCC may be associated with a carrier aggregation configuration or a dual connectivity configuration (e.g., when multiple serving cells have a suboptimal or non-ideal backhaul link). An eCC may also be configured for use in unlicensed spectrum or shared spectrum (e.g., where more than one operator is allowed to use the spectrum). An eCC characterized by wide carrier bandwidth may include one or more segments that may be utilized by UEs115that are not capable of monitoring the whole carrier bandwidth or are otherwise configured to use a limited carrier bandwidth (e.g., to conserve power). In some cases, an eCC may utilize a different symbol duration than other component carriers, which may include use of a reduced symbol duration as compared with symbol durations of the other component carriers. A shorter symbol duration may be associated with increased spacing between adjacent subcarriers. A device, such as a UE115or base station105, utilizing eCCs may transmit wideband signals (e.g., according to frequency channel or carrier bandwidths of 20, 40, 60, 80 MHz, etc.) at reduced symbol durations (e.g., 16.67 microseconds). A TTI in eCC may consist of one or multiple symbol periods. In some cases, the TTI duration (that is, the number of symbol periods in a TTI) may be variable. Wireless communication system100may be an NR system that may utilize any combination of licensed, shared, and unlicensed spectrum bands, among others. The flexibility of eCC symbol duration and SCS may allow for the use of eCC across multiple spectrums. In some examples, NR shared spectrum may increase spectrum utilization and spectral efficiency, specifically through dynamic vertical (e.g., across the frequency domain) and horizontal (e.g., across the time domain) sharing of resources. In some aspects, a UE115may receive from a base station105a measurement configuration signal comprising a measurement resource configuration associated with a CLI signal strength measurement. The UE115may perform the CLI signal strength measurement for one or more UEs115associated with one or more intra-frequency neighboring cells according to the measurement resource configuration, wherein the CLI signal strength measurement is performed during an intra-frequency measurement gap. The UE115may transmit a report of the CLI signal strength measurement to the base station105. In some aspects, a base station105may coordinate with a neighboring base station105to configure a measurement resource configuration associated with a CLI signal strength measurement for a UE115associated with the base station. The base station105may transmit to the UE115a measurement configuration signal comprising the measurement resource configuration. The base station105may receive a report of the CLI signal strength measurement from the UE115, the CLI signal strength measurement being performed during an intra-frequency measurement gap and based at least in part on the measurement resource configuration. FIG.2illustrates an example of a wireless communication system200that supports UE measurement for CLI in accordance with aspects of the present disclosure. In some examples, wireless communication system200may implement aspects of wireless communication system100.
Wireless communication system200may include base station205, base station210, UE215, and UE220, which may be examples of the corresponding devices described herein. Generally, base station205may be a serving base station of UE215and base station210may be a serving base station of UE220. In some aspects, base station205may be a neighboring base station with respect to base station210, and vice versa. In some aspects, UE215may be a neighboring UE with respect to UE220, and vice versa (e.g., they may each belong to different, but intra-frequency neighboring, cells). In some aspects, wireless communication system200may experience CLI, e.g., opposite direction transmissions from different cells. For example, a downlink transmission225from base station205to UE215and/or an uplink transmission230from UE220to base station210may introduce or otherwise contribute to CLI. That is, the link between base station205and UE215may experience interference caused by transmissions between base station210and UE220, and vice versa, which may be CLI. Aspects of the described techniques provide a mechanism where UE215and/or UE220may perform CLI signal strength measurements for each other, and report the results of the CLI signal strength measurements to their respective base stations. In some aspects, the CLI signal strength measurements may be based, at least in some aspects, on a measurement resource configuration coordinated between base stations205and210. That is, base stations205and210may coordinate over a link (e.g., a wired link and/or a wireless link) to establish or otherwise configure the measurement resource configuration to be used for CLI signal strength measurements. In some aspects, the CLI signal strength measurement may include CLI-RSSI and/or SRS-RSRP measurements. In some aspects, the measurement resource configuration may include various resources (e.g., time, frequency, spatial, and the like), parameters, and the like. In some aspects, an information element of measurement resource configuration for CLI-RSSI measurement may carry or otherwise convey an indication of parameters such as, but not limited to, a number of physical resource blocks (PRBs), a starting PRB for subband indication, a number of OFDM symbol(s), a starting or first OFDM symbol index in the slot, and the like. In some aspects, the PRBs may be contiguous. In some aspects, the configured OFDM symbols may be contiguous. In some aspects and depending on the capability of a particular UE, the UE may not be required to assume that the physical downlink shared channel (PDSCH) is FDMed (frequency-division multiplexed) with CLI-RSSI measurement resources. In some aspects, an information element of measurement resource configuration for CLI-RSSI measurement may carry or otherwise convey values for slot configurations being used for the CLI signal strength measurements. For example, the information element may carry or convey an indication of 10 slots using integer values (0 . . . 9), 20 slots using integer values (0 . . . 19), 40 slots using integer values (0 . . . 39), 80 slots using integer values (0 . . . 79), 160 slots using integer values (0 . . . 159), 320 slots using integer values (0 . . . 319), 640 slots using integer values (0 . . . 639), and so on. In some aspects, the network may allocate or otherwise configure slot durations for CLI-RSSI measurements which correspond with a value of periodicity among a periodicity set (e.g., 10 ms, 20 ms, 40 ms, 80 ms, 160 ms, 320 ms, 640 ms, etc.).
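A minimal sketch, under assumed field names, of the CLI-RSSI measurement resource parameters just described (contiguous PRBs, contiguous OFDM symbols, a slot configuration, and a reference SCS). This mirrors the prose above; it is not the normative RRC structure.

from dataclasses import dataclass

@dataclass
class CliRssiResourceConfig:
    num_prbs: int             # number of contiguous PRBs
    start_prb: int            # starting PRB for subband indication
    num_symbols: int          # number of contiguous OFDM symbols
    start_symbol: int         # first OFDM symbol index in the slot
    slot_config_period: int   # e.g., 10, 20, 40, ..., 640 slots
    measurement_slot: int     # slot index within the configured period
    ref_scs_khz: int          # reference SCS (15/30/60/120 kHz)

    def __post_init__(self):
        assert self.measurement_slot < self.slot_config_period
        assert self.ref_scs_khz in (15, 30, 60, 120)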
An example of a CLI-RSSI configured measurement periodicity is illustrated in Table 1 below.

TABLE 1
Slot Duration    SCS: 15 kHz    SCS: 30 kHz    SCS: 60 kHz    SCS: 120 kHz
10 slots         10 ms
20 slots         20 ms          10 ms
40 slots         40 ms          20 ms          10 ms
80 slots         80 ms          40 ms          20 ms          10 ms
160 slots        160 ms         80 ms          40 ms          20 ms
320 slots        320 ms         160 ms         80 ms          40 ms
640 slots        640 ms         320 ms         160 ms         80 ms

In some aspects, the information element of the measurement resource configuration for CLI-RSSI measurement may carry or convey a reference SCS for CLI-RSSI measurement. In some aspects, all SCS for frequency range 1 (FR1) and FR2 may be supported. In some aspects, SCS may include a reference unit of time and/or frequency resource configuration. In some aspects, the UE may perform CLI measurement (e.g., a CLI signal strength measurement) within an active bandwidth part (BWP). In some aspects, the SCS for CLI measurement resource configuration may be the same or different from the SCS of the active BWP. In some aspects, one or multiple resources for CLI-RSSI measurement may be configured. In some aspects, the number of measurement resources for CLI-RSSI measurement may be up to 64. Accordingly, in some aspects the periodicity of the CLI signal strength measurement may be determined based on the slot duration and/or SCS associated with the link between the UE and its respective serving base station.
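A small worked check of Table 1: with an SCS-dependent slot duration (1 ms at 15 kHz, halving each time the SCS doubles), the configured slot count maps to the periodicity values shown in the table, e.g., 160 slots at 30 kHz SCS corresponds to 80 ms.

def cli_rssi_periodicity_ms(num_slots: int, scs_khz: int) -> float:
    slot_duration_ms = 15 / scs_khz   # 1 ms at 15 kHz SCS
    return num_slots * slot_duration_ms

assert cli_rssi_periodicity_ms(10, 15) == 10.0
assert cli_rssi_periodicity_ms(160, 30) == 80.0
assert cli_rssi_periodicity_ms(640, 120) == 80.0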
Accordingly, aspects of the described techniques define a new measurement object for CLI measurements (e.g., for the CLI signal strength measurements). In some aspects, this may include defining a new measurement object (MeasObjectCLI) for CLI-RSSI and SRS-RSRP measurements (e.g., the CLI signal strength measurement). In some aspects, the CLI measurement object (e.g., MeasObjectCLI) may include, carry, or otherwise convey, the CLI-RSSI and SRS-RSRP measurement resource configuration (e.g., may carry or convey the measurement resource configuration transmitted to each respective UE). In some aspects, the CLI measurement object may include, carry, or otherwise convey, a corresponding filtering configuration (e.g., a layer 3 filtering configuration) for CLI-RSSI and SRS-RSRP measurements. One example of the CLI measurement object may include, but is not limited to:

MeasObjectCLI ::= SEQUENCE {
    cli-RSSI-Measurement    CLI-RSSI-Measurement    OPTIONAL,
    srs-RSRP-Measurement    SRS-RSRP-Measurement    OPTIONAL,
    ...
}

With respect to a CLI-RSSI measurement, the information element (CLI-RSSI-Measurement) may be used to configure CLI-RSSI measurements. In some aspects, the information element may be carried or conveyed in the measurement configuration signal carrying or indicating the measurement resource configuration. One example of the CLI-RSSI measurement information element may include, but is not limited to:

-- ASN1START
-- TAG-CLI-RSSI-MEASUREMENT-START
CLI-RSSI-Measurement ::= SEQUENCE {
    <<fields TBD based on RAN1 LS>>,
    filterCoefficientCLI-RSSI    FilterCoefficient,
    ...
}
-- TAG-CLI-RSSI-MEASUREMENT-STOP
-- ASN1STOP

With respect to SRS-RSRP measurement, the information element (SRS-RSRP-Measurement) may be used to configure SRS-RSRP measurements. In some aspects, the information element may be carried or conveyed in the measurement configuration signal carrying or indicating a measurement resource configuration. One example of the SRS-RSRP measurement information element may include, but is not limited to:

-- ASN1START
-- TAG-SRS-RSRP-MEASUREMENT-START
SRS-RSRP-Measurement ::= SEQUENCE {
    <<fields TBD based on RAN1 LS>>,
    filterCoefficientSRS-RSRP    FilterCoefficient,
    ...
}
-- TAG-SRS-RSRP-MEASUREMENT-STOP
-- ASN1STOP

In some aspects, all UEs served by a particular cell (or base station) may be configured to transmit the same SRS at the same time in order to improve the efficiency and accuracy of the CLI signal strength measurement. Accordingly, neighboring cells (or base stations) may know the SRS measurement window. In some aspects, it may be preferable that the measurement resource configuration is synchronized between intra-frequency neighbor cells (or base stations). For example, a central unit (CU) and/or base station may synchronize its member cells on the CLI measurement resources, e.g., may coordinate the measurement resource configuration. In some aspects, an operations and management (OAM) function may be responsible for synchronizing the measurement resource configurations of its managed base stations/CUs. In some aspects, if the neighbor cells (or base stations) are not synchronized for the SRS configuration, a base station may configure CLI-RSSI/SRS-RSRP measurement for serving UEs by putting the neighbor cell's (or base station's) CLI-RSSI/SRS-RSRP measurement resources as a list into the SRS-RSRP measurement resource configuration.
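As an illustrative aid only, the filterCoefficient fields above configure layer 3 filtering of successive measurement samples; the following is a minimal Python sketch, assuming the standard NR layer 3 filtering formula of TS 38.331 (F_n = (1 - a) * F_{n-1} + a * M_n with a = 1 / 2^(k / 4), where k is the filter coefficient); the function name is hypothetical:

def l3_filter(measurements, k: int):
    """Layer 3 filtering of CLI-RSSI or SRS-RSRP samples (in dBm),
    using F_n = (1 - a) * F_{n-1} + a * M_n with a = 1 / 2**(k / 4)."""
    a = 1.0 / 2 ** (k / 4.0)
    filtered = None
    for m in measurements:
        # The first sample initializes the filter state.
        filtered = m if filtered is None else (1 - a) * filtered + a * m
    return filtered

# Example: smooth a series of CLI-RSSI samples with filterCoefficient k=4 (a=0.5).
print(l3_filter([-90.0, -85.0, -70.0, -72.0], k=4))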
Accordingly, in some aspects the measurement resource configuration (e.g., the CLI-RSSI and/or SRS-RSRP measurement resources) may be coordinated between intra-frequency neighboring cells (or base stations). Accordingly, a base station may coordinate with the neighboring base station (e.g., base station205may coordinate with neighbor base station210, or vice versa) to configure the measurement resource configuration for CLI signal strength measurements (e.g., CLI measurements including CLI-RSSI and/or SRS-RSRP measurements) for the respective UEs. In some aspects, the measurement resource configuration may include any aspects of the information elements, resources, parameters, etc., discussed above with respect to CLI-RSSI and/or SRS-RSRP. In some aspects, the base station may transmit or otherwise provide a measurement configuration signal to its associated UE that carries or conveys an indication of the measurement resource configuration. For example, base station205may transmit the measurement configuration signal to UE215and/or base station210may transmit the measurement configuration signal to UE220. In some aspects, each UE may use the measurement resource configuration to perform the CLI signal strength measurement for neighboring UEs (e.g., neighboring UEs associated with intra-frequency neighboring cells). In some aspects, a CLI signal strength measurement may be performed during an intra-frequency measurement gap, such as a layer 3 measurement gap. That is, UE215may use the measurement resource configuration when performing the CLI signal strength measurement for UE220, or vice versa. In some aspects, one or more of the UEs215and/or220may be operating in a discontinuous reception (DRX) mode. Generally, the DRX mode may include idle or off periods in which one or more components, functions, and the like, of the UE are powered down or otherwise inactive to conserve power, and one or more on or active periods in which the UE is activated. In some aspects, each UE may determine whether to perform the CLI signal strength measurement for the neighboring UE during an off period of the DRX mode. That is, it may be up to the UE implementation as to whether to measure CLI-RSSI and/or SRS-RSRP during the off period of the DRX cycle (e.g., rather than the active time of the DRX cycle). For example, each UE may determine whether or not performance requirements (e.g., radio link performance requirements) of the UE can be met while performing the CLI signal strength measurement during the off period of the DRX cycle. Accordingly, UE215and/or UE220may determine whether to perform the CLI signal strength measurement during the DRX off period based, at least in some aspects, on a radio link performance threshold for a link between the UE and its serving base station. In some aspects, an intra-frequency measurement gap may be defined as the regular layer 3 measurement gap or realized using symbol level rate matching when the layer 3 measurement gap is not configured. With symbol level rate matching, the PDSCH may be blanked in the rate-matched symbols so that there is no FDM reception, with different timing, of the PDSCH and the CLI-RSSI/SRS-RSRP resources. Moreover, this may not preclude the network from transmitting the physical downlink control channel (PDCCH) and other downlink channels to the UE, and may therefore require the UE to perform FDM reception of these downlink channels with the CLI. Accordingly, aspects of the described techniques provide for a layer 3 measurement gap for CLI-RSSI and SRS-RSRP measurements.
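As an illustrative aid to the DRX behavior discussed above, a minimal Python sketch of the UE-implementation decision of whether to measure CLI during a DRX off period; the function, parameters, and decision criteria are hypothetical simplifications, since the description leaves the policy to UE implementation:

def measure_cli_during_drx_off(radio_link_margin_db: float,
                               performance_threshold_db: float,
                               power_budget_ok: bool) -> bool:
    """Hypothetical UE-implementation policy: measure CLI during the
    DRX off period only if radio link performance requirements can
    still be met and the power cost is acceptable."""
    return power_budget_ok and radio_link_margin_db >= performance_threshold_db

# Example: a UE with 3 dB of link margin against a 2 dB requirement
# elects to measure during the off period.
if measure_cli_during_drx_off(3.0, 2.0, power_budget_ok=True):
    print("perform CLI-RSSI/SRS-RSRP measurement in DRX off period")
else:
    print("defer CLI measurement to the DRX active time")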
Accordingly, the CLI signal strength measurement may be performed during an intra-frequency measurement gap using the layer 3 measurement gap. Accordingly, UE215and UE220may perform CLI signal strength measurements according to the measurement resource configuration and during an intra-frequency measurement gap. That is, in some examples UE215may perform the CLI signal strength measurement for UE220and/or UE220may perform the CLI signal strength measurement for UE215during the intra-frequency measurement gap. In some aspects, each UE may transmit or otherwise provide a report of the CLI signal strength measurement to each respective base station. That is, UE215may transmit a report of the CLI signal strength measurement to base station205and/or UE220may transmit a report of the CLI signal strength measurement to base station210. In some aspects, the report of the CLI signal strength measurement may include both periodic and/or event triggered measurement reporting. That is, both periodic and/or event triggered reporting may be supported for CLI-RSSI and SRS-RSRP measurements (e.g., CLI signal strength measurements). For event triggered reporting, aspects of the described techniques may define new events for the report of the CLI signal strength measurement. Examples of the new events for CLI signal strength measurements (e.g., CLI measurements) include, but are not limited to, CLI-RSSI and/or SRS-RSRP being above a threshold and CLI-RSSI and/or SRS-RSRP being below a threshold. In some aspects, the CLI measurements (e.g., the CLI signal strength measurements) may not need or include the UE measuring both CLI-RSSI and SRS-RSRP. That is, in some aspects a triggering condition for an event may be detected based on a comparison of the CLI signal strength measurement to a threshold, e.g., the CLI signal strength measurement falling below a low threshold and/or exceeding a high threshold. In some aspects, the report of the CLI signal strength measurement may be transmitted to the base station together with serving cell measurements (e.g., of the base station) taken by the UE. For example, UE215may include the CLI signal strength measurement report in a MeasResults object transmitted to base station205, and the MeasResults object may include both serving cell measurements of base station205and a measResultCLI object containing CLI signal strength measurements of UE220. In some aspects, the respective base station may transmit or otherwise provide an indication to each UE of which measurement is to be performed. In some aspects, a new measurement quantity may be defined (e.g., CLI-MeasQuantity) to indicate which measurement (e.g., CLI-RSSI and/or SRS-RSRP) the UE is to measure and report. That is, each UE may receive an indication from its respective base station of the type of CLI signal strength measurement to be performed, e.g., whether to perform a CLI-RSSI measurement and/or a SRS-RSRP measurement. In some aspects, the measurement resource configuration may be a combined measurement configuration for the CLI-RSSI and SRS-RSRP measurements. In other aspects, the measurement resource configuration may be independent measurement configurations for the CLI-RSSI and SRS-RSRP measurements. In some aspects, this may include a report configuration (ReportConfigNR) information element that carries or otherwise conveys an indication of which measurement is to be performed and reported.
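Before the full information element definition below, and as an illustrative aid only, a minimal Python sketch of how an event of the kind just described (CLI above a threshold) might be evaluated; the function name, the parameter names, and the sample-count model of timeToTrigger are hypothetical simplifications rather than the described techniques:

def event_above_threshold(samples_dbm, threshold_dbm: float,
                          hysteresis_db: float, time_to_trigger: int) -> bool:
    """Hypothetical evaluation of an 'above threshold' CLI event: the
    entering condition M > threshold + hysteresis must hold for
    time_to_trigger consecutive samples before a report is triggered."""
    consecutive = 0
    for m in samples_dbm:
        consecutive = consecutive + 1 if m > threshold_dbm + hysteresis_db else 0
        if consecutive >= time_to_trigger:
            return True
    return False

# Example: CLI-RSSI crosses -80 dBm (with 2 dB hysteresis) for 3 samples.
print(event_above_threshold([-90, -77, -76, -75], -80.0, 2.0, 3))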
One example of the report configuration information element may include, but is not limited to:

-- ASN1START
-- TAG-REPORTCONFIGNR-START
ReportConfigNR ::= SEQUENCE {
    reportType CHOICE {
        periodical        PeriodicalReportConfig,
        eventTriggered    EventTriggerConfig,
        ...,
        reportCGI         ReportCGI
    }
}

ReportCGI ::= SEQUENCE {
    cellForWhichToReportCGI    PhysCellId,
    ...
}

EventTriggerConfig ::= SEQUENCE {
    eventId CHOICE {
        eventA1 SEQUENCE {
            a1-Threshold     MeasTriggerQuantity,
            reportOnLeave    BOOLEAN,
            hysteresis       Hysteresis,
            timeToTrigger    TimeToTrigger
        },
        eventA2 SEQUENCE {
            a2-Threshold     MeasTriggerQuantity,
            reportOnLeave    BOOLEAN,
            hysteresis       Hysteresis,
            timeToTrigger    TimeToTrigger
        },
        eventA3 SEQUENCE {
            a3-Offset           MeasTriggerQuantityOffset,
            reportOnLeave       BOOLEAN,
            hysteresis          Hysteresis,
            timeToTrigger       TimeToTrigger,
            useWhiteCellList    BOOLEAN
        },
        eventA4 SEQUENCE {
            a4-Threshold        MeasTriggerQuantity,
            reportOnLeave       BOOLEAN,
            hysteresis          Hysteresis,
            timeToTrigger       TimeToTrigger,
            useWhiteCellList    BOOLEAN
        },
        eventA5 SEQUENCE {
            a5-Threshold1       MeasTriggerQuantity,
            a5-Threshold2       MeasTriggerQuantity,
            reportOnLeave       BOOLEAN,
            hysteresis          Hysteresis,
            timeToTrigger       TimeToTrigger,
            useWhiteCellList    BOOLEAN
        },
        eventA6 SEQUENCE {
            a6-Offset           MeasTriggerQuantityOffset,
            reportOnLeave       BOOLEAN,
            hysteresis          Hysteresis,
            timeToTrigger       TimeToTrigger,
            useWhiteCellList    BOOLEAN
        },
        ...,
        eventL1 SEQUENCE {
            l1-cli-RSSI-Threshold    CLI-MeasTriggerQuantity    OPTIONAL,
            l1-srs-RSRP-Threshold    CLI-MeasTriggerQuantity    OPTIONAL,
            reportOnLeave            BOOLEAN,
            hysteresis               Hysteresis,
            timeToTrigger            TimeToTrigger
        },
        eventL2 SEQUENCE {
            l2-cli-RSSI-Threshold    RSSI-Range    OPTIONAL,
            l2-srs-RSRP-Threshold    RSRP-Range    OPTIONAL,
            reportOnLeave            BOOLEAN,
            hysteresis               Hysteresis,
            timeToTrigger            TimeToTrigger
        }
    },
    rsType                       NR-RS-Type,
    reportInterval               ReportInterval,
    reportAmount                 ENUMERATED {r1, r2, r4, r8, r16, r32, r64, infinity},
    reportQuantityCell           MeasReportQuantity,
    maxReportCells               INTEGER (1..maxCellReport),
    reportQuantityRS-Indexes     MeasReportQuantity    OPTIONAL,    -- Need R
    maxNrofRS-IndexesToReport    INTEGER (1..maxNrofIndexesToReport)    OPTIONAL,    -- Need R
    includeBeamMeasurements      BOOLEAN,
    reportAddNeighMeas           ENUMERATED {setup}    OPTIONAL,    -- Need R
    ...,
    [[
    cli-MeasReportQuantity       CLI-MeasReportQuantity    OPTIONAL
    ]]
}

PeriodicalReportConfig ::= SEQUENCE {
    rsType                       NR-RS-Type,
    reportInterval               ReportInterval,
    reportAmount                 ENUMERATED {r1, r2, r4, r8, r16, r32, r64, infinity},
    reportQuantityCell           MeasReportQuantity,
    maxReportCells               INTEGER (1..maxCellReport),
    reportQuantityRS-Indexes     MeasReportQuantity    OPTIONAL,    -- Need R
    maxNrofRS-IndexesToReport    INTEGER (1..maxNrofIndexesToReport)    OPTIONAL,    -- Need R
    includeBeamMeasurements      BOOLEAN,
    useWhiteCellList             BOOLEAN,
    ...,
    [[
    cli-MeasReportQuantity       CLI-MeasReportQuantity    OPTIONAL
    ]]
}

NR-RS-Type ::= ENUMERATED {ssb, csi-rs}

MeasTriggerQuantity ::= CHOICE {
    rsrp    RSRP-Range,
    rsrq    RSRQ-Range,
    sinr    SINR-Range
}

CLI-MeasTriggerQuantity ::= CHOICE {
    cli-rssi    RSSI-Range,    -- to be defined by RAN4
    srs-rsrp    RSRP-Range     -- to be defined by RAN4
}

MeasTriggerQuantityOffset ::= CHOICE {
    rsrp    INTEGER (-30..30),
    rsrq    INTEGER (-30..30),
    sinr    INTEGER (-30..30)
}

MeasReportQuantity ::= SEQUENCE {
    rsrp    BOOLEAN,
    rsrq    BOOLEAN,
    sinr    BOOLEAN
}

CLI-MeasReportQuantity ::= SEQUENCE {
    cli-rssi    BOOLEAN,
    srs-rsrp    BOOLEAN
}

-- TAG-REPORTCONFIGNR-STOP
-- ASN1STOP

In some aspects, in a network configured with a dynamic TDD configuration, strong CLI may occur at a UE if it is close to one or more UEs from another cell (or base station) whose strong uplink transmissions collide with its downlink reception. In this case, the strong interference power may push the components (e.g., analog circuitry) of the UE to saturation. If the UE is configured to measure the SRS-RSRP or the CLI-RSSI, the measurement may be inaccurate and/or the UE may not be able to detect the SRS sequence identifier. Therefore, the UE may transmit an overload flag in the report (e.g., the measResultCLI object containing the SRS-RSRP/CLI-RSSI report) to indicate that the interference is too strong to measure. Accordingly, aspects of the described techniques may include a "too strong signal to measure" (or "too strong of a signal to measure") indication as one of the possible values of the CLI-RSSI and/or SRS-RSRP measurement result. Accordingly, in some aspects the report of the CLI signal strength measurement may carry or otherwise convey an indication that the cross-link signal for the neighboring UEs is too strong to measure or a CLI signal strength measurement value associated with the cross-link signal being too strong to measure. In some aspects, various SRS SCS parameters may be defined as the SCS for the SRS to be measured. For example, an SRS-SCS parameter may be defined, which may describe the SCS for the SRS. In some aspects, the SRS-SCS parameter may indicate or otherwise be associated with the UE performing the CLI measurement within the active BWP. In some aspects, the SRS-SCS parameter may indicate that the SCS for the CLI measurement resource configuration (e.g., the measurement resource configuration used for the CLI signal strength measurement) may be the same as or different from the SCS of the active BWP. In this context, the UE may not be required to measure SRS-RSRP if the SCS of the SRS is different than the downlink active BWP SCS of the same carrier. In some aspects, the value range for this parameter may include, but is not limited to, 15, 30, and 60 kHz for FR1 and/or 60 and 120 kHz for FR2. In some aspects, the description of the parameter states that the UE is not required to measure SRS using a different SCS compared to the downlink active BWP SCS of the same carrier. Whether the SRS-SCS parameter needs to be explicitly configured by the network depends on whether the SRS-RSRP measurement configuration is cell specific or BWP dependent. If the SRS measurement configuration is cell specific, the SRS-SCS parameter may be configured for the UE. Otherwise, the SRS-SCS parameter may be omitted from the SRS-RSRP configuration because the network may only configure the SRS with the same SCS as the downlink active BWP SCS of the same carrier. Accordingly, aspects of the described techniques may include the network not explicitly configuring SRS-SCS if the SRS-RSRP measurement configuration is BWP dependent. In some aspects, although SRS may be transmitted on up to 4 ports from a UE, aspects of the described techniques address whether the SRS-RSRP should be measured on up to 4 ports.
For SRS-RSRP measurement report, layer 3 measurement and reporting may be applied. Conventionally, layer 3 measurement may be performed using a single port. To make CLI measurement compatible with existing layer 3 measurements, the UE may be configured only to measure SRS-RSRP on a single SRS port. However, it is to be understood that this does not preclude the UE from measuring multiple SRS ports if each port of the SRS is configured as a separate measurement resource. Accordingly, aspects of the described techniques may include each SRS-RSRP measurement resource only using or otherwise including a single SRS port. To measure the SRS transmitted from N (N=2, 4) ports from the interfering UE, the network may configure N separate SRS resources that correspond to the N ports at the interfering UE. In some aspects, this may include or otherwise be based on a parameter (nrofSRS-Ports) having a value range of 1,[2],[4], as illustrated in the sketch below. Accordingly, aspects of the described techniques provide a mechanism whereby base stations205and210coordinate to configure a measurement resource configuration associated with a CLI signal strength measurement for UEs215and220, respectively. The measurement resource configuration may be based on any of the information elements, parameters, resources, values, and the like, discussed above, alone or in any combination. UEs215and220may perform the CLI signal strength measurement according to the measurement resource configuration and transmit a report to their respective base station based at least in part on the CLI signal strength measurement. Base stations205and210may receive the reports from the respective UEs, and use this information to mitigate or otherwise reduce CLI within their respective coverage areas. FIG.3illustrates an example of a process300that supports UE measurement for CLI in accordance with aspects of the present disclosure. In some examples, process300may implement aspects of wireless communication systems100and/or200. Aspects of process300may be implemented by UE305, base station310, base station315, and/or UE320, which may be examples of the corresponding devices described herein. In some aspects, base station310may be a serving base station of UE305and base station315may be a serving base station of UE320. In some aspects, UE305may be a neighboring UE of UE320, and vice versa. In some aspects, base station310may be a neighboring base station with respect to base station315, and vice versa. At325, base stations310and315may coordinate to configure a measurement resource configuration associated with a CLI signal strength measurement for a UE associated with the respective base station. In some aspects, this may include base station310coordinating with base station315to configure the measurement resource configuration associated with the CLI signal strength measurement for UE305. In some aspects, this may include base station315coordinating with base station310to configure the measurement resource configuration associated with the CLI signal strength measurement for UE320. In some aspects, base stations310and315may coordinate over an Xn interface and/or an F1 interface. At330, base station310may transmit or otherwise provide a measurement configuration signal to UE305that carries or conveys the measurement resource configuration. At335, base station315may transmit or otherwise provide the measurement configuration signal to UE320that carries or conveys the measurement resource configuration.
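As an illustrative aside to the single-port SRS-RSRP rule described above, a minimal Python sketch of expanding an N-port SRS at an interfering UE into N separate single-port measurement resources; the class and function names are hypothetical:

from dataclasses import dataclass
from typing import List

@dataclass
class SrsRsrpResource:
    """Hypothetical single-port SRS-RSRP measurement resource."""
    srs_resource_id: int
    port_index: int  # each resource uses exactly one SRS port

def expand_srs_ports(base_resource_id: int, nrof_srs_ports: int) -> List[SrsRsrpResource]:
    """Configure N separate single-port resources for an SRS that the
    interfering UE transmits on N ports (N = 1, 2, or 4)."""
    assert nrof_srs_ports in (1, 2, 4)
    return [SrsRsrpResource(base_resource_id + i, i) for i in range(nrof_srs_ports)]

# Example: a 4-port SRS becomes four single-port measurement resources.
for r in expand_srs_ports(base_resource_id=0, nrof_srs_ports=4):
    print(r)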
In some aspects, the measurement configuration signal may use or otherwise be based on one or more of the information elements, values, parameters, and the like, discussed with reference to wireless communication system200. For example, the measurement configuration signal may use a measurement object (MeasObjectCLI) for CLI-RSSI and/or SRS-RSRP measurements. In some examples, the measurement configuration signal may carry or otherwise convey an indication of a filtering configuration (e.g., a layer 3 filtering configuration) for the CLI signal strength measurement (e.g., CLI-RSSI and/or SRS-RSRP measurements). In some aspects, the measurement configuration signal may carry or convey an indication (e.g., CLI-MeasQuantity) to indicate which measurement(s) (e.g., CLI-RSSI and/or SRS-RSRP) the UE is expected to measure and report. At340and at345, UEs305and320may perform the CLI signal strength measurement (e.g., for the neighboring UE) according to the measurement resource configuration. In some aspects, the CLI signal strength measurement may be performed during an intra-frequency measurement gap. In some aspects, the CLI signal strength measurement may include UEs305and320monitoring for interfering signals from any other neighboring UE performing the wireless communications on the links between UEs305and320and base stations310and315, respectively (e.g., CLI-RSSI measurements). In some aspects, the CLI signal strength measurements may include UE305transmitting SRS and UE320measuring the signal strength of the SRS (e.g., SRS-RSRP measurements), or vice versa. As discussed, in some examples the interfering CLI may be too strong for a UE to accurately measure, which may trigger a "too strong to measure" flag at the measuring UE. In some aspects, one or more of UEs305and/or320may be operating in a DRX mode. In this context, it may be up to the respective UE to determine whether to perform the CLI signal strength measurement during an off period of the DRX cycle or to wait until an active period (or on period) of the DRX cycle to perform the CLI signal strength measurement. In some aspects, the determination of whether to perform the CLI signal strength measurement may be based on whether or not the respective UE can perform such measurement while maintaining or otherwise supporting radio link performance requirements. At350, UE305may transmit (and base station310may receive) a report of the CLI signal strength measurement. Similarly, at355, UE320may transmit (and base station315may receive) a report of the CLI signal strength measurement. Broadly, the reports may carry or convey an indication of the results of the CLI signal strength measurement performed by UE305and/or UE320, respectively. In some aspects, the report may be periodic and/or event triggered reporting, e.g., based on the CLI-RSSI and/or SRS-RSRP measurements being above a high threshold and/or being below a low threshold. In some examples, the report of the CLI signal strength measurement may be transmitted to the base station310together with serving cell measurements (e.g., of the base station310) taken by the UE305. For example, the UE305may include the CLI signal strength measurement report in a MeasResults object transmitted to base station310, and the MeasResults object may include both serving cell measurements of base station310and a measResultCLI object containing CLI signal strength measurements of UE320.
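As an illustration of the combined reporting just described, a minimal sketch of a report in which CLI results accompany serving cell measurements; only the MeasResults and measResultCLI names come from the description, and the dictionary layout and remaining field names are hypothetical:

# Hypothetical layout of a combined measurement report: serving cell
# measurements of the serving base station ride along with CLI results
# for the neighboring UE in a measResultCLI entry.
meas_results = {
    "measId": 1,
    "measResultServingCell": {
        "rsrp_dbm": -95.0,
        "rsrq_db": -11.0,
    },
    "measResultCLI": [
        {"resourceId": 0, "cli_rssi_dbm": -82.0},
        {"resourceId": 1, "srs_rsrp_dbm": -97.0},
    ],
}
print(meas_results["measResultCLI"][0]["cli_rssi_dbm"])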
FIG.4illustrates an example of a process400that supports UE measurement for CLI in accordance with aspects of the present disclosure. In some examples, process400may implement aspects of wireless communication systems100,200, and/or process300. Aspects of process400may be implemented by base station405and/or UE410, which may be examples of corresponding devices described herein. In some aspects, base station405may be a serving base station of UE410. At415, base station405may coordinate with a neighboring base station to configure a measurement resource configuration associated with a CLI signal strength measurement for UE410. In some aspects, the coordination may be over an Xn interface and/or an F1 interface. At420, base station405may transmit (and UE410may receive) a measurement configuration signal carrying or conveying the measurement resource configuration. In some aspects, this may include UE410determining a filtering configuration for the CLI signal strength measurement based on the measurement resource configuration. In this aspect, the CLI signal strength measurement may be performed based at least in part on the filtering configuration. At425, UE410may perform the CLI signal strength measurement for UE(s) in intra-frequency neighboring cell(s) according to the measurement resource configuration. In some aspects, the CLI signal strength measurement may be performed during an intra-frequency measurement gap. Examples of the CLI signal strength measurement include, but are not limited to, CLI-RSSI measurements and/or SRS-RSRP measurements. In some aspects, UE410may determine to perform the CLI signal strength measurement during a DRX off period based at least in part on a radio link performance threshold associated with a link between UE410and base station405. In some aspects, UE410may determine, based at least in part on a slot duration and/or an SCS associated with a link between UE410and base station405, a periodicity for the CLI signal strength measurement. The report may be transmitted based at least in part on the periodicity. In some aspects, the CLI signal strength measurement may include an SRS-RSRP measurement type that is dependent on a BWP. In some aspects, the CLI signal strength measurement is performed using a single antenna port of UE410. At430, UE410may transmit (and base station405may receive) a report of the CLI signal strength measurement. In some aspects, UE410may detect a triggering condition for an event based at least in part on a comparison of the CLI signal strength measurement to a threshold, and transmitting the report may be based at least in part on the event. Examples of the triggering condition for the event include, but are not limited to, a first triggering condition associated with the CLI signal strength measurement falling below a low threshold, a second triggering condition associated with the CLI signal strength measurement exceeding a high threshold, and the like. In some aspects, the report of the CLI signal strength measurement may be transmitted to the base station405together with serving cell measurements (e.g., of the base station405) taken by the UE410. For example, UE410may include the CLI signal strength measurement report in a MeasResults object transmitted to base station405, as discussed herein. In some aspects, UE410may receive from base station405an indication of a type of CLI signal strength measurement to be performed.
This indication may be received via explicit signaling of the requested type of measurement, or implicitly indicated to the UE410, such as through an indicated association of a triggering condition or an event with a certain type of measurement. In this aspect, the CLI signal strength measurement may be performed and reported based at least in part on the type of CLI signal strength measurement. Examples of the type of CLI signal strength measurement include, but are not limited to, a CLI-RSSI measurement type, an SRS-RSRP measurement type, and the like. In some aspects, the measurement configuration signal may carry or convey a first measurement configuration associated with the CLI-RSSI measurement type and a second measurement configuration associated with the SRS-RSRP measurement type. In some aspects, the report of the CLI signal strength measurement may carry or convey an indication that a cross-link signal for the neighboring UEs is too strong to measure or a CLI signal strength measurement value associated with the cross-link signal being too strong to measure. FIG.5shows a block diagram500of a device505that supports UE measurement for CLI in accordance with aspects of the present disclosure. The device505may be an example of aspects of a UE115as described herein. The device505may include a receiver510, a UE communications manager515, and a transmitter520. The device505may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver510may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to UE measurement for CLI, etc.). Information may be passed on to other components of the device505. The receiver510may be an example of aspects of the transceiver820described with reference toFIG.8. The receiver510may utilize a single antenna or a set of antennas. The UE communications manager515may receive from a base station a measurement configuration signal including a measurement resource configuration associated with a CLI signal strength measurement, perform the CLI signal strength measurement for one or more UEs associated with one or more intra-frequency neighboring cells according to the measurement resource configuration, and transmit a report of the CLI signal strength measurement to the base station. In some examples, the CLI signal strength measurement may be performed during an intra-frequency measurement gap. The UE communications manager515may be an example of aspects of the UE communications manager810described herein. The UE communications manager515, or its sub-components, may be implemented in hardware, code (e.g., software or firmware) executed by a processor, or any combination thereof. If implemented in code executed by a processor, the functions of the UE communications manager515, or its sub-components, may be executed by a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in the present disclosure.
The UE communications manager515, or its sub-components, may be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations by one or more physical components. In some examples, the UE communications manager515, or its sub-components, may be a separate and distinct component in accordance with various aspects of the present disclosure. In some examples, the UE communications manager515, or its sub-components, may be combined with one or more other hardware components, including but not limited to an input/output (I/O) component, a transceiver, a network server, another computing device, one or more other components described in the present disclosure, or a combination thereof in accordance with various aspects of the present disclosure. The transmitter520may transmit signals generated by other components of the device505. In some examples, the transmitter520may be collocated with a receiver510in a transceiver module. For example, the transmitter520may be an example of aspects of the transceiver820described with reference toFIG.8. The transmitter520may utilize a single antenna or a set of antennas. In some examples, the UE communications manager515may include a mobile device modem chip or integrated circuit. In some examples, the receiver510and transmitter520may include analog electronic components (e.g., amplifiers, filters, antennas, etc.) coupled with the UE communications manager515to enable the receipt and transmission of wireless signals under the management of UE communications manager515. FIG.6shows a block diagram600of a device605that supports UE measurement for CLI in accordance with aspects of the present disclosure. The device605may be an example of aspects of a device505, or a UE115as described herein. The device605may include a receiver610, a UE communications manager615, and a transmitter635. The device605may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver610may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to UE measurement for CLI, etc.). Information may be passed on to other components of the device605. The receiver610may be an example of aspects of the transceiver820described with reference toFIG.8. The receiver610may utilize a single antenna or a set of antennas. The UE communications manager615may be an example of aspects of the UE communications manager515as described herein. The UE communications manager615may include a CLI measurement configuration manager620, a CLI measurement performance manager625, and a CLI measurement report manager630. The UE communications manager615may be an example of aspects of the UE communications manager810described herein. The CLI measurement configuration manager620may receive from a base station, via receiver610, a measurement configuration signal including a measurement resource configuration associated with a CLI signal strength measurement. The CLI measurement performance manager625may perform the CLI signal strength measurement for one or more UEs associated with one or more intra-frequency neighboring cells according to the measurement resource configuration. In some examples, the CLI signal strength measurement is performed during an intra-frequency measurement gap.
The CLI measurement report manager630may control the transmitter635to transmit a report of the CLI signal strength measurement to the base station. The transmitter635may transmit signals generated by other components of the device605. In some examples, the transmitter635may be collocated with a receiver610in a transceiver module. For example, the transmitter635may be an example of aspects of the transceiver820described with reference toFIG.8. The transmitter635may utilize a single antenna or a set of antennas. FIG.7shows a block diagram700of a UE communications manager705that supports UE measurement for CLI in accordance with aspects of the present disclosure. The UE communications manager705may be an example of aspects of a UE communications manager515, a UE communications manager615, or a UE communications manager810described herein. The UE communications manager705may include a CLI measurement configuration manager710, a CLI measurement performance manager715, a CLI measurement report manager720, a DRX manager725, a periodicity manager730, an event manager735, and a CLI measurement type manager740. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses). The CLI measurement configuration manager710may receive from a base station a measurement configuration signal including a measurement resource configuration associated with a CLI signal strength measurement. In some cases, the measurement resource configuration includes one or more of a reporting configuration for the CLI signal strength measurement, a filtering configuration for the CLI signal strength measurement, a measurement gap configuration for the CLI signal strength measurement, or a quantity configuration for the CLI signal strength measurement. In some cases, the CLI signal strength measurement includes a SRS based RSRP measurement type that is dependent on a bandwidth part. In some cases, the CLI signal strength measurement is performed using a single antenna port of the UE. The CLI measurement performance manager715may perform the CLI signal strength measurement for one or more UEs associated with one or more intra-frequency neighboring cells according to the measurement resource configuration, where the CLI signal strength measurement is performed during an intra-frequency measurement gap. The CLI measurement report manager720may transmit a report of the CLI signal strength measurement to the base station. In some cases, the report of the CLI signal strength measurement includes an indication that a cross-link signal for the neighboring UEs is too strong to measure or a CLI signal strength measurement value associated with the cross-link signal being too strong to measure. In some cases, the report of the CLI signal strength measurement may be transmitted together with a serving cell measurement report. The DRX manager725may determine to perform the CLI signal strength measurement during a DRX off period based on a measurement accuracy threshold. The periodicity manager730may determine, based on a slot duration and a subcarrier spacing associated with a link between the UE and the base station, a periodicity for the CLI signal strength measurement, where transmitting the report is based on the determined periodicity. The event manager735may detect a triggering condition for an event based on a comparison of the CLI signal strength measurement to a threshold. Transmitting the report may be based on the occurrence of the event, as defined by the triggering condition. 
In some cases, the triggering condition for the event includes one or more of a first triggering condition associated with the CLI signal strength measurement falling below a low threshold or a second triggering condition associated with the CLI signal strength measurement exceeding a high threshold. The report may be transmitted in response to the event being triggered. The CLI measurement type manager740may receive from the base station an explicit or implicit indication of a type of CLI signal strength measurement, where performing the CLI signal strength measurement and reporting the CLI signal strength measurement are based on the indicated type of CLI signal strength measurement. In some cases, the indicated type of CLI signal strength measurement includes one or more of a CLI-RSSI measurement type or a SRS-RSRP measurement type. In some cases, the measurement configuration signal includes a first measurement configuration associated with the CLI RSSI measurement type and a second measurement configuration associated with the sounding reference signal RSRP measurement type. FIG.8shows a diagram of a system800including a device805that supports UE measurement for CLI in accordance with aspects of the present disclosure. The device805may be an example of or include the components of device505, device605, or a UE115as described herein. The device805may include components for bi-directional voice and data communications including components for transmitting and receiving communications, including a UE communications manager810, an I/O controller815, a transceiver820, an antenna825, memory830, and a processor840. These components may be in electronic communication via one or more buses (e.g., bus845). The UE communications manager810may receive from a base station a measurement configuration signal including a measurement resource configuration associated with a CLI signal strength measurement, perform the CLI signal strength measurement for one or more UEs associated with one or more intra-frequency neighboring cells according to the measurement resource configuration, and transmit a report of the CLI signal strength measurement to the base station. The I/O controller815may manage input and output signals for the device805. The I/O controller815may also manage peripherals not integrated into the device805. In some cases, the I/O controller815may represent a physical connection or port to an external peripheral. In some cases, the I/O controller815may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. In other cases, the I/O controller815may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller815may be implemented as part of a processor. In some cases, a user may interact with the device805via the I/O controller815or via hardware components controlled by the I/O controller815. The transceiver820may communicate bi-directionally, via one or more antennas, wired, or wireless links as described above. For example, the transceiver820may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver820may also include a modem to modulate the packets and provide the modulated packets to the antennas for transmission, and to demodulate packets received from the antennas. In some cases, the wireless device may include a single antenna825.
However, in some cases the device may have more than one antenna825, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The memory830may include random-access memory (RAM) and read-only memory (ROM). The memory830may store computer-readable, computer-executable code835including instructions that, when executed, cause the processor to perform various functions described herein. In some cases, the memory830may contain, among other things, a basic input/output system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor840may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor840may be configured to operate a memory array using a memory controller. In other cases, a memory controller may be integrated into the processor840. The processor840may be configured to execute computer-readable instructions stored in a memory (e.g., the memory830) to cause the device805to perform various functions (e.g., functions or tasks supporting UE measurement for CLI). The code835may include instructions to implement aspects of the present disclosure, including instructions to support wireless communications. The code835may be stored in a non-transitory computer-readable medium such as system memory or other type of memory. In some cases, the code835may not be directly executable by the processor840but may cause a computer (e.g., when compiled and executed) to perform functions described herein. FIG.9shows a block diagram900of a device905that supports UE measurement for CLI in accordance with aspects of the present disclosure. The device905may be an example of aspects of a base station105as described herein. The device905may include a receiver910, a base station (BS) communications manager915, and a transmitter920. The device905may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver910may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to UE measurement for CLI, etc.). Information may be passed on to other components of the device905. The receiver910may be an example of aspects of the transceiver1220described with reference toFIG.12. The receiver910may utilize a single antenna or a set of antennas. The BS communications manager915may coordinate with a neighboring base station to configure a measurement resource configuration associated with a CLI signal strength measurement for a UE associated with the base station, control transmitter920to transmit to the UE a measurement configuration signal including the measurement resource configuration, and receive, via receiver910, a report of the CLI signal strength measurement from the UE. The CLI signal strength measurement may be based on the measurement resource configuration. In some examples, the CLI signal strength measurement may be performed during an intra-frequency measurement gap. The BS communications manager915may be an example of aspects of the BS communications manager1210described herein.
The BS communications manager915, or its sub-components, may be implemented in hardware, code (e.g., software or firmware) executed by a processor, or any combination thereof. If implemented in code executed by a processor, the functions of the BS communications manager915, or its sub-components, may be executed by a general-purpose processor, a DSP, an application-specific integrated circuit (ASIC), an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in the present disclosure. The BS communications manager915, or its sub-components, may be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations by one or more physical components. In some examples, the BS communications manager915, or its sub-components, may be a separate and distinct component in accordance with various aspects of the present disclosure. In some examples, the BS communications manager915, or its sub-components, may be combined with one or more other hardware components, including but not limited to an input/output (I/O) component, a transceiver, a network server, another computing device, one or more other components described in the present disclosure, or a combination thereof in accordance with various aspects of the present disclosure. The transmitter920may transmit signals generated by other components of the device905. In some examples, the transmitter920may be collocated with a receiver910in a transceiver module. For example, the transmitter920may be an example of aspects of the transceiver1220described with reference toFIG.12. The transmitter920may utilize a single antenna or a set of antennas. In some examples, the BS communications manager915may include a mobile device modem chip or integrated circuit. In some examples, the receiver910and transmitter920may include analog electronic components (e.g., amplifiers, filters, antennas, etc.) coupled with the BS communications manager915to enable the receipt and transmission of wireless signals under the management of BS communications manager915. FIG.10shows a block diagram1000of a device1005that supports UE measurement for CLI in accordance with aspects of the present disclosure. The device1005may be an example of aspects of a device905, or a base station105as described herein. The device1005may include a receiver1010, a BS communications manager1015, and a transmitter1035. The device1005may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver1010may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to UE measurement for CLI, etc.). Information may be passed on to other components of the device1005. The receiver1010may be an example of aspects of the transceiver1220described with reference toFIG.12. The receiver1010may utilize a single antenna or a set of antennas. The BS communications manager1015may be an example of aspects of the BS communications manager915as described herein. The BS communications manager1015may include a CLI measurement coordination manager1020, a CLI measurement configuration manager1025, and a CLI measurement report manager1030.
The BS communications manager1015may be an example of aspects of the BS communications manager1210described herein. The CLI measurement coordination manager1020may coordinate with a neighboring base station to configure a measurement resource configuration associated with a CLI signal strength measurement for a UE associated with the base station. The CLI measurement configuration manager1025may transmit to the UE a measurement configuration signal including the measurement resource configuration. The CLI measurement report manager1030may receive a report of the CLI signal strength measurement from the UE, the CLI signal strength measurement being based on the measurement resource configuration. In some examples, the CLI signal strength measurement may be performed during an intra-frequency measurement gap. In some cases, the report of the CLI signal strength measurement may be received together with a serving cell measurement report. The transmitter1035may transmit signals generated by other components of the device1005. In some examples, the transmitter1035may be collocated with a receiver1010in a transceiver module. For example, the transmitter1035may be an example of aspects of the transceiver1220described with reference toFIG.12. The transmitter1035may utilize a single antenna or a set of antennas. FIG.11shows a block diagram1100of a BS communications manager1105that supports UE measurement for CLI in accordance with aspects of the present disclosure. The BS communications manager1105may be an example of aspects of a BS communications manager915, a BS communications manager1015, or a BS communications manager1210described herein. The BS communications manager1105may include a CLI measurement coordination manager1110, a CLI measurement configuration manager1115, a CLI measurement report manager1120, a filtering configuration manager1125, a periodicity manager1130, a CLI measurement type manager1135, and a CLI measurement performance manager1140. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses). The CLI measurement coordination manager1110may coordinate with a neighboring base station to configure a measurement resource configuration associated with a CLI signal strength measurement for a UE associated with the base station. In some examples, coordinating with the neighboring base station may occur over at least one of an Xn interface, an F1 interface, or a combination thereof, and the coordinating may include an exchange of one or more of an interface setup message, a configuration update message, or a combination thereof. The CLI measurement configuration manager1115may transmit to the UE a measurement configuration signal including the measurement resource configuration. In some cases, the measurement resource configuration includes one or more of a reporting configuration for the CLI signal strength measurement, a filtering configuration for the CLI signal strength measurement, a measurement gap configuration for the CLI signal strength measurement, or a quantity configuration for the CLI signal strength measurement. The CLI measurement report manager1120may receive a report of the CLI signal strength measurement from the UE, the CLI signal strength measurement being based on the measurement resource configuration. In some examples, the CLI signal strength measurement may be performed during an intra-frequency measurement gap.
In some examples, the CLI measurement report manager1120may receive the report based on fulfillment of a triggering condition for an event based on a comparison of the CLI signal strength measurement to a threshold. In some cases, the triggering condition includes a first triggering condition associated with the CLI signal strength measurement falling below a low threshold or a second triggering condition associated with the CLI signal strength measurement exceeding a high threshold. In some cases, the report of the CLI signal strength measurement includes an indication that a cross-link signal for one or more UEs associated with one or more intra-frequency neighboring cells is too strong to measure or a CLI signal strength measurement value associated with the cross-link signal being too strong to measure. In some cases, the report of the CLI signal strength measurement may be transmitted together with a serving cell measurement report. The filtering configuration manager1125may determine a filtering configuration for the CLI signal strength measurement based on the measurement resource configuration, where the CLI signal strength measurement is based on the determined filtering configuration. The periodicity manager1130may determine, based on a slot duration and a subcarrier spacing associated with a link between the UE and the base station, a periodicity for the CLI signal strength measurement, where receiving the report is based on the determined periodicity. The CLI measurement type manager1135may transmit to the UE an indication of a type of CLI signal strength measurement, where the CLI signal strength measurement and report of the CLI signal strength measurement are based on the indicated type of CLI signal strength measurement. In some cases, the indicated type of CLI signal strength measurement includes a CLI-RSSI measurement type or a SRS-RSRP measurement type. In some cases, the measurement configuration signal includes a first measurement configuration associated with the CLI RSSI measurement type and a second measurement configuration associated with the sounding reference signal RSRP measurement type. The CLI measurement performance manager1140may monitor, control, or otherwise manage aspects of the CLI signal strength measurement including a SRS-RSRP measurement type that is dependent on a BWP. FIG.12shows a diagram of a system1200including a device1205that supports UE measurement for CLI in accordance with aspects of the present disclosure. The device1205may be an example of or include the components of device905, device1005, or a base station105as described herein. The device1205may include components for bi-directional voice and data communications including components for transmitting and receiving communications, including a BS communications manager1210, a network communications manager1215, a transceiver1220, an antenna1225, memory1230, a processor1240, and an inter-station communications manager1245. These components may be in electronic communication via one or more buses (e.g., bus1250). The BS communications manager1210may coordinate with a neighboring base station to configure a measurement resource configuration associated with a CLI signal strength measurement for a UE associated with the base station, transmit to the UE a measurement configuration signal including the measurement resource configuration, and receive a report of the CLI signal strength measurement from the UE, the CLI signal strength measurement being based on the measurement resource configuration.
In some examples, the CLI signal strength measurement may be performed during an intra-frequency measurement gap. The network communications manager1215may manage communications with the core network (e.g., via one or more wired backhaul links). For example, the network communications manager1215may manage the transfer of data communications for client devices, such as one or more UEs115. The transceiver1220may communicate bi-directionally, via one or more antennas, wired, or wireless links as described above. For example, the transceiver1220may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver1220may also include a modem to modulate the packets and provide the modulated packets to the antennas for transmission, and to demodulate packets received from the antennas. In some cases, the wireless device may include a single antenna1225. However, in some cases the device may have more than one antenna1225, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The memory1230may include RAM, ROM, or a combination thereof. The memory1230may store computer-readable code1235including instructions that, when executed by a processor (e.g., the processor1240), cause the device to perform various functions described herein. In some cases, the memory1230may contain, among other things, a BIOS which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor1240may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor1240may be configured to operate a memory array using a memory controller. In some cases, a memory controller may be integrated into processor1240. The processor1240may be configured to execute computer-readable instructions stored in a memory (e.g., the memory1230) to cause the device1205to perform various functions (e.g., functions or tasks supporting UE measurement for CLI). The inter-station communications manager1245may manage communications with other base stations105, and may include a controller or scheduler for controlling communications with UEs115in cooperation with other base stations105. For example, the inter-station communications manager1245may coordinate scheduling for transmissions to UEs115for various interference mitigation techniques such as beamforming or joint transmission. In some examples, the inter-station communications manager1245may provide an X2 interface within an LTE/LTE-A wireless communication network technology to provide communication between base stations105. The code1235may include instructions to implement aspects of the present disclosure, including instructions to support wireless communications. The code1235may be stored in a non-transitory computer-readable medium such as system memory or other type of memory. In some cases, the code1235may not be directly executable by the processor1240but may cause a computer (e.g., when compiled and executed) to perform functions described herein. FIG.13shows a flowchart illustrating a method1300that supports UE measurement for CLI in accordance with aspects of the present disclosure. The operations of method1300may be implemented by a UE115or its components as described herein.
For example, the operations of method1300may be performed by a communications manager as described with reference toFIGS.5through8. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the functions described below. Additionally or alternatively, a UE may perform aspects of the functions described below using special-purpose hardware. At1305, the UE may receive from a base station a measurement configuration signal including a measurement resource configuration associated with a CLI signal strength measurement. The operations of1305may be performed according to the methods described herein. In some examples, aspects of the operations of1305may be performed by a CLI measurement configuration manager as described with reference toFIGS.5through8. At1310, the UE may perform the CLI signal strength measurement for one or more UEs associated with one or more intra-frequency neighboring cells according to the measurement resource configuration. In some examples, the CLI signal strength measurement is performed during an intra-frequency measurement gap. The operations of1310may be performed according to the methods described herein. In some examples, aspects of the operations of1310may be performed by a CLI measurement performance manager as described with reference toFIGS.5through8. At1315, the UE may transmit a report of the CLI signal strength measurement to the base station. The operations of1315may be performed according to the methods described herein. In some examples, aspects of the operations of1315may be performed by a CLI measurement report manager as described with reference toFIGS.5through8. FIG.14shows a flowchart illustrating a method1400that supports UE measurement for CLI in accordance with aspects of the present disclosure. The operations of method1400may be implemented by a UE115or its components as described herein. For example, the operations of method1400may be performed by a communications manager as described with reference toFIGS.5through8. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the functions described below. Additionally or alternatively, a UE may perform aspects of the functions described below using special-purpose hardware. At1405, the UE may receive from a base station a measurement configuration signal including a measurement resource configuration associated with a CLI signal strength measurement. The operations of1405may be performed according to the methods described herein. In some examples, aspects of the operations of1405may be performed by a CLI measurement configuration manager as described with reference toFIGS.5through8. At1410, the UE may perform the CLI signal strength measurement for one or more UEs associated with one or more intra-frequency neighboring cells according to the measurement resource configuration. In some examples, the CLI signal strength measurement is performed during an intra-frequency measurement gap. The operations of1410may be performed according to the methods described herein. In some examples, aspects of the operations of1410may be performed by a CLI measurement performance manager as described with reference toFIGS.5through8. At1415, the UE may determine to perform the CLI signal strength measurement during a DRX off period based on a measurement accuracy threshold. The operations of1415may be performed according to the methods described herein. 
In some examples, aspects of the operations of1415may be performed by a DRX manager as described with reference toFIGS.5through8. At1420, the UE may transmit a report of the CLI signal strength measurement to the base station. The operations of1420may be performed according to the methods described herein. In some examples, aspects of the operations of1420may be performed by a CLI measurement report manager as described with reference toFIGS.5through8. In some cases, the report of the CLI signal strength measurement may be transmitted together with a serving cell measurement report. FIG.15shows a flowchart illustrating a method1500that supports UE measurement for CLI in accordance with aspects of the present disclosure. The operations of method1500may be implemented by a UE115or its components as described herein. For example, the operations of method1500may be performed by a communications manager as described with reference toFIGS.5through8. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the functions described below. Additionally or alternatively, a UE may perform aspects of the functions described below using special-purpose hardware. At1505, the UE may receive from a base station a measurement configuration signal including a measurement resource configuration associated with a CLI signal strength measurement. The operations of1505may be performed according to the methods described herein. In some examples, aspects of the operations of1505may be performed by a CLI measurement configuration manager as described with reference toFIGS.5through8. At1510, the UE may perform the CLI signal strength measurement for one or more UEs associated with one or more intra-frequency neighboring cells according to the measurement resource configuration. In some examples, the CLI signal strength measurement is performed during an intra-frequency measurement gap. The operations of1510may be performed according to the methods described herein. In some examples, aspects of the operations of1510may be performed by a CLI measurement performance manager as described with reference toFIGS.5through8. At1515, the UE may transmit a report of the CLI signal strength measurement to the base station. The operations of1515may be performed according to the methods described herein. In some examples, aspects of the operations of1515may be performed by a CLI measurement report manager as described with reference toFIGS.5through8. At1520, the UE may determine, based on a slot duration and a subcarrier spacing associated with a link between the UE and the base station, a periodicity for the CLI signal strength measurement, where transmitting the report is based on the determined periodicity. In some cases, the report of the CLI signal strength measurement may be transmitted together with a serving cell measurement report. The operations of1520may be performed according to the methods described herein. In some examples, aspects of the operations of1520may be performed by a periodicity manager as described with reference toFIGS.5through8. FIG.16shows a flowchart illustrating a method1600that supports UE measurement for CLI in accordance with aspects of the present disclosure. The operations of method1600may be implemented by a base station105or its components as described herein. For example, the operations of method1600may be performed by a communications manager as described with reference toFIGS.9through12. 
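The derivation in step 1520 above, a reporting periodicity determined from the slot duration and the subcarrier spacing, can be sketched numerically. This is a minimal illustration, assuming the NR numerology in which SCS = 15 * 2^u kHz gives 2^u slots per 1 ms subframe (see Table 1 later in this document); the function names and the choice to express the period as an integer number of slots are assumptions, not from the patent.

```python
# Illustrative sketch (not from the patent): deriving a CLI measurement
# periodicity from the slot duration implied by the subcarrier spacing.

def slot_duration_ms(scs_khz):
    """NR slot duration for SCS = 15 * 2**u kHz (normal CP)."""
    u = {15: 0, 30: 1, 60: 2, 120: 3, 240: 4}[scs_khz]
    return 1.0 / (2 ** u)  # there are 2**u slots per 1 ms subframe

def cli_measurement_period_ms(scs_khz, slots_per_period):
    """Periodicity expressed as an integer number of slots."""
    return slots_per_period * slot_duration_ms(scs_khz)

# Example: 160 slots at 30 kHz SCS gives an 80 ms reporting periodicity
print(cli_measurement_period_ms(30, 160))  # 80.0
```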
In some examples, a base station may execute a set of instructions to control the functional elements of the base station to perform the functions described below. Additionally or alternatively, a base station may perform aspects of the functions described below using special-purpose hardware. At1605, the base station may coordinate with a neighboring base station to configure a measurement resource configuration associated with a CLI signal strength measurement for a UE associated with the base station. The operations of1605may be performed according to the methods described herein. In some examples, aspects of the operations of1605may be performed by a CLI measurement coordination manager as described with reference toFIGS.9through12. At1610, the base station may transmit to the UE a measurement configuration signal including the measurement resource configuration. The operations of1610may be performed according to the methods described herein. In some examples, aspects of the operations of1610may be performed by a CLI measurement configuration manager as described with reference toFIGS.9through12. At1615, the base station may receive a report of the CLI signal strength measurement from the UE, the CLI signal strength measurement being based on the measurement resource configuration. In some examples, the CLI signal strength measurement may be performed during an intra-frequency measurement gap. In some cases, the report of the CLI signal strength measurement may be received together with a serving cell measurement report. The operations of1615may be performed according to the methods described herein. In some examples, aspects of the operations of1615may be performed by a CLI measurement report manager as described with reference toFIGS.9through12. FIG.17shows a flowchart illustrating a method1700that supports UE measurement for CLI in accordance with aspects of the present disclosure. The operations of method1700may be implemented by a base station105or its components as described herein. For example, the operations of method1700may be performed by a communications manager as described with reference toFIGS.9through12. In some examples, a base station may execute a set of instructions to control the functional elements of the base station to perform the functions described below. Additionally or alternatively, a base station may perform aspects of the functions described below using special-purpose hardware. At1705, the base station may coordinate with a neighboring base station to configure a measurement resource configuration associated with a CLI signal strength measurement for a UE associated with the base station. The operations of1705may be performed according to the methods described herein. In some examples, aspects of the operations of1705may be performed by a CLI measurement coordination manager as described with reference toFIGS.9through12. At1710, the base station may transmit to the UE an indication of a type of CLI signal strength measurement, where the CLI signal strength measurement and report of the CLI signal strength measurement are based on the indicated type of CLI signal strength measurement. The operations of1710may be performed according to the methods described herein. In some examples, aspects of the operations of1710may be performed by a CLI measurement type manager as described with reference toFIGS.9through12. At1715, the base station may transmit to the UE a measurement configuration signal including the measurement resource configuration. 
The operations of1715may be performed according to the methods described herein. In some examples, aspects of the operations of1715may be performed by a CLI measurement configuration manager as described with reference toFIGS.9through12. At1720, the base station may receive a report of the CLI signal strength measurement from the UE, the CLI signal strength measurement being based on the measurement resource configuration. In some examples, the CLI signal strength measurement may be performed during an intra-frequency measurement gap. In some cases, the report of the CLI signal strength measurement may be received together with a serving cell measurement report. The operations of1720may be performed according to the methods described herein. In some examples, aspects of the operations of1720may be performed by a CLI measurement report manager as described with reference toFIGS.9through12. It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, aspects from two or more of the methods may be combined. Techniques described herein may be used for various wireless communication systems such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), OFDMA, single carrier frequency division multiple access (SC-FDMA), and other systems. A CDMA system may implement a radio technology such as CDMA2000, Universal Terrestrial Radio Access (UTRA), etc. CDMA2000 covers IS-2000, IS-95, and IS-856 standards. IS-2000 Releases may be commonly referred to as CDMA2000 1×, 1×, etc. IS-856 (TIA-856) is commonly referred to as CDMA2000 1×EV-DO, High Rate Packet Data (HRPD), etc. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA. A TDMA system may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA system may implement a radio technology such as Ultra Mobile Broadband (UMB), Evolved UTRA (E-UTRA), Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, etc. UTRA and E-UTRA are part of Universal Mobile Telecommunications System (UMTS). LTE, LTE-A, and LTE-A Pro are releases of UMTS that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A, LTE-A Pro, NR, and GSM are described in documents from the organization named “3rd Generation Partnership Project” (3GPP). CDMA2000 and UMB are described in documents from an organization named “3rd Generation Partnership Project 2” (3GPP2). The techniques described herein may be used for the systems and radio technologies mentioned herein as well as other systems and radio technologies. While aspects of an LTE, LTE-A, LTE-A Pro, or NR system may be described for purposes of example, and LTE, LTE-A, LTE-A Pro, or NR terminology may be used in much of the description, the techniques described herein are applicable beyond LTE, LTE-A, LTE-A Pro, or NR applications. A macro cell generally covers a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscriptions with the network provider. A small cell may be associated with a lower-powered base station, as compared with a macro cell, and a small cell may operate in the same or different (e.g., licensed, unlicensed, etc.) frequency bands as macro cells. 
Small cells may include pico cells, femto cells, and micro cells according to various examples. A pico cell, for example, may cover a small geographic area and may allow unrestricted access by UEs with service subscriptions with the network provider. A femto cell may also cover a small geographic area (e.g., a home) and may provide restricted access by UEs having an association with the femto cell (e.g., UEs in a closed subscriber group (CSG), UEs for users in the home, and the like). An eNB for a macro cell may be referred to as a macro eNB. An eNB for a small cell may be referred to as a small cell eNB, a pico eNB, a femto eNB, or a home eNB. An eNB may support one or multiple (e.g., two, three, four, and the like) cells, and may also support communications using one or multiple component carriers. The wireless communication systems described herein may support synchronous or asynchronous operation. For synchronous operation, the base stations may have similar frame timing, and transmissions from different base stations may be approximately aligned in time. For asynchronous operation, the base stations may have different frame timing, and transmissions from different base stations may not be aligned in time. The techniques described herein may be used for either synchronous or asynchronous operations. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA, or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. 
A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable ROM (EEPROM), flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. As used herein, including in the claims, “or” as used in a list of items (e.g., a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label, or other subsequent reference label. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. 
The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
121,679
11943648
DESCRIPTION OF EXEMPLARY EMBODIMENTS In the present disclosure, “A or B” may mean “only A”, “only B”, or “both A and B.” In other words, in the present disclosure, “A or B” may be interpreted as “A and/or B”. For example, in the present disclosure, “A, B, or C” may mean “only A”, “only B”, “only C”, or “any combination of A, B, C”. A slash (/) or comma used in the present disclosure may mean “and/or”. For example, “A/B” may mean “A and/or B”. Accordingly, “A/B” may mean “only A”, “only B”, or “both A and B”. For example, “A, B, C” may mean “A, B, or C”. In the present disclosure, “at least one of A and B” may mean “only A”, “only B”, or “both A and B”. In addition, in the present disclosure, the expression “at least one of A or B” or “at least one of A and/or B” may be interpreted as “at least one of A and B”. In addition, in the present disclosure, “at least one of A, B, and C” may mean “only A”, “only B”, “only C”, or “any combination of A, B, and C”. In addition, “at least one of A, B, or C” or “at least one of A, B, and/or C” may mean “at least one of A, B, and C”. In addition, a parenthesis used in the present disclosure may mean “for example”. Specifically, when indicated as “control information (PDCCH)”, it may mean that “PDCCH” is proposed as an example of the “control information”. In other words, the “control information” of the present disclosure is not limited to “PDCCH”, and “PDCCH” may be proposed as an example of the “control information”. In addition, when indicated as “control information (i.e., PDCCH)”, it may also mean that “PDCCH” is proposed as an example of the “control information”. A technical feature described individually in one figure in the present disclosure may be individually implemented, or may be simultaneously implemented. The technology described below may be used in various wireless communication systems such as code division multiple access (CDMA), frequency division multiple access (FDMA), time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA), single carrier frequency division multiple access (SC-FDMA), and so on. The CDMA may be implemented with a radio technology, such as universal terrestrial radio access (UTRA) or CDMA-2000. The TDMA may be implemented with a radio technology, such as global system for mobile communications (GSM)/general packet radio service (GPRS)/enhanced data rate for GSM evolution (EDGE). The OFDMA may be implemented with a radio technology, such as institute of electrical and electronics engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, evolved UTRA (E-UTRA), and so on. IEEE 802.16m is an evolved version of IEEE 802.16e and provides backward compatibility with a system based on the IEEE 802.16e. The UTRA is part of a universal mobile telecommunication system (UMTS). 3rd generation partnership project (3GPP) long term evolution (LTE) is part of an evolved UMTS (E-UMTS) using the E-UTRA. The 3GPP LTE uses the OFDMA in a downlink and uses the SC-FDMA in an uplink. LTE-advanced (LTE-A) is an evolution of the LTE. 5G NR is a successive technology of LTE-A corresponding to a new clean-slate type mobile communication system having the characteristics of high performance, low latency, high availability, and so on. 5G NR may use resources of all available spectrum, including low frequency bands of less than 1 GHz, middle frequency bands ranging from 1 GHz to 10 GHz, high frequency (millimeter waves) of 24 GHz or more, and so on.
For clarity in the description, the following description will mostly focus on LTE-A or 5G NR. However, technical features according to an embodiment of the present disclosure will not be limited only to this. FIG.2shows a structure of an NR system, based on an embodiment of the present disclosure. The embodiment ofFIG.2may be combined with various embodiments of the present disclosure. Referring toFIG.2, a next generation-radio access network (NG-RAN) may include a BS20providing a UE10with a user plane and control plane protocol termination. For example, the BS20may include a next generation-Node B (gNB) and/or an evolved-NodeB (eNB). For example, the UE10may be fixed or mobile and may be referred to as other terms, such as a mobile station (MS), a user terminal (UT), a subscriber station (SS), a mobile terminal (MT), wireless device, and so on. For example, the BS may be referred to as a fixed station which communicates with the UE10and may be referred to as other terms, such as a base transceiver system (BTS), an access point (AP), and so on. The embodiment ofFIG.2exemplifies a case where only the gNB is included. The BSs20may be connected to one another via Xn interface. The BSs20may be connected to a 5th generation (5G) core network (5GC) via the NG interface. More specifically, the BSs20may be connected to an access and mobility management function (AMF)30via NG-C interface, and may be connected to a user plane function (UPF)30via NG-U interface. FIG.3shows a functional division between an NG-RAN and a 5GC, based on an embodiment of the present disclosure. The embodiment ofFIG.3may be combined with various embodiments of the present disclosure. Referring toFIG.3, the gNB may provide functions, such as Inter Cell Radio Resource Management (RRM), Radio Bearer (RB) control, Connection Mobility Control, Radio Admission Control, Measurement Configuration & Provision, Dynamic Resource Allocation, and so on. An AMF may provide functions, such as Non Access Stratum (NAS) security, idle state mobility processing, and so on. A UPF may provide functions, such as Mobility Anchoring, Protocol Data Unit (PDU) processing, and so on. A Session Management Function (SMF) may provide functions, such as user equipment (UE) Internet Protocol (IP) address allocation, PDU session control, and so on. Layers of a radio interface protocol between the UE and the network can be classified into a first layer (L1), a second layer (L2), and a third layer (L3) based on the lower three layers of the open system interconnection (OSI) model that is well-known in the communication system. Among them, a physical (PHY) layer belonging to the first layer provides an information transfer service by using a physical channel, and a radio resource control (RRC) layer belonging to the third layer serves to control a radio resource between the UE and the network. For this, the RRC layer exchanges an RRC message between the UE and the BS. FIG.4shows a radio protocol architecture, based on an embodiment of the present disclosure. The embodiment ofFIG.4may be combined with various embodiments of the present disclosure. Specifically,FIG.4(a)shows a radio protocol architecture for a user plane, andFIG.4(b)shows a radio protocol architecture for a control plane. The user plane corresponds to a protocol stack for user data transmission, and the control plane corresponds to a protocol stack for control signal transmission.
Referring toFIG.4, a physical layer provides an upper layer with an information transfer service through a physical channel. The physical layer is connected to a medium access control (MAC) layer which is an upper layer of the physical layer through a transport channel. Data is transferred between the MAC layer and the physical layer through the transport channel. The transport channel is classified according to how and with what characteristics data is transmitted through a radio interface. Between different physical layers, i.e., a physical layer of a transmitter and a physical layer of a receiver, data are transferred through the physical channel. The physical channel is modulated using an orthogonal frequency division multiplexing (OFDM) scheme, and utilizes time and frequency as a radio resource. The MAC layer provides services to a radio link control (RLC) layer, which is a higher layer of the MAC layer, via a logical channel. The MAC layer provides a function of mapping multiple logical channels to multiple transport channels. The MAC layer also provides a function of logical channel multiplexing by mapping multiple logical channels to a single transport channel. The MAC layer provides data transfer services over logical channels. The RLC layer performs concatenation, segmentation, and reassembly of Radio Link Control Service Data Unit (RLC SDU). In order to ensure diverse quality of service (QoS) required by a radio bearer (RB), the RLC layer provides three types of operation modes, i.e., a transparent mode (TM), an unacknowledged mode (UM), and an acknowledged mode (AM). An AM RLC provides error correction through an automatic repeat request (ARQ). A radio resource control (RRC) layer is defined only in the control plane. The RRC layer serves to control the logical channel, the transport channel, and the physical channel in association with configuration, reconfiguration and release of RBs. The RB is a logical path provided by the first layer (i.e., the physical layer or the PHY layer) and the second layer (i.e., the MAC layer, the RLC layer, and the packet data convergence protocol (PDCP) layer) for data delivery between the UE and the network. Functions of a packet data convergence protocol (PDCP) layer in the user plane include user data delivery, header compression, and ciphering. Functions of a PDCP layer in the control plane include control-plane data delivery and ciphering/integrity protection. A service data adaptation protocol (SDAP) layer is defined only in a user plane. The SDAP layer performs mapping between a Quality of Service (QoS) flow and a data radio bearer (DRB) and QoS flow ID (QFI) marking in both DL and UL packets. The configuration of the RB implies a process for specifying a radio protocol layer and channel properties to provide a particular service and for determining respective detailed parameters and operations. The RB can be classified into two types, i.e., a signaling RB (SRB) and a data RB (DRB). The SRB is used as a path for transmitting an RRC message in the control plane. The DRB is used as a path for transmitting user data in the user plane. When an RRC connection is established between an RRC layer of the UE and an RRC layer of the E-UTRAN, the UE is in an RRC_CONNECTED state, and, otherwise, the UE may be in an RRC_IDLE state. In case of the NR, an RRC_INACTIVE state is additionally defined, and a UE being in the RRC_INACTIVE state may maintain its connection with a core network whereas its connection with the BS is released. 
Data is transmitted from the network to the UE through a downlink transport channel. Examples of the downlink transport channel include a broadcast channel (BCH) for transmitting system information and a downlink-shared channel (SCH) for transmitting user traffic or control messages. Traffic of downlink multicast or broadcast services or the control messages can be transmitted on the downlink-SCH or an additional downlink multicast channel (MCH). Data is transmitted from the UE to the network through an uplink transport channel. Examples of the uplink transport channel include a random access channel (RACH) for transmitting an initial control message and an uplink SCH for transmitting user traffic or control messages. Examples of logical channels belonging to a higher channel of the transport channel and mapped onto the transport channels include a broadcast control channel (BCCH), a paging control channel (PCCH), a common control channel (CCCH), a multicast control channel (MCCH), a multicast traffic channel (MTCH), etc. The physical channel includes several OFDM symbols in a time domain and several sub-carriers in a frequency domain. One sub-frame includes a plurality of OFDM symbols in the time domain. A resource block is a unit of resource allocation, and consists of a plurality of OFDM symbols and a plurality of sub-carriers. Further, each subframe may use specific sub-carriers of specific OFDM symbols (e.g., a first OFDM symbol) of a corresponding sub-frame for a physical downlink control channel (PDCCH), i.e., an L1/L2 control channel. A transmission time interval (TTI) is a unit time of subframe transmission. FIG.5shows a structure of an NR system, based on an embodiment of the present disclosure. The embodiment ofFIG.5may be combined with various embodiments of the present disclosure. Referring toFIG.5, in the NR, a radio frame may be used for performing uplink and downlink transmission. A radio frame has a length of 10 ms and may be defined to be configured of two half-frames (HFs). A half-frame may include five 1 ms subframes (SFs). A subframe (SF) may be divided into one or more slots, and the number of slots within a subframe may be determined based on subcarrier spacing (SCS). Each slot may include 12 or 14 OFDM(A) symbols according to a cyclic prefix (CP). In case of using a normal CP, each slot may include 14 symbols. In case of using an extended CP, each slot may include 12 symbols. Herein, a symbol may include an OFDM symbol (or CP-OFDM symbol) and a Single Carrier-FDMA (SC-FDMA) symbol (or Discrete Fourier Transform-spread-OFDM (DFT-s-OFDM) symbol). Table 1 shown below represents an example of a number of symbols per slot (N_symb^slot), a number of slots per frame (N_slot^frame,u), and a number of slots per subframe (N_slot^subframe,u) based on an SCS configuration (u), in a case where a normal CP is used.

TABLE 1
SCS (15*2^u)        N_symb^slot    N_slot^frame,u    N_slot^subframe,u
15 kHz (u = 0)      14             10                1
30 kHz (u = 1)      14             20                2
60 kHz (u = 2)      14             40                4
120 kHz (u = 3)     14             80                8
240 kHz (u = 4)     14             160               16

Table 2 shows an example of a number of symbols per slot, a number of slots per frame, and a number of slots per subframe based on the SCS, in a case where an extended CP is used.

TABLE 2
SCS (15*2^u)        N_symb^slot    N_slot^frame,u    N_slot^subframe,u
60 kHz (u = 2)      12             40                4

In an NR system, OFDM(A) numerologies (e.g., SCS, CP length, and so on) may be configured differently between multiple cells integrated to one UE.
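The slot counts in Tables 1 and 2 follow directly from the SCS configuration u. The short Python sketch below reproduces the table rows; it is an illustration of the numerology, not code from the patent, and the function name is an assumption.

```python
# Illustrative sketch (not from the patent): reproducing Tables 1 and 2.
# For SCS = 15 * 2**u kHz there are 2**u slots per 1 ms subframe and
# 10 * 2**u slots per 10 ms frame.

def frame_structure(u, extended_cp=False):
    return {
        "scs_khz": 15 * 2 ** u,
        "symbols_per_slot": 12 if extended_cp else 14,
        "slots_per_frame": 10 * 2 ** u,
        "slots_per_subframe": 2 ** u,
    }

for u in range(5):                           # Table 1 (normal CP)
    print(frame_structure(u))
print(frame_structure(2, extended_cp=True))  # Table 2 (extended CP)
```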
Accordingly, an (absolute time) duration (or section) of a time resource (e.g., subframe, slot or TTI) (collectively referred to as a time unit (TU) for simplicity) being configured of the same number of symbols may be differently configured in the integrated cells. In the NR, multiple numerologies or SCSs for supporting diverse 5G services may be supported. For example, in case an SCS is 15 kHz, a wide area of the conventional cellular bands may be supported, and, in case an SCS is 30 kHz/60 kHz, a dense urban area, lower latency, and a wider carrier bandwidth may be supported. In case the SCS is 60 kHz or higher, a bandwidth that is greater than 24.25 GHz may be used in order to overcome phase noise. An NR frequency band may be defined as two different types of frequency ranges. The two different types of frequency ranges may be FR1 and FR2. The values of the frequency ranges may be changed (or varied), and, for example, the two different types of frequency ranges may be as shown below in Table 3. Among the frequency ranges that are used in an NR system, FR1 may mean a “sub 6 GHz range”, and FR2 may mean an “above 6 GHz range” and may also be referred to as a millimeter wave (mmW).

TABLE 3
Frequency Range designation    Corresponding frequency range    Subcarrier Spacing (SCS)
FR1                            450 MHz-6000 MHz                 15, 30, 60 kHz
FR2                            24250 MHz-52600 MHz              60, 120, 240 kHz

As described above, the values of the frequency ranges in the NR system may be changed (or varied). For example, as shown below in Table 4, FR1 may include a band within a range of 410 MHz to 7125 MHz. More specifically, FR1 may include a frequency band of 6 GHz (or 5850, 5900, 5925 MHz, and so on) and higher. For example, a frequency band of 6 GHz (or 5850, 5900, 5925 MHz, and so on) and higher being included in FR1 may include an unlicensed band. The unlicensed band may be used for diverse purposes, e.g., the unlicensed band for vehicle-specific communication (e.g., automated driving).

TABLE 4
Frequency Range designation    Corresponding frequency range    Subcarrier Spacing (SCS)
FR1                            410 MHz-7125 MHz                 15, 30, 60 kHz
FR2                            24250 MHz-52600 MHz              60, 120, 240 kHz

FIG.6shows a structure of a slot of an NR frame, based on an embodiment of the present disclosure. The embodiment ofFIG.6may be combined with various embodiments of the present disclosure. Referring toFIG.6, a slot includes a plurality of symbols in a time domain. For example, in case of a normal CP, one slot may include 14 symbols. However, in case of an extended CP, one slot may include 12 symbols. Alternatively, in case of a normal CP, one slot may include 7 symbols. However, in case of an extended CP, one slot may include 6 symbols. A carrier includes a plurality of subcarriers in a frequency domain. A Resource Block (RB) may be defined as a plurality of consecutive subcarriers (e.g., 12 subcarriers) in the frequency domain. A Bandwidth Part (BWP) may be defined as a plurality of consecutive (Physical) Resource Blocks ((P)RBs) in the frequency domain, and the BWP may correspond to one numerology (e.g., SCS, CP length, and so on). A carrier may include a maximum of N BWPs (e.g., 5 BWPs). Data communication may be performed via an activated BWP. Each element may be referred to as a Resource Element (RE) within a resource grid and one complex symbol may be mapped to each element. Meanwhile, a radio interface between a UE and another UE or a radio interface between the UE and a network may consist of an L1 layer, an L2 layer, and an L3 layer.
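A minimal sketch of how a carrier frequency maps onto FR1 or FR2 under the updated ranges of Table 4 follows; the function name and the out-of-range label are illustrative assumptions, not from the patent.

```python
# Illustrative sketch (not from the patent): classifying a carrier
# frequency into FR1 or FR2 using the updated ranges of Table 4.

def frequency_range(carrier_mhz):
    if 410 <= carrier_mhz <= 7125:
        return "FR1"          # SCS 15, 30, 60 kHz
    if 24250 <= carrier_mhz <= 52600:
        return "FR2"          # SCS 60, 120, 240 kHz
    return "outside FR1/FR2"

print(frequency_range(5900))   # FR1 (e.g., an unlicensed V2X band)
print(frequency_range(28000))  # FR2 (mmW)
```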
In various embodiments of the present disclosure, the L1 layer may imply a physical layer. In addition, for example, the L2 layer may imply at least one of a MAC layer, an RLC layer, a PDCP layer, and an SDAP layer. In addition, for example, the L3 layer may imply an RRC layer. Hereinafter, a bandwidth part (BWP) and a carrier will be described. The BWP may be a set of consecutive physical resource blocks (PRBs) in a given numerology. The PRB may be selected from consecutive sub-sets of common resource blocks (CRBs) for the given numerology on a given carrier. When using bandwidth adaptation (BA), a reception bandwidth and transmission bandwidth of a UE are not necessarily as large as a bandwidth of a cell, and the reception bandwidth and transmission bandwidth of the UE may be adjusted. For example, a network/BS may inform the UE of bandwidth adjustment. For example, the UE may receive information/configuration for bandwidth adjustment from the network/BS. In this case, the UE may perform bandwidth adjustment based on the received information/configuration. For example, the bandwidth adjustment may include an increase/decrease of the bandwidth, a position change of the bandwidth, or a change in subcarrier spacing of the bandwidth. For example, the bandwidth may be decreased during a period in which activity is low to save power. For example, the position of the bandwidth may move in a frequency domain. For example, the position of the bandwidth may move in the frequency domain to increase scheduling flexibility. For example, the subcarrier spacing of the bandwidth may be changed. For example, the subcarrier spacing of the bandwidth may be changed to allow a different service. A subset of a total cell bandwidth of a cell may be called a bandwidth part (BWP). The BA may be performed when the BS/network configures the BWP to the UE and the BS/network informs the UE of the BWP currently in an active state among the configured BWPs. For example, the BWP may be at least any one of an active BWP, an initial BWP, and/or a default BWP. For example, the UE may not monitor downlink radio link quality in a DL BWP other than an active DL BWP on a primary cell (PCell). For example, the UE may not receive PDCCH, physical downlink shared channel (PDSCH), or channel state information-reference signal (CSI-RS) (excluding RRM) outside the active DL BWP. For example, the UE may not trigger a channel state information (CSI) report for the inactive DL BWP. For example, the UE may not transmit physical uplink control channel (PUCCH) or physical uplink shared channel (PUSCH) outside an active UL BWP. For example, in a downlink case, the initial BWP may be given as a consecutive RB set for a remaining minimum system information (RMSI) control resource set (CORESET) (configured by physical broadcast channel (PBCH)). For example, in an uplink case, the initial BWP may be given by system information block (SIB) for a random access procedure. For example, the default BWP may be configured by a higher layer. For example, an initial value of the default BWP may be an initial DL BWP. For energy saving, if the UE fails to detect downlink control information (DCI) during a specific period, the UE may switch the active BWP of the UE to the default BWP. Meanwhile, the BWP may be defined for SL. The same SL BWP may be used in transmission and reception. For example, a transmitting UE may transmit an SL channel or an SL signal on a specific BWP, and a receiving UE may receive the SL channel or the SL signal on the specific BWP.
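The default-BWP fallback described above (switching to the default BWP when no DCI is detected for a specific period) can be sketched as a simple per-slot timer. This is an illustrative Python sketch, not the patent's or the specification's implementation; the class name and field names are assumptions.

```python
# Illustrative sketch (not from the patent): energy-saving fallback to the
# default BWP after a configured number of slots without detected DCI.

class BwpManager:
    def __init__(self, active_bwp, default_bwp, inactivity_limit_slots):
        self.active_bwp = active_bwp
        self.default_bwp = default_bwp
        self.limit = inactivity_limit_slots
        self.slots_without_dci = 0

    def on_slot(self, dci_detected):
        """Call once per slot; switch to the default BWP on timeout."""
        if dci_detected:
            self.slots_without_dci = 0
        else:
            self.slots_without_dci += 1
            if self.slots_without_dci >= self.limit:
                self.active_bwp = self.default_bwp  # energy saving
        return self.active_bwp

mgr = BwpManager(active_bwp=2, default_bwp=0, inactivity_limit_slots=3)
for dci in [True, False, False, False]:
    bwp = mgr.on_slot(dci)
print(bwp)  # 0, i.e., switched back to the default BWP
```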
In a licensed carrier, the SL BWP may be defined separately from a Uu BWP, and the SL BWP may have configuration signaling separate from the Uu BWP. For example, the UE may receive a configuration for the SL BWP from the BS/network. The SL BWP may be (pre-)configured in a carrier with respect to an out-of-coverage NR V2X UE and an RRC_IDLE UE. For the UE in the RRC_CONNECTED mode, at least one SL BWP may be activated in the carrier. FIG.7shows an example of a BWP, based on an embodiment of the present disclosure. The embodiment ofFIG.7may be combined with various embodiments of the present disclosure. It is assumed in the embodiment ofFIG.7that the number of BWPs is 3. Referring toFIG.7, a common resource block (CRB) may be a carrier resource block numbered from one end of a carrier band to the other end thereof. In addition, the PRB may be a resource block numbered within each BWP. A point A may indicate a common reference point for a resource block grid. The BWP may be configured by a point A, an offset N_BWP^start from the point A, and a bandwidth N_BWP^size. For example, the point A may be an external reference point of a PRB of a carrier in which a subcarrier 0 of all numerologies (e.g., all numerologies supported by a network on that carrier) is aligned. For example, the offset may be a PRB interval between a lowest subcarrier and the point A in a given numerology. For example, the bandwidth may be the number of PRBs in the given numerology. Hereinafter, V2X or SL communication will be described. FIG.8shows a radio protocol architecture for SL communication, based on an embodiment of the present disclosure. The embodiment ofFIG.8may be combined with various embodiments of the present disclosure. More specifically,FIG.8(a)shows a user plane protocol stack, andFIG.8(b)shows a control plane protocol stack. Hereinafter, a sidelink synchronization signal (SLSS) and synchronization information will be described. The SLSS may include a primary sidelink synchronization signal (PSSS) and a secondary sidelink synchronization signal (SSSS), as an SL-specific sequence. The PSSS may be referred to as a sidelink primary synchronization signal (S-PSS), and the SSSS may be referred to as a sidelink secondary synchronization signal (S-SSS). For example, length-127 M-sequences may be used for the S-PSS, and length-127 Gold sequences may be used for the S-SSS. For example, a UE may use the S-PSS for initial signal detection and for synchronization acquisition. For example, the UE may use the S-PSS and the S-SSS for acquisition of detailed synchronization and for detection of a synchronization signal ID. A physical sidelink broadcast channel (PSBCH) may be a (broadcast) channel for transmitting default (system) information which must be first known by the UE before SL signal transmission/reception. For example, the default information may be information related to SLSS, a duplex mode (DM), a time division duplex (TDD) uplink/downlink (UL/DL) configuration, information related to a resource pool, a type of an application related to the SLSS, a subframe offset, broadcast information, or the like. For example, for evaluation of PSBCH performance, in NR V2X, a payload size of the PSBCH may be 56 bits including 24-bit CRC. The S-PSS, the S-SSS, and the PSBCH may be included in a block format (e.g., SL synchronization signal (SS)/PSBCH block, hereinafter, sidelink-synchronization signal block (S-SSB)) supporting periodical transmission.
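The point A / offset / bandwidth construction of a BWP described above for FIG. 7 reduces to simple index arithmetic on the CRB grid. A minimal sketch, assuming point A corresponds to CRB 0 of the given numerology; the names n_start_bwp and n_size_bwp stand in for N_BWP^start and N_BWP^size and are otherwise assumptions.

```python
# Illustrative sketch (not from the patent): locating a BWP on the common
# resource block (CRB) grid from point A, the offset N_BWP^start, and the
# bandwidth N_BWP^size. PRB p of the BWP maps to CRB N_BWP^start + p.

def bwp_crb_range(n_start_bwp, n_size_bwp):
    """Return the CRB indices occupied by the BWP (point A is CRB 0)."""
    return range(n_start_bwp, n_start_bwp + n_size_bwp)

def prb_to_crb(prb, n_start_bwp):
    return n_start_bwp + prb

crbs = bwp_crb_range(n_start_bwp=24, n_size_bwp=51)
print(crbs.start, crbs.stop - 1)   # 24 74
print(prb_to_crb(0, 24))           # PRB 0 of the BWP is CRB 24
```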
The S-SSB may have the same numerology (i.e., SCS and CP length) as a physical sidelink control channel (PSCCH)/physical sidelink shared channel (PSSCH) in a carrier, and a transmission bandwidth may exist within a (pre-)configured sidelink (SL) BWP. For example, the S-SSB may have a bandwidth of 11 resource blocks (RBs). For example, the PSBCH may exist across 11 RBs. In addition, a frequency position of the S-SSB may be (pre-)configured. Accordingly, the UE does not have to perform hypothesis detection in frequency to discover the S-SSB in the carrier. FIG.9shows a UE performing V2X or SL communication, based on an embodiment of the present disclosure. The embodiment ofFIG.9may be combined with various embodiments of the present disclosure. Referring toFIG.9, in V2X or SL communication, the term ‘UE’ may generally imply a UE of a user. However, if a network equipment such as a BS transmits/receives a signal according to a communication scheme between UEs, the BS may also be regarded as a sort of the UE. For example, a UE1may be a first apparatus100, and a UE2may be a second apparatus200. For example, the UE1may select a resource unit corresponding to a specific resource in a resource pool which implies a set of series of resources. In addition, the UE1may transmit an SL signal by using the resource unit. For example, a resource pool in which the UE1is capable of transmitting a signal may be configured to the UE2which is a receiving UE, and the signal of the UE1may be detected in the resource pool. Herein, if the UE1is within a connectivity range of the BS, the BS may inform the UE1of the resource pool. Otherwise, if the UE1is out of the connectivity range of the BS, another UE may inform the UE1of the resource pool, or the UE1may use a pre-configured resource pool. In general, the resource pool may be configured in units of a plurality of resources, and each UE may select one or a plurality of resource units to use in its SL signal transmission. Hereinafter, resource allocation in SL will be described. FIG.10shows a procedure of performing V2X or SL communication by a UE based on a transmission mode, based on an embodiment of the present disclosure. The embodiment ofFIG.10may be combined with various embodiments of the present disclosure. In various embodiments of the present disclosure, the transmission mode may be called a mode or a resource allocation mode. Hereinafter, for convenience of explanation, in LTE, the transmission mode may be called an LTE transmission mode. In NR, the transmission mode may be called an NR resource allocation mode. For example,FIG.10(a)shows a UE operation related to an LTE transmission mode1or an LTE transmission mode3. Alternatively, for example,FIG.10(a)shows a UE operation related to an NR resource allocation mode1. For example, the LTE transmission mode1may be applied to general SL communication, and the LTE transmission mode3may be applied to V2X communication. For example,FIG.10(b)shows a UE operation related to an LTE transmission mode2or an LTE transmission mode4. Alternatively, for example,FIG.10(b)shows a UE operation related to an NR resource allocation mode2. Referring toFIG.10(a), in the LTE transmission mode1, the LTE transmission mode3, or the NR resource allocation mode1, a BS may schedule an SL resource to be used by the UE for SL transmission.
For example, the BS may perform resource scheduling to a UE1through a PDCCH (more specifically, downlink control information (DCI)), and the UE1may perform V2X or SL communication with respect to a UE2according to the resource scheduling. For example, the UE1may transmit a sidelink control information (SCI) to the UE2through a physical sidelink control channel (PSCCH), and thereafter transmit data based on the SCI to the UE2through a physical sidelink shared channel (PSSCH). Referring toFIG.10(b), in the LTE transmission mode2, the LTE transmission mode4, or the NR resource allocation mode2, the UE may determine an SL transmission resource within an SL resource configured by a BS/network or a pre-configured SL resource. For example, the configured SL resource or the pre-configured SL resource may be a resource pool. For example, the UE may autonomously select or schedule a resource for SL transmission. For example, the UE may perform SL communication by autonomously selecting a resource within a configured resource pool. For example, the UE may autonomously select a resource within a selection window by performing a sensing and resource (re)selection procedure. For example, the sensing may be performed in units of subchannels. In addition, the UE1which has autonomously selected the resource within the resource pool may transmit the SCI to the UE2through a PSCCH, and thereafter may transmit data based on the SCI to the UE2through a PSSCH. FIG.11shows three cast types, based on an embodiment of the present disclosure. The embodiment ofFIG.11may be combined with various embodiments of the present disclosure. Specifically,FIG.11(a)shows broadcast-type SL communication,FIG.11(b)shows unicast-type SL communication, andFIG.11(c)shows groupcast-type SL communication. In case of the unicast-type SL communication, a UE may perform one-to-one communication with respect to another UE. In case of the groupcast-type SL transmission, the UE may perform SL communication with respect to one or more UEs in a group to which the UE belongs. In various embodiments of the present disclosure, SL groupcast communication may be replaced with SL multicast communication, SL one-to-many communication, or the like. Hereinafter, sidelink (SL) congestion control will be described. If a UE autonomously determines an SL transmission resource, the UE also autonomously determines a size and frequency of use for a resource used by the UE. Of course, due to a constraint from a network or the like, use of a resource size or a frequency of use greater than or equal to a specific level may be restricted. However, if all UEs use a relatively great amount of resources in a situation where many UEs are concentrated in a specific region at a specific time, overall performance may significantly deteriorate due to mutual interference. Accordingly, the UE may need to observe a channel situation. If it is determined that an excessively great amount of resources are consumed, it is preferable that the UE autonomously decrease the use of resources. In the present disclosure, this may be defined as congestion control (CR). For example, the UE may determine whether energy measured in a unit time/frequency resource is greater than or equal to a specific level, and may adjust an amount and frequency of use for its transmission resource based on a ratio of the unit time/frequency resource in which the energy greater than or equal to the specific level is observed.
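The NR resource allocation mode 2 procedure described above, sensing followed by autonomous selection within a selection window, can be sketched as filtering out candidates whose sensed energy meets an exclusion level and picking among the rest. This is an illustrative sketch only; the threshold semantics and the uniform random choice are assumptions rather than the patent's procedure.

```python
# Illustrative sketch (not from the patent): mode 2 sensing and resource
# (re)selection, performed in units of subchannels within a selection window.

import random

def select_sl_resource(window, sensed_energy_dbm, exclude_above_dbm):
    """window: list of (slot, subchannel) candidates in the selection window.
    sensed_energy_dbm: dict mapping a candidate to its sensed energy (dBm).
    Candidates whose sensed energy reaches the exclusion level are dropped;
    one of the remaining candidates is chosen at random."""
    candidates = [r for r in window
                  if sensed_energy_dbm.get(r, float("-inf")) < exclude_above_dbm]
    return random.choice(candidates) if candidates else None

window = [(slot, ch) for slot in range(4) for ch in range(2)]
sensed = {(0, 0): -80.0, (1, 1): -75.0, (2, 0): -110.0}
print(select_sl_resource(window, sensed, exclude_above_dbm=-90.0))
```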
In the present disclosure, the ratio of the time/frequency resource in which the energy greater than or equal to the specific level is observed may be defined as a channel busy ratio (CBR). The UE may measure the CBR for a channel/frequency. Additionally, the UE may transmit the measured CBR to the network/BS. FIG.12shows a resource unit for CBR measurement, based on an embodiment of the present disclosure. The embodiment ofFIG.12may be combined with various embodiments of the present disclosure. Referring toFIG.12, CBR may denote the number of sub-channels in which a measurement result value of a received signal strength indicator (RSSI) has a value greater than or equal to a pre-configured threshold as a result of measuring the RSSI by a UE on a sub-channel basis for a specific period (e.g., 100 ms). Alternatively, the CBR may denote a ratio of sub-channels having a value greater than or equal to a pre-configured threshold among sub-channels for a specific duration. For example, in the embodiment ofFIG.12, if it is assumed that a hatched sub-channel is a sub-channel having a value greater than or equal to a pre-configured threshold, the CBR may denote a ratio of the hatched sub-channels for a period of 100 ms. Additionally, the CBR may be reported to the BS. Further, congestion control considering a priority of traffic (e.g., packet) may be necessary. To this end, for example, the UE may measure a channel occupancy ratio (CR). Specifically, the UE may measure the CBR, and the UE may determine a maximum value CR_limit,k of a channel occupancy ratio k (CR_k) that can be occupied by traffic corresponding to each priority (e.g., k) based on the CBR. For example, the UE may derive the maximum value CR_limit,k of the channel occupancy ratio with respect to a priority of each traffic, based on a predetermined table of CBR measurement values. For example, in case of traffic having a relatively high priority, the UE may derive a maximum value of a relatively great channel occupancy ratio. Thereafter, the UE may perform congestion control by restricting a total sum of channel occupancy ratios of traffic, of which a priority k is lower than i, to a value less than or equal to a specific value. Based on this method, the channel occupancy ratio may be more strictly restricted for traffic having a relatively low priority. In addition thereto, the UE may perform SL congestion control by using a method of adjusting a level of transmit power, dropping a packet, determining whether retransmission is to be performed, adjusting a transmission RB size (MCS coordination), or the like. Hereinafter, SL measurement and reporting will be described. For the purpose of QoS prediction, initial transmission parameter setting, link adaptation, link management, admission control, or the like, SL measurement and reporting (e.g., RSRP, RSRQ) between UEs may be considered in SL. For example, a receiving UE may receive a reference signal from a transmitting UE, and the receiving UE may measure a channel state for the transmitting UE based on the reference signal. In addition, the receiving UE may report channel state information (CSI) to the transmitting UE. SL-related measurement and reporting may include measurement and reporting of CBR and reporting of location information.
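The CBR definition above reduces to a simple ratio. A minimal Python sketch, assuming one RSSI sample per sub-channel over the 100 ms window; the function name and units are assumptions.

```python
# Illustrative sketch (not from the patent): CBR as the fraction of
# sub-channels whose measured RSSI meets or exceeds a pre-configured
# threshold over a 100 ms window.

def channel_busy_ratio(rssi_per_subchannel_dbm, threshold_dbm):
    """rssi_per_subchannel_dbm: RSSI samples, one per sub-channel,
    measured over the last 100 ms."""
    busy = sum(1 for r in rssi_per_subchannel_dbm if r >= threshold_dbm)
    return busy / len(rssi_per_subchannel_dbm)

rssi = [-95.0, -88.0, -101.0, -85.0, -120.0]
print(channel_busy_ratio(rssi, threshold_dbm=-90.0))  # 0.4 (2 of 5 busy)
```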
Examples of channel state information (CSI) for V2X may include a channel quality indicator (CQI), a precoding matrix index (PMI), a rank indicator (RI), reference signal received power (RSRP), reference signal received quality (RSRQ), pathgain/pathloss, a sounding reference signal (SRS) resource indicator (SRI), a CSI-RS resource indicator (CRI), an interference condition, a vehicle motion, or the like. In case of unicast communication, CQI, RI, and PMI, or some of them, may be supported in a non-subband-based aperiodic CSI report under the assumption of four or fewer antenna ports. A CSI procedure may not be dependent on a standalone reference signal (RS). A CSI report may be activated or deactivated based on a configuration. For example, the transmitting UE may transmit CSI-RS to the receiving UE, and the receiving UE may measure CQI or RI based on the CSI-RS. For example, the CSI-RS may be referred to as SL CSI-RS. For example, the CSI-RS may be confined within PSSCH transmission. For example, the transmitting UE may perform transmission to the receiving UE by including the CSI-RS on the PSSCH. Based on various embodiments of the present disclosure, a method for transmitting and/or receiving sidelink channel state information or sidelink measurement information and an apparatus supporting the same will be described. Based on an embodiment of the present disclosure, a UE may transmit at least one of the following information through a SCI:
- resource allocation information, e.g., resource allocation information related to a PSSCH and/or a PSCCH, e.g., location and/or number of time resources and/or frequency resources, and/or
- resource reservation information, e.g., resource reservation information related to a PSSCH and/or a PSCCH, e.g., a period of resource reservation, and/or
- information for requesting a report of SL CSI, e.g., information for requesting a report of SL RSRP information, information for requesting a report of SL RSRQ information, and/or information for requesting a report of SL RSSI information, and/or
- SL CSI transmission information, e.g., information indicating transmission of SL CSI on a PSSCH, e.g., information indicating transmission of SL RSRP information, information indicating transmission of SL RSRQ information, and/or information indicating transmission of SL RSSI information, and/or
- modulation coding scheme (MCS) information, and/or
- information related to transmit power, and/or
- information related to L1 destination ID and/or information related to L1 source ID, and/or
- SL HARQ process ID information, and/or
- new data indicator (NDI) information, and/or
- redundancy version (RV) information, and/or
- QoS information (e.g., priority), e.g., related to traffic and/or packet(s) to be transmitted, and/or
- SL CSI-RS transmission indicator or information on the number of antenna ports related to (transmitted) SL CSI-RS, and/or
- information related to a location of a transmitting UE, and/or
- information related to a location of a target receiving UE (for which SL HARQ feedback is requested), and/or information related to a communication range of a target receiving UE (for which SL HARQ feedback is requested), and/or
- information related to pattern(s) of reference signal(s) (e.g., DM-RS) used for channel estimation and/or decoding of data on a PSSCH, e.g., information related to time pattern(s) and/or frequency pattern(s) of reference signal(s) used for channel estimation and/or decoding of data on a PSSCH.
(pre-)configuration from base station(s) or network(s). For example, “configuration” or “definition” may mean resource pool specific (pre-)configuration from base station(s) or network(s). For example, base station(s) or network(s) may transmit information related to “configuration” or “definition” to UE(s). For example, base station(s) or network(s) may transmit information related to “configuration” or “definition” to UE(s) through pre-defined signaling. For example, the pre-defined signaling may include at least one of RRC signaling, MAC signaling, PHY signaling, and/or SIB. In the present disclosure, SL CSI may include at least one of a channel quality indicator (CQI), a precoding matrix indicator (PMI), and/or a rank indicator (RI). For example, SL measurement information may include at least one of SL reference signal received power (RSRP), SL reference signal received quality (RSRQ), and/or SL received signal strength indicator (RSSI). In the present disclosure, “PSCCH” may be replaced with “SCI”, or vice versa. For example, since a UE may transmit a SCI through a PSCCH, “PSCCH” may be replaced with “SCI”, or vice versa. In the present disclosure, a SCI may be at least one of a first SCI and a second SCI. For example, in consideration of a (relatively) high payload size of a SCI, a UE may divide fields constituting the SCI into two groups and transmit them. For example, a UE may divide fields constituting the SCI into two groups and transmit them through different channels. For example, the UE may transmit a first SCI through a PSCCH. For example, the UE may piggyback a second SCI on a PSSCH and transmit the second SCI together with data. For example, a UE may transmit a second SCI through an (independent) PSCCH. For example, piggyback may mean that control information (e.g., SCI) is transmitted through a data channel. FIG.13shows a procedure for a UE to request a report of SL CSI and/or a report of SL measurement information through a SCI, based on an embodiment of the present disclosure. The embodiment ofFIG.13may be combined with various embodiments of the present disclosure. Referring toFIG.13, in step S1310, a first UE may transmit a SCI to a second UE. For example, based on the following rule(s), the first UE may request the second UE to report SL CSI and/or SL measurement information by transmitting the SCI including a pre-defined field. For example, a field related to requesting a report of SL CSI (hereinafter, SL CSI report request field) and a field related to requesting a report of SL measurement information (hereinafter, SL measurement information report request field) may be defined independently or separately. For example, the SL CSI report request field and the SL measurement information report request field may be defined independently or separately in the SCI. For example, one field related to requesting a report of SL CSI and requesting a report of SL measurement information may be defined. For example, one field related to requesting a report of SL CSI and requesting a report of SL measurement information may be defined in the SCI. In this case, for example, the first UE may simultaneously request the second UE to report two pieces of information (i.e., SL CSI and SL measurement information) by using one field included in the SCI.
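Stepping back to the two-stage SCI structure introduced above (a first SCI on the PSCCH scheduling a second SCI piggybacked on the PSSCH), the following hedged Python sketch pictures one possible grouping of the information listed earlier; the field names, types, and split are hypothetical assumptions, not a standardized format.

```python
# Illustrative sketch of the two-stage SCI split described in the text.
# Only the grouping idea follows the disclosure; every field name and
# width here is a hypothetical placeholder.
from dataclasses import dataclass

@dataclass
class FirstSCI:                     # carried on the PSCCH
    time_freq_resources: int        # resource allocation for the PSSCH
    reservation_period: int         # resource reservation information
    mcs: int                        # modulation coding scheme
    second_sci_format: int          # schedules/identifies the second SCI

@dataclass
class SecondSCI:                    # piggybacked on the PSSCH with the data
    harq_process_id: int
    ndi: int                        # new data indicator
    rv: int                         # redundancy version
    source_id: int                  # L1 source ID
    destination_id: int             # L1 destination ID
    csi_request: int                # 1 = SL CSI-RS transmitted / report requested

# Example instantiation (arbitrary values, for illustration only).
first = FirstSCI(time_freq_resources=0b1010, reservation_period=100,
                 mcs=11, second_sci_format=0)
second = SecondSCI(harq_process_id=3, ndi=1, rv=0,
                   source_id=0xAB, destination_id=0xCDE, csi_request=1)
print(first, second)
```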
For example, the form in which a report of SL CSI and a report of SL measurement information are simultaneously requested based on one field included in the SCI may be useful if the first UE is configured to always request the report of SL CSI and the report of SL measurement information together. For example, one field related to requesting a report of SL CSI and requesting a report of SL measurement information may be defined. For example, one field related to requesting a report of SL CSI and requesting a report of SL measurement information may be defined in the SCI. In this case, the one field may indicate/represent a plurality of states. For example, among the plurality of states, some states may indicate/represent the SL CSI report request, and other states may indicate/represent the SL measurement information report request. In this case, for example, the first UE may request the second UE to report at least one of two pieces of information (i.e., SL CSI and SL measurement information) by using one field included in the SCI. For example, the first UE may simultaneously inform the second UE whether or not to transmit SL CSI-RS(s) and whether or not to request a report of SL CSI based on one field defined in the SCI. Herein, for example, if the first UE informs the second UE of transmission of SL CSI-RS(s) based on the field included in the SCI, the second UE may interpret or determine (implicitly) that a report of SL CSI is requested by the first UE based on the field. For example, if the first UE transmits the SCI including the field indicating/representing transmission of SL CSI-RS(s) to the second UE, in step S1320, the second UE may (implicitly) determine that a report of SL CSI is requested by the first UE based on the field even if the first UE does not (separately) request the second UE to report SL CSI. For example, the field indicating/representing transmission of SL CSI-RS(s) may be a field included in a second SCI transmitted through a PSSCH. For example, the second SCI may be scheduled by a first SCI transmitted through a PSCCH. For example, the second SCI may be defined as shown in Table 5. For example, in the embodiment of Table 5, SCI format 0-2 may be the second SCI, and SCI format 0-1 may be the first SCI. The embodiment of Table 5 is only an example of the second SCI, and the second SCI may be defined in various forms.
TABLE 5
SCI format 0-2 is used for the decoding of PSSCH.
The following information is transmitted by means of the SCI format 0-2:
- HARQ Process ID
- New data indicator
- Redundancy version
- Source ID
- Destination ID
- CSI request
If the 2nd-stage SCI format field in the corresponding SCI format 0-1 indicates type 1 groupcast, the following fields are present:
- Zone ID
- Communication range requirement
Referring to Table 5, the second SCI may include a ‘CSI request’ field. For example, the ‘CSI request’ field may be a field indicating/representing whether or not SL CSI-RS(s) is transmitted. For example, if the first UE sets the ‘CSI request’ field to 1 and transmits the second SCI to the second UE, the second UE may determine that the first UE transmits SL CSI-RS(s), and furthermore, the second UE may determine that the first UE requests a report of SL CSI. That is, although the first UE only indicates/represents transmission of SL CSI-RS(s) to the second UE based on the field, the second UE may trigger a report of SL CSI for the first UE based on the field.
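As a minimal sketch of this implicit triggering rule, assuming the single-bit ‘CSI request’ semantics just described (the function name and return structure are invented for illustration):

```python
# Hedged sketch of the implicit trigger around the 'CSI request' field of
# Table 5: a set field means the transmitting UE sends SL CSI-RS(s), and
# the receiving UE additionally infers that an SL CSI report is requested.

def on_second_sci(csi_request: int) -> dict:
    """Return what the receiving (second) UE concludes from the field."""
    if csi_request == 1:
        # CSI-RS present on the PSSCH; the CSI report is implicitly triggered.
        return {"sl_csi_rs_transmitted": True, "sl_csi_report_triggered": True}
    # Field set to 0: no CSI-RS, and no SL CSI report is requested.
    return {"sl_csi_rs_transmitted": False, "sl_csi_report_triggered": False}

assert on_second_sci(1)["sl_csi_report_triggered"]
assert not on_second_sci(0)["sl_csi_rs_transmitted"]
```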
Accordingly, the second UE may obtain SL CSI based on SL CSI-RS(s) transmitted by the first UE, and the second UE may transmit the SL CSI to the first UE. For example, only if the ‘CSI request’ field included in the second SCI is set to 1, the first UE may transmit SL CSI-RS(s) to the second UE. For example, only if the ‘CSI request’ field included in the first SCI is set to 1, the first UE may transmit SL CSI-RS(s) to the second UE. For example, if the first UE sets the ‘CSI request’ field to 0 and transmits a SCI to the second UE, the second UE may determine that the first UE does not transmit SL CSI-RS(s), and furthermore, the second UE may determine that the first UE does not request a report of SL CSI. For example, the first UE may simultaneously inform the second UE whether or not to transmit SL CSI-RS(s) and whether or not to request a report of SL measurement information based on one field defined in the SCI. Herein, for example, if the first UE informs the second UE of transmission of SL CSI-RS(s) based on the field included in the SCI, the second UE may interpret or determine (implicitly) that a report of SL measurement information is requested by the first UE based on the field. For example, if the first UE transmits the SCI including the field indicating/representing transmission of SL CSI-RS(s) to the second UE, in step S1320, the second UE may (implicitly) determine that a report of SL measurement information is requested by the first UE based on the field even if the first UE does not (separately) request the second UE to report SL measurement information. For example, the first UE may simultaneously inform the second UE whether or not to transmit SL CSI-RS(s) and whether or not to request a report of SL CSI and SL measurement information based on one field defined in the SCI. Herein, for example, if the first UE informs the second UE of transmission of SL CSI-RS(s) based on the field included in the SCI, the second UE may interpret or determine (implicitly) that a report of SL CSI and SL measurement information is requested by the first UE based on the field. For example, if the first UE transmits the SCI including the field indicating/representing transmission of SL CSI-RS(s) to the second UE, in step S1320, the second UE may (implicitly) determine that a report of SL CSI and SL measurement information is requested by the first UE based on the field even if the first UE does not (separately) request the second UE to report SL CSI and SL measurement information. For example, the first UE may simultaneously inform the second UE whether or not to transmit SL CSI-RS(s), whether or not to request a report of SL CSI, and the number of antenna ports related to (transmitted) SL CSI-RS(s), based on one field defined in the SCI. Herein, for example, if the first UE requests the second UE to report SL CSI based on the field included in the SCI, the second UE may (implicitly) interpret or determine that SL CSI-RS(s) is transmitted by the first UE in consideration of the number of antenna ports related to SL CSI-RS(s). For example, if the first UE requests the second UE to report SL CSI based on the field included in the SCI, the second UE may (implicitly) interpret or determine that SL CSI-RS(s) is transmitted by the first UE based on the number of antenna ports related to SL CSI-RS(s). 
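One possible realization of such a joint, multi-state field is sketched below; the state-to-meaning mapping is an assumed example for illustration only, not a normative encoding.

```python
# Sketch of a single multi-state SCI field jointly conveying CSI-RS
# transmission, the report request, and the number of CSI-RS antenna
# ports. The mapping below is an assumed example, not normative.

JOINT_FIELD_STATES = {
    0: {"csi_rs": False, "report": None,       "ports": 0},
    1: {"csi_rs": True,  "report": "CSI",      "ports": 1},
    2: {"csi_rs": True,  "report": "CSI",      "ports": 2},
    3: {"csi_rs": True,  "report": "CSI+MEAS", "ports": 2},
}

def decode_joint_field(state: int) -> dict:
    meaning = JOINT_FIELD_STATES[state]
    # A report request implies CSI-RS transmission with the indicated ports.
    assert not meaning["report"] or meaning["csi_rs"]
    return meaning

print(decode_joint_field(3))
```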
For example, the first UE may simultaneously inform the second UE whether or not to transmit SL CSI-RS(s), whether or not to request a report of SL measurement information, and the number of antenna ports related to (transmitted) SL CSI-RS(s), based on one field defined in the SCI. Herein, for example, if the first UE requests the second UE to report SL measurement information based on the field included in the SCI, the second UE may (implicitly) interpret or determine that SL CSI-RS(s) is transmitted by the first UE in consideration of the number of antenna ports related to SL CSI-RS(s). For example, if the first UE requests the second UE to report SL measurement information based on the field included in the SCI, the second UE may (implicitly) interpret or determine that SL CSI-RS(s) is transmitted by the first UE based on the number of antenna ports related to SL CSI-RS(s). For example, the first UE may simultaneously inform the second UE whether or not to transmit SL CSI-RS(s), whether or not to request a report of SL CSI and SL measurement information, and the number of antenna ports related to (transmitted) SL CSI-RS(s), based on one field defined in the SCI. Herein, for example, if the first UE requests the second UE to report SL CSI and SL measurement information based on the field included in the SCI, the second UE may (implicitly) interpret or determine that SL CSI-RS(s) is transmitted by the first UE in consideration of the number of antenna ports related to SL CSI-RS(s). For example, if the first UE requests the second UE to report SL CSI and SL measurement information based on the field included in the SCI, the second UE may (implicitly) interpret or determine that SL CSI-RS(s) is transmitted by the first UE based on the number of antenna ports related to SL CSI-RS(s). In the above-described embodiment, the number of antenna ports related to SL CSI-RS(s) may be pre-configured for the first UE. For example, the number of antenna ports related to SL CSI-RS(s) may be the maximum number of antenna ports allowed to transmit SL CSI-RS(s). For example, the number of antenna ports related to SL CSI-RS(s) may be the minimum number of antenna ports allowed to transmit SL CSI-RS(s). For example, the number of antenna ports related to SL CSI-RS(s) may be pre-configured for the first UE per resource pool (i.e., resource pool specific). For example, the number of antenna ports related to SL CSI-RS(s) may be pre-configured for the first UE per carrier (i.e., carrier specific). For example, the number of antenna ports related to SL CSI-RS(s) may be pre-configured for the first UE per service (i.e., service specific). In step S1330, the first UE may transmit reference signal(s) (RS(s)) to the second UE. For example, the RS(s) may be CSI-RS(s). For example, if the SCI transmitted by the first UE indicates/represents transmission of CSI-RS(s), the first UE may transmit CSI-RS(s) to the second UE. In step S1340, the second UE may transmit SL CSI to the first UE. For example, the SL CSI may be obtained based on the RS(s). For example, the second UE receiving the SCI including the field indicating/representing transmission of CSI-RS(s) from the first UE may measure a channel state between the first UE and the second UE by using CSI-RS(s). In addition, the second UE may generate SL CSI related to the channel state and transmit the SL CSI to the first UE. For example, the second UE may generate SL CSI in the form of a MAC CE and transmit the SL CSI to the first UE.
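As a hedged illustration of packaging SL CSI in the form of a MAC CE as just described, the byte layout below is purely hypothetical; it does not reproduce the standardized CSI Reporting MAC CE format.

```python
# Hedged sketch of packaging SL CSI as a MAC CE. The one-byte layout
# (4-bit CQI | 1-bit (RI - 1) | 3 reserved bits) is purely illustrative.

def build_csi_reporting_mac_ce(cqi: int, ri: int) -> bytes:
    assert 0 <= cqi < 16 and 1 <= ri <= 2
    return bytes([(cqi << 4) | ((ri - 1) << 3)])

print(build_csi_reporting_mac_ce(cqi=9, ri=1).hex())  # '90'
```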
For example, SL CSI may be transmitted through a MAC CE. For example, a MAC CE for reporting SL CSI may be referred to as a CSI Reporting MAC CE. For example, a priority of the CSI Reporting MAC CE may be defined as a fixed value. For example, a priority of the CSI Reporting MAC CE may be defined as 1. For example, a base station or a network may configure or pre-configure a priority of the CSI Reporting MAC CE to a fixed value for the UE. For example, a priority of the CSI Reporting MAC CE may be exchanged or designated between UEs through PC5-RRC signaling. For example, the second UE may transmit SL measurement information to the first UE. For example, the SL measurement information may be obtained based on the RS(s). For example, the second UE receiving the SCI including the field indicating/representing transmission of CSI-RS(s) from the first UE may measure a channel state between the first UE and the second UE by using CSI-RS(s). In addition, the second UE may transmit SL measurement information related to the channel state to the first UE. For example, a priority of the SL measurement information may be defined as a fixed value. For example, a base station or a network may configure or pre-configure a priority of the SL measurement information to a fixed value for the UE. For example, a priority of the SL measurement information and a priority of the CSI Reporting MAC CE may be set/fixed to the same priority. For example, a priority of the SL measurement information may be exchanged or designated between UEs through PC5-RRC signaling. For example, a priority of the SL measurement information may be higher than a priority of SL data. For example, the second UE may transmit SL measurement information to the first UE through PC5-RRC connection/signaling. For example, a priority of the PC5-RRC signaling may be higher than a priority of the CSI Reporting MAC CE. For example, a priority of the CSI Reporting MAC CE may be higher than a priority of SL data. For example, in a logical channel prioritization (LCP) procedure, a priority of the PC5-RRC signaling may be higher than a priority of the CSI Reporting MAC CE, and a priority of the CSI Reporting MAC CE may be higher than a priority of SL data. For example, the second UE may transmit SL CSI and/or SL measurement information to the first UE through a PSSCH or a pre-defined channel. For example, SL CSI and/or SL measurement information may be piggybacked on a PSSCH or a pre-defined channel and transmitted together with data (of a specific service). For convenience of description, a case in which SL CSI and/or SL measurement information is piggybacked on a PSSCH or a pre-defined channel and transmitted together with data may be referred to as a first case. For example, SL CSI and/or SL measurement information may be transmitted without data (of a specific service) through a PSSCH or a pre-defined channel. For convenience of description, a case in which SL CSI and/or SL measurement information is transmitted without data through a PSSCH or a pre-defined channel may be referred to as a second case. For example, in case the second UE transmits SL CSI and/or SL measurement information to the first UE through a PSSCH or a pre-defined channel, the second UE may inform the first UE whether or not to transmit SL CSI and/or whether or not to transmit SL measurement information through a pre-defined field included in a SCI. 
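The logical channel prioritization ordering described above (PC5-RRC signaling ahead of the CSI Reporting MAC CE, which is ahead of SL data) can be sketched as follows; the numeric priority values are assumptions, with a lower value denoting a higher priority.

```python
# Minimal sketch of the LCP ordering stated in the text. The numeric
# values are assumed for illustration (lower value = higher priority).

ASSUMED_LCP_PRIORITY = {
    "PC5_RRC_SIGNALING": 0,
    "CSI_REPORTING_MAC_CE": 1,   # e.g., a fixed priority of 1, as in the text
    "SL_DATA": 4,
}

def next_to_serve(pending):
    """Pick the pending traffic type with the highest LCP priority."""
    return min(pending, key=lambda t: ASSUMED_LCP_PRIORITY[t])

print(next_to_serve(["SL_DATA", "CSI_REPORTING_MAC_CE"]))  # CSI_REPORTING_MAC_CE
```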
For example, in order to reduce complexity of blind decoding of the first UE, the second UE may inform the first UE whether or not to transmit SL CSI and/or whether or not to transmit SL measurement information through a pre-defined field included in a SCI. Herein, for example, a case in which SL CSI and/or SL measurement information is transmitted through a PSSCH without data and a case in which SL CSI and/or SL measurement information is piggybacked on a PSSCH and transmitted together with data may be distinguished based on a pre-defined field included in the SCI. For example, the pre-defined field included in the SCI may be 2 bits. For example, a field related to whether or not to transmit SL CSI (hereinafter, SL CSI report field) and a field related to whether or not to transmit SL measurement information (hereinafter, SL measurement information report field) may be defined independently or separately. For example, the SL CSI report field and the SL measurement information report field may be defined independently or separately in the SCI. For example, one field related to transmission of SL CSI and transmission of SL measurement information may be defined. For example, one field related to transmission of SL CSI and transmission of SL measurement information may be defined in the SCI. In this case, for example, the second UE may simultaneously inform the first UE whether or not to transmit two pieces of information (i.e., SL CSI and SL measurement information) by using one field included in the SCI. For example, the form in which transmission of SL CSI and transmission of SL measurement information are simultaneously indicated based on one field included in the SCI may be useful if the second UE is configured to always transmit the report of SL CSI and the report of SL measurement information together. For example, one field related to transmission of SL CSI and transmission of SL measurement information may be defined. For example, one field related to transmission of SL CSI and transmission of SL measurement information may be defined in the SCI. In this case, the one field may indicate/represent a plurality of states. For example, among the plurality of states, some states may indicate/represent transmission of SL CSI, and other states may indicate/represent transmission of SL measurement information. In this case, for example, the second UE may inform the first UE of transmission of at least one of two pieces of information (i.e., SL CSI and SL measurement information) by using one field included in the SCI. Based on an embodiment of the present disclosure, a QoS field value included in a SCI, which schedules a PSSCH or a pre-defined channel in the first case and/or the second case, may be configured differently, based on at least one of the following rules, compared with a QoS field value related to a case in which the second UE transmits only data (of a specific service) through a PSSCH. For convenience of description, a case in which data (of a specific service) is only transmitted through a PSSCH may be referred to as a third case. For example, the SCI may be transmitted through a PSCCH. For example, the QoS field value may include a value related to priority information.
(1) First Rule
For example, compared to the third case, the second UE may designate or set a priority included in a SCI to a relatively high priority in the first case.
For example, compared to the third case, the second UE may designate or set a priority included in a SCI to a relatively high pre-configured priority in the first case. For example, compared to the third case, the second UE may designate or set a priority included in a SCI to a relatively high priority by applying a pre-configured offset value in the first case. For example, compared to the third case, the second UE may designate or set a priority included in a SCI to a relatively high priority in the second case. For example, compared to the third case, the second UE may designate or set a priority included in a SCI to a relatively high pre-configured priority in the second case. For example, compared to the third case, the second UE may designate or set a priority included in a SCI to a relatively high priority by applying a pre-configured offset value in the second case. For example, the second UE may designate or set a priority included in a SCI in the first case to be the same as a priority included in a SCI in the second case. For example, in the first case and the second case, the second UE may designate or set a priority included in a SCI to the same pre-configured priority. For example, compared to remaining case(s), the second UE may designate or set a priority included in a SCI to the highest priority in the first case. For example, compared to remaining case(s), the second UE may designate or set a priority included in a SCI to the highest pre-configured priority in the first case. For example, compared to remaining case(s), the second UE may designate or set a priority included in a SCI to the highest priority by applying a pre-configured offset value in the first case. For example, compared to remaining case(s), the second UE may designate or set a priority included in a SCI to the highest priority in the second case. For example, compared to remaining case(s), the second UE may designate or set a priority included in a SCI to the highest pre-configured priority in the second case. For example, compared to remaining case(s), the second UE may designate or set a priority included in a SCI to the highest priority by applying a pre-configured offset value in the second case.
(2) Second Rule
For example, the second UE may designate or set a priority included in a SCI in the first case to be the same as a priority included in a SCI in the third case. For example, a priority included in a SCI in the first case may be set to follow a priority of data related to a specific service or a priority of data related to a specific service transmitted through (unicast) session(s) in the third case. For example, a priority included in a SCI in the first case may be set to follow the highest priority of data related to a specific service transmitted through (unicast) session(s). For example, a priority included in a SCI in the first case may be set to follow the lowest priority of data related to a specific service transmitted through (unicast) session(s). For example, a priority included in a SCI in the first case may be set to follow an average of priorities of data related to specific services transmitted through (unicast) session(s). For example, the second UE may designate or set a priority included in a SCI in the second case to be the same as a priority included in a SCI in the third case.
For example, a priority included in a SCI in the second case may be set to follow a priority of data related to a specific service or a priority of data related to a specific service transmitted through (unicast) session(s) in the third case. For example, a priority included in a SCI in the second case may be set to follow the highest priority of data related to a specific service transmitted through (unicast) session(s). For example, a priority included in a SCI in the second case may be set to follow the lowest priority of data related to a specific service transmitted through (unicast) session(s). For example, a priority included in a SCI in the second case may be set to follow an average of priorities of data related to specific services transmitted through (unicast) session(s). For example, the second UE may designate or set a priority included in a SCI to a pre-configured priority in the first case. For example, a value related to the priority may be set differently based on at least one of a type of a service, a requirement (e.g., latency, reliability), a congestion level, SL quality, and/or a value of SL measurement. For example, the second UE may designate or set a priority included in a SCI to a pre-configured priority in the second case. For example, a value related to the priority may be set differently based on at least one of a type of a service, a requirement (e.g., latency, reliability), a congestion level, SL quality, and/or a value of SL measurement. Based on an embodiment of the present disclosure, the second UE may differently designate or set a QoS field value included in a SCI related to transmission of SL CSI and a QoS field value included in a SCI related to transmission of SL measurement information. For example, the second UE may differently designate or set a QoS field value included in a SCI related to transmission of SL CSI and a QoS field value included in a SCI related to transmission of SL measurement information, and may transmit the SCI to the first UE. For example, the QoS field value may include a value related to priority information. For example, the second UE may set a priority related to transmission of the SL CSI to a relatively high priority, compared to a priority related to transmission of the SL measurement report. For example, the second UE may set a priority related to transmission of the SL measurement report to a relatively high priority, compared to a priority related to transmission of the SL CSI. For example, the second UE may set a priority related to transmission of the SL measurement report to a relatively high priority or the same priority, compared to a priority related to transmission of the SL CSI. For example, in a logical channel prioritization (LCP) procedure, the second UE may set a priority of a logical channel related to transmission of the SL measurement report to a relatively high priority, compared to a priority related to transmission of the SL CSI. For example, the logical channel may be a sidelink transport channel (STCH). For example, the second UE may set a priority related to transmission of SL data to a relatively low priority or the same priority, compared to a priority related to transmission of the SL CSI. For example, in a logical channel prioritization (LCP) procedure, the second UE may set a priority of a logical channel related to transmission of SL data to a relatively low priority, compared to a priority related to transmission of the SL CSI. 
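As a hedged sketch of the first rule above, the following assumes a numeric SCI priority field in which a lower value denotes a higher priority and applies a hypothetical pre-configured offset for the first and second cases; the offset value and the case labels are illustrative assumptions.

```python
# Sketch of raising the SCI priority for the first/second cases (report
# piggybacked with data / report only), relative to the data-only third
# case. Numeric semantics are assumed: lower value = higher priority.

PRECONFIGURED_OFFSET = 2  # assumed pre-configured offset value

def sci_priority(case: str, data_priority: int) -> int:
    """Return the priority value carried in the SCI QoS field."""
    if case in ("first", "second"):
        return max(0, data_priority - PRECONFIGURED_OFFSET)
    return data_priority  # third case: data only, priority unchanged

print(sci_priority("third", 5), sci_priority("first", 5))  # 5 3
```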
For example, the second UE may set a priority related to transmission of SL data to a relatively low priority or the same priority, compared to a priority related to transmission of the SL measurement information. For example, in a logical channel prioritization (LCP) procedure, the second UE may set a priority of a logical channel related to transmission of SL data to a relatively low priority, compared to a priority of a logical channel related to transmission of the SL measurement information. For example, the second UE may differently designate or set a QoS field value included in a SCI related to simultaneous transmission of SL CSI and SL measurement information and a QoS field value included in a SCI related to transmission of SL CSI. For example, the second UE may differently designate or set a QoS field value included in a SCI related to simultaneous transmission of SL CSI and SL measurement information and a QoS field value included in a SCI related to transmission of SL CSI, and may transmit the SCI to the first UE. For example, the QoS field value may include a value related to priority information. For example, the second UE may differently designate or set a QoS field value included in a SCI related to simultaneous transmission of SL CSI and SL measurement information and a QoS field value included in a SCI related to transmission of SL measurement information. For example, the second UE may differently designate or set a QoS field value included in a SCI related to simultaneous transmission of SL CSI and SL measurement information and a QoS field value included in a SCI related to transmission of SL measurement information, and may transmit the SCI to the first UE. For example, the QoS field value may include a value related to priority information. Based on an embodiment of the present disclosure, the first UE may designate or set differently a QoS field value included in a SCI related to a PSSCH when SL CSI-RS(s) is transmitted through the PSSCH and a QoS field value included in a SCI related to a PSSCH when only PSSCH is transmitted without SL CSI-RS(s). In addition, the first UE may transmit the SCI including the differently set QoS field to the second UE. For example, the QoS field value may include a value related to priority information. For example, the first UE may set a priority related to transmission of a PSSCH including SL CSI-RS(s) to a relatively high priority, compared to a priority related to transmission of a PSSCH not including SL CSI-RS(s). For example, the first UE may designate or set differently a QoS field value included in a SCI related to a PSSCH when SL CSI-RS(s) is transmitted through the PSSCH and a QoS field value included in a SCI related to a PSSCH when only data (of a specific service) is transmitted through the PSSCH. In addition, the first UE may transmit the SCI including the differently set QoS field to the second UE. For example, the QoS field value may include a value related to priority information. For example, the first UE may set a priority related to transmission of a PSSCH including only SL CSI-RS(s) to a relatively high priority, compared to a priority related to transmission of a PSSCH including only data. Based on an embodiment of the present disclosure, if at least one of the following conditions is satisfied, the second UE may trigger transmission of the SL CSI report request and/or transmission of the SL measurement information report request by the first UE through pre-defined signaling. 
For example, if at least one of the following conditions is satisfied, the second UE may trigger transmission of SL CSI-RS(s) by the first UE through pre-defined signaling. For example, the pre-defined signaling may include at least one of MAC signaling and RRC signaling. For example, if at least one of the following conditions is satisfied, the second UE may transmit SL CSI and/or SL measurement information to the first UE.
(1) First Condition
For example, if a value of SL channel busy ratio (CBR) measured/reported by the second UE is changed by more than (or equal to) a pre-configured threshold value, compared to a value measured/reported previously; or
For example, if a value of SL CBR measured/reported by the second UE is greater than a pre-configured threshold value; or
For example, if a value of SL CBR measured/reported by the second UE is smaller than a pre-configured threshold value.
(2) Second Condition
For example, if a value of SL interference measurement measured/reported by the second UE is changed by more than (or equal to) a pre-configured threshold value, compared to a value measured/reported previously; or
For example, if a value of SL interference measurement measured/reported by the second UE is greater than a pre-configured threshold value; or
For example, if a value of SL interference measurement measured/reported by the second UE is smaller than a pre-configured threshold value.
(3) Third Condition
For example, if SL measurement information (for the first UE) reported/measured by the second UE is changed by more than (or equal to) a pre-configured threshold value, compared to a value reported/measured previously; herein, the SL measurement information may be at least one of SL RSRP, SL RSRQ and/or SL RSSI; or
For example, if SL measurement information (for the first UE) reported/measured by the second UE is greater than a pre-configured threshold value, for example, if a value of SL RSRP between the first UE and the second UE measured by the second UE is greater than a pre-configured threshold value; or
For example, if SL measurement information (for the first UE) reported/measured by the second UE is smaller than a pre-configured threshold value, for example, if a value of SL RSRP between the first UE and the second UE measured by the second UE is smaller than a pre-configured threshold value.
(4) Fourth Condition
For example, if SL CSI (for the first UE) reported/measured by the second UE is changed by more than (or equal to) a pre-configured threshold value, compared to a value reported/measured previously; herein, the SL CSI may be at least one of SL CQI, SL PMI and/or SL RI; or
For example, if SL CSI (for the first UE) reported/measured by the second UE is greater than a pre-configured threshold value; or
For example, if SL CSI (for the first UE) reported/measured by the second UE is smaller than a pre-configured threshold value.
Based on an embodiment of the present disclosure, there may be no data (related to a specific service) to be transmitted by the first UE to the second UE. In this case, if the first UE needs to transmit SL CSI-RS(s) to the second UE, the first UE may transmit SL CSI-RS(s) to the second UE through a PSSCH, and the first UE may transmit dummy data information to the second UE through the PSSCH. For example, the first UE may transmit the dummy data information to the second UE through the PSSCH by rate matching or puncturing. For example, the SL CSI-RS(s) may be transmitted based on pre-configured resource(s) and/or MCS.
For example, the pre-configured resource(s) may include pre-configured time resource(s) and/or pre-configured frequency resource(s). For example, the dummy data information may be transmitted on remaining resource(s) (e.g., time resource(s) and/or frequency resource(s)) in which SL CSI-RS(s) is not transmitted. For example, the dummy data information may be pre-configured information. Additionally, for example, the first UE may inform the second UE whether or not to transmit the dummy data information through a SCI related to the PSSCH. For example, the first UE may inform the second UE whether or not dummy data of the pre-configured form/type is transmitted through a SCI related to the PSSCH. Through this, the first UE can prevent the second UE from transmitting meaningless SL HARQ feedback to the first UE. For example, rate matching may refer to a process of matching the number of encoded bits to the number of bits required for transmission, by repeating or puncturing the encoded bits according to a rate matching pattern before transmission. For example, the UE may repeat or puncture the encoded bits according to a rate matching pattern on the remaining (time/frequency) resource(s) of the corresponding PSSCH in which SL CSI-RS(s) is not transmitted, and may transmit SL CSI-RS(s). Based on an embodiment of the present disclosure, when the first UE performs SL CSI-RS transmission to the second UE, the first UE may inform the second UE whether or not transmit power of SL CSI-RS(s) is changed through pre-defined signaling. For example, the second UE may measure or obtain SL CSI and/or SL measurement information by using the SL CSI-RS(s). For example, if session(s) is established or set up between the first UE and the second UE, the pre-defined signaling may be PC5 RRC signaling. For example, the pre-defined signaling may be MAC signaling. For example, the pre-defined signaling may be a pre-defined field included in a SCI. In this case, for example, the first UE may inform the second UE that transmit power of SL CSI-RS(s) is changed compared to previous transmit power, based on toggling of a value of the pre-defined field included in the SCI. For example, when the first UE performs SL CSI-RS transmission to the second UE, the first UE may transmit information on changed transmit power of SL CSI-RS(s) to the second UE through pre-defined signaling. For example, when the first UE performs SL CSI-RS transmission to the second UE, the first UE may transmit information on a time period in which transmit power of SL CSI-RS(s) is constantly kept/maintained to the second UE through pre-defined signaling. In the above-described case, for example, the second UE may be configured to separate measurement/averaging operation for SL CSI-RSs with different transmission power values. For example, the measurement/averaging operation may include at least one of interference measurement/averaging operation (based on SL CSI-RS(s)), quality measurement/averaging operation for desired signal(s), and/or averaging operation for SL measurement. Also, for example, if resource(s) (e.g., resource(s) to be used for SL communication between the first UE and the second UE) is reselected, the second UE may initialize a value obtained based on existing measurement/averaging operation, and the second UE may newly perform measurement/averaging operation. For example, the second UE may newly perform measurement/averaging operation based on SL CSI-RS(s) transmitted through reselected resource(s).
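A minimal sketch of separating the measurement/averaging operation across transmit-power changes and resource reselection, as described above, might look as follows; the class name, the RSRP-sample averaging, and the reset policy details are assumptions for illustration.

```python
# Sketch of keeping CSI-RS measurements with different transmit powers
# (or from different resources) out of the same averaging set.

class SlMeasurementAverager:
    def __init__(self):
        self.samples = []

    def on_csi_rs(self, rsrp_dbm: float):
        self.samples.append(rsrp_dbm)

    def on_tx_power_change(self):
        # Do not mix CSI-RS received at different transmit powers:
        # start a fresh averaging set.
        self.samples = []

    def on_resource_reselection(self):
        # Initialize values obtained from the previous resources and
        # measure anew on the reselected resources.
        self.samples = []

    def average(self):
        return sum(self.samples) / len(self.samples) if self.samples else None
```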
For example, based on interference measurement/averaging operation, the second UE may measure interference for a plurality of resource elements, and the second UE may obtain an average value of the measured interference. FIG.14shows a method for a transmitting UE to transmit information related to a SL channel, based on an embodiment of the present disclosure. The embodiment ofFIG.14may be combined with various embodiments of the present disclosure. Referring toFIG.14, in step S1410, a transmitting UE may obtain information related to a SL channel. In step S1420, the transmitting UE may report information related to the SL channel to a receiving UE. The information related to the SL channel may include at least one of SL channel state information or SL measurement information. The receiving UE may receive information related to the SL channel from the transmitting UE. Additionally, the transmitting UE may perform synchronization with a synchronization source. Additionally, the transmitting UE may configure at least one BWP. FIG.15shows a method for a first device to perform wireless communication, based on an embodiment of the present disclosure. The embodiment ofFIG.15may be combined with various embodiments of the present disclosure. Referring toFIG.15, in step S1510, a first device may receive, from a second device, sidelink control information (SCI) including a field representing transmission of a sidelink (SL) channel state information reference signal (CSI-RS). In step S1520, the first device may determine that a report of SL channel state information is requested by the second device, based on the field representing transmission of the SL CSI-RS. In step S1530, the first device may obtain the SL channel state information related to a channel state between the first device and the second device based on the SL CSI-RS. In step S1540, the first device may transmit, to the second device, the SL channel state information. For example, the SL channel state information may include at least one of a channel quality indicator (CQI), a precoding matrix indicator (PMI), or a rank indicator (RI). For example, the SL channel state information may be transmitted to the second device through a first PSSCH. For example, a priority of the SL channel state information transmitted through the first PSSCH may be pre-configured. For example, the priority of the SL channel state information transmitted through the first PSSCH may be higher than a priority of data transmitted through a second PSSCH. For example, a priority field included in a SCI transmitted through a PSCCH related to the first PSSCH may be set to a value related to a higher priority, compared to a priority field included in a SCI transmitted through a PSCCH related to the second PSSCH. Herein, the first PSSCH may not include data, and the second PSSCH may not include SL channel state information. Additionally, for example, the first device may transmit, to the second device, SL measurement information. For example, the SL measurement information may include at least one of reference signal received power (RSRP), reference signal received quality (RSRQ), or received signal strength indicator (RSSI) between the first device and the second device. For example, a priority of the SL measurement information may be pre-configured. For example, the priority of the SL measurement information may be higher than a priority of SL data. 
For example, a priority of the SL channel state information may be lower than a priority of the SL measurement information, and the priority of the SL channel state information may be higher than a priority of SL data. For example, the SL measurement information may be transmitted to the second device, based on reference signal received power (RSRP) between the first device and the second device measured by the first device being greater than a threshold value. For example, the SL measurement information may be transmitted to the second device, based on reference signal received power (RSRP) between the first device and the second device measured by the first device being smaller than a threshold value. For example, the SL measurement information may be transmitted to the second device, based on a first reference signal received power (RSRP) between the first device and the second device measured or reported by the first device being changed by more than a pre-configured threshold value, compared to a second RSRP measured or reported previously. The proposed method can be applied to device(s) described below. First, the processor (102) of the first device (100) may control the transceiver (106) to receive, from a second device, sidelink control information (SCI) including a field representing transmission of a sidelink (SL) channel state information reference signal (CSI-RS). In addition, the processor (102) of the first device (100) may determine that a report of SL channel state information is requested by the second device, based on the field representing transmission of the SL CSI-RS. In addition, the processor (102) of the first device (100) may obtain the SL channel state information related to a channel state between the first device and the second device based on the SL CSI-RS. In addition, the processor (102) of the first device (100) may control the transceiver (106) to transmit, to the second device, the SL channel state information. Based on an embodiment of the present disclosure, a first device configured to perform wireless communication may be provided. For example, the first device may comprise: one or more memories storing instructions; one or more transceivers; and one or more processors connected to the one or more memories and the one or more transceivers. For example, the one or more processors may execute the instructions to: receive, from a second device, sidelink control information (SCI) including a field representing transmission of a sidelink (SL) channel state information reference signal (CSI-RS); determine that a report of SL channel state information is requested by the second device, based on the field representing transmission of the SL CSI-RS; obtain the SL channel state information related to a channel state between the first device and the second device based on the SL CSI-RS; and transmit, to the second device, the SL channel state information. Based on an embodiment of the present disclosure, an apparatus configured to control a first user equipment (UE) may be provided. For example, the apparatus may comprise: one or more processors; and one or more memories operably connected to the one or more processors and storing instructions.
For example, the one or more processors may execute the instructions to: receive, from a second UE, sidelink control information (SCI) including a field representing transmission of a sidelink (SL) channel state information reference signal (CSI-RS); determine that a report of SL channel state information is requested by the second UE, based on the field representing transmission of the SL CSI-RS; obtain the SL channel state information related to a channel state between the first UE and the second UE based on the SL CSI-RS; and transmit, to the second UE, the SL channel state information. Based on an embodiment of the present disclosure, a non-transitory computer-readable storage medium storing instructions may be provided. For example, the instructions, when executed, may cause a first device to: receive, from a second device, sidelink control information (SCI) including a field representing transmission of a sidelink (SL) channel state information reference signal (CSI-RS); determine that a report of SL channel state information is requested by the second device, based on the field representing transmission of the SL CSI-RS; obtain the SL channel state information related to a channel state between the first device and the second device based on the SL CSI-RS; and transmit, to the second device, the SL channel state information. FIG.16shows a method for a second device to perform wireless communication, based on an embodiment of the present disclosure. The embodiment ofFIG.16may be combined with various embodiments of the present disclosure. Referring toFIG.16, in step S1610, a second device may transmit, to a first device, sidelink control information (SCI) including a field representing transmission of a sidelink (SL) channel state information reference signal (CSI-RS). In step S1620, the second device may transmit, to the first device, the SL CSI-RS based on the field representing transmission of the SL CSI-RS. In step S1630, the second device may receive, from the first device, SL channel state information obtained based on the SL CSI-RS. Herein, a report of the SL channel state information may be triggered based on the field representing transmission of the SL CSI-RS. For example, the field representing transmission of the SL CSI-RS may be a CSI request field. The proposed method can be applied to device(s) described below. First, the processor (202) of the second device (200) may control the transceiver (206) to transmit, to a first device, sidelink control information (SCI) including a field representing transmission of a sidelink (SL) channel state information reference signal (CSI-RS). In addition, the processor (202) of the second device (200) may control the transceiver (206) to transmit, to the first device, the SL CSI-RS based on the field representing transmission of the SL CSI-RS. In addition, the processor (202) of the second device (200) may control the transceiver (206) to receive, from the first device, SL channel state information obtained based on the SL CSI-RS. Herein, a report of the SL channel state information may be triggered based on the field representing transmission of the SL CSI-RS. For example, the field representing transmission of the SL CSI-RS may be a CSI request field. Hereinafter, device(s) to which various embodiments of the present disclosure can be applied will be described.
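Condensing the procedures ofFIG.15andFIG.16, the following toy Python sketch walks through the step numbering above; the function names and the example CQI/RI values are invented for illustration and do not correspond to any standardized API.

```python
# Toy walk-through of the FIG. 15 / FIG. 16 procedures: the second device
# signals CSI-RS transmission in the SCI, and the first device infers the
# report request, measures, and returns SL CSI.

def second_device(first):
    sci = {"csi_request": 1}           # S1610: SCI indicates SL CSI-RS transmission
    csi_rs = "SL-CSI-RS"               # S1620: transmit the SL CSI-RS
    return first(sci, csi_rs)          # S1630: receive the SL CSI report

def first_device(sci, csi_rs):
    assert sci["csi_request"] == 1     # S1510/S1520: report implicitly requested
    sl_csi = {"cqi": 9, "ri": 1}       # S1530: measure channel state from the CSI-RS
    return sl_csi                      # S1540: transmit SL CSI to the second device

print(second_device(first_device))
```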
The various descriptions, functions, procedures, proposals, methods, and/or operational flowcharts of the present disclosure described in this document may be applied to, without being limited to, a variety of fields requiring wireless communication/connection (e.g., 5G) between devices. Hereinafter, a description will be given in more detail with reference to the drawings. In the following drawings/description, the same reference symbols may denote the same or corresponding hardware blocks, software blocks, or functional blocks unless described otherwise. FIG.17shows a communication system1, based on an embodiment of the present disclosure. Referring toFIG.17, a communication system1to which various embodiments of the present disclosure are applied includes wireless devices, Base Stations (BSs), and a network. Herein, the wireless devices represent devices performing communication using Radio Access Technology (RAT) (e.g., 5G New RAT (NR) or Long-Term Evolution (LTE)) and may be referred to as communication/radio/5G devices. The wireless devices may include, without being limited to, a robot100a, vehicles100b-1and100b-2, an eXtended Reality (XR) device100c, a hand-held device100d, a home appliance100e, an Internet of Things (IoT) device100f, and an Artificial Intelligence (AI) device/server400. For example, the vehicles may include a vehicle having a wireless communication function, an autonomous vehicle, and a vehicle capable of performing communication between vehicles. Herein, the vehicles may include an Unmanned Aerial Vehicle (UAV) (e.g., a drone). The XR device may include an Augmented Reality (AR)/Virtual Reality (VR)/Mixed Reality (MR) device and may be implemented in the form of a Head-Mounted Device (HMD), a Head-Up Display (HUD) mounted in a vehicle, a television, a smartphone, a computer, a wearable device, a home appliance device, a digital signage, a vehicle, a robot, etc. The hand-held device may include a smartphone, a smartpad, a wearable device (e.g., a smartwatch or smartglasses), and a computer (e.g., a notebook). The home appliance may include a TV, a refrigerator, and a washing machine. The IoT device may include a sensor and a smartmeter. For example, the BSs and the network may be implemented as wireless devices and a specific wireless device200amay operate as a BS/network node with respect to other wireless devices. The wireless devices100ato100fmay be connected to the network300via the BSs200. An AI technology may be applied to the wireless devices100ato100fand the wireless devices100ato100fmay be connected to the AI server400via the network300. The network300may be configured using a 3G network, a 4G (e.g., LTE) network, or a 5G (e.g., NR) network. Although the wireless devices100ato100fmay communicate with each other through the BSs200/network300, the wireless devices100ato100fmay perform direct communication (e.g., sidelink communication) with each other without passing through the BSs/network. For example, the vehicles100b-1and100b-2may perform direct communication (e.g., Vehicle-to-Vehicle (V2V)/Vehicle-to-everything (V2X) communication). The IoT device (e.g., a sensor) may perform direct communication with other IoT devices (e.g., sensors) or other wireless devices100ato100f. Wireless communication/connections150a,150b, or150cmay be established between the wireless devices100ato100f/BS200, or BS200/BS200.
Herein, the wireless communication/connections may be established through various RATs (e.g., 5G NR) such as uplink/downlink communication150a, sidelink communication150b(or, D2D communication), or inter BS communication (e.g., relay, Integrated Access Backhaul (IAB)). The wireless devices and the BSs/the wireless devices may transmit/receive radio signals to/from each other through the wireless communication/connections150aand150b. For example, the wireless communication/connections150aand150bmay transmit/receive signals through various physical channels. To this end, at least a part of various configuration information configuring processes, various signal processing processes (e.g., channel encoding/decoding, modulation/demodulation, and resource mapping/demapping), and resource allocating processes, for transmitting/receiving radio signals, may be performed based on the various proposals of the present disclosure. FIG.18shows wireless devices, based on an embodiment of the present disclosure. Referring toFIG.18, a first wireless device100and a second wireless device200may transmit radio signals through a variety of RATs (e.g., LTE and NR). Herein, {the first wireless device100and the second wireless device200} may correspond to {the wireless device100xand the BS200} and/or {the wireless device100xand the wireless device100x} ofFIG.17. The first wireless device100may include one or more processors102and one or more memories104and additionally further include one or more transceivers106and/or one or more antennas108. The processor(s)102may control the memory(s)104and/or the transceiver(s)106and may be configured to implement the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. For example, the processor(s)102may process information within the memory(s)104to generate first information/signals and then transmit radio signals including the first information/signals through the transceiver(s)106. The processor(s)102may receive radio signals including second information/signals through the transceiver106and then store information obtained by processing the second information/signals in the memory(s)104. The memory(s)104may be connected to the processor(s)102and may store a variety of information related to operations of the processor(s)102. For example, the memory(s)104may store software code including commands for performing a part or the entirety of processes controlled by the processor(s)102or for performing the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. Herein, the processor(s)102and the memory(s)104may be a part of a communication modem/circuit/chip designed to implement RAT (e.g., LTE or NR). The transceiver(s)106may be connected to the processor(s)102and transmit and/or receive radio signals through one or more antennas108. Each of the transceiver(s)106may include a transmitter and/or a receiver. The transceiver(s)106may be interchangeably used with Radio Frequency (RF) unit(s). In the present disclosure, the wireless device may represent a communication modem/circuit/chip. The second wireless device200may include one or more processors202and one or more memories204and additionally further include one or more transceivers206and/or one or more antennas208. 
The processor(s)202may control the memory(s)204and/or the transceiver(s)206and may be configured to implement the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. For example, the processor(s)202may process information within the memory(s)204to generate third information/signals and then transmit radio signals including the third information/signals through the transceiver(s)206. The processor(s)202may receive radio signals including fourth information/signals through the transceiver(s)206and then store information obtained by processing the fourth information/signals in the memory(s)204. The memory(s)204may be connected to the processor(s)202and may store a variety of information related to operations of the processor(s)202. For example, the memory(s)204may store software code including commands for performing a part or the entirety of processes controlled by the processor(s)202or for performing the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. Herein, the processor(s)202and the memory(s)204may be a part of a communication modem/circuit/chip designed to implement RAT (e.g., LTE or NR). The transceiver(s)206may be connected to the processor(s)202and transmit and/or receive radio signals through one or more antennas208. Each of the transceiver(s)206may include a transmitter and/or a receiver. The transceiver(s)206may be interchangeably used with RF unit(s). In the present disclosure, the wireless device may represent a communication modem/circuit/chip. Hereinafter, hardware elements of the wireless devices100and200will be described more specifically. One or more protocol layers may be implemented by, without being limited to, one or more processors102and202. For example, the one or more processors102and202may implement one or more layers (e.g., functional layers such as PHY, MAC, RLC, PDCP, RRC, and SDAP). The one or more processors102and202may generate one or more Protocol Data Units (PDUs) and/or one or more Service Data Units (SDUs) according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. The one or more processors102and202may generate messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. The one or more processors102and202may generate signals (e.g., baseband signals) including PDUs, SDUs, messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document and provide the generated signals to the one or more transceivers106and206. The one or more processors102and202may receive the signals (e.g., baseband signals) from the one or more transceivers106and206and acquire the PDUs, SDUs, messages, control information, data, or information according to the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document. The one or more processors102and202may be referred to as controllers, microcontrollers, microprocessors, or microcomputers. The one or more processors102and202may be implemented by hardware, firmware, software, or a combination thereof.
As an example, one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), one or more Digital Signal Processing Devices (DSPDs), one or more Programmable Logic Devices (PLDs), or one or more Field Programmable Gate Arrays (FPGAs) may be included in the one or more processors102and202. The descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document may be implemented using firmware or software and the firmware or software may be configured to include the modules, procedures, or functions. Firmware or software configured to perform the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document may be included in the one or more processors102and202or stored in the one or more memories104and204so as to be driven by the one or more processors102and202. The descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document may be implemented using firmware or software in the form of code, commands, and/or a set of commands. The one or more memories104and204may be connected to the one or more processors102and202and store various types of data, signals, messages, information, programs, code, instructions, and/or commands. The one or more memories104and204may be configured by Read-Only Memories (ROMs), Random Access Memories (RAMs), Electrically Erasable Programmable Read-Only Memories (EEPROMs), flash memories, hard drives, registers, cache memories, computer-readable storage media, and/or combinations thereof. The one or more memories104and204may be located at the interior and/or exterior of the one or more processors102and202. The one or more memories104and204may be connected to the one or more processors102and202through various technologies such as wired or wireless connection. The one or more transceivers106and206may transmit user data, control information, and/or radio signals/channels, mentioned in the methods and/or operational flowcharts of this document, to one or more other devices. The one or more transceivers106and206may receive user data, control information, and/or radio signals/channels, mentioned in the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document, from one or more other devices. For example, the one or more transceivers106and206may be connected to the one or more processors102and202and transmit and receive radio signals. For example, the one or more processors102and202may perform control so that the one or more transceivers106and206may transmit user data, control information, or radio signals to one or more other devices. The one or more processors102and202may perform control so that the one or more transceivers106and206may receive user data, control information, or radio signals from one or more other devices. The one or more transceivers106and206may be connected to the one or more antennas108and208and the one or more transceivers106and206may be configured to transmit and receive user data, control information, and/or radio signals/channels, mentioned in the descriptions, functions, procedures, proposals, methods, and/or operational flowcharts disclosed in this document, through the one or more antennas108and208. In this document, the one or more antennas may be a plurality of physical antennas or a plurality of logical antennas (e.g., antenna ports).
The one or more transceivers106and206may convert received radio signals/channels, etc., from RF band signals into baseband signals in order to process received user data, control information, radio signals/channels, etc. using the one or more processors102and202. The one or more transceivers106and206may convert the user data, control information, radio signals/channels, etc. processed using the one or more processors102and202from the baseband signals into the RF band signals. To this end, the one or more transceivers106and206may include (analog) oscillators and/or filters. FIG.19shows a signal process circuit for a transmission signal, based on an embodiment of the present disclosure. Referring toFIG.19, a signal processing circuit1000may include scramblers1010, modulators1020, a layer mapper1030, a precoder1040, resource mappers1050, and signal generators1060. An operation/function ofFIG.19may be performed by, without being limited to, the processors102and202and/or the transceivers106and206ofFIG.18. Hardware elements ofFIG.19may be implemented by the processors102and202and/or the transceivers106and206ofFIG.18. For example, blocks1010to1060may be implemented by the processors102and202ofFIG.18. Alternatively, the blocks1010to1050may be implemented by the processors102and202ofFIG.18and the block1060may be implemented by the transceivers106and206ofFIG.18. Codewords may be converted into radio signals via the signal processing circuit1000ofFIG.19. Herein, the codewords are encoded bit sequences of information blocks. The information blocks may include transport blocks (e.g., a UL-SCH transport block, a DL-SCH transport block). The radio signals may be transmitted through various physical channels (e.g., a PUSCH and a PDSCH). Specifically, the codewords may be converted into scrambled bit sequences by the scramblers1010. Scramble sequences used for scrambling may be generated based on an initialization value, and the initialization value may include ID information of a wireless device. The scrambled bit sequences may be modulated to modulation symbol sequences by the modulators1020. A modulation scheme may include pi/2-Binary Phase Shift Keying (pi/2-BPSK), m-Phase Shift Keying (m-PSK), and m-Quadrature Amplitude Modulation (m-QAM). Complex modulation symbol sequences may be mapped to one or more transport layers by the layer mapper1030. Modulation symbols of each transport layer may be mapped (precoded) to corresponding antenna port(s) by the precoder1040. Outputs z of the precoder1040may be obtained by multiplying outputs y of the layer mapper1030by an N*M precoding matrix W. Herein, N is the number of antenna ports and M is the number of transport layers. The precoder1040may perform precoding after performing transform precoding (e.g., DFT) for complex modulation symbols. Alternatively, the precoder1040may perform precoding without performing transform precoding. The resource mappers1050may map modulation symbols of each antenna port to time-frequency resources. The time-frequency resources may include a plurality of symbols (e.g., CP-OFDMA symbols and DFT-s-OFDMA symbols) in the time domain and a plurality of subcarriers in the frequency domain. The signal generators1060may generate radio signals from the mapped modulation symbols and the generated radio signals may be transmitted to other devices through each antenna.
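As a rough illustration of the scrambling, modulation, layer mapping, and precoding blocks1010to1040just described, the following Python/NumPy sketch pushes one codeword through a simplified chain. The scrambling sequence, the QPSK mapping, the orthonormal precoder W standing in for a codebook entry, and all sizes are illustrative assumptions rather than the exact NR procedures.

```python
import numpy as np

rng = np.random.default_rng(0)

M_LAYERS, N_PORTS = 2, 4   # M transport layers, N antenna ports (toy sizes)
N_BITS = 48                # codeword length; divisible by 2 * M_LAYERS

codeword = rng.integers(0, 2, N_BITS)  # encoded bits entering the scrambler

# Scrambler (1010): XOR with a pseudo-random sequence; a real scrambler would be
# initialized with an ID of the wireless device (here: a fixed seed, an assumption).
scramble_seq = np.random.default_rng(1234).integers(0, 2, N_BITS)
scrambled = codeword ^ scramble_seq

# Modulator (1020): QPSK, two bits -> one complex symbol (illustrative mapping).
pairs = scrambled.reshape(-1, 2)
symbols = ((1 - 2 * pairs[:, 0]) + 1j * (1 - 2 * pairs[:, 1])) / np.sqrt(2)

# Layer mapper (1030): interleave symbols across the M transport layers.
y = symbols.reshape(M_LAYERS, -1, order="F")   # shape (M, symbols per layer)

# Precoder (1040): z = W @ y with an N x M matrix W; here W is a random
# matrix with orthonormal columns, not an actual codebook entry.
A = rng.standard_normal((N_PORTS, M_LAYERS)) + 1j * rng.standard_normal((N_PORTS, M_LAYERS))
W = np.linalg.qr(A)[0]
z = W @ y                                      # one row per antenna port

print(z.shape)  # (4, 12)
```

In a full implementation, each row of z would then be mapped to time-frequency resources by the resource mappers1050before OFDM signal generation by the signal generators1060.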
To generate these radio signals, the signal generators1060may include Inverse Fast Fourier Transform (IFFT) modules, Cyclic Prefix (CP) inserters, Digital-to-Analog Converters (DACs), and frequency up-converters. Signal processing procedures for a signal received in the wireless device may be configured in a reverse manner of the signal processing procedures1010to1060ofFIG.19. For example, the wireless devices (e.g.,100and200ofFIG.18) may receive radio signals from the exterior through the antenna ports/transceivers. The received radio signals may be converted into baseband signals through signal restorers. To this end, the signal restorers may include frequency down-converters, Analog-to-Digital Converters (ADCs), CP removers, and Fast Fourier Transform (FFT) modules. Next, the baseband signals may be restored to codewords through a resource demapping procedure, a postcoding procedure, a demodulation procedure, and a descrambling procedure. The codewords may be restored to original information blocks through decoding. Therefore, a signal processing circuit (not illustrated) for a reception signal may include signal restorers, resource demappers, a postcoder, demodulators, descramblers, and decoders. FIG.20shows another example of a wireless device, based on an embodiment of the present disclosure. The wireless device may be implemented in various forms according to a use-case/service (refer toFIG.17). Referring toFIG.20, wireless devices100and200may correspond to the wireless devices100and200ofFIG.18and may be configured by various elements, components, units/portions, and/or modules. For example, each of the wireless devices100and200may include a communication unit110, a control unit120, a memory unit130, and additional components140. The communication unit may include a communication circuit112and transceiver(s)114. For example, the communication circuit112may include the one or more processors102and202and/or the one or more memories104and204ofFIG.18. For example, the transceiver(s)114may include the one or more transceivers106and206and/or the one or more antennas108and208ofFIG.18. The control unit120is electrically connected to the communication unit110, the memory unit130, and the additional components140and controls overall operation of the wireless devices. For example, the control unit120may control an electric/mechanical operation of the wireless device based on programs/code/commands/information stored in the memory unit130. The control unit120may transmit the information stored in the memory unit130to the exterior (e.g., other communication devices) via the communication unit110through a wireless/wired interface or store, in the memory unit130, information received through the wireless/wired interface from the exterior (e.g., other communication devices) via the communication unit110. The additional components140may be variously configured according to types of wireless devices. For example, the additional components140may include at least one of a power unit/battery, an input/output (I/O) unit, a driving unit, and a computing unit.
The wireless device may be implemented in the form of, without being limited to, the robot (100aofFIG.17), the vehicles (100b-1and100b-2ofFIG.17), the XR device (100cofFIG.17), the hand-held device (100dofFIG.17), the home appliance (100eofFIG.17), the IoT device (100fofFIG.17), a digital broadcast terminal, a hologram device, a public safety device, an MTC device, a medical device, a fintech device (or a finance device), a security device, a climate/environment device, the AI server/device (400ofFIG.17), the BSs (200ofFIG.17), a network node, etc. The wireless device may be used in a mobile or fixed place according to a use-example/service. InFIG.20, the entirety of the various elements, components, units/portions, and/or modules in the wireless devices100and200may be connected to each other through a wired interface or at least a part thereof may be wirelessly connected through the communication unit110. For example, in each of the wireless devices100and200, the control unit120and the communication unit110may be connected by wire and the control unit120and the other units (e.g.,130and140) may be wirelessly connected through the communication unit110. Each element, component, unit/portion, and/or module within the wireless devices100and200may further include one or more elements. For example, the control unit120may be configured by a set of one or more processors. As an example, the control unit120may be configured by a set of a communication control processor, an application processor, an Electronic Control Unit (ECU), a graphical processing unit, and a memory control processor. As another example, the memory unit130may be configured by a Random Access Memory (RAM), a Dynamic RAM (DRAM), a Read Only Memory (ROM), a flash memory, a volatile memory, a non-volatile memory, and/or a combination thereof. Hereinafter, an example of implementingFIG.20will be described in detail with reference to the drawings. FIG.21shows a hand-held device, based on an embodiment of the present disclosure. The hand-held device may include a smartphone, a smartpad, a wearable device (e.g., a smartwatch or smart glasses), or a portable computer (e.g., a notebook). The hand-held device may be referred to as a mobile station (MS), a user terminal (UT), a Mobile Subscriber Station (MSS), a Subscriber Station (SS), an Advanced Mobile Station (AMS), or a Wireless Terminal (WT). Referring toFIG.21, a hand-held device100may include an antenna unit108, a communication unit110, a control unit120, a memory unit130, a power supply unit140a, an interface unit140b, and an I/O unit140c. The antenna unit108may be configured as a part of the communication unit110. Blocks110to130/140ato140ccorrespond to the blocks110to130/140ofFIG.20, respectively. The communication unit110may transmit and receive signals (e.g., data and control signals) to and from other wireless devices or BSs. The control unit120may perform various operations by controlling constituent elements of the hand-held device100. The control unit120may include an Application Processor (AP). The memory unit130may store data/parameters/programs/code/commands needed to drive the hand-held device100. The memory unit130may store input/output data/information. The power supply unit140amay supply power to the hand-held device100and include a wired/wireless charging circuit, a battery, etc. The interface unit140bmay support connection of the hand-held device100to other external devices.
The interface unit140bmay include various ports (e.g., an audio I/O port and a video I/O port) for connection with external devices. The I/O unit140cmay input or output video information/signals, audio information/signals, data, and/or information input by a user. The I/O unit140cmay include a camera, a microphone, a user input unit, a display unit140d, a speaker, and/or a haptic module. As an example, in the case of data communication, the I/O unit140cmay acquire information/signals (e.g., touch, text, voice, images, or video) input by a user and the acquired information/signals may be stored in the memory unit130. The communication unit110may convert the information/signals stored in the memory into radio signals and transmit the converted radio signals to other wireless devices directly or to a BS. The communication unit110may receive radio signals from other wireless devices or the BS and then restore the received radio signals into original information/signals. The restored information/signals may be stored in the memory unit130and may be output as various types (e.g., text, voice, images, video, or haptic) through the I/O unit140c. FIG.22shows a vehicle or an autonomous vehicle, based on an embodiment of the present disclosure. The vehicle or autonomous vehicle may be implemented by a mobile robot, a car, a train, a manned/unmanned Aerial Vehicle (AV), a ship, etc. Referring toFIG.22, a vehicle or autonomous vehicle100may include an antenna unit108, a communication unit110, a control unit120, a driving unit140a, a power supply unit140b, a sensor unit140c, and an autonomous driving unit140d. The antenna unit108may be configured as a part of the communication unit110. The blocks110/130/140ato140dcorrespond to the blocks110/130/140ofFIG.20, respectively. The communication unit110may transmit and receive signals (e.g., data and control signals) to and from external devices such as other vehicles, BSs (e.g., gNBs and road side units), and servers. The control unit120may perform various operations by controlling elements of the vehicle or the autonomous vehicle100. The control unit120may include an Electronic Control Unit (ECU). The driving unit140amay cause the vehicle or the autonomous vehicle100to drive on a road. The driving unit140amay include an engine, a motor, a powertrain, a wheel, a brake, a steering device, etc. The power supply unit140bmay supply power to the vehicle or the autonomous vehicle100and include a wired/wireless charging circuit, a battery, etc. The sensor unit140cmay acquire a vehicle state, ambient environment information, user information, etc. The sensor unit140cmay include an Inertial Measurement Unit (IMU) sensor, a collision sensor, a wheel sensor, a speed sensor, a slope sensor, a weight sensor, a heading sensor, a position module, a vehicle forward/backward sensor, a battery sensor, a fuel sensor, a tire sensor, a steering sensor, a temperature sensor, a humidity sensor, an ultrasonic sensor, an illumination sensor, a pedal position sensor, etc. The autonomous driving unit140dmay implement technology for maintaining a lane on which a vehicle is driving, technology for automatically adjusting speed, such as adaptive cruise control, technology for autonomously driving along a determined path, technology for driving by automatically setting a path if a destination is set, and the like. For example, the communication unit110may receive map data, traffic information data, etc. from an external server. 
The autonomous driving unit140dmay generate an autonomous driving path and a driving plan from the obtained data. The control unit120may control the driving unit140asuch that the vehicle or the autonomous vehicle100may move along the autonomous driving path according to the driving plan (e.g., speed/direction control). During autonomous driving, the communication unit110may aperiodically/periodically acquire recent traffic information data from the external server and acquire surrounding traffic information data from neighboring vehicles. During autonomous driving, the sensor unit140cmay obtain a vehicle state and/or surrounding environment information. The autonomous driving unit140dmay update the autonomous driving path and the driving plan based on the newly obtained data/information. The communication unit110may transfer information about a vehicle position, the autonomous driving path, and/or the driving plan to the external server. The external server may predict traffic information data using AI technology, etc., based on the information collected from vehicles or autonomous vehicles and provide the predicted traffic information data to the vehicles or the autonomous vehicles. Claims in the present description can be combined in various ways. For instance, technical features in method claims of the present description can be combined to be implemented or performed in an apparatus, and technical features in apparatus claims can be combined to be implemented or performed in a method. Further, technical features in method claim(s) and apparatus claim(s) can be combined to be implemented or performed in an apparatus. Further, technical features in method claim(s) and apparatus claim(s) can be combined to be implemented or performed in a method.
DETAILED DESCRIPTION FIG.1throughFIG.16, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device. The following documents and standards descriptions are hereby incorporated by reference into the present disclosure as if fully set forth herein: 3GPP TS 36.211 v16.1.0, “E-UTRA, Physical channels and modulation;” 3GPP TS 36.212 v16.1.0, “E-UTRA, Multiplexing and Channel coding;” 3GPP TS 36.213 v16.1.0, “E-UTRA, Physical Layer Procedures;” 3GPP TS 36.321 v16.1.0, “E-UTRA, Medium Access Control (MAC) protocol specification;” 3GPP TS 36.331 v16.1.0, “E-UTRA, Radio Resource Control (RRC) protocol specification;” 3GPP TR 22.891 v14.2.0; 3GPP TS 38.211 v16.1.0, “NR, Physical channels and modulation;” 3GPP TS 38.213 v16.1.0, “NR, Physical Layer Procedures for control;” 3GPP TS 38.214 v16.1.0, “NR, Physical layer procedures for data;” and 3GPP TS 38.212 v16.1.0, “NR, Multiplexing and channel coding.” Aspects, features, and advantages of the disclosure are readily apparent from the following detailed description, simply by illustrating a number of particular embodiments and implementations, including the best mode contemplated for carrying out the disclosure. The disclosure is also capable of other and different embodiments, and its several details can be modified in various obvious respects, all without departing from the spirit and scope of the disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive. The disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings. In the following, for brevity, both FDD and TDD are considered as the duplex method for both DL and UL signaling. Although exemplary descriptions and embodiments to follow assume orthogonal frequency division multiplexing (OFDM) or orthogonal frequency division multiple access (OFDMA), the present disclosure can be extended to other OFDM-based transmission waveforms or multiple access schemes such as filtered OFDM (F-OFDM). To meet the demand for wireless data traffic, which has increased since the deployment of 4G communication systems, efforts have been made to develop an improved 5G or pre-5G communication system. Therefore, the 5G or pre-5G communication system is also called a “beyond 4G network” or a “post LTE system.” The 5G communication system is considered to be implemented in higher frequency (mmWave) bands, e.g., 60 GHz bands, so as to accomplish higher data rates. To decrease propagation loss of the radio waves and increase the transmission coverage, beamforming, massive multiple-input multiple-output (MIMO), full dimensional MIMO (FD-MIMO), array antennas, analog beamforming, large scale antenna techniques, and the like are discussed in 5G communication systems.
In addition, in 5G communication systems, development for system network improvement is under way based on advanced small cells, cloud radio access networks (RANs), ultra-dense networks, device-to-device (D2D) communication, wireless backhaul communication, moving network, cooperative communication, coordinated multi-points (CoMP) transmission and reception, interference mitigation and cancellation and the like. In the 5G system, hybrid frequency shift keying and quadrature amplitude modulation (FQAM) and sliding window superposition coding (SWSC) as an adaptive modulation and coding (AMC) technique, and filter bank multi carrier (FBMC), non-orthogonal multiple access (NOMA), and sparse code multiple access (SCMA) as an advanced access technology have been developed. FIGS.1-4Bbelow describe various embodiments implemented in wireless communications systems and with the use of orthogonal frequency division multiplexing (OFDM) or orthogonal frequency division multiple access (OFDMA) communication techniques. The descriptions ofFIGS.1-3are not meant to imply physical or architectural limitations to the manner in which different embodiments may be implemented. Different embodiments of the present disclosure may be implemented in any suitably-arranged communications system. The present disclosure covers several components which can be used in conjunction or in combination with one another, or can operate as standalone schemes. FIG.1illustrates an example wireless network according to embodiments of the present disclosure. The embodiment of the wireless network shown inFIG.1is for illustration only. Other embodiments of the wireless network100could be used without departing from the scope of this disclosure. As shown inFIG.1, the wireless network includes a gNB101, a gNB102, and a gNB103. The gNB101communicates with the gNB102and the gNB103. The gNB101also communicates with at least one network130, such as the Internet, a proprietary Internet Protocol (IP) network, or other data network. The gNB102provides wireless broadband access to the network130for a first plurality of user equipments (UEs) within a coverage area120of the gNB102. The first plurality of UEs includes a UE111, which may be located in a small business (SB); a UE112, which may be located in an enterprise (E); a UE113, which may be located in a WiFi hotspot (HS); a UE114, which may be located in a first residence (R); a UE115, which may be located in a second residence (R); and a UE116, which may be a mobile device (M), such as a cell phone, a wireless laptop, a wireless PDA, or the like. The gNB103provides wireless broadband access to the network130for a second plurality of UEs within a coverage area125of the gNB103. The second plurality of UEs includes the UE115and the UE116. In some embodiments, one or more of the gNBs101-103may communicate with each other and with the UEs111-116using 5G, LTE, LTE-A, WiMAX, WiFi, or other wireless communication techniques. Depending on the network type, the term “base station” or “BS” can refer to any component (or collection of components) configured to provide wireless access to a network, such as transmit point (TP), transmit-receive point (TRP), an enhanced base station (eNodeB or eNB), a 5G base station (gNB), a macrocell, a femtocell, a WiFi access point (AP), or other wirelessly enabled devices. 
Base stations may provide wireless access in accordance with one or more wireless communication protocols, e.g., 5G 3GPP new radio interface/access (NR), long term evolution (LTE), LTE advanced (LTE-A), high speed packet access (HSPA), Wi-Fi 802.11a/b/g/n/ac, etc. For the sake of convenience, the terms “BS” and “TRP” are used interchangeably in this patent document to refer to network infrastructure components that provide wireless access to remote terminals. Also, depending on the network type, the term “user equipment” or “UE” can refer to any component such as “mobile station,” “subscriber station,” “remote terminal,” “wireless terminal,” “receive point,” or “user device.” For the sake of convenience, the terms “user equipment” and “UE” are used in this patent document to refer to remote wireless equipment that wirelessly accesses a BS, whether the UE is a mobile device (such as a mobile telephone or smartphone) or is normally considered a stationary device (such as a desktop computer or vending machine). Dotted lines show the approximate extents of the coverage areas120and125, which are shown as approximately circular for the purposes of illustration and explanation only. It should be clearly understood that the coverage areas associated with gNBs, such as the coverage areas120and125, may have other shapes, including irregular shapes, depending upon the configuration of the gNBs and variations in the radio environment associated with natural and man-made obstructions. As described in more detail below, one or more of the UEs111-116include circuitry, programming, or a combination thereof, for receiving aperiodic CSI-RS to determine and report CSI for communications in a wireless communication system. In certain embodiments, one or more of the gNBs101-103includes circuitry, programming, or a combination thereof, for transmitting aperiodic CSI-RS to acquire CSI in a wireless communication system. AlthoughFIG.1illustrates one example of a wireless network, various changes may be made toFIG.1. For example, the wireless network could include any number of gNBs and any number of UEs in any suitable arrangement. Also, the gNB101could communicate directly with any number of UEs and provide those UEs with wireless broadband access to the network130. Similarly, each gNB102-103could communicate directly with the network130and provide UEs with direct wireless broadband access to the network130. Further, the gNBs101,102, and/or103could provide access to other or additional external networks, such as external telephone networks or other types of data networks. FIG.2illustrates an example gNB102according to embodiments of the present disclosure. The embodiment of the gNB102illustrated inFIG.2is for illustration only, and the gNBs101and103ofFIG.1could have the same or similar configuration. However, gNBs come in a wide variety of configurations, andFIG.2does not limit the scope of this disclosure to any particular implementation of a gNB. As shown inFIG.2, the gNB102includes multiple antennas205a-205n, multiple RF transceivers210a-210n, transmit (TX) processing circuitry215, and receive (RX) processing circuitry220. The gNB102also includes a controller/processor225, a memory230, and a backhaul or network interface235. The RF transceivers210a-210nreceive, from the antennas205a-205n, incoming RF signals, such as signals transmitted by UEs in the network100. The RF transceivers210a-210ndown-convert the incoming RF signals to generate IF or baseband signals.
The IF or baseband signals are sent to the RX processing circuitry220, which generates processed baseband signals by filtering, decoding, and/or digitizing the baseband or IF signals. The RX processing circuitry220transmits the processed baseband signals to the controller/processor225for further processing. The TX processing circuitry215receives analog or digital data (such as voice data, web data, e-mail, or interactive video game data) from the controller/processor225. The TX processing circuitry215encodes, multiplexes, and/or digitizes the outgoing baseband data to generate processed baseband or IF signals. The RF transceivers210a-210nreceive the outgoing processed baseband or IF signals from the TX processing circuitry215and up-convert the baseband or IF signals to RF signals that are transmitted via the antennas205a-205n. The controller/processor225can include one or more processors or other processing devices that control the overall operation of the gNB102. For example, the controller/processor225could control the reception of forward channel signals and the transmission of reverse channel signals by the RF transceivers210a-210n, the RX processing circuitry220, and the TX processing circuitry215in accordance with well-known principles. The controller/processor225could support additional functions as well, such as more advanced wireless communication functions. For instance, the controller/processor225could support beam forming or directional routing operations in which outgoing signals from multiple antennas205a-205nare weighted differently to effectively steer the outgoing signals in a desired direction. Any of a wide variety of other functions could be supported in the gNB102by the controller/processor225. The controller/processor225is also capable of executing programs and other processes resident in the memory230, such as an OS. The controller/processor225can move data into or out of the memory230as required by an executing process. The controller/processor225is also coupled to the backhaul or network interface235. The backhaul or network interface235allows the gNB102to communicate with other devices or systems over a backhaul connection or over a network. The interface235could support communications over any suitable wired or wireless connection(s). For example, when the gNB102is implemented as part of a cellular communication system (such as one supporting 5G, LTE, or LTE-A), the interface235could allow the gNB102to communicate with other gNBs over a wired or wireless backhaul connection. When the gNB102is implemented as an access point, the interface235could allow the gNB102to communicate over a wired or wireless local area network or over a wired or wireless connection to a larger network (such as the Internet). The interface235includes any suitable structure supporting communications over a wired or wireless connection, such as an Ethernet or RF transceiver. The memory230is coupled to the controller/processor225. Part of the memory230could include a RAM, and another part of the memory230could include a Flash memory or other ROM. AlthoughFIG.2illustrates one example of gNB102, various changes may be made toFIG.2. For example, the gNB102could include any number of each component shown inFIG.2. As a particular example, an access point could include a number of interfaces235, and the controller/processor225could support routing functions to route data between different network addresses.
As another particular example, while shown as including a single instance of TX processing circuitry215and a single instance of RX processing circuitry220, the gNB102could include multiple instances of each (such as one per RF transceiver). Also, various components inFIG.2could be combined, further subdivided, or omitted and additional components could be added according to particular needs. FIG.3illustrates an example UE116according to embodiments of the present disclosure. The embodiment of the UE116illustrated inFIG.3is for illustration only, and the UEs111-115ofFIG.1could have the same or similar configuration. However, UEs come in a wide variety of configurations, andFIG.3does not limit the scope of this disclosure to any particular implementation of a UE. As shown inFIG.3, the UE116includes an antenna305, a radio frequency (RF) transceiver310, TX processing circuitry315, a microphone320, and receive (RX) processing circuitry325. The UE116also includes a speaker330, a processor340, an input/output (I/O) interface (IF)345, a touchscreen350, a display355, and a memory360. The memory360includes an operating system (OS)361and one or more applications362. The RF transceiver310receives, from the antenna305, an incoming RF signal transmitted by a gNB of the network100. The RF transceiver310down-converts the incoming RF signal to generate an intermediate frequency (IF) or baseband signal. The IF or baseband signal is sent to the RX processing circuitry325, which generates a processed baseband signal by filtering, decoding, and/or digitizing the baseband or IF signal. The RX processing circuitry325transmits the processed baseband signal to the speaker330(such as for voice data) or to the processor340for further processing (such as for web browsing data). The TX processing circuitry315receives analog or digital voice data from the microphone320or other outgoing baseband data (such as web data, e-mail, or interactive video game data) from the processor340. The TX processing circuitry315encodes, multiplexes, and/or digitizes the outgoing baseband data to generate a processed baseband or IF signal. The RF transceiver310receives the outgoing processed baseband or IF signal from the TX processing circuitry315and up-converts the baseband or IF signal to an RF signal that is transmitted via the antenna305. The processor340can include one or more processors or other processing devices and execute the OS361stored in the memory360in order to control the overall operation of the UE116. For example, the processor340could control the reception of forward channel signals and the transmission of reverse channel signals by the RF transceiver310, the RX processing circuitry325, and the TX processing circuitry315in accordance with well-known principles. In some embodiments, the processor340includes at least one microprocessor or microcontroller. The processor340is also capable of executing other processes and programs resident in the memory360, such as processes for CSI-RS measurement and for CSI feedback on uplink channel. The processor340can move data into or out of the memory360as required by an executing process. In some embodiments, the processor340is configured to execute the applications362based on the OS361or in response to signals received from gNBs or an operator. The processor340is also coupled to the I/O interface345, which provides the UE116with the ability to connect to other devices, such as laptop computers and handheld computers. 
The I/O interface345is the communication path between these accessories and the processor340. The processor340is also coupled to the touchscreen350and the display355. The operator of the UE116can use the touchscreen350to enter data into the UE116. The display355may be a liquid crystal display, light emitting diode display, or other display capable of rendering text and/or at least limited graphics, such as from web sites. The memory360is coupled to the processor340. Part of the memory360could include a random access memory (RAM), and another part of the memory360could include a Flash memory or other read-only memory (ROM). AlthoughFIG.3illustrates one example of UE116, various changes may be made toFIG.3. For example, various components inFIG.3could be combined, further subdivided, or omitted and additional components could be added according to particular needs. As a particular example, the processor340could be divided into multiple processors, such as one or more central processing units (CPUs) and one or more graphics processing units (GPUs). Also, whileFIG.3illustrates the UE116configured as a mobile telephone or smartphone, UEs could be configured to operate as other types of mobile or stationary devices. FIG.4Ais a high-level diagram of transmit path circuitry. For example, the transmit path circuitry may be used for an orthogonal frequency division multiple access (OFDMA) communication.FIG.4Bis a high-level diagram of receive path circuitry. For example, the receive path circuitry may be used for an orthogonal frequency division multiple access (OFDMA) communication. InFIGS.4A and4B, for downlink communication, the transmit path circuitry may be implemented in a base station (gNB)102or a relay station, and the receive path circuitry may be implemented in a user equipment (e.g., user equipment116ofFIG.1). In other examples, for uplink communication, the receive path circuitry450may be implemented in a base station (e.g., gNB102ofFIG.1) or a relay station, and the transmit path circuitry may be implemented in a user equipment (e.g., user equipment116ofFIG.1). Transmit path circuitry comprises channel coding and modulation block405, serial-to-parallel (S-to-P) block410, Size N Inverse Fast Fourier Transform (IFFT) block415, parallel-to-serial (P-to-S) block420, add cyclic prefix block425, and up-converter (UC)430. Receive path circuitry450comprises down-converter (DC)455, remove cyclic prefix block460, serial-to-parallel (S-to-P) block465, Size N Fast Fourier Transform (FFT) block470, parallel-to-serial (P-to-S) block475, and channel decoding and demodulation block480. At least some of the components inFIGS.4A400and4B450may be implemented in software, while other components may be implemented by configurable hardware or a mixture of software and configurable hardware. In particular, it is noted that the FFT blocks and the IFFT blocks described in this disclosure document may be implemented as configurable software algorithms, where the value of Size N may be modified according to the implementation. Furthermore, although this disclosure is directed to an embodiment that implements the Fast Fourier Transform and the Inverse Fast Fourier Transform, this is by way of illustration only and may not be construed to limit the scope of the disclosure. 
It may be appreciated that in an alternate embodiment of the present disclosure, the Fast Fourier Transform functions and the Inverse Fast Fourier Transform functions may easily be replaced by discrete Fourier transform (DFT) functions and inverse discrete Fourier transform (IDFT) functions, respectively. It may be appreciated that for DFT and IDFT functions, the value of the N variable may be any integer number (i.e., 1, 2, 3, 4, etc.), while for FFT and IFFT functions, the value of the N variable may be any integer number that is a power of two (i.e., 1, 2, 4, 8, 16, etc.). In transmit path circuitry400, channel coding and modulation block405receives a set of information bits, applies coding (e.g., LDPC coding) and modulates (e.g., quadrature phase shift keying (QPSK) or quadrature amplitude modulation (QAM)) the input bits to produce a sequence of frequency-domain modulation symbols. Serial-to-parallel block410converts (i.e., de-multiplexes) the serial modulated symbols to parallel data to produce N parallel symbol streams where N is the IFFT/FFT size used in BS102and UE116. Size N IFFT block415then performs an IFFT operation on the N parallel symbol streams to produce time-domain output signals. Parallel-to-serial block420converts (i.e., multiplexes) the parallel time-domain output symbols from Size N IFFT block415to produce a serial time-domain signal. Add cyclic prefix block425then inserts a cyclic prefix into the time-domain signal. Finally, up-converter430modulates (i.e., up-converts) the output of add cyclic prefix block425to RF frequency for transmission via a wireless channel. The signal may also be filtered at baseband before conversion to RF frequency. The transmitted RF signal arrives at the UE116after passing through the wireless channel, and reverse operations to those at gNB102are performed. Down-converter455down-converts the received signal to baseband frequency, and remove cyclic prefix block460removes the cyclic prefix to produce the serial time-domain baseband signal. Serial-to-parallel block465converts the time-domain baseband signal to parallel time-domain signals. Size N FFT block470then performs an FFT algorithm to produce N parallel frequency-domain signals. Parallel-to-serial block475converts the parallel frequency-domain signals to a sequence of modulated data symbols. Channel decoding and demodulation block480demodulates and then decodes the modulated symbols to recover the original input data stream. Each of gNBs101-103may implement a transmit path that is analogous to transmitting in the downlink to user equipment111-116and may implement a receive path that is analogous to receiving in the uplink from user equipment111-116. Similarly, each one of user equipment111-116may implement a transmit path corresponding to the architecture for transmitting in the uplink to gNBs101-103and may implement a receive path corresponding to the architecture for receiving in the downlink from gNBs101-103. The 5G communication system use cases have been identified and described. Those use cases can be roughly categorized into three different groups. In one example, enhanced mobile broadband (eMBB) is associated with a high bits/sec requirement and less stringent latency and reliability requirements. In another example, ultra-reliable and low-latency (URLL) communication is associated with a less stringent bits/sec requirement.
In yet another example, massive machine type communication (mMTC) addresses scenarios in which the number of devices can be as many as 100,000 to 1 million per km2, but the reliability/throughput/latency requirements could be less stringent. This scenario may also involve a power efficiency requirement, in that the battery consumption should be minimized as much as possible. A communication system includes a downlink (DL) that conveys signals from transmission points such as base stations (BSs) or NodeBs to user equipments (UEs) and an Uplink (UL) that conveys signals from UEs to reception points such as NodeBs. A UE, also commonly referred to as a terminal or a mobile station, may be fixed or mobile and may be a cellular phone, a personal computer device, or an automated device. An eNodeB, which is generally a fixed station, may also be referred to as an access point or other equivalent terminology. For LTE systems, a NodeB is often referred to as an eNodeB. In a communication system, such as an LTE system, DL signals can include data signals conveying information content, control signals conveying DL control information (DCI), and reference signals (RS) that are also known as pilot signals. An eNodeB transmits data information through a physical DL shared channel (PDSCH). An eNodeB transmits DCI through a physical DL control channel (PDCCH) or an Enhanced PDCCH (EPDCCH). An eNodeB transmits acknowledgement information in response to data transport block (TB) transmission from a UE in a physical hybrid ARQ indicator channel (PHICH). An eNodeB transmits one or more of multiple types of RS including a UE-common RS (CRS), a channel state information RS (CSI-RS), or a demodulation RS (DMRS). A CRS is transmitted over a DL system bandwidth (BW) and can be used by UEs to obtain a channel estimate to demodulate data or control information or to perform measurements. To reduce CRS overhead, an eNodeB may transmit a CSI-RS with a smaller density in the time and/or frequency domain than a CRS. DMRS can be transmitted only in the BW of a respective PDSCH or EPDCCH and a UE can use the DMRS to demodulate data or control information in a PDSCH or an EPDCCH, respectively. A transmission time interval for DL channels is referred to as a subframe and can have, for example, duration of 1 millisecond. DL signals also include transmission of a logical channel that carries system control information. A broadcast control channel (BCCH) is mapped to either a transport channel referred to as a broadcast channel (BCH) when the DL signals convey a master information block (MIB) or to a DL shared channel (DL-SCH) when the DL signals convey a System Information Block (SIB). Most system information is included in different SIBs that are transmitted using DL-SCH. A presence of system information on a DL-SCH in a subframe can be indicated by a transmission of a corresponding PDCCH conveying a codeword with a cyclic redundancy check (CRC) scrambled with system information RNTI (SI-RNTI). Alternatively, scheduling information for a SIB transmission can be provided in an earlier SIB and scheduling information for the first SIB (SIB-1) can be provided by the MIB. DL resource allocation is performed in a unit of subframe and a group of physical resource blocks (PRBs). A transmission BW includes frequency resource units referred to as resource blocks (RBs). Each RB includes $N_{sc}^{RB}$ sub-carriers, or resource elements (REs), such as 12 REs. A unit of one RB over one subframe is referred to as a PRB.
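As a small numeric illustration of this resource-grid bookkeeping (the allocation and symbol-count formulas are restated in the text that follows), consider the sketch below. The numeric values, including the 7-symbol slot corresponding to a normal cyclic prefix, are examples only.

```python
N_SC_RB = 12   # subcarriers (REs) per RB in frequency

def allocation_res(m_rb: int) -> int:
    """REs in frequency for an allocation of m_rb RBs: M_sc = m_rb * N_sc_RB."""
    return m_rb * N_SC_RB

def ul_data_symbols(n_symb_ul: int = 7, srs_in_last_symbol: bool = False) -> int:
    """Subframe symbols available for data/UCI/DMRS transmission:
    N_symb = 2 * (N_symb_UL - 1) - N_SRS, over the two slots of a UL subframe
    (the -1 per slot conventionally accounts for the DMRS symbol)."""
    n_srs = 1 if srs_in_last_symbol else 0
    return 2 * (n_symb_ul - 1) - n_srs

print(allocation_res(50))                            # 600 REs per symbol for 50 RBs
print(ul_data_symbols(7, srs_in_last_symbol=True))   # 11 symbols remain for data
```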
A UE can be allocated $M_{PDSCH}$ RBs for a total of $M_{sc}^{PDSCH} = M_{PDSCH} \cdot N_{sc}^{RB}$ REs for the PDSCH transmission BW. UL signals can include data signals conveying data information, control signals conveying UL control information (UCI), and UL RS. UL RS includes DMRS and Sounding RS (SRS). A UE transmits DMRS only in a BW of a respective PUSCH or PUCCH. An eNodeB can use a DMRS to demodulate data signals or UCI signals. A UE transmits SRS to provide an eNodeB with an UL CSI. A UE transmits data information or UCI through a respective physical UL shared channel (PUSCH) or a Physical UL control channel (PUCCH). If a UE needs to transmit data information and UCI in a same UL subframe, the UE may multiplex both in a PUSCH. UCI includes Hybrid Automatic Repeat request acknowledgement (HARQ-ACK) information, indicating correct (ACK) or incorrect (NACK) detection for a data TB in a PDSCH or absence of a PDCCH detection (DTX), scheduling request (SR) indicating whether a UE has data in the UE's buffer, rank indicator (RI), and channel state information (CSI) enabling an eNodeB to perform link adaptation for PDSCH transmissions to a UE. HARQ-ACK information is also transmitted by a UE in response to a detection of a PDCCH/EPDCCH indicating a release of semi-persistently scheduled PDSCH. An UL subframe includes two slots. Each slot includes $N_{symb}^{UL}$ symbols for transmitting data information, UCI, DMRS, or SRS. A frequency resource unit of an UL system BW is an RB. A UE is allocated $N_{RB}$ RBs for a total of $N_{RB} \cdot N_{sc}^{RB}$ REs for a transmission BW. For a PUCCH, $N_{RB} = 1$. A last subframe symbol can be used to multiplex SRS transmissions from one or more UEs. A number of subframe symbols that are available for data/UCI/DMRS transmission is $N_{symb} = 2 \cdot (N_{symb}^{UL} - 1) - N_{SRS}$, where $N_{SRS} = 1$ if a last subframe symbol is used to transmit SRS and $N_{SRS} = 0$ otherwise. FIG.5illustrates a transmitter block diagram500for a PDSCH in a subframe according to embodiments of the present disclosure. The embodiment of the transmitter block diagram500illustrated inFIG.5is for illustration only. One or more of the components illustrated inFIG.5can be implemented in specialized circuitry configured to perform the noted functions or one or more of the components can be implemented by one or more processors executing instructions to perform the noted functions.FIG.5does not limit the scope of this disclosure to any particular implementation of the transmitter block diagram500. As shown inFIG.5, information bits510are encoded by encoder520, such as a turbo encoder, and modulated by modulator530, for example using quadrature phase shift keying (QPSK) modulation. A serial to parallel (S/P) converter540generates M modulation symbols that are subsequently provided to a mapper550to be mapped to REs selected by a transmission BW selection unit555for an assigned PDSCH transmission BW, unit560applies an Inverse fast Fourier transform (IFFT), the output is then serialized by a parallel to serial (P/S) converter570to create a time domain signal, filtering is applied by filter580, and a signal transmitted590. Additional functionalities, such as data scrambling, cyclic prefix insertion, time windowing, interleaving, and others are well known in the art and are not shown for brevity. FIG.6illustrates a receiver block diagram600for a PDSCH in a subframe according to embodiments of the present disclosure. The embodiment of the diagram600illustrated inFIG.6is for illustration only.
One or more of the components illustrated inFIG.6can be implemented in specialized circuitry configured to perform the noted functions or one or more of the components can be implemented by one or more processors executing instructions to perform the noted functions.FIG.6does not limit the scope of this disclosure to any particular implementation of the diagram600. As shown inFIG.6, a received signal610is filtered by filter620, REs630for an assigned reception BW are selected by BW selector635, unit640applies a fast Fourier transform (FFT), and an output is serialized by a parallel-to-serial converter650. Subsequently, a demodulator660coherently demodulates data symbols by applying a channel estimate obtained from a DMRS or a CRS (not shown), and a decoder670, such as a turbo decoder, decodes the demodulated data to provide an estimate of the information data bits680. Additional functionalities such as time-windowing, cyclic prefix removal, de-scrambling, channel estimation, and de-interleaving are not shown for brevity. FIG.7illustrates a transmitter block diagram700for a PUSCH in a subframe according to embodiments of the present disclosure. The embodiment of the block diagram700illustrated inFIG.7is for illustration only. One or more of the components illustrated inFIG.7can be implemented in specialized circuitry configured to perform the noted functions or one or more of the components can be implemented by one or more processors executing instructions to perform the noted functions.FIG.7does not limit the scope of this disclosure to any particular implementation of the block diagram700. As shown inFIG.7, information data bits710are encoded by encoder720, such as a turbo encoder, and modulated by modulator730. A discrete Fourier transform (DFT) unit740applies a DFT on the modulated data bits, REs750corresponding to an assigned PUSCH transmission BW are selected by transmission BW selection unit755, unit760applies an IFFT and, after a cyclic prefix insertion (not shown), filtering is applied by filter770and a signal transmitted780. FIG.8illustrates a receiver block diagram800for a PUSCH in a subframe according to embodiments of the present disclosure. The embodiment of the block diagram800illustrated inFIG.8is for illustration only. One or more of the components illustrated inFIG.8can be implemented in specialized circuitry configured to perform the noted functions or one or more of the components can be implemented by one or more processors executing instructions to perform the noted functions.FIG.8does not limit the scope of this disclosure to any particular implementation of the block diagram800. As shown inFIG.8, a received signal810is filtered by filter820. Subsequently, after a cyclic prefix is removed (not shown), unit830applies a FFT, REs840corresponding to an assigned PUSCH reception BW are selected by a reception BW selector845, unit850applies an inverse DFT (IDFT), a demodulator860coherently demodulates data symbols by applying a channel estimate obtained from a DMRS (not shown), and a decoder870, such as a turbo decoder, decodes the demodulated data to provide an estimate of the information data bits880. FIG.9illustrates an example antenna blocks900according to embodiments of the present disclosure. The embodiment of the antenna blocks900illustrated inFIG.9is for illustration only.FIG.9does not limit the scope of this disclosure to any particular implementation of the antenna blocks900.
The 3GPP LTE and NR specifications support up to 32 CSI-RS antenna ports which enable an eNB to be equipped with a large number of antenna elements (such as 64 or 128). In this case, a plurality of antenna elements is mapped onto one CSI-RS port. For next generation cellular systems such as 5G, the maximum number of CSI-RS ports can either remain the same or increase. For mmWave bands, although the number of antenna elements can be larger for a given form factor, the number of CSI-RS ports—which can correspond to the number of digitally precoded ports—tends to be limited due to hardware constraints (such as the feasibility of installing a large number of ADCs/DACs at mmWave frequencies) as illustrated inFIG.9. In this case, one CSI-RS port is mapped onto a large number of antenna elements which can be controlled by a bank of analog phase shifters901. One CSI-RS port can then correspond to one sub-array which produces a narrow analog beam through analog beamforming905. This analog beam can be configured to sweep across a wider range of angles920by varying the phase shifter bank across symbols or subframes. The number of sub-arrays (equal to the number of RF chains) is the same as the number of CSI-RS ports $N_{CSI\text{-}PORT}$. A digital beamforming unit910performs a linear combination across $N_{CSI\text{-}PORT}$ analog beams to further increase precoding gain. While analog beams are wideband (hence not frequency-selective), digital precoding can be varied across frequency sub-bands or resource blocks. Receiver operation can be conceived analogously. The UL SU-MIMO transmission is supported using a codebook-based transmission scheme. In the LTE UL codebook, pre-coders with antenna selection have been supported in order to keep the peak-to-average power ratio (PAPR) low and the cubic metric (CM) small for rank > 1. Antenna selection offers performance improvement in some scenarios, especially for SC-FDMA based UL in LTE. In 5G NR systems, two UL transmission schemes are supported, namely codebook-based and non-codebook-based. The codebook-based transmission scheme is based on an UL codebook similar to LTE. The NR UL codebook, however, is dependent on whether or not the UE is capable of transmitting UL data (PUSCH) using all of, or a subset of, antenna ports. For example, the UE can be capable of at least one of full-coherent (all antenna ports), partial-coherent (a subset of antenna ports), or non-coherent UL transmission (a single antenna port) to transmit a layer in UL. The 5G NR UL codebook has been designed keeping this UE coherence capability in mind. In both LTE and NR, an UL grant (containing DCI format 4 for LTE and DCI format 0_1 for NR) includes a single TPMI field (along with TRI) which indicates the single precoding vector or matrix (from the UL codebook) a UE shall use for the scheduled UL transmission. Therefore, when multiple PRBs are allocated to the UE, a single precoding matrix indicated by the PMI implies that wideband UL precoding is utilized. Despite its simplicity, this is clearly sub-optimal since the typical UL channel is frequency-selective and a UE is frequency-scheduled to transmit using multiple PRBs. Yet another drawback of UL SU-MIMO is the lack of support for scenarios where accurate UL-CSI is unavailable at the eNB or gNB (which is important for properly operating codebook-based transmission). This situation can happen in scenarios with high-mobility UEs or bursty inter-cell interference in cells with poor isolation.
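To make the wideband-precoding limitation concrete, the following sketch scores a toy rank-1 codebook over a frequency-selective channel, once with a single wideband precoder for the entire allocation and once with a hypothetical per-PRB choice. The two-port codebook and the channel model are illustrative assumptions, not the NR UL codebook or TPMI tables.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy rank-1 codebook for 2 antenna ports (an assumption, not an actual TPMI table).
CODEBOOK = [np.array([1, 1]) / np.sqrt(2),
            np.array([1, -1]) / np.sqrt(2),
            np.array([1, 1j]) / np.sqrt(2),
            np.array([1, -1j]) / np.sqrt(2)]

N_PRB = 8
# Frequency-selective channel: one length-2 channel vector per PRB.
H = rng.standard_normal((N_PRB, 2)) + 1j * rng.standard_normal((N_PRB, 2))

def gain(h, w):
    """Effective channel power |h . w|^2 for precoder w on one PRB."""
    return abs(np.dot(h, w)) ** 2

# Wideband TPMI: one precoder for the whole allocation, chosen by the sum metric.
wb = max(CODEBOOK, key=lambda w: sum(gain(h, w) for h in H))
wb_gain = sum(gain(h, wb) for h in H)

# Hypothetical per-subband precoding: the best precoder on each PRB separately.
sb_gain = sum(max(gain(h, w) for w in CODEBOOK) for h in H)

print(f"wideband: {wb_gain:.2f}  per-PRB: {sb_gain:.2f}")  # per-PRB >= wideband
```

Because the per-PRB selection can never do worse than the single wideband choice, the gap between the two totals is a rough proxy for what frequency-selective precoding could recover.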
Therefore, there is a need for designing new components to enable more efficient support for UL MIMO for the following reasons. First, the support for frequency-selective (or subband) precoding for UL MIMO is desired whenever possible. Second, UL MIMO should offer competitive performance even when accurate UL-CSI is unavailable at the eNB. Third, the proposed UL MIMO solution should be able to exploit UL-DL reciprocity where CSI-RS is utilized by the UE to provide UL-CSI estimation for TDD and FDD (with partial UL-DL reciprocity) scenarios. As described in U.S. patent application Ser. No. 15/491,927, filed Apr. 19, 2017 and entitled “Method and Apparatus for Enabling Uplink MIMO,” which is incorporated herein by reference in its entirety, such efficient UL MIMO operations and components have been proposed. Similar to LTE, MIMO has been identified as an essential feature for 5G NR in order to achieve high system throughput requirements. One of the key components of a MIMO transmission scheme is the accurate CSI acquisition at the eNB (or TRP). For MU-MIMO, in particular, the availability of accurate CSI is necessary in order to guarantee high MU performance. For TDD systems, the CSI can be acquired using the SRS transmission (from a UE) relying on the channel reciprocity. For FDD systems, on the other hand, it can be acquired using the CSI-RS transmission from the eNB, and CSI-RS measurement and CSI feedback from the UE. In NR, two CSI reporting mechanisms are supported: Type I for low resolution CSI reporting and Type II for high resolution CSI reporting. In this disclosure, the term “measurement RS” is used to denote SRS or CSI-RS used for CSI measurement/reporting. The measurement RS (SRS or CSI-RS) can be dynamically triggered by the NW/gNB (e.g., via DCI in case of aperiodic RS), preconfigured with a certain time-domain behavior (such as periodicity and offset, in case of periodic RS), or a combination of such pre-configuration and activation/deactivation (in case of semi-persistent RS). FIG.10illustrates an aperiodic CSI-RS measurement and aperiodic CSI reporting operation1000according to embodiments of the present disclosure. The embodiment of the aperiodic CSI-RS measurement and aperiodic CSI reporting operation1000illustrated inFIG.10is for illustration only.FIG.10does not limit the scope of this disclosure to any particular implementation of the aperiodic CSI-RS measurement and aperiodic CSI reporting operation1000. When the measurement RS is CSI-RS, an aperiodic CSI-RS transmission linked with an aperiodic CSI reporting is triggered via the CSI request field in DCI carried on PDCCH. In one example illustrated inFIG.10, an aperiodic CSI-RS measurement and aperiodic CSI reporting operation1000starts with the gNB/NW signaling to a UE an aperiodic CSI-RS (AP-CSI-RS) trigger or indication (step1001). This trigger or indication can be included in a DCI (either UL-related or DL-related, either separately or jointly signaled with an aperiodic CSI request/trigger) and indicate transmission of AP-CSI-RS in a same (zero time offset) or later slot/sub-frame (>0 time offset). Upon receiving the AP-CSI-RS transmitted by the gNB/NW (step1002), the UE measures the AP-CSI-RS and, in turn, calculates and reports an aperiodic CSI (step1003) comprising, for example, all or a subset of RI, CQI, PMI, LI, and CRI. Upon receiving the CSI report from the UE, the NW can use the CSI report for data (PDSCH) transmission (step1004), and the UE can receive the data (PDSCH) transmission (step1005).
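The slot-level timing of this five-step exchange can be sketched as follows; the parameter names (csirs_offset, report_delay) are illustrative placeholders rather than specification field names.

```python
def ap_csi_timeline(trigger_slot: int, csirs_offset: int, report_delay: int) -> dict:
    """Slot-level timeline for the FIG.10 exchange: DCI trigger (step 1001),
    AP-CSI-RS transmission/measurement (step 1002), CSI report (step 1003).
    A zero csirs_offset means the AP-CSI-RS arrives in the triggering slot."""
    csirs_slot = trigger_slot + csirs_offset
    report_slot = csirs_slot + report_delay
    return {"dci_trigger": trigger_slot,
            "ap_csi_rs": csirs_slot,
            "csi_report": report_slot}

print(ap_csi_timeline(trigger_slot=10, csirs_offset=0, report_delay=4))
# {'dci_trigger': 10, 'ap_csi_rs': 10, 'csi_report': 14}
```

When the PDCCH and the CSI-RS use different subcarrier spacings, the numerology in which such an offset is counted becomes ambiguous, which is exactly the issue taken up next.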
Let μ_CSI-RS and μ_PDCCH be the subcarrier spacing (SCS) configurations for CSI-RS and PDCCH, respectively. In one example, μ_CSI-RS and μ_PDCCH take a value from {0, 1, 2, 3, 4}, which corresponds to (or indicates) the subcarrier spacing values {15 kHz, 30 kHz, 60 kHz, 120 kHz, 240 kHz}. For subcarrier spacing configuration μ, slots are numbered n_s^μ ∈ {0, ..., N_slot^{subframe,μ} − 1} in increasing order within a subframe and n_{s,f}^μ ∈ {0, ..., N_slot^{frame,μ} − 1} in increasing order within a frame. There are N_symb^slot consecutive OFDM symbols in a slot, where N_symb^slot depends on the cyclic prefix as given by Table 1 and Table 2. The start of slot n_s^μ in a subframe is aligned in time with the start of OFDM symbol n_s^μ · N_symb^slot in the same subframe.

TABLE 1
Number of OFDM symbols per slot, slots per frame, and slots per subframe for normal cyclic prefix.
μ    N_symb^slot    N_slot^{frame,μ}    N_slot^{subframe,μ}
0    14             10                  1
1    14             20                  2
2    14             40                  4
3    14             80                  8
4    14             160                 16

TABLE 2
Number of OFDM symbols per slot, slots per frame, and slots per subframe for extended cyclic prefix.
μ    N_symb^slot    N_slot^{frame,μ}    N_slot^{subframe,μ}
2    12             40                  4

When μ_CSI-RS = μ_PDCCH, the numerologies of PDCCH and CSI-RS are the same; hence the time offset for the AP-CSI-RS transmission, as shown in FIG. 10, is the same in the two numerologies. When μ_CSI-RS ≠ μ_PDCCH, however, the numerologies of PDCCH and CSI-RS are different; hence the time offset for the AP-CSI-RS transmission, as shown in FIG. 10, can only be expressed in one of the two numerologies. It is unclear which of the two numerologies is used for the time offset, and what additional steps are required in this case of mixed numerologies. This disclosure proposes example embodiments to address these questions.

In one embodiment 1, for each aperiodic CSI-RS resource in a CSI-RS resource set associated with each CSI triggering state, the UE is indicated the quasi co-location (QCL) configuration of quasi co-location RS source(s) and quasi co-location type(s), as described in NR, through higher layer signaling of qcl-info, which contains a list of references to TCI-States for the aperiodic CSI-RS resources associated with the CSI triggering state. If a TCI-State referred to in the list is configured with a reference to an RS associated with 'QCL-TypeD', that RS may be an SS/PBCH block located in the same or a different CC/DL BWP, or a CSI-RS resource configured as periodic or semi-persistent located in the same or a different CC/DL BWP.

The UE applies the QCL assumption when receiving the aperiodic CSI-RS based on a condition on the scheduling offset (S) between the last symbol of the PDCCH carrying the triggering DCI and the first symbol of the aperiodic CSI-RS resources in an NZP-CSI-RS-ResourceSet configured without the higher layer parameter trs-Info and without the higher layer parameter repetition. At least one of the following sub-embodiments can be used. Note that the unit of the scheduling offset (S) is OFDM symbol(s).

In sub-embodiment 1A, the UE does not expect the SCS associated with the PDCCH carrying the triggering DCI to be greater than the CSI-RS SCS, i.e., μ_PDCCH ≤ μ_CSI-RS, and the scheduling offset is defined in the numerology of the aperiodic CSI-RS, μ_CSI-RS. When the scheduling offset is smaller than a threshold α, i.e., S < α: if there is any other DL signal with an indicated TCI state in the same symbols as the CSI-RS, the UE applies the QCL assumption of the other DL signal also when receiving the aperiodic CSI-RS.
The other DL signal refers to: a PDSCH scheduled with an offset larger than or equal to the threshold timeDurationForQCL, as defined in the NR specification; an aperiodic CSI-RS scheduled with an offset larger than or equal to α, when the UE-reported threshold beamSwitchTiming is one of the values {14, 28, 48}; a periodic CSI-RS; or a semi-persistent CSI-RS. Else, when receiving the aperiodic CSI-RS, the UE applies the QCL assumption used for the CORESET associated with a monitored search space with the lowest CORESET-ID in the latest slot in which one or more CORESETs within the active BWP of the serving cell are monitored.

When the scheduling offset is equal to or greater than the threshold α, i.e., S ≥ α, the UE is expected to apply the QCL assumptions in the indicated TCI states for the aperiodic CSI-RS resources in the CSI triggering state indicated by the CSI trigger field in DCI.

The threshold α is determined according to at least one of the following examples.

In one example 1A-1, the threshold α = Y + d, where
Y is the UE-reported threshold beamSwitchTiming, as defined in the NR specification, which takes a value from a set including {14, 28, 48};
d = 0 if the PDCCH SCS is equal to the CSI-RS SCS (μ_PDCCH = μ_CSI-RS), and d = 14 · 2^μ_CSI-RS / 2^μ_PDCCH, or ⌈14 · 2^μ_CSI-RS / 2^μ_PDCCH⌉, or ⌊14 · 2^μ_CSI-RS / 2^μ_PDCCH⌋, or 14 · ⌈2^μ_CSI-RS / 2^μ_PDCCH⌉, or 14 · ⌊2^μ_CSI-RS / 2^μ_PDCCH⌋ otherwise.

In one example 1A-2, the threshold α = Y × d, where
Y is defined in example 1A-1;
d = 2^μ_CSI-RS / 2^μ_PDCCH = 2^(μ_CSI-RS − μ_PDCCH), or d = ⌈2^μ_CSI-RS / 2^μ_PDCCH⌉ = ⌈2^(μ_CSI-RS − μ_PDCCH)⌉, or d = ⌊2^μ_CSI-RS / 2^μ_PDCCH⌋ = ⌊2^(μ_CSI-RS − μ_PDCCH)⌋.

In one example 1A-3, the threshold α = Y × d, where
Y is defined in example 1A-1;
d = max(1, 2^μ_CSI-RS / 2^μ_PDCCH) = 2^max(0, μ_CSI-RS − μ_PDCCH).

In one example 1A-4, the threshold α = Y + d, where
Y is defined in example 1A-1;
d = (2^(μ_CSI-RS − μ_PDCCH)) · M, or d = ⌈2^(μ_CSI-RS − μ_PDCCH)⌉ · M, or d = ⌊2^(μ_CSI-RS − μ_PDCCH)⌋ · M.

In one example 1A-5, the threshold α = Y + d, where
Y is defined in example 1A-1;
d = (2^(μ_CSI-RS − μ_PDCCH) − 1) · M, or d = (⌈2^(μ_CSI-RS − μ_PDCCH)⌉ − 1) · M, or d = (⌊2^(μ_CSI-RS − μ_PDCCH)⌋ − 1) · M.

In one example 1A-6, the threshold α = Y + d, where
Y is defined in example 1A-1;
d = (max(1, 2^μ_CSI-RS / 2^μ_PDCCH) − 1) · M = max(0, 2^μ_CSI-RS / 2^μ_PDCCH − 1) · M = (2^max(0, μ_CSI-RS − μ_PDCCH) − 1) · M.

In one example 1A-7, the threshold α = Y + d, where
Y is defined in example 1A-1;
d = 0 if μ_PDCCH = μ_CSI-RS, and d = M otherwise.

The parameter M in examples 1A-4 through 1A-7 is determined according to at least one of the following alternatives (Alt).
In one alternative Alt 1A-0: M = Y.
In one alternative Alt 1A-1: M = 14.
In one alternative Alt 1A-2: M = 12.
In one alternative Alt 1A-3: M depends on Y; for example, M = 14 if Y = 14 or 28, and M = 12 if Y = 48.
In one alternative Alt 1A-4: M depends on μ_CSI-RS; for example, M = 14 if μ_CSI-RS ≠ 2, and M = 12 if μ_CSI-RS = 2.
In one alternative Alt 1A-5: M = m OFDM symbols; for example, m is the span, in number of OFDM symbols, of the PDCCH monitoring occasion in which the triggering DCI is received.
In one alternative Alt 1A-6: M = m OFDM symbols, and m is configured, for example, via higher layer or more dynamic MAC CE based or DCI based signaling, either explicitly (using a new state or configuration parameter) or implicitly (using one of the existing states or configuration parameters).
In one alternative Alt 1A-7: M = m OFDM symbols, and m is reported by the UE, for example, as part of the UE capability signaling.
In one alternative Alt 1A-8: M = m OFDM symbols, and m is fixed (e.g., 12 or 14).
In one example, m = Δ, where the Δ value is given by at least one of the examples in 3A-6 or 3A-6a. For instance, in another example, the m value is given by the following:
μ_PDCCH = 0 (i.e., 15 kHz SCS): m = 4 symbols;
μ_PDCCH = 1 (i.e., 30 kHz SCS): m = 4 symbols;
μ_PDCCH = 2 (i.e., 60 kHz SCS): m = 8 symbols;
μ_PDCCH = 3 (i.e., 120 kHz SCS): m = 8 or 12 symbols.
In another example, the m value is given by the following:
t = 1: m = 4 symbols;
t = 2: m = 4 symbols;
t = 4: m = 4 symbols;
t = 8: m = 8 symbols;
t = 16: m = 8 or 12 symbols;
where t = 2^μ_CSI-RS / 2^μ_PDCCH, or ⌈2^μ_CSI-RS / 2^μ_PDCCH⌉, or ⌊2^μ_CSI-RS / 2^μ_PDCCH⌋.
In these examples, the m value can be either without the quantization step (cf. Ex 3A-6a-1) or with the quantization step (cf. Ex 3A-6a-2).

In one example 1A-8, the threshold α = Y(1 + d), where Y is defined in example 1A-1, and d is according to one of examples 1A-1, 1A-4, 1A-5, 1A-6, or 1A-7.

In one example 1A-9, the threshold α = Y · d, where Y is defined in example 1A-1, and d is determined according to one of examples 1A-1, 1A-4, 1A-5, 1A-6, or 1A-7. A minimal sketch of this threshold arithmetic is given below, following the introduction of embodiment 2.

In sub-embodiment 1B, there is no restriction on the PDCCH and CSI-RS SCSs, i.e., μ_PDCCH and μ_CSI-RS can take any values, and the scheduling offset is defined in the numerology of the aperiodic CSI-RS, μ_CSI-RS. The rest of the details are the same as or analogous to those in sub-embodiment 1A (including all examples and alternatives) except that the condition "if the PDCCH SCS is equal to the CSI-RS SCS (μ_PDCCH = μ_CSI-RS)" in some of the above examples (example 1A-1 through example 1A-9) is replaced with the condition "if the PDCCH SCS is larger than or equal to the CSI-RS SCS (μ_PDCCH ≥ μ_CSI-RS)".

In sub-embodiment 1C, there is no restriction on the PDCCH and CSI-RS SCSs, i.e., μ_PDCCH and μ_CSI-RS can take any values. The scheduling offset is defined based on the maximum subcarrier spacing between the PDCCH and the aperiodic CSI-RS. Hence, when μ_PDCCH ≤ μ_CSI-RS, the scheduling offset is defined in the numerology of the aperiodic CSI-RS, μ_CSI-RS, and the rest of the details are the same as or analogous to those in sub-embodiment 1A (including all examples and alternatives). When μ_PDCCH > μ_CSI-RS, the scheduling offset is defined in the numerology of the PDCCH, μ_PDCCH, and the rest of the details are the same as in sub-embodiment 1A (including all examples and alternatives) except that μ_CSI-RS and μ_PDCCH are swapped everywhere, i.e., μ_CSI-RS is replaced with μ_PDCCH and μ_PDCCH is replaced with μ_CSI-RS.

In sub-embodiment 1D, there is no restriction on the PDCCH and CSI-RS SCSs, i.e., μ_PDCCH and μ_CSI-RS can take any values. The scheduling offset is defined based on the minimum subcarrier spacing between the PDCCH and the aperiodic CSI-RS. Hence, when μ_PDCCH > μ_CSI-RS, the scheduling offset is defined in the numerology of the aperiodic CSI-RS, μ_CSI-RS, and the rest of the details are the same as or analogous to those in sub-embodiment 1A (including all examples and alternatives). When μ_PDCCH ≤ μ_CSI-RS, the scheduling offset is defined in the numerology of the PDCCH, μ_PDCCH, and the rest of the details are the same as or analogous to those in sub-embodiment 1A (including all examples and alternatives) except that μ_CSI-RS and μ_PDCCH are swapped everywhere, i.e., μ_CSI-RS is replaced with μ_PDCCH and μ_PDCCH is replaced with μ_CSI-RS.

In one embodiment 2, when an aperiodic CSI-RS is used with aperiodic CSI reporting, the CSI-RS triggering offset X is configured per resource set by the higher layer parameter aperiodicTriggeringOffset. The CSI-RS triggering offset takes a value from {0, 1, 2, 3, 4, 16, 24} slots. Note that the unit of the CSI-RS triggering offset is slot(s).
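Before turning to the determination of the reference slot n′ for embodiment 2, the threshold arithmetic of examples 1A-1 through 1A-9 can be made concrete. The following minimal sketch implements three of the enumerated options using the exact-ratio variants (the floor/ceiling variants differ only in rounding); the default values are illustrative assumptions.

```python
def threshold_alpha(Y, mu_pdcch, mu_csirs, example="1A-3", M=14):
    """Illustrative QCL threshold alpha (embodiment 1), in CSI-RS symbols.

    Y is the UE-reported beamSwitchTiming (from a set including {14, 28, 48});
    mu_* are the SCS configurations. Only examples 1A-1, 1A-3, and 1A-7 are
    sketched, each with the exact (non-rounded) ratio variant.
    """
    ratio = 2 ** mu_csirs / 2 ** mu_pdcch  # 2^(mu_CSI-RS - mu_PDCCH)
    if example == "1A-1":
        return Y + (0 if mu_pdcch == mu_csirs else 14 * ratio)
    if example == "1A-3":
        return Y * max(1, ratio)
    if example == "1A-7":
        return Y + (0 if mu_pdcch == mu_csirs else M)
    raise ValueError("only examples 1A-1, 1A-3, and 1A-7 are sketched")

# PDCCH at 15 kHz (mu = 0) triggering CSI-RS at 60 kHz (mu = 2), Y = 28:
print(threshold_alpha(28, 0, 2, "1A-1"))  # 28 + 14*4 = 84.0
print(threshold_alpha(28, 0, 2, "1A-3"))  # 28 * 4   = 112
```

The scheduling offset S, measured in the CSI-RS numerology per sub-embodiment 1A, would then be compared against the returned α.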
The aperiodic CSI-RS is transmitted in slot n′ + X, where X is the CSI-RS triggering offset in the numerology of the CSI-RS according to the higher layer parameter aperiodicTriggeringOffset, and n′ is the reference slot used to apply the slot offset for the AP-CSI-RS transmission. If none of the associated trigger states has the higher layer parameter qcl-Type set to 'QCL-TypeD' in the corresponding TCI states and the PDCCH SCS is equal to the CSI-RS SCS, the CSI-RS triggering offset X is fixed to zero. The value n′ depends on whether μ_PDCCH = μ_CSI-RS or μ_PDCCH ≠ μ_CSI-RS. At least one of the following sub-embodiments can be used.

In one sub-embodiment 2A, the UE does not expect the SCS associated with the PDCCH carrying the triggering DCI to be greater than the CSI-RS SCS, i.e., μ_PDCCH ≤ μ_CSI-RS, and the slot offset is defined in the numerology of the aperiodic CSI-RS, μ_CSI-RS. Let n be the slot with the triggering DCI in the numerology of the PDCCH containing the triggering DCI. The reference slot n′ is then determined according to at least one of the following examples.

In one example 2A-1, n′ = n if the PDCCH SCS is equal to the CSI-RS SCS (μ_PDCCH = μ_CSI-RS), and n′ = (n + 1) · 2^μ_CSI-RS / 2^μ_PDCCH, or ⌈(n + 1) · 2^μ_CSI-RS / 2^μ_PDCCH⌉, or ⌊(n + 1) · 2^μ_CSI-RS / 2^μ_PDCCH⌋ otherwise.

In one example 2A-2, n′ = n if μ_PDCCH = μ_CSI-RS, and n′ = (n + 1) · 2^μ_CSI-RS / 2^μ_PDCCH − 1, or ⌈(n + 1) · 2^μ_CSI-RS / 2^μ_PDCCH − 1⌉, or ⌊(n + 1) · 2^μ_CSI-RS / 2^μ_PDCCH − 1⌋ otherwise.

In one example 2A-3, n′ = n · 2^μ_CSI-RS / 2^μ_PDCCH = n · 2^(μ_CSI-RS − μ_PDCCH), or n′ = ⌈n · 2^(μ_CSI-RS − μ_PDCCH)⌉, or n′ = ⌊n · 2^(μ_CSI-RS − μ_PDCCH)⌋.

In one example 2A-4, n′ = (n + 1) · 2^(μ_CSI-RS − μ_PDCCH), or n′ = ⌈(n + 1) · 2^(μ_CSI-RS − μ_PDCCH)⌉, or n′ = ⌊(n + 1) · 2^(μ_CSI-RS − μ_PDCCH)⌋.

In one example 2A-5, n′ = n · max(1, 2^μ_CSI-RS / 2^μ_PDCCH) = n · 2^max(0, μ_CSI-RS − μ_PDCCH).

In one example 2A-6, n′ = (n + 1) · max(1, 2^μ_CSI-RS / 2^μ_PDCCH) = (n + 1) · 2^max(0, μ_CSI-RS − μ_PDCCH).

In one example 2A-7, n′ = (n + e) · 2^(μ_CSI-RS − μ_PDCCH), or n′ = ⌈(n + e) · 2^(μ_CSI-RS − μ_PDCCH)⌉, or n′ = ⌊(n + e) · 2^(μ_CSI-RS − μ_PDCCH)⌋, where e is an indicator which takes the value e = 0 when μ_PDCCH = μ_CSI-RS and the value e = 1 otherwise.

In one example 2A-8, n′ = (n + e) · max(1, 2^μ_CSI-RS / 2^μ_PDCCH) = (n + e) · 2^max(0, μ_CSI-RS − μ_PDCCH), where e is an indicator which takes the value e = 0 when μ_PDCCH = μ_CSI-RS and the value e = 1 otherwise. (A sketch of these computations is given after sub-embodiment 2E below.)

In one sub-embodiment 2B, there is no restriction on the PDCCH and CSI-RS SCSs, i.e., μ_PDCCH and μ_CSI-RS can take any values, and the slot offset is defined in the numerology of the aperiodic CSI-RS, μ_CSI-RS. The rest of the details are the same as or analogous to those in sub-embodiment 2A (including all examples and alternatives) except that the condition "if the PDCCH SCS is equal to the CSI-RS SCS (μ_PDCCH = μ_CSI-RS)" in some of the above examples (example 2A-1 through example 2A-8) is replaced with the condition "if the PDCCH SCS is larger than or equal to the CSI-RS SCS (μ_PDCCH ≥ μ_CSI-RS)".

In one sub-embodiment 2C, there is no restriction on the PDCCH and CSI-RS SCSs, i.e., μ_PDCCH and μ_CSI-RS can take any values. The slot offset is defined based on the maximum subcarrier spacing between the PDCCH and the aperiodic CSI-RS. Hence, when μ_PDCCH ≤ μ_CSI-RS, the slot offset is defined in the numerology of the aperiodic CSI-RS, μ_CSI-RS, and the rest of the details are the same as or analogous to those in sub-embodiment 2A (including all examples and alternatives).
When μ_PDCCH > μ_CSI-RS, the slot offset is defined in the numerology of the PDCCH, μ_PDCCH, and the rest of the details are the same as or analogous to those in sub-embodiment 2A (including all examples and alternatives) except that μ_CSI-RS and μ_PDCCH are swapped everywhere, i.e., μ_CSI-RS is replaced with μ_PDCCH and μ_PDCCH is replaced with μ_CSI-RS.

In one sub-embodiment 2D, there is no restriction on the PDCCH and CSI-RS SCSs, i.e., μ_PDCCH and μ_CSI-RS can take any values. The slot offset is defined based on the minimum subcarrier spacing between the PDCCH and the aperiodic CSI-RS. Hence, when μ_PDCCH > μ_CSI-RS, the slot offset is defined in the numerology of the aperiodic CSI-RS, μ_CSI-RS, and the rest of the details are the same as or analogous to those in sub-embodiment 2A (including all examples and alternatives). When μ_PDCCH ≤ μ_CSI-RS, the slot offset is defined in the numerology of the PDCCH, μ_PDCCH, and the rest of the details are the same as or analogous to those in sub-embodiment 2A (including all examples and alternatives) except that μ_CSI-RS and μ_PDCCH are swapped everywhere, i.e., μ_CSI-RS is replaced with μ_PDCCH and μ_PDCCH is replaced with μ_CSI-RS.

In one sub-embodiment 2E, the CSI-RS triggering offset X in some embodiments of this disclosure takes a value from a set S, where the unit for X is slots in the numerology of the CSI-RS, and the set S includes {0, 1, 2, 3, 4, 16, 24}. The set S can also include additional values in another set T, where the set T is according to at least one of the following alternatives.
In one alternative Alt 2E-1: T is empty, i.e., the set S = {0, 1, 2, 3, 4, 16, 24}.
In one alternative Alt 2E-2: T = {5, 6, ..., 15, 17, 18, ..., 23, 25, 26, ..., Z}, i.e., the set S = {0, 1, 2, 3, 4, ..., Z}. Here, Z is either fixed (e.g., to 31 or 32) or configured (e.g., from 31 or 32).
In one alternative Alt 2E-3: T = {5, 6, ..., 15, 17, 18, ..., Z}, i.e., the set S = {0, 1, 2, 3, 4, ..., Z}. Here, Z is either fixed (e.g., to 24 or 31 or 32) or configured (e.g., from 24 or 32).
In one alternative Alt 2E-4: T = {8}, i.e., the set S = {0, 1, 2, 3, 4, 8, 16, 24}.
In one alternative Alt 2E-5: T = {6, 8}, i.e., the set S = {0, 1, 2, 3, 4, 6, 8, 16, 24}.
In one alternative Alt 2E-6: T = {8, 12}, i.e., the set S = {0, 1, 2, 3, 4, 8, 12, 16, 24}.
In one alternative Alt 2E-7: T = {8, 32}, i.e., the set S = {0, 1, 2, 3, 4, 8, 16, 24, 32}.
In one alternative Alt 2E-8: T = {6, 8, 12}, i.e., the set S = {0, 1, 2, 3, 4, 6, 8, 12, 16, 24}.
In one alternative Alt 2E-9: T = {8, 12, 32}, i.e., the set S = {0, 1, 2, 3, 4, 8, 12, 16, 24, 32}.
In one alternative Alt 2E-10: T = {6, 8, 12, 32}, i.e., the set S = {0, 1, 2, 3, 4, 6, 8, 12, 16, 24, 32}.
In one alternative Alt 2E-11: T = {Z}, i.e., the set S = {0, 1, 2, 3, 4, 16, 24, Z}. Here, Z is either fixed (e.g., to a value from {6, 8, 12, 32}) or configured (e.g., from {6, 8, 12, 32}).
In one alternative Alt 2E-12: T = {Z1, Z2}, i.e., the set S = {0, 1, 2, 3, 4, 16, 24, Z1, Z2}. Here, Z1 and Z2 are either fixed (e.g., to two values from {6, 8, 12, 32}) or configured (e.g., from {6, 8, 12, 32}).
In one alternative Alt 2E-13: T = {Z1, Z2, Z3}, i.e., the set S = {0, 1, 2, 3, 4, 16, 24, Z1, Z2, Z3}. Here, Z1, Z2, and Z3 are either fixed (e.g., to three values from {6, 8, 12, 32}) or configured (e.g., from {6, 8, 12, 32}).
In one alternative Alt 2E-14: T = {Z1, Z2, Z3, Z4}, i.e., the set S = {0, 1, 2, 3, 4, 16, 24, Z1, Z2, Z3, Z4}. Here, Z1, Z2, Z3, and Z4 are either fixed (e.g., to four values from {6, 8, 12, 20, 28, 32}) or configured (e.g., from {6, 8, 12, 20, 28, 32}).
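To make the reference-slot arithmetic of examples 2A-1 through 2A-8 concrete (as noted after example 2A-8 above), the following is a minimal sketch of three of the options, using the floor variants; the numbers in the usage lines are illustrative.

```python
import math

def reference_slot(n, mu_pdcch, mu_csirs, example="2A-1"):
    """Illustrative reference slot n' (embodiment 2), in CSI-RS slots.

    n is the slot carrying the triggering DCI, in the PDCCH numerology.
    Only examples 2A-1, 2A-3, and 2A-5 are sketched (floor variants).
    """
    ratio = 2 ** mu_csirs / 2 ** mu_pdcch
    if example == "2A-1":
        return n if mu_pdcch == mu_csirs else math.floor((n + 1) * ratio)
    if example == "2A-3":
        return math.floor(n * ratio)
    if example == "2A-5":
        return math.floor(n * max(1, ratio))
    raise ValueError("only examples 2A-1, 2A-3, and 2A-5 are sketched")

# DCI in slot n = 3 at 15 kHz (mu = 0), CSI-RS at 30 kHz (mu = 1):
n_prime = reference_slot(3, 0, 1, "2A-1")  # (3 + 1) * 2 = 8
X = 2  # aperiodicTriggeringOffset, in CSI-RS slots
print(n_prime + X)  # AP-CSI-RS transmitted in slot 10 of the CSI-RS numerology
```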
In one sub-embodiment 2F, the set S includes additional values according to Alt 2E-1 through Alt 2E-14 of sub-embodiment 2E only when a certain condition is satisfied. For example, the certain condition can be based on the values of μ_PDCCH and μ_CSI-RS. At least one of the following alternatives can be used for the certain condition; a sketch of two of them follows this list.
In one alternative Alt 2F-1, the set S includes the additional values in the set T for both cases μ_PDCCH > μ_CSI-RS and μ_PDCCH < μ_CSI-RS, where the set T is the same for both cases and is according to at least one of Alt 2E-1 through Alt 2E-13. When μ_PDCCH = μ_CSI-RS, the set S = {0, 1, 2, 3, 4, 16, 24}.
In one alternative Alt 2F-1a, the set S includes the additional values in the set T for both cases μ_PDCCH > μ_CSI-RS and μ_PDCCH < μ_CSI-RS, where the set T can be different for the two cases and is according to at least one of Alt 2E-1 through Alt 2E-13. When μ_PDCCH = μ_CSI-RS, the set S = {0, 1, 2, 3, 4, 16, 24}.
In one alternative Alt 2F-2, the set S includes the additional values in the set T for both cases μ_PDCCH > μ_CSI-RS and μ_PDCCH ≤ μ_CSI-RS, where the set T is the same for both cases and is according to at least one of Alt 2E-1 through Alt 2E-13.
In one alternative Alt 2F-2a, the set S includes the additional values in the set T for both cases μ_PDCCH > μ_CSI-RS and μ_PDCCH ≤ μ_CSI-RS, where the set T can be different for the two cases and is according to at least one of Alt 2E-1 through Alt 2E-13.
In one alternative Alt 2F-3, the set S includes the additional values in the set T for both cases μ_PDCCH ≥ μ_CSI-RS and μ_PDCCH < μ_CSI-RS, where the set T is the same for both cases and is according to at least one of Alt 2E-1 through Alt 2E-13.
In one alternative Alt 2F-3a, the set S includes the additional values in the set T for both cases μ_PDCCH ≥ μ_CSI-RS and μ_PDCCH < μ_CSI-RS, where the set T can be different for the two cases and is according to at least one of Alt 2E-1 through Alt 2E-13.
In one alternative Alt 2F-4, the set S includes the additional values in the set T only when μ_PDCCH > μ_CSI-RS, where the set T is according to at least one of Alt 2E-1 through Alt 2E-13. When μ_PDCCH < μ_CSI-RS, the set S = {0, 1, 2, 3, 4, 16, 24}.
In one alternative Alt 2F-5, the set S includes the additional values in the set T only when μ_PDCCH < μ_CSI-RS, where the set T is according to at least one of Alt 2E-1 through Alt 2E-13. When μ_PDCCH > μ_CSI-RS, the set S = {0, 1, 2, 3, 4, 16, 24}.
In one alternative Alt 2F-6, the set S includes the additional values in the set T only when μ_PDCCH ≥ μ_CSI-RS, where the set T is according to at least one of Alt 2E-1 through Alt 2E-13. When μ_PDCCH < μ_CSI-RS, the set S = {0, 1, 2, 3, 4, 16, 24}.
In one alternative Alt 2F-7, the set S includes the additional values in the set T only when μ_PDCCH ≤ μ_CSI-RS, where the set T is according to at least one of Alt 2E-1 through Alt 2E-13. When μ_PDCCH > μ_CSI-RS, the set S = {0, 1, 2, 3, 4, 16, 24}.
In one alternative Alt 2F-8, the set S includes the additional values in the set T only when μ_PDCCH > μ_CSI-RS, where the set T is according to at least one of Alt 2E-1 through Alt 2E-13. When μ_PDCCH ≤ μ_CSI-RS, the set S = {0, 1, 2, 3, 4, 16, 24}.
In one alternative Alt 2F-9, the set S includes the additional values in the set T only when μ_PDCCH < μ_CSI-RS, where the set T is according to at least one of Alt 2E-1 through Alt 2E-13. When μ_PDCCH ≥ μ_CSI-RS, the set S = {0, 1, 2, 3, 4, 16, 24}.
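As noted above, two of the Alt 2F conditions can be sketched as follows; the extension set T here defaults to the Alt 2E-9 flavor and is an illustrative choice.

```python
BASE_S = {0, 1, 2, 3, 4, 16, 24}  # baseline values of aperiodicTriggeringOffset

def offset_set(mu_pdcch, mu_csirs, T=frozenset({8, 12, 32}), alt="2F-4"):
    """Illustrative candidate set S for the CSI-RS triggering offset X.

    Only Alt 2F-4 and Alt 2F-7 are sketched; T defaults to the Alt 2E-9
    extension {8, 12, 32} as an example.
    """
    if alt == "2F-4":  # extend only when the PDCCH SCS exceeds the CSI-RS SCS
        return BASE_S | set(T) if mu_pdcch > mu_csirs else set(BASE_S)
    if alt == "2F-7":  # extend only when the PDCCH SCS is smaller or equal
        return BASE_S | set(T) if mu_pdcch <= mu_csirs else set(BASE_S)
    raise ValueError("only Alt 2F-4 and Alt 2F-7 are sketched")

print(sorted(offset_set(1, 0, alt="2F-4")))  # [0, 1, 2, 3, 4, 8, 12, 16, 24, 32]
print(sorted(offset_set(0, 1, alt="2F-4")))  # [0, 1, 2, 3, 4, 16, 24]
```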
In one embodiment 3, let k be the number of (OFDM) symbols between the end of the PDCCH containing the triggering DCI and the CSI-RS. In order to avoid too short a time between DCI decoding and the start of reception of the triggered CSI-RS at the UE, which could happen if k is too small, the UE processing timeline can be relaxed. At least one of the following sub-embodiments can be used for this purpose.

In one sub-embodiment 3A, the UE does not expect the SCS associated with the PDCCH carrying the triggering DCI to be greater than the CSI-RS SCS, i.e., μ_PDCCH ≤ μ_CSI-RS, and the UE processing relaxation is defined in the numerology of the aperiodic CSI-RS, μ_CSI-RS. In one example, the UE processing relaxation is performed regardless of the values of μ_PDCCH and μ_CSI-RS. In another example, when μ_PDCCH = μ_CSI-RS, no processing relaxation is performed, and when μ_PDCCH < μ_CSI-RS, the UE processing relaxation is performed according to at least one of the following examples.

In one example 3A-1, the UE does not expect the PDCCH carrying the triggering DCI to be contained in the last x symbols of the slot (in the CSI-RS numerology), i.e., k ≥ x. In one example, x = 10.

In one example 3A-2, the UE is not required to process the aperiodic CSI-RS if there are fewer than m · 2^μ_CSI-RS / 2^μ_PDCCH, or ⌈m · 2^μ_CSI-RS / 2^μ_PDCCH⌉, or ⌊m · 2^μ_CSI-RS / 2^μ_PDCCH⌋ symbols between the end of the PDCCH containing the triggering DCI and the beginning of the CSI-RS, i.e., k < m · 2^μ_CSI-RS / 2^μ_PDCCH, or ⌈m · 2^μ_CSI-RS / 2^μ_PDCCH⌉, or ⌊m · 2^μ_CSI-RS / 2^μ_PDCCH⌋. Here, m is defined according to at least one of Alt 1A-5, Alt 1A-6, Alt 1A-7, and Alt 1A-8, or m is fixed.

In one example 3A-3, the CSI-RS triggering offset X is always larger than zero.

In one example 3A-4, the UE processing is relaxed by y slots in the CSI-RS numerology. In one example, y = 1.

In one example 3A-5, the slot offset is applied as follows:
slot offset = max(1, X) if μ_PDCCH ≠ μ_CSI-RS, and
slot offset = X if μ_PDCCH = μ_CSI-RS.

In one example 3A-6, the UE processing relaxation is based on choosing an appropriate beamSwitchTiming Y (cf. embodiment 1).

In one example 3A-6a, the UE processing relaxation is based on defining the earliest possible starting point for the CSI-RS transmission/reception (T). In one example, T = the end of the PDCCH + Δ, or the end of the PDCCH + Δ · t, where t = 2^μ_CSI-RS / 2^μ_PDCCH, or ⌈2^μ_CSI-RS / 2^μ_PDCCH⌉, or ⌊2^μ_CSI-RS / 2^μ_PDCCH⌋, and Δ is defined according to at least one of the following examples.
In one example Ex 3A-6a-1: Δ is determined as a number of symbols, based on the CSI-RS SCS, counting from the end of the last symbol of the received PDCCH to the beginning of the first symbol of the corresponding received CSI-RS; i.e., the UE is expected to be able to measure the aperiodic CSI-RS if the CSI-RS starts no earlier than at least Δ = N_csirs symbols after the end of the PDCCH triggering the aperiodic CSI-RS.
In one example Ex 3A-6a-2: Δ is determined as a number of symbols, based on the CSI-RS SCS, counting from the end of the last symbol of the received PDCCH to the beginning of the first symbol of the corresponding received CSI-RS, which is quantized (using the granularity of the CSI-RS slot duration) to the next CSI-RS slot boundary; i.e., the UE is expected to be able to measure the aperiodic CSI-RS if the CSI-RS starts no earlier than the first symbol of the CSI-RS carrier's slot that starts at least Δ = N_csirs symbols after the end of the PDCCH triggering the aperiodic CSI-RS.
When μ_PDCCH > μ_CSI-RS, the definition of Ex 3A-6a-1 is used for the UE processing relaxation time (T).
In one example, the Δ value is given by the following:
μ_PDCCH = 0 (i.e., 15 kHz SCS): Δ = 4 symbols;
μ_PDCCH = 1 (i.e., 30 kHz SCS): Δ = 4 symbols;
μ_PDCCH = 2 (i.e., 60 kHz SCS): Δ = 8 symbols;
μ_PDCCH = 3 (i.e., 120 kHz SCS): Δ = 8 or 12 symbols.

In another example, the Δ value is given by the following:
t = 1: Δ = 4 symbols;
t = 2: Δ = 4 symbols;
t = 4: Δ = 4 symbols;
t = 8: Δ = 8 symbols;
t = 16: Δ = 8 or 12 symbols;
where t = 2^μ_CSI-RS / 2^μ_PDCCH, or ⌈2^μ_CSI-RS / 2^μ_PDCCH⌉, or ⌊2^μ_CSI-RS / 2^μ_PDCCH⌋.

In another example, the Δ value is given by m · t, where t = 2^μ_CSI-RS / 2^μ_PDCCH, or ⌈2^μ_CSI-RS / 2^μ_PDCCH⌉, or ⌊2^μ_CSI-RS / 2^μ_PDCCH⌋, and m is fixed, for example, to 4. In another example, the Δ value is fixed, for example, to 4. In these examples, the Δ value can be either without the quantization step (Ex 3A-6a-1) or with the quantization step (Ex 3A-6a-2).

In one example 3A-7, the UE processing relaxation depends on the value X ∈ {0, ..., 4, 16, 24} of aperiodicTriggeringOffset:
If X = 0, the relaxation is performed according to at least one of examples 3A-1 through 3A-6 or 3A-6a.
If X > 0, no processing relaxation is performed.

In one sub-embodiment 3AA, the UE does not expect the SCS associated with the PDCCH carrying the triggering DCI to be greater than the CSI-RS SCS, i.e., μ_PDCCH ≤ μ_CSI-RS, and the UE processing relaxation is defined in the numerology of the PDCCH, μ_PDCCH. In one example, the UE processing relaxation is performed regardless of the values of μ_PDCCH and μ_CSI-RS. In another example, when μ_PDCCH = μ_CSI-RS, no processing relaxation is performed, and when μ_PDCCH < μ_CSI-RS, the UE processing relaxation is performed according to at least one of the following examples.

In one example 3AA-1, the UE does not expect the PDCCH carrying the triggering DCI to be contained in the last x symbols of the slot (in the PDCCH numerology), i.e., k ≥ x. In one example, x = 10.

In one example 3AA-2, the UE is not required to process the aperiodic CSI-RS if there are fewer than m · 2^μ_CSI-RS / 2^μ_PDCCH, or ⌈m · 2^μ_CSI-RS / 2^μ_PDCCH⌉, or ⌊m · 2^μ_CSI-RS / 2^μ_PDCCH⌋ symbols between the end of the PDCCH containing the triggering DCI and the beginning of the CSI-RS, i.e., k < m · 2^μ_CSI-RS / 2^μ_PDCCH, or ⌈m · 2^μ_CSI-RS / 2^μ_PDCCH⌉, or ⌊m · 2^μ_CSI-RS / 2^μ_PDCCH⌋. Here, m is defined according to at least one of Alt 1A-5, Alt 1A-6, and Alt 1A-7, or m is fixed.

In one example 3AA-3, the CSI-RS triggering offset X is always larger than zero.

In one example 3AA-4, the UE processing is relaxed by y slots in the PDCCH numerology. In one example, y = 1.

In one example 3AA-5, the slot offset is applied as follows:
slot offset = max(1, X) if μ_PDCCH ≠ μ_CSI-RS, and
slot offset = X if μ_PDCCH = μ_CSI-RS.

In one example 3AA-6, the UE processing relaxation is based on choosing an appropriate beamSwitchTiming Y (cf. embodiment 1).

In one example 3AA-6a, the UE processing relaxation is based on defining the earliest possible starting point for the CSI-RS transmission/reception (T).
In one example, T = the end of the PDCCH + Δ, or the end of the PDCCH + Δ · t, where t = 2^μ_CSI-RS / 2^μ_PDCCH, or ⌈2^μ_CSI-RS / 2^μ_PDCCH⌉, or ⌊2^μ_CSI-RS / 2^μ_PDCCH⌋, and Δ is defined according to at least one of the following examples.
In one example Ex 3AA-6a-1: Δ is determined as a number of symbols, based on the PDCCH SCS, counting from the end of the last symbol of the received PDCCH to the beginning of the first symbol of the corresponding received CSI-RS; i.e., the UE is expected to be able to measure the aperiodic CSI-RS if the CSI-RS starts no earlier than at least Δ = N_csirs symbols after the end of the PDCCH triggering the aperiodic CSI-RS.
In one example Ex 3AA-6a-2: Δ is determined as a number of symbols, based on the PDCCH SCS, counting from the end of the last symbol of the received PDCCH to the beginning of the first symbol of the corresponding received CSI-RS, which is quantized (using the granularity of the CSI-RS slot duration) to the next CSI-RS slot boundary; i.e., the UE is expected to be able to measure the aperiodic CSI-RS if the CSI-RS starts no earlier than the first symbol of the CSI-RS carrier's slot that starts at least Δ = N_csirs symbols after the end of the PDCCH triggering the aperiodic CSI-RS.
When μ_PDCCH > μ_CSI-RS, the definition of Ex 3AA-6a-1 is used for the UE processing relaxation time (T).

In one example, the Δ value is given by the following:
μ_PDCCH = 0 (i.e., 15 kHz SCS): Δ = 4 symbols;
μ_PDCCH = 1 (i.e., 30 kHz SCS): Δ = 4 symbols;
μ_PDCCH = 2 (i.e., 60 kHz SCS): Δ = 8 symbols;
μ_PDCCH = 3 (i.e., 120 kHz SCS): Δ = 8 or 12 symbols.

In another example, the Δ value is given by the following:
t = 1: Δ = 4 symbols;
t = 2: Δ = 4 symbols;
t = 4: Δ = 4 symbols;
t = 8: Δ = 8 symbols;
t = 16: Δ = 8 or 12 symbols;
where t = 2^μ_CSI-RS / 2^μ_PDCCH, or ⌈2^μ_CSI-RS / 2^μ_PDCCH⌉, or ⌊2^μ_CSI-RS / 2^μ_PDCCH⌋.

In another example, the Δ value is given by m · t, where t is as above and m is fixed, for example, to 4. In another example, the Δ value is fixed, for example, to 4 or 8. In these examples, the Δ value can be either without the quantization step (Ex 3AA-6a-1) or with the quantization step (Ex 3AA-6a-2).

In one example 3AA-7, the UE processing relaxation depends on the value X ∈ {0, ..., 4, 16, 24} of aperiodicTriggeringOffset:
If X = 0, the relaxation is performed according to at least one of examples 3AA-1 through 3AA-6 or 3AA-6a.
If X > 0, no processing relaxation is performed.

In one sub-embodiment 3B, there is no restriction on the PDCCH and CSI-RS SCSs, i.e., μ_PDCCH and μ_CSI-RS can take any values, and the UE processing relaxation is defined in the numerology of the aperiodic CSI-RS, μ_CSI-RS. The rest of the details are the same as or analogous to those in sub-embodiment 3A/3AA (including all examples and alternatives) except that the condition "if the PDCCH SCS is equal to the CSI-RS SCS (μ_PDCCH = μ_CSI-RS)" in some of the above examples (example 3A-1/3AA-1 through example 3A-7/3AA-7) is replaced with the condition "if the PDCCH SCS is larger than or equal to the CSI-RS SCS (μ_PDCCH ≥ μ_CSI-RS)".

In one sub-embodiment 3C, there is no restriction on the PDCCH and CSI-RS SCSs, i.e., μ_PDCCH and μ_CSI-RS can take any values. The UE processing relaxation is defined based on the maximum subcarrier spacing between the PDCCH and the aperiodic CSI-RS. Hence, when μ_PDCCH ≤ μ_CSI-RS, the UE processing relaxation is defined in the numerology of the aperiodic CSI-RS, μ_CSI-RS, and the rest of the details are the same as or analogous to those in sub-embodiment 3A (including all examples and alternatives).
When μ_PDCCH > μ_CSI-RS, the UE processing relaxation is defined in the numerology of the PDCCH, μ_PDCCH, and the rest of the details are the same as or analogous to those in sub-embodiment 3A (including all examples and alternatives) except that μ_CSI-RS and μ_PDCCH are swapped everywhere, i.e., μ_CSI-RS is replaced with μ_PDCCH and μ_PDCCH is replaced with μ_CSI-RS.

In one sub-embodiment 3D, there is no restriction on the PDCCH and CSI-RS SCSs, i.e., μ_PDCCH and μ_CSI-RS can take any values. The UE processing relaxation is defined based on the minimum subcarrier spacing between the PDCCH and the aperiodic CSI-RS. Hence, when μ_PDCCH > μ_CSI-RS, the UE processing relaxation is defined in the numerology of the aperiodic CSI-RS, μ_CSI-RS, and the rest of the details are the same as or analogous to those in sub-embodiment 3A (including all examples and alternatives). When μ_PDCCH ≤ μ_CSI-RS, the UE processing relaxation is defined in the numerology of the PDCCH, μ_PDCCH, and the rest of the details are the same as or analogous to those in sub-embodiment 3A (including all examples and alternatives) except that μ_CSI-RS and μ_PDCCH are swapped everywhere, i.e., μ_CSI-RS is replaced with μ_PDCCH and μ_PDCCH is replaced with μ_CSI-RS.

In one embodiment 4A, the PDCCH containing the triggering DCI triggers an AP-SRS transmission by the UE. Embodiments 1 through 3 (on AP-CSI-RS reception) can be used analogously for AP-SRS transmission by the UE in a straightforward manner. Regarding AP-SRS, for a UE configured with one or more SRS resource configuration(s), and when the higher layer parameter resourceType in SRS-Resource is set to 'aperiodic':
the UE receives a configuration of SRS resource sets;
the UE receives a downlink DCI, a group common DCI, or an uplink DCI based command where a codepoint of the DCI may trigger one or more SRS resource set(s). For SRS in a resource set with usage set to 'codebook' or 'antennaSwitching', the minimal time interval between the last symbol of the PDCCH triggering the aperiodic SRS transmission and the first symbol of the SRS resource is N2. Otherwise, the minimal time interval between the last symbol of the PDCCH triggering the aperiodic SRS transmission and the first symbol of the SRS resource is N2 + 14. The minimal time interval in units of OFDM symbols is counted based on the minimum subcarrier spacing between the PDCCH and the aperiodic SRS.
If the UE receives the DCI triggering the aperiodic SRS in slot n, the UE transmits the aperiodic SRS in each of the triggered SRS resource set(s) in slot ⌊n · 2^μ_SRS / 2^μ_PDCCH⌋ + k, where k is configured via the higher layer parameter slotOffset for each triggered SRS resource set and is based on the subcarrier spacing of the triggered SRS transmission, and μ_SRS and μ_PDCCH are the subcarrier spacing configurations for the triggered SRS and the PDCCH carrying the triggering command, respectively.

According to this embodiment, the minimal time interval in units of OFDM symbols is counted based on the minimum subcarrier spacing between the PDCCH and the aperiodic SRS. Alternatively, the minimal time interval between the last symbol of the PDCCH triggering the aperiodic SRS transmission and the first symbol of the SRS resource is t = N2 + z · p or t = (N2 + z) · p, where z = 0 for SRS in a resource set with usage set to 'codebook' or 'antennaSwitching', and z = X > 0 otherwise (i.e., for SRS in a resource set with usage set to 'nonCodebook' or 'beamManagement'). In one example, X = 14. The parameter p is determined according to at least one of the following examples.
In one example, p = 2^μ_SRS / 2^μ_PDCCH, or p = ⌈2^μ_SRS / 2^μ_PDCCH⌉, or p = ⌊2^μ_SRS / 2^μ_PDCCH⌋. In another example, p = 2^(μ_SRS − μ_PDCCH), or p = ⌈2^(μ_SRS − μ_PDCCH)⌉, or p = ⌊2^(μ_SRS − μ_PDCCH)⌋. In another example, p = max(1, 2^μ_SRS / 2^μ_PDCCH) = 2^max(0, μ_SRS − μ_PDCCH).

Likewise, if the UE receives the DCI triggering the aperiodic SRS in slot n, the UE transmits the aperiodic SRS in each of the triggered SRS resource set(s) in slot n′ + k, where k is configured via the higher layer parameter slotOffset for each triggered SRS resource set, and n′ is determined according to at least one of examples 2A-1 through 2A-8 except that μ_CSI-RS is replaced with μ_SRS in these examples.

In one embodiment 4B, the PDCCH containing the triggering DCI triggers an aperiodic DL RS (e.g., CSI-RS) reception by the UE. Embodiments 1 through 3 (on AP-CSI-RS reception) can be used analogously for aperiodic DL RS (e.g., CSI-RS) reception by the UE in a straightforward manner.

In one embodiment 4C, the PDCCH containing the triggering DCI triggers an aperiodic UL RS (e.g., SRS) transmission by the UE. Embodiments 1 through 3 (on AP-CSI-RS reception) can be used analogously for aperiodic UL RS (e.g., SRS) transmission by the UE in a straightforward manner.

The UE may be configured with non-codebook based UL transmission when the higher layer parameter txConfig is set to 'nonCodebook'. For non-codebook based transmission, PUSCH can be scheduled by DCI format 0_0 or DCI format 0_1, or can be semi-statically configured to operate. The UE can determine its PUSCH precoder and transmission rank based on the SRI when multiple SRS resources are configured, where the SRI is given by the SRS resource indicator in DCI or by srs-ResourceIndicator. The UE may use one or multiple SRS resources for SRS transmission, where, in an SRS resource set, the maximum number of SRS resources which can be configured to the UE for simultaneous transmission in the same symbol and the maximum number of SRS resources are UE capabilities. In one example, only one SRS port is configured for each SRS resource. In one example, only one SRS resource set can be configured with the higher layer parameter usage in SRS-ResourceSet set to 'nonCodebook'. In one example, the maximum number of SRS resources that can be configured for non-codebook based uplink transmission is 4. The indicated SRI in slot n is associated with the most recent transmission of the SRS resource(s) identified by the SRI, where the SRS transmission is prior to the PDCCH carrying the SRI.

For non-codebook based transmission, the UE can calculate the precoder used for the transmission of SRS based on measurement of an associated NZP CSI-RS resource. A UE can be configured with only one NZP CSI-RS resource for the SRS resource set with the higher layer parameter usage in SRS-ResourceSet set to 'nonCodebook', if configured. If an aperiodic SRS resource set is configured, the associated NZP-CSI-RS is indicated via the SRS request field in DCI formats 0_1 and 1_1, where aperiodicSRS-ResourceTrigger (indicating the association between the aperiodic SRS triggering state and SRS resource sets), the triggered SRS resource(s) srs-ResourceSetId, and csi-RS (indicating the associated NZP-CSI-RS-ResourceId) are higher layer configured in SRS-ResourceSet.
A UE is not expected to update the SRS precoding information if the gap between the last symbol of the reception of the aperiodic NZP-CSI-RS resource and the first symbol of the aperiodic SRS transmission is less than 42 OFDM symbols. If the UE is configured with aperiodic SRS associated with an aperiodic NZP CSI-RS resource, the presence of the associated CSI-RS is indicated by the SRS request field if the value of the SRS request field is not '00' and if the scheduling DCI is not used for cross-carrier or cross-bandwidth-part scheduling. The CSI-RS is located in the same slot as the SRS request field. If the UE is configured with aperiodic SRS associated with an aperiodic NZP CSI-RS resource, none of the TCI states configured in the scheduled CC shall be configured with 'QCL-TypeD'.

In the following embodiments on this component, we assume that the UE is configured with an SRS resource set, and with associatedCSI-RS in SRS-ResourceSet for that SRS resource set, for non-codebook based UL transmission, the details of which are as explained above. We further assume that the SRS resource(s) in the SRS resource set are configured to be aperiodic.

In one embodiment 5A, the PDCCH containing the DCI triggers an AP-SRS where the AP-SRS is associated with an AP-CSI-RS (e.g., the AP-CSI-RS can be received by the UE to obtain beamforming/precoding information for the precoded AP-SRS transmission). In one example, the AP-CSI-RS is associated with an AP-SRS via higher layer configuration (this is pertinent when DL-UL beam correspondence or reciprocity holds). At least one of embodiments 1-3, or the sub-embodiments therein, can be used analogously for the aperiodic CSI-RS transmission in this case. The DCI triggering the aperiodic CSI-RS can be a DL-related DCI or an UL-related DCI.

Let μ_PDCCH, μ_CSI-RS, and μ_SRS, respectively, be the subcarrier spacing configurations for PDCCH, CSI-RS, and SRS. In the following embodiment, the subcarrier spacing configurations for PDCCH and CSI-RS are the same, i.e., μ_PDCCH = μ_CSI-RS, and that for SRS can be different from PDCCH/CSI-RS.

In one embodiment 5B, the PDCCH containing the DCI triggers an AP-SRS where the AP-SRS is associated with an AP-CSI-RS (e.g., the AP-CSI-RS can be received by the UE to obtain beamforming/precoding information for the precoded AP-SRS transmission). In one example, the AP-CSI-RS is associated with an AP-SRS via higher layer configuration (this is pertinent when DL-UL beam correspondence or reciprocity holds). Regarding the QCL assumption for SRS transmission, the UE is not expected to be configured with 'QCL-TypeD', which indicates spatial filtering information (the spatial filtering information is instead derived based on the AP-CSI-RS associated with the AP-SRS). Since the CSI-RS is located in the same slot as the PDCCH, the slot offset between PDCCH and CSI-RS is zero. The minimal time interval between the last symbol of the PDCCH triggering the aperiodic SRS transmission and the first symbol of the SRS resource is determined according to at least one example/alternative in embodiment 4A. The slot offset between the PDCCH and the SRS transmission is determined according to at least one example/alternative in embodiment 4A. The processing time between AP-CSI-RS reception and AP-SRS transmission needs to be such that the UE can derive/calculate the updated SRS precoding information after the AP-CSI-RS reception.
At least one of the following examples is used for the processing time.
In one example 5B-1, a UE is not expected to update the SRS precoding information if the gap between the last symbol of the reception of the aperiodic NZP-CSI-RS resource and the first symbol of the aperiodic SRS transmission is less than Z OFDM symbols. In one alternative, Z is fixed (e.g., 42). In another alternative, Z is configured to the UE.
In one example 5B-2, a UE is not expected to update the SRS precoding information if the gap between the last symbol of the reception of the aperiodic NZP-CSI-RS resource and the first symbol of the aperiodic SRS transmission is less than Z OFDM symbols, where the OFDM symbols are counted based on the minimum subcarrier spacing between the PDCCH (or AP-CSI-RS) and the AP-SRS. In one alternative, Z is fixed (e.g., 42). In another alternative, Z is configured to the UE.
In one example 5B-3, a UE is not expected to update the SRS precoding information if the gap between the last symbol of the reception of the aperiodic NZP-CSI-RS resource and the first symbol of the aperiodic SRS transmission is less than 42 · q OFDM symbols, where the parameter q is determined according to at least one of the following examples. Note here that μ_PDCCH = μ_CSI-RS.
In one example, q = 2^μ_SRS / 2^μ_PDCCH, or q = ⌈2^μ_SRS / 2^μ_PDCCH⌉, or q = ⌊2^μ_SRS / 2^μ_PDCCH⌋.
In another example, q = 2^(μ_SRS − μ_PDCCH), or q = ⌈2^(μ_SRS − μ_PDCCH)⌉, or q = ⌊2^(μ_SRS − μ_PDCCH)⌋.
In another example, q = max(1, 2^μ_SRS / 2^μ_PDCCH) = 2^max(0, μ_SRS − μ_PDCCH).
In one example 5B-4, a UE is not expected to update the SRS precoding information if the gap between the last symbol of the reception of the aperiodic NZP-CSI-RS resource and the first symbol of the aperiodic SRS transmission is less than Z · q OFDM symbols, where the parameter q is determined according to at least one of the examples in example 5B-3, and Z is either fixed (e.g., 14, 28, 42, or 48) or configured to the UE.

In the 3GPP NR specification, the UL transmission is configured to be either codebook-based or non-codebook-based via the higher layer parameter txConfig in PUSCH-Config set to either 'codebook' or 'nonCodebook.' According to the 3GPP NR specification, the following is supported for codebook based UL transmission. For codebook based transmission, the UE determines its codebook subset based on the TPMI and upon reception of the higher layer parameter ULCodebookSubset or codebookSubset in PUSCH-Config, which may be configured with 'fullAndPartialAndNonCoherent,' or 'partialAndNonCoherent,' or 'nonCoherent,' depending on the UE capability. The maximum transmission rank may be configured by the higher layer parameter ULmaxRank or maxRank in PUSCH-Config.

A UE reporting its capability of 'partialAndNonCoherent' transmission may not expect to be configured by ULCodebookSubset with 'fullAndPartialAndNonCoherent.' A UE reporting its capability of 'nonCoherent' transmission may not expect to be configured by ULCodebookSubset with 'fullAndPartialAndNonCoherent' or with 'partialAndNonCoherent.' A UE may not expect to be configured with the higher layer parameter ULCodebookSubset set to 'partialAndNonCoherent' when two antenna ports are configured. In the present disclosure, 'fullAndPartialAndNonCoherent,' 'partialAndNonCoherent,' and 'nonCoherent' are referred to as the three examples of coherence type/capability, where the term "coherence" implies the subset of antenna ports at the UE that can be used to transmit a layer of UL data coherently.
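The restriction that the configured coherence type places on the usable TPMI indices can be sketched directly from the two-antenna-port subsets summarized in TABLE 9 below; the helper itself is illustrative rather than normative.

```python
# TPMI index subsets for 2 antenna ports, transcribed from TABLE 9 below.
# (For 2 ports there is no 'partialAndNonCoherent' configuration.)
TPMI_SUBSETS_2PORT = {
    1: {"nonCoherent": range(0, 2),                    # rank 1: TPMI 0-1
        "fullAndPartialAndNonCoherent": range(0, 6)},  # rank 1: TPMI 0-5
    2: {"nonCoherent": range(0, 1),                    # rank 2: TPMI 0
        "fullAndPartialAndNonCoherent": range(0, 3)},  # rank 2: TPMI 0-2
}

def tpmi_allowed(tpmi: int, rank: int, codebook_subset: str) -> bool:
    """True if the indicated TPMI is usable under the configured subset."""
    return tpmi in TPMI_SUBSETS_2PORT[rank][codebook_subset]

print(tpmi_allowed(4, 1, "nonCoherent"))                   # False: needs coherent ports
print(tpmi_allowed(4, 1, "fullAndPartialAndNonCoherent"))  # True
```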
According to the NR specification, for non-codebook-based UL transmission, the precoding matrix W equals the identity matrix. For codebook-based UL transmission, the precoding matrix W is given by W = 1 for single-layer transmission on a single antenna port, and otherwise by TABLE 3 to TABLE 8. The subsets of TPMI indices for the three coherence types are summarized in TABLE 9 and TABLE 10, where rank = r corresponds to (and is equivalent to) r layers.

The rank (or number of layers) and the corresponding precoding matrix W are indicated to the UE using the TRI and the TPMI, respectively. In one example, this indication is joint, via a field "Precoding information and number of layers" in DCI, e.g., using DCI format 0_1. In another example, this indication is via higher layer RRC signaling. In one example, the mapping between the field "Precoding information and number of layers" and TRI/TPMI is according to NR.

In the tables below, each matrix is written row by row, with rows separated by semicolons.

TABLE 3
Precoding matrix W for single-layer transmission using two antenna ports.
TPMI 0: (1/√2)[1; 0]    TPMI 1: (1/√2)[0; 1]    TPMI 2: (1/√2)[1; 1]
TPMI 3: (1/√2)[1; -1]   TPMI 4: (1/√2)[1; j]    TPMI 5: (1/√2)[1; -j]

TABLE 4
Precoding matrix W for single-layer transmission using four antenna ports with transform precoding disabled.
TPMI 0-3:   (1/2)[1; 0; 0; 0]    (1/2)[0; 1; 0; 0]    (1/2)[0; 0; 1; 0]    (1/2)[0; 0; 0; 1]
TPMI 4-7:   (1/2)[1; 0; 1; 0]    (1/2)[1; 0; -1; 0]   (1/2)[1; 0; j; 0]    (1/2)[1; 0; -j; 0]
TPMI 8-11:  (1/2)[0; 1; 0; 1]    (1/2)[0; 1; 0; -1]   (1/2)[0; 1; 0; j]    (1/2)[0; 1; 0; -j]
TPMI 12-15: (1/2)[1; 1; 1; 1]    (1/2)[1; 1; j; j]    (1/2)[1; 1; -1; -1]  (1/2)[1; 1; -j; -j]
TPMI 16-19: (1/2)[1; j; 1; j]    (1/2)[1; j; j; -1]   (1/2)[1; j; -1; -j]  (1/2)[1; j; -j; 1]
TPMI 20-23: (1/2)[1; -1; 1; -1]  (1/2)[1; -1; j; -j]  (1/2)[1; -1; -1; 1]  (1/2)[1; -1; -j; j]
TPMI 24-27: (1/2)[1; -j; 1; -j]  (1/2)[1; -j; j; 1]   (1/2)[1; -j; -1; j]  (1/2)[1; -j; -j; -1]

TABLE 5
Precoding matrix W for two-layer transmission using two antenna ports with transform precoding disabled.
TPMI 0: (1/√2)[1 0; 0 1]    TPMI 1: (1/2)[1 1; 1 -1]    TPMI 2: (1/2)[1 1; j -j]

TABLE 6
Precoding matrix W for two-layer transmission using four antenna ports with transform precoding disabled.
TPMI 0-2:   (1/2)[1 0; 0 1; 0 0; 0 0]   (1/2)[1 0; 0 0; 0 1; 0 0]   (1/2)[1 0; 0 0; 0 0; 0 1]
TPMI 3-5:   (1/2)[0 0; 1 0; 0 1; 0 0]   (1/2)[0 0; 1 0; 0 0; 0 1]   (1/2)[0 0; 0 0; 1 0; 0 1]
TPMI 6-7:   (1/2)[1 0; 0 1; 1 0; 0 -j]  (1/2)[1 0; 0 1; 1 0; 0 j]
TPMI 8-9:   (1/2)[1 0; 0 1; -j 0; 0 1]  (1/2)[1 0; 0 1; -j 0; 0 -1]
TPMI 10-11: (1/2)[1 0; 0 1; -1 0; 0 -j] (1/2)[1 0; 0 1; -1 0; 0 j]
TPMI 12-13: (1/2)[1 0; 0 1; j 0; 0 1]   (1/2)[1 0; 0 1; j 0; 0 -1]
TPMI 14-15: (1/(2√2))[1 1; 1 1; 1 -1; 1 -1]    (1/(2√2))[1 1; 1 1; j -j; j -j]
TPMI 16-17: (1/(2√2))[1 1; j j; 1 -1; j -j]    (1/(2√2))[1 1; j j; j -j; -1 1]
TPMI 18-19: (1/(2√2))[1 1; -1 -1; 1 -1; -1 1]  (1/(2√2))[1 1; -1 -1; j -j; -j j]
TPMI 20-21: (1/(2√2))[1 1; -j -j; 1 -1; -j j]  (1/(2√2))[1 1; -j -j; j -j; 1 -1]

TABLE 7
Precoding matrix W for three-layer transmission using four antenna ports with transform precoding disabled.
TPMI 0: (1/2)[1 0 0; 0 1 0; 0 0 1; 0 0 0]
TPMI 1: (1/2)[1 0 0; 0 1 0; 1 0 0; 0 0 1]
TPMI 2: (1/2)[1 0 0; 0 1 0; -1 0 0; 0 0 1]
TPMI 3: (1/(2√3))[1 1 1; 1 -1 1; 1 1 -1; 1 -1 -1]
TPMI 4: (1/(2√3))[1 1 1; 1 -1 1; j j -j; j -j -j]
TPMI 5: (1/(2√3))[1 1 1; -1 1 -1; 1 1 -1; -1 1 1]
TPMI 6: (1/(2√3))[1 1 1; -1 1 -1; j j -j; -j j j]

TABLE 8
Precoding matrix W for four-layer transmission using four antenna ports with transform precoding disabled.
TPMI 0: (1/2)[1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]
TPMI 1: (1/(2√2))[1 1 0 0; 0 0 1 1; 1 -1 0 0; 0 0 1 -1]
TPMI 2: (1/(2√2))[1 1 0 0; 0 0 1 1; j -j 0 0; 0 0 j -j]
TPMI 3: (1/4)[1 1 1 1; 1 -1 1 -1; 1 1 -1 -1; 1 -1 -1 1]
TPMI 4: (1/4)[1 1 1 1; 1 -1 1 -1; j j -j -j; j -j -j j]

TABLE 9
TPMI indices for 2 antenna ports.
Rank    nonCoherent    fullAndPartialAndNonCoherent
1       0-1            0-5
2       0              0-2
TABLE 10
TPMI indices for 4 antenna ports.
Rank    nonCoherent    partialAndNonCoherent    fullAndPartialAndNonCoherent
1       0-3            0-11                     0-27
2       0-5            0-13                     0-21
3       0              0-2                      0-6
4       0              0-2                      0-4

In one embodiment 6A1, a UE is configured with a low-resolution dual-stage codebook C1 for codebook-based UL transmission, where the codebook C1 comprises precoding matrices W = W1·W2, where
the first component W1 is a group of L pre-coders/beams/ports, and
the second component W2 is a selection vector which selects 1 pre-coder/beam/port (from the L pre-coders/beams/ports in W1) per layer and, if the UE antennas are dual-polarized, may also select a co-phase value.
An example of such a codebook is the NR Type I CSI codebook.

In one embodiment 6A2, a UE is configured with a high-resolution dual-stage codebook C2 for codebook-based UL transmission, where the codebook C2 comprises precoding matrices W = W1·W2, where
the first component W1 comprises a group of L pre-coders/beams/ports, and
the second component W2 is a combination vector which combines the L pre-coders/beams/ports (in W1) per layer.
An example of such a codebook is the NR Type II CSI codebook. Another example of such a codebook is one in which W1 is a (potentially oversampled) DFT codebook and W2 is the NR UL codebook (either all or a subset of the pre-coders/pre-coding matrices).

If both W1 and W2 are indicated by the gNB to the UE, then at least one of the following alternatives is used for the indication.
In one alternative Alt 6A-1: A joint TPMI indicates both W1 and W2.
In one alternative Alt 6A-2: A joint SRI indicates both W1 and W2. If the number of SRS resources > 1, then the selected SRS resource(s) is (are) also indicated jointly with the SRI.
In one alternative Alt 6A-3: A joint SRI2 indicates both W1 and W2. If the number of SRS resources > 1, then the selected SRS resource(s) is (are) indicated via a separate SRI indication.
In one alternative Alt 6A-4: A first TPMI1 indicates W1, and a second TPMI2 indicates W2.
In one alternative Alt 6A-5: TPMI indicates W1, and SRI indicates W2. If the number of SRS resources > 1, then the selected SRS resource(s) is (are) also indicated jointly with the SRI.
In one alternative Alt 6A-6: TPMI indicates W1, and SRI2 indicates W2. If the number of SRS resources > 1, then the selected SRS resource(s) is (are) indicated via a separate SRI indication.
In one alternative Alt 6A-7: TPMI indicates W2, and SRI indicates W1. If the number of SRS resources > 1, then the selected SRS resource(s) is (are) also indicated jointly with the SRI.
In one alternative Alt 6A-8: TPMI indicates W2, and SRI2 indicates W1. If the number of SRS resources > 1, then the selected SRS resource(s) is (are) indicated via a separate SRI indication.
In one alternative Alt 6A-9: A first SRI1 indicates W1, and a second SRI2 indicates W2. If the number of SRS resources > 1, then the selected SRS resource(s) is (are) also indicated jointly with either SRI1 or SRI2.
In one alternative Alt 6A-10: A first SRI1 indicates W1, and a second SRI2 indicates W2. If the number of SRS resources > 1, then the selected SRS resource(s) is (are) indicated via a separate SRI indication.

If only W1 is indicated by the gNB to the UE (e.g., when W2 is determined by the UE in a transparent manner), then at least one of the following alternatives is used for the indication.
In one alternative Alt 6A-11: TPMI indicates W1.
In one alternative Alt 6A-12: SRI indicates W1. If the number of SRS resources > 1, then the selected SRS resource(s) is (are) also indicated jointly with the SRI.
In one alternative Alt 6A-13: SRI2 indicates W1. If the number of SRS resources > 1, then the selected SRS resource(s) is (are) indicated via a separate SRI indication.
If only W2 is indicated by the gNB to the UE (e.g., when W1 is determined by the UE in a transparent manner), then at least one of the following alternatives is used for the indication.
In one alternative Alt 6A-14: TPMI indicates W2.
In one alternative Alt 6A-15: SRI indicates W2. If the number of SRS resources > 1, then the selected SRS resource(s) is (are) also indicated jointly with the SRI.
In one alternative Alt 6A-16: SRI2 indicates W2. If the number of SRS resources > 1, then the selected SRS resource(s) is (are) indicated via a separate SRI indication.

The W1 indication is in a WB manner, i.e., a single W1 is indicated common to all scheduled PRBs/SBs for UL transmission. The W2 indication, on the other hand, can be either in a WB manner or per SB, i.e., one W2 is indicated for each scheduled PRB/SB. The W1 indication can be via UL-related DCI (e.g., DCI format 0_1 in NR). Alternatively, it is via higher-layer (e.g., RRC) signaling. Alternatively, the W1 indication is via PDSCH. Likewise, the W2 indication can be via UL-related DCI (e.g., DCI format 0_1 in NR). Alternatively, the W2 indication is via higher-layer (e.g., RRC) signaling. Alternatively, the W2 indication is via PDSCH.

In one alternative, the value L in the UL codebooks (C1 and C2) is fixed, for example, L = 1 for C1 and L = 2 for C2. In another alternative, the value L in the UL codebooks (C1 and C2) is configured (e.g., via higher layer RRC signaling), for example, from {1, 2}. In one example, when L = 1 for C1, the UL codebook is the same as the NR Type I codebook for Codebook-Config 1. In one example, when L = 2 for C2, the UL codebook is the same as the Rel. 15 Type II codebook, except that there can be some additional restrictions, such as either one or any combination of the following restrictions.
W2 comprises only a coefficient phase, where the phase codebook is fixed to QPSK (2 bits per coefficient). The coefficient amplitude is assumed to be one.
W2 comprises only a coefficient phase, where the phase codebook is configurable from QPSK (2 bits) and 8PSK (3 bits). The coefficient amplitude is assumed to be one.
W2 comprises a coefficient phase and a coefficient amplitude, where the phase codebook is fixed to QPSK (2 bits per coefficient) and the coefficient amplitude codebook is fixed to {0, 1/2, 1/√2, 1} (2 bits).
W2 comprises a coefficient phase and a coefficient amplitude, where the phase codebook is configurable from QPSK (2 bits) and 8PSK (3 bits) and the coefficient amplitude codebook is fixed to {0, 1/2, 1/√2, 1} (2 bits).
W2 comprises a coefficient phase and a coefficient amplitude, where the phase codebook is fixed to QPSK (2 bits per coefficient) and the coefficient amplitude codebook is fixed to the WB amplitude codebook in the Rel. 15 Type II codebook.
W2 comprises a coefficient phase and a coefficient amplitude, where the phase codebook is configurable from QPSK (2 bits) and 8PSK (3 bits) and the coefficient amplitude codebook is fixed to the WB amplitude codebook in the Rel. 15 Type II codebook.
Only rank 1 is supported.

FIG. 11 illustrates a method for a partial reciprocity based scheme 1100 according to embodiments of the present disclosure. The embodiment of the partial reciprocity based scheme 1100 illustrated in FIG. 11 is for illustration only. FIG. 11 does not limit the scope of this disclosure to any particular implementation of the partial reciprocity based scheme 1100.

In one embodiment 7, a UE is configured with codebook-based UL transmission according to the method illustrated in FIG. 11. As illustrated in FIG. 11, the UE receives higher-layer configuration to transmit N_SRS ≥ 1 SRS resources.
In response, the UE transmits SRS resources according to the configuration. The gNB measures these SRS resources, estimates the UL channel based on the SRS measurement, and then determines/calculates W1 (indicating a group of precoders/beams). The UE receives an indication about W1 (from the gNB). The UE next receives a configuration about CSI-RS measurement (for the W2 calculation). The UE receives/measures the CSI-RS, estimates the DL channel, and (assuming reciprocity) uses it as the UL channel for the W2 calculation. The UE finally transmits the UL transmission using the pre-coder/pre-coding matrix W = W1·W2, where W1 is indicated by the gNB and W2 is determined by the UE. Since W2 is transparent to the gNB/NW, the UE can calculate W2 for each scheduled PRB/SB for UL transmission, i.e., the UL precoding can be applied in a per-PRB/SB manner.

Since W1 is a WB component of the pre-coding matrix W, it can be indicated via higher layer (e.g., RRC) signaling. Alternatively, W1 is indicated via UL-related DCI (e.g., DCI format 0_1 in NR). Also, the W1 indication can be via a separate UL-related DCI parameter, or via an existing UL-related DCI parameter such as TPMI or SRI. The W1 indication can correspond to a fixed rank (transmit rank indicator or TRI) value, for example, rank 1. Or, a rank (TRI) value is also indicated jointly with the W1 indication. Or, a rank (TRI) value is also indicated separately from the W1 indication. In the latter case, at least one of the following indication alternatives can be used.
In one alternative Alt 7-1: The W1 indication is via higher-layer signaling and the TRI indication is via DCI. Their respective indication is either joint using an existing parameter or separate using a new parameter.
In one alternative Alt 7-2: The W1 indication is via DCI and the TRI indication is via higher-layer signaling. Their respective indication is either joint using an existing parameter or separate using a new parameter.
In one alternative Alt 7-3: Both the W1 and TRI indications are via DCI, either jointly using a single parameter or separately using two parameters.
In one alternative Alt 7-4: Both the W1 and TRI indications are via higher-layer signaling, either jointly using a single parameter or separately using two parameters.

The W2 calculation at the UE either follows the rank indicated via the TRI or uses a fixed rank (e.g., rank 1). In an alternative, the TRI is indicated via higher layer signaling, and W1 and W2 are calculated/indicated accordingly. The other UL-related parameters such as the MCS can be indicated jointly with the W1 indication, or they are indicated via a separate indication (e.g., via DCI). The SRS and CSI-RS resources can be linked (or associated with each other) via higher layer configuration of parameters such as associatedSRS in CSI-RS-ResourceSet for the CSI-RS resource and associatedCSI-RS in SRS-ResourceSet for the SRS resource.

FIG. 12 illustrates another method for a partial reciprocity based scheme 1200 according to embodiments of the present disclosure. The embodiment of the partial reciprocity based scheme 1200 illustrated in FIG. 12 is for illustration only. FIG. 12 does not limit the scope of this disclosure to any particular implementation of the partial reciprocity based scheme 1200.

In one embodiment 7A, as illustrated in FIG. 12, which is a variation of embodiment 7, the UE is further configured to transmit W2 to the gNB, which uses it to determine parameters such as the MCS for UL transmission assuming W = W1·W2 as the UL pre-coder/pre-coding matrix.
FIG.12illustrates another method for a partial reciprocity based scheme1200according to embodiments of the present disclosure. The embodiment of the partial reciprocity based scheme1200illustrated inFIG.12is for illustration only.FIG.12does not limit the scope of this disclosure to any particular implementation of the partial reciprocity based scheme1200. In one embodiment 7A, as illustrated inFIG.12, which is a variation of embodiment 7, the UE is further configured to transmit W2 to the gNB, which uses it to determine parameters such as MCS for UL transmission assuming W=W1W2 as UL pre-coder/pre-coding matrix. The UE receives MCS (e.g., via UL-related DCI) and transmits UL data accordingly.
FIG.13illustrates yet another method for a partial reciprocity based scheme1300according to embodiments of the present disclosure. The embodiment of the partial reciprocity based scheme1300illustrated inFIG.13is for illustration only.FIG.13does not limit the scope of this disclosure to any particular implementation of the partial reciprocity based scheme1300. In one embodiment 8, as illustrated inFIG.13, a UE is configured with codebook-based UL transmission. The UE receives a configuration (e.g., via higher layer signaling) about CSI-RS measurement (for W1 calculation). The UE receives/measures CSI-RS, estimates DL channel, and (assuming reciprocity) uses it as UL channel for W1 calculation. The calculated W1 is used to pre-code NSRS≥1 SRS resources, whose configuration is received by the UE via higher layer signaling, either jointly with or separate from CSI-RS configuration. The UE transmits SRS resources (pre-coded with W1) according to the configuration. The gNB measures these SRS resources, estimates UL channel based on the SRS measurement, and then determines/calculates the W2 component of the UL pre-coder. The UE receives an indication about W2 (from the gNB). The UE finally transmits UL transmission using pre-coder/pre-coding matrix W=W1W2, where W2 is indicated by the gNB (hence, it is non-transparent), and W1 is determined by the UE (hence, it is transparent). The W2 indication can be WB, i.e., a single W2 is indicated for all scheduled PRBs/SBs for UL transmission. Alternatively, the gNB/NW can calculate W2 for each scheduled PRB/SB for UL transmission, i.e., the UL precoding can be applied in a per PRB/SB manner. The use of multiple pre-coded SRS resources (that are pre-coded using W1 derived based on CSI-RS measurement) can, for instance, be for capturing UL channel rank space or avoiding UL channel null space. Let X=number of precoders/beams in W1.
In one sub-embodiment 8-1, NSRS=X, and each SRS resource comprises 1 port. The W2 indicates a pre-coder which combines all X SRS ports (equivalently, all precoders/beams in W1) for each layer using the W2 of high-resolution codebook C2in embodiment A2.
In one sub-embodiment 8-2, NSRS=1, and the SRS resource comprises X ports. The W2 indicates a pre-coder which combines all X SRS ports (equivalently, all precoders/beams in W1) for each layer using the W2 of high-resolution codebook C2in embodiment A2.
In one sub-embodiment 8-3, NSRS=Y, and each SRS resource comprises X/Y ports. The W2 indicates a pre-coder which combines all X SRS ports (equivalently, all precoders/beams in W1) for each layer using the W2 of high-resolution codebook C2in embodiment A2.
In one sub-embodiment 8-4, NSRS=X, and each SRS resource comprises 1 port. The W2 indicates a pre-coder which selects 1 out of X SRS ports (equivalently, 1 precoder/beam in W1) for each layer using the W2 of low-resolution codebook C1in embodiment A1.
In one sub-embodiment 8-5, NSRS=1, and the SRS resource comprises X ports. The W2 indicates a pre-coder which selects 1 out of X SRS ports (equivalently, 1 precoder/beam in W1) for each layer using the W2 of low-resolution codebook C1in embodiment A1.
In one sub-embodiment 8-6, NSRS=Y, and each SRS resource comprises X/Y ports. The W2 indicates a pre-coder which selects 1 out of X SRS ports (equivalently, 1 precoder/beam in W1) for each layer using the W2 of low-resolution codebook C1in embodiment A1.
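The sub-embodiments above differ only in whether W2 combines all X SRS ports (high-resolution codebook C2) or selects one of them (low-resolution codebook C1). A minimal sketch of the two W2 shapes, with hypothetical helper names:

```python
# Illustrative contrast between sub-embodiments 8-1..8-3 (combination) and
# 8-4..8-6 (selection), assuming X SRS ports pre-coded with the X beams of W1.
import numpy as np

def w2_combine(coeffs):
    """High-resolution C2-style W2: combine all X ports with quantized
    coefficients (one column = one layer)."""
    w2 = np.asarray(coeffs, dtype=complex).reshape(-1, 1)
    return w2 / np.linalg.norm(w2)

def w2_select(X, port_index):
    """Low-resolution C1-style W2: select one of the X ports (one W1 beam)."""
    w2 = np.zeros((X, 1), dtype=complex)
    w2[port_index, 0] = 1.0
    return w2

# With X = 4 ports: combine all four, or select port 2 only.
w2_c = w2_combine([1, 1j, -1, -1j])
w2_s = w2_select(4, 2)
```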
The W2 indication is according to one of Alt 6A-14, 6A-15, and 6A-16. Alternatively, a generalized (joint) SRI can be used to indicate both SRS resource selection and W2 for the selected SRS resources. That is, this generalized SRI essentially functions as a TPMI across the selected SRS resources. Alternatively, a generalized (joint) TPMI can be used to indicate both SRS resource selection and W2 for the selected SRS resources. That is, this generalized TPMI essentially functions as a TPMI across the selected SRS resources. Alternatively, an SRI can be used to indicate SRS resource selection, and a TPMI can be used to indicate W2 for the selected SRS resources. The W2 indication can correspond to a fixed rank (transmit rank indicator or TRI) value, for example, rank 1. Or, a rank (TRI) value is also indicated jointly with the W2 indication. Or, a rank (TRI) value is also indicated separately from the W2 indication. In the latter case, at least one of the following indication alternatives can be used.
In one alternative Alt 8-1: W2 indication is via higher-layer signaling and TRI indication is via DCI. Their respective indication is either joint using an existing parameter or separate using a new parameter.
In one alternative Alt 8-2: W2 indication is via DCI and TRI indication is via higher-layer signaling. Their respective indication is either joint using an existing parameter or separate using a new parameter.
In one alternative Alt 8-3: Both W2 and TRI indication are via DCI, either jointly using a single parameter or separately using two parameters.
In one alternative Alt 8-4: Both W2 and TRI indication are via higher-layer signaling, either jointly using a single parameter or separately using two parameters.
The W1 calculation at the UE has a fixed rank (e.g., rank 1). In an alternative, TRI is indicated via higher layer signaling, and W1 and W2 are calculated/indicated accordingly. The SRS and CSI-RS resources can be linked (or associated with each other) via higher layer configuration of parameters such as associatedSRS in CSI-RS-ResourceSet for CSI-RS resource and associatedCSI-RS in SRS-ResourceSet for SRS resource. In one embodiment 8A, a variation of embodiment 8, the UE is configured to transmit, to the gNB, W1 and SRS resources (that are not pre-coded with W1); the gNB uses them to determine parameters such as MCS for UL transmission assuming W=W1W2 as UL pre-coder/pre-coding matrix. The UE receives MCS (e.g. via UL-related DCI) and transmits UL data accordingly.
FIG.14illustrates still another method for a partial reciprocity based scheme1400according to embodiments of the present disclosure. The embodiment of the partial reciprocity based scheme1400illustrated inFIG.14is for illustration only.FIG.14does not limit the scope of this disclosure to any particular implementation of the partial reciprocity based scheme1400. In one embodiment 9, as illustrated inFIG.14, a UE is configured with codebook-based UL transmission. The UE receives higher-layer configuration for the first SRS transmission comprising NSRS,1≥1 SRS resources. In response, the UE transmits the first SRS resources according to the configuration. The gNB measures these SRS resources, estimates UL channel based on the SRS measurement, and then determines/calculates W1 (indicating a group of precoders/beams). The UE receives an indication about W1 (from the gNB).
The UE also receives higher-layer configuration for the second SRS transmission comprising NSRS,2≥1 SRS resources, either jointly with or separate from the first SRS configuration. The UE transmits the second SRS resources (pre-coded with W1) according to the configuration. The gNB measures these SRS resources, estimates UL channel based on the SRS measurement, and then determines/calculates the W2 component of the UL pre-coder. The UE receives an indication about W2 (from the gNB). The UE finally transmits UL transmission using pre-coder/pre-coding matrix W=W1W2. The first SRS resources may or may not be pre-coded, but the second SRS resources are pre-coded based on W1 (e.g. via TPMI1). The rank (TRI) indication can be according to at least one of the following alternatives.
In one alternative Alt 9-1 (with W1): TRI is indicated either jointly or separately with the W1 indication (e.g. via TPMI1). The W2 indication either follows the rank indicated via TRI or has a fixed rank (e.g. rank 1).
In one alternative Alt 9-2 (with W2): TRI is indicated either jointly or separately with the W2 indication (e.g. via TPMI2). The W1 indication can assume a fixed rank (e.g. rank 1).
In one alternative Alt 9-3 (with both W1 and W2): both TRI1 and TRI2 are indicated. TRI1 is indicated either jointly or separately with the W1 indication. TRI2 is indicated either jointly or separately with the W2 indication.
FIG.15illustrates a flow chart of a method1500for operating a user equipment (UE) for aperiodic channel state information reference signal (CSI-RS) reception, as may be performed by a UE such as UE116, according to embodiments of the present disclosure. The embodiment of the method1500illustrated inFIG.15is for illustration only.FIG.15does not limit the scope of this disclosure to any particular implementation. As illustrated inFIG.15, the method1500begins at step1502. In step1502, the UE (e.g.,111-116as illustrated inFIG.1) receives aperiodic CSI-RS configuration information including a CSI-RS triggering offset. In step1504, the UE receives downlink control information (DCI) via a physical downlink control channel (PDCCH), where the DCI triggers an aperiodic CSI-RS. In step1506, the UE determines the CSI-RS triggering offset based on the CSI-RS configuration information. The CSI-RS triggering offset is configured from a first set when μPDCCH<μCSIRS, and the CSI-RS triggering offset is configured from a second set when μPDCCH>μCSIRS, wherein μPDCCHand μCSIRSare subcarrier spacing configurations for the PDCCH and the aperiodic CSI-RS, respectively. In step1508, the UE receives the aperiodic CSI-RS in a slot Ksdetermined based on the CSI-RS triggering offset, a slot containing the triggering DCI, and the subcarrier spacing configurations (μPDCCHand μCSIRS). In one embodiment, the first set is {0, 1, 2, . . . , 31} and the second set is {0, 1, 2, 3, 4, 16, 24}. In one embodiment, the slot Ks=⌊n·2^μCSIRS/2^μPDCCH⌋+X, where n is the slot containing the triggering DCI, X is the CSI-RS triggering offset, and ⌊·⌋ is the floor function.
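A minimal sketch of the slot determination just stated, assuming nonnegative integer inputs; the function name is illustrative.

```python
# K_s = floor(n * 2^mu_csirs / 2^mu_pdcch) + X, with n the slot of the
# triggering DCI (PDCCH numerology) and X the configured triggering offset.
def csirs_slot(n: int, X: int, mu_pdcch: int, mu_csirs: int) -> int:
    return (n * 2 ** mu_csirs) // (2 ** mu_pdcch) + X

# Example: DCI in slot 7 at 15 kHz (mu=0) triggering CSI-RS at 30 kHz (mu=1)
# with offset X=2 gives K_s = floor(7*2/1) + 2 = 16.
assert csirs_slot(7, 2, 0, 1) == 16
```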
In one embodiment, the processor is further configured to determine a starting orthogonal frequency-division multiplexing (OFDM) symbol for the aperiodic CSI-RS reception, and the transceiver is further configured to start the aperiodic CSI-RS reception from the starting OFDM symbol. For μPDCCH<μCSIRS, the starting OFDM symbol is determined such that the CSI-RS reception starts no earlier than a first OFDM symbol of a CSI-RS slot that starts at least Δ PDCCH symbols after an end of the PDCCH triggering the aperiodic CSI-RS. For μPDCCH>μCSIRS, the starting OFDM symbol is determined such that the CSI-RS reception starts no earlier than at least Δ PDCCH symbols after the end of the PDCCH triggering the aperiodic CSI-RS. In one embodiment, when μPDCCH=0 indicating a subcarrier spacing of 15 kHz, Δ=4. In one embodiment, the processor is further configured to determine a quasi co-location (QCL) assumption for the aperiodic CSI-RS reception based on a condition on a scheduling offset δ between a last symbol of the PDCCH triggering the aperiodic CSI-RS and a first symbol of the aperiodic CSI-RS, where the condition is given by: when δ<α, the QCL assumption is a QCL assumption for a PDSCH if the PDSCH is received in the same OFDM symbols as the aperiodic CSI-RS, and the QCL assumption is a QCL assumption for a PDCCH otherwise; when δ≥α, the QCL assumption is indicated via the PDCCH triggering the aperiodic CSI-RS. The transceiver is further configured to apply the determined QCL assumption for the aperiodic CSI-RS reception, where α is a threshold and the QCL assumption corresponds to QCL-TypeD indicating a beam to receive the aperiodic CSI-RS. In one embodiment, the threshold α=Y+d·2^μCSIRS/2^μPDCCH, wherein Y is a UE reported threshold beamSwitchTiming taken from a set that includes {14, 28, 48}, and wherein d is an additional delay such that d=0 when μPDCCH≥μCSIRSand d=m when μPDCCH<μCSIRS. In one embodiment, when μPDCCH=0 indicating a subcarrier spacing of 15 kHz, m=4; when μPDCCH=1 indicating a subcarrier spacing of 30 kHz, m=4; and when μPDCCH=2 indicating a subcarrier spacing of 60 kHz, m=8.
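The threshold and QCL selection above can be summarized in a short sketch; the m table follows the values stated for μPDCCH=0, 1, 2, and the function names are illustrative.

```python
# alpha = Y + d * 2^mu_csirs / 2^mu_pdcch, with Y the UE-reported
# beamSwitchTiming (e.g., 14, 28, or 48 symbols) and delta the scheduling
# offset in symbols between the triggering PDCCH and the CSI-RS.
M_TABLE = {0: 4, 1: 4, 2: 8}  # m for mu_pdcch = 0, 1, 2 (when mu_pdcch < mu_csirs)

def qcl_threshold(Y: int, mu_pdcch: int, mu_csirs: int) -> float:
    d = 0 if mu_pdcch >= mu_csirs else M_TABLE[mu_pdcch]
    return Y + d * 2 ** mu_csirs / 2 ** mu_pdcch

def qcl_assumption(delta: int, alpha: float, pdsch_overlaps_csirs: bool) -> str:
    if delta >= alpha:
        return "indicated via triggering PDCCH"
    return "PDSCH QCL" if pdsch_overlaps_csirs else "PDCCH QCL"

# Example: Y=14, mu_pdcch=0, mu_csirs=1 -> alpha = 14 + 4*2 = 22.
alpha = qcl_threshold(14, 0, 1)
```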
FIG.16illustrates a flow chart of another method1600, as may be performed by a base station (BS) such as BS102, according to embodiments of the present disclosure. The embodiment of the method1600illustrated inFIG.16is for illustration only.FIG.16does not limit the scope of this disclosure to any particular implementation. As illustrated inFIG.16, the method1600begins at step1602. In step1602, the BS (e.g.,101-103as illustrated inFIG.1) generates aperiodic channel state information reference signal (CSI-RS) configuration information and downlink control information (DCI). In step1604, the BS transmits the aperiodic CSI-RS configuration information including a CSI-RS triggering offset. In step1606, the BS transmits the DCI via a physical downlink control channel (PDCCH), where the DCI triggers an aperiodic CSI-RS. In step1608, the BS transmits the aperiodic CSI-RS in a slot Ks. The CSI-RS triggering offset is configured from a first set when μPDCCH<μCSIRS, and from a second set when μPDCCH>μCSIRS, where μPDCCHand μCSIRSare subcarrier spacing configurations for the PDCCH and the aperiodic CSI-RS, respectively. The slot Ksis determined based on the CSI-RS triggering offset, a slot containing the triggering DCI, and the subcarrier spacing configurations (μPDCCHand μCSIRS). In one embodiment, the first set is {0, 1, 2, . . . , 31} and the second set is {0, 1, 2, 3, 4, 16, 24}. In one embodiment, the slot Ks=⌊n·2^μCSIRS/2^μPDCCH⌋+X, where n is the slot containing the triggering DCI, X is the CSI-RS triggering offset, and ⌊·⌋ is the floor function. In one embodiment, a starting orthogonal frequency-division multiplexing (OFDM) symbol for an aperiodic CSI-RS reception is determined based on the CSI-RS configuration information, and the aperiodic CSI-RS reception is started from the starting OFDM symbol. For μPDCCH<μCSIRS, the starting OFDM symbol is determined such that the CSI-RS reception starts no earlier than a first OFDM symbol of a CSI-RS slot that starts at least Δ PDCCH symbols after an end of the PDCCH triggering the aperiodic CSI-RS. For μPDCCH>μCSIRS, the starting OFDM symbol is determined such that the CSI-RS reception starts no earlier than at least Δ PDCCH symbols after the end of the PDCCH triggering the aperiodic CSI-RS. In one embodiment, when μPDCCH=0 indicating a subcarrier spacing of 15 kHz, Δ=4. In one embodiment, a quasi co-location (QCL) assumption for aperiodic CSI-RS reception is determined based on a condition on a scheduling offset δ between a last symbol of the PDCCH triggering the aperiodic CSI-RS and a first symbol of the aperiodic CSI-RS, where the condition is given by: when δ<α, the QCL assumption is a QCL assumption for a PDSCH if the PDSCH is received in the same OFDM symbols as the aperiodic CSI-RS, and the QCL assumption is a QCL assumption for a PDCCH otherwise; when δ≥α, the QCL assumption is indicated via the PDCCH triggering the aperiodic CSI-RS. The determined QCL assumption for the aperiodic CSI-RS reception is applied, where α is a threshold and the QCL assumption corresponds to QCL-TypeD indicating a beam to receive the aperiodic CSI-RS. In one embodiment, the threshold α=Y+d·2^μCSIRS/2^μPDCCH, wherein Y is a UE reported threshold beamSwitchTiming taken from a set that includes {14, 28, 48}, and wherein d is an additional delay such that d=0 when μPDCCH≥μCSIRSand d=m when μPDCCH<μCSIRS. In one embodiment, when μPDCCH=0 indicating a subcarrier spacing of 15 kHz, m=4; when μPDCCH=1 indicating a subcarrier spacing of 30 kHz, m=4; and when μPDCCH=2 indicating a subcarrier spacing of 60 kHz, m=8. Although the present disclosure has been described with an exemplary embodiment, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims. None of the description in this application should be read as implying that any particular element, step, or function is an essential element that must be included in the claims scope. The scope of patented subject matter is defined by the claims.
MODE(S) FOR CARRYING OUT THE INVENTION
Hereinafter, (a) preferred embodiment(s) of the present disclosure will be described in detail with reference to the appended drawings. Note that, in this specification and the appended drawings, structural elements that have substantially the same function and structure are denoted with the same reference numerals, and repeated explanation of these structural elements is omitted. Note that description will be given in the following order.
1. Overview of wireless LAN system
2. Configuration of device
3. Operation of device
4. Modifications
5. Application examples
6. Supplemental remarks
7. Conclusion
1. OVERVIEW OF WIRELESS LAN SYSTEM
An embodiment of the present disclosure relates to a wireless LAN system. First, an overview of a wireless LAN system according to an embodiment of the present disclosure is described with reference toFIGS.1to8.
1-1. Configuration of Wireless LAN System
FIG.1illustrates a configuration of a wireless LAN system according to an embodiment of the present disclosure. As illustrated inFIG.1, the wireless LAN system according to an embodiment of the present disclosure includes access point devices (hereinafter referred to as "access point (AP)" for convenience)200and station devices (hereinafter referred to as "station (STA)" for convenience)100. Then, one AP200and one or more STAs100constitute a basic service set (hereinafter referred to as "basic service set (BSS)" for convenience)10. The wireless LAN system according to an embodiment of the present disclosure may be installed in any place. For example, the wireless LAN system according to the present embodiment may be installed in office buildings, housing, commercial facilities, public facilities, or the like. In addition, an area of the BSS10according to the present embodiment may overlap with an area of another BSS10using an overlapping frequency channel (hereinafter referred to as "overlap basic service set (OBSS)" for convenience); in that case, a signal transmitted from the STA100located in the overlap area may interfere with a signal transmitted from the OBSS. When description is given using the example ofFIG.1, an area of the BSS10aoverlaps with part of an area of the BSS10bthat is an OBSS, and the STA100band the STA100care located in the overlap area. In this case, a signal transmitted from the STA100bbelonging to the BSS10amay interfere with a signal transmitted from the AP200bor the STA100cbelonging to the BSS10b. In addition, a signal transmitted from the STA100cbelonging to the BSS10bmay interfere with a signal transmitted from the AP200aor the STA100bbelonging to the BSS10a. The AP200according to the present embodiment is connected to an external network, and provides communication with the external network for the STA100. For example, the AP200is connected to the Internet, and provides communication between the STA100and a device on the Internet or a device connected via the Internet. The STA100according to the present embodiment is a wireless device that communicates with the AP200. The STA100may be any wireless device. For example, the STA100may be a display with a display function, a memory with a storage function, a keyboard and a mouse with an input function, a speaker with a sound output function, or a smartphone with a function of executing advanced calculation processing.
1-2. Background
Then, the background of the present disclosure is described.
Before wireless LAN systems became widely used, an AP managed each BSS by setting a frequency channel in a manner that the frequency band to be used did not overlap with another BSS; thus, the possibility of signals transmitted from the BSSs interfering with each other was low. However, in recent years, the widespread use of wireless LAN systems has led to an increase in the number of cases in which frequency bands used in a plurality of adjacent BSSs overlap, which makes signals transmitted from the BSSs more likely to interfere with each other. To cope with such a situation, the following method has been considered: an AP acquires interference information such as parameter information of a signal transmitted from an OBSS, and changes parameters in communication in a BSS to which the own device belongs (hereinafter referred to as "own BSS" for convenience) on the basis of the interference information, thereby preventing occurrence of interference. Examples include a method in which the AP changes transmission power to a low value on the basis of information of interference with the OBSS, and makes a radio wave reachable range smaller, thereby preventing occurrence of interference. Here, Patent Literature 1 and Patent Literature 2 disclose examples of methods for collecting, managing, and using interference information. Hence, the disclosure of Patent Literature 1 will now be described with reference toFIG.2.FIG.2illustrates a configuration of a wireless LAN system according to Patent Literature 1. As illustrated inFIG.2, in the wireless LAN system according to Patent Literature 1, a BSS1includes an AP1, an STA1, and an STA2, and a BSS2includes an AP2, an STA3, and an STA4. Then, an area of the BSS1overlaps with part of an area of the BSS2that is an OBSS, and the STA2and the STA3are located in the overlap area. In addition, the wireless LAN system according to Patent Literature 1 includes a database connected to the AP1and the AP2via a network. In the wireless LAN system, the database acquires interference information from each AP and manages the interference information. Then, each AP acquires interference information from the database, and changes parameters in communication in the own BSS on the basis of the interference information, thereby preventing occurrence of interference. In addition, although not illustrated, in the disclosure of Patent Literature 2, a monitoring server connected to each AP via a network receives interference information from the AP and manages the interference information. Then, a power control device connected to the monitoring server acquires interference information from the monitoring server, and decides transmission power of each AP on the basis of the interference information. As described above, in the disclosure of Patent Literature 1 and the disclosure of Patent Literature 2, a management device that collects and manages interference information (referring to the database in Patent Literature 1 and referring to the monitoring server in Patent Literature 2) exists separately from an AP, and the AP acquires interference information from the management device. Here, for example, it is considered to be inappropriate, in terms of cost-effectiveness, to take the trouble to provide a management device even in the case where the number of wireless LANs is small.
In addition, from another viewpoint, in the case where a malfunction occurs in a network connecting the management device and each AP, each AP cannot acquire interference information from the management device, and thus cannot perform interference control. Hence, the disclosing party of the present case has devised the present disclosure by focusing on the above circumstances. The AP200according to an embodiment of the present disclosure can grasp interference information without using a management device. Then, the AP200can exchange the interference information with another AP200. Furthermore, the AP200can appropriately perform interference control on the basis of the interference information. Described below are a functional overview, a configuration, operation, modifications, and application examples of a wireless LAN system according to an embodiment of the present disclosure. 1-3. Functional Overview of Wireless LAN System The background of the present disclosure has been described above. Now, a functional overview of a wireless LAN system according to an embodiment of the present disclosure will be described. In the case of receiving signals of the own BSS or an OBSS, the STA100in the wireless LAN system according to the present embodiment reports parameter information regarding these signals to the AP200, instead of a management device as in Patent Literatures. More specifically, in the case of receiving signals of the own BSS or an OBSS, the STA100stores parameter information regarding a modulation scheme, transmission power, a BSS identifier, a received signal strength indicator (RSSI), a transmission path utilization time, or the like in a state where the own BSS is distinguished from an OBSS, and reports the parameter information to the AP200. Here, an overview of the operation of the STA100acquiring parameter information is described with reference toFIG.3.FIG.3is a sequence diagram illustrating the operation of the STA100according to the present embodiment acquiring parameter information. In step S1000, it is assumed that, in the case where the STA100atransmits a signal to the AP200a, not only the AP200abut also the STA100breceives the signal. In this case, the STA100bstores parameter information of the signal as parameter information of the own BSS. Note that also the AP200athat has received the signal stores parameter information of the signal as parameter information of the own BSS. In step S1004, the STA100btransmits a signal to the AP200a. Then, it is assumed that not only the AP200abut also the STA100creceives the signal. In this case, the STA100cstores parameter information of the signal as parameter information of an OBSS. Note that the AP200athat has received the signal stores parameter information of the signal as parameter information of the own BSS. In step S1008, the STA100ctransmits a signal to the AP200b. Then, it is assumed that not only the AP200bbut also the STA100breceives the signal. In this case, the STA100bstores parameter information of the signal as parameter information of an OBSS. Note that the AP200bthat has received the signal stores parameter information of the signal as parameter information of the own BSS. In step S1012, the STA100dtransmits a signal to the AP200b. Then, it is assumed that not only the AP200bbut also the STA100creceives the signal. In this case, the STA100cstores parameter information of the signal as parameter information of the own BSS. 
Note that also the AP200bthat has received the signal stores parameter information of the signal as parameter information of the own BSS. As described above, each STA100acquires parameter information of the own BSS or parameter information of an OBSS. Now, an overview of the operation of the STA100reporting parameter information to the AP200will be described with reference toFIG.4.FIG.4is a sequence diagram illustrating the operation of the STA100according to the present embodiment transmitting parameter information to the AP200. In step S1100, the STA100bgenerates a frame including parameter information of the own BSS or an OBSS, and transmits the frame to the AP200a. This enables the AP200ato grasp that an OBSS exists and that the STA100bhas received a signal of the OBSS. Thus, the AP200acan perform interference control by changing parameters such as transmission power, a modulation scheme, or a frequency band. In addition, in step S1104, the STA100ccan grasp that parameter information has been reported from the STA100bto the AP200a. Thus, for example, the STA100cmay be triggered by the reporting of parameter information from the STA100bto the AP200ato report parameter information to the AP200b. In step S1108, the STA100cgenerates a frame including parameter information of the own BSS or an OBSS, and transmits the frame to the AP200b. This enables the AP200bto grasp that an OBSS exists and that the STA100chas received a signal of the OBSS. Thus, as described above, the AP200bcan perform interference control by changing parameters such as transmission power, a modulation scheme, or a frequency band to be used. In addition, in step S1112, as described above, the STA100bcan grasp that parameter information has been reported from the STA100cto the AP200b. As described above, the AP200can acquire parameter information of a signal of the own BSS or an OBSS from each STA100. Then, the AP200stores these pieces of parameter information in association with identification information of the acquisition source STA100. Hereinafter, parameter information that the AP200acquires from each STA100and stores will be referred to as aggregate parameter information. Aggregate parameter information may be information obtained by editing parameter information reported from each STA100, or may be, of course, information obtained by merely associating identification information of the acquisition source STA100with parameter information reported from each STA100. Then, the APs200exchange individually stored aggregate parameter information, thereby grasping parameter information set for different BSSs and an interference situation between BSSs. Now, an overview of the operation of the APs200exchanging aggregate parameter information will be described with reference toFIG.5.FIG.5is a sequence diagram illustrating the operation of the APs200according to the present embodiment exchanging aggregate parameter information. In step S1200, the AP200atransmits aggregate parameter information, and the AP200breceives the aggregate parameter information. This enables the AP200bto grasp parameter information set in the BSS10aand influence of interference received by a device of the BSS10a. Thus, the AP200bcan appropriately perform interference control by changing parameter information. In addition, in step S1204, the AP200btransmits aggregate parameter information, and the AP200areceives the aggregate parameter information. 
As described above, this enables the AP200ato grasp parameter information set in the BSS10band influence of interference received by a device of the BSS10b, and to appropriately perform interference control by changing parameter information. As described above, in the wireless LAN system according to the present embodiment, the STA100reports parameter information of the own BSS or an OBSS to the AP200, which enables the AP200to grasp interference information without using a management device. In addition, the AP200can aggregate parameter information reported from the STA100, and exchange aggregate parameter information between the APs200, thereby grasping parameter information set in different BSSs and an interference situation between BSSs. Then, the AP200can appropriately perform interference control by changing parameter information of the own BSS on the basis of aggregate parameter information.
1-4. Frame Configuration
The functional overview of the wireless LAN system according to an embodiment of the present disclosure has been described above. Now, a configuration of a frame transmitted and received by the wireless LAN system according to the present embodiment will be described with reference toFIGS.6to8.
FIG.6illustrates a configuration of a frame transmitted and received in the wireless LAN system according to the present embodiment. As illustrated inFIG.6, a frame transmitted and received by the wireless LAN system according to the present embodiment is a PPDU including Preamble, PLCP Header, and MPDU. The PLCP Header includes L-SIG and HE-SIG. The MPDU includes MAC Header, Frame Body, and Frame Check Sequence (FCS).
FIG.7illustrates a configuration of the PLCP Header inFIG.6. As illustrated inFIG.7, the PLCP Header includes BSS Color, Tx Power, MCS Index, Uplink Indicator, and the like. The BSS Color is information for identifying a BSS of a transmitted and received signal. For example, BSS Color of a signal transmitted and received in a certain BSS contains BSS Color information corresponding to the BSS, and BSS Color of aggregate parameter information or the like transmitted and received between different BSSs contains wildcard BSS Color information. The STA100or the AP200that has received a signal determines whether or not the signal is a signal of the own BSS, or whether or not the signal is a signal communicated across BSSs, on the basis of BSS Color. In addition, the Tx Power is transmission power information. In addition, the MCS Index is obtained by indexing a combination of a modulation scheme, a coding rate, and the like. In addition, the Uplink Indicator is a signal transmission direction indicator; for example, it indicates that the signal is an uplink signal in the case where the Uplink Indicator is 1, and indicates that the signal is a downlink signal in the case where the Uplink Indicator is 0.
FIG.8illustrates a configuration of the MAC Header inFIG.6. As illustrated inFIG.8, the MAC Header includes Frame Control, Address1to Address4, Sequence Control, QoS Control, HT Control, and the like. The Frame Control contains information of a protocol version, a frame time, or the like, and Address1to Address4contain information of a BSSID, a transmission source address, a destination address, or the like. The Sequence Control contains a sequence number, the QoS Control contains a QoS parameter, and the HT Control contains a high-speed communication parameter.
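As an illustration of the PLCP Header fields just described, the following sketch models them as a small container and applies the BSS Color test used to distinguish own-BSS signals. The wildcard value and all names are assumptions made for the example, not the patent's encoding.

```python
# Illustrative sketch only: container for the PLCP Header fields above.
from dataclasses import dataclass

WILDCARD_BSS_COLOR = 0  # assumed wildcard value for frames sent across BSSs

@dataclass
class PlcpHeader:
    bss_color: int      # identifies the BSS of the transmitted/received signal
    tx_power: int       # transmission power information
    mcs_index: int      # indexed modulation scheme / coding rate combination
    uplink: bool        # True (1) = uplink signal, False (0) = downlink signal

def is_own_bss(header: PlcpHeader, own_color: int) -> bool:
    """A received signal belongs to the own BSS when its BSS Color matches
    (and is not the wildcard used for inter-BSS frames)."""
    return header.bss_color == own_color and header.bss_color != WILDCARD_BSS_COLOR
```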
2. CONFIGURATION OF DEVICE
The functional overview of the wireless LAN system according to an embodiment of the present disclosure has been described above. Now, configurations of the STA100and the AP200according to the present embodiment will be described with reference toFIG.9.FIG.9illustrates configurations of the STA100and the AP200according to the present embodiment.
2-1. Configuration of STA
First, a configuration of the STA100is described. As illustrated inFIG.9, the STA100includes a wireless communication unit110, a data processing unit120, and a control unit130.
(Wireless Communication Unit)
As illustrated inFIG.9, the wireless communication unit110includes an antenna control unit111, a reception processing unit112, and a transmission processing unit113, and functions as a reception unit and a reporting unit. The antenna control unit111controls transmission and reception of signals via at least one antenna. More specifically, the antenna control unit111provides the signal received via the antenna to the reception processing unit112, and transmits the signal generated by the transmission processing unit113via the antenna. The reception processing unit112performs frame reception processing on the basis of the signal provided from the antenna control unit111. For example, the reception processing unit112outputs a baseband reception signal by performing analog processing and down-conversion on a signal obtained from an antenna. Then, the reception processing unit112calculates correlation between a predetermined signal pattern and the reception signal, while shifting the reception signal that is a target of computation on a time axis, and detects a preamble on the basis of appearance of a peak of correlation. Thus, the reception processing unit112can detect a signal of the own BSS, a signal of an OBSS, or the like. In addition, the reception processing unit112acquires a frame by performing demodulation, decoding, and the like on the baseband reception signal, and provides the acquired frame to a received frame analysis unit121. In addition, the reception processing unit112provides information regarding success or failure of reception processing to an operation control unit131. For example, in the case of failing in reception processing such as demodulation, the reception processing unit112provides error occurrence information to the operation control unit131. In addition, in the case of receiving a signal that cannot be detected by computing correlation with a predetermined signal pattern (i.e., a signal not including a wireless-LAN-standard preamble), the reception processing unit112provides the information to the received frame analysis unit121. The transmission processing unit113performs transmission processing of a frame provided from a transmission frame constructing unit126. More specifically, the transmission processing unit113generates a transmission signal on the basis of a frame provided from the transmission frame constructing unit126and a parameter set in accordance with an instruction from a signal control unit132. For example, the transmission processing unit113generates a baseband transmission signal by performing encoding, interleaving, and modulation on the frame provided from the transmission frame constructing unit126in accordance with coding and modulation schemes and the like instructed by the signal control unit132. In addition, the transmission processing unit113performs up-conversion on the baseband transmission signal obtained by the preceding processing.
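A minimal sketch of the correlation-based preamble detection performed by the reception processing unit112: slide a known preamble pattern over the baseband reception signal and declare a detection when the correlation peak exceeds a threshold. The normalization and threshold handling are assumptions; a miss corresponds to the energy-detection path described above.

```python
# Illustrative sketch only: correlation-peak preamble detection.
import numpy as np

def detect_preamble(rx: np.ndarray, pattern: np.ndarray, threshold: float):
    """Return the sample offset of a detected preamble, or None if no peak
    exceeds the threshold (None -> treat the signal as energy detection)."""
    corr = np.abs(np.correlate(rx, pattern, mode="valid"))
    corr /= (np.linalg.norm(pattern) + 1e-12)   # normalize by pattern energy
    peak = int(np.argmax(corr))
    return peak if corr[peak] > threshold else None
```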
(Data Processing Unit) As illustrated inFIG.9, the data processing unit120includes the received frame analysis unit121, a reception buffer122, an interface unit123, a transmission buffer124, a parameter information storage unit125, and the transmission frame constructing unit126. The received frame analysis unit121functions as a determination unit and an acquisition unit, and performs analysis of a received frame, acquisition of parameter information, or the like. More specifically, the received frame analysis unit121analyzes PLCP Header, MAC Header, and the like included in a frame received by the wireless communication unit110. Then, the received frame analysis unit determines whether or not the reception signal is a signal of the own BSS on the basis of BSS Color or a BSSID that is identification information. In the case where it is determined that the reception signal is a signal of the own BSS, the received frame analysis unit121acquires parameters, and causes the parameter information storage unit125to store the parameters as parameter information of the own BSS (referring to second parameter information). In addition, in the case where it is determined that the reception signal is not a signal of the own BSS, the received frame analysis unit121acquires parameters, and causes the parameter information storage unit125to store the parameters as parameter information of an OBSS. In addition, in the case where information indicating that a signal not including a wireless-LAN-standard preamble is received is provided from the reception processing unit112, the received frame analysis unit121acquires parameters, and causes the parameter information storage unit125to store the parameters as energy detection parameter information. In addition, in the case where a parameter information report request from the AP200is received, the received frame analysis unit121provides the information to the operation control unit131, and, in the case where aggregate parameter information from the AP200is received, provides the information to the operation control unit131. Furthermore, in the case where a frame destination includes the own device, the received frame analysis unit121acquires data or the like from the frame, and causes the reception buffer122to store the data or the like. The reception buffer122stores data included in a received frame. The interface unit123is an interface connected to another component included in the STA100. More specifically, the interface unit123performs reception of data that is desired to be transmitted from the other component, for example, an application or a user interface, provision of reception data to the application or the user interface, or the like. The transmission buffer124stores transmission data provided from the interface unit123. The parameter information storage unit125stores parameter information of the own BSS, parameter information of an OBSS, and energy detection parameter information provided from the received frame analysis unit121. Here, an example of information stored by the parameter information storage unit125is described with reference toFIG.10.FIG.10illustrates an example of parameter information stored by the parameter information storage unit125according to the present embodiment. As illustrated inFIG.10, the parameter information storage unit125creates a record for each reception signal, and stores parameter information. Then, the parameter information storage unit125adds information of a transmission source network of the reception signal. 
More specifically, the parameter information storage unit125makes it possible to distinguish whether the reception signal is a signal of the own BSS or a signal of an OBSS by containing information of the own BSS or an OBSS in "BSS/Overlap BSS column" in the record (it is written "BSS" instead of "own BSS" in parameter information of the own BSS). For example, a record10, a record11, and a record13ofFIG.10are parameter information of a signal of an OBSS, and a record12is parameter information of a signal of the own BSS. Although not illustrated, in the case where the reception signal is a signal of a network other than a wireless LAN, such as a cellular network, "BSS/Overlap BSS column" may contain "N/A" or the like, or may contain some sort of identification information. For example, a type defined by EDCA or the like may be contained as version information, a frame type format, a subtype format, an aggregation format, or a QoS parameter of a wireless LAN. The transmission frame constructing unit126generates a transmission frame. For example, the transmission frame constructing unit126generates a parameter report frame on the basis of parameter information stored in the parameter information storage unit125and control information set by the operation control unit131. The transmission frame constructing unit126generates a frame (packet) from parameter information for transmission acquired from the parameter information storage unit125, and performs processing such as adding a MAC header for medium access control (MAC) and an error detection code to the generated frame. In addition, the transmission frame constructing unit126may generate a transmission frame by using transmission data contained in the transmission buffer124. Here, an example of a parameter report frame generated by the transmission frame constructing unit126is described with reference toFIGS.11to13.FIG.11illustrates an information element20used for transmitting parameter information of the own BSS. As illustrated inFIG.11, the information element20includes Element ID, Length, Report MAC Address, BSS STA Counts, parameter information for each reception signal, and the like. The Element ID is information of a type of information element, the Length is information of a length of the information element20, the Report MAC Address is information of a report destination address, and the BSS STA Counts is information of the number of own BSS signals to be reported. In addition, parameter information for each own BSS signal can include RSSI, MCS, Type, Duration, and the like, but may be changed as appropriate. Here, the Type is information indicating a type of data, and the Type may include, for example, version information of a wireless LAN frame, information regarding whether or not it is configured by aggregation as a type of frame, or information regarding voice, video, or the like included in data. In addition, the Duration is information regarding a transmission path utilization time.FIG.11is an example, and contents of the information element20may be changed as appropriate. FIG.12illustrates an information element30used for transmitting parameter information of an OBSS. As illustrated inFIG.12, the information element30includes Element ID, Length, Report MAC Address, OBSS Counts, parameter information for each reception signal, and the like. The OBSS Counts is information of the number of OBSS signals to be reported. Other information is similar to that of the information element20inFIG.11; hence, description is omitted.FIG.12is an example, and contents of the information element30may be changed as appropriate. FIG.13illustrates an information element40used for transmitting energy detection parameter information. As illustrated inFIG.13, the information element40includes Element ID, Length, Report MAC Address, RSSI min level, Detect Counts, parameter information for each reception signal, and the like. The RSSI min level is information of the lowest RSSI. Detect Counts is information of the number of signals to be reported. In addition, parameter information for each signal can include RSSI max and Duration, but may be changed as appropriate. Here, the RSSI max is information of the highest RSSI for each signal.FIG.13is an example, and contents of the information element40may be changed as appropriate. Each information element illustrated inFIGS.11to13is contained in the Frame Body ofFIG.6and transmitted. At this time, each information element may be contained in the Frame Body alone, or a plurality of information elements may be coupled and contained in the Frame Body.
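For illustration, the following sketch serializes an OBSS parameter report in the style of the information element30described above. The element ID, field widths, and byte ordering are assumptions made for the example, not the encoding defined by the patent.

```python
# Illustrative sketch only: hypothetical byte layout for an OBSS report.
import struct

def build_obss_report(element_id: int, report_mac: bytes, records: list) -> bytes:
    """records: list of (rssi, mcs, rtype, duration) tuples, one per OBSS signal."""
    body = report_mac + struct.pack("B", len(records))       # OBSS Counts
    for rssi, mcs, rtype, duration in records:
        body += struct.pack("<bBBH", rssi, mcs, rtype, duration)
    return struct.pack("BB", element_id, len(body)) + body   # Element ID, Length

# Example: two OBSS records reported to AP 02:00:00:00:00:01.
ie = build_obss_report(0xDD, bytes.fromhex("020000000001"),
                       [(-70, 5, 1, 120), (-82, 2, 0, 40)])
```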
(Control Unit)
As illustrated inFIG.9, the control unit130includes the operation control unit131and the signal control unit132. The operation control unit131controls processing related to transmission of parameter information. More specifically, the operation control unit131controls transmission processing of parameter information of the own BSS, parameter information of an OBSS, or energy detection parameter information. For example, in the case of determining that an error has occurred at a predetermined frequency or more on the basis of error occurrence information provided from the reception processing unit112, the operation control unit131controls each component so as to transmit parameter information. In addition, in the case where predetermined time or more passes from the timing at which parameter information has been transmitted previously, the operation control unit131similarly controls each component so as to transmit parameter information. In addition, in the case where information indicating that a parameter information report request from the AP200is received is provided from the reception processing unit112, the operation control unit131similarly controls each component so as to transmit parameter information. These timings at which parameter information is transmitted may be changed freely. By the above method, the operation control unit131can control each component so as to transmit parameter information at appropriate timing. The signal control unit132controls an operation of the wireless communication unit110. More specifically, the signal control unit132controls transmission/reception processing of the wireless communication unit110. For example, the signal control unit132causes the wireless communication unit110to set control information for transmission and reception on the basis of an instruction from the operation control unit131. In addition, the signal control unit132controls vacant channel detection processing as in CSMA/CA. For example, the signal control unit132decides transmission start or transmission standby of a signal on the basis of a carrier sense result and back-off time.
2-2. Configuration of AP
The AP200may include components similar to those of the STA100. Of course, the AP200may include a component not included in the STA100as appropriate.
(Wireless Communication Unit) As illustrated inFIG.9, the wireless communication unit210includes an antenna control unit211, a reception processing unit212, and a transmission processing unit213, and functions as a reception unit and a reporting unit. The functions of the components are similar to those of the STA100; hence, description is omitted. (Data Processing Unit) As illustrated inFIG.9, the data processing unit220includes a received frame analysis unit221, a reception buffer222, an interface unit223, a transmission buffer224, a parameter information storage unit225, and a transmission frame constructing unit226. Hereinafter, of the functions of the components, description of a function similar to that of a component of the STA100is omitted. The received frame analysis unit221functions as a generation unit, and performs analysis of a received frame, and processing related to parameter information and aggregate parameter information. More specifically, in the case where a frame including parameter information is received from the STA100, the received frame analysis unit221analyzes the frame, and acquires parameter information. Then, the received frame analysis unit221generates aggregate parameter information on the basis of the parameter information, and causes the parameter information storage unit225to store the aggregate parameter information. At this time, the received frame analysis unit221causes the parameter information storage unit225to store the aggregate parameter information in association with identification information of the transmission source STA100. In addition, as described above, the received frame analysis unit221may generate aggregate parameter information by editing parameter information transmitted from the STA100. In addition, in the case where aggregate parameter information transmitted from another AP200is received, the received frame analysis unit221causes the parameter information storage unit225to store the aggregate parameter information in association with identification information of the transmission source AP200. In addition, the received frame analysis unit221may edit aggregate parameter information transmitted from another AP200, and cause the parameter information storage unit225to store the edited aggregate parameter information. The parameter information storage unit225stores aggregate parameter information provided from the received frame analysis unit221. The transmission frame constructing unit226generates a transmission frame. For example, the transmission frame constructing unit226is controlled by an operation control unit231to generate a parameter information report request frame. In addition, the transmission frame constructing unit226is controlled by the operation control unit231to generate a frame including aggregate parameter information. (Control Unit) As illustrated inFIG.9, the control unit230includes the operation control unit231and a signal control unit232. Hereinafter, of the functions of the components, description of a function similar to that of a component of the STA100is omitted. The operation control unit231controls processing related to parameter information, aggregate parameter information, and interference control. For example, the operation control unit231controls processing related to a parameter information report request. The operation control unit231controls each component so as to generate and transmit a frame for a parameter information report request. 
Here, a parameter information report request may be made at any timing. For example, the operation control unit231may make a parameter information report request after predetermined time elapses from the timing at which a parameter information report request has been made previously. In addition, the operation control unit231may make a parameter information report request in the case of determining that the error occurrence frequency is equal to or greater than a predetermined threshold, on the basis of error occurrence information provided from the reception processing unit212. In addition, the operation control unit231controls processing of reporting aggregate parameter information to another AP200. The operation control unit231controls each component so as to generate a frame including aggregate parameter information stored by the parameter information storage unit225, and report the frame to another AP200. Here, aggregate parameter information may be reported at any timing. For example, the operation control unit231may report aggregate parameter information after predetermined time elapses from the timing at which aggregate parameter information has been reported previously. In addition, the operation control unit231may report aggregate parameter information in the case of determining that the error occurrence frequency is equal to or greater than a predetermined threshold, on the basis of error occurrence information provided from the reception processing unit212. In addition, the operation control unit231performs processing related to interference control. More specifically, the operation control unit231performs interference control on the basis of aggregate parameter information generated by using parameter information from the STA100or aggregate parameter information received from another AP200. For example, in the case of determining that the communication environment is poor on the basis of aggregate parameter information, the operation control unit231changes the modulation scheme to a modulation scheme with low transmission efficiency (e.g., BPSK) that enables communication more reliably, or changes transmission power to higher transmission power allowed in the standard. In addition, the operation control unit231may change the setting in a manner that a frequency band different from a frequency band used in an OBSS is used. In addition, the operation control unit231may perform interference control on the basis of information regarding priority of data included in aggregate parameter information. More specifically, in the case where it can be confirmed that communication of data with high priority, such as voice, is performed in an OBSS on the basis of the Type included in aggregate parameter information, the operation control unit231may change parameters in a manner that communication of the OBSS is performed preferentially. In addition, in the case where it can be confirmed that communication of data with high priority is not performed in an OBSS on the basis of the Type included in aggregate parameter information, the operation control unit231may change parameters in a manner that communication of the own BSS is performed preferentially. Alternatively, in this case, the operation control unit231may perform determination again after parameters of the OBSS are changed, without changing parameters of the own BSS.
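A minimal sketch of the interference-control decision just described; the RSSI threshold, the concrete parameter values, and the priority test are illustrative assumptions, not values prescribed by the disclosure.

```python
# Illustrative sketch only: AP-side parameter choice from aggregate info.
def choose_parameters(aggregate_info, own_priority: int):
    """aggregate_info: list of OBSS records with 'rssi' and 'priority' keys."""
    env_poor = any(rec["rssi"] > -70 for rec in aggregate_info)        # strong OBSS signal
    obss_has_priority = any(rec["priority"] > own_priority for rec in aggregate_info)
    if obss_has_priority:
        # let OBSS traffic (e.g., voice) go first: back off own transmissions
        return {"modulation": "BPSK", "tx_power_dbm": 10, "defer": True}
    if env_poor:
        # poor environment: more robust modulation, higher allowed power
        return {"modulation": "BPSK", "tx_power_dbm": 20, "defer": False}
    return {"modulation": "64QAM", "tx_power_dbm": 15, "defer": False}
```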
3. OPERATION OF DEVICE
The configurations of the STA100and the AP200according to the present embodiment have been described above. Now, parameter information acquisition operation will be described with reference toFIGS.14A and14B.FIGS.14A and14Bare flowcharts illustrating the operation of the STA100according to the present embodiment acquiring parameter information. Here, also in the case where the AP200acquires parameter information, the operation shown inFIGS.14A and14Bmay be performed as in the STA100. In step S1300, the wireless communication unit110detects a signal having an RSSI greater than a predetermined threshold. In the case where the reception processing unit112detects a preamble of a wireless LAN by computing correlation between a predetermined signal pattern and a reception signal (Yes in step S1304), in step S1308, the received frame analysis unit121extracts information of the PLCP Header. Then, in step S1312, the received frame analysis unit121acquires a parameter related to the MCS (MCS Index) included in the PLCP Header. Then, in step S1316, the received frame analysis unit121analyzes a header configuration or version information included in a header. In the case where the header of the signal conforms to a standard supported by the own device (a standard in which Tx Power, BSS Color, and the like are included in a header) (Yes in step S1316), in step S1320, the received frame analysis unit121acquires a parameter related to transmission power (Tx Power) from the PLCP Header. In step S1324, the received frame analysis unit121acquires a parameter related to the BSS Color (BSS Color) from the PLCP Header. In step S1328, in the case where the received frame analysis unit121determines that the acquired BSS Color information is BSS Color information of the own BSS (Yes in step S1328), the received frame analysis unit121causes the parameter information storage unit125to store the acquired parameter information as parameter information of the own BSS. In step S1328, in the case where the received frame analysis unit121determines that the acquired BSS Color information is not BSS Color information of the own BSS (No in step S1328), the received frame analysis unit121causes the parameter information storage unit125to store the acquired parameter information as parameter information of an OBSS. In the case where the header of the signal does not conform to a standard supported by the own device in step S1316(No in step S1316), in step S1332, the received frame analysis unit121acquires address information (Address1to Address4) of the MAC Header. In the case where the address information of the MAC Header includes MAC address information of the AP200as a BSSID of the own BSS (Yes in step S1336), the received frame analysis unit121causes the parameter information storage unit125to store the acquired parameter information as parameter information of the own BSS. In the case where the address information of the MAC Header does not include MAC address information of the AP200as a BSSID of the own BSS (No in step S1336), the received frame analysis unit121causes the parameter information storage unit125to store the acquired parameter information as parameter information of an OBSS. In the case where the reception processing unit112cannot detect a preamble of a wireless LAN in step S1304(No in step S1304), in step S1348, the received frame analysis unit121causes the parameter information storage unit125to store the acquired parameter information as energy detection parameter information. In step S1352, the received frame analysis unit121acquires information regarding an RSSI from the reception processing unit112, and causes the parameter information storage unit125to store the information. In step S1356, the received frame analysis unit121acquires information regarding a transmission path utilization time from the reception processing unit112, and causes the parameter information storage unit125to store the information. In the case where an FCS error does not occur in a series of frames (Yes in step S1360), processing ends. In the case where an FCS error occurs in a series of frames in step S1360(No in step S1360), the reception processing unit112provides error occurrence information to the operation control unit131, the operation control unit131causes a storage unit (not illustrated) to store the information, and processing ends. The parameter information acquisition operation has been described above.
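The branching ofFIGS.14A and14Bcan be compressed into a short classification sketch; the attribute and helper names are hypothetical.

```python
# Illustrative sketch only: classify a detected signal per steps S1304-S1348.
from dataclasses import dataclass

@dataclass
class DetectedSignal:
    has_wlan_preamble: bool
    header_has_bss_color: bool = False
    bss_color: int = -1
    mac_addresses: tuple = ()

def classify_signal(sig: DetectedSignal, own_bss_color: int, own_bssid: bytes) -> str:
    """Decide how the parameter information of a reception signal is stored."""
    if not sig.has_wlan_preamble:                 # No in step S1304
        return "energy_detection"
    if sig.header_has_bss_color:                  # header carries BSS Color (S1316 Yes)
        return "own_bss" if sig.bss_color == own_bss_color else "obss"
    # header lacks BSS Color: fall back to the MAC header BSSID check (S1332-S1336)
    return "own_bss" if own_bssid in sig.mac_addresses else "obss"
```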
In step S1352, the received frame analysis unit121acquires information regarding an RSSI from the reception processing unit112, and causes the parameter information storage unit125to store the information. In step S1356, the received frame analysis unit121acquires information regarding a transmission path utilization time from the reception processing unit112, and causes the parameter information storage unit125to store the information. In the case where an FCS error does not occur in a series of frames (Yes in step S1360), processing ends. In the case where an FCS error occurs in a series of frames in step S1360(No in step S1360), the reception processing unit112provides error occurrence information to the operation control unit131, the operation control unit131causes a storage unit (not illustrated) to store the information, and processing ends. The parameter information acquisition operation has been described above. Now, parameter information reporting operation will be described with reference toFIGS.15A and15B.FIGS.15A and15Bare flowcharts illustrating the operation of the STA100according to the present embodiment reporting parameter information to the AP200. In step S1400, the operation control unit131acquires error occurrence information from the reception processing unit112. Then, in the case where an error has occurred at a predetermined frequency or more in step S1404(Yes in step S1404), parameter information reporting operation in step S1416and subsequent steps is performed. In addition, even in the case where an error has not occurred at a predetermined frequency or more in step S1404(No in step S1404), in the case where predetermined time or more passes from timing when parameter information has been reported previously (Yes in step S1408), processing of reporting parameter information is performed. Even in the case where predetermined time or more does not pass from timing when parameter information has been reported previously (No in step S1408), in the case where a parameter information report request from the AP200is received (Yes in step S1412), processing of reporting parameter information is performed. In the case where a parameter information report request from the AP200is not received in step S1412(No in step S1412), processing moves to step S1400. As described above, these triggers for parameter information reporting operation may be changed as appropriate. In addition, processing of step S1400may be omitted. In step S1416, in the case where the parameter information storage unit125stores unreported parameter information of the own BSS (Yes in step S1416), in step S1420, the transmission frame constructing unit126acquires the unreported parameter information of the own BSS from the parameter information storage unit125. In step S1424, the transmission frame constructing unit126constructs an own BSS parameter report frame. In the case where the parameter information storage unit125does not store unreported parameter information of the own BSS in step S1416(No in step S1416), processing moves to step S1428. In step S1428, in the case where the parameter information storage unit125stores unreported parameter information of an OBSS (Yes in step S1428), in step S1432, the transmission frame constructing unit126acquires the unreported parameter information of the OBSS from the parameter information storage unit125. In step S1436, the transmission frame constructing unit126constructs a BSS parameter report frame. 
In the case where the parameter information storage unit125does not store unreported parameter information of an OBSS in step S1428(No in step S1428), processing moves to step S1440. In step S1440, in the case where the parameter information storage unit125stores unreported energy detection parameter information (Yes in step S1440), in step S1444, the transmission frame constructing unit126acquires the unreported energy detection parameter information from the parameter information storage unit125. In step S1448, the transmission frame constructing unit126constructs an energy detection parameter report frame. In the case where the parameter information storage unit125does not store unreported energy detection parameter information in step S1440(No in step S1440), processing moves to step S1452. In step S1452, in the case where the parameter information storage unit125stores unreported parameter information (Yes in step S1452), in step S1456, the control unit130controls the wireless communication unit110to transmit a generated parameter report frame. In step S1460, the control unit130records a transmission time of the parameter report frame, and processing ends. In the case where the parameter information storage unit125does not store unreported parameter information in step S1452(No in step S1452), processing ends. 4. MODIFICATIONS The parameter information reporting operation has been described above. Now, modifications of the present disclosure will be described with reference toFIGS.16to18. 4-1. First Modification First, a first modification of the present disclosure is described with reference toFIGS.16and17.FIG.16illustrates a configuration of a wireless LAN system according to the first modification. The first modification is a case where it is difficult for the APs200to directly communicate with each other. As illustrated inFIG.16, the STA100bbelonging to the BSS10acan communicate with the STA100cbelonging to the BSS10bthat is an OBSS, but the AP200acannot communicate with the AP200b. In the first modification, the AP200exchanges aggregate parameter information with another AP200via the STA100. That is, the STA100according to the first modification controls processing related to transfer of aggregate parameter information. More specifically, the received frame analysis unit121of the STA100analyzes a received frame, and in the case of determining that aggregate parameter information from the AP200is received, provides the information to the operation control unit131. After that, the operation control unit131controls each component so as to transfer a frame including the aggregate parameter information. Now, an example of aggregate parameter information exchange operation according to the first modification will be described with reference toFIG.17.FIG.17is a sequence diagram illustrating the operation of the APs200exchanging aggregate parameter information in the first modification. In step S1500, the AP200atransmits aggregate parameter information, and the STA100breceives the aggregate parameter information. In step S1504, the STA100btransfers aggregate parameter information, and the STA100creceives the aggregate parameter information. In step S1508, the STA100ctransfers aggregate parameter information, and the AP200breceives the aggregate parameter information. In step S1512, the AP200btransmits aggregate parameter information, and the STA100creceives the aggregate parameter information. 
In step S1516, the STA100ctransfers aggregate parameter information, and the STA100breceives the aggregate parameter information. In step S1520, the STA100btransfers aggregate parameter information, and the AP200areceives the aggregate parameter information. As described above, according to the first modification, even in the case where the APs200cannot communicate with each other directly, the AP200can exchange aggregate parameter information with a different AP200via the STA100. For example, even in a situation in which different APs200cannot always communicate with each other normally, such as the case where a place of the AP200may be changed, the AP200can exchange aggregate parameter information with a different AP200. 4-2. Second Modification Now, a second modification of the present disclosure will be described with reference toFIG.18.FIG.18illustrates a configuration of a wireless LAN system according to the second modification. The second modification is a case where a controller and a plurality of APs200are connected via a wired network. As illustrated inFIG.18, the AP200a, the AP200b, and a controller are connected via a wired network. For example, the AP200a, the AP200b, and the controller may be connected via an Ethernet cable. In the second modification, the AP200transmits aggregate parameter information to the controller or exchanges aggregate parameter information with another AP200via the wired network. In the second modification, interference control using interference information may be performed by the controller, or may be performed by each AP200as appropriate. As shown in the second modification, the present disclosure may be applied to wireless LAN systems of various network configurations. 5. APPLICATION EXAMPLES The technology according to the present disclosure can be applied to various products. For example, the STA100may be realized as mobile terminals such as smartphones, tablet personal computers (PCs), notebook PCs, portable game terminals, or digital cameras, fixed-type terminals such as television receivers, printers, digital scanners, or network storages, or car-mounted terminals such as car navigation devices. In addition, the STA100may be realized as terminals that perform machine to machine (M2M) communication (also referred to as machine type communication (MTC) terminals) such as smart meters, vending machines, remotely controlled monitoring devices, or point of sale (POS) terminals. Furthermore, the STA100may be wireless communication modules mounted in such terminals (for example, integrated circuit modules configured by one die). On the other hand, for example, the AP200may be realized as a wireless LAN access point (also referred to as a wireless base station) which has a router function or does not have a router function. The AP200may be realized as a mobile wireless LAN router. The AP200may also be a wireless communication module (for example, an integrated circuit module configured with one die) mounted on such devices. 5-1. First Application Example FIG.19is a block diagram illustrating an example of a schematic configuration of a smartphone900to which the technology of the present disclosure can be applied. The smartphone900includes a processor901, a memory902, a storage903, an external connection interface904, a camera906, a sensor907, a microphone908, an input device909, a display device910, a speaker911, a wireless communication interface913, an antenna switch914, an antenna915, a bus917, a battery918, and an auxiliary controller919. 
The processor901may be, for example, a central processing unit (CPU) or a system on chip (SoC), and controls functions of an application layer and other layers of the smartphone900. The memory902includes random access memory (RAM) and read only memory (ROM), and stores data and programs executed by the processor901. The storage903can include a storage medium such as a semiconductor memory or a hard disk. The external connection interface904is an interface for connecting an externally attachable device such as a memory card or a universal serial bus (USB) device to the smartphone900. The camera906has an image sensor, for example, a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS), to generate captured images. The sensor907can include a sensor group including, for example, a positioning sensor, a gyro sensor, a geomagnetic sensor, an acceleration sensor, and the like. The microphone908converts sounds input to the smartphone900into audio signals. The input device909includes, for example, a touch sensor that detects touches on a screen of the display device910, a key pad, a keyboard, buttons, switches, and the like, to receive operation or information input from a user. The display device910has a screen such as a liquid crystal display (LCD), or an organic light emitting diode (OLED) display to display output images of the smartphone900. The speaker911converts audio signals output from the smartphone900into sounds. The wireless communication interface913supports one or more wireless LAN standards of IEEE 802.11a, 11b, 11g, 11n, 11ac, and 11ad, to establish wireless communication. The wireless communication interface913can communicate with another device via a wireless LAN access point in an infrastructure mode. In addition, the wireless communication interface913can directly communicate with another device in a direct communication mode such as an ad hoc mode or Wi-Fi Direct (registered trademark). Note that, Wi-Fi Direct is different from the ad hoc mode. One of two terminals operates as an access point, and communication is performed directly between the terminals. The wireless communication interface913can typically include a baseband processor, a radio frequency (RF) circuit, a power amplifier, and the like. The wireless communication interface913may be a one-chip module on which a memory that stores a communication control program, a processor that executes the program, and a relevant circuit are integrated. The wireless communication interface913may support another kind of wireless communication scheme such as a cellular communication scheme, a near-field communication scheme, or a proximity wireless communication scheme in addition to the wireless LAN scheme. The antenna switch914switches a connection destination of the antenna915among a plurality of circuits (for example, circuits for different wireless communication schemes) included in the wireless communication interface913. The antenna915has a single or a plurality of antenna elements (for example, a plurality of antenna elements constituting a MIMO antenna), and is used for transmission and reception of wireless signals through the wireless communication interface913. Note that the smartphone900may include a plurality of antennas (for example, antennas for a wireless LAN or antennas for a proximity wireless communication scheme, or the like), without being limited to the example ofFIG.19. In this case, the antenna switch914may be omitted from the configuration of the smartphone900. 
The bus917connects the processor901, the memory902, the storage903, the external connection interface904, the camera906, the sensor907, the microphone908, the input device909, the display device910, the speaker911, the wireless communication interface913, and the auxiliary controller919with each other. The battery918supplies electric power to each of the blocks of the smartphone900illustrated inFIG.19via power supply lines partially indicated by dashed lines in the drawing. The auxiliary controller919causes, for example, necessary minimum functions of the smartphone900to be operated in a sleep mode. In the smartphone900illustrated inFIG.19, the wireless communication unit110, the data processing unit120, and the control unit130described with reference toFIG.9may be mounted on the wireless communication interface913. In addition, at least some of these functions may be mounted on the processor901or the auxiliary controller919. Note that the smartphone900may operate as a wireless access point (software AP) as the processor901executes the function of an access point at an application level. In addition, the wireless communication interface913may have the function of a wireless access point. 5-2. Second Application Example FIG.20is a block diagram illustrating an example of a schematic configuration of a car navigation device920to which the technology of the present disclosure can be applied. The car navigation device920includes a processor921, a memory922, a GPS module924, a sensor925, a data interface926, a content player927, a storage medium interface928, an input device929, a display device930, a speaker931, a wireless communication interface933, an antenna switch934, an antenna935, and a battery938. The processor921may be, for example, a CPU or an SoC controlling a navigation function and other functions of the car navigation device920. The memory922includes RAM and ROM storing data and programs executed by the processor921. The GPS module924measures a position of the car navigation device920(for example, latitude, longitude, and altitude) using GPS signals received from a GPS satellite. The sensor925can include a sensor group including, for example, a gyro sensor, a geomagnetic sensor, a barometric sensor, and the like. The data interface926is connected with an in-vehicle network941via, for example, a terminal (not illustrated) to acquire data generated on the vehicle side such as car speed data. The content player927reproduces content stored in a storage medium (for example, a CD or a DVD) inserted into the storage medium interface928. The input device929includes, for example, a touch sensor that detects touches on a screen of the display device930, buttons, switches, and the like to receive operation or information input from a user. The display device930has a screen such as an LCD or an OLED display to display images of the navigation function or reproduced content. The speaker931outputs sounds of the navigation function or reproduced content. The wireless communication interface933supports one or more wireless LAN standards of IEEE 802.11a, 11b, 11g, 11n, 11ac, 11ad, and the like to execute wireless communication. The wireless communication interface933can communicate with another device via a wireless LAN access point in the infrastructure mode. In addition, the wireless communication interface933can directly communicate with another device in a direct communication mode such as an ad hoc mode or Wi-Fi Direct.
The wireless communication interface933can typically have a baseband processor, an RF circuit, a power amplifier, and the like. The wireless communication interface933may be a one-chip module on which a memory that stores a communication control program, a processor that executes the program, and a relevant circuit are integrated. The wireless communication interface933may support another kind of wireless communication scheme such as a near-field communication scheme, a proximity wireless communication scheme, or the cellular communication scheme in addition to the wireless LAN scheme. The antenna switch934switches a connection destination of the antenna935among a plurality of circuits included in the wireless communication interface933. The antenna935has a single or a plurality of antenna elements and is used for transmission and reception of wireless signals from and to the wireless communication interface933. Note that the car navigation device920may include a plurality of antennas, without being limited to the example ofFIG.20. In this case, the antenna switch934may be omitted from the configuration of the car navigation device920. The battery938supplies electric power to each of the blocks of the car navigation device920illustrated inFIG.20via power supply lines partially indicated by dashed lines in the drawing. In addition, the battery938accumulates electric power supplied from the vehicle side. In the car navigation device920illustrated inFIG.20, the wireless communication unit110, the data processing unit120, and the control unit130described with reference toFIG.9may be mounted on the wireless communication interface933. In addition, at least some of these functions may be mounted on the processor921. In addition, the wireless communication interface933may operate as the AP200described above, and provide wireless communication for a terminal of a user on the vehicle. Further, the technology of the present disclosure may be realized as an in-vehicle system (or a vehicle)940including one or more blocks of the above-described car navigation device920, the in-vehicle network941, and a vehicle-side module942. The vehicle-side module942generates vehicle-side data such as a vehicle speed, the number of engine rotations, or failure information and outputs the generated data to the in-vehicle network941. 5-3. Third Application Example FIG.21is a block diagram illustrating an example of a schematic configuration of a wireless access point950to which the technology of the present disclosure can be applied. The wireless access point950includes a controller951, a memory952, an input device954, a display device955, a network interface957, a wireless communication interface963, an antenna switch964, and an antenna965. The controller951may be, for example, a CPU or a digital signal processor (DSP) and operates various functions (for example, access limitation, routing, encryption, a fire wall, and log management) of the Internet Protocol (IP) layer and higher layers of the wireless access point950. The memory952includes RAM and ROM and stores a program executed by the controller951and various kinds of control data (for example, a terminal list, a routing table, an encryption key, security settings, and a log). The input device954includes, for example, a button or a switch, and receives operation performed by a user. The display device955includes an LED lamp and displays an operation status of the wireless access point950. 
The network interface957is a wired communication interface that connects the wireless access point950with a wired communication network958. The network interface957may include a plurality of connection terminals. The wired communication network958may be a LAN such as Ethernet (registered trademark) or may be a wide area network (WAN). The wireless communication interface963supports one or more wireless LAN standards of IEEE 802.11a, 11b, 11g, 11n, 11ac, 11ad, and the like to supply wireless connection to a nearby terminal as an access point. The wireless communication interface963can typically include a baseband processor, an RF circuit, and a power amplifier. The wireless communication interface963may be a one-chip module in which memory storing a communication control program, a processor executing the program, and relevant circuits are integrated. The antenna switch964switches a connection destination of the antenna965among a plurality of circuits included in the wireless communication interface963. The antenna965includes one antenna element or a plurality of antenna elements and is used to transmit and receive a wireless signal through the wireless communication interface963. In the wireless access point950illustrated inFIG.21, the wireless communication unit210, the data processing unit220, and the control unit230described with reference toFIG.9may be mounted on the wireless communication interface963. In addition, at least some of these functions may be mounted on the controller951. 6. SUPPLEMENTAL REMARKS The application examples of the present disclosure have been described above. Now, supplemental remarks about parameter information collection processing by the STA100will be described. As described above, the STA100collects parameter information of a BSS or an OBSS, but does not need to always perform the collection processing. For example, the STA100may refrain from collecting parameter information in the case where error occurrence frequency in transmission/reception processing is equal to or less than a predetermined threshold, and collect parameter information in the case where error occurrence frequency is greater than the predetermined threshold. Thus, the STA100can reduce an amount of power consumed by trying to collect parameter information even in the case where interference has not occurred. In addition, the STA100may refrain from collecting parameter information in the case where the own device is not connected to a power supply and is operated by a mobile battery, and collect parameter information in the case where the own device is connected to a power supply. Thus, the STA100can prevent the mobile battery from being exhausted by collecting parameter information. In addition, in the case where the STA100is moving, an interference situation with an OBSS changes frequently; hence, there is a possibility that appropriate parameter information is not acquired. Consequently, the STA100may use a global positioning system (GPS) sensor or the like, refrain from collecting parameter information in the case of determining that the own device is moving by being carried by a user, and collect parameter information in the case of determining that the own device is not moving. Thus, the STA100can collect appropriate parameter information, and can reduce an amount of power consumed by acquiring inappropriate parameter information. 7. 
CONCLUSION As described above, the AP200according to an embodiment of the present disclosure can grasp interference information without using a management device. Then, the AP200can exchange the interference information with another AP200. Furthermore, the AP200can appropriately perform interference control on the basis of the interference information. The preferred embodiment(s) of the present disclosure has/have been described above with reference to the accompanying drawings, whilst the present disclosure is not limited to the above examples. A person skilled in the art may find various alterations and modifications within the scope of the appended claims, and it should be understood that they will naturally come under the technical scope of the present disclosure. For example, steps in the operation of the STA100according to the present embodiment need not be always processed in chronological order in accordance with the order described as a flow chart. For example, steps inFIGS.3to5,FIGS.14A to15B, andFIG.18may be processed in an order different from the order described in the drawing, or may be concurrently processed, as appropriate. For example, steps S1000to S1012inFIG.3may be processed in a different order, or may be concurrently processed. In addition, part of the configuration of the STA100may be provided outside the STA100as appropriate. Similarly, part of the configuration of the AP200may be provided outside the AP200as appropriate. In addition, some functions of the STA100may be implemented by the control unit130. That is, the control unit130may implement some functions of the wireless communication unit110or the data processing unit120. Similarly, some functions of the AP200may be implemented by the control unit230. That is, the control unit230may implement some functions of the wireless communication unit210or the data processing unit220. Further, the effects described in this specification are merely illustrative or exemplified effects, and are not limitative. That is, with or in the place of the above effects, the technology according to the present disclosure may achieve other effects that are clear to those skilled in the art from the description of this specification. Additionally, the present technology may also be configured as below. (1) A station device including: a reception unit configured to receive a signal transmitted from another network other than a BSS to which the own device belongs; an acquisition unit configured to acquire parameter information regarding the signal; and a reporting unit configured to report the parameter information to an access point device that performs interference control in the BSS. (2) The station device according to (1), in which the other network is an OBSS that overlaps with the BSS. (3) The station device according to (2), in which the parameter information includes modulation scheme information, transmission power information, BSS identification information, RSSI information, version information, type information, or transmission path utilization time information. (4) The station device according to (1), in which the other network is a cellular network. (5) The station device according to (4), in which the parameter information includes RSSI information or transmission path utilization time information. 
(6) The station device according to any one of (1) to (5), in which in a case where the reception unit receives a signal transmitted from the BSS, the acquisition unit acquires second parameter information regarding the signal transmitted from the BSS, and the reporting unit reports the second parameter information to the access point device. (7) The station device according to (6), in which the reception unit receives aggregate parameter information generated by the access point device aggregating the parameter information or the second parameter information, and the reporting unit reports the aggregate parameter information to an access point device that belongs to another BSS other than the BSS and performs interference control. (8) The station device according to any one of (1) to (3), further including a determination unit configured to determine whether or not the signal is a signal transmitted from the BSS on the basis of BSS identification information included in the parameter information, in which the reporting unit reports the parameter information as interference information to the access point device on the basis of the determination. (9) The station device according to any one of (1) to (8), in which the acquisition unit acquires the parameter information in a case where the station device is connected to a power supply or a case where the station device is not moving. (10) A wireless control method executed by a computer, including: receiving a signal transmitted from another network other than a BSS to which an own device belongs; acquiring parameter information regarding the signal; and reporting the parameter information to an access point device that performs interference control in the BSS. (11) A program causing a computer to implement: receiving a signal transmitted from another network other than a BSS to which an own device belongs; acquiring parameter information regarding the signal; and reporting the parameter information to an access point device that performs interference control in the BSS. (12) An access point device including: a reception unit configured to receive, from a station device, parameter information regarding a signal transmitted from another network other than a BSS to which the own device belongs; and a control unit configured to perform interference control on the basis of the parameter information. (13) The access point device according to (12), in which the other network is an OBSS that overlaps with the BSS. (14) The access point device according to (13), in which the parameter information includes modulation scheme information, transmission power information, BSS identification information, RSSI information, version information, type information, or transmission path utilization time information. (15) The access point device according to (12), in which the other network is a cellular network. (16) The access point device according to (15), in which the parameter information includes RSSI information or transmission path utilization time information. (17) The access point device according to any one of (12) to (16), in which the reception unit receives, from the station device, second parameter information regarding a signal transmitted from the BSS, and the control unit performs interference control on the basis of the parameter information and the second parameter information. 
(18) The access point device according to (17), further including: a generation unit configured to generate aggregate parameter information obtained by aggregating the parameter information or the second parameter information; and a reporting unit configured to report the aggregate parameter information to an access point device that belongs to another BSS other than the BSS and performs interference control. (19) A communication control method executed by a computer, including: receiving, from a station device, parameter information regarding a signal transmitted from another network other than a BSS to which an own device belongs; and performing interference control on the basis of the parameter information. (20) A program causing a computer to implement: receiving, from a station device, parameter information regarding a signal transmitted from another network other than a BSS to which an own device belongs; and performing interference control on the basis of the parameter information.
REFERENCE SIGNS LIST
10 BSS
20 information element used for transmitting parameter information of BSS
30 information element used for transmitting parameter information of OBSS
40 information element used for transmitting energy detection parameter information
100 STA
110 wireless communication unit
120 data processing unit
130 control unit
200 AP
210 wireless communication unit
220 data processing unit
230 control unit
11943651
DETAILED DESCRIPTION By way of example, embodiments of the invention will now be described in the context of a point-to-point microwave broadband link operating as a time division duplex system at carrier frequencies typically between 3 and 6 GHz. However, it will be understood that this is by way of example only and that other embodiments may involve other wireless systems and frequencies, and embodiments are not restricted to a specific frequency band of operation or a specific standard, and may involve operation in licensed or unlicensed bands. Typical applications include backhaul systems and microwave Ethernet bridges, for providing connectivity to small cell and macro cell infrastructure, for leased line replacement, and for providing rapidly deployed video, voice and data services for disaster recovery. FIGS.1aand1bshow an embodiment of the invention.FIG.1ashows transmission of data from a first wireless station1to a second wireless station2in the absence of a fault condition. As shown inFIG.1a, in the absence of detection of a failure of a first radio link, data is transmitted from a first transmitter in a primary master radio3at the first wireless station1to a first receiver in a primary slave radio5at the second wireless station2, via a first radio link using a first subset of first radio resource blocks, in this case a first timeslot9, of duration t1. In addition, second data is transmitted from a second transmitter in a secondary master radio4at the first wireless station1to a second receiver in a secondary slave radio6at the second wireless station2via a second radio link using a second subset of the first radio resource blocks, in this case a second timeslot10, of duration t2. The first radio link and the second radio link are monitored for a failure of the first radio link or the second radio link. The monitoring may be carried out by a control processor, for example at the first wireless station, and may be on the basis of monitoring of synchronisation of the receiver, fed back as signalling data from the second wireless station. A failure of synchronisation may result in the detection of a failure of a link. Alternatively or additionally, detection of a failure of a link may be on the basis of detection of a packet error rate or bit error rate being greater than an acceptable threshold. It may be required that an error or failure condition persists for at least a predetermined period of time, in order for a failure of a link to be detected. Other methods of detection of a failure of a link may be used, such as, for example, monitoring of received signal power level, and detecting a failure if the received signal power level falls below a threshold level for a predetermined period of time. As shown inFIG.1b, if a failure of the first radio link is detected, a combination of the first and second subsets of the first radio resource blocks is used for the second radio link, in this case by extending the duration of the timeslot used by the second radio link to a longer timeslot13with duration t3, to occupy the time allocated to both the first and second links. This allows data capacity to be maintained in the fault condition, and because both the first and second links are already established and being monitored, this provides assurance that the system will perform correctly in the event of failure of one link, and avoid the need for a start-up time to be allowed for a link to be established. 
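As a non-authoritative sketch, the monitoring criteria mentioned above (loss of receiver synchronisation, a packet or bit error rate above an acceptable threshold that persists for a predetermined period, or a received signal power level below a threshold) might be combined as follows; the threshold values, names, and the single persistence rule are assumptions of the sketch.

```python
# Minimal sketch of link-failure detection as described above. The
# thresholds and the persistence rule are illustrative assumptions.

import time

class LinkMonitor:
    def __init__(self, per_threshold=0.1, rssi_threshold_dbm=-85.0,
                 persistence_s=1.0):
        self.per_threshold = per_threshold
        self.rssi_threshold_dbm = rssi_threshold_dbm
        self.persistence_s = persistence_s
        self._fault_since = None  # when the error condition first appeared

    def update(self, synchronised, packet_error_rate, rssi_dbm):
        """Return True once a failure of the link is declared."""
        error = (not synchronised
                 or packet_error_rate > self.per_threshold
                 or rssi_dbm < self.rssi_threshold_dbm)
        if not error:
            self._fault_since = None
            return False
        if self._fault_since is None:
            self._fault_since = time.monotonic()
        # Declare a failure only if the error condition persists.
        return time.monotonic() - self._fault_since >= self.persistence_s
```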
The assurance that the system will perform correctly in the event of failure of one link, and the avoidance of the need for a start-up time, is gained at the potential cost that the selection of links is between only the two alternative established links, being the first link from the first transmitter to the first receiver, and the second link from the second transmitter to the second receiver. The potential cost of this approach is that it does not allow selection of cross-coupled links between the first transmitter and the second receiver, or between the second transmitter and the first receiver. However, the cross-coupled links could be tried as a fall-back if failure of both the first and second links is detected, but without the advantages of assured performance and lack of set-up time. So, the present method typically provides advantages if a single link fails, but at the cost of potential disadvantages in the event of both links failing. As shown inFIG.1a, the first subset of the first radio resource blocks comprises a first timeslot9occupying a first frequency channel f1and the second subset of the first radio resource blocks comprises a second timeslot10occupying the same first frequency channel f1. This allows an increase in capacity of the working radio link when a fault is detected in the other radio link by increasing the length of the timeslot used by the working radio link within the time allocated to the two radio links. In the embodiment shown inFIGS.1aand1b, the first data and second data comprise payload data. In the absence of detection of a failure of the first radio link, the input data to the first wireless station1, which is a payload data stream in this example, is de-multiplexed by the data mux/demux function7into a first data stream for transmission via the first radio link and into a second data stream for transmission via the second radio link. On reception at the second wireless station2, data received at the second wireless station via the first radio link is aggregated in the data mux/demux function8with data received via the second radio link. As shown inFIG.1b, dependent on detection of a failure of the first radio link, the payload data stream is transmitted via the second radio link only, and the data is not multiplexed or aggregated. This allows payload data to be used to maintain the synchronisation of both radio links in the absence of a fault condition, so that there is no need for a delay to allow for the start-up of a second link in a fault condition, since both links are already operating. As shown inFIG.1a, the first subset of the first radio resource blocks, in this case the first timeslot9, has substantially the same capacity as the second subset of the first radio resource blocks, in this case the second timeslot10. FIGS.2aand2bshow an embodiment in which the first data comprises payload data and the second data comprises control data and does not comprise payload data. The second data serves to keep the second radio link in synchronisation. As shown inFIG.2a, in the absence of detection of a failure of the first radio link, a payload data stream is switched for transmission via the first radio link by data switch14. On reception at the second wireless station2, a data switch15selects data received via the first radio link.
As shown inFIG.2b, dependent on detection of a failure of the first radio link, the payload data stream is switched by data switch14for transmission via the second radio link in an extended timeslot21. On reception at the second wireless station2, a data switch15selects data received via the second radio link. This allows a simple implementation by avoiding the need for data multiplexing and aggregation. As shown inFIG.2a, the first subset of radio resource blocks, in this case the first timeslot17of duration t4, has greater capacity than the second subset of radio resource blocks, in this case the second timeslot18of duration t5, which is shorter than t4. This allows the data capacity in the absence of a fault condition to be increased while using a data switch as opposed to a multiplexer/demultiplexer, so that only one of the radio links is used for transmission of payload data. The first subset of radio resource blocks may have greater than nine times the capacity of the second subset of radio resource blocks. The asymmetry increases the payload data capacity in the absence of a fault condition. FIGS.3aand3bshow a case where payload data is demultiplexed at the first wireless station in the absence of a fault condition between the first and second radio links, in a case where the first timeslot22for the first radio link is longer than the second timeslot23for the second radio link. This may simplify multiplexing and aggregation for some types of data. On detection of failure of the first radio link, the second timeslot is extended to a longer timeslot26. FIGS.4aand4bshow an embodiment of the invention in which the first subset of the first radio resource blocks28comprises a first frequency channel of bandwidth B1in a first timeslot and the second subset of the first radio resource blocks29comprises a second frequency channel in the same first timeslot, also in this example of bandwidth B1, although the bandwidths could differ from each other. As shown inFIG.4b, when a fault is detected in the first radio link, the frequency bandwidth used by the second radio link is increased within the bandwidth allocated to the two radio links, in this case to2B1. As shown inFIGS.4aand4b, in an embodiment of the invention, the first radio resource blocks occupy a contiguous allocation in frequency within a timeslot27,30. This allows efficient use of reallocated radio resource in a fault condition by expansion of the frequency bandwidth of the second radio link. The expansion of bandwidth may be straightforward to achieve in a radio which has selectable bandwidths. The expansion need not be limited to an expansion by a factor of two; any expansion factor may be used provided that the expanded bandwidth is within the allocation for the first and second links. FIGS.1a,1b,2a,2b,3a, and3bshow that the first radio resource blocks may occupy a contiguous allocation in time, in the form of contiguous timeslots. This does not preclude the inclusion of a guard time between the first and second timeslots. FIG.5ashows first radio resource blocks32in an embodiment in which the first radio resource blocks32comprise two timeslots33,34. FIG.5bshows first radio resource blocks35in an embodiment in which the first radio resource blocks35comprise two frequency channels36,37.
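The reallocation running through these embodiments, where the surviving link expands to occupy the whole time or frequency allocation, can be captured in a few lines. The 9:1 split for the switched case and the even split for the multiplexed case are example figures consistent with the description above, not mandated values.

```python
# Illustrative sketch of resource reallocation between the two links.
# "switched" corresponds to the data-switch embodiments (asymmetric
# split), "multiplexed" to the mux/demux embodiments (similar capacity).

def allocations(total, mode="multiplexed", failed=None):
    """Return the share of the time/frequency allocation per link."""
    if failed == "first":
        return {"first": 0.0, "second": total}   # e.g. t3 = t1 + t2
    if failed == "second":
        return {"first": total, "second": 0.0}
    if mode == "switched":
        # The active link carries payload; the other link is kept in
        # synchronisation using a small proportion of the resources.
        return {"first": 0.9 * total, "second": 0.1 * total}
    return {"first": 0.5 * total, "second": 0.5 * total}
```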
FIG.6shows a series of Time Division Duplex (TDD) frames, including downlink frames DL for transmission from the first wireless station1to the second wireless station2, and uplink frames UL for transmission from the second wireless station2to the first wireless station1. As shown inFIG.6, the first radio resource blocks ofFIG.5amay be transmitted within the recurring downlink timeslots DL, in this example as the first and second timeslots33,34, which are recurring timeslots within the TDD frames. Alternatively, radio resource blocks comprising two frequency channels as shown inFIG.5bmay be transmitted within the downlink timeslots. For uplink operation, radio resource blocks divided in frequency and/or time may also be used, for the same reasons as on the downlink. FIG.7ashows bi-directional transmission of data between a first wireless station1and a second wireless station2in a TDD system in the absence of a fault condition, downlink data being multiplexed for transmission from the first wireless station1to the second wireless station2over a first radio link in a first downlink timeslot38, corresponding to timeslot33inFIG.6, and over a second radio link in a second downlink timeslot40, corresponding to timeslot34inFIG.6, and uplink data being multiplexed for transmission from the second wireless station to the first wireless station over a first radio link in a first uplink timeslot39and over a second radio link in a second uplink timeslot41. FIG.7bshows the system ofFIG.7ain a fault condition of the first radio link, the data being transmitted over the second radio link in extended timeslots42,43occupying the time allocated in the absence of a fault to the first and second respective downlink and uplink timeslots. In an embodiment of the invention, the first and second wireless stations are part of a wireless network comprising further wireless stations synchronised according to the TDD and TDMA protocol. This allows the first and second wireless stations to be used within a wireless network having other wireless stations. FIG.8shows a series of frames according to a Time Division Duplex and Time Division Multiple Access protocol. Timeslots44and45are for downlink transmission from the primary and secondary master radio of the first, or master, wireless station to the primary and secondary radios respectively of a second wireless station, which may be a first slave station. Timeslots46and47are for downlink transmission from the primary and secondary master radio of the first, or master, wireless station to the primary and secondary radios respectively of a third wireless station, which may be a second slave station. Timeslots48and49are for uplink transmission from the primary and secondary radios respectively of the second wireless station, which may be the first slave station, to the primary and secondary master radio of the first, or master, wireless station. Timeslots50and51are for uplink transmission from the primary and secondary radios respectively of the third wireless station, which may be the second slave station, to the primary and secondary master radio of the first, or master, wireless station. Although only two slave stations are shown, this is for illustration only and more than two slave stations may be used, each being allocated downlink and uplink timeslots within a TDD/TDMA frame.
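Before turning to the fault behaviour shown in FIGS. 9a and 9b below, the per-slave allocation of FIG. 8 and the expansion applied when a primary link fails can be sketched as follows; the frame representation is an assumption made for illustration only.

```python
# Sketch of the TDD/TDMA reallocation of FIGS. 8, 9a and 9b: each slave
# has a primary and a secondary timeslot, and on failure of a primary
# link the secondary timeslot expands over the time previously
# allocated to the failed primary link. Adjacent slots are assumed.

def reallocate(frame, failure_at):
    """frame maps slave id -> {'primary': (start, dur),
    'secondary': (start, dur)}; failure_at is 'master' (all slaves
    affected) or a single slave id."""
    affected = list(frame) if failure_at == "master" else [failure_at]
    for slave in affected:
        p_start, p_dur = frame[slave]["primary"]
        s_start, s_dur = frame[slave]["secondary"]
        # The secondary slot absorbs the primary slot's allocation,
        # assuming the two slots are contiguous as in FIG. 8.
        frame[slave]["secondary"] = (min(p_start, s_start), p_dur + s_dur)
        frame[slave]["primary"] = (p_start, 0.0)
    return frame
```

On a fault confined to one slave station, only that station's timeslots are merged; on a fault in the master station's primary radio, every slave is affected, matching the two cases described next.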
FIG.9ais a schematic diagram showing transmission of data in the timeslots ofFIG.8in a Time Division Duplex and Time Division Multiple Access system, between a first wireless station1and second2aand third2bwireless stations in the absence of a fault condition. FIG.9bis a schematic diagram showing the system ofFIG.9ain a fault condition of the first, primary, radio link between the first and second wireless stations. As can be seen, the timeslots45and49used by the secondary radios of the master station1and the first slave station2aare expanded in time to include the timeslots44and48previously allocated to the failed first link between the primary radios. The longer timeslots are shown as timeslots52and53. In this example, the timeslots allocated in the link between the master station1and the second slave station2bare unaffected by the failure of the first link. This is because, in this case, the failure is caused by a problem in the first slave station2a, and does not affect any links to the second slave station2b. If the failure of the first link is due to a problem in the master station1, then both the link from the primary master radio3to the primary slave radio5aof the first slave station2a, and also the link from the primary master radio3to the primary slave radio5bof the second slave station2bwould be affected. In this case, the timeslots47and51for communication between the secondary master radio4and the secondary slave radio6bwould be extended to occupy the time previously allocated to timeslots46and50. A similar process would occur if a fault is detected in a link between secondary radios, the timeslots used by the secondary radios being reallocated to the primary radios. The designation of "primary" and "secondary" radio may be arbitrary. FIG.10shows the first wireless station1and the second wireless station2in schematic form. A third wireless station, used as a second slave station in a TDMA scheme, may also have the same block diagram as shown for the first and second wireless stations. Considering the first wireless station1, a data link56is typically connected to the first wireless station1, typically an optical fibre connection, which may carry, for example, Ethernet traffic. The data stream is processed in a data processing circuit element54, which may be a multiplexer/demultiplexer circuit element in some embodiments, which splits the data stream into typically two streams before transmission, and aggregates the two data streams on reception, under control of a controller circuit element55. If a fault condition is detected, the multiplexing and aggregation may be disabled, and the data routed via the second radio link. In another embodiment, the data processing circuit element54may be a data switch which routes the data stream via the first or second radio link under control of the controller55. The data processing circuit element54may be implemented using well known techniques including digital signal processing integrated circuits, programmable gate arrays, or dedicated hardware. The controller55, which may also be referred to as a processor, may comprise program code held in memory configured to cause the controller to perform the method of embodiments of the invention. The processor may comprise one or more digital signal processors, and/or programmable logic arrays.
The primary master radio3and the secondary master radio4are each connected to and controlled by the controller55, and each comprises conventional baseband signal processing circuit elements and conventional up-conversion and down-conversion circuit elements, comprising filtering, amplification and mixing components as is conventional in a radio transceiver. Each radio may be connected to a respective antenna as shown, or alternatively the two radios may be connected to the same antenna. The second wireless station2may have an identical construction to the first wireless station1. The designations of "master radio" and "slave radio" may be arbitrary, and equivalent to "first radio" and "second radio". Typically, the master radio sets the timing for the slave radio, but this may not be the case in all embodiments and the two radios may be equivalent to each other. A radio typically comprises a transmitter and a receiver. The designations of "primary radio" and "secondary radio" may also be arbitrary, and equivalent to "first radio" and "second radio". In particular, in the embodiments ofFIGS.1a,1b,3a,3b,4a,4b,7aand7b, either radio could be the primary radio in terms of the method of operation; the primary radio is the radio whose failure is detected. The data processing circuit element57of the second wireless station2is typically the same as the data processing circuit element of the first wireless station1. The slave controller58of the second wireless station2may typically have the same construction as the controller55of the first wireless station, and may or may not have the same program code. The second wireless station typically has a data link connection59which is similar to that of the first wireless station. FIG.11is a flow diagram showing a method according to an embodiment of the invention, comprising steps S11.1, S11.2and S11.3, andFIG.12is a flow diagram showing a method according to an embodiment of the invention, comprising steps S12.1, S12.2and S12.3. So, various embodiments of the invention have been described which comprise either a payload data switch, as shown inFIGS.2A and2Bfor example, or a payload data multiplexer, as shown inFIGS.1A and1Bfor example. In each case this allows data capacity to be maintained in the fault condition by re-allocating the radio resource of the failed link to the good link, and maintains operation of both links in the absence of failure to reduce start-up time in the event of a failure and to provide assurance that the system will perform correctly in the event of failure of one link. In the case of the use of a payload data switch, one of the links is maintained in the absence of detection of failure by signalling data which typically does not include payload data. In the case of the use of a payload data multiplexer, both of the links are maintained in the absence of detection of failure by payload data. In some circumstances this may allow the data flow to be maintained with less impact on, for example, packet delay than may be the case with the data switch, but at the cost of the higher complexity of the multiplexer. Embodiments using a data switch may be termed a 1+1 solution. This uses the data switch to provide only one link with active payload data while the other link is inactive with regard to payload data, being maintained by signalling data. Preferably the link which is inactive with regard to payload data is configured to consume a small proportion, for example 10%, of the total time or frequency resources.
Bridged payload traffic is typically carried only by the active link. On failure of one link, say the primary link, the remaining link, in this case the secondary link, becomes active in terms of payload data and expands to use all of the resources. Embodiments using a multiplexer may be termed a 2+0 solution. This uses the multiplexer to provide two links carrying payload data in the absence of a fault, sharing the time or frequency resources. Payload traffic is de-multiplexed and multiplexed so that both links contribute to the overall capacity. On failure of one link, traffic is routed over the remaining link, and the remaining link expands to use all of the resources. The 1+1 solution is typically simpler in implementation, not requiring the potentially complex multiplexing function. The capacity of the 1+1 solution may increase following failure. The 2+0 solution typically provides slightly higher capacity in normal operation as all the resource blocks are used to transport data. The 1+1 solution monitors the operation of the equipment which is inactive with regard to payload data and provides assurance that it is available to take over in the event of failure. Conventional systems do not provide complete assurance that the inactive radio will operate correctly after a protection switchover. By maintaining and monitoring two links, embodiments of the invention provide this assurance. However, this may be at the cost of typically not allowing for links to be set up using the transmitter of one link and the receiver of the other link, as may be the case in a conventional hot standby system. The 1+1 solution allows the inactive link to be established in advance, so that the protection switchover involves only expansion of time or frequency dimensions. This permits the use of air interface methods that inherently take time to establish a link without an excessive downtime on failure of the active link, such as OFDM (Orthogonal Frequency Division Multiplexing). Such air interface methods may be particularly suited to non-ideal wireless paths, for example non-line of sight. The 2+0 solution maintains overall capacity on link failure, where conventional systems may typically drop to 50% capacity. The above embodiments are to be understood as illustrative examples of the invention. It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.
11943652
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS Methods and apparatuses are described herein for prioritizing V2X sidelink communication transmissions over uplink (UL) transmissions. An enhanced V2X Management Object (MO) is described herein, which may be used for provisioning V2X configuration parameters into a user equipment (UE). Methods are described herein for (1) performing sidelink logical channel prioritization (LCP), (2) performing UL data LCP that addresses the impacts of SL related transmissions, and (3) determining the prioritization of a V2X sidelink communication transmission versus a UL transmission for scenarios where the V2X traffic priority with respect to the UL traffic varies dynamically. In the embodiments described herein, the terms V2X service, V2X message or V2X application data packet may be used interchangeably. The techniques described herein address the issue of sidelink LCP in view of New Radio (NR) V2X (Vehicle-to-X Communication) requirements, which are more diverse and more stringent than those of LTE V2X. These techniques address the issue of inter-UE prioritization of sidelink transmission versus uplink transmission over the Uu interface, taking into account the fact that such prioritization in legacy systems is based on absolute sidelink LCH priority configured into the UE, as opposed to relative priority between SL transmission and UL transmission. The following abbreviations and definitions may be used herein:
3GPP 3rd Generation Partnership Project
AS Access Stratum
BSD Bucket Size Duration
BSR Buffer Status Report
BWP Bandwidth Part
CE Control Element
C-RNTI Cell Radio Network Temporary Identifier
DPR Data Volume and Power Headroom Report
eNB Evolved Node B
eV2X Enhanced Vehicle-to-X Communication
eMBB enhanced Mobile Broadband
E-UTRAN Evolved UMTS Terrestrial Radio Access Network
GA Geographical Area
gNB NR NodeB
HARQ Hybrid Automatic Repeat Request
HSS Home Subscriber Server
ITS Intelligent Transport System
ITS-AID ITS Application Identifier
LCG Logical Channel Group
LCH Logical Channel
LCID Logical Channel Identity
LCP Logical Channel Prioritization
LTE Long Term Evolution
MAC Medium Access Control
MME Mobility Management Entity
mMTC massive Machine Type Communication
MO Management Object
NR New Radio
PBR Prioritized Bit Rate
PDB Packet Delay Budget
PDCP Packet Data Convergence Protocol
PDN Packet Data Network
PDU Protocol Data Unit
P-GW PDN Gateway
PHR Power Headroom Report
PLMN Public Land Mobile Network
PPPP ProSe Per Packet Priority
PPPR ProSe Per Packet Reliability
ProSe Proximity-Based Services
PSID Provider Service Identifier
PUCCH Physical Uplink Control Channel
SC sidelink Control
SCS SubCarrier Spacing
SCI sidelink Control Information
S-GW Serving Gateway
SL sidelink
SPS Semi-Persistent Scheduling
SR Scheduling Request
RAN Radio Access Network
RAT Radio Access Technology
RLC Radio Link Control
RNTI Radio Network Temporary Identifier
RRC Radio Resource Control
SL-SCH sidelink Shared Channel
TTI Transmission Time Interval
UE User Equipment
UL Uplink
UL-CCCH Uplink Common Control Channel
UMTS Universal Mobile Telecommunications System
URLLC Ultra-Reliable and Low Latency Communications
USIM Universal Subscriber Identity Module
V2I Vehicle-to-Infrastructure Communication
V2N Vehicle-to-Network Communication
V2P Vehicle-to-Pedestrian Communication
V2V Vehicle-to-Vehicle Communication
V2X Vehicle-to-X Communication
NR V2X use cases and requirements are addressed by the techniques described herein.
SA1 has identified numerous use cases for advanced V2X services with the consideration of desirable new applications in the automotive industry. These use cases for advanced V2X services may be categorized into four use case groups: vehicle platooning, extended sensors, advanced driving and remote driving, as follows:
Vehicle Platooning enables vehicles to dynamically form a platoon travelling together. The vehicles in the platoon obtain information from a leading vehicle to manage this platoon. This information may allow the vehicles to drive closer than normal in a coordinated manner, going in the same direction and travelling together.
Extended Sensors enable the exchange of raw or processed data gathered through local sensors or live video images among vehicles, roadside units, devices of pedestrians and V2X application servers. The vehicles may increase the perception of their environment beyond what their own sensors may detect, enabling a broader and more holistic view of the local situation. High data rate is one of the characteristics of the extended sensors use case group.
Advanced Driving enables semi-automated or fully automated driving. Each vehicle and/or RSU may share its own perception data obtained from its local sensors with vehicles in proximity, which allows vehicles to synchronize and coordinate their trajectories or maneuvers. Each vehicle may share its driving intention with vehicles in proximity as well.
Remote Driving enables a remote driver or a V2X application to operate a remote vehicle for those passengers who cannot drive by themselves or for remote vehicles located in dangerous environments. When variation is limited and routes are predictable, such as in the case of public transportation, driving based on cloud computing may be used. High reliability and low latency are the main requirements of the remote driving use case group.
FIG.1depicts an overview50of 5G eV2X requirements versus LTE V2V Rel-14 requirements. The 5G eV2X target data rate51is approximately one hundred times higher than the LTE V2V Rel-14 data rate, e.g. from a range of 1-10 Mbps to 1 Gbps or above. Similarly, the 5G eV2X target end-to-end latency52is five to twenty times lower than that of LTE Rel-14 V2V, e.g. a latency reduction from a range of 20-100 ms to a range of 3-5 ms. The 5G eV2X target communication range53is two to three times larger than that of LTE Rel-14 V2X, e.g. an increase in communication range from a range of 100-320 m to 1000 m or above. The 5G eV2X target positioning accuracy54is ten times higher than that of LTE Rel-14 V2X, e.g. an accuracy increase from a range of 5-15 m to a range of 0.1-0.5 m. Similarly, the 5G eV2X target mobility relative speed55is two times higher than that of LTE Rel-14 V2V, e.g. an increase in target relative speed from 280 km/h to 550 km/h. Similarly, the 5G eV2X target reliability56is 1000 times higher than that of LTE V2V, e.g. an increase in reliability requirement from 90% to 99.99% or more.
FIG.2depicts a non-roaming architecture200for PC5 and LTE-Uu based V2X communication. The V2X Control Function201is the logical function that may be used for network related actions required for V2X. The V2X Control Function201may communicate with a V2X Application Server202via the V2 reference point217.
The V2X Control Function201may be used to provision a UE (e.g., UE205a, UE205b, UE205c, or UE205d) with necessary parameters (e.g., destination Layer-2 IDs, radio resource parameters, V2X Application Server202address information, mapping between service types and V2X frequencies) in order to use V2X communication. These parameters may be pre-configured in the UE, or, if in coverage, may be provisioned by signaling over the V3 reference point210from the V2X Control Function201in the Home Public Land Mobile Network (HPLMN). The UE may exchange V2X control information with the V2X Control Function201over the V3 reference point210. A V2X application (e.g., V2X application204a, V2X application204b, V2X application204c, or V2X application204d) may be associated with each UE (e.g., UE205a, UE205b, UE205c, or UE205d, respectively). V2X applications may communicate via the V5 reference point212. A V2X application may communicate with the V2X Application Server202via the V1 reference point219. The UEs (e.g., UE205a, UE205b, UE205c, or UE205d) may communicate with the Evolved-UMTS Terrestrial Radio Access Network (E-UTRAN)206via the LTE-Uu interface213. The E-UTRAN206may access the Mobility Management Entity (MME)207via the S1 interface214. The V2X Control Function201may access the Home Subscriber Server (HSS)209via the V4 reference point218. The MME207may access the HSS209via the S6a reference point215. The V2X Application Server202may access the PDN Gateway or Serving Gateway (S/P-GW)208via the SGi reference point216. When PC5211is used for the transmission of V2X messages, the following principles may be followed for both the network scheduled operation mode (e.g. mode3) and the UE autonomous resources selection mode (e.g. mode4):
ProSe Per-Packet Priority (PPPP) may apply to the V2X communication over PC5211;
The application layer may set the PPPP for each V2X message when passing it to the lower layer for transmission;
The mapping of application layer V2X message priority to PPPP may be configured on the UE;
The setting of the PPPP value may reflect the latency required in both the UE and the Evolved Node B (eNB), e.g. a low Packet Delay Budget (PDB) may be mapped to a high priority PPPP value;
The mapping between V2X service types and V2X frequencies;
The mapping of Destination Layer-2 ID(s) and the V2X services, e.g., PSIDs or ITS-AIDs of the V2X application (e.g., V2X application204a, V2X application204b, V2X application204c, or V2X application204d); and
The mapping of the PPPP to packet delay budget (see the sketch after this passage).
When the network scheduled operation mode is used, the following principles may apply:
A UE may provide priority information reflecting the PPPP to the eNB for a resources request;
When the eNB receives a request for a PC5 resource from the UE, the eNB may deduce the packet delay budget from the priority information reflecting the PPPP from the UE;
The eNB may use the priority information reflecting the PPPP for priority handling and the UE-PC5-AMBR for capping the UE PC5 transmission in the resources management;
The UE may provide the Destination Layer-2 ID(s) of the V2X services to the eNB for the resources requested; and
When the eNB receives a request for a PC5 resource from a UE, the eNB may determine the V2X frequency(ies) in which the V2X service is to be scheduled.
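The inverse relationship between PPPP value and priority (a low PDB maps to a high-priority, numerically low, PPPP value) can be made concrete with a minimal sketch. The mapping values below are invented for illustration; real mappings are provisioned or pre-configured into the UE, and the function name is hypothetical.

```python
# Hypothetical PPPP -> packet delay budget (PDB) mapping, illustrating the
# principle above: a low PDB is mapped to a high-priority (numerically low)
# PPPP value. Real mappings are provisioned or pre-configured in the UE.
PPPP_TO_PDB_MS = {
    1: 20,    # highest priority, tightest delay budget
    2: 50,
    3: 100,
    4: 300,   # lowest priority, most relaxed delay budget
}

def pdb_for_pppp(pppp: int) -> int:
    """Deduce the packet delay budget from the PPPP, as an eNB might when
    it receives a PC5 resource request carrying priority information."""
    return PPPP_TO_PDB_MS[pppp]

assert pdb_for_pppp(1) < pdb_for_pppp(4)   # lower PPPP value => lower PDB
```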
When the autonomous resources selection mode is used, the following additional principles apply:
The UE may derive the packet delay budget of the V2X message from the PPPP based on the provisioned mapping information; and
The UE may derive the frequency in which a V2X service is to be transmitted from the mapping between V2X service types and V2X frequencies.
FIG.3depicts the configuration parameters in the form of an LTE V2X over PC5 Communication MO300for V2X communication over PC5 for LTE V2X Communication Provisioning. FIG.4depicts configuration parameters in the form of a Geographical Area (GA) MO400. For sidelink HARQ operation, there may be one sidelink HARQ Entity at the medium access control (MAC) entity for transmission on the SL-SCH, which may maintain a number of parallel sidelink processes. For example, for V2X sidelink communication, the maximum number of transmitting sidelink processes associated with the sidelink HARQ Entity may be eight. A sidelink process may be configured for transmissions of multiple MAC PDUs. For example, for transmissions of multiple MAC PDUs, the maximum number of transmitting sidelink processes with the sidelink HARQ entity may be two. A delivered and configured sidelink grant and its associated HARQ information may be associated with a sidelink process, which may be associated with a HARQ buffer. If the sidelink process is configured to perform transmissions of multiple MAC PDUs for V2X sidelink communication, the process may maintain a counter SL_RESOURCE_RESELECTION_COUNTER. For other configurations of the sidelink process, this counter may not be available. The transmission of V2X sidelink communication may be prioritized over UL transmissions if the following conditions are met:
if the MAC entity is not able to perform UL transmissions and V2X sidelink transmissions simultaneously;
if the UL transmission is not prioritized by the upper layer; and
if the value of the highest priority of the sidelink logical channel(s) in the MAC PDU is lower than thresSL-TxPrioritization.
A UE may establish multiple logical channels. A logical channel identity (LCID) included within the MAC subheader may identify a logical channel within the scope of one source layer-2 ID and destination layer-2 ID combination. Parameters for LCP may not be configured. The access stratum (AS) may be provided with the PPPP of a PDU transmitted over the PC5 interface by a higher layer. There may be a PPPP associated with each logical channel. The PDB of the PDU may be determined from the PPPP. A low PDB may be mapped to a high priority PPPP value. A sidelink LCP procedure may be applied when a new transmission is performed. Each sidelink logical channel has an associated priority, which may comprise the PPPP. Multiple sidelink logical channels may have the same associated priority. The mapping between priority and LCID may be left for UE implementation. The MAC entity may perform the following LCP procedure either for each sidelink control information (SCI) transmitted in a sidelink control (SC) period in sidelink communication, or for each SCI corresponding to a new transmission in V2X sidelink communication. The MAC entity may allocate resources to the sidelink logical channels in the following steps: Consider sidelink logical channels not previously selected for this SC period and the SC periods (if any) that are overlapping with this SC period, to have data available for transmission in sidelink communication.
Select a ProSe destination, having the sidelink logical channel with the highest priority, among the sidelink logical channels having data available for transmission. For each MAC PDU associated with the SCI, the following steps may be performed: Allocate resources to the sidelink logical channel with the highest priority among the sidelink logical channels belonging to the selected ProSe destination and having data available for transmission; if any resources remain, sidelink logical channels belonging to the selected ProSe destination may be served in decreasing order of priority until either the data for the sidelink logical channel(s) or the SL grant is exhausted, whichever comes first. Sidelink logical channels configured with equal priority may be served equally. During the scheduling procedure described above, the UE may also follow the rules below:
the UE does not segment a radio link control (RLC) SDU (or partially transmitted SDU) if the whole SDU (or partially transmitted SDU) fits into the remaining resources;
if the UE segments an RLC SDU from the sidelink logical channel, it may maximize the size of the segment to fill the grant as much as possible;
the UE may maximize the transmission of data; and
if the MAC entity is given a sidelink grant size that is equal to or larger than 10 bytes (for sidelink communication) or 11 bytes (for V2X sidelink communication) while having data available for transmission, the MAC entity may not transmit only padding.
The above sidelink LCP procedure has no provision similar to that used in the Uu interface logical channel prioritization procedure, such as a logical channel prioritized bit rate to avoid lower priority channel starvation. Similarly, the sidelink logical channel procedure has no built-in provision for restrictions on the logical channels that can be served by a given resource grant, such as the ones specified for the NR LCP procedure, in order to fulfill requirements imposed by restrictions such as latency restrictions, numerology restrictions or allowed serving cells restrictions (for example, in support of packet duplication). In LTE or NR, a buffer status reporting (BSR) procedure over the Uu interface may be used to provide the serving eNB or gNB with information about the UL data volume in the MAC entity. Also in LTE, the sidelink buffer status reporting procedure may be used to provide the serving eNB with information about the amount of sidelink data available for transmission in the SL buffers associated with the MAC entity. Radio resource control (RRC) may control BSR reporting for the sidelink by configuring two timers: the periodic-BSR-TimerSL and the retx-BSR-TimerSL. Each sidelink logical channel may belong to a ProSe destination. Each sidelink logical channel may be allocated to a logical channel group (LCG) depending on the priority of the sidelink logical channel and the mapping between the LCG ID and the priority that is provided by upper layers. The LCG may be defined per ProSe destination. A sidelink BSR, similar to the Uu interface BSR, may be a regular BSR, a periodic BSR, or a padding BSR. A MAC PDU may contain at most one sidelink BSR MAC control element, even when multiple events trigger a sidelink BSR by the time a first sidelink BSR has been transmitted. In this case the regular sidelink BSR and the periodic sidelink BSR may have precedence over the padding sidelink BSR. The MAC entity may restart retx-BSR-TimerSL upon reception of an SL grant.
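The destination selection and decreasing-priority allocation of the sidelink LCP procedure above can be summarized in a short sketch. This is a minimal Python illustration assuming simplified types and byte-granular allocation; the names (SLLogicalChannel, allocate_sidelink_grant) are hypothetical, and PBR buckets, segmentation and the padding rules are omitted.

```python
from dataclasses import dataclass

@dataclass
class SLLogicalChannel:
    lcid: int             # logical channel identity (LCID)
    destination: str      # ProSe destination
    priority: int         # PPPP; lower value = higher priority
    pending_bytes: int    # data available for transmission

def allocate_sidelink_grant(channels, grant_bytes):
    """Sketch of the sidelink LCP allocation described above: pick the
    ProSe destination owning the highest-priority channel with data, then
    serve that destination's channels in decreasing order of priority
    until the data or the SL grant is exhausted, whichever comes first."""
    candidates = [ch for ch in channels if ch.pending_bytes > 0]
    if not candidates:
        return {}
    destination = min(candidates, key=lambda ch: ch.priority).destination
    allocation, remaining = {}, grant_bytes
    for ch in sorted((c for c in candidates if c.destination == destination),
                     key=lambda c: c.priority):
        if remaining == 0:
            break
        served = min(ch.pending_bytes, remaining)
        allocation[ch.lcid] = served
        remaining -= served
    return allocation

# The grant serves only the destination of the highest-priority channel,
# in decreasing order of priority; destination D2 is not served here.
chs = [SLLogicalChannel(1, "D1", priority=2, pending_bytes=300),
       SLLogicalChannel(2, "D1", priority=5, pending_bytes=400),
       SLLogicalChannel(3, "D2", priority=3, pending_bytes=200)]
print(allocate_sidelink_grant(chs, 500))   # {1: 300, 2: 200}
```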
All triggered regular sidelink BSRs may be canceled in case the remaining configured valid SL grant(s) are able to accommodate all pending data available for transmission in V2X sidelink communication. Triggered sidelink BSRs may be canceled in case the MAC entity has no data available for transmission for any of the sidelink logical channels. Triggered sidelink BSRs may be canceled when a sidelink BSR (except for a truncated sidelink BSR) is included in a MAC PDU for transmission. All triggered sidelink BSRs may be canceled, and retx-BSR-TimerSL and periodic-BSR-TimerSL may be stopped, when upper layers configure autonomous resource selection. The MAC entity may transmit at most one regular/periodic sidelink BSR in a transmission time interval (TTI). If the MAC entity is requested to transmit multiple MAC PDUs in a TTI, it may include a padding sidelink BSR in any of the MAC PDUs that do not contain a regular/periodic sidelink BSR. All sidelink BSRs transmitted in a TTI may reflect the buffer status after MAC PDUs have been built for the TTI. Each LCG may report at most one buffer status value per TTI, and this value may be reported in all sidelink BSRs that are reporting buffer status for this LCG. A padding sidelink BSR may not be allowed to cancel a triggered regular/periodic sidelink BSR. A padding sidelink BSR may be triggered for a specific MAC PDU, and the trigger may be canceled when the MAC PDU has been built. FIG.5depicts a sidelink BSR and truncated sidelink BSR MAC control element for even N500. The example ofFIG.5depicts the sidelink BSR and truncated sidelink BSR MAC control elements (CEs) comprising a destination index field501, one LCG ID field502, and one corresponding buffer size field503per reported target group. The destination index field501may identify the destination for V2X sidelink communication. The length of this field may be, for example, 4 bits. The value may be set to the index of the destination reported to the eNB in the sidelink UE information message as part of the V2X destination list. The LCG ID field502may identify the group of logical channel(s) whose buffer status is being reported. The length of the field may be, for example, 2 bits. The buffer size field503may identify the total amount of data available across all logical channels of an LCG of a ProSe destination after all MAC PDUs for the TTI have been built. The amount of data may be indicated in a number of bytes. It may include all data that is available for transmission in the RLC layer and in the PDCP layer. Buffer sizes of LCGs may be included in decreasing order of the highest priority of the sidelink logical channel belonging to the LCG, irrespective of the value of the destination index field501. FIG.6depicts a sidelink BSR and truncated sidelink BSR MAC control element for odd N600. The example ofFIG.6depicts the sidelink BSR and truncated sidelink BSR MAC CEs comprising a destination index field601, one LCG ID field602, one corresponding buffer size field603per reported target group, and reserved bits604. An LTE V2X sidelink communication scheduling request may rely on the LTE Uu scheduling request mechanism, which is also the baseline for the NR Uu scheduling request mechanism. In NR, the MAC entity may be configured with zero, one, or more scheduling request (SR) configurations. An SR configuration may comprise a set of PUCCH resources for SR across different bandwidth parts (BWPs) and cells.
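To make the FIG.5/FIG.6 layouts concrete, the sketch below packs reported groups into bytes using the 4-bit destination index and 2-bit LCG ID described above. The 6-bit buffer size index width is an assumption made for this sketch (the actual CE encodes buffer size via an index table), and the function name is hypothetical.

```python
def pack_sidelink_bsr(groups):
    """Pack sidelink BSR groups of (destination_index, lcg_id, buffer_size_index)
    into bytes. Each group occupies 12 bits (4 + 2 + 6), so an even number of
    groups is byte-aligned (cf. FIG.5) while an odd number leaves 4 reserved
    padding bits (cf. FIG.6). A sketch; the 6-bit buffer size is assumed."""
    bits, nbits = 0, 0
    for dest_idx, lcg_id, bs_idx in groups:
        assert 0 <= dest_idx < 16 and 0 <= lcg_id < 4 and 0 <= bs_idx < 64
        bits = (bits << 12) | (dest_idx << 8) | (lcg_id << 6) | bs_idx
        nbits += 12
    if nbits % 8:                       # odd N: append 4 reserved bits
        pad = 8 - (nbits % 8)
        bits <<= pad
        nbits += pad
    return bits.to_bytes(nbits // 8, "big")

# Two reported groups -> 3 bytes; one group -> 2 bytes (4 reserved bits).
assert len(pack_sidelink_bsr([(0, 1, 10), (1, 2, 33)])) == 3
assert len(pack_sidelink_bsr([(0, 1, 10)])) == 2
```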
For a logical channel, at most one PUCCH resource for SR is configured per BWP. Each SR configuration may correspond to one or more logical channels. Each logical channel may be mapped to zero or one SR configuration, which may be configured by RRC. The SR configuration of the LCH that triggered the BSR, if such a configuration exists, may be considered as corresponding to the SR configuration for the triggered SR. For a BSR triggered by the expiry of the BSR retransmission timer, the corresponding SR configuration for the triggered SR is that of the highest priority LCH (if such a configuration exists) that has data available for transmission at the time the BSR is triggered. RRC may configure the following parameters for the scheduling request procedure:
sr-ProhibitTimer (per SR configuration);
sr-TransMax (per SR configuration); and
sr-ConfigIndex.
The following UE variable may be used for the scheduling request procedure: SR_COUNTER (per SR configuration). If an SR is triggered and there are no other SRs pending corresponding to the same SR configuration, the MAC entity may set the SR_COUNTER of the corresponding SR configuration to 0. When an SR is triggered, it may be considered as pending until it is canceled. All pending SR(s) triggered prior to the MAC PDU assembly may be canceled, and each respective sr-ProhibitTimer may be stopped, when the MAC PDU is transmitted and this PDU includes a BSR MAC Control Element (CE) which contains the buffer status up to (and including) the last event that triggered a BSR prior to the MAC PDU assembly. All pending SR(s) may be canceled when the UL grant(s) can accommodate all pending data available for transmission. Only PUCCH resources on a BWP that is active at the time of the SR transmission occasion may be considered valid. For Rel-15, sidelink packet duplication is supported for V2X sidelink communication and may be performed at the PDCP layer of the UE. Regarding sidelink packet duplication for transmission, a PDCP PDU may be duplicated at the PDCP entity. The duplicated PDCP PDUs of the same PDCP entity may be submitted to two different RLC entities and associated with two different sidelink logical channels, respectively. The duplicated PDCP PDUs of the same PDCP entity may be transmitted on different sidelink carriers. A UE using autonomous resource selection (regardless of its RRC state) may autonomously activate or deactivate sidelink packet duplication based on (pre)configuration. For a scheduled resource allocation (mode3), the eNB is informed of the ProSe Per Packet Reliability (PPPR) information of the V2X transmission requested by the UE. The PPPR information may comprise the amount of data associated with one (or more) PPPR values that the UE has in the buffer and the destination of the V2X messages associated with one (or more) PPPR values that the UE has in the buffer. The main use cases supported by Release 14 LTE V2V include basic safety with relatively low data rates in the order of 1-10 Mbps, where vehicles' status information such as position, speed, and heading is exchanged with nearby vehicles, infrastructure nodes, or pedestrians. As V2X applications advance, transmission of short messages about vehicles' own status data may be complemented with transmission of larger messages containing raw sensor data, vehicles' intention data, coordination, confirmation of future maneuvers, etc.
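A minimal sketch of the per-configuration SR state handling described above, assuming a hypothetical class name and omitting the sr-ProhibitTimer and PUCCH resource selection details:

```python
class SRConfiguration:
    """Sketch of per-SR-configuration state: SR_COUNTER is reset when an
    SR is triggered with no other SR pending for the same configuration,
    and pending SRs are canceled when a MAC PDU carrying an up-to-date
    BSR is transmitted or the UL grants cover all pending data."""
    def __init__(self, sr_trans_max: int):
        self.sr_trans_max = sr_trans_max   # sr-TransMax (per configuration)
        self.sr_counter = 0                # SR_COUNTER (per configuration)
        self.pending = False

    def trigger_sr(self):
        if not self.pending:               # no other SR pending for this config
            self.sr_counter = 0
        self.pending = True                # pending until canceled

    def on_mac_pdu_with_bsr_transmitted(self):
        # SRs triggered prior to the MAC PDU assembly are canceled.
        self.pending = False

    def on_grants_cover_all_pending_data(self):
        self.pending = False

    def can_transmit(self) -> bool:
        return self.pending and self.sr_counter < self.sr_trans_max
```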
For these advanced applications, the expected requirements to meet the needed data rate, reliability, latency, communication range, and speed may be more stringent than illustrated inFIG.1. Advanced V2X application throughput is expected to be a hundred times higher than that of the LTE V2X basic safety application throughput. For example, sensor raw data sharing between UEs supporting V2X applications may require data rates as high as 1 Gbps. The NR LCP procedure for sidelink communications may re-use the LTE sidelink LCP procedure or the newly specified NR Uu interface LCP procedure, but each of them has its own limitations with respect to NR sidelink requirements. The current LTE V2X LCP procedure for sidelink data transmission has at least the following limitations:
(1) It is designed primarily to support low data rate transmission with no built-in logical channel starvation avoidance mechanism. For example, the current LTE sidelink LCP procedure does not support a built-in mechanism, for example a prioritized bit rate, in order to avoid the starvation of low priority logical channels by higher priority logical channels configured to serve applications with high data rate requirements.
(2) It is not designed to support configuration mapping restrictions for each sidelink logical channel such as:
(2a) SubCarrier Spacing (SCS) transmission restrictions.
(2b) Allowable latency restrictions. For example, the NR sidelink, similar to the NR Uu interface, may be characterized by a variable transmission opportunity timing or a variable transmission time duration, and as a result, a sidelink logical channel may be restricted to using certain resource grants because of the logical channel allowed latency requirement.
(2c) Other potential restrictions, such as data duplication related restrictions or RAT/3GPP release co-existence related restrictions. For example, considering carrier frequency restrictions, a RAT restriction, or a 3GPP release restriction (for example, to take into account the co-existence between V2X devices with different RAT or 3GPP release capabilities), depending on the adopted design, some of these restrictions may be visible to the LCP procedure.
(3) The criteria for the selection of the transmission destination (e.g. ProSe destination) during the LCP procedure take into account only the priority (e.g., PPPP) of the logical channels toward that destination, specifically the logical channel with the highest priority. Furthermore, in a given transmission opportunity, once selected, all logical channels of the selected destination must be served before logical channels on other destinations are selected. In LTE V2X, the UE may be configured with PPPP to PDB mapping, but this mapping is left to implementation and not specified. One issue with this transmission destination selection approach is that in the case of NR V2X, with a diverse set of services beyond just basic safety services, data of higher priority logical channels on some other destinations may fail to meet their transmission latency requirement and even be discarded, while lower priority channels on the selected destination are being served. Another issue with the current LTE V2X transmission destination selection approach is with respect to the ability to efficiently allocate resource grants while taking into account logical channel mapping restrictions other than the allowed latency restriction.
The current destination selection approach may result in selecting a transmission destination for which at least the highest priority logical channel cannot be served by the available grant, or by the grant among the available grants whose transmission time is next, with the risk of unnecessarily failing to meet allowed latency requirements, dropping data, and wasting radio resource grants. The NR Uu LCP procedure is designed to support high data rates, with built-in mechanisms to support a prioritized data rate and logical channel mapping restrictions such as SCS restrictions and allowed latency restrictions, but it is designed in the context of the NR Uu interface requirements and does not support capabilities for transmission destination selection or other potential logical channel mapping restrictions (for example, service to frequency mapping restrictions). In view of the discussion above, there is a need for a new NR sidelink LCP procedure that builds on the existing LTE sidelink LCP procedure and NR Uu LCP procedure. There is also a need to investigate any impact a grant type (e.g., mode3or mode4grant type) may have on the LCP procedures. In Rel-15 LTE, for the LCP procedure, the MAC entity may take into account the following relative priority in decreasing order:
MAC CE for C-RNTI or data from UL-CCCH;
MAC CE for Data Volume and Power Headroom Report (DPR);
MAC CE for SPS confirmation;
MAC CE for BSR, with the exception of BSR included for padding;
MAC CE for PHR, Extended PHR, or Dual Connectivity PHR;
MAC CE for sidelink BSR, with the exception of sidelink BSR included for padding;
Data from any logical channel, except data from UL-CCCH;
MAC CE for recommended bit rate query;
MAC CE for BSR included for padding; and
MAC CE for sidelink BSR included for padding.
In Rel-15 NR, logical channels may be prioritized in accordance with the following order (highest priority listed first):
C-RNTI MAC CE or data from UL-CCCH;
Configured Grant Confirmation MAC CE;
MAC CE for BSR, with the exception of BSR included for padding;
Single Entry PHR MAC CE or Multiple Entry PHR MAC CE;
Data from any logical channel, except data from UL-CCCH; and
MAC CE for BSR included for padding.
This description uses the term non-padding BSR to denote a MAC CE for BSR with the exception of BSR included for padding. This description similarly uses the term non-padding sidelink BSR to denote a MAC CE for sidelink BSR with the exception of sidelink BSR included for padding. In Rel-15 LTE, the non-padding BSR is prioritized over the non-padding sidelink BSR, which is prioritized over data from any logical channel, except data from the UL-CCCH. Similarly, in Rel-15 NR, the non-padding BSR is prioritized over data from any logical channel, except data from the UL-CCCH. The prioritization rules above do not take into account the relative priority between sidelink transmissions and uplink transmissions. For example, the BSR is always prioritized over the sidelink BSR even when the UL data that triggers the BSR may be of lower priority than the sidelink data that triggers the sidelink BSR. As defined above, the term "BSR" refers to the procedure used to provide the serving eNB (in the case of LTE) or gNB (in the case of NR) with information about the amount of data available for transmission in the UL buffers associated with the MAC entity, while "sidelink BSR" refers to the procedure used to provide the serving eNB with information about the amount of sidelink data available for transmission in the sidelink buffers associated with the MAC entity.
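The Rel-15 LTE ordering above, and the limitation that the UL BSR always precedes the sidelink BSR, can be captured in a few lines. The sketch uses the list exactly as given above; the helper name is hypothetical.

```python
# Rel-15 LTE relative priority for LCP, in decreasing order, taken from the
# list above. A lower index means a higher priority when building a MAC PDU.
LTE_REL15_LCP_ORDER = [
    "C-RNTI MAC CE / UL-CCCH data",
    "DPR MAC CE",
    "SPS confirmation MAC CE",
    "non-padding BSR MAC CE",
    "PHR / Extended PHR / Dual Connectivity PHR MAC CE",
    "non-padding sidelink BSR MAC CE",
    "data from any logical channel (except UL-CCCH)",
    "recommended bit rate query MAC CE",
    "padding BSR MAC CE",
    "padding sidelink BSR MAC CE",
]

def takes_precedence(a: str, b: str) -> bool:
    """True if item a is prioritized over item b when building a MAC PDU."""
    return LTE_REL15_LCP_ORDER.index(a) < LTE_REL15_LCP_ORDER.index(b)

# Illustrates the limitation noted in the text: the (UL) BSR always beats
# the sidelink BSR, regardless of the priority of the underlying data.
assert takes_precedence("non-padding BSR MAC CE",
                        "non-padding sidelink BSR MAC CE")
```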
There is also a need for rules for prioritization of sidelink data versus UL data transmission. In legacy LTE V2X (e.g., TS 36.321), the transmission of V2X sidelink communication is prioritized over uplink transmissions if the following conditions are met:
(1) if the MAC entity is not able to perform uplink transmissions and transmissions of V2X sidelink communication simultaneously at the time of the transmission; and
(2) if uplink transmission is not prioritized by the upper layer; and
(3) if the value of the highest priority of the sidelink logical channel(s) in the MAC PDU is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured.
In the LTE V2X specification, and in line with the above conditions, uplink transmission is prioritized over sidelink transmission if the following conditions are met:
(1) if the MAC entity is not able to perform uplink transmissions and transmissions of V2X sidelink communications simultaneously at the time of the transmission; and
(2) if uplink transmission is prioritized by upper layers, for example, if the UE has an emergency PDN connection, the UE may send an indication to the lower layers to prioritize transmission over the emergency PDN connection as compared to transmission of V2X communication over PC5 (e.g., TS 24.386); or
(3) if the value of the highest priority of the sidelink logical channel(s) in the MAC PDU is higher than or equal to the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured.
As discussed above, the NR system may support a diverse set of services, including ultra-reliable and very low latency services (URLLC), in addition to Enhanced Mobile Broadband (eMBB) and Massive Machine Type Communication (mMTC) types of services over the Uu interface. Advanced V2X, which is expected to support basic safety services and various degrees of URLLC services such as vehicle platooning, extended sensors, advanced driving, and remote driving, is also expected to support infotainment services and other less time critical services over the sidelink, including sidelink through a relay-UE node. Furthermore, either sidelink transmissions or uplink transmissions may be based on a grant free resource allocation, where the UE autonomously selects the transmission resources. In view of these NR requirements and these advanced V2X requirements, besides emergency calls, some uplink transmissions may be critical and more time sensitive than some V2X communication, while other V2X transmissions may be critical and more time sensitive than uplink transmissions. For example, for a relay-UE node, some non-emergency uplink transmission may be more critical and more time sensitive than a sidelink transmission. Therefore, the LTE V2X prioritization approach, where sidelink transmission is prioritized over uplink transmission based on an absolute priority threshold configured for the sidelink transmission, regardless of uplink transmission priority, may not be appropriate. Accordingly, there is a need for new prioritization rules between sidelink transmissions and uplink transmissions. In the legacy systems, sidelink transmission opportunities follow a periodicity of a fixed transmission time interval and fixed transmission duration. NR sidelink transmissions may be based on an NR RAT design where transmission opportunities have a variable transmission time interval with no predefined periodicity, and variable transmission durations.
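A sketch of the legacy rule in the first list above, assuming a numeric PPPP where a lower value means higher priority; the function and argument names are hypothetical:

```python
from typing import Optional

def sidelink_prioritized_over_uplink(simultaneous_tx_possible: bool,
                                     ul_prioritized_by_upper_layers: bool,
                                     highest_priority_sl_pppp: int,
                                     thres_sl_tx_prioritization: Optional[int]) -> bool:
    """Sketch of the legacy LTE V2X rule: sidelink wins only when the UE
    cannot transmit both at once, the upper layers have not prioritized
    uplink (e.g., an emergency PDN connection), and the highest-priority
    sidelink logical channel's PPPP value is below the configured
    threshold (a lower PPPP value means a higher priority)."""
    if simultaneous_tx_possible:
        return False   # no conflict: both transmissions can proceed
    if ul_prioritized_by_upper_layers:
        return False   # condition (2) of the uplink list applies
    if thres_sl_tx_prioritization is None:
        return False   # threshold not configured
    return highest_priority_sl_pppp < thres_sl_tx_prioritization

# Example: SL PPPP value 2 (high priority) vs. a threshold of 4 -> SL wins.
assert sidelink_prioritized_over_uplink(False, False, 2, 4)
```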
As a result, multiple overlapping sidelink grants (e.g. grants for sidelink transmission) may co-exist in the UE. Similarly, multiple uplink grants (e.g. grants for uplink transmission) may co-exist in the UE. Consequently, rules for prioritization of sidelink transmissions versus uplink transmissions need to be enhanced. FIG.7is a diagram700showing grants that fully overlap.FIG.7shows the UL grant701in slot1702and the sidelink grant703in slot3704.FIG.7also shows that the UL Tx grant705in slot5706fully overlaps with the sidelink Tx707grant in slot9708. The overlap time interval709is the duration of slot9708. The earliest allowed transmission starting points of the UL Tx grant705and the sidelink Tx707are the same, but the two grants have different durations, slot5706and slot9708. FIG.8is a diagram800showing grants that fully overlap.FIG.8shows the UL grant801in slot1802and the sidelink grant803in slot4804.FIG.8also shows that the UL Tx grant805in slot5806fully overlaps with the sidelink Tx807grant in slot10808. The overlap time interval809is the duration of slot10808. The latest allowed transmission end points of the UL Tx grant805and the sidelink Tx807are the same, but the two grants have different durations. FIG.9is a diagram900showing grants that are of different durations and partially overlap.FIG.9shows the UL grant901in slot1902and the sidelink grant903in slot3904.FIG.9also shows that the UL Tx grant905in slot5906partially overlaps with the sidelink Tx907grant in slot9908. The overlap time interval909is a portion of the duration of slot9908. The earliest allowed transmission starting points of the UL Tx grant905and the sidelink Tx907are different. FIG.10is a diagram1000showing grants that are of different durations and partially overlap.FIG.10shows the UL grant1001in slot11002and the sidelink grant1003in slot31004.FIG.10also shows that the UL Tx grant1005in slot51006partially overlaps with the sidelink Tx1007grant in slot91008. The overlap time interval1009is a portion of the duration of slot91008. The latest allowed transmission end points of the UL Tx grant1005and the sidelink Tx1007are different. FIG.11is a diagram1100showing grants that are of different durations and fully overlap.FIG.11shows the UL grant1101in slot11102and the sidelink grant1103in slots7and81104.FIG.11also shows that the UL Tx grant1105in slot51106fully overlaps with the sidelink Tx1107grant in slots18and191108. The overlap time interval1109is the duration of slots18and191108. In this example, the earliest allowed transmission starting point and the latest allowed transmission end point of each of the grants are different. The methods and apparatuses described herein provide solutions to the above limitations. One solution described herein includes an enhanced V2X MO, which may be used for the provisioning of V2X configuration parameters in the UE. Another solution described herein includes methods to perform sidelink LCP. The methods include configuring the MAC with relevant V2X configuration parameters, e.g., LCP control parameters. The methods also include performing selection of the logical channel(s) served by the sidelink grant, which may be based on any combination of the following: a set of allowed V2X serving cells, a set of allowed SCSs, an allowed latency, an allowed SL-SCH duration, an allowed SL-SCH K2 duration, a set of allowed RATs/RAT versions, a set of allowed BWPs, and a set of allowed transmission profiles.
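The overlap cases in FIGS.7-11 reduce to interval arithmetic on the earliest allowed starting point and latest allowed end point of each grant. A small sketch, with a hypothetical Grant type, that computes the overlap time interval and classifies it as full or partial:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Grant:
    start: int   # earliest allowed transmission starting point
    end: int     # latest allowed transmission end point

def overlap(a: Grant, b: Grant) -> Optional[Tuple[int, int, str]]:
    """Return (start, end, kind) of the overlap time interval of two
    grants, or None if they do not overlap. 'full' means one grant's
    interval contains the other's (cf. FIGS.7, 8 and 11); otherwise
    the overlap is 'partial' (cf. FIGS.9 and 10)."""
    start, end = max(a.start, b.start), min(a.end, b.end)
    if start >= end:
        return None
    contained = ((a.start <= b.start and b.end <= a.end) or
                 (b.start <= a.start and a.end <= b.end))
    return (start, end, "full" if contained else "partial")

# A short UL grant sharing its starting point with a longer SL grant:
# the overlap is the whole shorter grant, i.e. a full overlap.
print(overlap(Grant(0, 2), Grant(0, 6)))   # (0, 2, 'full')
print(overlap(Grant(0, 4), Grant(2, 6)))   # (2, 4, 'partial')
```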
The methods also include selecting the ProSe destination(s) among the selected SL LCHs having data available for transmission. The methods also include performing resource allocation for a SL LCH when performing a new SL transmission. Another solution described herein includes methods for performing UL LCP that address the impacts of SL related transmissions. The methods include determining when to prioritize transmission of V2X SL BSRs over UL BSRs. The methods also include determining when to prioritize V2X SL transmissions over UL transmissions. Another solution described herein includes methods to determine the prioritization of V2X sidelink communication transmissions versus uplink transmissions for scenarios where the V2X traffic priority may vary dynamically with respect to the uplink traffic. The methods also include using sidelink PPPR or uplink PPPR to determine the prioritization of V2X sidelink transmissions versus uplink transmissions. The methods also include determining the prioritization of V2X sidelink transmissions versus uplink transmissions for overlapping grants. Another solution described herein includes methods for determining the transmission parameters associated with a radio resource grant. In support of the NR V2X solutions described herein, an NR V2X MO may comprise the following V2X configuration parameters in addition to the configuration parameters in the LTE V2X MO:
Prioritized Bit Rate (PBR) for each authorized V2X service;
Bucket Size Duration (BSD) for each authorized V2X service;
List of allowed SCSs for each authorized V2X service;
List of allowed V2X BWPs for each authorized V2X service;
List of allowed RATs for each authorized V2X service;
List of allowed RAT versions for each authorized V2X service;
List of allowed transmission profiles for each authorized V2X service;
List of allowed carrier frequencies for each authorized V2X service;
List of PPPPs per allowed carrier frequency for each authorized V2X service; or
PPPR to packet duplication mapping rule that comprises the list of PPPR values or PPPR ranges and the corresponding number of duplicated data paths, e.g., for each PPPR value or PPPR range, there may be a configured number of parallel paths for data duplication. The number of paths may be, for example, 2, 3, or more.
The NR V2X MO may also comprise a mapping of transmission modes (unicast, groupcast or broadcast) and the V2X services, e.g., PSIDs or ITS-AIDs of the V2X application. For example, a transmission mode may be configured for each V2X service. A default transmission mode may be configured for use for V2X services that are not configured with a mapping of transmission mode. One or more default transmission modes may be configured for each V2X service. The mapping of transmission modes and V2X services may be achieved with a structure in the NR V2X MO that explicitly associates a V2X service with a transmission mode. Alternatively, the association may be implicit, wherein the mapping of V2X service and destination Layer-2 ID for broadcast transmission, groupcast transmission, and unicast transmission is configured using a separate or a dedicated Information Element (IE) or data structure. A mapping of transmission modes and the V2X services may comprise a mapping of transmission modes and service data flows or packet filter sets. For example, a mapping of transmission modes and V2X services may comprise several mappings wherein each mapping comprises a transmission mode and a service data flow or packet filter set.
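For illustration only, the per-service MO parameters listed above could be collected into a simple structure. The field names below are hypothetical and do not correspond to actual MO node names, and the duplication mapping values are invented.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class V2XServiceConfig:
    """Illustrative container for the per-service NR V2X MO parameters
    listed above; field names are hypothetical, not MO node names."""
    service_id: str                      # PSID or ITS-AID
    pbr_kbps: int                        # Prioritized Bit Rate (PBR)
    bsd_ms: int                          # Bucket Size Duration (BSD)
    allowed_scs_khz: List[int] = field(default_factory=list)
    allowed_bwps: List[int] = field(default_factory=list)
    allowed_rats: List[str] = field(default_factory=list)
    allowed_rat_versions: List[str] = field(default_factory=list)
    allowed_tx_profiles: List[str] = field(default_factory=list)
    allowed_carrier_freqs: List[int] = field(default_factory=list)
    pppp_per_carrier: Dict[int, int] = field(default_factory=dict)

# PPPR value -> number of duplicated data paths (e.g., 2, 3, or more),
# per the duplication mapping rule described above; values invented.
PPPR_TO_NUM_PATHS = {1: 3, 2: 2}
```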
In an alternative embodiment, a mapping of transmission modes and V2X services may comprise, for each V2X service, a mapping between the V2X service and more than one transmission mode, e.g. the association between the V2X service and the transmission mode is done at the service level, where each service, e.g., PSID or ITS-AID, may be associated with more than one transmission mode. The NR V2X MO may also comprise a mapping of PC5 QoS Identifiers (PQIs) and the V2X services. For example, a PQI may be configured for each authorized V2X service. One or more default PQIs may also be configured for use by V2X services that are not configured with a mapping of PQI. A V2X service may comprise more than one service data flow or packet filter set. A mapping of PQIs and the V2X services may comprise a mapping of PQIs and service data flows or packet filter sets. For example, a mapping of a PQI and V2X service may comprise several mappings where each mapping includes a PQI and a service data flow or packet filter set. In an alternative embodiment, a mapping of PQIs and V2X services may comprise, for each V2X service, a mapping between the V2X service and more than one PQI, for example, the association between a V2X service and a PQI is generated at the service level, where each service (e.g., PSID or ITS-AID) may be associated with more than one PQI. The NR V2X MO may also comprise a mapping of a transmission range and the V2X services (e.g., PSIDs or ITS-AIDs) of the V2X application, for example, for groupcast or unicast. A transmission range may be configured for each V2X service. A default transmission range may be configured for use for V2X services that are not configured with a mapping of transmission ranges. A default transmission range may be configured for each V2X service, for example, in the case of a groupcast or unicast transmission. A mapping of transmission ranges and the V2X services may comprise a mapping of transmission ranges and service data flows or packet filter sets. For example, a mapping of transmission ranges and V2X services may comprise several mappings where each mapping includes a transmission range and a service data flow or packet filter set. In an alternative embodiment, a mapping of transmission ranges and V2X services may comprise, for each V2X service, a mapping between the V2X service and more than one transmission range, for example, the association between the V2X service and the transmission range is generated at the service level, where each service (e.g., PSID or ITS-AID) may be associated with more than one transmission range. The mapping of transmission ranges and V2X services, and the mapping of transmission modes and V2X services, may be jointly generated using the same IE or data structure. The NR V2X MO may also comprise a list of allowed PC5 QoS rules, where each QoS rule comprises the QFI of the associated QoS Flow, a packet filter set, and a precedence value. The NR V2X MO may also comprise a mapping of QFIs (QoS Flow Identifiers) and PQIs, a mapping of PQIs and logical channels, a mapping of QFIs and sidelink bearers, a mapping of PQIs to QoS characteristics, e.g. QoS Profiles, or a mapping of resource allocation modes and logical channels or a mapping of resource allocation modes and QoS Flows. As used herein, the term QFI may also be referred to as PC5 QoS Flow Identifier (PFI). FIG.12depicts an NR Basic V2X over PC5 Communication MO1200. FIG.13depicts an NR Extended V2X over PC5 Communication MO1300.
FIG.14depicts a procedure1400for the NR V2X provisioning of V2X configuration parameters into a UE by a V2X Control Function in accordance with one embodiment, which may be used in combination with any of the embodiments described herein. At step1410, the UE-V2X Function1402may be pre-configured, or provisioned by a V2X Control Function1404over the V3 interface or an NR interface, with each of the V2X configuration parameters listed above. At step1411, the UE RRC1401may notify the UE-V2X Function1402of in-coverage and out-of-coverage events. For example, when the UE returns to coverage after being out of coverage, the UE RRC1401may detect an in-coverage event and may send a notification to the UE-V2X Function1402such that the latter may check the validity timer of the UE-V2X provisioned configuration parameters (step1412) and, if the validity timer has expired, it may trigger a configuration parameter request to the V2X Control Function1404(step1413). Similarly, if the UE goes out of coverage, the UE-V2X Function1402may need to know, so that it does not unnecessarily trigger configuration requests toward the network, for example, when configuration parameter validity timer(s) have expired while the UE is out of coverage. At step1414, V2X configuration parameters may be provisioned by a V2X Control Function1404over the V3 interface or an NR interface to the UE-V2X Function1402. At step1415, the UE RRC1401and the UE-V2X Function1402may store new V2X configuration parameters or update stored V2X configuration parameters. At step1416, the UE is in coverage and the UE-V2X Function1402may check the validity timer of the UE-V2X RRC configured configuration parameters. At step1417, the gNB1403may send a system information broadcast of RAN specific NR V2X configuration parameters. FIG.15depicts an NR V2X sidelink LCP procedure1500in accordance with another embodiment, which may be used in combination with all of the embodiments described herein. At step1510, control parameters for NR sidelink LCP may be determined. At step1511, logical channels may be selected for grant allocation. At step1512, a V2X transmission destination may be selected. At step1513, resources may be allocated to the selected logical channel. The NR V2X sidelink LCP procedure1500may be applied whenever a new transmission is performed. Each sidelink logical channel within the MAC entity may be configured with the following parameters: (1) The sidelink logical channel priority, e.g. the PPPP associated with the sidelink logical channel, where an increasing priority value indicates a lower priority level. The logical channel priority may also be the priority determined based on the PQI indicated by the V2X application with the data being mapped to the logical channel. Further details on the mapping of V2X data to logical channels and determination of the corresponding logical channel priority are described below. (2) The sidelink prioritized bit rate that sets the PBR, which may comprise the data rate that must be served on the sidelink logical channel before a sidelink logical channel of lower priority is served. (3) The sidelink bucket size duration that sets the BSD for the sidelink, which may comprise the duration to fill up the bucket size at the rate of the PBR. The PBR together with the BSD define the size of a prioritized bucket for the sidelink logical channel.
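A sketch of the coverage-event and validity-timer handling around steps 1411 to 1413, assuming a hypothetical class and a wall-clock validity period; the V3 signaling itself is abstracted into a single method:

```python
import time

class UEV2XFunction:
    """Sketch of UE-V2X Function behavior: on an in-coverage notification,
    re-request configuration parameters only if the provisioned
    parameters' validity timer has expired, and never trigger requests
    while the UE is out of coverage."""
    def __init__(self, validity_seconds: float):
        self.validity_seconds = validity_seconds
        self.provisioned_at = None
        self.in_coverage = False

    def on_provisioned(self):
        self.provisioned_at = time.monotonic()    # cf. steps 1414/1415

    def on_coverage_change(self, in_coverage: bool):
        self.in_coverage = in_coverage            # cf. step 1411
        if in_coverage and self._expired():       # cf. step 1412
            self.request_configuration()          # cf. step 1413

    def _expired(self) -> bool:
        return (self.provisioned_at is None or
                time.monotonic() - self.provisioned_at > self.validity_seconds)

    def request_configuration(self):
        print("requesting V2X configuration parameters from the V2X Control Function")
```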
As long as there is data in the sidelink prioritized bucket, and there is a sidelink grant that may serve the sidelink logical channel, then the sidelink logical channel may be prioritized for a sidelink resource grant allocation over sidelink logical channels of lower priority. (4) The Maximum Data Burst Volume. (5) Bj may be denoted in the embodiments herein as the amount of data in the prioritized bucket associated with the sidelink logical channel j. The MAC entity may maintain a variable Bj for each sidelink logical channel j. Additionally, mapping restrictions for a sidelink resource grant to logical channel mapping may be configured for each sidelink logical channel. The mapping restriction for the sidelink resource grant to logical channel may be one or more of the following:
Set of allowed V2X serving cells, which sets the allowed cell(s) for transmission. The allowed V2X serving cells may, for example, be expected to fulfill the following conditions: the carrier frequency of the serving cell is configured for each of the V2X services mapped to the logical channel; and the serving cell meets a data duplication requirement. For example, logical channels carrying duplicated data may be mapped to different serving cells.
List of allowed carrier frequencies.
Set of allowed V2X SCSs, which sets the allowed SubCarrier Spacing(s) for transmission.
Set of allowed V2X BWPs, which sets the allowed bandwidth parts for transmission.
Maximum allowed SL-SCH duration, which sets the maximum SL-SCH duration allowed for transmission.
Allowed latency, which sets the maximum allowed latency from the time the data becomes available for sidelink transmission to the time the data transmission ends.
Allowed SL-SCH K2 duration, which sets the maximum allowed latency from the time the SL-SCH grant becomes available for sidelink transmission to the time the SL-SCH data transmission begins.
Allowed RATs, which set the allowed RATs for transmission.
Allowed RAT versions, which set the allowed RAT versions for transmission.
Allowed transmission profiles, which set the allowed transmission profiles for transmission. The content of the transmission profile may comprise radio parameters for V2X transmission and may include RAT specific and RAT version specific radio parameter configurations. An example of transmission profile content may be the Rel-14 SL-V2X-Preconfiguration information element defined in TS 36.331 and captured, for example, as a radio parameter container in the V2X over PC5 Communication MO illustrated inFIG.3. The transmission profile may also include transmission reliability configuration parameters, for example the PPPR target value. The transmission profile may include NR specific V2X transmission parameters such as SCS value(s), BWP(s), maximum allowed latency, maximum allowed SL-SCH duration, and allowed V2X transmission frequencies.
Allowed transmission mode(s), e.g., unicast, groupcast or broadcast.
Allowed resource allocation mode(s), e.g., mode1or mode2, e.g. scheduled grant, configured grant or autonomous resource grant.
Allowed transmission range(s).
The UE may determine control parameters for the sidelink LCP. The UE may determine how many logical channels to configure and the identity of each logical channel to be configured. Similarly, the UE may determine how many LCGs to configure and the identities of the LCGs.
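The PBR/BSD prioritized bucket described above behaves like a token bucket. A minimal sketch of maintaining Bj for a sidelink logical channel j; the class name is hypothetical, and the units (bytes and milliseconds) are chosen for the example:

```python
class SidelinkBucket:
    """Token bucket for sidelink logical channel j: Bj grows at the PBR
    and is capped at PBR * BSD (the prioritized bucket size); serving
    the channel drains the bucket. While Bj > 0, the channel may be
    prioritized over lower-priority channels for a suitable SL grant."""
    def __init__(self, pbr_bytes_per_ms: float, bsd_ms: float):
        self.pbr = pbr_bytes_per_ms          # Prioritized Bit Rate
        self.bucket_size = pbr_bytes_per_ms * bsd_ms
        self.bj = 0.0                        # Bj for this logical channel

    def tick(self, elapsed_ms: float):
        # Increment Bj by PBR * T, never beyond the bucket size.
        self.bj = min(self.bj + self.pbr * elapsed_ms, self.bucket_size)

    def serve(self, served_bytes: float):
        # Serving the channel drains the bucket (it may go negative
        # after a large PDU, delaying re-prioritization).
        self.bj -= served_bytes

    @property
    def prioritized(self) -> bool:
        return self.bj > 0
```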
The UE may use V2X information, preconfigured into the UE or provisioned into the UE by the V2X Control Function (for example, as specified in the NR V2X MO described herein), to autonomously determine how many logical channels to configure, each logical channel identity, how many LCGs to configure, and each LCG identity. The UE may use one or more of the following items of information, denoted herein as "logical channel derivation parameters":
One or more services in the list of allowed V2X services;
One or more priority values in the PPPP list provisioned to the UE (for example, the PPPP to PDB mapping list in the NR V2X MO described herein);
One or more of the PPPP to PDB mappings in the PPPP to PDB mapping list;
One or more frequencies in the list of allowed carrier frequencies for one or more authorized V2X services;
One or more transmission profiles from the list of allowed transmission profiles for the one or more authorized V2X services;
One or more SCSs from the list of allowed SCSs for the one or more authorized V2X services;
One or more RATs from the list of allowed RATs for the one or more authorized V2X services;
One or more RAT versions from the list of allowed RAT versions for the one or more authorized V2X services; and
One or more BWPs from the list of allowed BWPs for the one or more authorized V2X services.
The UE may configure the LCH or LCG to, for example, ensure that different LCHs or LCGs are configured for V2X messages that have the same destination identity but are associated with different V2X frequency sets, different serving cell sets, or different BWP sets. Similarly, the UE may configure LCHs to, for example, ensure that duplicated data are mapped to different LCHs that are in turn configured with different allowed serving cells or allowed carrier frequencies. Similarly, the UE may configure LCGs to, for example, ensure that logical channels configured for duplicate data are mapped to different LCGs. Alternatively, the UE may configure LCGs to, for example, ensure that only one logical channel configured for duplicate data is mapped to an LCG. In yet another alternative, the UE may configure LCGs, or the mapping of LCHs to LCGs, such that LCHs configured for data duplications are always mapped to the same LCG. Instead of the UE autonomously configuring the LCH/LCG based on V2X parameters preconfigured or provisioned into the UE, the UE may be configured by the gNB through common RRC signaling (e.g. system information broadcast signaling) or dedicated RRC signaling (e.g., an RRC reconfiguration message) with one or more LCH identities and one or more LCG identities, and a mapping of LCHs to LCGs, for example as per the LCH to LCG mapping options described above. The UE may signal to the gNB, in the LTE-like UE Assistance Information message or the LTE-like V2X UE Information message, one or more of the logical channel derivation parameters described above, to assist the gNB in configuring the UE with LCH and LCG information. The UE may map V2X services to logical channels. The mapping may be a one-to-one mapping, a mapping of more than one service to one logical channel, or a mapping of one service to more than one logical channel. The UE may use one or more of the logical channel derivation parameters described above in deciding the mapping of a V2X service or V2X message to a logical channel. Alternatively, the UE may be configured by the gNB through common RRC signaling (e.g.
system information broadcast signaling) or dedicated RRC signaling (e.g., an RRC reconfiguration message) with the mapping of V2X services to logical channels. The UE may signal to the gNB, in the LTE-like UE Assistance Information message or the LTE-like V2X UE Information message, one or more of the logical channel derivation parameters described above, to assist the gNB in configuring the UE with the mapping between V2X services and logical channels. The base station or the scheduling entity may configure the UE with LCGs and the associated priorities, or with LCHs and the associated priorities. Additionally, the base station may configure the UE with mappings of V2X services to LCHs, or with mappings of services to QoS Flows, mappings of QoS Flows to bearers and mappings of bearers to LCHs. A sidelink logical channel priority may be assigned. The UE may assign a priority value to each logical channel from the PPPP values associated with the services or application data packets mapped to the logical channel. In one embodiment, the logical channel priority may be the highest priority PPPP (lowest PPPP value) among the PPPP values associated with the services or application data packets mapped to the logical channel. The PPPP value associated with each service or application data packet may be as specified in the NR V2X MO described herein. Alternatively, the UE may be configured by the gNB through common RRC signaling (e.g., system information broadcast signaling or multicast information signaling to a group of UEs) or dedicated RRC signaling (e.g., an RRC reconfiguration message) with the priority value for one or more logical channels. The UE may signal to the gNB, in the LTE-like UE Assistance Information message or the LTE-like V2X UE Information message, one or more of the logical channel derivation parameters described above, to assist the gNB in configuring the UE with the sidelink logical channel priority. Instead of logical channel priority as a sidelink QoS characteristic, any other sidelink QoS characteristic, including but not limited to a QoS flow, a transmission range, reliability, a PQI, or latency, may be used. The UE may assign a PBR value to each logical channel from the PBR values associated with the services or application data packets mapped to the logical channel. The PBR of the logical channel may be, for example, the highest PBR among the PBRs of the services or application data packets mapped to the logical channel, or the sum of the PBRs of the services or application data packets mapped to the logical channel. The PBR for each V2X service may be determined as provisioned in the NR V2X MO described herein. Alternatively, the UE may be configured by the gNB through common RRC signaling (e.g. system information broadcast signaling) or dedicated RRC signaling (e.g., an RRC reconfiguration message) with the PBR for each logical channel.
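The priority and PBR derivations just described reduce to a min, a max, or a sum over the mapped services. A short sketch with hypothetical function names; whether the PBR is the maximum or the sum is the configuration choice noted above:

```python
def derive_lch_priority(service_pppp_values):
    """Logical channel priority = the highest-priority PPPP among the
    mapped services, i.e. the numerically smallest PPPP value."""
    return min(service_pppp_values)

def derive_lch_pbr(service_pbrs, mode="max"):
    """Logical channel PBR = the highest PBR among the mapped services,
    or the sum of their PBRs, per the two options described above."""
    return max(service_pbrs) if mode == "max" else sum(service_pbrs)

assert derive_lch_priority([3, 1, 4]) == 1        # PPPP 1 is highest priority
assert derive_lch_pbr([100, 200], mode="sum") == 300
```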
The UE may signal to the gNB, in the LTE-like UE Assistance Information message or the LTE-like V2X UE Information message, the list of V2X communication logical channels and one or more of the following:
One or more PBRs for the services or application data packets mapped to each logical channel, as specified in the NR V2X MO described herein;
One or more allowed RATs, allowed RAT versions, or allowed transmission profiles for the services or application data packets mapped to each logical channel, as specified in the NR V2X MO described herein;
One or more SCSs for the services or application data packets mapped to each logical channel, as specified in the NR V2X MO described herein; and
The priority of each logical channel.
In response, the gNB may signal to the UE the PBR of each logical channel. The UE may assign a BSD to each logical channel from the BSD values associated with the services or application data packets mapped to the logical channel. The BSD of the logical channel may be, for example, the highest BSD among the BSDs of the services or application data packets mapped to the logical channel, or the sum of the BSDs of the services or application data packets mapped to the logical channel. The BSD for each V2X service may be determined as provisioned in the NR V2X MO described herein. Alternatively, the UE may be configured by the gNB through common RRC signaling (e.g., system information broadcast signaling) or dedicated RRC signaling (e.g., an RRC reconfiguration message) with the bucket size duration for each logical channel. The UE may signal to the gNB, in the LTE-like UE Assistance Information message or the LTE-like V2X UE Information message, the list of V2X communication logical channels and one or more of the following:
One or more PBRs for the services or application data packets mapped to each logical channel, as specified in the NR V2X MO described herein;
One or more allowed RATs, allowed RAT versions, or allowed transmission profiles for the services or application data packets mapped to each logical channel, as specified in the NR V2X MO described herein;
One or more SCSs for the services or application data packets mapped to each logical channel, as specified in the NR V2X MO described herein; and
The priority of each logical channel.
In response, the gNB may signal to the UE the BSD of each logical channel. The UE may assign a list of allowed SCSs to each logical channel. The list of allowed SCSs of the logical channel may be a common subset of the lists of allowed SCSs of the V2X services mapped to the logical channel. The list of allowed SCSs for each V2X service may be determined as provisioned in the NR V2X MO described herein. Alternatively, the UE may be configured by the gNB through common RRC signaling (e.g. system information broadcast signaling) or dedicated RRC signaling (e.g. an RRC reconfiguration message) with the list of allowed SCSs for each logical channel.
The UE may signal to the gNB, in the LTE-like UE Assistance Information message or the LTE-like V2X UE Information message, the list of V2X communication logical channels together with one or more of the following: One or more of the allowed SCSs for the services or application data packets mapped to each logical channel, as provisioned in the NR V2X MO described herein; One or more of the allowed carrier frequencies of the services or application data packets mapped to each logical channel; One or more delay budgets or the most restrictive packet delay budget of the services or application data packets mapped to each logical channel; Priority of the logical channel, PPPP values, or the highest PPPP value of each service or application data packet mapped to the logical channel; and The PBR for one or more of the services or application data packets mapped to the logical channel. In response, the gNB may signal to the UE one or more allowed SCSs associated with each logical channel. The UE may assign a list of allowed RATs, allowed RAT versions, or allowed transmission profiles to each logical channel. The list of allowed RATs, allowed RAT versions, or allowed transmission profiles of the logical channel may be a common subset of the list of allowed RATs, allowed RAT versions, or allowed transmission profiles for each V2X service mapped to the logical channel. The list of allowed RATs, allowed RAT versions, or allowed transmission profiles for each V2X service may be determined as provisioned in the NR V2X MO described herein. Alternatively, the UE may be configured by the gNB through common RRC signaling (e.g., system information broadcast signaling) or dedicated RRC signaling (e.g., RRC reconfiguration message) with the list of allowed RATs, allowed RAT versions, or allowed transmission profiles for each logical channel. The UE may signal to the gNB, in the LTE-like UE Assistance Information message or the LTE-like V2X UE Information message, the list of V2X communication logical channels together with one or more of the following: One or more of the allowed carrier frequencies of the services or application data packets mapped to each logical channel; One or more of the allowed RATs of the services or application data packets mapped to each logical channel, as specified in the NR V2X MO described herein; One or more delay budgets or the most restrictive packet delay budget of the services or application data packets mapped to each logical channel; Priority of the logical channel, PPPP values, or the highest PPPP value of the services or application data packets mapped to each logical channel; and The PBR for one or more services or application data packets mapped to each logical channel. In response, the gNB may signal to the UE one or more allowed RATs, allowed RAT versions, or allowed transmission profiles associated with each logical channel. The UE may assign a maximum allowed latency to each logical channel. The UE may derive the allowed latency for each logical channel from the packet delay budget of the services or application data packets mapped to the logical channel. For example, the UE may use the most restrictive delay budget, e.g., the smallest delay budget among the delay budgets of the services or application data packets mapped to the logical channel, to derive the allowed latency of the logical channel. 
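The most-restrictive-budget rule just described reduces to taking the minimum of the mapped delay budgets. The short sketch below makes this explicit; the example values in milliseconds are assumptions for illustration.

def lch_allowed_latency(delay_budgets_ms):
    # Most restrictive delay budget = the smallest PDB among the
    # services or application data packets mapped to the channel.
    return min(delay_budgets_ms)

print(lch_allowed_latency([100, 20, 50]))  # 20 ms bounds the channel's latency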
The delay budgets of the services or application data packets mapped to the logical channel may be determined from provisioning as specified in the NR V2X MO described herein. Alternatively, the UE may be configured by the gNB through common RRC signaling (e.g., system information broadcast signaling) or dedicated RRC signaling (e.g., RRC reconfiguration message) with allowed latency. The UE may signal to the gNB, in the LTE-like UE Assistance Information message or the LTE-like V2X UE Information message, the list of V2X communication logical channels together with one or more of the following: One or more of the allowed PDBs or the most restrictive PDB for the services or application data packets mapped to each logical channel, as provisioned in the NR V2X MO described herein; Priority of the logical channel, one or more PPPP values, or the PPPP of the highest priority (e.g., the smallest PPPP value) of the services or application data packets mapped to each logical channel. In response, the gNB may signal to the UE the allowed latency for each logical channel. Instead of latency as a sidelink QoS characteristic, any other sidelink QoS characteristic, including but not limited to a QoS flow, a transmission range, reliability, a PQI, or a priority, may be used. The UE may assign a maximum allowed SL-SCH duration to each logical channel. The UE may derive the allowed SL-SCH duration for each logical channel from the packet delay budget of the services or application data packets mapped to the logical channel. For example, the UE may use the most restrictive delay budget, which may be the smallest delay budget among the delay budgets of the services or application data packets mapped to the logical channel, to derive the allowed SL-SCH duration of the logical channel. The delay budgets of the services or application data packets mapped to a logical channel may be determined from provisioning as specified in the NR V2X MO described herein. Alternatively, the UE may be configured by the gNB through common RRC signaling (e.g., system information broadcast signaling) or dedicated RRC signaling (e.g., RRC reconfiguration message) with allowed latency. The UE may signal to the gNB, in the LTE-like UE Assistance Information message or the LTE-like V2X UE Information message, the list of V2X communication logical channels, together with one or more of the following: One or more of the allowed PDBs or the most restrictive PDB for the services or application data packets mapped to each logical channel, as provisioned in the NR V2X MO described herein; Priority of the logical channel, one or more PPPP values, or the PPPP of the highest priority (e.g., the smallest PPPP value) of the services or application data packets mapped to each logical channel; and One or more of the allowed SCSs for the services or application data packets mapped to each logical channel, as provisioned in the NR V2X MO described herein. In response, the gNB may signal to the UE the allowed latency for each logical channel. The UE may assign a list of allowed V2X serving carrier frequencies to each logical channel. The list of allowed carrier frequencies of the logical channel may be a common subset of the list of allowed carrier frequencies of each V2X service mapped to the logical channel. The list of allowed carrier frequencies for each V2X service may be determined as provisioned in the NR V2X MO described herein. 
Alternatively, the UE may be configured by the gNB through common RRC signaling (e.g., system information broadcast signaling) or dedicated RRC signaling (e.g., RRC reconfiguration message) with the list of allowed carrier frequencies for each logical channel. The UE may signal to the gNB, in the LTE-like UE Assistance Information message or the LTE-like V2X UE Information message, the list of V2X communication logical channels, together with one or more of the following: One or more of the allowed carrier frequencies of the services or application data packets mapped to each logical channel; One or more of the allowed RATs of the services or application data packets mapped to each logical channel, as specified in the NR V2X MO described herein; One or more delay budgets or the most restrictive packet delay budget of the services or application data packets mapped to each logical channel; Priority of the logical channel, PPPP values, or the highest PPPP value of the services or application data packets mapped to each logical channel; and The PBR for one or more of the services or application data packets mapped to each logical channel. In response, the gNB may signal to the UE one or more allowed carrier frequencies associated with each logical channel. The UE may assign a list of allowed BWPs to each logical channel. The list of allowed BWPs of the logical channel may be a common subset of the list of allowed BWPs of each V2X service mapped to the logical channel. The list of allowed BWPs for each V2X service may be determined as provisioned in the NR V2X MO. Alternatively, the UE may be configured by the gNB through common RRC signaling (e.g., system information broadcast signaling) or dedicated RRC signaling (e.g., RRC reconfiguration message) with the list of allowed BWPs for each logical channel. The UE may signal to the gNB, in the LTE-like UE Assistance Information message or the LTE-like V2X UE Information message, the list of V2X communication logical channels, together with one or more of the following: One or more of the allowed BWPs of the services or application data packets mapped to each logical channel; One or more of the allowed carrier frequencies of the services or application data packets mapped to each logical channel; One or more of the allowed RATs of the services or application data packets mapped to each logical channel, as specified in the NR V2X MO described herein; One or more delay budgets or the most restrictive packet delay budget of the services or application data packets mapped to each logical channel; Priority of the logical channel, PPPP values, or the highest PPPP value of the services or application data packets mapped to each logical channel; and The PBR for one or more of the services or application data packets mapped to each logical channel. In response, the gNB may signal to the UE one or more allowed BWPs associated with each logical channel. The UE may assign a transmission mode to a logical channel. The mapping may be a one-to-one mapping, or more than one transmission mode may be mapped to one logical channel. The UE may use one or more of the logical channel derivation parameters described herein in determining the mapping of transmission mode to logical channel. Alternatively, the UE may be configured by the gNB through common RRC signaling (e.g., system information broadcast signaling) or dedicated RRC signaling (e.g., an RRC reconfiguration message) with the mapping of transmission mode to logical channel. 
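As a toy illustration of the mapping just described, where one or several transmission modes may map to a single logical channel, consider the following; the mode names and channel identifiers are assumptions made for this sketch.

# Illustrative only: one-to-one or many-to-one mapping of transmission
# modes to logical channels, as described above.
tx_mode_to_lch = {
    "unicast": "LCH1",    # one-to-one
    "groupcast": "LCH2",  # groupcast and broadcast both map to LCH2:
    "broadcast": "LCH2",  # a many-to-one mapping
}
print(tx_mode_to_lch["broadcast"])  # LCH2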
The UE may signal to the gNB, in the LTE-like UE Assistance Information message or the LTE-like V2X UE Information message, one or more of the logical channel derivation parameters described above, to assist the gNB in configuring the UE with the mapping between transmission modes and logical channels. The base station may configure the UE with a mapping of V2X services to LCHs or, in other words, with a mapping of services to QoS Flows, a mapping of QoS Flows to bearers, and a mapping of bearers to LCHs, thereby configuring a transmission mode for a service, QoS Flow, or bearer. Base station as used herein may refer to a scheduler or any other RAN network node or core network node. Alternative embodiments may be derived by substituting transmission mode with transmission range or resource allocation mode. The LTE-like UE Assistance Information message or LTE-like V2X UE Information message referred to herein may additionally include one or more of the following: A list of PC5 Flow Identifiers (PFIs); A list of PC5 QoS Profile Identifiers (PQIs); Transmission range for a PFI or a PQI; Transmission mode for a PFI or a PQI; Transmission resource allocation mode for a PFI or a PQI; One or more QoS requirements, such as a priority requirement, reliability requirement, delay requirement, range requirement, transmission mode requirement, resource type (e.g., guaranteed bit rate (GBR), delay-critical GBR, or non-GBR), guaranteed flow bit rate, or maximum flow bit rate; Mapping of transmission modes (unicast, groupcast, or broadcast) and the V2X services, e.g., the PSIDs or ITS-AIDs of the V2X applications; Mapping of PC5 QoS Identifiers (PQIs) and the V2X services; Mapping of a transmission range and the V2X services (e.g., PSIDs or ITS-AIDs); Allowed PC5 QoS rules, where each QoS rule comprises the QFI of the associated QoS Flow, a packet filter set, and a precedence value; Mapping of PFIs (QoS Flow Identifiers) and PQIs, a mapping of PQIs and logical channels, a mapping of QFIs and sidelink bearers, a mapping of PQIs to QoS characteristics, e.g., QoS Profiles, a mapping of resource allocation modes and logical channels, or a mapping of resource allocation modes and QoS Flows. The logical channel derivation parameters described above may also include any of the parameters listed here. FIG.16depicts a procedure for configuring the MAC with V2X configuration parameters1600, which may be used in combination with any of the embodiments described herein. At step1610, the UE-RRC may signal that the UE is interested in V2X communication. At step1611, the UE-V2X function1603may pre-configure or provision V2X configuration parameters in the UE. At step1612, the UE-RRC1602may send a V2X configuration parameters request with an in-coverage indication to the UE-V2X function1603. At step1613, the UE-RRC1602and the UE-V2X function1603may determine that the UE is in coverage and that the UE-V2X provisioned configuration parameters validity timer has expired. At step1614, the UE-V2X function may send a V2X configuration parameters request to the V2X control function1605via the gNB1604. At step1615, the V2X configuration parameters may be provisioned over the V3 interface (all or part of the NR basic V2X MO or NR extended V2X MO). At step1616, the UE-RRC1602and the UE-V2X function1603may store new V2X configuration parameters or update the stored V2X configuration parameters. At step1617, a response to the V2X configuration parameters request may be received (all or part of the NR basic V2X MO or NR extended V2X MO). 
At step1618, the UE-RRC1602and the UE-V2X function1603may store new V2X configuration parameters or update the stored V2X configuration parameters. At step1619, the UE may signal to the gNB1604an LTE-like UE Assistance Information message or the LTE-like V2X UE Information message with RAN-specific V2X configuration parameters in RRC dedicated signaling or RRC common signaling. At step1620, the UE-RRC1602and the UE-V2X function1603may store new V2X configuration parameters or update the stored V2X configuration parameters. At step1621, the UE-MAC1601may be configured with relevant V2X parameters and LCP control parameters. The UE may receive a sidelink transmission resource grant from the base station for a new sidelink transmission. The received sidelink resource grant may be a scheduled resource grant, wherein the sidelink resource grant is assigned to the UE via physical layer (PHY) signaling, such as the PHY Sidelink Control Information (SCI) when the scheduling is performed over the sidelink interface, or the PHY Downlink Control Information (DCI) when the scheduling is performed over the Uu interface. Alternatively, the received sidelink resource grant may be a configured resource grant, for example, a type 1-like resource grant or a type 2-like resource grant, where type 1 and type 2 resource grants are as per the current definitions of type 1 and type 2 resource grants in NR release 15. In another alternative, the received sidelink resource grant may be a sidelink resource grant autonomously selected by the UE from a resource pool (pre)configured into the UE by the network, e.g., the base station. In another embodiment, the received resource grant may be allocated according to the so-called resource allocation mode1or mode2, more specifically mode2d, as discussed in the context of release 16 NR. 
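For illustration only, the grant variants just enumerated could be modeled as follows; the type names and the string-valued resource allocation mode are assumptions made for this sketch, not signaling definitions.

from dataclasses import dataclass
from enum import Enum, auto

class GrantType(Enum):
    SCHEDULED = auto()         # assigned via PHY signaling (SCI over sidelink, DCI over Uu)
    CONFIGURED_TYPE1 = auto()  # type 1-like configured grant
    CONFIGURED_TYPE2 = auto()  # type 2-like configured grant
    UE_AUTONOMOUS = auto()     # selected by the UE from a (pre)configured resource pool

@dataclass
class SidelinkGrant:
    grant_type: GrantType
    resource_allocation_mode: str  # e.g., "mode1" or "mode2d"

grant = SidelinkGrant(GrantType.UE_AUTONOMOUS, "mode2d")
print(grant.grant_type.name)  # UE_AUTONOMOUS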
When a new sidelink transmission is performed, the MAC entity may use one or more of the following conditions to determine the sidelink logical channels that are allowed to be served by the sidelink grant: The set of allowed V2X serving cells of the sidelink logical channel, if configured, includes the cell information associated with the sidelink grant; The set of allowed SCSs of the sidelink logical channel, if configured, includes the SCS associated with the sidelink grant; The allowed latency of the sidelink logical channel, if configured, may be larger than or equal to the allowed latency associated with the sidelink grant; The allowed SL-SCH duration of the sidelink logical channel, if configured, may be larger than or equal to the SL-SCH duration associated with the sidelink grant; The allowed SL-SCH K2 duration of the sidelink logical channel, if configured, may be larger than or equal to the SL-SCH duration associated with the sidelink grant; The set of allowed RATs of the sidelink logical channel, if configured, may include the RAT associated with the grant; The set of allowed RAT versions of the sidelink logical channel, if configured, may include the RAT version associated with the sidelink grant; The set of allowed BWPs of the sidelink logical channel, if configured, may include the BWP associated with the sidelink grant; The set of allowed transmission profiles of the sidelink logical channel, if configured, may include the transmission profile associated with the sidelink grant; The allowed transmission mode of the sidelink logical channel, if configured, may include the transmission mode associated with the sidelink grant; The allowed transmission range of the sidelink logical channel, if configured, may include the transmission range associated with the sidelink grant; and The allowed resource allocation mode of the sidelink logical channel, if configured, may indicate the resource allocation mode used for the resource grant. The sidelink logical channels selected as per the mapping restrictions to the sidelink grant specified above may be referred to as “selected sidelink logical channels.” When a new transmission is performed, the MAC entity may allocate resources to only the selected sidelink logical channels. For the purpose of the selection of the V2X transmission destination as part of the logical channel selection procedure, the term “selected sidelink logical channel” may be used as described above, e.g., in reference to the sidelink logical channel selected as being allowed to be served by the sidelink resource grant, which may comprise the sidelink logical channel that fulfills the mapping restrictions to the sidelink resource grant described above. Selection of Destination is described herein in accordance with another embodiment, which may be used in combination with any of the embodiments described herein. In a first transmission destination selection method, the MAC entity may, when a new transmission is performed, select a ProSe Destination, e.g., a transmission destination having the selected sidelink logical channel with the highest priority among the selected sidelink logical channels having data available for transmission. Selected sidelink logical channels, as defined in the selection of logical channel solutions described herein, with the same destination ID and having data available for transmission, may be multiplexed and assembled into one PDU. When the UE supports multiple transmission chains, it may simultaneously transmit to multiple destinations. 
The UE selects the ProSe destinations, one ProSe destination per transmission chain, in decreasing order of priority of the transmission destination. The priority of a transmission destination is the priority of the selected sidelink logical channel with the highest priority among the selected logical channels of that destination having data available for transmission. The destination selected by the destination selection procedure may be denoted herein as a “selected destination.” FIG.17depicts a V2X transmission destination selection procedure for a single transmission chain1700. At step1710, logical channel selection for grant allocation may be performed. At step1711, a V2X transmission destination may be selected, among the available V2X transmission destinations, that has the selected logical channel having data available for transmission. FIG.18depicts a V2X transmission destination selection procedure for multiple transmission chains1800. At step1810, logical channel selection for grant allocation may be performed. At step1811, a V2X transmission destination may be selected, among the available V2X transmission destinations, that has the selected logical channel having data available for transmission. At step1812, the selected V2X transmission destination may be removed from the set of available V2X transmission destinations. At step1813, it may be determined whether there are more transmission chains available, and if there are more transmission chains available, step1811is repeated. The transmission mode may comprise broadcast, groupcast, or unicast. Transmission mode priority may be (pre)configured into the UE or specified. For example, a broadcast transmission may have a higher priority than a groupcast transmission, which may have a higher priority than a unicast transmission. In an alternative transmission destination selection method, the MAC entity may, when a new transmission is performed, select a ProSe destination, e.g., a transmission destination having the selected sidelink logical channel with the highest priority among the selected sidelink logical channels having data available for transmission. In the case of a tie, for example, when more than one transmission destination has the highest priority selected logical channel with available data for transmission, the UE may select the transmission destination according to one of the following: The destination, among the tie-up destinations, having the highest priority logical channel among the selected logical channels having data available for transmission with the highest transmission mode priority. In other words, among the tie-up destinations, the selected logical channels having data for transmission with the highest transmission mode priority are identified. The transmission destination may then be selected as the destination that corresponds to the highest priority logical channel among the logical channels determined to have data available for transmission with the highest transmission mode priority. The destination, among the tie-up destinations, having the highest priority selected logical channel having data available for transmission with the higher transmission mode priority. In other words, among the tie-up destinations, the selected destination is the one whose selected highest priority channel with available data for transmission has a higher transmission mode priority. 
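To make the two stages concrete, the following Python sketch first filters logical channels against the grant, per the mapping restrictions listed earlier (each "if configured" restriction is modeled as an optional field, where None imposes no restriction), and then ranks destinations by the priority of their highest priority selected channel with data available. All field names and the simplified data model are assumptions for illustration, not normative definitions.

from dataclasses import dataclass
from typing import Optional, Set, Dict, List

@dataclass
class Lch:
    priority: int                              # lower value = higher priority
    has_data: bool = True
    allowed_scs: Optional[Set[int]] = None     # None = not configured
    allowed_rats: Optional[Set[str]] = None
    allowed_latency_ms: Optional[float] = None

@dataclass
class Grant:
    scs: int
    rat: str
    latency_ms: float

def is_allowed(lch: Lch, grant: Grant) -> bool:
    # Each mapping restriction applies only "if configured".
    if lch.allowed_scs is not None and grant.scs not in lch.allowed_scs:
        return False
    if lch.allowed_rats is not None and grant.rat not in lch.allowed_rats:
        return False
    # The allowed latency of the LCH must be >= the grant's latency.
    if lch.allowed_latency_ms is not None and lch.allowed_latency_ms < grant.latency_ms:
        return False
    return True

def select_destinations(dest_lchs: Dict[str, List[Lch]], grant: Grant,
                        num_tx_chains: int = 1) -> List[str]:
    # Keep, per destination, the selected LCHs that have data available.
    selected = {d: [l for l in lchs if l.has_data and is_allowed(l, grant)]
                for d, lchs in dest_lchs.items()}
    selected = {d: lchs for d, lchs in selected.items() if lchs}
    # One ProSe destination per transmission chain, in decreasing order of
    # destination priority (i.e., increasing priority value).
    ranked = sorted(selected, key=lambda d: min(l.priority for l in selected[d]))
    return ranked[:num_tx_chains]

grant = Grant(scs=30, rat="NR", latency_ms=20)
dests = {"dstA": [Lch(priority=2, allowed_scs={15})],  # filtered out by SCS
         "dstB": [Lch(priority=3)]}
print(select_destinations(dests, grant))  # ['dstB']

The transmission-mode tie-breakers described above could be added as secondary keys to the sort; they are omitted here for brevity.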
The term “selected logical channel” may be used as described above in reference to a sidelink logical channel selected as allowed to be served by the sidelink resource grant, which may comprise the sidelink logical channel that may fulfill the mapping restrictions to the sidelink resource grant described above. In another alternative transmission destination selection method, the UE may select the transmission destination having a logical channel with data available for transmission with the highest transmission mode priority and that fulfills the mapping restrictions of a logical channel to resource grant described above. In the case of a tie, the UE may select among the tie-up transmission destinations according to one of the following: The destination having the highest priority logical channel that has data available for transmission with the highest transmission mode priority and fulfills the mapping restrictions of a logical channel to resource grant described above. The destination having the highest priority logical channel that has data available for transmission and may fulfill the mapping restrictions of a logical channel to resource grant described above. In yet another alternative transmission destination selection method, the UE may select the transmission destination having the highest priority logical channel with data available for transmission with the highest transmission mode priority that may fulfill the mapping restrictions of a logical channel to resource grant described above. One or more of the alternative procedures may be used for resource allocation during a sidelink LCP procedure. FIG.19depicts one example resource allocation procedure1900during an NR V2X sidelink LCP procedure in accordance with one embodiment, which may be used in combination with any of the embodiments described herein. The MAC entity may, when a new transmission is performed, allocate resources based on the procedure1900. At step1910, it may be determined if there is a resource grant available. At step1911, selected sidelink logical channels, LCHj with Bj>0, of the selected destination may be allocated resources in a decreasing priority order, wherein the selected logical channel and selected destination are as defined in the selection of logical channel solutions and in the selection of destination solutions described herein, respectively. If the PBR of a selected logical channel of the selected destination is set to “infinity”, the MAC entity may allocate resources for all the data that is available for transmission on the selected sidelink logical channel before meeting the PBR of the lower priority selected sidelink logical channel(s). At step1912, Bj may be decremented by the total size of MAC SDUs served to sidelink logical channel j above. At step1913, it may be determined if there is a resource grant available. At step1914, if any resource grants remain, all the selected sidelink logical channels of the selected destination may be served in a strict decreasing priority order (regardless of the value of Bj) until either the data for that logical channel or the sidelink grant is exhausted, whichever comes first. Selected sidelink logical channels of the selected destination configured with equal priority may be served equally. FIG.20depicts another example resource allocation procedure2000during an NR V2X sidelink LCP procedure in accordance with another embodiment, which may be used in combination with any of the embodiments described herein. 
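A minimal Python sketch of procedure1900follows, assuming byte-granularity allocation and a simplified dictionary model for the selected logical channels of the selected destination; the field names 'priority', 'Bj', 'pbr', and 'buffer' are illustrative, not specification terms.

import math

def allocate_1900(grant_bytes, lchs):
    # Step 1911: serve selected LCHs with Bj > 0 in decreasing priority
    # order (a lower priority value means a higher priority).
    lchs = sorted(lchs, key=lambda l: l["priority"])
    for l in lchs:
        if grant_bytes == 0:
            break
        if l["Bj"] > 0:
            if l["pbr"] == math.inf:
                # PBR "infinity": serve all available data on this LCH
                # before meeting the PBR of lower priority LCHs.
                served = min(l["buffer"], grant_bytes)
            else:
                served = min(l["buffer"], l["Bj"], grant_bytes)
            l["buffer"] -= served
            l["Bj"] -= served  # step 1912: decrement Bj by the bytes served
            grant_bytes -= served
    # Steps 1913-1914: if resources remain, serve all selected LCHs in
    # strict decreasing priority order, regardless of Bj, until the data
    # or the sidelink grant is exhausted.
    for l in lchs:
        if grant_bytes == 0:
            break
        served = min(l["buffer"], grant_bytes)
        l["buffer"] -= served
        grant_bytes -= served
    return grant_bytes  # unused bytes, if any

lchs = [{"priority": 1, "Bj": 100, "pbr": 200, "buffer": 300},
        {"priority": 2, "Bj": 50, "pbr": math.inf, "buffer": 80}]
print(allocate_1900(400, lchs))  # 20 bytes of the grant remain after all data is served

Equal-priority channels, which the procedure says may be served equally, would need a round-robin refinement of the sorted loop; that detail is omitted here.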
The MAC entity may, when a new transmission is performed, allocate resources to the logical channel based on the procedure2000. At step2010, it may be determined if there is a resource grant available. At step2011, resources may be allocated to the highest priority selected sidelink logical channel of the selected destination having data for transmission, wherein the selected logical channel and the selected destination are as defined in the selection of logical channel solutions and in the selection of destination solutions described herein, respectively. At step2012, it may be determined if there is a resource grant available. At step2013, if any resources remain, selected sidelink logical channels belonging to the selected destination may be served in a strict decreasing priority order until either the data for the selected sidelink logical channel(s) or the sidelink grant is exhausted, whichever comes first. Selected sidelink logical channels of the selected destination configured with equal priority may be served equally. FIG.21depicts yet another example resource allocation procedure2100during an NR V2X sidelink LCP procedure in accordance with another embodiment, which may be used in combination with any of the embodiments described herein. The MAC entity may, when a new transmission is performed, allocate resources to the logical channel based on procedure2100. At step2110, it may be determined if there is a resource grant available. At step2111, resources may be allocated to the highest priority selected sidelink logical channel of the selected destination having data for transmission, wherein the selected logical channel and selected destination are as defined in the selection of logical channel solutions and in the selection of destination solutions described herein, respectively. At step2112, it may be determined if there is a resource grant available. At step2113, if any resources remain, selected sidelink logical channels, LCHj with Bj>0, of the selected destination may be allocated resources in a decreasing priority order, wherein the selected logical channel and selected destination are as defined in the selection of logical channel solutions and in the selection of destination solutions described herein, respectively. If the PBR of a selected sidelink logical channel of the selected destination is set to “infinity”, the MAC entity may allocate resources for all the data that is available for transmission on the sidelink logical channel before serving lower priority selected sidelink logical channel(s). At step2114, Bj may be decremented by the total size of MAC SDUs served to sidelink logical channel j above. At step2115, it may be determined if there is a resource grant available. At step2116, if any resources remain, all the selected sidelink logical channels of the selected destination may be served in a strict decreasing priority order (regardless of the value of Bj) until either the data for that logical channel or the sidelink grant is exhausted, whichever comes first. Selected sidelink logical channels of the selected destination configured with equal priority may be served equally. Solutions for Uplink Data Logical Channel Prioritization are described herein in accordance with another embodiment, which may be used in combination with any of the embodiments described herein. This solution relates to the impact of sidelink-related transmission on the NR uplink data logical channel prioritization procedure, e.g., the logical channel prioritization procedure over the NR Uu interface. 
In Rel-15 LTE, a non-padding BSR is prioritized over a non-padding sidelink BSR, which is prioritized over data from any Logical Channel, except data from UL-CCCH. Similarly, in Rel-15 NR, a non-padding BSR is prioritized over data from any Logical Channel, except data from UL-CCCH. With the introduction of NR V2X, the prioritization of a non-padding BSR versus a sidelink BSR may follow the same prioritization order between uplink transmission and sidelink transmission, e.g., if sidelink transmission is prioritized over uplink transmission, then the corresponding sidelink BSR may be prioritized over the corresponding uplink transmission BSR. Specifically, the non-padding V2X sidelink BSR may be prioritized over a non-padding BSR, if the following conditions are met:(1) if UL data contributing to the BSR is not prioritized by upper layer over sidelink data contributing to the sidelink BSR; and(2) if the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured. The second of the conditions above assumes an absolute V2X sidelink threshold for the prioritization of sidelink communication over non-emergency uplink communication regardless of the priority of the uplink communication. As described above, this condition may not be sufficient for the prioritization of NR V2X sidelink communications versus uplink transmissions. Therefore, in accordance with one embodiment, the priority of the uplink transmission may also be used in the prioritization of the V2X sidelink transmission over the uplink transmission. Therefore, examples of alternatives to the second condition above may include the following: (1) if the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the value of the highest priority of the logical channel(s) for the UL data contributing to the BSR. (2) if the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the value of the highest priority of the logical channel(s) for the UL data contributing to the BSR. The priority threshold parameter (thresSL-TxPrioritization) may be used to indicate the threshold used to determine whether SL V2X transmission is prioritized over uplink transmission if they overlap in time. It may be pre-configured into the UE, or configured into the UE through RRC dedicated signaling or RRC common signaling, for example, RRC broadcast signaling. (3) if the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the value of the highest priority of the logical channel(s) for the UL data contributing to the BSR is higher than the uplink transmission down-prioritization threshold value (thresUL-TxPrioritization), if this value is configured. 
The priority threshold parameter thresUL-TxPrioritization may be used to indicate the threshold used to determine whether UL transmission is down-prioritized relative to sidelink communication, if they overlap in time. It may be pre-configured into the UE, or configured into the UE through RRC dedicated signaling or RRC common signaling, for example, RRC broadcast signaling. (4) if the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the value of the highest priority of the logical channel(s) for the UL data contributing to the BSR is higher than the uplink transmission down-prioritization threshold value (thresUL-TxPrioritization), if this value is configured, and the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the value of the highest priority of the logical channel(s) for the UL data contributing to the BSR. Solutions for Sidelink Data Versus Uplink Data Prioritization are described herein in accordance with another embodiment, which may be used in combination with any of the embodiments described herein. As described above, in the legacy LTE V2X, the transmission of V2X sidelink communication is prioritized over uplink transmission if the following conditions are met: (1) if the MAC entity is not able to perform uplink transmissions and transmissions of V2X sidelink communication simultaneously at the time of the transmission; and (2) if uplink transmission is not prioritized by upper layer; and (3) if the value of the highest priority of the sidelink logical channel(s) in the MAC PDU is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured. The third of the three conditions above assumes an absolute V2X sidelink threshold for the prioritization of sidelink communication over non-emergency uplink communication regardless of the priority of the uplink communication. As described above, this third condition may not be sufficient for the prioritization of NR V2X sidelink communications versus uplink transmissions. Therefore, in accordance with one embodiment, the priority of the uplink transmission may also be used in the prioritization of the V2X sidelink transmission over the uplink transmission by replacing this third condition with one of the following alternatives: (1) if the priority value of the V2X sidelink transmission is lower than the priority value of the uplink transmission. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used, for example, in the case of unicast transmission. (2) if the priority value of the V2X sidelink transmission is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the priority value of the V2X sidelink transmission is lower than the priority value of the uplink transmission. The priority threshold parameter thresSL-TxPrioritization may be used to indicate the threshold used to determine whether SL V2X transmission is prioritized over uplink transmission if they overlap in time. It may be pre-configured into the UE, or configured into the UE through RRC dedicated signaling or RRC common signaling, for example, RRC broadcast signaling. 
While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used for broadcast transmission or groupcast transmission. (3) if the priority value of the V2X sidelink communication is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the priority value of the uplink transmission is higher than the uplink transmission down-prioritization threshold value (thresUL-TxPrioritization), if this value is configured. The priority threshold parameter thresUL-TxPrioritization may be used to indicate the threshold used to determine whether UL transmission is down-prioritized relative to sidelink communication, if they overlap in time. It may be pre-configured in the UE, or configured in the UE through RRC dedicated signaling or RRC common signaling, for example, RRC broadcast signaling. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used for broadcast transmission or groupcast transmission. (4) if the priority value of the V2X sidelink transmission is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the priority value of the uplink transmission is higher than the uplink transmission down-prioritization threshold value (thresUL-TxPrioritization), if this value is configured, and the priority value of the V2X sidelink transmission is lower than the priority value of the uplink transmission. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used for broadcast transmission or groupcast transmission. In the above, the priority value of a V2X sidelink transmission may be the value of the highest priority of the sidelink logical channel(s) in the MAC PDU. Similarly, the priority value of an uplink transmission may be the value of the highest priority of the uplink logical channel(s) in the MAC PDU. The lower the priority value of a logical channel, the higher the priority of the logical channel may be. Similarly, the lower the priority value of a transmission, the higher the priority of the transmission may be. (5) if the value of the highest priority of the sidelink logical channel(s) in the MAC PDU is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, wherein the parameter thresSL-TxPrioritization may be (pre)configured into the UE specific to each sidelink transmission destination, specific to each transmission mode, or specific to each sidelink transmission destination and transmission mode. The V2X sidelink transmission PPPR or the uplink transmission PPPR may also be taken into account by the UE when deciding whether to prioritize V2X sidelink transmission over uplink transmission. Similarly, the amount of radio resource at risk of being lost if one transmission is prioritized over the other may be taken into account by the UE when deciding whether to prioritize V2X sidelink transmission over uplink transmission. For example, the third condition above may be modified as follows: (1) if the priority value of the V2X sidelink transmission is lower than the priority value of the uplink transmission. 
In the case of equal priority, the V2X sidelink communication may be prioritized if its PPPR value is higher than the PPPR value of the uplink transmission, or alternatively, in the case of equal priority, the V2X sidelink communication may be prioritized if the amount of the resource grant for its transmission is higher than the amount of the resource grant for the uplink transmission. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used, for example, in the case of unicast transmission. (2) if the priority value of the V2X sidelink transmission is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the priority value of the V2X sidelink transmission is lower than the priority value of the uplink transmission. In the case of equal priority between the V2X sidelink communication and the uplink transmission, the V2X sidelink communication may be prioritized if its PPPR value is higher than the PPPR value of the uplink transmission, or alternatively, in the case of equal priority, the V2X sidelink communication may be prioritized if the amount of the resource grant for its transmission is higher than the amount of the resource grant for the uplink transmission. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used for broadcast transmission or groupcast transmission. (3) if the priority value of the V2X sidelink transmission is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the priority value of the uplink transmission is higher than the uplink transmission down-prioritization threshold value (thresUL-TxPrioritization), if this value is configured, and the priority value of the V2X sidelink transmission is lower than the priority value of the uplink transmission. In the case of equal priority between the V2X sidelink communication and the uplink transmission, the V2X sidelink communication may be prioritized if its PPPR value is higher than the PPPR value of the uplink transmission, or alternatively, in the case of equal priority, the V2X sidelink communication may be prioritized if the amount of the resource grant for its transmission is higher than the amount of the resource grant for the uplink transmission. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used for broadcast transmission or groupcast transmission. In the above, the PPPR value of a V2X sidelink transmission is the highest PPPR value of the sidelink logical channel(s) in the MAC PDU. Similarly, the PPPR value of an uplink transmission is the highest PPPR value of the uplink logical channel(s) in the MAC PDU. The higher the PPPR value of a logical channel, the higher the reliability requirement of the logical channel may be. Similarly, the higher the PPPR value of a transmission, the higher the reliability requirement of the transmission may be. As discussed above with respect to the embodiments described herein for UL LCP, the BSR, with the exception of a BSR included for padding, as well as the sidelink BSR, with the exception of a sidelink BSR included for padding, may also be prioritized over data from any logical channel, except data from UL-CCCH. 
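The threshold conditions and equal-priority tie-breakers described above can be folded into a single decision function, sketched below under stated assumptions: the parameter names mirror thresSL-TxPrioritization and thresUL-TxPrioritization, None again means "not configured", and the ordering of the PPPR and grant-size tie-breakers is an illustrative choice among the alternatives listed, not a mandated behavior.

def prioritize_sidelink(sl_prio, ul_prio, thres_sl=None, thres_ul=None,
                        sl_pppr=None, ul_pppr=None,
                        sl_grant_size=None, ul_grant_size=None):
    # A lower priority value means a higher priority, per the convention above.
    if thres_sl is not None and sl_prio >= thres_sl:
        return False  # sidelink priority not below thresSL-TxPrioritization
    if thres_ul is not None and ul_prio <= thres_ul:
        return False  # uplink not eligible for down-prioritization
    if sl_prio < ul_prio:
        return True
    if sl_prio == ul_prio:
        # Equal priority: prefer the higher PPPR (reliability) value,
        # else the larger resource grant, per the alternatives above.
        if sl_pppr is not None and ul_pppr is not None and sl_pppr != ul_pppr:
            return sl_pppr > ul_pppr
        if sl_grant_size is not None and ul_grant_size is not None:
            return sl_grant_size > ul_grant_size
    return False

# Sidelink priority 2 beats uplink priority 5 under both thresholds:
print(prioritize_sidelink(sl_prio=2, ul_prio=5, thres_sl=4, thres_ul=3))  # True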
Following the prioritization approaches described in the embodiments herein for UL LCP regarding the prioritization of a non-padding BSR versus a non-padding sidelink BSR, the BSR prioritization versus sidelink transmission may be in accordance with the relative prioritization between the UL data contributing to the BSR and the sidelink data. Similarly, the sidelink BSR prioritization versus UL data transmission may be in accordance with the relative prioritization between the sidelink data contributing to the sidelink BSR and the UL data. The non-padding V2X sidelink BSR may be prioritized over UL data, if the following conditions are met:(1) if the MAC entity is not able to perform uplink transmissions and transmissions of V2X sidelink communication simultaneously at the time of the transmission; and(2) if UL data is not prioritized by upper layer over sidelink data contributing to the sidelink BSR; and(3) if the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured. Examples of alternatives to the third condition above may include the following: (1) if the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the value of the highest priority of the logical channel(s) in the UL MAC PDU. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used, for example, in the case where the highest priority sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR corresponds to unicast transmission. (2) if the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the value of the highest priority of the logical channel(s) in the UL MAC PDU. The priority threshold parameter thresSL-TxPrioritization may be used to indicate the threshold used to determine whether SL V2X transmission is prioritized over an uplink transmission if they overlap in time. It may be pre-configured into the UE, or configured into the UE through RRC dedicated signaling or RRC common signaling, for example, RRC broadcast signaling. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used, for example, in the case where the highest priority sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR corresponds to groupcast or broadcast transmission. (3) if the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the value of the highest priority of the logical channel(s) in the UL MAC PDU is higher than the uplink transmission down-prioritization threshold value (thresUL-TxPrioritization), if this value is configured. 
The priority threshold parameter thresUL-TxPrioritization may be used to indicate the threshold used to determine whether UL transmission is down-prioritized relative to sidelink communication, if they overlap in time. It may be pre-configured into the UE, or configured into the UE through RRC dedicated signaling or RRC common signaling, for example, RRC broadcast signaling. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used, for example, in the case where the highest priority sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR corresponds to groupcast or broadcast transmission. (4) if the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the value of the highest priority of the logical channel(s) in the UL MAC PDU is higher than the uplink transmission down-prioritization threshold value (thresUL-TxPrioritization), if this value is configured, and the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the value of the highest priority of the logical channel(s) in the UL MAC PDU. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used, for example, in the case where the highest priority sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR corresponds to groupcast or broadcast transmission. (5) if the value of the highest priority of the sidelink logical channel(s) for the sidelink data contributing to the sidelink BSR is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, wherein the parameter thresSL-TxPrioritization may be (pre)configured into the UE specific to each sidelink transmission destination, specific to each transmission mode, or specific to each sidelink transmission destination and transmission mode. The V2X sidelink transmission may be prioritized over a non-padding BSR, if the following conditions are met:(1) if the MAC entity is not able to perform uplink transmissions and transmissions of V2X sidelink communication simultaneously at the time of the transmission; and(2) if UL data contributing to the BSR is not prioritized by upper layer over sidelink data; and(3) if the value of the highest priority of the sidelink logical channel(s) in the sidelink MAC PDU is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used, for example, in the case of unicast transmission. Examples of alternatives to the third condition may include the following: (1) if the value of the highest priority of the sidelink logical channel(s) in the sidelink MAC PDU is lower than the value of the highest priority of the logical channel(s) for the UL data contributing to the BSR. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used, for example, in the case of unicast transmission. 
(2) if the value of the highest priority of the sidelink logical channel(s) in the sidelink MAC PDU is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the value of the highest priority of the sidelink logical channel(s) in the sidelink MAC PDU is lower than the value of the highest priority of the logical channel(s) for the UL data contributing to the BSR. The priority threshold parameter thresSL-TxPrioritization may be used to indicate the threshold used to determine whether SL V2X transmission is prioritized over uplink transmission if they overlap in time. It may be pre-configured into the UE, or configured into the UE through RRC dedicated signaling or RRC common signaling, for example, RRC broadcast signaling. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used for broadcast transmission or groupcast transmission. (3) if the value of the highest priority of the sidelink logical channel(s) in the sidelink MAC PDU is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the value of the highest priority of the logical channel(s) for the UL data contributing to the BSR is higher than the uplink transmission down-prioritization threshold value (thresUL-TxPrioritization), if this value is configured. The priority threshold parameter thresUL-TxPrioritization may be used to indicate the threshold used to determine whether UL transmission is down-prioritized relative to sidelink communication, if they overlap in time. It may be pre-configured into the UE, or configured into the UE through RRC dedicated signaling or RRC common signaling, for example, RRC broadcast signaling. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used for broadcast transmission or groupcast transmission. (4) if the value of the highest priority of the sidelink logical channel(s) in the sidelink MAC PDU is lower than the sidelink transmission prioritization threshold value (thresSL-TxPrioritization), if this value is configured, and the value of the highest priority of the logical channel(s) for the UL data contributing to the BSR is higher than the uplink transmission down-prioritization threshold value (thresUL-TxPrioritization), if this value is configured, and the value of the highest priority of the sidelink logical channel(s) in the sidelink MAC PDU is lower than the value of the highest priority of the logical channel(s) for the UL data contributing to the BSR. While this criterion may be used for all sidelink transmission modes, this criterion may be specifically used for broadcast transmission or groupcast transmission. In the sidelink transmission versus uplink transmission prioritization solutions described herein, the parameter thresSL-TxPrioritization may be (pre)configured into the UE and may be specific to each sidelink transmission destination, specific to each transmission mode, or specific to each sidelink transmission destination and transmission mode. Such configuration signaling may be done using Uu RRC signaling, PC5-RRC signaling, or PC3-S signaling. While priority as one of the QoS characteristics is used in the prioritization of sidelink transmission versus uplink transmission, any other QoS characteristic may be used instead of priority or in combination with priority. 
For example, in the prioritization methods described above, priority may be replaced by any other QoS metric such that the sidelink transmission is prioritized over UL transmission if the sidelink transmission QoS requires prioritization over UL transmission. Similarly, a sidelink BSR, such as a non-padding sidelink BSR, may be prioritized over UL transmissions, including UL BSR, if the logical channel(s) of the data contributing to the sidelink BSR requires prioritization over UL transmission or UL BSR. Examples of QoS characteristics that may be used in the prioritization methods described above instead of priority are latency, range, a combination of priority, latency, and range, or any other single metric that may be used to represent QoS, such as the PQI (PC5 QoS Identifier). Examples of different overlapping grant scenarios are illustrated inFIG.7,FIG.8,FIG.9,FIG.10, andFIG.11.FIG.7illustrates a case where both grants fully overlap and the earliest allowed transmission starting points are the same, but the two grants have different durations. Similarly,FIG.8illustrates a case where both grants fully overlap and the latest allowed transmission end points are the same, but the two grants have different durations.FIG.9andFIG.10illustrate cases where the grants are of different durations and partially overlap, and where the earliest allowed transmission starting point and the latest allowed transmission end point of each of the grants are different.FIG.11illustrates a case where the grants are of different durations and fully overlap, and where the earliest allowed transmission starting point and the latest allowed transmission end point of each of the grants are different. In LTE V2X, the first condition for sidelink communication prioritization over uplink transmission prioritization as discussed above is “if the MAC entity is not able to perform uplink transmissions and transmissions of V2X sidelink communication simultaneously at the time of the transmission”. The statement “at the time of the transmission” may be confusing and is not clear enough in the context of overlapping grants. Therefore, the condition “if the MAC entity is not able to perform uplink transmissions and transmissions of V2X sidelink communication simultaneously at the time of the transmission” needs to be clarified, updated, or enhanced in order to avoid ambiguous UE behavior. For example, consider a scenario where a transmission is underway on the uplink and then later a transmission is triggered on the sidelink, and according to the existing LTE V2X transmission prioritization rule the sidelink transmission may be prioritized. In this scenario, it is not clear how to handle the ongoing uplink transmission (e.g., whether interrupting or discarding the uplink transmission may be performed). Therefore, the first condition of the three LTE V2X conditions for prioritizing V2X sidelink transmission over uplink transmission may be enhanced with one of the following conditions: (1) if the MAC entity is not able to perform uplink transmissions and transmissions of V2X sidelink communication simultaneously at the time of the transmission. The prioritization may not force an uplink TB already submitted to the lower layer for transmission for this uplink transmission period, to be discarded. 
Examples of scenarios where the LTE V2X prioritization rule may have forced an uplink transmission to be discarded where it is impractical or infeasible to do so are the sidelink and uplink transmission overlaps illustrated inFIG.8,FIG.10, andFIG.11. Alternatively, a fourth condition for prioritizing V2X sidelink transmission over uplink transmission may be the following: (1) the prioritization may not force an uplink TB already submitted to the lower layer for transmission in this uplink transmission period to be discarded.

Regarding random access transmission, the sidelink transmission may be prioritized over random access, e.g., over random access message 1 transmission, if the priority of the sidelink transmission is higher than that of the event that triggers the random access. For example, if random access is triggered by UL data arrival during RRC CONNECTED when there are no PUCCH resources for SR available, and the UL data is of lower priority than the sidelink transmission, then the sidelink transmission should be prioritized over the random access.

Determination of the transmission parameters associated with a radio resource grant is described herein. One issue that needs to be addressed is how the MAC determines the transmission parameters associated with a resource grant, whether it is a scheduled grant or a configured or autonomous grant.

Determination of the allowed transmission mode(s) associated with a radio resource grant may comprise the following steps. The UE MAC may determine the transmission mode(s) allowed for a radio resource grant based on one or more of the following: The grant may include an indication of the allowed transmission mode(s) associated with the grant. The PHY may indicate to the MAC the allowed transmission mode(s) associated with the grant. Sidelink radio resources (e.g., Physical Sidelink Shared Channel resources) (pre)configured into the UE, e.g., by RRC, may include configurations of the allowed transmission mode(s) associated with the resources; such sidelink radio resources may be transmission mode specific, and when a scheduled grant, e.g. a dynamic grant or mode1grant, is assigned to the UE, the MAC may determine the transmission mode of the grant based on the allowed transmission mode(s) of the sidelink radio resources configured into the UE to which the grant maps. Sidelink resource pool(s) (pre)configured into the UE, e.g., by RRC, may include configurations of the allowed transmission mode(s) associated with the resources; such sidelink radio resource pool(s) may be transmission mode specific, and when an autonomous grant or mode2grant is selected by the UE, the MAC may determine the transmission mode of the grant based on the allowed transmission mode(s) of the sidelink radio resource pool(s) configured into the UE to which the grant maps.

Determination of the allowed transmission range(s) associated with a radio resource grant may comprise the following steps. The UE MAC may determine the transmission range(s) allowed for a radio resource grant based on one or more of the following: The grant may include an indication of the allowed transmission range(s) associated with the grant. The PHY may indicate to the MAC the allowed transmission range(s) associated with the grant.
Sidelink radio resources (e.g., Physical Sidelink Shared Channel resources) (pre)configured into the UE, e.g., by RRC, may include configurations of the allowed transmission range(s) associated with the resources; such sidelink radio resources may be transmission range specific, and when a scheduled grant, e.g. a dynamic grant or mode1grant, is assigned to the UE, the MAC may determine the transmission range of the grant based on the allowed transmission range(s) of the sidelink radio resources configured into the UE to which the grant maps. Sidelink resource pool(s) (pre)configured into the UE, e.g., by RRC, may include configurations of the allowed transmission range(s) associated with the resources; such sidelink radio resource pool(s) may be transmission range specific, and when an autonomous grant or mode2grant is selected by the UE, the MAC may determine the transmission range of the grant based on the allowed transmission range(s) of the sidelink radio resource pool(s) configured into the UE to which the grant maps.

The base station may determine the transmission mode associated with a Buffer Status Report (BSR) based on the Logical Channel (LCH) or Logical Channel Group (LCG) included in or associated with the BSR, or based on a combination of LCG and destination or a combination of LCH and destination included in or associated with the BSR. Similarly, the base station may determine the transmission range associated with a BSR based on the LCH or LCG included in or associated with the BSR, or based on a combination of LCG and destination or a combination of LCH and destination included in or associated with the BSR. Based on (pre)configuration information in the UE, the UE may associate the LCH or LCG with a transmission mode. Similarly, based on (pre)configuration information in the UE, the UE may associate the LCH or LCG with a transmission range.

The 3rd Generation Partnership Project (3GPP) develops technical standards for cellular telecommunications network technologies, including radio access, the core transport network, and service capabilities, including work on codecs, security, and quality of service. Recent radio access technology (RAT) standards include WCDMA (commonly referred to as 3G), LTE (commonly referred to as 4G), and LTE-Advanced standards. 3GPP has begun working on the standardization of next generation cellular technology, called New Radio (NR), which is also referred to as "5G". 3GPP NR standards development is expected to include the definition of next generation radio access technology (new RAT), which is expected to include the provision of new flexible radio access below 6 GHz, and the provision of new ultra-mobile broadband radio access above 6 GHz. The flexible radio access is expected to consist of a new, non-backwards compatible radio access in new spectrum below 6 GHz, and it is expected to include different operating modes that may be multiplexed together in the same spectrum to address a broad set of 3GPP NR use cases with diverging requirements. The ultra-mobile broadband is expected to include cmWave and mmWave spectrum that will provide the opportunity for ultra-mobile broadband access for, e.g., indoor applications and hotspots. In particular, the ultra-mobile broadband is expected to share a common design framework with the flexible radio access below 6 GHz, with cmWave and mmWave specific design optimizations.
3GPP has identified a variety of use cases that NR is expected to support, resulting in a wide variety of user experience requirements for data rate, latency, and mobility. The use cases include the following general categories: enhanced mobile broadband (e.g., broadband access in dense areas, indoor ultra-high broadband access, broadband access in a crowd, 50+ Mbps everywhere, ultra-low cost broadband access, mobile broadband in vehicles), critical communications, massive machine type communications, network operation (e.g., network slicing, routing, migration and interworking, energy savings), and enhanced vehicle-to-everything (eV2X) communications, which may include any of Vehicle-to-Vehicle Communication (V2V), Vehicle-to-Infrastructure Communication (V2I), Vehicle-to-Network Communication (V2N), Vehicle-to-Pedestrian Communication (V2P), and vehicle communications with other entities. Specific services and applications in these categories include, e.g., monitoring and sensor networks, device remote controlling, bi-directional remote controlling, personal cloud computing, video streaming, wireless cloud-based office, first responder connectivity, automotive eCall, disaster alerts, real-time gaming, multi-person video calls, autonomous driving, augmented reality, tactile internet, and virtual reality, to name a few. All of these use cases and others are contemplated herein.

FIG.22Aillustrates one embodiment of an example communications system100in which the methods and apparatuses described and claimed herein may be embodied. As shown, the example communications system100may include wireless transmit/receive units (WTRUs)102a,102b,102c,102d,102e,102f, and/or102g(which generally or collectively may be referred to as WTRU102), a radio access network (RAN)103/104/105/103b/104b/105b, a core network106/107/109, a public switched telephone network (PSTN)108, the Internet110, other networks112, and a V2X server (or ProSe function and server)113, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs102a,102b,102c,102d,102e,102f,102gmay be any type of apparatus or device configured to operate and/or communicate in a wireless environment. Although each WTRU102a,102b,102c,102d,102e,102f,102gis depicted inFIGS.22A-22Eas a hand-held wireless communications apparatus, it is understood that with the wide variety of use cases contemplated for 5G wireless communications, each WTRU may comprise or be embodied in any type of apparatus or device configured to transmit and/or receive wireless signals, including, by way of example only, user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a tablet, a netbook, a notebook computer, a personal computer, a wireless sensor, consumer electronics, a wearable device such as a smart watch or smart clothing, a medical or eHealth device, a robot, industrial equipment, a drone, a vehicle such as a car, truck, train, or airplane, and the like.

The communications system100may also include a base station114aand a base station114b. Base stations114amay be any type of device configured to wirelessly interface with at least one of the WTRUs102a,102b,102cto facilitate access to one or more communication networks, such as the core network106/107/109, the Internet110, and/or the other networks112.
Base stations114bmay be any type of device configured to wiredly and/or wirelessly interface with at least one of the RRHs (Remote Radio Heads)118a,118b, TRPs (Transmission and Reception Points)119a,119b, and/or RSUs (Roadside Units)120aand120bto facilitate access to one or more communication networks, such as the core network106/107/109, the Internet110, the other networks112, and/or V2X server (or ProSe function and server)113. RRHs118a,118bmay be any type of device configured to wirelessly interface with at least one of the WTRU102c, to facilitate access to one or more communication networks, such as the core network106/107/109, the Internet110, and/or the other networks112. TRPs119a,119bmay be any type of device configured to wirelessly interface with at least one of the WTRU102d, to facilitate access to one or more communication networks, such as the core network106/107/109, the Internet110, and/or the other networks112. RSUs120aand120bmay be any type of device configured to wirelessly interface with at least one of the WTRU102eor102f, to facilitate access to one or more communication networks, such as the core network106/107/109, the Internet110, the other networks112, and/or V2X server (or ProSe function and server)113. By way of example, the base stations114a,114bmay be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a site controller, an access point (AP), a wireless router, and the like. While the base stations114a,114bare each depicted as a single element, it will be appreciated that the base stations114a,114bmay include any number of interconnected base stations and/or network elements. The base station114amay be part of the RAN103/104/105, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station114bmay be part of the RAN103b/104b/105b, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station114amay be configured to transmit and/or receive wireless signals within a particular geographic region, which may be referred to as a cell (not shown). The base station114bmay be configured to transmit and/or receive wired and/or wireless signals within a particular geographic region, which may be referred to as a cell (not shown). The cell may further be divided into cell sectors. For example, the cell associated with the base station114amay be divided into three sectors. Thus, in an embodiment, the base station114amay include three transceivers, e.g., one for each sector of the cell. In an embodiment, the base station114amay employ multiple-input multiple output (MIMO) technology and, therefore, may utilize multiple transceivers for each sector of the cell. The base stations114amay communicate with one or more of the WTRUs102a,102b,102cover an air interface115/116/117, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, cmWave, mmWave, etc.). The air interface115/116/117may be established using any suitable radio access technology (RAT). The base stations114bmay communicate with one or more of the RRHs118a,118b, TRPs119a,119b, and/or RSUs120aand120b, over a wired or air interface115b/116b/117b, which may be any suitable wired (e.g., cable, optical fiber, etc.) 
or wireless communication link (e.g., radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, cmWave, mmWave, etc.). The air interface115b/116b/117bmay be established using any suitable radio access technology (RAT). The RRHs118a,118b, TRPs119a,119band/or RSUs120a,120bmay communicate with one or more of the WTRUs102c,102d,102e,102fover an air interface115c/116c/117c, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, cmWave, mmWave, etc.). The air interface115c/116c/117cmay be established using any suitable radio access technology (RAT). The WTRUs102a,102b,102c,102d,102e,102f, and/or102gmay communicate with one another over an air interface115d/116d/117d(not shown in the figures), which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, cmWave, mmWave, etc.). The air interface115d/116d/117dmay be established using any suitable radio access technology (RAT).

More specifically, as noted above, the communications system100may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station114ain the RAN103/104/105and the WTRUs102a,102b,102c, or the RRHs118a,118b, TRPs119a,119band RSUs120a,120bin the RAN103b/104b/105band the WTRUs102c,102d,102e,102f, may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface115/116/117or115c/116c/117c, respectively, using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink Packet Access (HSDPA) and/or High-Speed Uplink Packet Access (HSUPA). In an embodiment, the base station114aand the WTRUs102a,102b,102c, or the RRHs118a,118b, TRPs119a,119b, and/or RSUs120a,120bin the RAN103b/104b/105band the WTRUs102c,102d, may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface115/116/117or115c/116c/117c, respectively, using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A). In the future, the air interface115/116/117may implement 3GPP NR technology. The LTE and LTE-A technology includes LTE D2D and V2X technologies and interfaces (such as sidelink communications, etc.). The 3GPP NR technology includes NR V2X technologies and interfaces (such as sidelink communications, etc.).

In an embodiment, the base station114ain the RAN103/104/105and the WTRUs102a,102b,102c, or the RRHs118a,118b, TRPs119a,119band/or RSUs120a,120bin the RAN103b/104b/105band the WTRUs102c,102d,102e,102f, may implement radio technologies such as IEEE 802.16 (e.g., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1×, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like. The base station114cinFIG.22Amay be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, and the like.
In an embodiment, the base station114cand the WTRUs102emay implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN). In an embodiment, the base station114cand the WTRUs102dmay implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN). In yet another embodiment, the base station114cand the WTRUs102emay utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, etc.) to establish a picocell or femtocell. As shown inFIG.22A, the base station114cmay have a direct connection to the Internet110. Thus, the base station114cmay not be required to access the Internet110via the core network106/107/109.

The RAN103/104/105and/or RAN103b/104b/105bmay be in communication with the core network106/107/109, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs102a,102b,102c,102d. For example, the core network106/107/109may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown inFIG.22A, it will be appreciated that the RAN103/104/105and/or RAN103b/104b/105band/or the core network106/107/109may be in direct or indirect communication with other RANs that employ the same RAT as the RAN103/104/105and/or RAN103b/104b/105bor a different RAT. For example, in addition to being connected to the RAN103/104/105and/or RAN103b/104b/105b, which may be utilizing an E-UTRA radio technology, the core network106/107/109may also be in communication with another RAN (not shown) employing a GSM radio technology.

The core network106/107/109may also serve as a gateway for the WTRUs102a,102b,102c,102d,102eto access the PSTN108, the Internet110, and/or other networks112. The PSTN108may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet110may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and the internet protocol (IP) in the TCP/IP internet protocol suite. The networks112may include wired or wireless communications networks owned and/or operated by other service providers. For example, the networks112may include another core network connected to one or more RANs, which may employ the same RAT as the RAN103/104/105and/or RAN103b/104b/105bor a different RAT.

Some or all of the WTRUs102a,102b,102c,102din the communications system100may include multi-mode capabilities, e.g., the WTRUs102a,102b,102c,102d, and102emay include multiple transceivers for communicating with different wireless networks over different wireless links. For example, the WTRU102eshown inFIG.22Amay be configured to communicate with the base station114a, which may employ a cellular-based radio technology, and with the base station114c, which may employ an IEEE 802 radio technology.

FIG.22Bis a block diagram of an example apparatus or device configured for wireless communications in accordance with the embodiments illustrated herein, such as, for example, a WTRU102.
As shown inFIG.22B, the example WTRU102may include a processor118, a transceiver120, a transmit/receive element122, a speaker/microphone124, a keypad126, a display/touchpad/indicators128, non-removable memory130, removable memory132, a power source134, a global positioning system (GPS) chipset136, and other peripherals138. It will be appreciated that the WTRU102may include any sub-combination of the foregoing elements while remaining consistent with an embodiment. Also, embodiments contemplate that the base stations114aand114b, and/or the nodes that base stations114aand114bmay represent, such as but not limited to a base transceiver station (BTS), a Node-B, a site controller, an access point (AP), a home node-B, an evolved home node-B (eNodeB), a home evolved node-B (HeNB), a home evolved node-B gateway, and proxy nodes, among others, may include some or all of the elements depicted inFIG.22Band described herein.

The processor118may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor118may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU102to operate in a wireless environment. The processor118may be coupled to the transceiver120, which may be coupled to the transmit/receive element122. WhileFIG.22Bdepicts the processor118and the transceiver120as separate components, it will be appreciated that the processor118and the transceiver120may be integrated together in an electronic package or chip.

The transmit/receive element122may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station114a) over the air interface115/116/117. For example, in an embodiment, the transmit/receive element122may be an antenna configured to transmit and/or receive RF signals. In an embodiment, the transmit/receive element122may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element122may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element122may be configured to transmit and/or receive any combination of wireless signals. In addition, although the transmit/receive element122is depicted inFIG.22Bas a single element, the WTRU102may include any number of transmit/receive elements122. More specifically, the WTRU102may employ MIMO technology. Thus, in an embodiment, the WTRU102may include two or more transmit/receive elements122(e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface115/116/117.

The transceiver120may be configured to modulate the signals that are to be transmitted by the transmit/receive element122and to demodulate the signals that are received by the transmit/receive element122. As noted above, the WTRU102may have multi-mode capabilities. Thus, the transceiver120may include multiple transceivers for enabling the WTRU102to communicate via multiple RATs, such as UTRA and IEEE 802.11, for example.
The processor118of the WTRU102may be coupled to, and may receive user input data from, the speaker/microphone124, the keypad126, and/or the display/touchpad/indicators128(e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor118may also output user data to the speaker/microphone124, the keypad126, and/or the display/touchpad/indicators128. In addition, the processor118may access information from, and store data in, any type of suitable memory, such as the non-removable memory130and/or the removable memory132. The non-removable memory130may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory132may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In an embodiment, the processor118may access information from, and store data in, memory that is not physically located on the WTRU102, such as on a server or a home computer (not shown). The processor118may receive power from the power source134, and may be configured to distribute and/or control the power to the other components in the WTRU102. The power source134may be any suitable device for powering the WTRU102. For example, the power source134may include one or more dry cell batteries, solar cells, fuel cells, and the like. The processor118may also be coupled to the GPS chipset136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU102. In addition to, or in lieu of, the information from the GPS chipset136, the WTRU102may receive location information over the air interface115/116/117from a base station (e.g., base stations114a,114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU102may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment. The processor118may further be coupled to other peripherals138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals138may include various sensors such as an accelerometer, biometrics (e.g., finger print) sensors, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port or other interconnect interfaces, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like. The WTRU102may be embodied in other apparatuses or devices, such as a sensor, consumer electronics, a wearable device such as a smart watch or smart clothing, a medical or eHealth device, a robot, industrial equipment, a drone, a vehicle such as a car, truck, train, or airplane. The WTRU102may connect to other components, modules, or systems of such apparatuses or devices via one or more interconnect interfaces, such as an interconnect interface that may comprise one of the peripherals138. FIG.22Cis a system diagram of the RAN103and the core network106according to an embodiment. As noted above, the RAN103may employ a UTRA radio technology to communicate with the WTRUs102a,102b, and102cover the air interface115. 
The RAN103may also be in communication with the core network106. As shown inFIG.22C, the RAN103may include Node-Bs140a,140b,140c, which may each include one or more transceivers for communicating with the WTRUs102a,102b,102cover the air interface115. The Node-Bs140a,140b,140cmay each be associated with a particular cell (not shown) within the RAN103. The RAN103may also include RNCs142a,142b. It will be appreciated that the RAN103may include any number of Node-Bs and RNCs while remaining consistent with an embodiment. As shown inFIG.22C, the Node-Bs140a,140bmay be in communication with the RNC142a. Additionally, the Node-B140cmay be in communication with the RNC142b. The Node-Bs140a,140b,140cmay communicate with the respective RNCs142a,142bvia an Iub interface. The RNCs142a,142bmay be in communication with one another via an Iur interface. Each of the RNCs142a,142bmay be configured to control the respective Node-Bs140a,140b,140cto which it is connected. In addition, each of the RNCs142a,142bmay be configured to carry out or support other functionality, such as outer loop power control, load control, admission control, packet scheduling, handover control, macro-diversity, security functions, data encryption, and the like.

The core network106shown inFIG.22Cmay include a media gateway (MGW)144, a mobile switching center (MSC)146, a serving GPRS support node (SGSN)148, and/or a gateway GPRS support node (GGSN)150. While each of the foregoing elements is depicted as part of the core network106, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator. The RNC142ain the RAN103may be connected to the MSC146in the core network106via an IuCS interface. The MSC146may be connected to the MGW144. The MSC146and the MGW144may provide the WTRUs102a,102b,102cwith access to circuit-switched networks, such as the PSTN108, to facilitate communications between the WTRUs102a,102b,102cand traditional land-line communications devices. The RNC142ain the RAN103may also be connected to the SGSN148in the core network106via an IuPS interface. The SGSN148may be connected to the GGSN150. The SGSN148and the GGSN150may provide the WTRUs102a,102b,102cwith access to packet-switched networks, such as the Internet110, to facilitate communications between the WTRUs102a,102b,102cand IP-enabled devices. As noted above, the core network106may also be connected to the networks112, which may include other wired or wireless networks that are owned and/or operated by other service providers.

FIG.22Dis a system diagram of the RAN104and the core network107according to an embodiment. As noted above, the RAN104may employ an E-UTRA radio technology to communicate with the WTRUs102a,102b, and102cover the air interface116. The RAN104may also be in communication with the core network107. The RAN104may include eNode-Bs160a,160b,160c, though it will be appreciated that the RAN104may include any number of eNode-Bs while remaining consistent with an embodiment. The eNode-Bs160a,160b,160cmay each include one or more transceivers for communicating with the WTRUs102a,102b,102cover the air interface116. In an embodiment, the eNode-Bs160a,160b,160cmay implement MIMO technology. Thus, the eNode-B160a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU102a.
Each of the eNode-Bs160a,160b, and160cmay be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink and/or downlink, and the like. As shown inFIG.22D, the eNode-Bs160a,160b,160cmay communicate with one another over an X2 interface.

The core network107shown inFIG.22Dmay include a mobility management entity (MME)162, a serving gateway164, and a packet data network (PDN) gateway166. While each of the foregoing elements is depicted as part of the core network107, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator. The MME162may be connected to each of the eNode-Bs160a,160b, and160cin the RAN104via an S1 interface and may serve as a control node. For example, the MME162may be responsible for authenticating users of the WTRUs102a,102b,102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs102a,102b,102c, and the like. The MME162may also provide a control plane function for switching between the RAN104and other RANs (not shown) that employ other radio technologies, such as GSM or WCDMA.

The serving gateway164may be connected to each of the eNode-Bs160a,160b, and160cin the RAN104via the S1 interface. The serving gateway164may generally route and forward user data packets to/from the WTRUs102a,102b,102c. The serving gateway164may also perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when downlink data is available for the WTRUs102a,102b,102c, managing and storing contexts of the WTRUs102a,102b,102c, and the like. The serving gateway164may also be connected to the PDN gateway166, which may provide the WTRUs102a,102b,102cwith access to packet-switched networks, such as the Internet110, to facilitate communications between the WTRUs102a,102b,102cand IP-enabled devices.

The core network107may facilitate communications with other networks. For example, the core network107may provide the WTRUs102a,102b,102cwith access to circuit-switched networks, such as the PSTN108, to facilitate communications between the WTRUs102a,102b,102cand traditional land-line communications devices. For example, the core network107may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the core network107and the PSTN108. In addition, the core network107may provide the WTRUs102a,102b,102cwith access to the networks112, which may include other wired or wireless networks that are owned and/or operated by other service providers.

FIG.22Eis a system diagram of the RAN105and the core network109according to an embodiment. The RAN105may be an access service network (ASN) that employs IEEE 802.16 radio technology to communicate with the WTRUs102a,102b, and102cover the air interface117. As will be further discussed below, the communication links between the different functional entities of the WTRUs102a,102b,102c, the RAN105, and the core network109may be defined as reference points. As shown inFIG.22E, the RAN105may include base stations180a,180b,180c, and an ASN gateway182, though it will be appreciated that the RAN105may include any number of base stations and ASN gateways while remaining consistent with an embodiment.
The base stations180a,180b,180cmay each be associated with a particular cell in the RAN105and may include one or more transceivers for communicating with the WTRUs102a,102b,102cover the air interface117. In an embodiment, the base stations180a,180b,180cmay implement MIMO technology. Thus, the base station180a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU102a. The base stations180a,180b,180cmay also provide mobility management functions, such as handoff triggering, tunnel establishment, radio resource management, traffic classification, quality of service (QoS) policy enforcement, and the like. The ASN gateway182may serve as a traffic aggregation point and may be responsible for paging, caching of subscriber profiles, routing to the core network109, and the like.

The air interface117between the WTRUs102a,102b,102cand the RAN105may be defined as an R1 reference point that implements the IEEE 802.16 specification. In addition, each of the WTRUs102a,102b, and102cmay establish a logical interface (not shown) with the core network109. The logical interface between the WTRUs102a,102b,102cand the core network109may be defined as an R2 reference point, which may be used for authentication, authorization, IP host configuration management, and/or mobility management. The communication link between each of the base stations180a,180b, and180cmay be defined as an R8 reference point that includes protocols for facilitating WTRU handovers and the transfer of data between base stations. The communication link between the base stations180a,180b,180cand the ASN gateway182may be defined as an R6 reference point. The R6 reference point may include protocols for facilitating mobility management based on mobility events associated with each of the WTRUs102a,102b,102c.

As shown inFIG.22E, the RAN105may be connected to the core network109. The communication link between the RAN105and the core network109may be defined as an R3 reference point that includes protocols for facilitating data transfer and mobility management capabilities, for example. The core network109may include a mobile IP home agent (MIP-HA)184, an authentication, authorization, accounting (AAA) server186, and a gateway188. While each of the foregoing elements is depicted as part of the core network109, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.

The MIP-HA184may be responsible for IP address management, and may enable the WTRUs102a,102b, and102cto roam between different ASNs and/or different core networks. The MIP-HA184may provide the WTRUs102a,102b,102cwith access to packet-switched networks, such as the Internet110, to facilitate communications between the WTRUs102a,102b,102cand IP-enabled devices. The AAA server186may be responsible for user authentication and for supporting user services. The gateway188may facilitate interworking with other networks. For example, the gateway188may provide the WTRUs102a,102b,102cwith access to circuit-switched networks, such as the PSTN108, to facilitate communications between the WTRUs102a,102b,102cand traditional land-line communications devices. In addition, the gateway188may provide the WTRUs102a,102b,102cwith access to the networks112, which may include other wired or wireless networks that are owned and/or operated by other service providers.
Although not shown inFIG.22E, it will be appreciated that the RAN105may be connected to other ASNs and the core network109may be connected to other core networks. The communication link between the RAN105and the other ASNs may be defined as an R4 reference point, which may include protocols for coordinating the mobility of the WTRUs102a,102b,102cbetween the RAN105and the other ASNs. The communication link between the core network109and the other core networks may be defined as an R5 reference point, which may include protocols for facilitating interworking between home core networks and visited core networks.

The core network entities described herein and illustrated inFIGS.22A,22C,22D, and22Eare identified by the names given to those entities in certain existing 3GPP specifications, but it is understood that in the future those entities and functionalities may be identified by other names and certain entities or functions may be combined in future specifications published by 3GPP, including future 3GPP NR specifications. Thus, the particular network entities and functionalities described and illustrated inFIGS.22A,22B,22C,22D, and22Eare provided by way of example only, and it is understood that the subject matter disclosed and claimed herein may be embodied or implemented in any similar communication system, whether presently defined or defined in the future.

FIG.22Fis a block diagram of an exemplary computing system90in which one or more apparatuses of the communications networks illustrated inFIGS.22A,22C,22D and22Emay be embodied, such as certain nodes or functional entities in the RAN103/104/105, Core Network106/107/109, PSTN108, Internet110, or Other Networks112. Computing system90may comprise a computer or server and may be controlled primarily by computer readable instructions, which may be in the form of software, wherever, or by whatever means, such software is stored or accessed. Such computer readable instructions may be executed within a processor91to cause computing system90to do work. The processor91may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor91may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the computing system90to operate in a communications network. Coprocessor81is an optional processor, distinct from main processor91, that may perform additional functions or assist processor91. Processor91and/or coprocessor81may receive, generate, and process data related to the methods and apparatuses disclosed herein.

In operation, processor91fetches, decodes, and executes instructions, and transfers information to and from other resources via the computing system's main data-transfer path, system bus80. Such a system bus connects the components in computing system90and defines the medium for data exchange. System bus80typically includes data lines for sending data, address lines for sending addresses, and control lines for sending interrupts and for operating the system bus. An example of such a system bus80is the PCI (Peripheral Component Interconnect) bus.
Memories coupled to system bus80include random access memory (RAM)82and read only memory (ROM)93. Such memories include circuitry that allows information to be stored and retrieved. ROMs93generally contain stored data that cannot easily be modified. Data stored in RAM82may be read or changed by processor91or other hardware devices. Access to RAM82and/or ROM93may be controlled by memory controller92. Memory controller92may provide an address translation function that translates virtual addresses into physical addresses as instructions are executed. Memory controller92may also provide a memory protection function that isolates processes within the system and isolates system processes from user processes. Thus, a program running in a first mode may access only memory mapped by its own process virtual address space; it cannot access memory within another process's virtual address space unless memory sharing between the processes has been set up.

In addition, computing system90may contain a peripherals controller83responsible for communicating instructions from processor91to peripherals, such as printer94, keyboard84, mouse95, and disk drive85. Display86, which is controlled by display controller96, is used to display visual output generated by computing system90. Such visual output may include text, graphics, animated graphics, and video. The visual output may be provided in the form of a graphical user interface (GUI). Display86may be implemented with a CRT-based video display, an LCD-based flat-panel display, a gas plasma-based flat-panel display, or a touch-panel. Display controller96includes the electronic components required to generate a video signal that is sent to display86.

Further, computing system90may contain communication circuitry, such as, for example, a network adapter97, that may be used to connect computing system90to an external communications network, such as the RAN103/104/105, Core Network106/107/109, PSTN108, Internet110, or Other Networks112ofFIGS.22A,22B,22C,22D, and22E, to enable the computing system90to communicate with other nodes or functional entities of those networks. The communication circuitry, alone or in combination with the processor91, may be used to perform the transmitting and receiving steps of certain apparatuses, nodes, or functional entities described herein.

FIG.22Gillustrates one embodiment of an example communications system111in which the methods and apparatuses described and claimed herein may be embodied. As shown, the example communications system111may include wireless transmit/receive units (WTRUs) A, B, C, D, E, F, a base station, a V2X server, and RSUs A and B, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. One or several or all of the WTRUs A, B, C, D, E can be out of range of the network (for example, in the figure, out of the cell coverage boundary shown as the dashed line). WTRUs A, B, C form a V2X group, among which WTRU A is the group lead and WTRUs B and C are group members. WTRUs A, B, C, D, E, F may communicate over the Uu interface or the Sidelink (PC5) interface.
It is understood that any or all of the apparatuses, systems, methods, and processes described herein may be embodied in the form of computer executable instructions (e.g., program code) stored on a computer-readable storage medium, which instructions, when executed by a processor, such as processors118or91, cause the processor to perform and/or implement the systems, methods, and processes described herein. Specifically, any of the steps, operations, or functions described herein may be implemented in the form of such computer executable instructions, executing on the processor of an apparatus or computing system configured for wireless and/or wired network communications. Computer readable storage media include volatile and nonvolatile, removable and non-removable media implemented in any non-transitory (e.g., tangible or physical) method or technology for storage of information, but such computer readable storage media do not include signals. Computer readable storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible or physical medium which may be used to store the desired information and which may be accessed by a computing system.
164,072
11943653
DETAILED DESCRIPTION

Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.

Overview

Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims or can be learned by the practice of the principles set forth herein.

Disclosed herein are systems, methods, and computer-readable media for dynamically adding network resources based on an AF notification. In one aspect, a method of dynamically adding network resources includes determining, by an Application Function (AF) of a service provider, a network congestion on a network. The network congestion may indicate that network resources for servicing a user device using services of the service provider do not meet corresponding Quality of Service (QoS) requirements. The method further includes transmitting a notification by the AF to a core network of a network provider to request additional network resources to be allocated for servicing the user device. The network provider may provide network connectivity for the user device to receive the services provided by the service provider.

In another aspect, the additional network resources to be allocated are within a spectrum owned by the AF in a radio access network operated by the network provider. In another aspect, the notification is an N5 notification. In another aspect, in determining the network congestion, the method includes determining, based on multimedia protocols, that more network resources are required to meet the corresponding QoS requirements. In another aspect, the network congestion is determined based on a Quality of Service Notification Control (QNC) notification sent by a base station of the network provider to the core network of the network provider. In another aspect, when transmitting the notification, the method of dynamically adding network resources includes transmitting the notification from the AF to a Policy Control Function (PCF) of the core network of the network provider, the PCF forwarding the notification to at least one of a Session Management Function (SMF) and an Access and Mobility Management Function (AMF) of the core network of the network provider. In another aspect, in response to receiving the notification, a Radio Access Network (RAN) of the network provider adds more network resources for servicing the user device. In another aspect, the additional network resources are allocated to a spectrum operated by the service provider.

In one aspect, a system includes one or more processors and a computer-readable medium.
The computer-readable medium comprises instructions stored therein, which when executed by the processors, cause the processors to determine, by an Application Function (AF) of a service provider, a network congestion on a network, the network congestion indicating that network resources for servicing a user device using services of the service provider do not meet corresponding Quality of Service (QoS) requirements, and transmit a notification by the AF to a core network of a network provider to request additional network resources to be allocated for servicing the user device, the network provider providing network connectivity for the user device to receive the services provided by the service provider.

In one aspect, a non-transitory computer-readable storage medium comprises instructions stored therein, which when executed by one or more processors, cause the processors to determine, by an Application Function (AF) of a service provider, a network congestion on a network, the network congestion indicating that network resources for servicing a user device using services of the service provider do not meet corresponding Quality of Service (QoS) requirements, and transmit a notification by the AF to a core network of a network provider to request additional network resources to be allocated for servicing the user device, the network provider providing network connectivity for the user device to receive the services provided by the service provider.

DESCRIPTION OF EXAMPLE EMBODIMENTS

The following acronyms are used throughout the present disclosure and are provided below for convenience.

AF: Application Function
AMF: Access and Mobility Management Function
MME: Mobility Management Entity
NEF: Network Exposure Function
OAM: Operations, Administration, and Maintenance
PCF: Policy Control Function
RAN: Radio Access Network
SMF: Session Management Function
UDM: Unified Data Management
UPF: User Plane Function

Compared to previous cellular network generations, 5G extends the catalog of applicable spectrum frequency bands (e.g., high-, mid-, and low-bands) and offers access to a broad range of spectrum resources. As such, in 5G deployments, efficient use of all spectrum bands and resources is key to delivering a broad range of 5G services with optimal Quality of Service (QoS) and Quality of Experience (QoE). Therefore, entities that have access to spectrum resources or have spectrum management capabilities may desire to add more radio resources (e.g., spectrum, Physical Resource Blocks (PRBs)) based on application performance to optimize the QoE. Such entities can include but are not limited to 5GaaS operators/providers or service providers that may or may not own spectrum resources (e.g., Netflix, Disney+ Hotstar, Amazon Prime, etc.). Other non-limiting examples of entities include electrical utilities wishing to reach their automated grid actuators and controllers wirelessly, oil and gas companies wishing to do the same, and municipalities, educational authorities, and law enforcement agencies wishing to leverage wireless connectivity. These entities may be referred to as the service providers or content providers described above.

According to 3GPP 5G specifications, the introduction of QoS Notification Control (QNC) allows the core network (e.g., 5GC) to be notified when the RAN is not able to meet the QoS requirements, so that the core network, upon receiving the QNC notification, can change QoS rules to reduce the bandwidth requirements and/or notify the Application Function (AF) to take corrective action.
However, applications using HTTP adaptive streaming would have already adjusted to the available radio resources, rendering the QNC notification irrelevant. According to the present disclosure, a spectrum management entity can determine that there is radio congestion (e.g., by using its application analytics platform) and send an AF notification to the RAN along with the radio carriers it owns. The RAN then may add the radio spectrum and start allocating resource blocks from this additional band to subscribers of the service provider. The present technology includes systems, methods, and computer-readable media for solving these problems and discrepancies; specifically, systems, methods, and computer-readable media for dynamically adding network resources based on an AF notification.

FIG.1depicts an exemplary schematic representation of a 5G network environment100in which one or more aspects of the present disclosure may operate. As illustrated, network environment100is divided into four domains, each of which will be explained in greater depth below: User Equipment (UE) domain110, e.g., of one or more enterprises, in which a plurality of user cellphones or other connected devices112reside; Radio Access Network (RAN) domain120, in which a plurality of radio cells, base stations, towers, or other radio infrastructure122resides; Core Network130, in which a plurality of Network Functions (NFs)131,132, . . . , n reside; and Data Network140, in which one or more data communication networks such as the Internet142reside. Additionally, Data Network140can support SaaS providers configured to provide SaaSs to enterprises, e.g. to users in UE domain110.

Core Network130contains a plurality of Network Functions (NFs), shown here as NF131, NF132. . . NF n. In some examples, core network130is a 5G core network (5GC) in accordance with one or more accepted 5GC architectures or designs. In some instances, core network130is an Evolved Packet Core (EPC) network, which combines aspects of the 5GC with existing 4G networks. Regardless of the particular design of core network130, the plurality of NFs typically executes in a control plane of core network130, providing a service-based architecture in which a given NF allows any other authorized NFs to access its services. For example, a Session Management Function (SMF) controls session establishment, modification, release, etc., and in the course of doing so, provides other NFs with access to these constituent SMF services.

In some examples, the plurality of NFs of core network130can include one or more Access and Mobility Management Functions (AMF; typically used when core network130is a 5GC network) and Mobility Management Entities (MME; typically used when core network130is an EPC network), collectively referred to herein as an AMF/MME for purposes of simplicity and clarity. In some instances, an AMF/MME can be common to or otherwise shared by multiple slices of the plurality of network slices152, and in some examples an AMF/MME can be unique to a single one of the plurality of network slices152. The same is true of the remaining NFs of core network130, which can be shared amongst one or more network slices or provided as a unique instance specific to a single one of the plurality of network slices152.
In addition to NFs comprising an AMF/MME as discussed above, the plurality of NFs of the core network130can additionally include one or more of the following: User Plane Functions (UPFs); Policy Control Functions (PCFs); Authentication Server Functions (AUSFs); Unified Data Management functions (UDMs); Application Functions (AFs); Network Exposure Functions (NEFs); NF Repository Functions (NRFs); and Network Slice Selection Functions (NSSFs). Various other NFs can be provided without departing from the scope of the present disclosure, as would be appreciated by one of ordinary skill in the art.

Across these four domains of the 5G network environment100, an overall operator network domain150is defined. The operator network domain150is in some examples a Public Land Mobile Network (PLMN) and can be thought of as the carrier or business entity that provides cellular service to the end-users in UE domain110. Within the operator network domain150, a plurality of network slices152are created, defined, or otherwise provisioned in order to deliver a desired set of defined features and functionalities, e.g. SaaSs, for a certain use case or corresponding to other requirements or specifications. Note that network slicing for the plurality of network slices152is implemented in end-to-end fashion, spanning multiple disparate technical and administrative domains, including management and orchestration planes (not shown). In other words, network slicing is performed from at least the enterprise or subscriber edge at UE domain110, through the RAN120, through the 5G access edge and the 5G core network130, and to the data network140. Moreover, note that this network slicing may span multiple different 5G providers.

For example, as shown here, the plurality of network slices152include Slice1, which corresponds to smartphone subscribers of the 5G provider who also operates the network domain150, and Slice2, which corresponds to smartphone subscribers of a virtual 5G provider leasing capacity from the actual operator of network domain150. Also shown is Slice3, which can be provided for a fleet of connected vehicles, and Slice4, which can be provided for an IoT goods or container tracking system across a factory network or supply chain. Note that these network slices152are provided for purposes of illustration; in accordance with the present disclosure, the operator network domain150can implement any number of network slices as needed, and can implement these network slices for purposes, use cases, or subsets of users and user equipment in addition to those listed above. Specifically, the operator network domain150can implement any number of network slices for provisioning SaaSs from SaaS providers to one or more enterprises.

5G mobile and wireless networks will provide enhanced mobile broadband communications and are intended to deliver a wider range of services and applications as compared to all prior generation mobile and wireless networks. Compared to prior generations of mobile and wireless networks, the 5G architecture is service-based, meaning that wherever suitable, architecture elements are defined as network functions that offer their services to other network functions via common framework interfaces. In order to support this wide range of services and network functions across an ever-growing base of user equipment (UE), 5G networks incorporate the network slicing concept utilized in previous generation architectures.
Within the scope of the 5G mobile and wireless network architecture, a network slice comprises a set of defined features and functionalities that together form a complete Public Land Mobile Network (PLMN) for providing services to UEs. This network slicing permits the controlled composition of a PLMN with the specific network functions and provided services that are required for a specific usage scenario. In other words, network slicing enables a 5G network operator to deploy multiple, independent PLMNs where each is customized by instantiating only those features, capabilities and services required to satisfy a given subset of the UEs or a related business customer's needs. FIG.2illustrates an example 5G network architecture200, in which one or more aspects of the present disclosure may operate. Similar to 5G network environment100illustrated inFIG.1, 5G network architecture200comprises three domains: UE domain110(e.g., of one or more enterprises, in which a plurality of user devices112reside); RAN domain120, in which a plurality of radio cells, base stations, towers, or other infrastructure122resides; and core network130, in which a plurality of network functions (e.g., AMF131, SMF132, UPF133, PCF134, AF135, UDM136, etc.) reside. Also, 5G network architecture200further comprises service providers210A,210B,210C, etc. (collectively, service provider210), all of which may provide services such as content services to users112in UE domain110. In some examples, service provider210may be a customer of a network provider (e.g., for LTE or 5G service, etc.). According to some examples, RAN120provides radio access and helps coordinate network resources across devices such as UEs112in UE domain110. In some instances, base stations122in RAN120are primarily connected via backhaul to core network130and reside between UEs112, or any remotely controlled machine, and core network130to provide a connection therewith. As previously described with reference toFIG.1, core network130contains a plurality of NFs that execute in a control plane of core network130. Referring toFIG.2, such NFs may include AMF131, SMF132, UPF133, PCF134, AF135, UDM136, etc. Various other NFs can be provided without departing from the scope of the present disclosure, as would be appreciated by one of ordinary skill in the art. In some instances, core network130can be a neutral host network (NHN) that allows multiple mobile network operators (MNOs) and other communications service providers (CSPs) to share infrastructure, in other words, to leverage existing cellular networks to provide services. In some examples, service provider210(also referred to as an application provider) may be a customer of a 5G service to reach endpoints, for example, user devices112over the 5G access, so that service provider210may provide services (e.g., content or media services) to endpoints (e.g., user devices) that subscribe to their services such as audio/video streaming and media services, online video sharing services, etc. For example, some operators, neutral host providers, or other companies may provide 5G services (e.g., 5G-as-a-Service (5GaaS)) as a product to business verticals such as service providers that need to reach endpoints over 5G access. FIG.3illustrates an example communication diagram300for dynamic addition of network resources based on an AF notification according to one or more examples of the present disclosure.
In particular, communication diagram300illustrates communications between various network entities and functions that can be configured to implement example network resources addition processes of the present disclosure in a 5G network. The variously illustrated network entities and functions include UE112, gNodeB122, AMF131, SMF132, UPF133, PCF134, AF135, UDM136, and OAM137. As would be understood by those of skill in the art, the illustrated network entities provide examples of various network architecture components that can be used to facilitate the example network resources addition processes described herein. However, it is understood that a greater (or fewer) number of entities may be deployed to practice the disclosed technology. According to some examples, the communication process illustrated in communication diagram300begins with step305in which UE112initiates a UE registration procedure with UDM136. Next, at step310, UE112establishes a PDU session with UPF133, for example, for multimedia services. Then, adaptive multimedia streaming is performed at step315. For example, multimedia streaming may flow between UE112and AF135, which may be operated by service provider210as illustrated inFIG.2. According to some examples, there may be two ways to determine RAN congestion (i.e., to determine that UE112is not getting enough network resources). First, an application server may determine, based on multimedia protocols (e.g., Real-Time Transport Control Protocol (RTCP), Hypertext Transfer Protocol (HTTP) adaptive streaming, etc.), that more network resources in the RAN (e.g., RAN120) are required to meet the QoS requirements. For example, the primary function of RTCP is to provide feedback on the QoS in media distribution by periodically sending statistics information such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time to participants in a streaming multimedia session. As such, based on such statistics information, the application server can determine RAN congestion, in other words, whether more network resources in the RAN are required to meet the QoS requirements. Alternatively, when gNodeB122determines a RAN congestion, gNodeB122in RAN120may send a Quality of Service Notification Control (QNC) notification, for example, for Guaranteed Bit Rate (GBR), to core network130. For example, at step320, gNodeB122determines a RAN congestion. Specifically, gNodeB122may perform the scheduling of packets towards UE112. For each packet, there are associated QoS attributes, and when there is an overload condition, gNodeB122in RAN120may not be able to schedule the packets per their QoS requirements, resulting in packet delays since RAN120has finite radio resources. Based on such inability to schedule packets, or on the resulting packet delays, gNodeB122can determine a RAN congestion. In some examples, gNodeB122sends a QNC notification to SMF132at step330. Information related to the RAN congestion is then forwarded from SMF132to AF135at step335. At step325, AF135determines the RAN congestion. Next, at step340, once AF135determines the RAN congestion, AF135may send a notification (e.g., an N5 notification) to PCF134to trigger increased bandwidth. For example, the AF notification may include a request for a RAN increase. In some examples, AF135may also include spectrum information in the notification, such as the spectrum it may own. According to some examples, PCF134performs QoS authorization at step345.
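For illustration only, the following Python sketch mirrors the two congestion-detection paths just described: an application server inferring RAN congestion from periodic RTCP receiver statistics, and, alternatively, a QNC notification originated by the gNodeB. The class name, thresholds, and helper function are hypothetical and are not defined by the present disclosure or by 3GPP.

from dataclasses import dataclass

@dataclass
class RtcpReport:
    """Receiver-report statistics carried periodically by RTCP."""
    fraction_lost: float    # fraction of packets lost since the last report
    jitter_ms: float        # interarrival jitter (packet delay variation)
    round_trip_ms: float    # round-trip delay time

def ran_congested(report: RtcpReport,
                  max_loss: float = 0.02,
                  max_jitter_ms: float = 30.0,
                  max_rtt_ms: float = 200.0) -> bool:
    """Application-server-side check: do the RTCP statistics suggest the
    RAN cannot meet the QoS requirements of the stream? Thresholds are
    assumed values for this sketch."""
    return (report.fraction_lost > max_loss
            or report.jitter_ms > max_jitter_ms
            or report.round_trip_ms > max_rtt_ms)

# Path 1: the application server infers congestion from RTCP feedback.
if ran_congested(RtcpReport(fraction_lost=0.05, jitter_ms=12.0, round_trip_ms=90.0)):
    print("AF: request additional RAN resources (step 340)")

# Path 2 (alternative): the gNodeB detects that packets cannot be
# scheduled per their QoS attributes and sends a QNC notification,
# which the SMF relays toward the AF (steps 330-335).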
For example, when a PDU session is established, SMF132will query PCF134for QoS policies for the various QoS flows present in the PDU session. In response, PCF134provides QoS policies such as Maximum Bit Rate (MBR) or Guaranteed Bit Rate (GBR) QoS flow ID (QFI) and many other QoS attributes. Network ensures that MBR or GBR is met or it will hit the user experience. In some examples, PCF134forwards the notification to RAN120via SMF132/AMF131to trigger additional RAN resources at step350, which is then relayed to gNodeB122at step355. In some examples, gNodeB122then determines if more RAN resources can be added at step360. Next, gNodeB122may send a RAN resources request to OAM137and receives a response at step365. According to some examples, gNodeB122may add more RAN resources at step370. Then, adaptive multimedia streaming can continue with additional RAN resources at step375. The disclosure now turns toFIG.4, where an example flow chart for performing a dynamic addition of network resources based on an AF notification is described.FIG.4may embody the details of the non-limiting example process ofFIG.3described above. FIG.4illustrates a flowchart of a method400for dynamically adding network resources based on an AF notification according to one or more examples of the present disclosure. Although the example method400depicts a particular sequence of operations, the sequence may be altered without departing from the scope of the present disclosure. For example, some of the operations depicted may be performed in parallel or in a different sequence that does not materially affect the function of method400. In other examples, different components of an example device or system that implements method400may perform functions at substantially the same time or in a specific sequence. At step410, method400includes determining, by an AF of a service provider, network congestion on a network. In some examples, the network congestion may indicate that network resources for servicing a user device using services of the service provider do not meet corresponding Quality of Service (QoS) requirements. For example, AF135illustrated inFIGS.2and3may determine network congestion on a network. By way of an example, the network congestion may indicate that network resources for servicing a user device (e.g., UE112illustrated inFIGS.1-3) using services of the service provider (e.g., service provider210inFIG.2) do not meet corresponding Quality of Service (QoS) requirements. QoS requirements may be determined according to existing and/or agreed upon Service Level Agreements (SLAs) between users (e.g., subscribers) and their respective service provider and/or network provider. In some examples, the network congestion may be determined based on a Quality of Service Notification Control (QNC) notification sent by a base station (e.g., gNodeB122) of the network provider to the core network (e.g., core network130inFIGS.1and2) of the network provider. In another example of determining the network congestion at step410, method400comprises determining that more network resources are required to meet the corresponding QoS requirements based on multimedia protocols. For example, AF135illustrated inFIGS.2and3may determine, based on multimedia protocols such as RTCP or HTTP, that more network resources are required to meet the corresponding QoS requirements. 
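Purely as a sketch, the chain of steps345through375ofFIG.3described above (AF notification, PCF QoS authorization, relay via SMF/AMF, and the gNodeB/OAM resource addition) can be traced with a few hypothetical classes; real network functions interact over 3GPP service-based interfaces rather than direct method calls, and all names and numbers below are invented for illustration.

class Oam:
    def grant_resources(self, request_mhz: float) -> bool:
        # Step 365: OAM decides whether spare spectrum/carriers exist.
        return request_mhz <= 20.0  # assumed spare capacity for this sketch

class GNodeB:
    def __init__(self, oam: Oam):
        self.oam = oam

    def on_resource_trigger(self, spectrum_mhz: float) -> None:
        # Steps 360-370: check feasibility, ask OAM, then add RAN resources.
        if self.oam.grant_resources(spectrum_mhz):
            print(f"gNodeB: added {spectrum_mhz} MHz of RAN resources (step 370)")

class Pcf:
    def __init__(self, gnb: GNodeB):
        self.gnb = gnb

    def on_af_notification(self, spectrum_mhz: float) -> None:
        # Steps 345-355: authorize QoS, then relay via SMF/AMF to the RAN.
        print("PCF: QoS authorization OK, relaying trigger via SMF/AMF")
        self.gnb.on_resource_trigger(spectrum_mhz)

# Step 340: the AF sends an N5 notification carrying spectrum it owns.
Pcf(GNodeB(Oam())).on_af_notification(spectrum_mhz=20.0)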
According to some examples, at step420, method400includes transmitting a notification by the AF to a core network of a network provider to request additional network resources to be allocated for servicing the user device. In some examples, the network provider may provide network connectivity for the user device to receive the services provided by the service provider. While a conventional method of codec adaptation may degrade the service when radio conditions change, the method of allocating RAN resources based on the service feedback in accordance with the present disclosure does not cause such degradation. For example, AF135illustrated inFIGS.2and3may transmit a notification to a core network (e.g., core network130inFIGS.1and2) of a network provider to request additional network resources to be allocated for servicing the user device (e.g., UE112inFIGS.1-3), where the network provider provides network connectivity for the user device (e.g., UE112inFIGS.1-3) to receive the services provided by the service provider. In some examples, the notification may be an N5 notification. For example, the notification can be transmitted via the N5 interface, which is a reference point between the PCF and the AF (e.g., PCF134and AF135as illustrated inFIGS.2and3). In some examples, the transmitting of the notification at step420may include transmitting the notification from the AF to the PCF of the core network of the network provider and forwarding the notification, by the PCF, to at least one of an SMF and an AMF of the core network of the network provider. For example, referring toFIGS.2and3, AF135may transmit the notification to request additional network resources to PCF134. Then, PCF134may forward the notification to SMF132and/or AMF131of core network130. In some instances, the notification from the AF to trigger additional network resources may further include meta-data associated with the spectrum resources. For example, the visibility available at AF135can be used to benefit the user experience. The user experience has a relation to the spectrum resources, the chosen slice, and the QoS template chosen for the QoS flows. AF135may provide such indications over the Rx or other interfaces and allow the network elements to perform specific actions. Furthermore, meta-data included in the trigger (e.g., the notification from the AF) can include, but is not limited to, spectrum resources and specific actions. In some instances, AF135can translate such visibility that it has from the application layer and send explicit events to the elements in the mobile network operator (MNO) network. In some examples, in response to receiving the notification, a RAN of the network provider adds more network resources for servicing the user device. For example, RAN120of the network provider as illustrated inFIG.2may add more network resources for servicing UEs112in UE domain110. In some examples, the added network resources in the form of carriers can be grouped into a carrier aggregation group. In some examples, the additional network resources may be allocated to a spectrum operated by the service provider. For example, service provider210as illustrated inFIG.2may own spectrum (e.g., Netflix has access to a spectrum of 1800-1820 MHz and has spectrum management capabilities via its corresponding application analytics platform). In some examples, the additional network resources provided for the network may be in the form of either carriers or physical resource blocks (PRBs).
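The following is a minimal sketch of what the AF's trigger payload could carry, per the meta-data discussion above (spectrum resources and specific actions). The field names and values are invented for illustration and do not correspond to any defined 3GPP information element.

from dataclasses import dataclass, field

@dataclass
class AfResourceNotification:
    """Hypothetical shape of the AF's N5 trigger toward the PCF."""
    ue_id: str                 # subscriber being served
    requested_action: str      # e.g. "ADD_RAN_RESOURCES" (invented token)
    spectrum_low_mhz: float    # spectrum the service provider owns
    spectrum_high_mhz: float
    qos_flow_ids: list[int] = field(default_factory=list)

notification = AfResourceNotification(
    ue_id="imsi-001010000000001",
    requested_action="ADD_RAN_RESOURCES",
    spectrum_low_mhz=1800.0,
    spectrum_high_mhz=1820.0,
    qos_flow_ids=[5],
)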
In some examples, the transmitting of the notification from the AF to the core network may be mediated by a NEF. For example, when AF135triggers the addition of RAN resources (e.g., as described in step340inFIG.3), such a trigger may be mediated by a NEF (not shown inFIG.3). According to some examples, the additional network resources may be allocated to one or more subscribers of the network provider. For example, a base station (e.g., gNodeB122as illustrated inFIGS.2and3) may allocate the additional network resources to one or more subscribers of the network provider. In some examples, the additional network resources to be allocated can be within a spectrum owned by the AF in a radio access network operated by the network provider. In one example, the addition of network resources based on the AF notification in the above-described deployment can provide the advantage of enabling the service provider itself to monitor the network performance and request additional network resources to optimize the QoE. A service provider (i.e., a content provider or application provider) has more visibility into the end-user experience than the network provider (e.g., a 5GaaS provider). With the use of HTTP adaptive bit-rate streaming, the service provider may have a superior indication of the network performance that the end-user is receiving by simply monitoring the streaming rate, which has the virtue of adapting to channel conditions. Also, endpoints (e.g., streaming devices or UE112as illustrated inFIGS.1-3) have telemetry on the over-the-top (OTT) media service channel, which may provide additional performance indications. Based on the telemetry, the service provider can draw performance maps of the 5G service in different locations and may choose to modify the spectrum resources based on the application performance visibility (e.g., to provide additional network resources in the locations that need more resources so that the QoE can be optimized). A network provider may not have the same visibility into its subscribers' use of an application provided by a service provider that the service provider itself has. This inferior visibility by the network provider may be due to a number of factors including, but not limited to, encrypted traffic between the service provider and its subscribers. Accordingly, a network provider may not be capable of performing the above-described process of the present technology. Furthermore, an example advantage also includes having an AF provide a request for additional network resources, for example, in the form of the N5 notification or other applicable interfaces, since the AF performs operations for retrieving resources, routing application traffic, exposing services to end-users, and exposing the application layer for interacting with 5G network resources, and therefore has visibility into the user experience (e.g., spectrum resources, the chosen slice, or the QoS templates chosen for the QoS flows, etc.). More specifically, the visibility on the application behavior and user experience is at the AF and not at the RAN of the network provider. As such, there exists value in moving such intelligence (e.g., visibility) in the form of the notification and meta-data to the RAN via the AF. FIG.5illustrates an example network device500suitable for performing switching, routing, load balancing, and other networking operations. Network device500includes a central processing unit (CPU)504, interfaces502, and a bus510(e.g., a PCI bus).
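As a sketch of the allocation described above, the following hypothetical helpers carve a service-provider-owned band (e.g., the illustrative 1800-1820 MHz example) into component carriers and distribute them to subscribers; carriers assigned to a single UE would be grouped into a carrier aggregation group. The carrier width, round-robin policy, and names are assumptions of this sketch, not part of the disclosure.

def carve_carriers(low_mhz: float, high_mhz: float,
                   carrier_mhz: float = 5.0) -> list[tuple[float, float]]:
    """Split an owned band into equal-width component carriers."""
    carriers, edge = [], low_mhz
    while edge + carrier_mhz <= high_mhz:
        carriers.append((edge, edge + carrier_mhz))
        edge += carrier_mhz
    return carriers

def allocate(carriers: list[tuple[float, float]],
             subscribers: list[str]) -> dict[str, list[tuple[float, float]]]:
    """Round-robin newly added carriers across subscribers; the carriers
    assigned to one UE form its carrier aggregation group."""
    alloc: dict[str, list[tuple[float, float]]] = {s: [] for s in subscribers}
    for i, cc in enumerate(carriers):
        alloc[subscribers[i % len(subscribers)]].append(cc)
    return alloc

print(allocate(carve_carriers(1800.0, 1820.0), ["ue-1", "ue-2"]))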
When acting under the control of appropriate software or firmware, the CPU504is responsible for executing packet management, error detection, and/or routing functions. The CPU504preferably accomplishes all these functions under the control of software including an operating system and any appropriate applications software. CPU504may include one or more processors508, such as a processor from the INTEL X86 family of microprocessors. In some cases, processor508can be specially designed hardware for controlling the operations of network device500. In some cases, a memory506(e.g., non-volatile RAM, ROM, etc.) also forms part of CPU504. However, there are many different ways in which memory could be coupled to the system. The interfaces502are typically provided as modular interface cards (sometimes referred to as "line cards"). Generally, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the network device500. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as fast token ring interfaces, wireless interfaces, Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces, WIFI interfaces, 3G/4G/5G cellular interfaces, CAN BUS, LoRA, and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control such communications-intensive tasks as packet switching, media control, signal processing, crypto processing, and management. By providing separate processors for the communications-intensive tasks, these interfaces allow the master CPU504to efficiently perform routing computations, network diagnostics, security functions, etc. Although the system shown inFIG.5is one specific network device of the present technology, it is by no means the only network device architecture on which the present technology can be implemented. For example, an architecture having a single processor that handles communications as well as routing computations, etc., is often used. Further, other types of interfaces and media could also be used with the network device500. Regardless of the network device's configuration, it may employ one or more memories or memory modules (including memory506) configured to store program instructions for the general-purpose network operations and mechanisms for roaming, route optimization and routing functions described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store tables such as mobility binding, registration, and association tables, etc. Memory506could also hold various software containers and virtualized execution environments and data. The network device500can also include an application-specific integrated circuit (ASIC), which can be configured to perform routing and/or switching operations. The ASIC can communicate with other components in the network device500via the bus510, to exchange data and signals and coordinate various types of operations by the network device500, such as routing, switching, and/or data storage operations, for example.
FIG.6illustrates an example computing system600including components in electrical communication with each other using a connection605upon which one or more aspects of the present disclosure can be implemented. Connection605can be a physical connection via a bus, or a direct connection into processor610, such as in a chipset architecture. Connection605can also be a virtual connection, networked connection, or logical connection. In some examples, computing system600is a distributed system in which the functions described in this disclosure can be distributed within a datacenter, multiple datacenters, a peer network, etc. In some instances, one or more of the described system components represents many such components, each performing some or all of the function for which the component is described. In some examples, the components can be physical or virtual devices. Example system600includes at least one processing unit (CPU or processor)610and connection605that couples various system components including system memory615, such as read only memory (ROM)620and random access memory (RAM)625, to processor610. Computing system600can include a cache of high-speed memory612connected directly with, in close proximity to, or integrated as part of processor610. Processor610can include any general purpose processor and a hardware service or software service, such as services632,634, and636stored in storage device630, configured to control processor610, as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor610may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric. To enable user interaction, computing system600includes an input device645, which can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech, etc. Computing system600can also include output device635, which can be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input/output to communicate with computing system600. Computing system600can include communications interface640, which can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. Storage device630can be a non-volatile memory device and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs), read only memory (ROM), and/or some combination of these devices. The storage device630can include software services, servers, services, etc.; when the code that defines such software is executed by the processor610, it causes the system to perform a function. In some examples, a hardware service that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor610, connection605, output device635, etc., to carry out the function.
For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. Any of the steps, operations, functions, or processes described herein may be performed or implemented by a combination of hardware and software services, alone or in combination with other devices. In some embodiments, a service can be software that resides in memory of a client device and/or one or more servers of a content management system and performs one or more functions when a processor executes the software associated with the service. In some embodiments, a service is a program, or a collection of programs, that carries out a specific function. In some embodiments, a service can be considered a server. The memory can be a non-transitory computer-readable medium. In some embodiments, the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se. Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, solid state memory devices, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on. Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include servers, laptops, smart phones, small form factor personal computers, personal digital assistants, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures. Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations.
Further, although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims. Claim language or other language reciting "at least one of" a set and/or "one or more" of a set indicates that one member of the set or multiple members of the set (in any combination) satisfy the claim. For example, claim language reciting "at least one of A and B" or "at least one of A or B" means A, B, or A and B. In another example, claim language reciting "at least one of A, B, and C" or "at least one of A, B, or C" means A, B, C, or A and B, or A and C, or B and C, or A and B and C. The language "at least one of" a set and/or "one or more" of a set does not limit the set to the items listed in the set. For example, claim language reciting "at least one of A and B" or "at least one of A or B" can mean A, B, or A and B, and can additionally include items not listed in the set of A and B.
42,819
11943654
BEST MODE FOR CARRYING OUT THE INVENTION Reference will now be made in detail to the exemplary implementations of the present disclosure, examples of which are illustrated in the accompanying drawings. The detailed description, which will be given below with reference to the accompanying drawings, is intended to explain exemplary implementations of the present disclosure, rather than to show the only implementations that can be implemented according to the disclosure. The following detailed description includes specific details in order to provide a thorough understanding of the present disclosure. However, it will be apparent to those skilled in the art that the present disclosure may be practiced without such specific details. The following techniques, apparatuses, and systems may be applied to a variety of wireless multiple access systems. Examples of the multiple access systems include a code division multiple access (CDMA) system, a frequency division multiple access (FDMA) system, a time division multiple access (TDMA) system, an orthogonal frequency division multiple access (OFDMA) system, a single carrier frequency division multiple access (SC-FDMA) system, and a multicarrier frequency division multiple access (MC-FDMA) system. CDMA may be embodied through radio technology such as universal terrestrial radio access (UTRA) or CDMA2000. TDMA may be embodied through radio technology such as global system for mobile communications (GSM), general packet radio service (GPRS), or enhanced data rates for GSM evolution (EDGE). OFDMA may be embodied through radio technology such as institute of electrical and electronics engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, or evolved UTRA (E-UTRA). UTRA is a part of a universal mobile telecommunications system (UMTS). 3rd generation partnership project (3GPP) long term evolution (LTE) is a part of evolved UMTS (E-UMTS) using E-UTRA. 3GPP LTE employs OFDMA in DL and SC-FDMA in UL. LTE-advanced (LTE-A) is an evolved version of 3GPP LTE. For convenience of description, implementations of the present disclosure are mainly described in regard to a 3GPP based wireless communication system. However, the technical features of the present disclosure are not limited thereto. For example, although the following detailed description is given based on a mobile communication system corresponding to a 3GPP based wireless communication system, aspects of the present disclosure that are not limited to 3GPP based wireless communication systems are applicable to other mobile communication systems. For terms and technologies which are not specifically described among the terms and technologies employed in the present disclosure, the wireless communication standard documents published before the present disclosure may be referenced. For example, the following documents may be referenced.

3GPP LTE
3GPP TS 36.211: Physical channels and modulation
3GPP TS 36.212: Multiplexing and channel coding
3GPP TS 36.213: Physical layer procedures
3GPP TS 36.214: Physical layer; Measurements
3GPP TS 36.300: Overall description
3GPP TS 36.304: User Equipment (UE) procedures in idle mode
3GPP TS 36.314: Layer 2 - Measurements
3GPP TS 36.321: Medium Access Control (MAC) protocol
3GPP TS 36.322: Radio Link Control (RLC) protocol
3GPP TS 36.323: Packet Data Convergence Protocol (PDCP)
3GPP TS 36.331: Radio Resource Control (RRC) protocol

3GPP NR (e.g. 5G)
3GPP TS 38.211: Physical channels and modulation
3GPP TS 38.212: Multiplexing and channel coding
3GPP TS 38.213: Physical layer procedures for control
3GPP TS 38.214: Physical layer procedures for data
3GPP TS 38.215: Physical layer measurements
3GPP TS 38.300: Overall description
3GPP TS 38.304: User Equipment (UE) procedures in idle mode and in RRC inactive state
3GPP TS 38.321: Medium Access Control (MAC) protocol
3GPP TS 38.322: Radio Link Control (RLC) protocol
3GPP TS 38.323: Packet Data Convergence Protocol (PDCP)
3GPP TS 38.331: Radio Resource Control (RRC) protocol
3GPP TS 37.324: Service Data Adaptation Protocol (SDAP)
3GPP TS 37.340: Multi-connectivity; Overall description

In the present disclosure, a user equipment (UE) may be a fixed or mobile device. Examples of the UE include various devices that transmit and receive user data and/or various kinds of control information to and from a base station (BS). In the present disclosure, a BS generally refers to a fixed station that performs communication with a UE and/or another BS, and exchanges various kinds of data and control information with the UE and another BS. The BS may be referred to as an advanced base station (ABS), a node-B (NB), an evolved node-B (eNB), a base transceiver system (BTS), an access point (AP), a processing server (PS), etc. In particular, a BS of the UMTS is referred to as an NB, a BS of the enhanced packet core (EPC)/long term evolution (LTE) system is referred to as an eNB, and a BS of the new radio (NR) system is referred to as a gNB. In the present disclosure, a node refers to a point capable of transmitting/receiving a radio signal through communication with a UE. Various types of BSs may be used as nodes irrespective of the terms thereof. For example, a BS, a node B (NB), an e-node B (eNB), a pico-cell eNB (PeNB), a home eNB (HeNB), a relay, a repeater, etc. may be a node. In addition, the node may not be a BS. For example, the node may be a remote radio head (RRH) or a remote radio unit (RRU). The RRH or RRU generally has a lower power level than that of a BS. Since the RRH or RRU (hereinafter, RRH/RRU) is generally connected to the BS through a dedicated line such as an optical cable, cooperative communication between the RRH/RRU and the BS can be performed more smoothly than cooperative communication between BSs connected by a radio line. At least one antenna is installed per node. The antenna may include a physical antenna, an antenna port, or a virtual antenna. In the present disclosure, the term "cell" may refer to a geographic area to which one or more nodes provide communication services, or refer to radio resources. A "cell" of a geographic area may be understood as coverage within which a node can provide service using a carrier, and a "cell" as radio resources (e.g. time-frequency resources) is associated with a bandwidth (BW), which is a frequency range configured by the carrier. The "cell" associated with the radio resources is defined by a combination of downlink resources and uplink resources, for example, a combination of a downlink (DL) component carrier (CC) and an uplink (UL) CC. The cell may be configured by downlink resources only, or may be configured by downlink resources and uplink resources.
Since DL coverage, which is a range within which the node is capable of transmitting a valid signal, and UL coverage, which is a range within which the node is capable of receiving the valid signal from the UE, depend upon a carrier carrying the signal, the coverage of the node may be associated with coverage of the "cell" of radio resources used by the node. Accordingly, the term "cell" may be used to represent service coverage of the node sometimes, radio resources at other times, or a range that signals using the radio resources can reach with valid strength at other times. In the present disclosure, a physical downlink control channel (PDCCH) and a physical downlink shared channel (PDSCH) refer to a set of time-frequency resources or resource elements (REs) carrying downlink control information (DCI) and a set of time-frequency resources or REs carrying downlink data, respectively. In addition, a physical uplink control channel (PUCCH), a physical uplink shared channel (PUSCH) and a physical random access channel (PRACH) refer to a set of time-frequency resources or REs carrying uplink control information (UCI), a set of time-frequency resources or REs carrying uplink data, and a set of time-frequency resources or REs carrying random access signals, respectively. In carrier aggregation (CA), two or more CCs are aggregated. A UE may simultaneously receive or transmit on one or multiple CCs depending on its capabilities. CA is supported for both contiguous and non-contiguous CCs. When CA is configured, the UE has only one radio resource control (RRC) connection with the network. At RRC connection establishment/re-establishment/handover, one serving cell provides the non-access stratum (NAS) mobility information, and at RRC connection re-establishment/handover, one serving cell provides the security input. This cell is referred to as the Primary Cell (PCell). The PCell is a cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure. Depending on UE capabilities, Secondary Cells (SCells) can be configured to form, together with the PCell, a set of serving cells. An SCell is a cell providing additional radio resources on top of the Special Cell (SpCell). The configured set of serving cells for a UE therefore always consists of one PCell and one or more SCells. In the present disclosure, for dual connectivity (DC) operation, the term "Special Cell" refers to the PCell of the master cell group (MCG) or the PSCell of the secondary cell group (SCG); otherwise, the term Special Cell refers to the PCell. An SpCell supports physical uplink control channel (PUCCH) transmission and contention-based random access, and is always activated. The MCG is a group of serving cells associated with a master node, comprising the SpCell (PCell) and optionally one or more SCells. The SCG is the subset of serving cells associated with a secondary node, comprising the PSCell and zero or more SCells, for a UE configured with DC. For a UE in RRC_CONNECTED not configured with CA/DC, there is only one serving cell, comprising the PCell. For a UE in RRC_CONNECTED configured with CA/DC, the term "serving cells" is used to denote the set of cells comprising the SpCell(s) and all SCells.
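To keep the PCell/PSCell/SpCell/SCell terminology straight, the following minimal Python sketch models the relationships defined above (and elaborated in the next paragraph). It is offered only as a reading aid; the class and field names are illustrative, not a 3GPP data model.

from dataclasses import dataclass, field

@dataclass
class CellGroup:
    """A master or secondary cell group for a UE configured with DC."""
    is_master: bool                 # True for the MCG, False for the SCG
    spcell: str                     # PCell if MCG, PSCell if SCG
    scells: list[str] = field(default_factory=list)

    def serving_cells(self) -> list[str]:
        # The SpCell is always activated; SCells add radio resources.
        return [self.spcell] + self.scells

mcg = CellGroup(is_master=True, spcell="PCell", scells=["SCell1"])
scg = CellGroup(is_master=False, spcell="PSCell", scells=[])
print(mcg.serving_cells() + scg.serving_cells())  # the UE's serving cells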
The MCG is a group of serving cells associated with a master BS which terminates at least S1-MME, and the SCG is a group of serving cells associated with a secondary BS that is providing additional radio resources for the UE but is not the master BS. The SCG includes a primary SCell (PSCell) and optionally one or more SCells. In DC, two MAC entities are configured in the UE: one for the MCG and one for the SCG. Each MAC entity is configured by RRC with a serving cell supporting PUCCH transmission and contention-based random access. In the present disclosure, the term SpCell refers to such a cell, whereas the term SCell refers to other serving cells. The term SpCell refers either to the PCell of the MCG or to the PSCell of the SCG, depending on whether the MAC entity is associated with the MCG or the SCG, respectively. In the present disclosure, monitoring a channel refers to attempting to decode the channel. For example, monitoring a physical downlink control channel (PDCCH) refers to attempting to decode PDCCH(s) (or PDCCH candidates). In the present disclosure, "C-RNTI" refers to a cell RNTI, "SI-RNTI" refers to a system information RNTI, "P-RNTI" refers to a paging RNTI, "RA-RNTI" refers to a random access RNTI, "SC-RNTI" refers to a single cell RNTI, "SL-RNTI" refers to a sidelink RNTI, "SPS C-RNTI" refers to a semi-persistent scheduling C-RNTI, and "CS-RNTI" refers to a configured scheduling RNTI. FIG.2is a block diagram illustrating examples of communication devices which can perform a method according to the present disclosure. Referring toFIG.2, a first wireless device100and a second wireless device200may transmit/receive radio signals to/from an external device through a variety of RATs (e.g., LTE and NR). InFIG.2, {the first wireless device100and the second wireless device200} may correspond to {the wireless device100ato100fand the BS200} and/or {the wireless device100ato100fand the wireless device100ato100f} of FIG.1. The first wireless device100may include one or more processors102and one or more memories104, and may additionally include one or more transceivers106and/or one or more antennas108. The processor(s)102may control the memory(s)104and/or the transceiver(s)106and may be configured to implement the functions, procedures, and/or methods described in the present disclosure. For example, the processor(s)102may process information within the memory(s)104to generate first information/signals and then transmit radio signals including the first information/signals through the transceiver(s)106. The processor(s)102may receive radio signals including second information/signals through the transceiver(s)106and then store information obtained by processing the second information/signals in the memory(s)104. The memory(s)104may be connected to the processor(s)102and may store a variety of information related to operations of the processor(s)102. For example, the memory(s)104may store software code including commands for performing a part or the entirety of processes controlled by the processor(s)102or for performing the procedures and/or methods described in the present disclosure. Herein, the processor(s)102and the memory(s)104may be a part of a communication modem/circuit/chip designed to implement RAT (e.g., LTE or NR). The transceiver(s)106may be connected to the processor(s)102and transmit and/or receive radio signals through one or more antennas108. Each of the transceiver(s)106may include a transmitter and/or a receiver.
The transceiver(s)106may be interchangeably used with radio frequency (RF) unit(s). In the present invention, the wireless device may represent a communication modem/circuit/chip. The second wireless device200may include one or more processors202and one or more memories204, and may additionally include one or more transceivers206and/or one or more antennas208. The processor(s)202may control the memory(s)204and/or the transceiver(s)206and may be configured to implement the functions, procedures, and/or methods described in the present disclosure. For example, the processor(s)202may process information within the memory(s)204to generate third information/signals and then transmit radio signals including the third information/signals through the transceiver(s)206. The processor(s)202may receive radio signals including fourth information/signals through the transceiver(s)206and then store information obtained by processing the fourth information/signals in the memory(s)204. The memory(s)204may be connected to the processor(s)202and may store a variety of information related to operations of the processor(s)202. For example, the memory(s)204may store software code including commands for performing a part or the entirety of processes controlled by the processor(s)202or for performing the procedures and/or methods described in the present disclosure. Herein, the processor(s)202and the memory(s)204may be a part of a communication modem/circuit/chip designed to implement RAT (e.g., LTE or NR). The transceiver(s)206may be connected to the processor(s)202and transmit and/or receive radio signals through one or more antennas208. Each of the transceiver(s)206may include a transmitter and/or a receiver. The transceiver(s)206may be interchangeably used with RF unit(s). In the present invention, the wireless device may represent a communication modem/circuit/chip. Hereinafter, hardware elements of the wireless devices100and200will be described more specifically. One or more protocol layers may be implemented by, without being limited to, one or more processors102and202. For example, the one or more processors102and202may implement one or more layers (e.g., functional layers such as PHY, MAC, RLC, PDCP, RRC, and SDAP). The one or more processors102and202may generate one or more Protocol Data Units (PDUs) and/or one or more Service Data Units (SDUs) according to the functions, procedures, proposals, and/or methods disclosed in the present disclosure. The one or more processors102and202may generate messages, control information, data, or information according to the functions, procedures, proposals, and/or methods disclosed in the present disclosure. The one or more processors102and202may generate signals (e.g., baseband signals) including PDUs, SDUs, messages, control information, data, or information according to the functions, procedures, proposals, and/or methods disclosed in the present disclosure and provide the generated signals to the one or more transceivers106and206. The one or more processors102and202may receive the signals (e.g., baseband signals) from the one or more transceivers106and206and acquire the PDUs, SDUs, messages, control information, data, or information according to the functions, procedures, proposals, and/or methods disclosed in the present disclosure. The one or more processors102and202may be referred to as controllers, microcontrollers, microprocessors, or microcomputers.
The one or more processors102and202may be implemented by hardware, firmware, software, or a combination thereof. As an example, one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), one or more Digital Signal Processing Devices (DSPDs), one or more Programmable Logic Devices (PLDs), or one or more Field Programmable Gate Arrays (FPGAs) may be included in the one or more processors102and202. The functions, procedures, proposals, and/or methods disclosed in the present disclosure may be implemented using firmware or software, and the firmware or software may be configured to include the modules, procedures, or functions. Firmware or software configured to perform the functions, procedures, proposals, and/or methods disclosed in the present disclosure may be included in the one or more processors102and202or stored in the one or more memories104and204so as to be driven by the one or more processors102and202. The functions, procedures, proposals, and/or methods disclosed in the present disclosure may be implemented using firmware or software in the form of code, commands, and/or a set of commands. The one or more memories104and204may be connected to the one or more processors102and202and store various types of data, signals, messages, information, programs, code, instructions, and/or commands. The one or more memories104and204may be configured by Read-Only Memories (ROMs), Random Access Memories (RAMs), Electrically Erasable Programmable Read-Only Memories (EEPROMs), flash memories, hard drives, registers, cache memories, computer-readable storage media, and/or combinations thereof. The one or more memories104and204may be located at the interior and/or exterior of the one or more processors102and202. The one or more memories104and204may be connected to the one or more processors102and202through various technologies such as wired or wireless connection. The one or more transceivers106and206may transmit user data, control information, and/or radio signals/channels, mentioned in the methods and/or operational flowcharts of the present disclosure, to one or more other devices. The one or more transceivers106and206may receive user data, control information, and/or radio signals/channels, mentioned in the functions, procedures, proposals, methods, and/or operational flowcharts disclosed in the present disclosure, from one or more other devices. For example, the one or more transceivers106and206may be connected to the one or more processors102and202and transmit and receive radio signals. For example, the one or more processors102and202may perform control so that the one or more transceivers106and206may transmit user data, control information, or radio signals to one or more other devices. The one or more processors102and202may perform control so that the one or more transceivers106and206may receive user data, control information, or radio signals from one or more other devices. The one or more transceivers106and206may be connected to the one or more antennas108and208, and the one or more transceivers106and206may be configured to transmit and receive user data, control information, and/or radio signals/channels, mentioned in the functions, procedures, proposals, methods, and/or operational flowcharts disclosed in the present disclosure, through the one or more antennas108and208. In the present disclosure, the one or more antennas may be a plurality of physical antennas or a plurality of logical antennas (e.g., antenna ports).
The one or more transceivers106and206may convert received radio signals/channels, etc., from RF band signals into baseband signals in order to process received user data, control information, radio signals/channels, etc., using the one or more processors102and202. The one or more transceivers106and206may convert the user data, control information, radio signals/channels, etc., processed using the one or more processors102and202from baseband signals into RF band signals. To this end, the one or more transceivers106and206may include (analog) oscillators and/or filters. For example, the transceivers106and206can up-convert OFDM baseband signals to a carrier frequency by their (analog) oscillators and/or filters under the control of the processors102and202and transmit the up-converted OFDM signals at the carrier frequency. The transceivers106and206may receive OFDM signals at a carrier frequency and down-convert the OFDM signals into OFDM baseband signals by their (analog) oscillators and/or filters under the control of the processors102and202. In the implementations of the present disclosure, a UE may operate as a transmitting device in uplink (UL) and as a receiving device in downlink (DL). In the implementations of the present disclosure, a BS may operate as a receiving device in UL and as a transmitting device in DL. Hereinafter, for convenience of description, it is mainly assumed that the first wireless device100acts as the UE, and the second wireless device200acts as the BS, unless otherwise mentioned or described. For example, the processor(s)102connected to, mounted on or launched in the first wireless device100may be configured to perform the UE behaviour according to an implementation of the present disclosure or control the transceiver(s)106to perform the UE behaviour according to an implementation of the present disclosure. The processor(s)202connected to, mounted on or launched in the second wireless device200may be configured to perform the BS behaviour according to an implementation of the present disclosure or control the transceiver(s)206to perform the BS behaviour according to an implementation of the present disclosure. FIG.3illustrates another example of a wireless device which can perform implementations of the present invention. The wireless device may be implemented in various forms according to a use-case/service (refer toFIG.1). Referring toFIG.3, wireless devices100and200may correspond to the wireless devices100and200ofFIG.2and may be configured by various elements, components, units/portions, and/or modules. For example, each of the wireless devices100and200may include a communication unit110, a control unit120, a memory unit130, and additional components140. The communication unit may include a communication circuit112and transceiver(s)114. For example, the communication circuit112may include the one or more processors102and202ofFIG.2and/or the one or more memories104and204ofFIG.2. For example, the transceiver(s)114may include the one or more transceivers106and206ofFIG.2and/or the one or more antennas108and208ofFIG.2. The control unit120is electrically connected to the communication unit110, the memory unit130, and the additional components140and controls the overall operation of the wireless devices. For example, the control unit120may control an electric/mechanical operation of the wireless device based on programs/code/commands/information stored in the memory unit130.
The control unit120may transmit the information stored in the memory unit130to the exterior (e.g., other communication devices) via the communication unit110through a wireless/wired interface, or store, in the memory unit130, information received through the wireless/wired interface from the exterior (e.g., other communication devices) via the communication unit110. The additional components140may be variously configured according to the types of wireless devices. For example, the additional components140may include at least one of a power unit/battery, an input/output (I/O) unit (e.g. audio I/O port, video I/O port), a driving unit, and a computing unit. The wireless device may be implemented in the form of, without being limited to, the robot (100aofFIG.1), the vehicles (100b-1and100b-2ofFIG.1), the XR device (100cofFIG.1), the hand-held device (100dofFIG.1), the home appliance (100eofFIG.1), the IoT device (100fofFIG.1), a digital broadcast terminal, a hologram device, a public safety device, an MTC device, a medicine device, a Fintech device (or a finance device), a security device, a climate/environment device, the AI server/device (400ofFIG.1), the BSs (200ofFIG.1), a network node, etc. The wireless device may be used in a mobile or fixed place according to a use-example/service. InFIG.3, the entirety of the various elements, components, units/portions, and/or modules in the wireless devices100and200may be connected to each other through a wired interface, or at least a part thereof may be wirelessly connected through the communication unit110. For example, in each of the wireless devices100and200, the control unit120and the communication unit110may be connected by wire, and the control unit120and first units (e.g.,130and140) may be wirelessly connected through the communication unit110. Each element, component, unit/portion, and/or module within the wireless devices100and200may further include one or more elements. For example, the control unit120may be configured by a set of one or more processors. As an example, the control unit120may be configured by a set of a communication control processor, an application processor, an electronic control unit (ECU), a graphics processing unit, and a memory control processor. As another example, the memory130may be configured by a random access memory (RAM), a dynamic RAM (DRAM), a read only memory (ROM), a flash memory, a volatile memory, a non-volatile memory, and/or a combination thereof. FIG.4illustrates an example of protocol stacks in a 3GPP based wireless communication system. In particular,FIG.4(a)illustrates an example of a radio interface user plane protocol stack between a UE and a base station (BS), andFIG.4(b)illustrates an example of a radio interface control plane protocol stack between a UE and a BS. The control plane refers to a path through which control messages used to manage calls by a UE and a network are transported. The user plane refers to a path through which data generated in an application layer, for example, voice data or Internet packet data, are transported. Referring toFIG.4(a), the user plane protocol stack may be divided into a first layer (Layer 1) (i.e., a physical (PHY) layer) and a second layer (Layer 2). Referring toFIG.4(b), the control plane protocol stack may be divided into Layer 1 (i.e., a PHY layer), Layer 2, Layer 3 (e.g., a radio resource control (RRC) layer), and a non-access stratum (NAS) layer. Layer 1, Layer 2 and Layer 3 are referred to as an access stratum (AS).
The NAS control protocol is terminated in an access and mobility management function (AMF) on the network side, and performs functions such as authentication, mobility management, security control, etc. In the 3GPP LTE system, the layer 2 is split into the following sublayers: medium access control (MAC), radio link control (RLC), and packet data convergence protocol (PDCP). In the 3GPP New Radio (NR) system, the layer 2 is split into the following sublayers: MAC, RLC, PDCP and SDAP. The PHY layer offers to the MAC sublayer transport channels, the MAC sublayer offers to the RLC sublayer logical channels, the RLC sublayer offers to the PDCP sublayer RLC channels, and the PDCP sublayer offers to the SDAP sublayer radio bearers. The SDAP sublayer offers to the 5G Core Network quality of service (QoS) flows. In the 3GPP NR system, the main services and functions of SDAP include: mapping between a QoS flow and a data radio bearer; and marking the QoS flow ID (QFI) in both DL and UL packets. A single protocol entity of SDAP is configured for each individual PDU session. In the 3GPP NR system, the main services and functions of the RRC sublayer include: broadcast of system information related to AS and NAS; paging initiated by the 5G core (5GC) or NG-RAN; establishment, maintenance and release of an RRC connection between the UE and NG-RAN; security functions including key management; establishment, configuration, maintenance and release of signalling radio bearers (SRBs) and data radio bearers (DRBs); mobility functions (including: handover and context transfer; UE cell selection and reselection and control of cell selection and reselection; Inter-RAT mobility); QoS management functions; UE measurement reporting and control of the reporting; detection of and recovery from radio link failure; and NAS message transfer to/from NAS from/to the UE. In the 3GPP NR system, the main services and functions of the PDCP sublayer for the user plane include: sequence numbering; header compression and decompression (ROHC only); transfer of user data; reordering and duplicate detection; in-order delivery; PDCP PDU routing (in case of split bearers); retransmission of PDCP SDUs; ciphering, deciphering and integrity protection; PDCP SDU discard; PDCP re-establishment and data recovery for RLC AM; PDCP status reporting for RLC AM; and duplication of PDCP PDUs and duplicate discard indication to lower layers. The main services and functions of the PDCP sublayer for the control plane include: sequence numbering; ciphering, deciphering and integrity protection; transfer of control plane data; reordering and duplicate detection; in-order delivery; and duplication of PDCP PDUs and duplicate discard indication to lower layers. The RLC sublayer supports three transmission modes: Transparent Mode (TM); Unacknowledged Mode (UM); and Acknowledged Mode (AM). The RLC configuration is per logical channel with no dependency on numerologies and/or transmission durations. In the 3GPP NR system, the main services and functions of the RLC sublayer depend on the transmission mode and include: transfer of upper layer PDUs; sequence numbering independent of the one in PDCP (UM and AM); error correction through ARQ (AM only); segmentation (AM and UM) and re-segmentation (AM only) of RLC SDUs; reassembly of SDUs (AM and UM); duplicate detection (AM only); RLC SDU discard (AM and UM); RLC re-establishment; and protocol error detection (AM only).
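The layer-to-layer services enumerated above can be summarized in a short sketch; the service chain restates the text directly, while the QFI-to-DRB values are invented purely for illustration.

# Each (sub)layer and the service it offers to the layer above it,
# exactly as described in the paragraph above.
SERVICE_CHAIN = [
    ("PHY", "transport channels", "MAC"),
    ("MAC", "logical channels", "RLC"),
    ("RLC", "RLC channels", "PDCP"),
    ("PDCP", "radio bearers", "SDAP"),
    ("SDAP", "QoS flows", "5G Core Network"),
]

# One SDAP entity per PDU session: map each QoS flow (QFI) to a data
# radio bearer (DRB) and mark the QFI in both DL and UL packets.
# The QFI/DRB assignments below are hypothetical.
qfi_to_drb = {5: "DRB1", 6: "DRB1", 9: "DRB2"}

for lower, service, upper in SERVICE_CHAIN:
    print(f"{lower} offers {service} to {upper}")
print("QFI 5 ->", qfi_to_drb[5])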
In the 3GPP NR system, the main services and functions of the MAC sublayer include: mapping between logical channels and transport channels; multiplexing/demultiplexing of MAC SDUs belonging to one or different logical channels into/from transport blocks (TB) delivered to/from the physical layer on transport channels; scheduling information reporting; error correction through HARQ (one HARQ entity per cell in case of carrier aggregation (CA)); priority handling between UEs by means of dynamic scheduling; priority handling between logical channels of one UE by means of logical channel prioritization; and padding. A single MAC entity may support multiple numerologies, transmission timings and cells. Mapping restrictions in logical channel prioritization control which numerology(ies), cell(s), and transmission timing(s) a logical channel can use. Different kinds of data transfer services are offered by MAC. To accommodate different kinds of data transfer services, multiple types of logical channels are defined, each supporting transfer of a particular type of information. Each logical channel type is defined by what type of information is transferred. Logical channels are classified into two groups: Control Channels and Traffic Channels. Control channels are used for the transfer of control plane information only, and traffic channels are used for the transfer of user plane information only. The Broadcast Control Channel (BCCH) is a downlink logical channel for broadcasting system control information. The Paging Control Channel (PCCH) is a downlink logical channel that transfers paging information, system information change notifications and indications of ongoing PWS broadcasts. The Common Control Channel (CCCH) is a logical channel for transmitting control information between UEs and the network, used for UEs having no RRC connection with the network. The Dedicated Control Channel (DCCH) is a point-to-point bi-directional logical channel that transmits dedicated control information between a UE and the network, used by UEs having an RRC connection. The Dedicated Traffic Channel (DTCH) is a point-to-point logical channel, dedicated to one UE, for the transfer of user information. A DTCH can exist in both uplink and downlink. In downlink, the following connections between logical channels and transport channels exist: BCCH can be mapped to BCH; BCCH can be mapped to downlink shared channel (DL-SCH); PCCH can be mapped to PCH; CCCH can be mapped to DL-SCH; DCCH can be mapped to DL-SCH; and DTCH can be mapped to DL-SCH. In uplink, the following connections between logical channels and transport channels exist: CCCH can be mapped to uplink shared channel (UL-SCH); DCCH can be mapped to UL-SCH; and DTCH can be mapped to UL-SCH. These mappings are sketched below.
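For illustration, the downlink and uplink mappings listed above can be captured as small lookup tables. The following sketch renders that list in Python; the table and function names are ours, not from the specification.

```python
# Minimal sketch of the logical-to-transport channel mappings listed above.

DL_CHANNEL_MAP = {
    "BCCH": {"BCH", "DL-SCH"},   # system information
    "PCCH": {"PCH"},             # paging
    "CCCH": {"DL-SCH"},
    "DCCH": {"DL-SCH"},
    "DTCH": {"DL-SCH"},
}

UL_CHANNEL_MAP = {
    "CCCH": {"UL-SCH"},
    "DCCH": {"UL-SCH"},
    "DTCH": {"UL-SCH"},
}

def can_map(direction: str, logical: str, transport: str) -> bool:
    """Return True if the logical channel may be carried on the transport channel."""
    table = DL_CHANNEL_MAP if direction == "DL" else UL_CHANNEL_MAP
    return transport in table.get(logical, set())

assert can_map("DL", "BCCH", "DL-SCH")
assert not can_map("UL", "DTCH", "DL-SCH")
```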
FIG.5illustrates an example of a frame structure in a 3GPP based wireless communication system. The frame structure illustrated inFIG.5is purely exemplary, and the number of subframes, the number of slots, and/or the number of symbols in a frame may be variously changed. In the 3GPP based wireless communication system, OFDM numerologies (e.g., subcarrier spacing (SCS), transmission time interval (TTI) duration) may be differently configured between a plurality of cells aggregated for one UE. For example, if a UE is configured with different SCSs for the cells aggregated for the UE, an (absolute time) duration of a time resource (e.g. a subframe, a slot, or a TTI) including the same number of symbols may be different among the aggregated cells. Herein, symbols may include OFDM symbols (or CP-OFDM symbols) and SC-FDMA symbols (or discrete Fourier transform-spread-OFDM (DFT-s-OFDM) symbols). Referring toFIG.5, downlink and uplink transmissions are organized into frames. Each frame has a duration Tf=10 ms. Each frame is divided into two half-frames, where each of the half-frames has a 5 ms duration. Each half-frame consists of 5 subframes, where the duration Tsf per subframe is 1 ms. Each subframe is divided into slots, and the number of slots in a subframe depends on the subcarrier spacing. Each slot includes 14 or 12 OFDM symbols based on the cyclic prefix (CP): with a normal CP, each slot includes 14 OFDM symbols, and with an extended CP, each slot includes 12 OFDM symbols. The numerology is based on an exponentially scalable subcarrier spacing Δf=2^u*15 kHz. The following table shows the number of OFDM symbols per slot (N^slot_symb), the number of slots per frame (N^frame,u_slot), and the number of slots per subframe (N^subframe,u_slot) for the normal CP, according to the subcarrier spacing Δf=2^u*15 kHz.

TABLE 1
u | N^slot_symb | N^frame,u_slot | N^subframe,u_slot
0 | 14 | 10 | 1
1 | 14 | 20 | 2
2 | 14 | 40 | 4
3 | 14 | 80 | 8
4 | 14 | 160 | 16

The following table shows the number of OFDM symbols per slot, the number of slots per frame, and the number of slots per subframe for the extended CP, according to the subcarrier spacing Δf=2^u*15 kHz.

TABLE 2
u | N^slot_symb | N^frame,u_slot | N^subframe,u_slot
2 | 12 | 40 | 4

A slot includes plural symbols (e.g., 14 or 12 symbols) in the time domain. For each numerology (e.g. subcarrier spacing) and carrier, a resource grid of N^size,u_grid,x*N^RB_sc subcarriers and N^subframe,u_symb OFDM symbols is defined, starting at common resource block (CRB) N^start,u_grid indicated by higher-layer signaling (e.g. radio resource control (RRC) signaling), where N^size,u_grid,x is the number of resource blocks in the resource grid and the subscript x is DL for downlink and UL for uplink. N^RB_sc is the number of subcarriers per resource block; in the 3GPP based wireless communication system, N^RB_sc is generally 12. There is one resource grid for a given antenna port p, subcarrier spacing configuration u, and transmission direction (DL or UL). The carrier bandwidth N^size,u_grid for subcarrier spacing configuration u is given by a higher-layer parameter (e.g. RRC parameter). Each element in the resource grid for the antenna port p and the subcarrier spacing configuration u is referred to as a resource element (RE), and one complex symbol may be mapped to each RE. Each RE in the resource grid is uniquely identified by an index k in the frequency domain and an index l representing a symbol location relative to a reference point in the time domain. In the 3GPP based wireless communication system, a resource block is defined by 12 consecutive subcarriers in the frequency domain. In the 3GPP NR system, resource blocks are classified into CRBs and physical resource blocks (PRBs). CRBs are numbered from 0 and upwards in the frequency domain for subcarrier spacing configuration u. The center of subcarrier 0 of CRB 0 for subcarrier spacing configuration u coincides with 'point A', which serves as a common reference point for resource block grids. In the 3GPP NR system, PRBs are defined within a bandwidth part (BWP) and numbered from 0 to N^size_BWP,i−1, where i is the number of the bandwidth part. The relation between the physical resource block nPRB in bandwidth part i and the common resource block nCRB is as follows: nCRB=nPRB+N^start_BWP,i, where N^start_BWP,i is the common resource block where the bandwidth part starts relative to CRB 0. The BWP includes a plurality of consecutive resource blocks. A minimal sketch of these relations is given below.
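As an illustration of the relations above, the following sketch (with assumed helper names) computes the normal-CP slot counts of Table 1 and converts a PRB index within a BWP to a CRB index.

```python
# Minimal sketch: slots scale as 2^u while a slot keeps 14 symbols under
# normal CP, and a PRB index maps to a CRB index by adding the BWP start.

def slots_per_subframe(u: int) -> int:
    """Number of slots in a 1 ms subframe for subcarrier spacing 2^u * 15 kHz."""
    return 2 ** u

def slots_per_frame(u: int) -> int:
    """Number of slots in a 10 ms frame."""
    return 10 * slots_per_subframe(u)

def prb_to_crb(n_prb: int, n_start_bwp: int) -> int:
    """nCRB = nPRB + N^start_BWP,i (the CRB where the BWP starts)."""
    return n_prb + n_start_bwp

# u = 2 (60 kHz SCS): 4 slots per subframe and 40 per frame, as in Table 1.
assert slots_per_subframe(2) == 4 and slots_per_frame(2) == 40
# PRB 0 of a BWP starting at CRB 100 is CRB 100.
assert prb_to_crb(0, 100) == 100
```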
A carrier may include a maximum of N (e.g., 5) BWPs. A UE may be configured with one or more BWPs on a given component carrier. Only one BWP among the BWPs configured to the UE can be active at a time. The active BWP defines the UE's operating bandwidth within the cell's operating bandwidth. NR frequency bands are defined as 2 types of frequency range, FR1 and FR2. FR2 may also be called millimeter wave (mmW). The frequency ranges in which NR can operate are identified as described in Table 3.

TABLE 3
Frequency range designation | Corresponding frequency range | Subcarrier Spacing
FR1 | 450 MHz-7125 MHz | 15, 30, 60 kHz
FR2 | 24250 MHz-52600 MHz | 60, 120, 240 kHz

FIG.6illustrates a data flow example in the 3GPP NR system. InFIG.6, "RB" denotes a radio bearer, and "H" denotes a header. Radio bearers are categorized into two groups: data radio bearers (DRB) for user plane data and signalling radio bearers (SRB) for control plane data. The MAC PDU is transmitted/received using radio resources through the PHY layer to/from an external device. The MAC PDU arrives at the PHY layer in the form of a transport block. In the PHY layer, the uplink transport channels UL-SCH and RACH are mapped to the physical uplink shared channel (PUSCH) and the physical random access channel (PRACH), respectively, and the downlink transport channels DL-SCH, BCH and PCH are mapped to the physical downlink shared channel (PDSCH), the physical broadcast channel (PBCH) and the PDSCH, respectively. In the PHY layer, uplink control information (UCI) is mapped to PUCCH, and downlink control information (DCI) is mapped to PDCCH. A MAC PDU related to UL-SCH is transmitted by a UE via a PUSCH based on an UL grant, and a MAC PDU related to DL-SCH is transmitted by a BS via a PDSCH based on a DL assignment. Hereinafter, the Buffer Status Reporting (BSR) procedure in the NR system is described. The BSR procedure is used to provide the serving gNB with information about the UL data volume in the MAC entity. RRC configures the following parameters to control the BSR: periodicBSR-Timer; retxBSR-Timer; logicalChannelSR-DelayTimerApplied; logicalChannelSR-DelayTimer; logicalChannelSR-Mask; and logicalChannelGroup. Each logical channel may be allocated to an LCG using the logicalChannelGroup. The maximum number of LCGs is eight. The MAC entity determines the amount of UL data available for a logical channel according to the data volume calculation procedure. Hereinafter, logical channel prioritization in the NR system is described. The Logical Channel Prioritization procedure is applied when a new transmission is performed. RRC controls the scheduling of uplink data by signalling, for each logical channel: priority, where an increasing priority value indicates a lower priority level; prioritisedBitRate, which sets the Prioritized Bit Rate (PBR); and bucketSizeDuration, which sets the Bucket Size Duration (BSD). The MAC entity shall maintain a variable Bj for each logical channel j. Bj shall be initialized to zero when the related logical channel is established, and incremented by the product PBR×TTI duration for each TTI, where PBR is the Prioritized Bit Rate of logical channel j. However, the value of Bj can never exceed the bucket size: if the value of Bj is larger than the bucket size of logical channel j, it shall be set to the bucket size. The bucket size of a logical channel is equal to PBR×BSD, where PBR and BSD are configured by upper layers. A minimal sketch of this token-bucket bookkeeping is given below.
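The following is a minimal, illustrative sketch of the Bj bookkeeping described above; the class and attribute names are ours, and a real implementation tracks bits and integrates with the actual TTI timing.

```python
# Bj grows by PBR * TTI each TTI and is clipped at the bucket size PBR * BSD.

class LogicalChannelBucket:
    def __init__(self, pbr_bytes_per_ms: float, bsd_ms: float):
        self.pbr = pbr_bytes_per_ms                   # Prioritized Bit Rate (PBR)
        self.bucket_size = pbr_bytes_per_ms * bsd_ms  # bucket size = PBR * BSD
        self.bj = 0.0                                 # initialized to zero on establishment

    def on_tti(self, tti_ms: float) -> None:
        """Increment Bj by PBR * TTI duration, clipped at the bucket size."""
        self.bj = min(self.bj + self.pbr * tti_ms, self.bucket_size)

    def on_served(self, served_bytes: int) -> None:
        """Decrement Bj by the MAC SDU bytes served in Step 1; may go negative."""
        self.bj -= served_bytes

ch = LogicalChannelBucket(pbr_bytes_per_ms=100, bsd_ms=50)
for _ in range(1000):
    ch.on_tti(1.0)
assert ch.bj == ch.bucket_size  # clipped at PBR * BSD
```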
The MAC entity shall perform the following Logical Channel Prioritization procedure when a new transmission is performed. The MAC entity shall allocate resources to the logical channels in the following steps 1-3:

Step 1: All the logical channels with Bj>0 are allocated resources in decreasing priority order. If the PBR of a logical channel is set to "infinity", the MAC entity shall allocate resources for all the data that is available for transmission on the logical channel before meeting the PBR of the lower priority logical channel(s).

Step 2: The MAC entity shall decrement Bj by the total size of the MAC SDUs served to logical channel j in Step 1. Note that the value of Bj can be negative.

Step 3: If any resources remain, all the logical channels are served in strict decreasing priority order (regardless of the value of Bj) until either the data for that logical channel or the UL grant is exhausted, whichever comes first. Logical channels configured with equal priority should be served equally.

The UE shall also follow the rules (1)-(4) below during the scheduling procedures above: (1) the UE should not segment an RLC SDU (or partially transmitted SDU or retransmitted RLC PDU) if the whole SDU (or partially transmitted SDU or retransmitted RLC PDU) fits into the remaining resources of the associated MAC entity; (2) if the UE segments an RLC SDU from the logical channel, it shall maximize the size of the segment to fill the grant of the associated MAC entity as much as possible; (3) the UE should maximise the transmission of data; and (4) if the MAC entity is given an UL grant size that is equal to or larger than 4 bytes while having data available for transmission, the MAC entity shall not transmit only a padding BSR and/or padding. The MAC entity shall not generate a MAC PDU for the HARQ entity if the following conditions are satisfied: the MAC entity is configured with skipUplinkTxDynamic with value true and the grant indicated to the HARQ entity was addressed to a C-RNTI, or the grant indicated to the HARQ entity is a configured uplink grant; and there is no aperiodic CSI requested for this PUSCH transmission; and the MAC PDU includes zero MAC SDUs; and the MAC PDU includes only the periodic BSR and there is no data available for any LCG, or the MAC PDU includes only the padding BSR. Logical channels shall be prioritised in accordance with the following order (highest priority listed first): a) C-RNTI MAC CE or data from UL-CCCH; b) Configured Grant Confirmation MAC CE; c) MAC CE for BSR, with the exception of BSR included for padding; d) Single Entry PHR MAC CE or Multiple Entry PHR MAC CE; e) data from any logical channel, except data from UL-CCCH; f) MAC CE for Recommended bit rate query; and g) MAC CE for BSR included for padding. A minimal sketch of the allocation steps 1-3 is given below.
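The following simplified sketch illustrates allocation steps 1-3. It caps Step 1 service at Bj and omits mapping restrictions, the PBR "infinity" case, and MAC CE ordering, so it is an approximation of the procedure rather than a normative implementation.

```python
# Channels are (priority, bj, pending_bytes) records; a lower priority value
# means a higher priority level.

def lcp_allocate(channels: list[dict], grant_bytes: int) -> dict[str, int]:
    served: dict[str, int] = {}
    by_priority = sorted(channels, key=lambda c: c["priority"])

    # Step 1: channels with Bj > 0 are served in decreasing priority order,
    # here capped at Bj (the PBR-governed share).
    for ch in by_priority:
        if grant_bytes == 0:
            break
        if ch["bj"] > 0:
            take = min(ch["pending"], int(ch["bj"]), grant_bytes)
            if take:
                served[ch["name"]] = served.get(ch["name"], 0) + take
                ch["pending"] -= take
                grant_bytes -= take
                ch["bj"] -= take  # Step 2: Bj decremented; may go negative

    # Step 3: any remaining grant is spent in strict priority order, ignoring Bj.
    for ch in by_priority:
        if grant_bytes == 0:
            break
        take = min(ch["pending"], grant_bytes)
        if take:
            served[ch["name"]] = served.get(ch["name"], 0) + take
            ch["pending"] -= take
            grant_bytes -= take
    return served

chs = [
    {"name": "lc1", "priority": 1, "bj": 300, "pending": 500},
    {"name": "lc2", "priority": 2, "bj": 0, "pending": 400},
]
print(lcp_allocate(chs, 600))  # {'lc1': 500, 'lc2': 100}
```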
In the LTE system and the NR system, a logical channel group (LCG) is used to trigger and report a buffer size regarding a logical channel. Specifically, if a logical channel belongs to an LCG, new data arrival for that logical channel triggers a BSR, whereas a logical channel not belonging to any LCG does not trigger a BSR even if new data arrives for that logical channel. The reason was that the logical channel not belonging to any LCG is likely to be of lower priority, so that there is no need to trigger a BSR. For these kinds of lower priority logical channels, it is sufficient to transmit the data when the received UL grant remains after including all higher priority logical channel data. The scheduler may be able to know the characteristics of the traffic, e.g., data size and periodicity, so that the UE waits until it is scheduled rather than triggering or reporting a BSR unnecessarily. Meanwhile, an IAB (integrated access and backhaul) based radio access network (RAN) architecture consists of one or more IAB nodes, which support wireless access to UEs and wirelessly backhaul the access traffic, and one or more IAB donors, which provide the UE's interface to the core network and wireless backhauling functionality to the IAB nodes. FIG.7shows an example of IAB based RAN architectures. Each adaptation layer of these IAB nodes and IAB donors carries the following information in order to identify the UE and/or radio bearer for control-plane or user-plane data: UE-bearer-specific ID; UE-specific ID; Route ID, IAB-node or IAB-donor address; and QoS information. Recently, according to the NR MAC layer standard, if more than one logical channel group (LCG) has data available for transmission when a MAC PDU containing a BSR is to be built, a MAC entity of a wireless node (for example, a UE or IAB node) shall report a Long BSR for all LCGs which have data available for transmission. When a Long BSR is reported, a 1-byte LCG field is always included in the Long BSR format regardless of how many LCGs have data available for transmission, i.e., the LCG field length is fixed so as to be able to contain the maximum LCG ID, which is 7. In the NR standard, a unified design was selected to support 1:1 and N:1 bearer mapping together. However, to support 1:1 bearer mapping in IAB, each logical channel (LoCH) should be associated with one bearer at every hop over the path. This means that if an IAB node supports 100 UEs and each UE has 25 bearers, the IAB node should have at least 2,500 LoCHs to support 1:1 bearer mapping. With this understanding, RAN2 identified that the current LCID space and LCG space would not be enough to support 1:1 bearer mapping in IAB. Thus, if the LCG space increases, a new BSR format should be defined. If a new BSR format follows the current BSR format and the possible LCG ID increases up to 31, a 4-byte LCG field should always be included in the BSR format. Considering the current BSR format with the increased LCG ID larger than 7, however, a generated BSR MAC CE may have unnecessary overhead: even though an LCG has no data available for transmission, the LCG should be indicated with 0 in the LCG field of the BSR MAC CE and be reported to a network. More specifically, when a 4-byte LCG field is used to cover LCG ID=31, even if only one LCG, whose ID is 0, has data available for transmission and is to be reported to the network, the BSR MAC CE should include the 4-byte LCG field, and only the corresponding bit for LCG ID=0 is set to 1 while all other bits for LCG ID=1 through LCG ID=31 are set to 0 unnecessarily. This means that 3 bytes of meaningless overhead are required to report LCG ID=8 through LCG ID=31 with the current BSR format.
This overhead would be even worse for an IAB node, because IAB nodes which are close to the IAB donor should support many UE bearers to provide 1:1 bearer mapping, possibly a couple of hundred logical channels. Given that 8 LCGs for 32 logical channels are provided in the current MAC specification, a couple of hundred logical channels may require 8 times the logical channel ID (LCID) space, and the LCG space may increase proportionally to the increased LCID space for logical channels. Therefore, this unnecessary overhead should be removed and a new format of a BSR MAC CE should be considered. FIG.8illustrates a BSR procedure according to the present disclosure. Referring toFIG.8, a wireless node (for example, a UE or IAB node) may generate a BSR MAC CE including an LOP field, an LCG field and a buffer size field (S1001). Here, the LOP field can be expressed as a first bitmap field. Next, the wireless node transmits the BSR MAC CE (S1002). In generating a Buffer Status Report (BSR) MAC Control Element (CE) consisting of multiple octets including LCG field, the UE or IAB node (or a MAC entity at the UE/IAB node) includes an octet including LCG field only if at least one LCG among the LCGs associated with that octet has data available for transmission. For example, the BSR MAC CE may include an octet for LCGi to LCGi+7, where i=0, 8, 16, . . . , only when a buffer size field for at least one LCG among LCGi to LCGi+7 is included in the BSR MAC CE. Here, each of the multiple octets can be expressed as a second bitmap field. That is, if multiple octets are included in the BSR MAC CE, it can be understood that the BSR MAC CE includes multiple second bitmap fields. In other words, the UE or IAB node (or a MAC entity at the UE/IAB node) does not include an octet including LCG field if no LCG among the LCGs associated with that octet has data available for transmission. In order to inform a network which receives a BSR MAC CE of the presence of an octet including LCG field, the UE or IAB node (or a MAC entity at the UE/IAB node) includes an LCG Octet Presence (LOP) field indicating whether a corresponding octet including LCG field is present or not in the BSR MAC CE. The maximum number of octets to be included in the BSR MAC CE and the size of the LOP field (e.g. the number of LOP bits) may be determined based on the maximum number of LCGs that can be configured to the UE or IAB node, or based on the highest LCG ID configured to the UE or IAB node, as sketched below.
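The following sketch (with assumed function names) illustrates the sizing rule and the overhead saving just described, for a highest configured LCG ID of 31.

```python
# The number of LCG octets, and hence of LOP bits, follows from the highest
# configured LCG ID; only octets covering a non-empty LCG are carried.

def num_lcg_octets(highest_lcg_id: int) -> int:
    """Octets needed so that each configured LCG has a bit (8 LCGs per octet)."""
    return highest_lcg_id // 8 + 1

def lcg_field_bytes_fixed(highest_lcg_id: int) -> int:
    """Fixed-format cost: every LCG octet is always present."""
    return num_lcg_octets(highest_lcg_id)

def lcg_field_bytes_lop(lcgs_with_data: set[int]) -> int:
    """LOP-based cost: only octets covering a non-empty LCG are present."""
    return len({lcg // 8 for lcg in lcgs_with_data})

# LCG IDs up to 31 with only LCG 0 non-empty: 4 bytes of LCG field in the
# fixed format versus 1 byte with the LOP-based format (3 bytes avoided).
assert lcg_field_bytes_fixed(31) == 4
assert lcg_field_bytes_lop({0}) == 1
```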
When this procedure is applied, the behavior of the wireless node (e.g., the MAC entity at the wireless node) is as follows. A wireless node is configured with at least one LCG, which is associated with an LCG ID, by receiving a configuration of the LCG via L2 or L3 signaling. The L2 or L3 signaling can be one of MAC, RLC, PDCP, or RRC signaling. The wireless node has a BSR triggered and not cancelled, and an UL grant that can be used for transmission of a BSR MAC CE. The wireless node may receive the UL grant dynamically on a physical downlink control channel (PDCCH) or in a random access response (RAR), or may be configured with a Configured Grant (CG) from a network. The wireless node generates the BSR MAC CE as follows, where the BSR MAC CE consists of one or more octets including LCG field, an LOP field, and Buffer Size (BS) fields. The BSR MAC CE can include multiple octets including LCG field. Each octet including LCG field can indicate the presence of BS fields of multiple LCGs, e.g., 8 LCGs. The wireless node determines the presence of each octet among the multiple octets including LCG field according to the conditions below. An octet including LCG field is present if at least one LCG among the LCGs associated with that octet has data available for transmission. For example, if an octet including LCG field indicates the presence of BS fields for LCG0to LCG7, the UE or IAB node (or a MAC entity at the UE/IAB node) includes the octet in a BSR MAC CE if at least one of LCG0to LCG7has data available for transmission. An octet including LCG field is not present if no LCG among the LCGs associated with that octet has data available for transmission. For example, if an octet including LCG field indicates the presence of BS fields for LCG0to LCG7, the UE or IAB node (or a MAC entity at the UE/IAB node) does not include the octet if no LCG among LCG0to LCG7has data available for transmission. Here, "an LCG has data available for transmission" means that at least one logical channel belonging to the LCG has data available for transmission. The MAC entity includes octets including LCG field according to the determination above, and indicates the presence of each octet including LCG field via the LOP field. The number of LOP bits is based on the maximum number of octets including LCG field. For example, if a maximum of 4 octets including LCG field can be included in a BSR MAC CE, 4 bits of the LOP field indicate the presence of the corresponding octets including LCG field. The UE or IAB node (or a MAC entity at the UE/IAB node) sets each LOP bit to a value based on the presence of the corresponding octet including LCG field. An LOP bit is set to 1 if the corresponding octet including LCG field is present according to the determination above. In other words, an LOP bit set to 1 indicates that at least one LCG among the LCGs associated with the octet corresponding to that LOP bit has data available for transmission. An LOP bit is set to 0 if the corresponding octet including LCG field is not present according to the determination above. In other words, an LOP bit set to 0 indicates that no LCG among the LCGs associated with the octet corresponding to that LOP bit has data available for transmission. The wireless node includes a BS field for each LCG having data available for transmission, and transmits the generated BSR MAC CE to a network by using the UL grant. In the MAC subheader corresponding to the BSR MAC CE including the LOP field and one or more octets including LCG field, the wireless node identifies the BSR MAC CE by using an LCID value which is different from the LCID values used for other BSR MAC CEs. Then, the wireless node may receive an UL grant to transmit the UL data available for a logical channel in response to the transmitted BSR MAC CE, and transmits the UL data from the logical channel which has data available for transmission to the network using the received UL grant. A minimal encoder sketch of this generation procedure is given below.
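The following is a minimal, hypothetical encoder for such a BSR MAC CE. The exact field layout (an LOP octet first, followed by the present LCG octets and their BS fields) is an assumption for illustration; the actual format would be fixed by the disclosure's figures and specification text.

```python
# buffer_status maps LCG ID -> buffer-size index (0 means no data available).

def encode_bsr(buffer_status: dict[int, int], max_octets: int = 4) -> bytes:
    lop = 0
    body = bytearray()
    for octet_idx in range(max_octets):
        lcgs = range(8 * octet_idx, 8 * octet_idx + 8)
        if not any(buffer_status.get(lcg, 0) for lcg in lcgs):
            continue  # octet absent: no LCG in this group has data
        lop |= 1 << octet_idx  # LOP bit set to 1: octet present
        lcg_octet = 0
        bs_fields = bytearray()
        for bit, lcg in enumerate(lcgs):
            bs = buffer_status.get(lcg, 0)
            if bs:
                lcg_octet |= 1 << bit
                bs_fields.append(bs)  # one BS field per reported LCG
        body.append(lcg_octet)
        body.extend(bs_fields)
    return bytes([lop]) + bytes(body)

# Only LCG 0 and LCG 31 have data: LOP = 0b00001001, two LCG octets follow.
ce = encode_bsr({0: 57, 31: 12})
assert ce[0] == 0b00001001 and len(ce) == 5  # LOP + 2 LCG octets + 2 BS fields
```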
FIG.9illustrates an example of a BSR MAC control element (CE) according to an implementation of the present disclosure. InFIG.9, it is assumed that a maximum of 4 octets including LCG field can be included in a BSR MAC CE. Referring toFIG.9, OX0, OX1, OX2and OX3are an LOP field for an octet including LCG field for LCG0to LCG7, an LOP field for an octet including LCG field for LCG8to LCG15, an LOP field for an octet including LCG field for LCG16to LCG23and an LOP field for an octet including LCG field for LCG24to LCG31, respectively. FIG.9(a)illustrates a case where only an octet including LCG field for LCG16to LCG23is present in the BSR MAC CE, because at least one LCG among LCG16to LCG23has data available for transmission whereas no LCG among LCG0to LCG7, LCG8to LCG15, and LCG24to LCG31has data available for transmission. FIG.9(b)illustrates a case where only octets including LCG field for LCG0to LCG7and LCG24to LCG31are present in the BSR MAC CE, because at least one LCG among LCG0to LCG7and at least one LCG among LCG24to LCG31have data available for transmission whereas no LCG among LCG8to LCG15and LCG16to LCG23has data available for transmission. FIG.9(c)illustrates a case where all 4 octets including LCG field are present, because at least one LCG among LCG0to LCG7, at least one LCG among LCG8to LCG15, at least one LCG among LCG16to LCG23, and at least one LCG among LCG24to LCG31have data available for transmission. Hereinafter, the network behavior according to the present disclosure is described. When a network receives a BSR MAC CE including an LOP field and one or more octets including LCG field, the network recognizes the BS of each LCG as follows. Specifically, the network checks the presence of each octet among the multiple octets including LCG field based on the LOP field of the BSR MAC CE. If an LOP bit is 1, the corresponding octet including LCG field is present. In this case, if an LCG bit in the corresponding octet is 1, the BS field of the corresponding LCG is present and the network recognizes the BS of the corresponding LCG. Meanwhile, if an LCG bit in the corresponding octet is 0, the BS field of the corresponding LCG is not present and the network recognizes the corresponding LCG as having no data available for transmission. If an LOP bit is 0, the corresponding octet including LCG field is not present; the network does not check any LCG bit of the corresponding octet and recognizes all LCGs in the corresponding octet as having no data available for transmission. In response to the received BSR MAC CE including the LOP field and one or more octets including LCG field, the network provides an UL grant to the UE, and receives a MAC PDU including user data via the provided UL grant. A minimal decoder sketch of this network behavior is given below.
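A matching network-side decoder sketch, under the same assumed layout as the encoder above, is:

```python
# Walk the LOP bitmap; for each present LCG octet, read one BS field per
# LCG bit that is set.

def decode_bsr(ce: bytes, max_octets: int = 4) -> dict[int, int]:
    lop, pos = ce[0], 1
    report: dict[int, int] = {}
    for octet_idx in range(max_octets):
        if not (lop >> octet_idx) & 1:
            continue  # LOP bit 0: all LCGs of this octet have no data
        lcg_octet = ce[pos]
        pos += 1
        for bit in range(8):
            if (lcg_octet >> bit) & 1:
                report[8 * octet_idx + bit] = ce[pos]  # BS field present
                pos += 1
    return report

assert decode_bsr(bytes([0b00001001, 0b00000001, 57, 0b10000000, 12])) == {0: 57, 31: 12}
```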
FIG.10illustrates an example of a BSR MAC control element (CE) transmission procedure according to an implementation of the present disclosure. Further,FIG.11shows an example of a BSR MAC CE generated through the BSR MAC CE transmission procedure according to an implementation of the present disclosure. Referring toFIG.10andFIG.11, the network configures LCG ID=31 to a UE or an IAB node. The MAC entity at the UE or IAB node can include up to four octets (i.e., four second bitmap fields) including LCG field in a BSR MAC CE. Then, the MAC entity triggers a BSR and may generate a BSR MAC CE when the MAC entity has a UL grant. Referring toFIG.11, while generating a BSR MAC CE after receiving a UL grant, the MAC entity sets LOP0and LOP3(i.e., the first and fourth bits of the first bitmap field) to 1 (with the other LOP bits set to 0), includes the associated two octets (i.e., two second bitmap fields) including LCG field, and sets LCG0and LCG31to 1, because only the LCG whose LCG ID is 0 and the LCG whose LCG ID is 31 have data available for transmission. The corresponding Buffer Size fields are also included. The generated BSR MAC CE may look likeFIG.11. After generating the BSR MAC CE including the two octets (i.e., two second bitmap fields) including LCG field and the associated Buffer Size information, the MAC entity transmits the generated BSR MAC CE using the UL grant. The BSR MAC CE(s) in the present disclosure is(are) transmitted/received on a physical channel (e.g., PUSCH) based on resource allocation (e.g., UL grant). In the present disclosure, uplink resource allocation is also referred to as an uplink grant, and downlink resource allocation is also referred to as a downlink assignment. The resource allocation includes time domain resource allocation and frequency domain resource allocation. In the present disclosure, an uplink grant is either received by the UE dynamically on PDCCH, in a Random Access Response, or configured to the UE semi-persistently by RRC. In the present disclosure, a downlink assignment is either received by the UE dynamically on the PDCCH, or configured to the UE semi-persistently by RRC signalling from the BS. In the present disclosure, a processor (hereinafter, UE processor), which is mounted on, installed on, or connected to a UE or IAB node, may be configured to generate a BSR MAC CE such that the BSR MAC CE includes an LOP field, an LCG field configured with one or more octets, and a buffer size field, as described above. The UE processor transmits (or controls a UE transceiver operably coupled to the UE processor to transmit) the BSR MAC CE based on the UL grant available to the UE processor. In the present disclosure, a transceiver (hereinafter, BS transceiver) at a BS may receive the BSR MAC CE based on the UL grant under the control of a processor (BS processor) operably coupled to the BS transceiver. The BS processor may receive the BSR MAC CE, and determine whether the BSR MAC CE has an octet for LCGi to LCGi+7 based on the LOP field for the octet for LCGi to LCGi+7, where i=0, 8, 16 . . . . If the octet for LCGi to LCGi+7 is present in the BSR MAC CE, the BS processor may determine whether the BSR MAC CE has respective buffer size information for LCGi to LCGi+7. When the LCG space increases, a new BSR format with the increased LCG space should be defined. One possible option to define a new BSR format for the increased LCG space is to add more LCG field octets as in the current BSR format, i.e., a fixed size of LCG field is always included. If the LCG ID increases up to 31, 4 bytes of LCG field would always be included in a BSR MAC CE regardless of whether each LCG has data available for transmission or not. As explained above, however, this approach may cause unnecessary overhead. As mentioned above, the LOP field can be expressed as a first bitmap field, and each of the multiple octets can be expressed as a second bitmap field. In this case, the essential procedures of the present disclosure may be summarized as follows: the BSR includes a first bitmap; each bit of the first bitmap indicates whether a corresponding second bitmap is present or not in the BSR; and each bit of at least one second bitmap which is present in the BSR indicates whether buffer size information of a corresponding LCG is present or not in the BSR. According to the invention described above, each BSR MAC CE consists of multiple octets including LCG field and an LOP field, and the UE includes an octet including LCG field in the BSR MAC CE only when at least one LCG among the LCGs associated with that octet has data available for transmission. This enables the UE to avoid unnecessary overhead in the BSR MAC CE.
In order to transmit data unit(s) of the present disclosure on UL-SCH, a UE shall have uplink resources available to the UE. In order to receive data unit(s) of the present disclosure on DL-SCH, a UE shall have downlink resources available to the UE. The resource allocation includes time domain resource allocation and frequency domain resource allocation. In the present disclosure, uplink resource allocation is also referred to as uplink grant, and downlink resource allocation is also referred to as downlink assignment. An uplink grant is either received by the UE dynamically on PDCCH, in a Random Access Response, or configured to the UE semi-persistently by RRC. Downlink assignment is either received by the UE dynamically on the PDCCH, or configured to the UE semi-persistently by RRC signaling from the BS. In UL, the BS can dynamically allocate resources to UEs via the Cell Radio Network Temporary Identifier (C-RNTI) on PDCCH(s). A UE always monitors the PDCCH(s) in order to find possible grants for uplink transmission when its downlink reception is enabled (activity governed by discontinuous reception (DRX) when configured). In addition, with Configured Grants, the BS can allocate uplink resources for the initial HARQ transmissions to UEs. Two types of configured uplink grants are defined: Type 1 and Type 2. With Type 1, RRC directly provides the configured uplink grant (including the periodicity). With Type 2, RRC defines the periodicity of the configured uplink grant while PDCCH addressed to Configured Scheduling RNTI (CS-RNTI) can either signal and activate the configured uplink grant, or deactivate it; i.e. a PDCCH addressed to CS-RNTI indicates that the uplink grant can be implicitly reused according to the periodicity defined by RRC, until deactivated. In DL, the BS can dynamically allocate resources to UEs via the C-RNTI on PDCCH(s). A UE always monitors the PDCCH(s) in order to find possible assignments when its downlink reception is enabled (activity governed by DRX when configured). In addition, with Semi-Persistent Scheduling (SPS), the BS can allocate downlink resources for the initial HARQ transmissions to UEs: RRC defines the periodicity of the configured downlink assignments while PDCCH addressed to CS-RNTI can either signal and activate the configured downlink assignment, or deactivate it. In other words, a PDCCH addressed to CS-RNTI indicates that the downlink assignment can be implicitly reused according to the periodicity defined by RRC, until deactivated. <Resource Allocation by PDCCH (i.e. Resource Allocation by DCI)> PDCCH can be used to schedule DL transmissions on PDSCH and UL transmissions on PUSCH, where the downlink control information (DCI) on PDCCH includes: downlink assignments containing at least modulation and coding format (e.g., modulation and coding scheme (MCS) index IMCS), resource allocation, and hybrid-ARQ information related to DL-SCH; or uplink scheduling grants containing at least modulation and coding format, resource allocation, and hybrid-ARQ information related to UL-SCH. The size and usage of the DCI carried by one PDCCH are varied depending on DCI formats. For example, in the 3GPP NR system, DCI format 0_0 or DCI format 0_1 is used for scheduling of PUSCH in one cell, and DCI format 1_0 or DCI format 1_1 is used for scheduling of PDSCH in one cell. FIG.12illustrates an example of PDSCH time domain resource allocation by PDCCH, and an example of PUSCH time resource allocation by PDCCH. 
Downlink control information (DCI) carried by a PDCCH for scheduling PDSCH or PUSCH includes a value m giving a row index m+1 into an allocation table for PDSCH or PUSCH. Either a predefined default PDSCH time domain allocation A, B or C is applied as the allocation table for PDSCH, or the RRC configured pdsch-TimeDomainAllocationList is applied as the allocation table for PDSCH. Either a predefined default PUSCH time domain allocation A is applied as the allocation table for PUSCH, or the RRC configured pusch-TimeDomainAllocationList is applied as the allocation table for PUSCH. Which PDSCH time domain resource allocation configuration to apply and which PUSCH time domain resource allocation table to apply are determined according to a fixed/predefined rule (e.g. Table 5.1.2.1.1-1 in 3GPP TS 38.214 v15.3.0, Table 6.1.2.1.1-1 in 3GPP TS 38.214 v15.3.0). Each indexed row in the PDSCH time domain allocation configurations defines the slot offset K0, the start and length indicator SLIV (or directly the start symbol S and the allocation length L), and the PDSCH mapping type to be assumed in the PDSCH reception. Each indexed row in the PUSCH time domain allocation configurations defines the slot offset K2, the start and length indicator SLIV (or directly the start symbol S and the allocation length L), and the PUSCH mapping type to be assumed in the PUSCH transmission. K0 for PDSCH, or K2 for PUSCH, is the timing difference between a slot with a PDCCH and a slot with the PDSCH or PUSCH corresponding to the PDCCH. SLIV is a joint indication of the starting symbol S relative to the start of the slot with PDSCH or PUSCH, and the number L of consecutive symbols counting from the symbol S. For the PDSCH/PUSCH mapping type, there are two mapping types: one is Mapping Type A, where the demodulation reference signal (DMRS) is positioned in the 3rd or 4th symbol of a slot depending on the RRC signaling, and the other is Mapping Type B, where the DMRS is positioned in the first allocated symbol. The scheduling DCI includes the Frequency domain resource assignment field, which provides assignment information on resource blocks used for PDSCH or PUSCH. For example, the Frequency domain resource assignment field may provide a UE with information on a cell for PDSCH or PUSCH transmission, information on a bandwidth part for PDSCH or PUSCH transmission, and information on resource blocks for PDSCH or PUSCH transmission. A minimal sketch of the SLIV joint encoding is given below.
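The SLIV joint encoding follows the commonly used rule of 3GPP TS 38.214 for 14-symbol slots; the following sketch encodes and decodes it (the brute-force decoder is for illustration only).

```python
# SLIV encoding of start symbol S and length L, with 0 < L <= 14 - S.

def to_sliv(S: int, L: int, n_symb: int = 14) -> int:
    if not (0 < L <= n_symb - S):
        raise ValueError("invalid start/length combination")
    if L - 1 <= 7:
        return n_symb * (L - 1) + S
    return n_symb * (n_symb - L + 1) + (n_symb - 1 - S)

def from_sliv(sliv: int, n_symb: int = 14) -> tuple[int, int]:
    """Invert the encoding by searching the small (S, L) space."""
    for S in range(n_symb):
        for L in range(1, n_symb - S + 1):
            if to_sliv(S, L, n_symb) == sliv:
                return S, L
    raise ValueError("no valid (S, L) for this SLIV")

# Start symbol 2, length 4 -> SLIV 44, and back again.
assert to_sliv(2, 4) == 44
assert from_sliv(44) == (2, 4)
```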
<Resource Allocation by RRC> As mentioned above, in uplink, there are two types of transmission without dynamic grant: configured grant Type 1, where an uplink grant is provided by RRC and stored as a configured grant; and configured grant Type 2, where an uplink grant is provided by PDCCH and stored or cleared as a configured uplink grant based on L1 signaling indicating configured uplink grant activation or deactivation. Type 1 and Type 2 are configured by RRC per serving cell and per BWP. Multiple configurations can be active simultaneously only on different serving cells. For Type 2, activation and deactivation are independent among the serving cells. For the same serving cell, the MAC entity is configured with either Type 1 or Type 2. A UE is provided with at least the following parameters via RRC signaling from a BS when the configured grant Type 1 is configured: cs-RNTI, which is the CS-RNTI for retransmission; periodicity, which provides the periodicity of the configured grant Type 1; timeDomainOffset, which represents the offset of a resource with respect to SFN=0 in the time domain; timeDomainAllocation value m, which provides a row index m+1 pointing to an allocation table, indicating a combination of a start symbol S, length L and PUSCH mapping type; frequencyDomainAllocation, which provides the frequency domain resource allocation; and mcsAndTBS, which provides IMCS representing the modulation order, target code rate and transport block size. Upon configuration of a configured grant Type 1 for a serving cell by RRC, the UE stores the uplink grant provided by RRC as a configured uplink grant for the indicated serving cell, and initialises or re-initialises the configured uplink grant to start in the symbol according to timeDomainOffset and S (derived from the SLIV), and to reoccur with the periodicity. After an uplink grant is configured for a configured grant Type 1, the UE considers that the uplink grant recurs in association with each symbol for which: [(SFN*numberOfSlotsPerFrame*numberOfSymbolsPerSlot)+(slot number in the frame*numberOfSymbolsPerSlot)+(symbol number in the slot)]=(timeDomainOffset*numberOfSymbolsPerSlot+S+N*periodicity) modulo (1024*numberOfSlotsPerFrame*numberOfSymbolsPerSlot), for all N>=0. A UE is provided with at least the following parameters via RRC signaling from a BS when the configured grant Type 2 is configured: cs-RNTI, which is the CS-RNTI for activation, deactivation, and retransmission; and periodicity, which provides the periodicity of the configured grant Type 2. The actual uplink grant is provided to the UE by the PDCCH (addressed to CS-RNTI). After an uplink grant is configured for a configured grant Type 2, the UE considers that the uplink grant recurs in association with each symbol for which: [(SFN*numberOfSlotsPerFrame*numberOfSymbolsPerSlot)+(slot number in the frame*numberOfSymbolsPerSlot)+(symbol number in the slot)]=[(SFNstart time*numberOfSlotsPerFrame*numberOfSymbolsPerSlot+slotstart time*numberOfSymbolsPerSlot+symbolstart time)+N*periodicity] modulo (1024*numberOfSlotsPerFrame*numberOfSymbolsPerSlot), for all N>=0, where SFNstart time, slotstart time, and symbolstart time are the SFN, slot, and symbol, respectively, of the first transmission opportunity of PUSCH where the configured uplink grant was (re-)initialised. numberOfSlotsPerFrame and numberOfSymbolsPerSlot refer to the number of consecutive slots per frame and the number of consecutive OFDM symbols per slot, respectively. For configured uplink grants, the HARQ Process ID associated with the first symbol of a UL transmission is derived from the following equation: HARQ Process ID=[floor(CURRENT_symbol/periodicity)] modulo nrofHARQ-Processes, where CURRENT_symbol=(SFN*numberOfSlotsPerFrame*numberOfSymbolsPerSlot+slot number in the frame*numberOfSymbolsPerSlot+symbol number in the slot), and numberOfSlotsPerFrame and numberOfSymbolsPerSlot refer to the number of consecutive slots per frame and the number of consecutive symbols per slot, respectively, as specified in TS 38.211. CURRENT_symbol refers to the symbol index of the first transmission occasion of a repetition bundle that takes place. A HARQ process is configured for a configured uplink grant if the configured uplink grant is activated and the associated HARQ process ID is less than nrofHARQ-Processes. A sketch of these computations follows.
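The following sketch (assumed helper names, and assuming the periodicity divides the 1024-frame symbol cycle) illustrates the Type 1 occasion test and the HARQ process ID derivation quoted above.

```python
SLOTS_PER_FRAME = 20   # e.g. u = 1 (30 kHz SCS)
SYMB_PER_SLOT = 14

def abs_symbol(sfn: int, slot: int, symbol: int) -> int:
    """Absolute symbol index within the 1024-frame SFN cycle."""
    return (sfn * SLOTS_PER_FRAME + slot) * SYMB_PER_SLOT + symbol

def cg_type1_occasion(sfn, slot, symbol, time_offset, s, periodicity) -> bool:
    """True if this symbol is a configured-grant Type 1 transmission occasion."""
    cycle = 1024 * SLOTS_PER_FRAME * SYMB_PER_SLOT
    lhs = abs_symbol(sfn, slot, symbol) % cycle
    return (lhs - (time_offset * SYMB_PER_SLOT + s)) % periodicity == 0

def cg_harq_process_id(sfn, slot, symbol, periodicity, nrof_harq) -> int:
    """HARQ Process ID = floor(CURRENT_symbol / periodicity) mod nrofHARQ-Processes."""
    return (abs_symbol(sfn, slot, symbol) // periodicity) % nrof_harq

# A grant at offset 0, start symbol 0, recurring every 10 slots (140 symbols):
assert cg_type1_occasion(0, 10, 0, time_offset=0, s=0, periodicity=140)
print(cg_harq_process_id(0, 10, 0, periodicity=140, nrof_harq=8))  # -> 1
```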
For downlink, a UE may be configured with semi-persistent scheduling (SPS) per serving cell and per BWP by RRC signaling from a BS. Multiple configurations can be active simultaneously only on different serving cells. Activation and deactivation of DL SPS are independent among the serving cells. For DL SPS, a DL assignment is provided to the UE by PDCCH, and stored or cleared based on L1 signaling indicating SPS activation or deactivation. A UE is provided with the following parameters via RRC signaling from a BS when SPS is configured: cs-RNTI, which is the CS-RNTI for activation, deactivation, and retransmission; nrofHARQ-Processes, which provides the number of configured HARQ processes for SPS; and periodicity, which provides the periodicity of the configured downlink assignment for SPS. When SPS is released by upper layers, all the corresponding configurations shall be released. After a downlink assignment is configured for SPS, the UE considers sequentially that the Nth downlink assignment occurs in the slot for which: (numberOfSlotsPerFrame*SFN+slot number in the frame)=[(numberOfSlotsPerFrame*SFNstart time+slotstart time)+N*periodicity*numberOfSlotsPerFrame/10] modulo (1024*numberOfSlotsPerFrame), where SFNstart time and slotstart time are the SFN and slot, respectively, of the first transmission of PDSCH where the configured downlink assignment was (re-)initialised. For configured downlink assignments, the HARQ Process ID associated with the slot where the DL transmission starts is derived from the following equation: HARQ Process ID=[floor(CURRENT_slot*10/(numberOfSlotsPerFrame*periodicity))] modulo nrofHARQ-Processes, where CURRENT_slot=[(SFN*numberOfSlotsPerFrame)+slot number in the frame] and numberOfSlotsPerFrame refers to the number of consecutive slots per frame as specified in TS 38.211. A UE validates, for scheduling activation or scheduling release, a DL SPS assignment PDCCH or a configured UL grant Type 2 PDCCH if the cyclic redundancy check (CRC) of the corresponding DCI format is scrambled with the CS-RNTI provided by the RRC parameter cs-RNTI and the new data indicator field for the enabled transport block is set to 0. Validation of the DCI format is achieved if all fields for the DCI format are set according to Table 4 or Table 5. Table 4 shows the special fields for DL SPS and UL grant Type 2 scheduling activation PDCCH validation, and Table 5 shows the special fields for DL SPS and UL grant Type 2 scheduling release PDCCH validation.

TABLE 4
Field | DCI format 0_0/0_1 | DCI format 1_0 | DCI format 1_1
HARQ process number | set to all '0's | set to all '0's | set to all '0's
Redundancy version | set to '00' | set to '00' | For the enabled transport block: set to '00'

TABLE 5
Field | DCI format 0_0 | DCI format 1_0
HARQ process number | set to all '0's | set to all '0's
Redundancy version | set to '00' | set to '00'
Modulation and coding scheme | set to all '1's | set to all '1's
Resource block assignment | set to all '1's | set to all '1's

The actual DL assignment and actual UL grant, and the corresponding modulation and coding scheme, are provided by the resource assignment fields (e.g. the time domain resource assignment field, which provides the Time domain resource assignment value m; the frequency domain resource assignment field, which provides the frequency resource block allocation; and the modulation and coding scheme field) in the DCI format carried by the DL SPS and UL grant Type 2 scheduling activation PDCCH. If validation is achieved, the UE considers the information in the DCI format as a valid activation or valid release of DL SPS or configured UL grant Type 2. A sketch of these validation checks follows.
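For illustration, the Table 4/Table 5 checks can be expressed as predicates over the special DCI fields. The field names below are ours, fields are given as bit strings, and the CRC/CS-RNTI scrambling check is assumed to have been done separately.

```python
ACTIVATION_RULES = {  # Table 4: scheduling activation
    "harq_process_number": lambda v: set(v) == {"0"},
    "redundancy_version": lambda v: v == "00",
}

RELEASE_RULES = {  # Table 5: scheduling release (DCI formats 0_0 / 1_0)
    "harq_process_number": lambda v: set(v) == {"0"},
    "redundancy_version": lambda v: v == "00",
    "mcs": lambda v: set(v) == {"1"},
    "resource_block_assignment": lambda v: set(v) == {"1"},
}

def validate(dci: dict[str, str], rules: dict) -> bool:
    """True if every special field matches the table for this validation."""
    return all(check(dci[name]) for name, check in rules.items())

release_dci = {
    "harq_process_number": "0000",
    "redundancy_version": "00",
    "mcs": "11111",
    "resource_block_assignment": "1" * 16,
}
assert validate(release_dci, RELEASE_RULES)
```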
For UL, the processor(s)102of the present disclosure may transmit (or control the transceiver(s)106to transmit) the data unit of the present disclosure based on the UL grant available to the UE. The processor(s)202of the present disclosure may receive (or control the transceiver(s)206to receive) the data unit of the present disclosure based on the UL grant available to the UE. For DL, the processor(s)102of the present disclosure may receive (or control the transceiver(s)106to receive) DL data of the present disclosure based on the DL assignment available to the UE. The processor(s)202of the present disclosure may transmit (or control the transceiver(s)206to transmit) DL data of the present disclosure based on the DL assignment available to the UE. The data unit(s) of the present disclosure is(are) subject to physical layer processing at a transmitting side before transmission via the radio interface, and the radio signals carrying the data unit(s) of the present disclosure are subject to physical layer processing at a receiving side. For example, a MAC PDU including the PDCP PDU according to the present disclosure may be subject to physical layer processing as follows. FIG.13illustrates an example of physical layer processing at a transmitting side. The following tables show the mapping of the transport channels (TrCHs) and control information to their corresponding physical channels. In particular, Table 6 specifies the mapping of the uplink transport channels to their corresponding physical channels, Table 7 specifies the mapping of the uplink control channel information to its corresponding physical channel, Table 8 specifies the mapping of the downlink transport channels to their corresponding physical channels, and Table 9 specifies the mapping of the downlink control channel information to its corresponding physical channel.

TABLE 6
TrCH | Physical Channel
UL-SCH | PUSCH
RACH | PRACH

TABLE 7
Control information | Physical Channel
UCI | PUCCH, PUSCH

TABLE 8
TrCH | Physical Channel
DL-SCH | PDSCH
BCH | PBCH
PCH | PDSCH

TABLE 9
Control information | Physical Channel
DCI | PDCCH

<Encoding> Data and control streams from/to the MAC layer are encoded to offer transport and control services over the radio transmission link in the PHY layer. For example, a transport block from the MAC layer is encoded into a codeword at the transmitting side. A channel coding scheme is a combination of error detection, error correction, rate matching, interleaving, and transport channel or control information mapping onto/splitting from physical channels. In the 3GPP NR system, the following channel coding schemes are used for the different types of TrCH and the different control information types.

TABLE 10
TrCH | Coding scheme
UL-SCH | LDPC
DL-SCH | LDPC
PCH | LDPC
BCH | Polar code

TABLE 11
Control Information | Coding scheme
DCI | Polar code
UCI | Block code, Polar code

For transmission of a DL transport block (i.e. a DL MAC PDU) or a UL transport block (i.e. a UL MAC PDU), a transport block CRC sequence is attached to provide error detection for the receiving side. In the 3GPP NR system, the communication device uses low density parity check (LDPC) codes in encoding/decoding UL-SCH and DL-SCH. The 3GPP NR system supports two LDPC base graphs (i.e. two LDPC base matrixes): LDPC base graph 1 for larger transport blocks and LDPC base graph 2 optimized for smaller transport blocks. Either LDPC base graph 1 or 2 is selected based on the size of the transport block and the coding rate R. The coding rate R is indicated by the modulation and coding scheme (MCS) index IMCS. A sketch of this base graph selection is given below.
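The selection rule and segmentation thresholds, as commonly specified in TS 38.212 and assumed to apply here, can be sketched as follows.

```python
# Small and/or low-rate payloads use base graph 2; everything else uses base
# graph 1. A is the payload size in bits (CRC handling simplified), R is the
# coding rate.

def select_ldpc_base_graph(a_bits: int, r: float) -> int:
    if a_bits <= 292 or (a_bits <= 3824 and r <= 0.67) or r <= 0.25:
        return 2
    return 1

MAX_CODE_BLOCK = {1: 8448, 2: 3840}  # maximum code block sizes in bits

def num_code_blocks(tb_with_crc_bits: int, base_graph: int) -> int:
    """Number of code blocks after segmentation (per-block CRC overhead omitted)."""
    k_cb = MAX_CODE_BLOCK[base_graph]
    return max(1, -(-tb_with_crc_bits // k_cb))  # ceiling division

assert select_ldpc_base_graph(200, 0.9) == 2    # tiny payload -> BG2
assert select_ldpc_base_graph(20000, 0.8) == 1  # large, high-rate -> BG1
assert num_code_blocks(20000, 1) == 3
```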
The MCS index is dynamically provided to a UE by the PDCCH scheduling PUSCH or PDSCH, provided to a UE by the PDCCH activating or (re-)initializing the UL configured grant Type 2 or DL SPS, or provided to a UE by RRC signaling related to the UL configured grant Type 1. If the CRC attached transport block is larger than the maximum code block size for the selected LDPC base graph, the CRC attached transport block may be segmented into code blocks, and an additional CRC sequence is attached to each code block. The maximum code block sizes for LDPC base graph 1 and LDPC base graph 2 are 8448 bits and 3840 bits, respectively. If the CRC attached transport block is not larger than the maximum code block size for the selected LDPC base graph, the CRC attached transport block is encoded with the selected LDPC base graph. Otherwise, each code block of the transport block is encoded with the selected LDPC base graph. The LDPC coded blocks are then individually rate matched. Code block concatenation is performed to create a codeword for transmission on PDSCH or PUSCH. For PDSCH, up to 2 codewords (i.e. up to 2 transport blocks) can be transmitted simultaneously on the PDSCH. PUSCH can be used for transmission of UL-SCH data and layer 1/2 control information. Although not shown inFIG.13, the layer 1/2 control information may be multiplexed with the codeword for UL-SCH data. <Scrambling and Modulation> The bits of the codeword are scrambled and modulated to generate a block of complex-valued modulation symbols. <Layer Mapping> The complex-valued modulation symbols of the codeword are mapped to one or more multiple input multiple output (MIMO) layers. A codeword can be mapped to up to 4 layers. A PDSCH can carry two codewords, and thus a PDSCH can support up to 8-layer transmission. A PUSCH supports a single codeword, and thus a PUSCH can support up to 4-layer transmission. <Transform Precoding> The DL transmission waveform is conventional OFDM using a cyclic prefix (CP). For DL, transform precoding (in other words, a discrete Fourier transform (DFT)) is not applied. The UL transmission waveform is conventional OFDM using a CP with a transform precoding function performing DFT spreading that can be disabled or enabled. In the 3GPP NR system, for UL, the transform precoding can be optionally applied if enabled. The transform precoding spreads UL data in a special way to reduce the peak-to-average power ratio (PAPR) of the waveform. The transform precoding is a form of DFT. In other words, the 3GPP NR system supports two options for the UL waveform: one is CP-OFDM (same as the DL waveform) and the other is DFT-s-OFDM. Whether a UE has to use CP-OFDM or DFT-s-OFDM is configured by the BS via RRC parameters. <Subcarrier Mapping> The layers are mapped to antenna ports. In DL, for the layers-to-antenna-ports mapping, a transparent manner (non-codebook based) mapping is supported, and how beamforming or MIMO precoding is performed is transparent to the UE. In UL, for the layers-to-antenna-ports mapping, both non-codebook based mapping and codebook based mapping are supported. For each antenna port (i.e. layer) used for transmission of the physical channel (e.g. PDSCH, PUSCH), the complex-valued modulation symbols are mapped to subcarriers in the resource blocks allocated to the physical channel.
<OFDM Modulation> The communication device at the transmitting side generates a time-continuous OFDM baseband signal on antenna port p and subcarrier spacing configuration u for OFDM symbol l in a TTI for a physical channel by adding a cyclic prefix (CP) and performing an IFFT. For example, for each OFDM symbol, the communication device at the transmitting side may perform an inverse fast Fourier transform (IFFT) on the complex-valued modulation symbols mapped to resource blocks in the corresponding OFDM symbol and add a CP to the IFFT-ed signal to generate the OFDM baseband signal. <Up-Conversion> The communication device at the transmitting side up-converts the OFDM baseband signal for antenna port p, subcarrier spacing configuration u and OFDM symbol l to the carrier frequency f0 of a cell to which the physical channel is assigned. The processors102and202inFIG.2may be configured to perform encoding, scrambling, modulation, layer mapping, transform precoding (for UL), subcarrier mapping, and OFDM modulation. The processors102and202may control the transceivers106and206connected to the processors102and202to up-convert the OFDM baseband signal onto the carrier frequency to generate radio frequency (RF) signals. The radio frequency signals are transmitted through antennas108and208to an external device. FIG.14illustrates an example of physical layer processing at a receiving side. The physical layer processing at the receiving side is basically the inverse of the physical layer processing at the transmitting side. <Frequency Down-Conversion> The communication device at a receiving side receives RF signals at a carrier frequency through antennas. The transceivers106and206receiving the RF signals at the carrier frequency down-convert the carrier frequency of the RF signals into the baseband in order to obtain OFDM baseband signals. <OFDM Demodulation> The communication device at the receiving side obtains complex-valued modulation symbols via CP detachment and an FFT. For example, for each OFDM symbol, the communication device at the receiving side removes a CP from the OFDM baseband signals and performs an FFT on the CP-removed OFDM baseband signals to obtain complex-valued modulation symbols for antenna port p, subcarrier spacing configuration u and OFDM symbol l. <Subcarrier Demapping> Subcarrier demapping is performed on the complex-valued modulation symbols to obtain the complex-valued modulation symbols of a corresponding physical channel. For example, the processor(s)102may obtain complex-valued modulation symbols mapped to subcarriers belonging to PDSCH from among the complex-valued modulation symbols received in a bandwidth part. For another example, the processor(s)202may obtain complex-valued modulation symbols mapped to subcarriers belonging to PUSCH from among the complex-valued modulation symbols received in a bandwidth part. <Transform De-Precoding> Transform de-precoding (e.g. IDFT) is performed on the complex-valued modulation symbols of the uplink physical channel if the transform precoding has been enabled for the uplink physical channel. For the downlink physical channel, and for the uplink physical channel for which the transform precoding has been disabled, transform de-precoding is not performed. <Layer Demapping> The complex-valued modulation symbols are de-mapped into one or two codewords. <Demodulation and Descrambling> The complex-valued modulation symbols of a codeword are demodulated and descrambled into the bits of the codeword. <Decoding> The codeword is decoded into a transport block.
For UL-SCH and DL-SCH, either LDPC base graph 1 or 2 is selected based on the size of the transport block and the coding rate R. The codeword may include one or multiple coded blocks. Each coded block is decoded with the selected LDPC base graph into a CRC-attached code block or a CRC-attached transport block. If code block segmentation was performed on a CRC-attached transport block at the transmitting side, a CRC sequence is removed from each of the CRC-attached code blocks, whereby code blocks are obtained. The code blocks are concatenated into a CRC-attached transport block. The transport block CRC sequence is removed from the CRC-attached transport block, whereby the transport block is obtained. The transport block is delivered to the MAC layer. In the above-described physical layer processing at the transmitting and receiving sides, the time and frequency domain resources (e.g. OFDM symbol, subcarriers, carrier frequency) related to subcarrier mapping, OFDM modulation and frequency up/down conversion can be determined based on the resource allocation (e.g., UL grant, DL assignment). For uplink data transmission, the processor(s)102of the present disclosure may apply (or control the transceiver(s)106to apply) the above-described physical layer processing of the transmitting side to the data unit of the present disclosure to transmit the data unit wirelessly. For downlink data reception, the processor(s)102of the present disclosure may apply (or control the transceiver(s)106to apply) the above-described physical layer processing of the receiving side to received radio signals to obtain the data unit of the present disclosure. For downlink data transmission, the processor(s)202of the present disclosure may apply (or control the transceiver(s)206to apply) the above-described physical layer processing of the transmitting side to the data unit of the present disclosure to transmit the data unit wirelessly. For uplink data reception, the processor(s)202of the present disclosure may apply (or control the transceiver(s)206to apply) the above-described physical layer processing of the receiving side to received radio signals to obtain the data unit of the present disclosure. FIG.15illustrates operations of the wireless devices based on the implementations of the present disclosure. The first wireless device100ofFIG.2may generate first information/signals according to the functions, procedures, and/or methods described in the present disclosure, and then transmit radio signals including the first information/signals wirelessly to the second wireless device200ofFIG.2(S10). The first information/signals may include the data unit(s) (e.g. PDU, SDU, RRC message) of the present disclosure. The first wireless device100may receive radio signals including second information/signals from the second wireless device200(S30), and then perform operations based on or according to the second information/signals (S50). The second information/signals may be transmitted by the second wireless device200to the first wireless device100in response to the first information/signals. The second information/signals may include the data unit(s) (e.g. PDU, SDU, RRC message) of the present disclosure. The first information/signals may include contents request information, and the second information/signals may include contents specific to the usage of the first wireless device100. Some examples of operations specific to the usages of the wireless devices100and200will be described below.
In some scenarios, the first wireless device100may be a hand-held device100dofFIG.1, which performs the functions, procedures, and/or methods described in the present disclosure. The hand-held device100dmay acquire information/signals (e.g., touch, text, voice, images, or video) input by a user, and convert the acquired information/signals into the first information/signals. The hand-held device100dmay transmit the first information/signals to the second wireless device200(S10). The second wireless device200may be any one of the wireless devices100ato100finFIG.1or a BS. The hand-held device100dmay receive the second information/signals from the second wireless device200(S30), and perform operations based on the second information/signals (S50). For example, the hand-held device100dmay output the contents of the second information/signals to the user (e.g. in the form of text, voice, images, video, or haptic feedback) through the I/O unit of the hand-held device100d. In some scenarios, the first wireless device100may be a vehicle or an autonomous driving vehicle100b, which performs the functions, procedures, and/or methods described in the present disclosure. The vehicle100bmay transmit (S10) and receive (S30) signals (e.g. data and control signals) to and from external devices such as other vehicles, BSs (e.g. gNBs and road side units), and servers, through its communication unit (e.g. communication unit110ofFIG.1C). The vehicle100bmay include a driving unit, and the driving unit may cause the vehicle100bto drive on a road. The driving unit of the vehicle100bmay include an engine, a motor, a powertrain, a wheel, a brake, a steering device, etc. The vehicle100bmay include a sensor unit for acquiring a vehicle state, ambient environment information, user information, etc. The vehicle100bmay generate and transmit the first information/signals to the second wireless device200(S10). The first information/signals may include vehicle state information, ambient environment information, user information, etc. The vehicle100bmay receive the second information/signals from the second wireless device200(S30). The second information/signals may include vehicle state information, ambient environment information, user information, etc. The vehicle100bmay drive on a road, stop, or adjust speed, based on the second information/signals (S50). For example, the vehicle100bmay receive the second information/signals including map data, traffic information data, etc. from an external server (S30). The vehicle100bmay generate an autonomous driving path and a driving plan based on the second information/signals, and may move along the autonomous driving path according to the driving plan (e.g., speed/direction control) (S50). For another example, the control unit or processor(s) of the vehicle100bmay generate a virtual object based on the map information, traffic information, and vehicle position information obtained through a GPS sensor of the vehicle100band an I/O unit140of the vehicle100bmay display the generated virtual object in a window in the vehicle100b(S50). In some scenarios, the first wireless device100may be an XR device100cofFIG.1, which performs the functions, procedures, and/or methods described in the present disclosure. The XR device100cmay transmit (S10) and receive (S30) signals (e.g., media data and control signals) to and from external devices such as other wireless devices, hand-held devices, or media servers, through its communication unit (e.g. communication unit110ofFIG.1C).
For example, the XR device100cmay transmit content request information to another device or media server (S10), download/stream contents such as films or news from the other device or the media server (S30), and generate, output, or display an XR object (e.g. an AR/VR/MR object), based on the second information/signals received wirelessly, through an I/O unit of the XR device (S50). In some scenarios, the first wireless device100may be a robot100aofFIG.1, which performs the functions, procedures, and/or methods described in the present disclosure. The robot100amay be categorized into an industrial robot, a medical robot, a household robot, a military robot, etc., according to its intended purpose or field of use. The robot100amay transmit (S10) and receive (S30) signals (e.g., driving information and control signals) to and from external devices such as other wireless devices, other robots, or control servers, through its communication unit (e.g. communication unit110ofFIG.1C). The second information/signals may include driving information and control signals for the robot100a. The control unit or processor(s) of the robot100amay control the movement of the robot100abased on the second information/signals. In some scenarios, the first wireless device100may be an AI device400ofFIG.1. The AI device400may be implemented by a fixed device or a mobile device, such as a TV, a projector, a smartphone, a PC, a notebook, a digital broadcast terminal, a tablet PC, a wearable device, a Set Top Box (STB), a radio, a washing machine, a refrigerator, a digital signage, a robot, a vehicle, etc. The AI device400may transmit (S10) and receive (S30) wired/radio signals (e.g., sensor information, user input, learning models, or control signals) to and from external devices such as other AI devices (e.g.,100a, . . . ,100f,200, or400ofFIG.1) or an AI server (e.g.,400ofFIG.1) using wired/wireless communication technology. The control unit or processor(s) of the AI device400may determine at least one feasible operation of the AI device400, based on information which is determined or generated using a data analysis algorithm or a machine learning algorithm. The AI device400may request that external devices such as other AI devices or an AI server provide the AI device400with sensor information, user input, learning models, control signals, etc. (S10). The AI device400may receive second information/signals (e.g., sensor information, user input, learning models, or control signals) (S30), and the AI device400may perform a predicted operation or an operation determined to be preferred among at least one feasible operation based on the second information/signals (S50).
11943656
DETAILED DESCRIPTION Before turning to the figures, which illustrate certain embodiments in detail, it should be understood that the present disclosure is not limited to the details or methodology set forth in the description or illustrated in the figures. It should also be understood that the terminology used herein is for the purpose of description only and should not be regarded as limiting. Streams of traffic may be characterized by different types of traffic. For instance, an application may be characterized by latency sensitive traffic (e.g., video/voice (VI/VO), real time interactive applications, and the like) or regular traffic (e.g., best effort/background applications (BE/BK)). Latency sensitive traffic may be identifiable, in part, based on its bursty nature (e.g., periodic bursts of traffic), in some embodiments. For instance, video display traffic may be driven by a refresh rate of 60 Hz, 72 Hz, 90 Hz, or 120 Hz. An application and/or device may have combinations of traffic types (e.g., latency sensitive traffic and non-latency sensitive traffic). Further, each stream of traffic for the application and/or device may be more or less spontaneous and/or aperiodic as compared to the other streams of traffic for the application and/or device. Accordingly, traffic may vary according to applications and/or channel rate dynamics. In some implementations, devices may communicate using allocated channel transmission bandwidth such that only admitted (e.g., registered or assigned) devices have access to the channel. In other implementations, devices may communicate using broadcast transmissions. Each frame used in communication may include subframes or slots, which further include data symbols. Devices configured to support multi-link operation (MLO) may be capable of supporting flexible traffic steering and load balancing. In some implementations, devices may load balance different links by differentiating the services of the links. For example, a device (such as an access point (AP)) may direct a station (STA) carrying latency sensitive traffic to operate over one link or a subset of links. Assigning slots to traffic streams improves the quality of service for latency sensitive applications by dedicating and allocating slots in links to latency sensitive traffic. A device (AP, soft AP, console) may configure latency sensitive slots such that the latency sensitive slots are prioritized over regular slots. A device (such as a STA or AP) may classify/identify each of the traffic streams according to source/destination address/identification and/or on a per traffic identifier (TID) basis using an attribute (e.g., an L-marker). A TID may comprise an identifier to identify a traffic stream. Traffic identified as latency sensitive (e.g., having a defined latency requirement, for instance to be within a specific latency range or below a defined latency threshold) may be communicated using a latency sensitive slot, for example. In some applications, latency sensitive traffic that is not prioritized may degrade a user experience. For example, in an AR context, latency between a movement of a user wearing an AR device and an image corresponding to the user movement and displayed to the user using the AR device may cause judder, resulting in motion sickness.
FIG.1is a block diagram of an example artificial reality system environment100in which a console110operates.FIG.1provides an example environment in which devices may communicate traffic streams with different latency sensitivities/requirements. In some embodiments, the artificial reality system environment100includes a HWD150worn by a user, and a console110providing content of artificial reality to the HWD150. A head wearable display (HWD) may be referred to as, include, or be part of a head mounted display (HMD), head mounted device (HMD), head wearable device (HWD), head worn display (HWD), or head worn device (HWD). In one aspect, the HWD150may include various sensors to detect a location, an orientation, and/or a gaze direction of the user wearing the HWD150, and provide the detected location, orientation and/or gaze direction to the console110through a wired or wireless connection. The HWD150may also identify objects (e.g., body, hand, face). The console110may determine a view within the space of the artificial reality corresponding to the detected location, orientation and/or the gaze direction, and generate an image depicting the determined view. The console110may also receive one or more user inputs and modify the image according to the user inputs. The console110may provide the image to the HWD150for rendering. The image of the space of the artificial reality corresponding to the user's view can be presented to the user. In some embodiments, the artificial reality system environment100includes more, fewer, or different components than shown inFIG.1. In some embodiments, functionality of one or more components of the artificial reality system environment100can be distributed among the components in a different manner than is described here. For example, some of the functionality of the console110may be performed by the HWD150, and/or some of the functionality of the HWD150may be performed by the console110. In some embodiments, the HWD150is an electronic component that can be worn by a user and can present or provide an artificial reality experience to the user. The HWD150may render one or more images, video, audio, or some combination thereof to provide the artificial reality experience to the user. In some embodiments, audio is presented via an external device (e.g., speakers and/or headphones) that receives audio information from the HWD150, the console110, or both, and presents audio based on the audio information. In some embodiments, the HWD150includes sensors155, eye trackers160, a communication interface165, an image renderer170, an electronic display175, a lens180, and a compensator185. These components may operate together to detect a location of the HWD150and/or a gaze direction of the user wearing the HWD150, and render an image of a view within the artificial reality corresponding to the detected location of the HWD150and/or the gaze direction of the user. In other embodiments, the HWD150includes more, fewer, or different components than shown inFIG.1. In some embodiments, the sensors155include electronic components or a combination of electronic components and software components that detect a location and/or an orientation of the HWD150. Examples of sensors155can include: one or more imaging sensors, one or more accelerometers, one or more gyroscopes, one or more magnetometers, or another suitable type of sensor that detects motion and/or location.
For example, one or more accelerometers can measure translational movement (e.g., forward/back, up/down, left/right) and one or more gyroscopes can measure rotational movement (e.g., pitch, yaw, roll). In some embodiments, the sensors155detect the translational movement and/or the rotational movement, and determine an orientation and location of the HWD150. In one aspect, the sensors155can detect the translational movement and/or the rotational movement with respect to a previous orientation and location of the HWD150, and determine a new orientation and/or location of the HWD150by accumulating or integrating the detected translational movement and/or the rotational movement. Assuming for an example that the HWD150is oriented in a direction 25 degrees from a reference direction, in response to detecting that the HWD150has rotated 20 degrees, the sensors155may determine that the HWD150now faces or is oriented in a direction 45 degrees from the reference direction. Assuming for another example that the HWD150was located two feet away from a reference point in a first direction, in response to detecting that the HWD150has moved three feet in a second direction, the sensors155may determine that the HWD150is now located at a vector sum of the two feet in the first direction and the three feet in the second direction. In some embodiments, the eye trackers160include electronic components or a combination of electronic components and software components that determine a gaze direction of the user of the HWD150. In some embodiments, the HWD150, the console110or a combination may incorporate the gaze direction of the user of the HWD150to generate image data for artificial reality. In some embodiments, the eye trackers160include two eye trackers, where each eye tracker160captures an image of a corresponding eye and determines a gaze direction of the eye. In one example, the eye tracker160determines an angular rotation of the eye, a translation of the eye, a change in the torsion of the eye, and/or a change in shape of the eye, according to the captured image of the eye, and determines the relative gaze direction with respect to the HWD150, according to the determined angular rotation, translation and the change in the torsion of the eye. In one approach, the eye tracker160may shine or project a predetermined reference or structured pattern on a portion of the eye, and capture an image of the eye to analyze the pattern projected on the portion of the eye to determine a relative gaze direction of the eye with respect to the HWD150. In some embodiments, the eye trackers160incorporate the orientation of the HWD150and the relative gaze direction with respect to the HWD150to determine a gaze direction of the user. Assuming for an example that the HWD150is oriented at a direction 30 degrees from a reference direction, and the relative gaze direction of the HWD150is −10 degrees (or 350 degrees) with respect to the HWD150, the eye trackers160may determine that the gaze direction of the user is 20 degrees from the reference direction. In some embodiments, a user of the HWD150can configure the HWD150(e.g., via user settings) to enable or disable the eye trackers160. In some embodiments, a user of the HWD150is prompted to enable or disable the eye trackers160. In some embodiments, the hand tracker162includes an electronic component or a combination of an electronic component and a software component that tracks a hand of the user.
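As a minimal illustration of the accumulation described above (not an implementation of any particular sensor fusion method of the disclosure), the running orientation and location estimates can simply be updated with each detected increment; the variable names and units are ours:

    import numpy as np

    orientation_deg = 25.0               # heading relative to the reference direction
    location_ft = np.array([2.0, 0.0])   # two feet from the reference point, first direction

    orientation_deg += 20.0              # gyroscope reports a 20-degree rotation
    location_ft += np.array([0.0, 3.0])  # accelerometers report three feet, second direction

    print(orientation_deg)               # 45.0 degrees from the reference direction
    print(location_ft)                   # vector sum [2.0, 3.0] of the two displacements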
In some embodiments, the hand tracker162includes or is coupled to an imaging sensor (e.g., camera) and an image processor that can detect a shape, a location and/or an orientation of the hand. The hand tracker162may generate hand tracking measurements indicating the detected shape, location and/or orientation of the hand. In some embodiments, the communication interface165includes an electronic component or a combination of an electronic component and a software component that communicates with the console110. The communication interface165may communicate with a communication interface115of the console110through a communication link. The communication link may be a wireless link, a wired link, or both. Examples of the wireless link can include a cellular communication link, a near field communication link, Wi-Fi, Bluetooth, or any other wireless communication link. Examples of the wired link can include a USB cable, Ethernet, Firewire, HDMI, or any other wired communication link. In embodiments in which the console110and the head wearable display150are implemented on a single system, the communication interface165may communicate with the console110through a bus connection or a conductive trace. Through the communication link, the communication interface165may transmit to the console110sensor measurements indicating the determined location of the HWD150, orientation of the HWD150, the determined gaze direction of the user, and/or hand tracking measurements. Moreover, through the communication link, the communication interface165may receive from the console110sensor measurements indicating or corresponding to an image to be rendered. Using the communication interface, the console110(or HWD150) may coordinate operations on link101to reduce collisions or interferences. For example, the console110may coordinate communication between the console110and the HWD150. In some implementations, the console110may transmit a beacon frame periodically to announce/advertise a presence of a wireless link between the console110and the HWD150(or between two HWDs). In an implementation, the HWD150may monitor for or receive the beacon frame from the console110, and can schedule communication with the console110(e.g., using the information in the beacon frame, such as an offset value) to avoid collision or interference with communication between the console110and/or HWD150and other devices. The console110and HWD150may communicate using link101(e.g., intralink). Data (e.g., a traffic stream) may flow in a direction on link101. For example, the console110may communicate using a downlink (DL) communication to the HWD150and the HWD150may communicate using an uplink (UL) communication to the console110. In some embodiments, the image renderer170includes an electronic component or a combination of an electronic component and a software component that generates one or more images for display, for example, according to a change in view of the space of the artificial reality. In some embodiments, the image renderer170is implemented as a processor (or a graphical processing unit (GPU)) that executes instructions to perform various functions described herein. The image renderer170may receive, through the communication interface165, data describing an image to be rendered, and render the image through the electronic display175. In some embodiments, the data from the console110may be encoded, and the image renderer170may decode the data to generate and render the image.
In one aspect, the image renderer170receives the encoded image from the console110, and decodes the encoded image, such that a communication bandwidth between the console110and the HWD150can be reduced. In some embodiments, the image renderer170receives, from the console110, additional data including object information indicating virtual objects in the artificial reality space and depth information indicating depth (or distances from the HWD150) of the virtual objects. Accordingly, the image renderer170may receive from the console110object information and/or depth information. The image renderer170may also receive updated sensor measurements from the sensors155. The process of detecting, by the HWD150, the location and the orientation of the HWD150and/or the gaze direction of the user wearing the HWD150, and generating and transmitting, by the console110, a high resolution image (e.g.,1920by 1080 pixels, or2048by 1152 pixels) corresponding to the detected location and the gaze direction to the HWD150may be computationally expensive and may not be performed within a frame time (e.g., less than 11 ms or 8 ms). In some implementations, the image renderer170may perform shading, reprojection, and/or blending to update the image of the artificial reality to correspond to the updated location and/or orientation of the HWD150. Assuming that a user rotated their head after the initial sensor measurements, rather than recreating the entire image responsive to the updated sensor measurements, the image renderer170may generate a small portion (e.g., 10%) of an image corresponding to an updated view within the artificial reality according to the updated sensor measurements, and append the portion to the image in the image data from the console110through reprojection. The image renderer170may perform shading and/or blending on the appended edges. Hence, without recreating the image of the artificial reality according to the updated sensor measurements, the image renderer170can generate the image of the artificial reality. In other implementations, the image renderer170generates one or more images through a shading process and a reprojection process when an image from the console110is not received within the frame time. For example, the shading process and the reprojection process may be performed adaptively, according to a change in view of the space of the artificial reality. In some embodiments, the electronic display175is an electronic component that displays an image. The electronic display175may, for example, be a liquid crystal display or an organic light emitting diode display. The electronic display175may be a transparent display that allows the user to see through. In some embodiments, when the HWD150is worn by a user, the electronic display175is located proximate (e.g., less than 3 inches) to the user's eyes. In one aspect, the electronic display175emits or projects light towards the user's eyes according to an image generated by the image renderer170. In some embodiments, the lens180is a mechanical component that alters received light from the electronic display175. The lens180may magnify the light from the electronic display175, and correct for optical error associated with the light. The lens180may be a Fresnel lens, a convex lens, a concave lens, a filter, or any suitable optical component that alters the light from the electronic display175.
Through the lens180, light from the electronic display175can reach the pupils, such that the user can see the image displayed by the electronic display175, despite the close proximity of the electronic display175to the eyes. In some embodiments, the compensator185includes an electronic component or a combination of an electronic component and a software component that performs compensation to compensate for any distortions or aberrations. In one aspect, the lens180introduces optical aberrations such as a chromatic aberration, a pin-cushion distortion, barrel distortion, etc. The compensator185may determine a compensation (e.g., predistortion) to apply to the image to be rendered from the image renderer170to compensate for the distortions caused by the lens180, and apply the determined compensation to the image from the image renderer170. The compensator185may provide the predistorted image to the electronic display175. In some embodiments, the console110is an electronic component or a combination of an electronic component and a software component that provides content to be rendered to the HWD150. In one aspect, the console110includes a communication interface115and a content provider130. These components may operate together to determine a view (e.g., a field of view (FOV) of the user) of the artificial reality corresponding to the location of the HWD150and/or the gaze direction of the user of the HWD150, and can generate an image of the artificial reality corresponding to the determined view. In other embodiments, the console110includes more, fewer, or different components than shown inFIG.1. In some embodiments, the console110is integrated as part of the HWD150. In some embodiments, the communication interface115is an electronic component or a combination of an electronic component and a software component that communicates with the HWD150. The communication interface115may be a counterpart component to the communication interface165of the HWD150, and may communicate with the communication interface165through a communication link (e.g., USB cable, a wireless link). Through the communication link, the communication interface115may receive from the HWD150sensor measurements indicating the determined location and/or orientation of the HWD150, the determined gaze direction of the user, and/or hand tracking measurements. Moreover, through the communication link, the communication interface115may transmit to the HWD150data describing an image to be rendered. The content provider130can include or correspond to a component that generates content to be rendered according to the location and/or orientation of the HWD150, the gaze direction of the user and/or hand tracking measurements. In one aspect, the content provider130determines a view of the artificial reality according to the location and orientation of the HWD150and/or the gaze direction of the user of the HWD150. For example, the content provider130maps the location of the HWD150in a physical space to a location within an artificial reality space, and determines a view of the artificial reality space along a direction corresponding to an orientation of the HWD150and/or the gaze direction of the user from the mapped location in the artificial reality space. The content provider130may generate image data describing an image of the determined view of the artificial reality space, and transmit the image data to the HWD150through the communication interface115.
The content provider130may also generate a hand model (or other virtual object) corresponding to a hand of the user according to the hand tracking measurement, and generate hand model data indicating a shape, a location, and an orientation of the hand model in the artificial reality space. In some embodiments, the content provider130generates metadata including motion vector information, depth information, edge information, object information, etc., associated with the image, and transmits the metadata with the image data to the HWD150through the communication interface115. The content provider130may encode and/or compress the data describing the image, and can transmit the encoded and/or compressed data to the HWD150. In some embodiments, the content provider130generates and provides the image to the HWD150periodically (e.g., every one second). The scheduler190A of the HWD150and the scheduler190B of the console (hereinafter referred to as “scheduler190”) may be used to facilitate communication between the HWD150and the console110. For example, the HWD150and/or console110may access link101based on a scheduled agreement. The scheduler190may communicate traffic such that the HWD150and console110(or console110and other device, or HWD150and other device) may agree on a distribution of traffic/slots of link101. The scheduler190may also facilitate agreements involving the distribution of carriers and sub-carriers of link101. The scheduler190may be used to assign (identify, or classify) traffic based on access categories, TIDs, source/destinations, the direction of traffic (e.g., UL/DL), and/or a predicted traffic pattern (e.g., the expected traffic originating from the device and/or application, traffic expected by the device and/or application, and/or expected peer-to-peer traffic). Upon agreeing on latency sensitive traffic designations using the scheduler190, the console110and/or HWD150may communicate the latency sensitive traffic using a latency sensitive slot (e.g., a prioritized slot) to transmit a portion of the traffic identified as being latency sensitive. The console110and/or HWD150may also access any of the regular (or non-prioritized) slots for portions of the traffic identified as being regular (non-latency sensitive) traffic. In addition, the scheduler190may schedule (e.g., assign, or allocate) particular slot locations as latency sensitive slots. In an example, the scheduler190may schedule latency sensitive slots in a cyclic pattern among slots for regular traffic. Additionally or alternatively, scheduler190may schedule multiple contiguous slots (e.g., a service period of latency sensitive slots) as latency sensitive slots. In an example, if the console110and/or HWD150is configured for multi-link operation (MLO), the scheduler190may schedule one set of TIDs to a subset of links, and schedule a different set of TIDs to a different subset of links. Additionally or alternatively, the scheduler190may dedicate a subset of links for UL triggering/traffic and dedicate a different subset of links for other traffic (e.g., DL triggering/traffic). FIG.2is a diagram of a HWD150, in accordance with an example embodiment. In some embodiments, the HWD150includes a front rigid body205and a band210. The front rigid body205includes the electronic display175(not shown inFIG.2), the lens180(not shown inFIG.2), the sensors155, the eye trackers160A,160B, the communication interface165, and the image renderer170.
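As an illustrative sketch of the cyclic pattern mentioned above (not the disclosure's scheduling algorithm), every period-th slot of a service interval could be reserved for latency sensitive traffic while the remaining slots stay regular; the function and parameter names are ours:

    def schedule_slots(num_slots: int, period: int) -> list:
        # Reserve every `period`-th slot for latency sensitive traffic;
        # all other slots remain available for regular traffic.
        return ['latency-sensitive' if i % period == 0 else 'regular'
                for i in range(num_slots)]

    print(schedule_slots(8, period=4))
    # ['latency-sensitive', 'regular', 'regular', 'regular',
    #  'latency-sensitive', 'regular', 'regular', 'regular']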
In the embodiment shown byFIG.2, the sensors155are located within the front rigid body205, and may not be visible to the user. In other embodiments, the HWD150has a different configuration than shown inFIG.2. For example, the image renderer170, the eye trackers160A,160B, and/or the sensors155may be in different locations than shown inFIG.2. Various operations described herein can be implemented on computer systems.FIG.3shows a block diagram of a representative computing system314usable to implement the present disclosure. In some embodiments, the console110, the HWD150, or both ofFIG.1are implemented by the computing system314. Computing system314can be implemented, for example, as a consumer device such as a smartphone, other mobile phone, tablet computer, wearable computing device (e.g., smart watch, eyeglasses, head wearable display), desktop computer, laptop computer, or implemented with distributed computing devices. The computing system314can be implemented to provide a VR, AR, or MR experience. In some embodiments, the computing system314can include conventional computer components such as processors316, storage device318, network interface320, user input device322, and user output device324. Network interface320can provide a connection to a wide area network (e.g., the Internet) to which a WAN interface of a remote server system is also connected. Network interface320can include a wired interface (e.g., Ethernet) and/or a wireless interface implementing various RF data communication standards such as Wi-Fi, Bluetooth, or cellular data network standards (e.g., 3G, 4G, 5G, 60 GHz, LTE, etc.). The network interface320may include a transceiver to allow the computing system314to transmit and receive data from a remote device (e.g., an AP, a STA) using a transmitter and receiver. The transceiver may be configured to support transmission/reception supporting industry standards that enable bi-directional communication. An antenna may be attached to the transceiver housing and electrically coupled to the transceiver. Additionally or alternatively, a multi-antenna array may be electrically coupled to the transceiver such that a plurality of beams pointing in distinct directions may assist in transmitting and/or receiving data. A transmitter may be configured to wirelessly transmit frames, slots, or symbols generated by the processor unit316. Similarly, a receiver may be configured to receive frames, slots or symbols and the processor unit316may be configured to process the frames. For example, the processor unit316can be configured to determine a type of frame and to process the frame and/or fields of the frame accordingly. User input device322can include any device (or devices) via which a user can provide signals to computing system314; computing system314can interpret the signals as indicative of particular user requests or information. User input device322can include any or all of a keyboard, touch pad, touch screen, mouse or other pointing device, scroll wheel, click wheel, dial, button, switch, keypad, microphone, sensors (e.g., a motion sensor, an eye tracking sensor, etc.), and so on. User output device324can include any device via which computing system314can provide information to a user. For example, user output device324can include a display to display images generated by or delivered to computing system314.
The display can incorporate various image generation technologies, e.g., a liquid crystal display (LCD), light-emitting diode (LED) including organic light-emitting diodes (OLED), projection system, cathode ray tube (CRT), or the like, together with supporting electronics (e.g., digital-to-analog or analog-to-digital converters, signal processors, or the like). A device such as a touchscreen that functions as both an input and an output device can be used. Output devices324can be provided in addition to or instead of a display. Examples include indicator lights, speakers, tactile “display” devices, printers, and so on. FIGS.1-2illustrate devices that communicate traffic streams, some of which may be latency sensitive (e.g., those carrying AR/VR information/content).FIG.4is an interaction/flow diagram showing a process400of communicating slot assignment(s) of traffic stream(s) between two devices, according to an example implementation of the present disclosure. In some embodiments, the process400is performed by a first device401and a second device402. The first device401and second device402may be some combination of an AP (e.g., console110, router), a soft AP, and/or a station (e.g., HWD150). In some embodiments, the process400is performed by other entities. In some embodiments, the process400includes more, fewer, or different steps than shown inFIG.4. In more detail, in operation403, the first device401may generate a request message with one or more request values. A value in the request message (e.g., a request value in the request message) may be a value corresponding to a traffic stream and/or the traffic stream's TID. In at least one field of the request message, the device (e.g., first device401and/or second device402) may configure/include a latency marker (L-marker) with a value (e.g., a request value) to identify one or more traffic streams. In other embodiments, the request message may include a different marker having a request value to identify one or more traffic streams using a different characteristic/attribute of the traffic stream. For example, other types of data may be prioritized using other markers. The L-marker may be one bit in length/size, in some embodiments. For example, the L-marker (e.g., a request value of the L-marker) may indicate whether a TID is associated with latency sensitive traffic. As discussed herein, the L-marker may be used to indicate and distinguish latency sensitive traffic (e.g., prioritized traffic) over regular traffic (e.g., non-prioritized traffic). If the L-marker is set to ‘1’ (or ‘0’), the corresponding TID may be associated with latency sensitive traffic, and if the L-marker is set to ‘0’ (or ‘1’) the TID may be associated with regular traffic. One or more values (e.g., a request value) may indicate whether a traffic stream is latency sensitive. In some implementations, values (e.g., a request value) of the L-marker may be one bit. In other implementations, values (e.g., request values) of the L-marker may be multiple bits. If the L-marker includes multiple bits (e.g., a bitmap of k bits), then in some implementations, each of the k bits of the bitmap may indicate whether a corresponding traffic stream communicated (e.g., in UL or DL) or to be communicated between the first device401and the second device402, is latency sensitive. The L-marker may also include other bits in the bitmap (e.g., management bit(s)).
In the event the L-marker is multiple bits, the L-marker may indicate (or distinguish between) various types of traffic using various characteristics (or attributes) of the traffic. For example, an L-marker/value of the L-marker may in some embodiments distinguish at least four types of traffic (e.g., prioritized traffic, preferred traffic, regular traffic). The L-marker may also indicate a specific direction of traffic between the first device401and the second device402. For example, the L-marker may indicate whether a traffic stream should be communicated in UL and/or DL traffic. In some implementations, the L-marker may contain/include a first value (e.g., a first request value) indicating whether a first traffic stream in a first direction between the first device401and the second device402is latency sensitive. A second value of the L-marker (e.g., a second request value) may indicate whether a second traffic stream in a second direction between the first device401and the second device402is latency sensitive. For instance, a particular stream of UL traffic may be latency sensitive and a particular stream of DL traffic may be latency sensitive. The L-marker may indicate/include a first bitmap (e.g., 2 bytes or 16 bits) of first values (e.g., request values) that indicates a direction of traffic (e.g., UL) between the first device401and the second device402, and whether corresponding traffic streams are latency sensitive. The first bitmap may include one management bit and/or one defined bit to indicate whether all identified traffic streams are latency sensitive. The L-marker may also indicate/include a second bitmap (e.g., 2 bytes or 16 bits) of second values (e.g., second request values) that indicates a direction of traffic (e.g., DL) between the second device402and the first device401, and whether other corresponding traffic streams are latency sensitive. The second bitmap may include one management bit and/or one defined bit to indicate whether all identified traffic streams are latency sensitive. In some implementations, the first bitmap and second bitmap are contiguous (e.g., being portions of a larger bitmap). In some implementations, the contiguous nature of the bitmaps may indicate a first direction (e.g., UL) associated with the first bitmap, and a second direction (e.g., DL) associated with the second bitmap. In more detail, in operation420, the first device401may transmit the request message to the second device402. For example, the first device401may transmit the request message (e.g., an add block acknowledgement (ADDBA) request frame) as part of a handshake process for establishment of a block acknowledgement (BA) session. A bit may be appended/added/repurposed in a field (or information element) to indicate whether the traffic corresponding to the BA is latency sensitive. For example, a sub-field may indicate whether a TID to be aggregated in a BA session is a latency sensitive TID. The L-marker may comprise/set only one bit, because the ADDBA request frame (or response frame) is configured/established for a TID specified in the BA parameter set. In some implementations, the L-marker may be configured in (or as an) information element (IE). For example, the IE may be a latency sensitive traffic identifier/description/configuration used in (appended to, repurposed for, inserted in) other protocols, including the BA establishment session as discussed herein.
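To make the bitmap structure concrete, the following is a minimal sketch (our own, with assumed bit positions) that packs per-TID latency flags into a 16-bit bitmap, with one bit reserved as the "all streams latency sensitive" indicator; one such bitmap could be built per direction (UL and DL):

    def build_l_marker_bitmap(latency_tids, all_latency=False, all_bit=15):
        # Set bit i for each TID i whose traffic stream is latency
        # sensitive; optionally set the assumed "all streams" bit.
        bitmap = 0
        for tid in latency_tids:
            bitmap |= 1 << tid
        if all_latency:
            bitmap |= 1 << all_bit
        return bitmap

    ul_bitmap = build_l_marker_bitmap([1, 3])  # UL: TIDs 1 and 3 latency sensitive
    dl_bitmap = build_l_marker_bitmap([0])     # DL: TID 0 latency sensitive
    print(f"{ul_bitmap:016b} {dl_bitmap:016b}")  # 0000000000001010 0000000000000001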
The IE may be configured/defined with header information such as element ID information (which may be 1 byte or other number of bytes/bits long), IE length (which may be 1 byte or other number of bytes/bits long), and/or Element ID Extension (which may be 1 byte or other number of bytes/bits long). The IE may also include the L-marker bitmap, as described herein. The L-marker bitmap may be 2 bytes (or other number of bytes/bits long). The IE may be represented, for example, by: |Element ID (1 byte)|Length (1 byte)|Element ID Extension (1 byte)|L-Marker bitmap (2 bytes)| As an example, the first device401may transmit the request message (with the IE) as any type of handshake action frame. For instance, a negotiation process (such as process400) may be executed using the IE in one or more messages that are communicated within the negotiation process. Additionally or alternatively, the first device401may transmit an L-marker as/in a field in a request message. For example, the request message can be a slot request frame as part of a slot request handshake (or in other protocols/processes that exchange frames for negotiation or to achieve agreements). That is, one or more bits in a field of the request handshake frame may be appended/incorporated/configured/modified such that the request handshake frame conveys latency sensitivity information. In an example, the L-marker IE/field (or bitmap) may be appended/incorporated/configured in the slot request handshake frame. In more detail, in operation404, the second device402may receive the request message transmitted by the first device401. The second device402may extract information from the request message such as L-marker information. In operation406, the second device402may generate a response message in response to the received information extracted from the request message (e.g., including the L-marker information). For example, the response message may comprise an ADDBA response frame (sent in response to the ADDBA request frame), as part of the handshake process for establishment of a BA session. In another example, the response message may comprise a handshake action frame (e.g., a slot response handshake frame, in response to the slot handshake request frame, for slot assignment). In some embodiments, the response message may be similar to the request message (e.g., structurally, operationally). In other embodiments, the response message may be different from the request message (e.g., structurally, operationally). In at least one field/IE of the response message, the device (e.g., first device401and/or second device402) may configure a response latency marker (response L-marker) with a value (e.g., a response value) to identify/indicate/classify one or more traffic streams (e.g., as being latency sensitive or not). In other embodiments, the response message may include a different/separate marker using a response value to identify one or more traffic streams using a different characteristic/attribute of the traffic stream. The second device402may manage (e.g., use for scheduling traffic streams) the information extracted from the response L-marker in the response message. In some embodiments, if the second device402is an AP, the second device402may schedule uplink traffic (or peer-to-peer traffic) to be transmitted from the first device401(e.g., a STA).
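As a sketch only, the IE layout shown above could be serialized as follows; the Element ID and Element ID Extension values are placeholders (not assigned identifiers), and the Length field is assumed to count the bytes following it (extension plus bitmap, i.e. 3):

    import struct

    def build_l_marker_ie(l_marker_bitmap: int,
                          element_id: int = 255,
                          element_id_ext: int = 0x42) -> bytes:
        # |Element ID (1)|Length (1)|Element ID Extension (1)|bitmap (2)|
        return struct.pack('<BBBH', element_id, 3, element_id_ext,
                           l_marker_bitmap)

    ie = build_l_marker_ie(0b0000000000001010)  # TIDs 1 and 3 marked
    print(ie.hex())                             # ff03420a00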
In some embodiments, if the second device402is a STA, the second device402may schedule downlink traffic (or peer-to-peer traffic) to be transmitted from the first device401(e.g., an AP, a different STA). In some embodiments, the response L-marker generated for the response message may be identical or similar to the L-marker generated in the request message (e.g., structurally, operationally, such as using a defined IE, field and/or bitmap format). The response values may be responses corresponding to (e.g., matching) the request values in the L-marker generated in the request message. For instance, a request value may request that a particular TID stream be identified as latency sensitive traffic, and a response value may accept the request by mirroring the request values of the request message. If the response values of the response L-marker do not mirror or match the request values of the L-marker, then the requested information may be rejected (partially rejected, for one or more of the identified traffic streams, or wholly rejected for all identified traffic streams). For example, a particular traffic stream may not be denoted as a latency sensitive traffic stream for slot scheduling purposes, as indicated in the corresponding response value of the response L-marker. The traffic stream not denoted as a latency sensitive traffic stream may not receive (or be assigned to) prioritized slot(s) for communication. In other embodiments, the response L-marker generated for the response message may be different from the L-marker generated in the request message (e.g., structurally, operationally, for example using a different format, IE or field). For example, a response L-marker may accept the request values indicated in the L-marker by communicating (with a response value, for instance) a one-bit response (instead of a multi-bit bitmap). For example, the response L-marker may be set to ‘1’ (or ‘0’) to accept the information associated with the L-marker of the request message. In more detail, in operation422, the second device402may transmit the response message to the first device401. For example, the second device402may transmit the response message as part of a slot request-response handshake process. In more detail, in operation407, the first device401may receive, from the second device402, the response message. The response message may be received in response (in part) to the request message communicated to the second device402. The first device401may extract information from the response message, including a response L-marker (or L-markers) and any response values, and determine whether the L-marker and corresponding request values of the request message were (partially or completely) approved, rejected, and/or modified. For example, the first device401may determine that the second device402approved the L-marker in the request message if the response L-marker in the response message is the same as the L-marker in the request message. In a different example, the first device401may determine that the second device402modified (or rejected) the L-marker (or L-markers) in the request message if the response L-marker (or L-markers) in the response message is at least partially different. If the second device402modified the L-marker, in some embodiments, the first device401may approve/accept/re-request (e.g., automatically or by default) the newly modified L-marker designations.
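The mirroring rule described above can be sketched as a simple bitwise comparison (our illustration, assuming the 16-bit bitmaps introduced earlier): a requested TID counts as approved only where the response bit matches the request bit.

    def approved_latency_tids(request_bitmap: int, response_bitmap: int):
        # A TID is approved as latency sensitive only if its bit is set
        # in both the request and the mirrored response.
        agreed = request_bitmap & response_bitmap
        return [tid for tid in range(16) if agreed & (1 << tid)]

    request = 0b0000000000001010    # TIDs 1 and 3 requested as latency sensitive
    response = 0b0000000000000010   # the responder mirrors only TID 1
    print(approved_latency_tids(request, response))  # [1]; TID 3 was rejected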
For example, the first device401may transmit a mirrored/matching L-marker (or L-markers) back to the second device402(not shown). In some embodiments, the response message from the second device can include (in addition to a response L-Marker, or in place of a response L-marker) an indication (e.g., in a status field) for example indicating success (e.g., the second device accepted/approved the L-marker), a rejection with suggested change(s) to the L-marker information, or a rejection (without suggested changes). The first device, responsive to receiving the indication and/or the response message, can decide/determine whether to send a further request message (e.g., a re-request, with or without the suggested change(s)). In some embodiments, the first device may decide to give up or drop its request responsive to the indication (e.g., if the second device indicates a rejection in the response message). For example, the first device401may give up its request to seek/identify particular latency sensitive traffic stream(s), in some implementations, if the second device provides a partial or complete rejection. Referring back to operation407, the first device401may initiate a process according to the response message. For example, the first device401may schedule the traffic streams identified in the response message as latency sensitive traffic streams (e.g., approved/accepted by the second device402in the request message as latency sensitive traffic streams), to time slots that are prioritized, and may communicate these traffic streams as prioritized traffic streams instead of as regular traffic streams. Communicating the traffic streams as prioritized (or latency sensitive) traffic streams may include transmitting the traffic streams using prioritized slot(s), prioritized time duration(s), prioritized symbols, prioritized carrier(s), and the like. That is, the first device401may differentiate the prioritized traffic stream from a regular traffic stream. FIG.5is an example scheduling diagram based on information extracted from an L-marker (e.g., a response L-marker from an access point), according to an example implementation of the present disclosure. In some embodiments, devices (such as STA501A,501B and501C, referred to herein as “STAs501”) may have latency sensitive traffic to communicate with the access point. Each STA501may have different traffic streams, and some of these are to be prioritized to meet latency requirements. Each traffic stream to be communicated by each of the STAs501may include latency sensitive traffic or non-latency sensitive traffic (e.g., regular traffic, non-prioritized traffic). An L-marker may correspond to each stream of traffic. For example, as shown in STA501A, a stream of traffic may correspond to TID 3 and be identified as latency sensitive traffic (e.g., the L-marker is set to ‘1’). Another stream of traffic may correspond to TID 2 and be identified as regular traffic (e.g., the L-marker is set to ‘0’). Another stream of traffic may correspond to TID 1 and be identified as latency sensitive traffic (e.g., the L-marker is set to ‘1’). Another stream of traffic may correspond to TID 0 and be identified as regular traffic (e.g., the L-marker is set to ‘0’). As shown, each STA501may have one or more traffic streams to transmit. The traffic (traffic streams) to be transmitted may be queued by the STA501.
For example, STA501A may queue and seek to transmit traffic505A, STA501B may queue and seek to transmit traffic505B, and STA501C may queue and seek to transmit traffic505C. Each of the STAs501may communicate the traffic they seek to transmit according to traffic stream information (e.g., TIDs) and L-markers (e.g., conveyed by the access point). For example, the STAs501may transmit request messages (as described in operation403and420inFIG.4) containing L-markers to a scheduler503(e.g., scheduler190A inFIG.1). The scheduler503(e.g., of an AP, a soft AP, a console110) may receive the request message and may agree to the traffic designations requested by STAs501. Accordingly, slots of a service period507may be scheduled with prioritized traffic (e.g., latency sensitive traffic) and non-prioritized traffic (e.g., regular traffic). Some implementations include electronic components, such as microprocessors, storage and memory that store computer program instructions in a computer readable storage medium (e.g., non-transitory computer readable medium). Many of the features described in this specification can be implemented as processes that are specified as a set of program instructions encoded on a computer readable storage medium. When these program instructions are executed by one or more processors, they cause the processors to perform various operations indicated in the program instructions. Examples of program instructions or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter. Through suitable programming, processor316can provide various functionality for computing system314, including any of the functionality described herein as being performed by a server or client, or other functionality associated with message management services. It will be appreciated that computing system314is illustrative and that variations and modifications are possible. Computer systems used in connection with the present disclosure can have other capabilities not specifically described here. Further, while computing system314is described with reference to particular blocks, it is to be understood that these blocks are defined for convenience of description and are not intended to imply a particular physical arrangement of component parts. For instance, different blocks can be located in the same facility, in the same server rack, or on the same motherboard. Further, the blocks need not correspond to physically distinct components. Blocks can be configured to perform various operations, e.g., by programming a processor or providing appropriate control circuitry, and various blocks might or might not be reconfigurable depending on how the initial configuration is obtained. Implementations of the present disclosure can be realized in a variety of apparatus including electronic devices implemented using any combination of circuitry and software. Having now described some illustrative implementations, it is apparent that the foregoing is illustrative and not limiting, having been presented by way of example. In particular, although many of the examples presented herein involve specific combinations of method acts or system elements, those acts and those elements can be combined in other ways to accomplish the same objectives.
Acts, elements and features discussed in connection with one implementation are not intended to be excluded from a similar role in other implementations or embodiments. The hardware and data processing components used to implement the various processes, operations, illustrative logics, logical blocks, modules and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, or any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some embodiments, particular processes and methods may be performed by circuitry that is specific to a given function. The memory (e.g., memory, memory unit, storage device, etc.) may include one or more devices (e.g., RAM, ROM, Flash memory, hard disk storage, etc.) for storing data and/or computer code for completing or facilitating the various processes, layers and modules described in the present disclosure. The memory may be or include volatile memory or non-volatile memory, and may include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. According to an exemplary embodiment, the memory is communicably connected to the processor via a processing circuit and includes computer code for executing (e.g., by the processing circuit and/or the processor) the one or more processes described herein. The present disclosure contemplates methods, systems and program products on any machine-readable media for accomplishing various operations. The embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. Combinations of the above are also included within the scope of machine-readable media.
Machine-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions. The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing,” “involving,” “characterized by,” “characterized in that,” and variations thereof herein, is meant to encompass the items listed thereafter, equivalents thereof, and additional items, as well as alternate implementations consisting of the items listed thereafter exclusively. In one implementation, the systems and methods described herein consist of one, each combination of more than one, or all of the described elements, acts, or components. Any references to implementations or elements or acts of the systems and methods herein referred to in the singular can also embrace implementations including a plurality of these elements, and any references in plural to any implementation or element or act herein can also embrace implementations including only a single element. References in the singular or plural form are not intended to limit the presently disclosed systems or methods, their components, acts, or elements to single or plural configurations. References to any act or element being based on any information, act or element can include implementations where the act or element is based at least in part on any information, act, or element. Any implementation disclosed herein can be combined with any other implementation or embodiment, and references to “an implementation,” “some implementations,” “one implementation” or the like are not necessarily mutually exclusive and are intended to indicate that a particular feature, structure, or characteristic described in connection with the implementation can be included in at least one implementation or embodiment. Such terms as used herein are not necessarily all referring to the same implementation. Any implementation can be combined with any other implementation, inclusively or exclusively, in any manner consistent with the aspects and implementations disclosed herein. Where technical features in the drawings, detailed description or any claim are followed by reference signs, the reference signs have been included to increase the intelligibility of the drawings, detailed description, and claims. Accordingly, neither the reference signs nor their absence have any limiting effect on the scope of any claim elements. Systems and methods described herein may be embodied in other specific forms without departing from the characteristics thereof. References to “approximately,” “about,” “substantially,” or other terms of degree include variations of +/−10% from the given measurement, unit, or range unless explicitly indicated otherwise. Coupled elements can be electrically, mechanically, or physically coupled with one another directly or with intervening elements. Scope of the systems and methods described herein is thus indicated by the appended claims, rather than the foregoing description, and changes that come within the meaning and range of equivalency of the claims are embraced therein. The term “coupled” and variations thereof includes the joining of two members directly or indirectly to one another. Such joining may be stationary (e.g., permanent or fixed) or moveable (e.g., removable or releasable).
Such joining may be achieved with the two members coupled directly with or to each other, with the two members coupled with each other using a separate intervening member and any additional intermediate members coupled with one another, or with the two members coupled with each other using an intervening member that is integrally formed as a single unitary body with one of the two members. If “coupled” or variations thereof are modified by an additional term (e.g., directly coupled), the generic definition of “coupled” provided above is modified by the plain language meaning of the additional term (e.g., “directly coupled” means the joining of two members without any separate intervening member), resulting in a narrower definition than the generic definition of “coupled” provided above. Such coupling may be mechanical, electrical, or fluidic. References to “or” can be construed as inclusive so that any terms described using “or” can indicate any of a single, more than one, and all of the described terms. A reference to “at least one of ‘A’ and ‘B’” can include only ‘A’, only ‘B’, as well as both ‘A’ and ‘B’. Such references used in conjunction with “comprising” or other open terminology can include additional items. Modifications of described elements and acts such as variations in sizes, dimensions, structures, shapes and proportions of the various elements, values of parameters, mounting arrangements, use of materials, colors, and orientations can occur without materially departing from the teachings and advantages of the subject matter disclosed herein. For example, elements shown as integrally formed can be constructed of multiple parts or elements, the position of elements can be reversed or otherwise varied, and the nature or number of discrete elements or positions can be altered or varied. Other substitutions, modifications, changes and omissions can also be made in the design, operating conditions and arrangement of the disclosed elements and operations without departing from the scope of the present disclosure. References herein to the positions of elements (e.g., “top,” “bottom,” “above,” “below”) are merely used to describe the orientation of various elements in the FIGURES. The orientation of various elements may differ according to other exemplary embodiments, and such variations are intended to be encompassed by the present disclosure.
57,241
11943657
DETAILED DESCRIPTION Those skilled in the art could understand that, as described in the background, existing techniques cannot activate or deactivate a PDCP duplication function flexibly and effectively. In existing techniques, PDCP duplication includes CA duplication and DC duplication. In a CA scenario, a PDCP duplication function is configured for each RB through an RRC signaling. When a UE uses PDCP duplication, an additional duplication RLC entity can be established for the RB. The RRC message may also indicate a cell group identifier and an LCID of a primary RLC entity. The RRC message may also set a duplication initial state (for example, an active state or an inactive state) for the RB. The PDCP duplication in the CA scenario corresponds to a Medium Access Control (MAC) entity. In addition, the RRC message may be configured with two Logical CHannels (LCHs) (also referred to as two RLC entities) to be mapped to different carriers respectively. The PDCP duplication function needs to be configured through the RRC signaling and then activated before usage. The activation and deactivation of the PDCP duplication function is achieved by a base station (for example, gNB) activating/deactivating a MAC Control Element (CE). The MAC CE includes bitmap information, and each bit in the bitmap information corresponds to a DRB configured with a PDCP duplication function. When the bit corresponding to the DRB is 1, it means that the DRB is activated, and when the bit corresponding to the DRB is 0, it means that the DRB is deactivated. After the DRB is activated, a PDCP layer may perform duplication on the data packet, and transmit the same two duplicated PDCP PDUs to the two RLC entities corresponding to the DRB respectively. The two RLC entities may transmit the duplicated PDCP PDUs respectively. After the PDCP duplication function of the DRB is deactivated, the correspondence between the LCH and the carrier corresponding to the DRB no longer exists. The PDCP layer of the UE no longer performs duplication on the data packet, and the primary RLC entity (i.e., the primary LCH) transmits the data packet. The PDCP entity of the UE may notify the secondary RLC entity to delete the data packet buffer in the secondary LCH. In a DC scenario, a PDCP duplication function uses split bearer as a baseline for duplication. Similar to the CA scenario, the PDCP duplication function can be used only after it is configured through an RRC signaling and activated. The activation/deactivation of the PDCP duplication function is realized by a base station (for example, gNB) activating/deactivating a MAC CE. The MAC CE still includes bitmap information, and each bit in the bitmap information corresponds to a DRB configured with a PDCP duplication function. The mapping between the DRB and the bitmap is based on the DRB ID configured with a duplication function. When the bit corresponding to the DRB is 1, it means that the PDCP duplication function of the DRB is activated, and when the bit corresponding to the DRB is 0, it means that the PDCP duplication function of the DRB is deactivated. After the DRB is activated, the PDCP entity may perform duplication on the data packet, and transmit the same two duplicated PDCP PDUs to the two RLC entities corresponding to the DRB respectively. The two RLC entities may transmit the duplicated PDCP PDUs respectively.
After the DRB is deactivated, the UE reuses a split operation, and uses associated parameters of an initial split operation to perform the split operation. In the CA scenario, the PDCP duplication function is configured for each RB through an RRC signaling. When the UE uses PDCP duplication, an additional duplication RLC entity may be established for the RB. The RRC signaling may also indicate a cell group identifier and an LCID of a primary RLC entity. The RRC signaling may also set a duplication initial state (for example, an active state or an inactive state) for the RB. The PDCP duplication in the CA scenario corresponds to a MAC entity. The RRC message may also be configured with two LCHs (i.e., two RLC entities) to be mapped to different carriers respectively. The PDCP duplication function needs to be configured through RRC signaling and then activated before usage. The activation and deactivation of the PDCP duplication function is implemented by the base station (for example, gNB) activating/deactivating the MAC CE. The MAC CE includes bitmap information, and each bit in the bitmap information corresponds to a DRB configured with a PDCP duplication function. When the bit corresponding to the DRB is 1, it means that the DRB is activated, and when the bit corresponding to the DRB is 0, it means that the DRB is deactivated. After the DRB is activated, the PDCP layer may perform duplication on the data packet, and transmit the same two duplicated PDCP PDUs to the two RLC entities corresponding to the DRB respectively. The two RLC entities may transmit the duplicated PDCP PDUs respectively. After the DRB is deactivated, the correspondence between the LCH and the carrier corresponding to the DRB no longer exists. The PDCP layer of the UE no longer performs duplication on the data packet, and the primary RLC entity (i.e., the primary LCH) transmits the data packet. The PDCP entity of the UE may notify the secondary RLC entity to delete the data packet buffer in the secondary LCH. In the DC scenario, the PDCP duplication function uses split bearer as the baseline for duplication. Similar to the CA scenario, the PDCP duplication function can be used only after it is configured through the RRC signaling and activated. The activation/deactivation of the PDCP duplication function is realized by the base station (for example, gNB) activating/deactivating the MAC CE. The MAC CE includes bitmap information, and each bit in the bitmap information corresponds to a DRB configured with a PDCP duplication function. The mapping between the DRB and the bitmap is based on the DRB ID configured with a duplication function. When the bit corresponding to the DRB is 1, it means that the DRB is activated, and when the bit corresponding to the DRB is 0, it means that the DRB is deactivated. After the DRB is activated, the PDCP layer may perform duplication on the data packet, and transmit the same two duplicated PDCP PDUs to the two RLC entities corresponding to the DRB respectively. The two RLC entities may transmit the duplicated PDCP PDUs respectively. After the DRB is deactivated, the UE may fall back to an initial split operation and use its associated configuration to perform a split operation.
In the 15th (Release 15, R15) version of an NR protocol, for each RB, configuration information of PDCP duplication includes the following content: (1) a PDCP duplication field, used to indicate whether a PDCP duplication function is configured, wherein when this field appears, it means that an initial state of PDCP duplication is active; (2) a PDCP entity corresponding to RLC1 (for example, LCH1), RLC2 (LCH2) and RLC3 (LCH3); and (3) a cell corresponding to each LCH, wherein this parameter is configured only in the CA scenario. In the existing techniques, under two-leg duplication in both CA and DC scenarios, the activation/deactivation MAC CE includes 8 D-fields, 1 byte in total. As shown in Table 1, Di (i is from 0 to 7) represents an activation/deactivation state of DRB i configured with the PDCP duplication function in an RLC entity associated with a same MAC entity, where i is related to DRBs configured with the PDCP duplication function arranged in an ascending order of DRB IDs. Di may be 1 or 0. Di of 1 indicates activating the PDCP duplication function of DRB i, and Di of 0 indicates deactivating the PDCP duplication function of DRB i.

TABLE 1
D7  D6  D5  D4  D3  D2  D1  D0

In the existing techniques, the PDCP duplication function is configured at the granularity of RB, that is, once the PDCP duplication function is configured for an RB, all data packets in the RB must be duplicated when the PDCP duplication function is activated. In the existing techniques, regardless of the activation/deactivation mechanism of CA duplication or DC duplication, one bit is used to indicate whether each DRB is configured with a PDCP duplication function. If the duplication function of the DRB is activated, the PDCP duplication operation is performed, and data transmission may be performed on both legs; if the duplication function of the DRB is deactivated, the PDCP duplication operation is not performed, and only a primary leg is used for data transmission. However, for multi-connectivity duplication, the RB configured with the PDCP duplication function may be configured with more than two legs. In this case, for the RB, which legs are used for data transmission cannot be indicated by 1 bit. Therefore, the existing activation/deactivation mechanism is not applicable to multi-connectivity duplication. In embodiments of the present disclosure, a PDCP duplication function activation method is provided, including: receiving a PDCP duplication function activation signaling from a network, wherein the PDCP duplication function activation signaling includes at least one data offload indication identifier of at least one radio bearer, and the at least one radio bearer is configured with a PDCP duplication function; and determining a duplication number of a data packet of the at least one radio bearer based on the PDCP duplication function activation signaling.
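As a rough illustration of the one-byte activation/deactivation MAC CE of Table 1, the Python sketch below packs and unpacks the D-fields. The function names and the dictionary representation are assumptions for illustration, not part of any specification text or of the disclosed embodiments.

```python
def build_duplication_mac_ce(drb_states):
    """Pack per-DRB activation states into one byte (Table 1).

    drb_states maps DRB IDs configured with PDCP duplication to
    True (activate) / False (deactivate); bit Di corresponds to the
    i-th such DRB in ascending order of DRB ID.
    """
    ce = 0
    for i, drb_id in enumerate(sorted(drb_states)[:8]):
        if drb_states[drb_id]:
            ce |= 1 << i  # Di = 1 activates duplication for DRB i
    return ce


def parse_duplication_mac_ce(ce, drb_ids):
    """Recover per-DRB activation states from the one-byte MAC CE."""
    return {drb_id: bool((ce >> i) & 1)
            for i, drb_id in enumerate(sorted(drb_ids)[:8])}


# DRB IDs 1 and 4 activated, DRB ID 2 deactivated -> D0=1, D1=0, D2=1.
ce = build_duplication_mac_ce({1: True, 2: False, 4: True})
assert ce == 0b101
assert parse_duplication_mac_ce(ce, [1, 2, 4]) == {1: True, 2: False, 4: True}
```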
With the embodiments of the present disclosure, the data offload indication identifier for multi-connectivity duplication (for example, PDCP duplication combined with DC and CA) may be determined based on the activation signaling configured with the PDCP duplication function, whether to perform data offload during multi-connectivity duplication may be determined based on the data offload indication identifier, and the duplication number of the data packet may be determined accordingly. This may effectively and flexibly activate or deactivate the PDCP duplication function, so that the network may select a leg with better transmission quality based on radio leg transmission quality and other associated factors, thereby improving transmission resource utilization. In order to clarify the objects, characteristics and advantages of the disclosure, embodiments of the present disclosure will be described in detail in conjunction with accompanying drawings. FIG.1is a flow chart of a PDCP duplication function activation method according to an embodiment. The method may be applied in a UE, i.e., performed by the UE. Those skilled in the art could understand that the method may further be used for deactivating a PDCP duplication function of the UE. Referring toFIG.1, the method may include S101and S102. In S101, a PDCP duplication function activation signaling is received from a network, wherein the PDCP duplication function activation signaling includes at least one data offload indication identifier of at least one radio bearer, and the at least one radio bearer is configured with a PDCP duplication function. In S102, a duplication number of a data packet of the at least one radio bearer is determined based on the PDCP duplication function activation signaling. In some embodiments, a base station (such as an NR gNB) at the network may configure first activation configuration information for the UE, where the first activation configuration information is an RRC signaling. In some embodiments, the first activation configuration information includes: a PDCP duplication field for indicating to configure the PDCP duplication function, a logical channel configured by each PDCP entity, and a cell (i.e., carrier) used by each logical channel. It should be noted that the base station may use multiple RRC messages to transmit information carried in the first activation configuration information, or use one RRC message to transmit the information carried in the first activation configuration information. In some embodiments, the PDCP duplication field used for indicating to configure the PDCP duplication function may indicate whether the PDCP duplication function is configured for an RB. When the PDCP duplication field exists, it may also indicate that an initial state is a PDCP duplication function activated state. In some embodiments, LCHs may correspond to RLC entities in one-to-one correspondence. For example, the first activation configuration information or second activation configuration information may include logical channels LCH1, LCH2 and LCH3 configured by PDCP entities. The logical channel LCH1 corresponds to an RLC entity 1, the logical channel LCH2 corresponds to an RLC entity 2, and the logical channel LCH3 corresponds to an RLC entity 3. Further, based on the LCHs configured by the PDCP entities, i.e., the correspondence between the LCHs and the RLC entities, the correspondence between the PDCP entities and the RLC entities can be obtained.
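The first activation configuration information can be pictured as a small container object. The sketch below is a hypothetical Python rendering of the fields just listed (duplication field, LCH-to-RLC mapping, carrier set per LCH); all names and values are illustrative assumptions, not spec-defined structures.

```python
from dataclasses import dataclass


@dataclass
class FirstActivationConfig:
    # Presence of the duplication field implies the initial state is active.
    pdcp_duplication: bool
    # LCHs configured by the PDCP entity, in one-to-one correspondence
    # with RLC entities: LCH id -> RLC entity id.
    lch_to_rlc: dict
    # Cell (carrier) set used by each LCH; sets must not overlap.
    lch_to_cells: dict


cfg = FirstActivationConfig(
    pdcp_duplication=True,
    lch_to_rlc={1: 1, 2: 2, 3: 3},          # LCH1->RLC1, LCH2->RLC2, LCH3->RLC3
    lch_to_cells={1: [0], 2: [1], 3: [2]},  # disjoint cell sets per LCH
)

# The PDCP-entity-to-RLC-entity correspondence follows from lch_to_rlc.
assert set(cfg.lch_to_rlc.values()) == {1, 2, 3}
```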
In some embodiments, an RB configured with a multi-connectivity PDCP duplication function combined with DC and CA may be indicated by a cell group ID and an LCID. Those skilled in the art could understand that an RB that is configured with the CA PDCP duplication function only may be indicated by the LCID only. Further, an LCH corresponding to an RB configured with the CA PDCP duplication function may also be configured with corresponding carrier parameter information to ensure that cell sets corresponding to different LCHs do not overlap. Those skilled in the art could understand that for an RB configured with a multi-connectivity PDCP duplication function (for example, a PDCP duplication function combined with DC and CA), if the MAC entity is associated with only one LCH, the LCH does not need to be configured with cell parameter information. That is, when the PDCP duplication function is activated, the data packet of the LCH may be transmitted through any cell corresponding to the MAC entity. If the MAC entity is associated with multiple LCHs, the multiple LCHs need to be configured with cell parameter information to ensure that cell sets corresponding to the multiple LCHs do not overlap. After obtaining the first activation configuration information, the base station may transmit the first activation configuration information to the UE through an RRC signaling, so that the UE can determine the at least one logical channel actually used by the PDCP duplication function based on the PDCP duplication function activation signaling. Alternatively, the base station may configure second activation configuration information for the UE, where the second activation configuration information is an RRC message. In some embodiments, the second activation configuration information includes: a PDCP duplication field for indicating to configure the PDCP duplication function, a logical channel configured by each PDCP entity, priorities of logical channels, and a carrier used by each logical channel. It should be noted that the base station may use multiple RRC messages, or one RRC message, to transmit information carried in the first activation configuration information or the second activation configuration information. In some embodiments, the PDCP duplication function is a PDCP duplication function combined with DC and CA, and in the second activation configuration information, priorities of logical channels belonging to a master node and priorities of logical channels belonging to a secondary node may be separately ordered. In some embodiments, when the PDCP duplication function is a PDCP duplication function combined with DC and CA, LCHs are divided into two groups: one group of LCHs belongs to the master node and forms a master node LCH group, and the other group of LCHs belongs to the secondary node and forms a secondary node LCH group. In this case, the second activation configuration information may first arrange an order of the master node LCH group and the secondary node LCH group, and then sort the LCHs in the master node LCH group and the LCHs in the secondary node LCH group to obtain priorities of the LCHs. The second activation configuration information may arrange the priorities of the LCHs of the master node and the secondary node in an order of the master node LCH group first and the secondary node LCH group last.
Afterward, the priorities of the LCHs in the master node LCH group or in the secondary node LCH group may be arranged in an ascending order of LCID. Alternatively, the second activation configuration information may arrange the priorities of the LCHs of the master node and the secondary node in the order of the master node LCH group first and the secondary node LCH group last. Afterward, the priorities of the LCHs in the master node LCH group or in the secondary node LCH group may be arranged in a descending order of LCID. Alternatively, the second activation configuration information may arrange the priorities of the LCHs of the master node and the secondary node in an order of the secondary node LCH group first and the master node LCH group last. Afterward, the priorities of the LCHs in the master node LCH group or in the secondary node LCH group may be arranged in an ascending order of LCID. Alternatively, the second activation configuration information may arrange the priorities of the LCHs of the master node and the secondary node in an order of the secondary node LCH group first and the master node LCH group last. Afterward, the priorities of the LCHs in the master node LCH group or in the secondary node LCH group may be arranged in a descending order of LCID. For example, an RB corresponds to 5 LCHs, including LCH1, LCH2, LCH3, LCH4′ and LCH5′. LCH1, LCH2 and LCH3 belong to a master node, and LCH4′ and LCH5′ belong to a secondary node. If the second activation configuration information arranges the priorities in the order of the master node LCH group first and the secondary node LCH group last, and the priorities of LCHs in the master node LCH group or in the secondary node LCH group are arranged in an ascending order of LCID, a priority order of the LCHs in the second activation configuration information may be as follows: each LCH belonging to the master node has a higher priority than each LCH belonging to the secondary node, LCH1 has the highest priority, and LCH5′ has the lowest priority. Alternatively, in the second activation configuration information, priorities of logical channels belonging to a master node and priorities of logical channels belonging to a secondary node may be ordered uniformly. In this case, the second activation configuration information does not distinguish whether an LCH belongs to the master node or the secondary node, and the priority of each LCH may be directly configured. For example, a priority order of the LCHs is as follows: LCH1, LCH4′, LCH2, LCH3 and LCH5′, which indicates that LCH1 belonging to the master node has the highest priority, LCH4′ belonging to the secondary node has the second highest priority, and LCH5′ belonging to the secondary node has the lowest priority. After obtaining the second activation configuration information, the base station may transmit the second activation configuration information to the UE through an RRC signaling, so that the UE may determine the at least one logical channel actually used by the PDCP duplication function based on the PDCP duplication function activation signaling. Further, the base station may determine the PDCP duplication function activation signaling to activate the PDCP duplication function. In some embodiments, the PDCP duplication function activation signaling may include at least one data offload indication identifier of an RB. The data offload indication identifier may be 0 or 1, to indicate whether the RB is allowed to perform data offload.
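The grouped priority ordering described above can be condensed into a short helper. This is a minimal sketch, assuming LCIDs are plain integers (LCH4′ and LCH5′ are modelled as LCIDs 4 and 5); the function name and flags are invented for illustration.

```python
def lch_priority_order(master_lcids, secondary_lcids,
                       master_first=True, ascending=True):
    """Return LCIDs from highest to lowest priority under the grouped
    ordering rules: sort within each group, then place the groups in
    the configured order."""
    master = sorted(master_lcids, reverse=not ascending)
    secondary = sorted(secondary_lcids, reverse=not ascending)
    return master + secondary if master_first else secondary + master


# Master group first, ascending LCID: LCH1 has the highest priority and
# LCH5' the lowest, matching the example above.
assert lch_priority_order([1, 2, 3], [4, 5]) == [1, 2, 3, 4, 5]
# Secondary group first, descending LCID:
assert lch_priority_order([1, 2, 3], [4, 5],
                          master_first=False, ascending=False) == [5, 4, 3, 2, 1]
```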
In some embodiments, the PDCP duplication function activation signaling may further include indication information used to indicate LCHs used by the at least one radio bearer. There may be more than 3 LCHs associated with each radio bearer, but the number of LCHs (that is, used LCHs) activated by the base station may be less than the number of LCHs associated with the radio bearer. For example, the number of LCHs used by the radio bearer is two. In some embodiments, the number of RBs configured with the PDCP duplication function may be one or more. When the number of RBs configured with the PDCP duplication function is one, the PDCP duplication function activation signaling may include one piece of indication information. In this case, the data offload indication identifiers may correspond to the RBs configured with the PDCP duplication function in one-to-one correspondence, and the data offload indication identifiers may be disposed before or after the indication information of the corresponding RBs (for example, all bits of the corresponding legs or LCHs). When there are a plurality of RBs configured with the PDCP duplication function, the PDCP duplication function activation signaling may include a plurality of data offload indication identifiers and a plurality of pieces of indication information. The plurality of pieces of indication information have a one-to-one correspondence with the plurality of RBs, and may be arranged in an ascending or descending order based on radio bearer identifiers of the radio bearers. In this case, the data offload indication identifiers may have a one-to-one correspondence with the RBs configured with the PDCP duplication function, and may be disposed before or after the indication information of the corresponding RBs. Alternatively, the data offload indication identifiers correspond only to some of the RBs in a one-to-one correspondence, and may be disposed before or after the indication information of the corresponding RBs. In some embodiments, if the base station transmits the first activation configuration information, each piece of indication information may use a bitmap to indicate a usage status of the LCHs configured for each radio bearer. The usage status of an LCH is indicated by 0 or 1. For example, if the usage status is 0, it means that the LCH corresponding to the bit is not used, that is, the PDCP duplication function corresponding to the LCH is not activated; if the usage status is 1, it means that the LCH corresponding to the bit is used, that is, the PDCP duplication function corresponding to the LCH is activated. When the bitmap is used to indicate the usage status of the LCHs configured for the radio bearer, the LCHs may be sorted in an ascending or descending order based on LCID. Specifically, when the bitmap is adopted, each bit is associated with one LCH, and the LCHs may be sorted in an ascending or descending order based on the LCID. In some embodiments, when the PDCP duplication function is a CA PDCP duplication function, in the indication information, the LCHs may be arranged in an ascending or descending order of the LCID. Those skilled in the art could understand that in practice, for example, if the LCHs include LCH1, LCH2 and LCH3, and the UE and the base station pre-negotiate that the LCHs indicated by the indication information are arranged in an ascending order of LCID, the indication information may include 3 bits for LCH1, LCH2 and LCH3.
Alternatively, when the PDCP duplication function is a PDCP duplication function combined with DC and CA, in the indication information, the indication identifiers of the LCHs may be divided into two groups for record, where one group of LCHs belongs to the master node, and the other group of LCHs belongs to the secondary node. The indication identifiers of the logical channels in each group may be arranged in an ascending order or in a descending order based on LCID. For example, the LCHs of the RB include LCH1, LCH2, LCH3, LCH4′ and LCH5′. LCH1, LCH2 and LCH3 belong to the master node, and LCH4′ and LCH5′ belong to the secondary node. For example, when the arrangement order in the indication information is master node LCHs first and secondary node LCHs last, and each group of LCHs is arranged in an ascending order of LCID, the LCHs indicated by the indication information are LCH1, LCH2, LCH3, LCH4′ and LCH5′ in sequence. For another example, when the arrangement order in the indication information is secondary node LCHs first and master node LCHs last, and each group of LCHs is arranged in a descending order of LCID, the LCHs indicated by the indication information are LCH5′, LCH4′, LCH3, LCH2 and LCH1 in sequence. Further, for any leg (for example, LCH) associated with the RB configured with the PDCP duplication function, one bit is used to indicate whether to use the leg for data transmission. For example, when the bit is 1, it may indicate that the LCH corresponding to the leg is used; when the bit is 0, it indicates that the LCH corresponding to the leg is not used. For example, three LCHs of a particular RB are LCH1, LCH2 and LCH3. If the LCHs indicated by the indication information are arranged in an ascending order of LCID, when the indication information is “110”, it means that a bit value of LCH1 is 1, a bit value of LCH2 is 1, and a bit value of LCH3 is 0, that is, the RB configured with the PDCP duplication function uses LCH1 and LCH2 to duplicate and transmit data, but does not use LCH3 to transmit data. Alternatively, the indication information is “11010”, and it is known that the LCHs indicated by the indication information are LCH1, LCH2, LCH3, LCH4′ and LCH5′ in order. It can be seen that a bit value corresponding to LCH1 is 1, a bit value corresponding to LCH2 is 1, a bit value corresponding to LCH3 is 0, a bit value corresponding to LCH4′ is 1, and a bit value corresponding to LCH5′ is 0. Based on each bit value, it can be known that the RB configured with the PDCP duplication function uses the LCH1, LCH2 and LCH4′ legs to transmit data, and does not use LCH3 in the master node LCH group or LCH5′ in the secondary node LCH group to transmit data. Alternatively, if the base station transmits the second activation configuration information, the indication information may include the number of logical channels used by the radio bearer. In some embodiments, the UE may determine the logical channels used by the radio bearer based on the number of logical channels in the indication information and the priorities of the logical channels in the second activation configuration information. Alternatively, the PDCP duplication function is a CA PDCP duplication function, and the indication information indicates the number of LCHs used by the PDCP duplication function, i.e., the number of activated legs.
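The bitmap form of the indication information decodes mechanically: each bit selects the LCH at the same position in the pre-negotiated ordering. The sketch below reproduces the “110” and “11010” examples above; the function name is a hypothetical choice for illustration.

```python
def active_lchs(indication_bits, ordered_lchs):
    """Each bit maps to the LCH at the same position in the pre-agreed
    ordering; a '1' means the corresponding leg is used."""
    return [lch for bit, lch in zip(indication_bits, ordered_lchs)
            if bit == "1"]


# CA example from the text: "110" over LCH1..LCH3 uses LCH1 and LCH2.
assert active_lchs("110", ["LCH1", "LCH2", "LCH3"]) == ["LCH1", "LCH2"]

# DC+CA example: "11010" over LCH1, LCH2, LCH3, LCH4', LCH5' uses
# LCH1, LCH2 and LCH4'; the duplication number per node follows from
# counting the 1-bits in each group.
assert active_lchs("11010",
                   ["LCH1", "LCH2", "LCH3", "LCH4'", "LCH5'"]) == \
    ["LCH1", "LCH2", "LCH4'"]
```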
In this case, after receiving the indication information, the UE may obtain the priorities of the LCHs based on the second activation configuration information, and, in combination with the number of LCHs used, further learn the LCHs for data transmission. Alternatively, the PDCP duplication function is a PDCP duplication function combined with DC and CA, and the number of LCHs belonging to the master node used by each RB and the number of LCHs belonging to the secondary node used by the RB are separately recorded in the indication information. For example, for the RB configured with the PDCP duplication function combined with DC and CA, the number of active legs on a side of the master node and the number of active legs on a side of the secondary node may be respectively indicated. By default, the UE uses, at each side, the legs with relatively high priorities whose number is equal to the indicated number of LCHs for data transmission. FIG.2is a structure diagram of indication information according to an embodiment. If indication information on a master node side of a data radio bearer DRB2 is “00”, it indicates that one leg is used on the master node side; if indication information on the master node side of the data radio bearer DRB2 is “01”, it indicates that two legs are used on the master node side; or if indication information on the master node side of the data radio bearer DRB2 is “10”, it indicates that three legs are used on the master node side. Accordingly, if indication information on a secondary node side of the data radio bearer DRB2 is “00”, it indicates that one leg is used on the secondary node side; if indication information on the secondary node side of the data radio bearer DRB2 is “01”, it indicates that two legs are used on the secondary node side; or if indication information on the secondary node side of the data radio bearer DRB2 is “10”, it indicates that three legs are used on the secondary node side. In some embodiments, the PDCP duplication function is a PDCP duplication function combined with DC and CA, and the indication information may respectively indicate the number of LCHs used by the master node, i.e., the number of active legs on the master node side, and the number of LCHs used by the secondary node, i.e., the number of active legs on the secondary node side. The UE may combine these numbers with the priorities of the LCHs in the second activation configuration information to further learn the LCHs for data transmission. For example, the indication information of bit “11” indicates the use of the 4 LCHs (or called legs) with the four highest priorities, the indication information of bit “10” indicates the use of the 3 LCHs (or called legs) with the three highest priorities, the indication information of bit “01” indicates the use of the 2 LCHs (or called legs) with the two highest priorities, and the indication information of bit “00” indicates the use of the 1 LCH (or called leg) with the highest priority. As shown in Table 2, every 2 bits in Table 2 represent the indication information of one DRB. In this case, DRB1 may use the 4 LCHs (or called legs) with the four highest priorities, DRB2 may use the 3 LCHs (or called legs) with the three highest priorities, DRB3 may use the 2 LCHs (or called legs) with the two highest priorities, and DRB4 may use the 1 LCH (or called leg) with the highest priority.
TABLE 2
DRB   DRB1   DRB2   DRB3   DRB4
bit   11     10     01     00

In some embodiments, whether the CA PDCP duplication function is activated or the PDCP duplication function combined with DC and CA is activated, two methods can be used to calculate the number of bits of the PDCP duplication indication of each RB. One method is to predefine that each RB can be configured with at most N legs, and then the minimum number of bits required is calculated. Afterward, for all RBs, as the number of configured legs is not greater than N, the number of bits calculated above can indicate any condition of activated legs. In this case, even if the number of legs changes, the number of bits does not need to be changed. The other method does not limit the number of legs configured for the RB. In this case, for each RB, as the UE knows a maximum number of configurable legs for the RB, the minimum number of bits required can be calculated. In this case, the number of bits of the leg used by each RB is associated with the number of legs configured for each RB. The number of bits of different RBs may be different, and when the legs change, the number of bits may also change. In S102, the UE may determine the duplication number of the data packet of the RB based on the PDCP duplication function activation signaling, and duplicate the data packet. An RRC message, a MAC CE, or a physical layer message may be used to transmit the PDCP duplication function activation signaling, which is not limited here. In some embodiments, if the UE receives the first activation configuration information and the data offload indication identifier in the activation signaling indicates that the UE is allowed to perform the data offload operation, the PDCP entity first determines whether to perform the data offload operation. Specifically, a data amount of the data packet in the PDCP entity is compared with a preset offload threshold first, and if the data amount is greater than or equal to the preset offload threshold, data offload, duplication and transmission may be performed on the data in the PDCP entity. More specifically, the PDCP entity may divide the data packet into a first data packet and a second data packet. Afterward, the first data packet and the second data packet are duplicated. The first data packet refers to a data packet transmitted through a master node, and the second data packet refers to a data packet transmitted through a secondary node. Before duplication, the duplication number needs to be determined, and may be determined based on the indication information of the RB. In some embodiments, the duplication number may be determined based on the number of LCHs used by the RB. After the LCHs belonging to the master node are determined, the duplication number of the first data packet may be determined based on the number of LCHs with a bit value of 1 among the LCHs (for example, the first LCHs) belonging to the master node in the indication information. Accordingly, after the LCHs belonging to the secondary node are determined, the duplication number of the second data packet may be determined based on the number of LCHs (for example, the second LCHs) with a bit value of 1 among the LCHs belonging to the secondary node in the indication information. Afterward, the first data packets obtained by duplication may be transmitted to their corresponding first LCHs or legs respectively, and the second data packets obtained by duplication may be transmitted to their corresponding second LCHs or legs respectively.
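The 2-bit count fields of Table 2 and the two bit-width sizing methods described above can be expressed compactly as follows. This is a sketch under the assumption that a count field encodes "number of used legs minus one", matching the “00” -> 1 leg mapping in the text; the function names are illustrative.

```python
import math


def legs_used(count_bits):
    """Decode a per-node count field: '00' -> 1 leg, ..., '11' -> 4 legs."""
    return int(count_bits, 2) + 1


def field_width(max_legs):
    """Minimum bits needed to encode 1..max_legs used legs. With a
    predefined cap N the width is fixed for every RB; without a cap it
    is computed per RB from that RB's configured legs (the two sizing
    methods described above). A 1-bit floor is assumed here."""
    return max(1, math.ceil(math.log2(max_legs)))


# Table 2: DRB1 '11' -> 4 legs, DRB2 '10' -> 3, DRB3 '01' -> 2, DRB4 '00' -> 1.
assert [legs_used(b) for b in ("11", "10", "01", "00")] == [4, 3, 2, 1]
assert field_width(4) == 2  # 2 bits suffice when at most 4 legs are configured
assert field_width(3) == 2  # an RB with 3 configured legs also needs 2 bits
```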
In some embodiments, after the data offload operation is completed, the PDCP entity may perform PDCP duplication on the data of the master node and the secondary node respectively. For the master node, the duplication number N1 of the first data packet is obtained based on the number of LCHs with a bit value of 1 in an indication identifier of each leg (for example, LCH) activated by the master node corresponding to the RB, and then the first data packet distributed to the master node is duplicated to generate N1 first data packets which are respectively delivered to the legs with the bit value of 1 in each LCH indication identifier corresponding to the master node. For the secondary node, the duplication number N2 of the second data packet is obtained based on the number of LCHs with a bit value of 1 in each LCH indication identifier in the secondary node corresponding to the RB, and then the second data packet distributed to the secondary node is duplicated to generate N2 second data packets which are respectively delivered to the legs with the bit value of 1 in each LCH indication identifier corresponding to the secondary node. Alternatively, if the data amount is less than the preset offload threshold, the number of logical channels used by the radio bearer may be determined based on the indication information, and a corresponding number of data packets may be duplicated for transmission. Specifically, all data packets may be transmitted to the legs corresponding to the master node to be duplicated, and based on the number N3 of legs activated by the master node corresponding to the RB, each data packet may be duplicated to generate N3 data packets which are delivered to the legs with the bit value of 1 in each LCH indication identifier corresponding to the master node. In some embodiments, when the data offload indication identifier indicates that no data offload is to be performed, the PDCP duplication may be directly performed on the data packet. Specifically, based on the number of LCHs with a bit value of 1 in each LCH indication identifier corresponding to the RB, each data packet may be duplicated by a corresponding number, and the duplicated data packets are delivered to the legs whose bit value is 1 in the LCH indication identifier. Alternatively, if the UE receives the second activation configuration information, and the data offload indication identifier in the PDCP duplication function activation signaling indicates that the UE is allowed to perform data offload, the PDCP entity first determines whether to perform data offload. Specifically, the data amount of the data packet in the PDCP entity is compared with a preset offload threshold first, and if the data amount is greater than or equal to the preset offload threshold, data offload, duplication and transmission are performed on the data in the PDCP entity. More specifically, the PDCP entity may divide the data packet into a first data packet and a second data packet, and then duplicate the first data packet and the second data packet. The first data packet refers to a data packet transmitted through a master node, and the second data packet refers to a data packet transmitted through a secondary node. Before duplication, the duplication number needs to be determined, and may be determined based on the indication information of the RB. In some embodiments, the duplication number may be determined based on the number of LCHs used by the RB.
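The threshold-based decision described above can be condensed into one function. This is a minimal sketch of the UE-side behaviour; the parameter names, the example threshold of 8,000 bytes, and the tuple return shape are assumptions made for illustration.

```python
def duplication_plan(offload_allowed, queue_bytes, threshold,
                     n_master_active, n_secondary_active):
    """Return (split?, copies on master legs, copies on secondary legs)
    for each PDCP PDU, following the decision flow described above."""
    if offload_allowed and queue_bytes >= threshold:
        # Offload: split into first/second data packets; the first is
        # duplicated N1 times (master legs), the second N2 times.
        return True, n_master_active, n_secondary_active
    if offload_allowed:
        # Offload allowed but the queue is below the threshold: no
        # split, all data goes to the master node, one copy per active
        # master leg (N3).
        return False, n_master_active, 0
    # No offload indicated: duplicate each PDU directly onto every
    # active leg of both the master node and the secondary node.
    return False, n_master_active, n_secondary_active


# 12 kB queued against an assumed 8 kB threshold, with 2 master legs
# and 1 secondary leg active: the UE splits, then duplicates N1=2, N2=1.
assert duplication_plan(True, 12_000, 8_000, 2, 1) == (True, 2, 1)
```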
After the data offload is completed, the PDCP entity may perform PDCP duplication on the data of the master node and the secondary node respectively. If the LCH used by the first data packet (for example, the first LCH) belongs to the master node, for the master node, the first data packet distributed to the master node is duplicated based on the number of active legs of the master node corresponding to the RB, and the duplicated first data packets are respectively transmitted to the legs of the master node that have relatively high priorities and whose number is equal to the indicated number of LCHs. For the secondary node, the second data packet distributed to the secondary node is duplicated based on the number of active legs of the secondary node corresponding to the RB, and the duplicated second data packets are respectively transmitted to the legs of the secondary node that have relatively high priorities and whose number is equal to the indicated number of LCHs. Alternatively, if the data amount is less than the preset offload threshold, the number of logical channels used by the radio bearer may be determined based on the indication information, and a corresponding number of data packets may be duplicated for transmission. Specifically, all data packets may be transmitted to the legs corresponding to the master node to be duplicated. Based on the number of legs activated by the master node corresponding to the RB, each data packet may be duplicated to generate a corresponding number of duplicated data packets which are respectively delivered to the legs of the master node associated with the LCHs with relatively high priorities. In some embodiments, if the data offload indication identifier indicates that no data offload is to be performed, the PDCP duplication operation may be directly performed on the data, and the duplicated data packets are transmitted to the respective activated legs of the master node and the secondary node. Specifically, the PDCP entity may determine the duplication number based on the number of the active legs of the master node and the secondary node corresponding to the RB, and duplicate the data packet to generate the corresponding number of duplicated data packets which are respectively delivered to the legs of the master node and the secondary node that have relatively high priorities and whose number is equal to the indicated number of LCHs. FIG.3is a flow chart of a PDCP duplication function activation method according to an embodiment. The method may be applied in a network side, for example, performed by a base station. Referring toFIG.3, the method may include S301and S302. In S301, a PDCP duplication function activation signaling is determined, wherein the PDCP duplication function activation signaling includes at least one data offload indication identifier of at least one radio bearer configured with a PDCP duplication function. In S302, the PDCP duplication function activation signaling is transmitted to a UE, so that the UE determines a duplication number of a data packet of the at least one radio bearer based on the PDCP duplication function activation signaling. In some embodiments, the base station may configure first activation configuration information or second activation configuration information. The first activation configuration information includes: a PDCP duplication field for indicating to configure the PDCP duplication function, a logical channel configured by each PDCP entity, and a carrier used by each logical channel.
The second activation configuration information includes: a PDCP duplication field for indicating to configure the PDCP duplication function, a logical channel configured by each PDCP entity, priorities of logical channels, and a carrier used by each logical channel. In some embodiments, the PDCP duplication function is a PDCP duplication function combined with DC and CA, and in the second activation configuration information, priorities of logical channels belonging to a master node and priorities of logical channels belonging to a secondary node are separately ordered, or, priorities of logical channels belonging to a master node and priorities of logical channels belonging to a secondary node are uniformly ordered. After obtaining the first activation configuration information or the second activation configuration information, the base station may use an RRC signaling to transmit the first activation configuration information or the second activation configuration information. It should be noted that information carried in the first activation configuration information and the second activation configuration information may be transmitted via multiple RRC messages or via one RRC message. In S301, the base station may determine the PDCP duplication function activation signaling which includes at least one data offload indication identifier of at least one radio bearer configured with a PDCP duplication function, so that the UE determines whether to perform data offload for the radio bearer based on the data offload indication identifier. In some embodiments, the PDCP duplication function activation signaling further includes indication information, wherein the indication information is used to indicate at least one logical channel used by the at least one radio bearer which is selected from logical channels configured by the at least one radio bearer. In some embodiments, the at least one data offload indication identifier is disposed before or after the indication information of the at least one radio bearer. In some embodiments, when the at least one radio bearer includes a plurality of radio bearers, the PDCP duplication function activation signaling includes a plurality of data offload indication identifiers and a plurality of pieces of indication information, wherein the plurality of data offload indication identifiers and the plurality of pieces of indication information correspond to the plurality of radio bearers respectively, and are arranged in an ascending or descending order of radio bearer identifiers of the plurality of radio bearers. In some embodiments, the indication information uses a bitmap to indicate a usage status of the at least one logical channel configured by the at least one radio bearer. In some embodiments, the PDCP duplication function is a PDCP duplication function combined with DC and CA, and in the indication information, indication identifiers of logical channels are recorded in two groups, wherein the indication identifiers of the logical channels in each group are arranged in an ascending or descending order of LCID, one group of logical channels belongs to a master node, and the other group of logical channels belongs to a secondary node. Alternatively, in some embodiments, the indication information includes a number of logical channels used by the at least one radio bearer, so that the UE determines the logical channels used by the at least one radio bearer based on the number of the logical channels and the priorities of the logical channels.
In some embodiments, the PDCP duplication function is a PDCP duplication function combined with DC and CA, and in the indication information, a number of logical channels used by the at least one radio bearer which belong to a master node and a number of logical channels used by the at least one radio bearer which belong to a secondary node are separately recorded. Those skilled in the art could understand that S301to S302can be regarded as steps corresponding to S101to S102in the embodiment as shown inFIG.1, and the two are complementary to each other in terms of specific implementation principles and logic. Therefore, explanation of terms involved in the embodiment as shown inFIG.3can be referred to related descriptions of the embodiment as shown inFIG.1, and is not described in detail here. Signaling interaction between a UE and a network (for example, an NR base station) adopting the embodiments of the present disclosure is further described below in conjunction with a typical application scenario. In a typical application scenario, referring toFIG.4, after a UE1establishes a connection with a base station2, the base station2may perform s1, that is, determine first activation configuration information or second activation configuration information. Afterward, the base station2may perform s2, that is, transmit the first activation configuration information or the second activation configuration information to the UE1. After receiving the first activation configuration information, the UE1may determine an RB that uses a PDCP duplication function, logical channels configured by the RB, and a carrier used by each logical channel. Alternatively, after receiving the second activation configuration information, the UE1may determine the RB that uses the PDCP duplication function, logical channels configured by the RB, priorities of the logical channels, and a carrier used by each logical channel. Afterward, the base station2may perform s3, that is, determine a PDCP duplication function activation signaling. The PDCP duplication function activation signaling may include a data offload indication identifier of a radio bearer configured with the PDCP duplication function, and indication information used to indicate at least one logical channel used by the radio bearer, wherein the at least one logical channel used by the radio bearer is selected from logical channels configured by the radio bearer. Afterward, the base station2may perform s4, that is, transmit the PDCP duplication function activation signaling to the UE1. In some embodiments, the UE1may perform s5, that is, after receiving the PDCP duplication function activation signaling, duplicate each data packet based on specific information in the PDCP duplication function activation signaling, and determine whether to perform data offload and the legs for transmitting the duplicated data packets. Further, the UE1may perform s6, that is, transmit data obtained by PDCP duplication via multiple legs. More details on working principles and working methods of the UE1and the base station2in the application scenario as shown inFIG.4may be referred to related descriptions ofFIG.1toFIG.3, and are not described in detail here. From above, with the embodiments of the present disclosure, a multi-connectivity PDCP duplication function may be indicated flexibly and effectively. After a duplication number and legs to be used are determined, duplicated data packets may be transmitted via the legs to be used.
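The s1-s6 interaction can be replayed end to end in a few lines. The values below (a three-LCH configuration and a “110” usage bitmap) are toy inputs chosen to match the earlier examples, not spec-defined encodings.

```python
def run_scenario():
    # s1/s2: the base station determines and transmits the first
    # activation configuration information (toy values).
    configured_lchs = ["LCH1", "LCH2", "LCH3"]
    # s3/s4: the base station determines and transmits the activation
    # signaling: one data offload indication identifier plus a per-LCH
    # usage bitmap for the RB.
    signaling = {"offload": 0, "bitmap": "110"}
    # s5: the UE decodes the bitmap against the configured LCH order
    # and derives the duplication number.
    used = [lch for bit, lch in zip(signaling["bitmap"], configured_lchs)
            if bit == "1"]
    # s6: each PDCP PDU is duplicated once per used leg and transmitted.
    return used, len(used)


assert run_scenario() == (["LCH1", "LCH2"], 2)
```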
FIG.5is a structural diagram of a PDCP duplication function activation device according to an embodiment. Referring toFIG.5, the PDCP duplication function activation device5may be applied to a UE. Those skilled in the art could understand that the device may be used to implement technical solutions of the above PDCP duplication function activation method as shown inFIG.1,FIG.2andFIG.4. In some embodiments, the device5may include a first receiving circuitry51and a first determining circuitry52. In some embodiments, the first receiving circuitry51is configured to receive a PDCP duplication function activation signaling from a network, wherein the PDCP duplication function activation signaling includes at least one data offload indication identifier of at least one radio bearer, and the at least one radio bearer is configured with a PDCP duplication function; and the first determining circuitry52is configured to determine a duplication number of a data packet of the at least one radio bearer based on the PDCP duplication function activation signaling. In some embodiments, the PDCP duplication function activation signaling further includes indication information, wherein the indication information is used to indicate at least one logical channel used by the at least one radio bearer which is selected from logical channels configured by the at least one radio bearer. In some embodiments, when the at least one radio bearer includes a plurality of radio bearers, the PDCP duplication function activation signaling includes a plurality of data offload indication identifiers and a plurality of pieces of indication information, wherein the plurality of data offload indication identifiers and the plurality of pieces of indication information correspond to the plurality of radio bearers respectively, and are arranged in an ascending or descending order of radio bearer identifiers of the plurality of radio bearers. In some embodiments, the device5further includes a second receiving circuitry53configured to receive first activation configuration information from the network before the PDCP duplication function activation signaling is received from the network, wherein the first activation configuration information includes: a PDCP duplication field for indicating to configure the PDCP duplication function, a logical channel configured by each PDCP entity, and a carrier used by each logical channel. In some embodiments, the second receiving circuitry53includes a first receiving sub-circuitry531configured to receive the first activation configuration information from the network via an RRC signaling. In some embodiments, the indication information uses a bitmap to indicate a usage status of the at least one logical channel configured by the at least one radio bearer. In some embodiments, the PDCP duplication function is a PDCP duplication function combined with DC and CA, and in the indication information, indication identifiers of logical channels are recorded in two groups, wherein the indication identifiers of the logical channels in each group are arranged in an ascending or descending order of LCID, one group of logical channels belongs to a master node, and the other group of logical channels belongs to a secondary node. 
In some embodiments, the device5further includes a third receiving circuitry54configured to receive second activation configuration information from the network before the PDCP duplication function activation signaling is received from the network, wherein the second activation configuration information includes: a PDCP duplication field for indicating to configure the PDCP duplication function, a logical channel configured by each PDCP entity, priorities of logical channels, and a carrier used by each logical channel. In some embodiments, the PDCP duplication function is a PDCP duplication function combined with DC and CA, and in the second activation configuration information, priorities of logical channels belonging to a master node and priorities of logical channels belonging to a secondary node are separately ordered, or, priorities of logical channels belonging to a master node and priorities of logical channels belonging to a secondary node are uniformly ordered. In some embodiments, the third receiving circuitry54includes a second receiving sub-circuitry541configured to receive the second activation configuration information from the network via an RRC signaling. In some embodiments, the indication information includes a number of logical channels used by the at least one radio bearer, and the device5further includes a second determining circuitry55configured to determine the logical channels used by the at least one radio bearer based on the number of the logical channels and the priorities of the logical channels. In some embodiments, the PDCP duplication function is a PDCP duplication function combined with DC and CA, and in the indication information, a number of logical channels used by the at least one radio bearer which belong to a master node and a number of logical channels used by the at least one radio bearer which belong to a secondary node are separately recorded. In some embodiments, the first determining circuitry52includes a first determining sub-circuitry521, a first transmitting sub-circuitry522and a second transmitting sub-circuitry523. The first determining sub-circuitry521is configured to: when the at least one data offload indication identifier indicates that the at least one radio bearer is allowed to perform data offload, determine whether a data amount of the data packet is greater than or equal to a preset offload threshold. The first transmitting sub-circuitry522is configured to: if the data amount of the data packet is greater than or equal to the preset offload threshold, divide the data packet into a first data packet and a second data packet; determine a number of first logical channels used by the at least one radio bearer and a number of second logical channels used by the at least one radio bearer based on the indication information, to obtain a duplication number of the first data packet and a duplication number of the second data packet; duplicate the first data packet and the second data packet based on the duplication number of the first data packet and the duplication number of the second data packet; and transmit data packets obtained by duplicating the first data packet to the first logical channels, and transmit data packets obtained by duplicating the second data packet to the second logical channels. 
The second transmitting sub-circuitry523is configured to: if the data amount of the data packet is less than the preset offload threshold, determine a number of third logical channels used by the at least one radio bearer based on the indication information, to obtain the duplication number of the data packet; duplicate the data packet based on the duplication number of the data packet; and transmit data packets obtained by duplicating the data packet to the third logical channels. The first data packet refers to a data packet transmitted through a master node, the second data packet refers to a data packet transmitted through a secondary node, the first logical channels belong to the master node, the second logical channels belong to the secondary node, and the third logical channels belong to the master node. In some embodiments, the first determining circuitry52includes a second determining sub-circuitry524configured to: when the at least one data offload indication identifier indicates that the at least one radio bearer does not perform data offload, determine the number of logical channels used by the at least one radio bearer based on the indication information to obtain the duplication number of the data packet; duplicate the data packet based on the duplication number of the data packet; and transmit data packets obtained by duplicating the data packet to the logical channels. More details on working principles and working methods of the device5may be referred to related descriptions ofFIG.1,FIG.2andFIG.4, and are not described in detail here. FIG.6is a structural diagram of a PDCP duplication function activation device according to an embodiment. Referring toFIG.6, the PDCP duplication function activation device6may be applied to a network side, such as a base station, and be used to implement technical solutions of the above PDCP duplication function activation method as shown inFIG.3. In some embodiments, the device6may include: a determining circuitry61configured to determine a PDCP duplication function activation signaling, wherein the PDCP duplication function activation signaling includes at least one data offload indication identifier of at least one radio bearer configured with a PDCP duplication function; and a first transmitting circuitry62configured to transmit the PDCP duplication function activation signaling to a UE, so that the UE determines a duplication number of a data packet of the at least one radio bearer based on the PDCP duplication function activation signaling. In some embodiments, the PDCP duplication function activation signaling further includes indication information, wherein the indication information is used to indicate at least one logical channel used by the at least one radio bearer which is selected from logical channels configured by the at least one radio bearer. In some embodiments, when the at least one radio bearer includes a plurality of radio bearers, the PDCP duplication function activation signaling includes a plurality of data offload indication identifiers and a plurality of pieces of indication information, wherein the plurality of data offload indication identifiers and the plurality of pieces of indication information correspond to the plurality of radio bearers respectively, and are arranged in an ascending or descending order of radio bearer identifiers of the plurality of radio bearers. 
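The branching performed by the sub-circuitries521,522and523can be sketched as follows. The even byte split of the data packet is an assumption made only for illustration; the embodiment states that the packet is divided into a first data packet and a second data packet without fixing the split.

def duplicate_with_offload(packet: bytes, offload_allowed: bool, threshold: int,
                           mn_legs: list, sn_legs: list) -> list:
    """Sketch of circuitries 521/522/523: returns (leg, payload) pairs to transmit."""
    if offload_allowed and len(packet) >= threshold:   # 521: compare with threshold
        mid = len(packet) // 2                         # 522: divide the packet
        first, second = packet[:mid], packet[mid:]     # first -> master node legs
        return ([(leg, first) for leg in mn_legs] +    # duplication number = len(mn_legs)
                [(leg, second) for leg in sn_legs])    # duplication number = len(sn_legs)
    # 523 (below threshold) or no offload: duplicate the whole packet onto
    # the master-node legs only.
    return [(leg, packet) for leg in mn_legs]

print(duplicate_with_offload(b"abcdef", True, 4, ["MN-1", "MN-2"], ["SN-1"]))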
In some embodiments, the device6further includes a second transmitting circuitry63configured to: before the PDCP duplication function activation signaling is determined, transmit first activation configuration information to the UE, wherein the first activation configuration information includes: a PDCP duplication field for indicating to configure the PDCP duplication function, a logical channel configured by each PDCP entity, and a carrier used by each logical channel. The at least one data offload indication identifier is disposed before or after the indication information of the at least one radio bearer. In some embodiments, the second transmitting circuitry63includes a first transmitting sub-circuitry631configured to transmit the first activation configuration information to the UE via an RRC signaling. In some embodiments, the indication information uses a bitmap to indicate a usage status of the at least one logical channel configured by the at least one radio bearer. In some embodiments, the PDCP duplication function is a PDCP duplication function combined with DC and CA, and in the indication information, indication identifiers of logical channels are recorded in two groups, wherein the indication identifiers of the logical channels in each group are arranged in an ascending or descending order of LCID, one group of logical channels belongs to a master node, and the other group of logical channels belongs to a secondary node. In some embodiments, the PDCP duplication function is a PDCP duplication function combined with DC and CA, and in the indication information, a number of logical channels used by the at least one radio bearer which belong to a master node and a number of logical channels used by the at least one radio bearer which belong to a secondary node are separately recorded. In some embodiments, the device6further includes a third transmitting circuitry64configured to: before the PDCP duplication function activation signaling is determined, transmit second activation configuration information to the UE, wherein the second activation configuration information includes: a PDCP duplication field for indicating to configure the PDCP duplication function, a logical channel configured by each PDCP entity, priorities of logical channels, and a carrier used by each logical channel. In some embodiments, the third transmitting circuitry64includes a second transmitting sub-circuitry641configured to transmit the second activation configuration information to the UE via an RRC signaling. In some embodiments, the indication information includes a number of logical channels used by the at least one radio bearer, so that the UE determines the logical channels used by the at least one radio bearer based on the number of the logical channels and the priorities of the logical channels. In some embodiments, the PDCP duplication function is a PDCP duplication function combined with DC and CA, and in the second activation configuration information, priorities of logical channels belonging to a master node and priorities of logical channels belonging to a secondary node are separately ordered, or, priorities of logical channels belonging to a master node and priorities of logical channels belonging to a secondary node are uniformly ordered. More details on working principles and working methods of the device6may be referred to related descriptions ofFIG.2,FIG.3andFIG.4, and are not described in detail here. 
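Where the indication information carries only a number of logical channels, the UE is described as selecting that many channels using the configured priorities. A minimal sketch follows, assuming, purely for illustration, that a lower priority value means a higher priority and that the master-node and secondary-node counts are applied to their respective groups separately.

def select_legs_by_priority(priorities: dict, count: int) -> list:
    """Pick `count` LCIDs with the highest priority (lowest value first, assumed)."""
    return sorted(priorities, key=priorities.get)[:count]

mn_priorities = {1: 2, 2: 1, 3: 3}   # LCID -> priority, master-node group
sn_priorities = {4: 1, 5: 2}         # LCID -> priority, secondary-node group
legs = select_legs_by_priority(mn_priorities, 2) + select_legs_by_priority(sn_priorities, 1)
print(legs)   # [2, 1, 4]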
In an embodiment of the present disclosure, a storage medium having computer instructions stored therein is provided, wherein when the computer instructions are executed, any one of the above methods as shown inFIG.1toFIG.4is performed. In some embodiments, the storage medium may include a computer readable storage medium. The storage medium may include a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk. In an embodiment of the present disclosure, a terminal including a memory and a processor is provided, wherein the memory has computer instructions stored therein, and when the processor executes the computer instructions, any one of the above methods as shown inFIG.1,FIG.2andFIG.4is performed. The base station and the UE may interact with each other. Specifically, the terminal may be the UE. In an embodiment of the present disclosure, a base station including a memory and a processor is provided, wherein the memory has computer instructions stored therein, and when the processor executes the computer instructions, any one of the above methods as shown inFIG.3andFIG.4is performed. Specifically, the base station may be an NR base station. Although the present disclosure has been disclosed above with reference to preferred embodiments thereof, it should be understood that the disclosure is presented by way of example only, and not limitation. Those skilled in the art can modify and vary the embodiments without departing from the spirit and scope of the present disclosure.
62,973
11943658
DETAILED DESCRIPTION Multi-protocol communication networks including a wired protocol, such as a Universal Serial Bus (USB) protocol, and a wireless protocol, such as a radio frequency (RF) protocol, ultra-wideband (UWB) technology, and millimeter-wave (mmWave) wireless, and a control methodology for operating the same are disclosed. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one skilled in the art that the present invention can be practiced without these specific details. In other instances, well-known structures and techniques are not shown in detail or are shown in block diagram form in order to avoid unnecessarily obscuring an understanding of this description. Reference in the description to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification do not necessarily all refer to the same embodiment. The term “to couple” as used herein can include both to directly electrically connect two or more components or elements and to indirectly connect through one or more intervening components. FIG.1is a block diagram illustrating an exemplary embodiment of a multi-protocol communication network capable of implementing a control methodology in accordance with the present disclosure. Referring toFIG.1the multi-protocol communication network100includes a first device102, such as a wireless hub or router, including a first transceiver104and a first interface-controller106coupled through a first wired-connection108to a first computer or peripheral device110using a packet-switched-wired-protocol. The multi-protocol communication network100generally further includes at least a second device112including a second-transceiver114wirelessly coupled to the first transceiver over a wireless-connection116using a packet-switched-wireless-protocol, and a second interface-controller118coupled through a second wired-connection120to a second computer or peripheral device122. The first device102is operable in a transmission mode to receive first-packets over the first wired-connection108using the packet-switched-wired-protocol, to convert the first-packets to second-packets compatible with a packet-switched-wireless-protocol by inserting a number of synchronization bits in a preamble field of the first-packets, and to couple the second-packets to the second-transceiver114using the packet-switched-wireless-protocol. The second device112is operable in a receive mode to receive the second-packets over the wireless-connection116; to convert the second-packets to third packets compatible with the packet-switched-wired-protocol by removing the number of synchronization bits in the preamble field of the second-packets; and to couple the third packets to the second wired-connection120through the second interface-controller118using the packet-switched-wired-protocol. Generally, inserting the synchronization bits involves determining a packet duration of packets compatible with the packet-switched-wireless-protocol when establishing the wireless-connection or pairing, and inserting a number of synchronization bits so that a duration of the second-packets is aligned with the packet duration of the wireless packets.
Transmitting the second-packets to the second-transceiver includes synchronizing a start of the second-packets with a start of packets exchanged between the first transceiver and second-transceiver to establish or maintain the wireless-connection. Although not shown, it will be understood that the second device112can also operate in the transmission mode while the first device102can operate in the receive mode. It will be further understood that while only a first device102and second device112are shown inFIG.1, a plurality of devices may be provided. In some embodiments, the packet-switched-wired-protocol is implemented using a Universal Serial Bus (USB) standard or protocol in which the first and third packets are USB packets used to connect the first peripheral device110to the first interface-controller106via a first USB cable (first wired-connection108), and to connect the second interface-controller118to the second peripheral device122via a second USB cable (second wired-connection120). The USB standard used can include any of the standards specified in existing USB specifications, USB 1.x, USB 2.0, USB 3.x, or USB4, or future generations of USB specifications. Advantageously, the USB standard used is USB 2.0 or later, and includes high speed (HS) USB packets having a data rate of at least 480 megabits per second (Mbit/s). Use of lower data rate packets, such as low speed (LS) and full speed (FS) packets, is supported by the multi-protocol communication network100ofFIG.1; however, doing so may result in longer wireless packets, increasing latencies and reducing power efficiency of the multi-protocol communication network. The packet-switched-wireless-protocol is generally implemented using a radio frequency (RF) wireless technology standard over, for example, a wireless local area network (WLAN). FIG.2illustrates schematic block diagrams of packets for the packet-switched-wireless-protocol in the multi-protocol communication network ofFIG.1when operated in accordance with exemplary embodiments of the present disclosure. Referring toFIG.2packet202represents a first wired protocol packet, such as a USB packet, received in the first interface-controller106. The first wired protocol packet202includes a preamble field204at the beginning including a number of bits used for synchronizing wired communication between a first device, such as the first peripheral device110, and a host, such as the first interface-controller106, followed by a data field206capable of transmitting multiple bytes of data. Where the first wired protocol packet202is a USB packet, the wired protocol packet can include any one of four types of USB packets: token packets, data packets, handshake packets, or start-of-frame packets. Generally, the preamble field204includes 4 to 8 bits and the data field206can include from 512 to 1024 bytes of data. Packet208represents a first wireless protocol packet, such as an RF packet, formed from the insertion of a number of synchronization bits into the preamble field204of the first wired protocol packet202after a sync-delay δsd, and then transmitted from the first transceiver104to the second transceiver114. The first wireless protocol packet208includes a preamble field210at the beginning including a number of bits used for communicating data for a physical layer (P) and media access control (MAC) layer (C), followed by a data field212capable of transmitting bytes of data.
The sync-delay δsdarises from the insertion of about 4 to about 8 synchronization bits, resulting in a delay of from 8 to about 16 nanoseconds (ns). Packet214represents a second wireless protocol packet, such as an RF packet or UART packet, received in the second transceiver114after an over-the-air delay δairof about 5 ns. The second wireless protocol packet214, like the first wireless protocol packet208, includes a preamble field216at the beginning including the same P and C bits, followed by a data field218. Packet220represents a second wired protocol packet, such as a USB packet, formed by the removal of the synchronization bits from the preamble field216of the second wireless protocol packet214and coupled through the second interface-controller118to the second peripheral device122over the second wired-connection120after a preamble delay δpd. The preamble delay δpdarises from the removal of the synchronization bits from the preamble field216and can be from about 8 to about 32 ns. The second wired protocol packet220, like the first wired protocol packet202, includes a preamble field222at the beginning including a number of bits used for synchronizing wired communication between the second interface-controller118and the second peripheral device122, followed by a data field224. Referring toFIG.2it will be understood that the delay times or latencies δsdand δpdare minimized by proactively starting a preamble transmission and inserting a number of synchronization bits in a preamble field of the first-packets, even before data bits of the packets have arrived.
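A simple model of this duration alignment, assuming packet sizes can be compared in bits at a common bit rate, is sketched below in Python; the modulo formulation is illustrative, and the disclosure itself places the inserted amount at roughly 4 to 8 bits.

def sync_bits_needed(wired_bits: int, wireless_packet_bits: int) -> int:
    """Synchronization bits to insert so the converted packet's duration
    becomes an exact multiple of the wireless packet duration."""
    remainder = wired_bits % wireless_packet_bits
    return 0 if remainder == 0 else wireless_packet_bits - remainder

# e.g. a 4000-bit USB packet mapped onto 501-bit wireless packets needs 8 pad bits
pad = sync_bits_needed(4000, 501)
print(pad, (4000 + pad) % 501 == 0)   # 8 True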
In accordance with the methodology of the present disclosure, the USB packet is converted or translated to an RF packet compatible with an RF portion of the multi-protocol communication network100by the insertion of synchronization bits into a preamble field of the USB packet so that a duration of the RF packets is aligned with the packet duration of RF packets previously used to establish RF communication. Generally, the conversion is accomplished proactively by sensing a beginning of reception of the USB packet and starting a preamble transmission by inserting the number of synchronization bits in the preamble field without waiting for receipt of a data portion of a first one of the first-packets. The number of synchronization bits can include bits of the non-packet based data received or accumulated in the first device and the second device. Thus, in some embodiments the first and second devices are operable to buffer non-packet based data sufficient to enable a slowest RF packet rate dictated by a USB packet rate. If no USB-enable handshake (USB-Req.318) is initiated within the predetermined interval, the RF communication, i.e., the exchange of RF packets, is discontinued or the RF is disconnected (RF-Disc.322) and the multi-protocol communication network100returns to the idle state306. If the USB-enable handshake (USB-Req.318) is initiated but no response is received, the USB communication is discontinued or the USB is disconnected (USB-Disc.324) and the multi-protocol communication network100returns to the RF-ON USB-OFF state314for at least the predetermined interval, actively ‘listening’ for a USB-enable handshake (USB-Req.318). After establishing USB communication with the multi-protocol communication network100in the USB state320, the network will continue communication, exchanging USB packets aligned and synchronized with RF packets. If the RF communication is interrupted or disconnected (RF-Disc.326), the multi-protocol communication network100returns to the idle state306. If USB communication is interrupted or suspended (USB-Susp.328), the multi-protocol communication network100will enter a USB-suspended state330in which the RF communication is ON, while the USB communication is suspended or asleep. If further USB packets are received, the USB interface-controllers106,118, are awakened (USB-Wake332), and the multi-protocol communication network100returns to the USB state320, exchanging USB packets aligned and synchronized with RF packets. If no USB packets are received within the predetermined interval, the USB communication is disconnected (USB-Disc.334) and the multi-protocol communication network100returns to the RF-ON USB-OFF state314for at least the predetermined interval, actively ‘listening’ for a USB-enable handshake (USB-Req.318). Alternatively, if the RF communication is interrupted or disconnected (RF Disc.336), the multi-protocol communication network100returns to the idle state306.
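The states and transitions ofFIG.3lend themselves to a small table-driven model. In the Python sketch below the state and event names mirror the figure (PAIR-Req., USB-Req., and so on), while the dictionary encoding is merely an illustrative choice.

# Table-driven sketch of the FIG. 3 state machine.
TRANSITIONS = {
    ("IDLE", "PAIR_REQ"): "SCAN_BEACON",
    ("SCAN_BEACON", "PAIR"): "RF_ON_USB_OFF",
    ("SCAN_BEACON", "TIMEOUT"): "IDLE",
    ("RF_ON_USB_OFF", "USB_REQ"): "USB_ON",
    ("RF_ON_USB_OFF", "RF_DISC"): "IDLE",
    ("USB_ON", "USB_SUSP"): "USB_SUSPENDED",
    ("USB_ON", "RF_DISC"): "IDLE",
    ("USB_SUSPENDED", "USB_WAKE"): "USB_ON",
    ("USB_SUSPENDED", "USB_DISC"): "RF_ON_USB_OFF",
    ("USB_SUSPENDED", "RF_DISC"): "IDLE",
}

def step(state: str, event: str) -> str:
    """Apply one event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

s = "IDLE"
for ev in ["PAIR_REQ", "PAIR", "USB_REQ", "USB_SUSP", "USB_WAKE"]:
    s = step(s, ev)
print(s)   # USB_ON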
It will be understood that because the transmission and reception of the wired and wireless packets is substantially pipelined, with the transmission of one wireless packet immediately following a previous packet, and without the need to buffer an entire wired or wireless packet, the methodology of the present disclosure provides a substantial decrease in overall latency of data communication, reduces the complexity of the wireless hub or router (first device102or second device112), and increases power efficiency of the multi-protocol communication network100by reducing the time the multi-protocol communication network must remain powered while effectively idled. FIG.4is a block diagram of a wired to wireless hub or router suitable for use as the first device102or second device112in the multi-protocol communication network100ofFIG.1, and capable of implementing a control methodology in accordance with exemplary embodiments of the present disclosure. Referring toFIG.4, in the embodiment shown the wired to wireless hub/router400includes a USB interface402, a transceiver, such as a 60 GHz RF radio404, a system and peripheral interconnect406, and additional system resources408. The USB interface402generally includes a central processing unit (CPU) subsystem410, and an input/output (I/O) subsystem412. The CPU subsystem410includes one or more CPUs414, Static Random Access Memory (SRAM416), and Read Only Memory (ROM418) all coupled through the interconnect406. The CPU(s)414can include any suitable processor capable of operating the wired to wireless hub/router400. The SRAM416is a fast memory having shorter access or read times that is configured for storing data and instructions accessed by the CPU(s)414. The ROM418can include an embedded non-volatile memory (eNVM) (e.g., NAND flash, NOR flash, etc.) that is configured for storing boot-up routines, configuration parameters, and other firmware parameters and settings. The I/O subsystem412of the USB interface402can include various different types of I/O blocks, timer/counter/pulse-width-modulation (TCPWM) blocks, and various sub-circuits or blocks. The I/O blocks can include, for example, general purpose input/output block subsystems (GPIOs); two or more serial communication blocks (2×SCBs), each capable of providing a digital interface such as a UART or an Inter-Integrated Circuit (I2C) interface; and a USB physical layer interface, such as a USB Transceiver Macrocell Interface (UTMI) interface (PHY UTMI+). Other sub-circuits or blocks can include one or more electronic fuse circuits (EFUSE) to enable in-chip programming or tuning of the USB interface402and/or wired to wireless hub/router400. The interconnect406can include a single-level Advanced High-Performance Bus (AHB) or system bus that is configured as an interface that couples the various components of the USB interface402to each other, as well as to function as a data and control interface between the RF radio404and other system resources408of the wired to wireless hub/router400. The RF radio404can include, in addition to an electronic oscillator to generate an RF signal and a modulator/de-modulator to add or extract information from the RF signal, a Medium Access Control layer (MAC420) and a physical layer (PHY422). The MAC420can include a crypto block or subsystem, and an L1 Header block or subsystem. The physical layer (PHY422) can include a serializer/deserializer (SERDES) block or subsystem to convert data between serial data and parallel interfaces, and a sync block or subsystem.
The system resources408can include various electronic circuits and subsystems to support various states and modes of operation of the wired to wireless hub/router400. For example, the system resources408can include a power subsystem (Power424) including analog and/or digital circuits such as sleep control circuits, a wake-up interrupt controller (WIC), a power-on-reset (POR), and voltage and/or current reference generators or circuits (REF). The system resources408can also include a clock subsystem (Clock426) having analog and/or digital circuits such as, for example, clock control circuits, watchdog timer (WDT) circuit(s), internal low-speed oscillator (ILO) circuit(s), and internal main oscillator (IMO) circuit(s). The system resources408can further include analog and/or digital reset circuits428that provide reset control and support external reset (XRES). In some embodiments, such as that shown, the system resources408can include a test subsystem (test430), including various test circuits or blocks for test mode entry and analog and/or digital design-for-testability (DFT) operations. A method of operating a multi-protocol communication network will now be described with reference toFIG.5. Referring toFIG.5the method begins with establishing a wireless-connection between a first device including a first transceiver and a first interface-controller coupled to a first wired-connection, and a second device including a second-transceiver and a second interface-controller coupled to a second wired-connection, using a packet-switched-wireless-protocol (step502). Generally, as noted above, establishing the wireless-connection includes determining a packet duration of packets compatible with the packet-switched-wireless-protocol. Next, data is received in the first interface-controller in the first device from the first wired-connection, the data including first-packets and non-packet based data to be transmitted through the wireless-connection (step504). The first-packets received using the packet-switched-wired-protocol are converted to second-packets compatible with the packet-switched-wireless-protocol by proactively starting a preamble transmission and inserting a number of synchronization bits in a preamble field of the first-packets to align a packet duration of the second-packets with a packet duration of packets of the packet-switched-wireless-protocol, wherein the number of synchronization bits includes bits of the non-packet based data (step506). As noted above, the synchronization bits can also include commas inserted into the preamble, instead of or in addition to bits of the non-packet based data, where necessary so that a duration of the converted second-packets is aligned with a packet duration of packets compatible with the packet-switched-wireless-protocol. Next, the second-packets are transmitted from the first device to the second device using the packet-switched-wireless-protocol (step508). Finally, the second-packets received in the second transceiver are converted to third packets compatible with the packet-switched-wired-protocol by removing the number of synchronization bits in the preamble field of the second-packets, and the third-packets are coupled to the second wired-connection through the second interface-controller (step510).
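Steps506and510are mirror images of one another. Modelling packets as bit strings, a minimal sketch of the conversion pair, with a hypothetical 4-bit synchronization pattern, is:

SYNC_BITS = "0101"   # hypothetical 4-bit synchronization pattern

def to_wireless(wired_packet: str) -> str:
    """Step 506: proactively prepend synchronization bits to the preamble."""
    return SYNC_BITS + wired_packet

def to_wired(wireless_packet: str) -> str:
    """Step 510: remove the same number of synchronization bits at the receiver."""
    return wireless_packet[len(SYNC_BITS):]

pkt = "11100001" + "1" * 16            # toy preamble + data bits
assert to_wired(to_wireless(pkt)) == pkt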
Optionally, the method can further include, while the first interface-controller and second interface-controller are idle, accumulating non-packet based UART or PCM data, and exchanging RF packets substantially consisting of the accumulated non-packet based data between the first and second transceivers to maintain or ‘keep alive’ the wireless-connection while packet based (USB) data is not being exchanged (step512). Thus, multi-protocol communication networks and methodologies for controlling the same to decrease latency and improve reliability of a wireless-connection have been disclosed. Embodiments of the present invention have been described above with the aid of functional and schematic block diagrams illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance. It is to be understood that the Detailed Description section, and not the Summary and Abstract sections, is intended to be used to interpret the claims. The Summary and Abstract sections can set forth one or more but not all exemplary embodiments of the present invention as contemplated by the inventor(s), and thus, are not intended to limit the present invention and the appended claims in any way. The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
22,685
11943659
BEST MODE According to an embodiment of the disclosure, a method, performed by a terminal, of accessing an access point (AP) in a wireless communication system includes receiving a beacon frame including load information of a currently accessed AP, determining whether to maintain access to a currently accessed radio channel, based on the load information of the currently accessed AP, receiving a beacon frame including load information of an AP near the terminal by scanning the AP near the terminal, when determining not to maintain the access to the currently accessed radio channel, determining a radio channel to be accessed, based on the load information of the AP near the terminal and inter-channel interference information, and accessing the determined radio channel. According to an embodiment of the disclosure, the load information of the AP may include basic service set (BSS) load information including a number of accessed terminals for each frequency band of the AP, a channel load quantity for each frequency band of the AP, and an average spectral efficiency of the AP. According to an embodiment of the disclosure, the determining of the radio channel to be accessed may include determining an AP to be accessed, a frequency band to be used, and a channel number to be used. According to an embodiment of the disclosure, the determining of whether to maintain the access to the currently accessed radio channel may include determining whether to maintain the access to the currently accessed radio channel based on a number of terminals accessing a currently accessed band of the currently accessed AP, when a channel load quantity of the currently accessed radio channel is greater than or equal to a threshold value. According to an embodiment of the disclosure, the inter-channel interference information may include whether inter-channel interference of the AP near the terminal occurs, and the determining of the radio channel to be accessed based on the load information of the AP near the terminal and the inter-channel interference information may include measuring a received signal strength indicator (RSSI) of a signal received from the AP near the terminal and determining whether inter-channel interference of the AP near the terminal occurs, based on the RSSI of the signal received from the AP near the terminal and an adjacent channel power leakage ratio.
According to an embodiment of the disclosure, the inter-channel interference information may include whether inter-channel interference of the AP near the terminal occurs, and the determining of the radio channel to be accessed based on the load information of the AP near the terminal and the inter-channel interference information may include measuring the RSSI of the signal received from the AP near the terminal, determining whether inter-channel interference of the AP near the terminal occurs, based on the RSSI of the signal received from the AP near the terminal and the adjacent channel power leakage ratio, obtaining a sum of channel load quantities of channels interfering with the radio channel with respect to the terminal's access to the radio channel, based on whether the inter-channel interference of the AP near the terminal occurs, obtaining a weight value based on the sum of the channel load quantities and a channel load quantity of the radio channel with respect to the terminal's access to the radio channel, obtaining an expected value of an average spectral efficiency of the radio channel with respect to the terminal's access to the radio channel, based on an average spectral efficiency of the radio channel, obtaining a resource efficiency with respect to the terminal's access to the radio channel, based on the weight value and the expected value of the average spectral efficiency, and determining the radio channel to be accessed, based on the resource efficiency. According to an embodiment of the disclosure, the method may further include receiving a beacon frame including the load information of the AP near the terminal by periodically scanning the AP near the terminal, measuring the RSSI of the signal received from the AP near the terminal, transmitting the beacon frame including the load information of the AP near the terminal and the RSSI of the signal received from the AP near the terminal to a centralized AP that is controlled by a central controller, and receiving an access determination result from the centralized AP and accessing the radio channel to which the access of the terminal is determined, based on the access determination result. According to an embodiment of the disclosure, the access determination result may include a result obtained in a way that the central controller determines a total resource efficiency of a network based on the beacon frame of the AP near the terminal, a transmission speed and traffic information for each of terminals accessing the centralized AP, and an RSSI of an AP near the centralized AP, and determines a radio channel to be accessed for each terminal based on the determined total resource efficiency of the network, in which the beacon frame, the transmission speed and the traffic information for each terminal, and the RSSI are received by the central controller from the centralized AP. According to an embodiment of the disclosure, the total resource efficiency of the network may be determined in a way that the central controller determines whether inter-channel interference occurs between APs near the centralized AP based on the RSSI of the AP near the centralized AP and an adjacent channel power leakage ratio and determines the total resource efficiency of the network based on whether inter-channel interference occurs between the APs near the centralized AP.
According to another embodiment of the disclosure, a terminal accessing an AP in a wireless communication system may include a communicator configured to communicate with a plurality of APs, a memory storing one or more instructions, and at least one processor configured to execute the one or more instructions to receive a beacon frame including load information of a currently accessed AP, determine whether to maintain an access to a currently accessed radio channel, based on the load information of the currently accessed AP, receive a beacon frame including load information of an AP near the terminal by scanning the AP near the terminal, when determining not to maintain the access to the currently accessed radio channel, determine a radio channel to be accessed based on load information of the AP near the terminal and inter-channel interference information, and access the determined radio channel. According to another embodiment of the disclosure, a computer program product includes a recording medium having stored therein a program for causing a terminal to perform operations of receiving a beacon frame including load information of a currently accessed AP, determining whether to maintain an access to a currently accessed radio channel, based on the load information of the currently accessed AP, receiving a beacon frame including load information of an AP near the terminal by scanning the AP near the terminal, when determining not to maintain the access to the currently accessed radio channel, determining a radio channel to be accessed based on load information of the AP near the terminal and inter-channel interference information, and accessing the determined radio channel. Mode of Disclosure Hereinafter, embodiments of the disclosure will be described in detail with reference to the accompanying drawings. When the embodiments of the disclosure are described, technical matters that are well known in the technical field of the disclosure and are not directly related to the disclosure will not be described. By omitting unnecessary description, the subject matter of the disclosure will be more clearly described without being obscured. For the same reasons, some elements will be exaggerated, omitted, or simplified in the attached drawings. The size of each element does not entirely reflect the actual size of the element. In each drawing, an identical or corresponding element will be referred to using an identical reference numeral. Advantages and features of the disclosure and a method of achieving them will be apparent with reference to embodiments of the disclosure described below together with the attached drawings. However, the disclosure is not limited to the disclosed embodiments of the disclosure, but may be implemented in various manners, and the embodiments of the disclosure are provided to make the disclosure complete and to allow those of ordinary skill in the art to understand the scope of the disclosure. The disclosure is defined by the scope of the claims. Throughout the specification, an identical reference numeral will indicate an identical element. Meanwhile, it is known to those of ordinary skill in the art that blocks of a flowchart and a combination of flowcharts may be represented and executed by computer program instructions.
These computer program instructions may be provided to a general-purpose computer, a special-purpose computer, or a processor of other programmable data processing devices, such that the instructions, when executed by the computer or the processor of the programmable data processing device, produce a means for performing the functions specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer usable or computer-readable memory that may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer usable or computer-readable memory produce an article of manufacture including instructions that implement the function specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions that are executed on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart and/or block diagram block or blocks. In addition, each block represents a module, segment, or portion of code, which includes one or more executable instructions for implementing the specified logical function(s). It should also be noted that in other implementations, the function(s) noted in the blocks may occur out of order. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved. In the current embodiment of the disclosure, the term ‘˜unit’, as used herein, denotes a software or hardware component, such as a Field Programmable Gate Array (FPGA) or Application Specific Integrated Circuit (ASIC), which performs certain tasks. However, the meaning of ‘˜unit’ is not limited to software or hardware. A ‘˜unit’ may advantageously be configured to reside on an addressable storage medium and configured to execute on one or more processors. Thus, a unit may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided for in the components and ‘˜units’ may be combined into fewer components and ‘˜units’ or further separated into additional components and ‘˜units’. In addition, components and ‘˜unit(s)’ may be implemented to execute on one or more CPUs in a device or a secure multimedia card. In the embodiments of the disclosure, ‘˜unit’ may include one or more processors. The terms as used herein are defined considering the functions in the disclosure and may be replaced with other terms according to the intention or practice of the user or operator. Therefore, the terms should be defined based on the overall disclosure. While embodiments of the disclosure are described by using a wireless local area network (WLAN) system as an example, the embodiments of the disclosure may also be applied to other communication systems having a similar technical background or channel form.
The embodiments of the disclosure may also be applied to other communication systems through some modifications within a range that does not largely depart from the scope of the disclosure based on determination by a person of ordinary skill in the art. In the disclosure, the term “terminal” may be interchangeably used with a user or a STA (station). The term “access” may also be interchangeably used with association. For example, user association (UA) may be understood as terminal access. In the disclosure, an access point (AP) may mean a WLAN AP. In the disclosure, a centralized AP (or a controller-based AP) may mean an AP controlled by a central controller. In the disclosure, a stand-alone AP (or an autonomous AP or a non-central-controller-based AP) may mean an AP that is not controlled by the central controller. In the disclosure, the AP may be any one of a centralized AP or a stand-alone AP, unless specifically described otherwise. In the disclosure, the AP may be a dual-band WLAN AP that supports both a frequency band of 2.4 GHz and a frequency band of 5 GHz. Each AP is allocated a channel to use in each band. The AP may transmit a beacon frame including load information of the AP. In the disclosure, determination of a radio channel to be accessed may mean determination of an AP to be accessed, a frequency band to be used, and a channel number to be used. In the disclosure, a central controller may mean a controller that obtains information including the number of terminals accessing an AP, traffic information for each terminal, and a transmission speed from the centralized AP through wired communication (e.g., Ethernet). For example, the central controller may be referred to as a WLAN controller. The stand-alone AP operates independently of the central controller, and thus the central controller may not be able to directly obtain information associated with the stand-alone AP from the stand-alone AP. In the disclosure, the distributed UA scheme may mean a UA scheme in which a terminal determines an AP which the terminal is to access in a wireless communication system. In the disclosure, the centralized UA scheme may mean a UA scheme in which the central controller determines an AP which each terminal in a network is to access based on a WLAN condition in a wireless communication system. FIG.1is a view for describing a method, performed by a terminal, of accessing a radio channel based on a distributed UA scheme, according to an embodiment of the disclosure. Referring toFIG.1, a terminal110according to an embodiment of the disclosure currently accesses an AP-2120. The terminal110may receive a beacon frame122including load information of the AP-2120from the AP-2120. The terminal110may determine whether to maintain access to a currently accessed radio channel, based on the load information of the currently accessed AP-2120. This will be described in more detail with reference toFIG.4. When the terminal110determines not to maintain the access to the currently accessed radio channel, the terminal110may receive a beacon frame including load information of an AP near the terminal110by scanning any AP near the terminal110. The terminal110may determine and access a radio channel based on the received load information of the AP near the terminal110and channel interference information. This will be described in more detail with reference toFIG.6.
FIG.2is a view for describing a method, performed by a terminal, of accessing a radio channel by using a centralized UA scheme, according to an embodiment of the disclosure. Referring toFIG.2, each AP may transmit a beacon frame including load information of the AP. A terminal210currently accesses an AP-3230. The terminal210may receive a beacon frame including load information of an AP near the terminal210by periodically scanning any AP near the terminal210. The terminal210may also receive a beacon frame including load information of an AP near the terminal210by aperiodically scanning any AP near the terminal210. InFIG.2, the terminal210may receive a beacon frame including load information of each AP from each of APs near the terminal210, e.g., an AP-1240, an AP-2250, and an AP-3230. The terminal210may transmit the received beacon frame including the load information of the AP to the centralized AP, the AP-3230. The terminal210may then receive an access determination result from the centralized AP, the AP-3230, and access a radio channel to which access of the terminal210is determined, based on the access determination result. The access determination result may be determined by the central controller220. This will be described in more detail with reference toFIG.10. FIG.3illustrates a wireless LAN environment in which a method of accessing a radio channel is executed by a terminal, according to an embodiment of the disclosure. Referring toFIG.3, an environment where a method, according to an embodiment of the disclosure, is executed may include at least one stand-alone AP and at least one centralized AP. InFIG.3, an AP-1330and an AP-3340are stand-alone APs, and an AP-2320is a centralized AP. In an AP-dense situation, APs often use the same channel or overlapping channels, such that the APs share channels with each other. Thus, load distribution is required for network quality improvement. In a case where the AP is a dual-band WLAN AP that supports both the frequency band of 2.4 GHz and the frequency band of 5 GHz, load imbalance between bands may occur, requiring load distribution between bands through band steering that moves the accessing terminal between the bands. In a case where the centralized AP and the stand-alone AP are mixed, load imbalance may occur between the centralized AP and the stand-alone AP. According to the disclosed method, in an environment where two types of APs, i.e., stand-alone APs and centralized APs, are densely mixed, network load distribution may be efficiently performed. FIG.4is a flowchart illustrating a method, performed by a terminal, of accessing a radio channel by using a distributed UA scheme, according to an embodiment of the disclosure. In operation410, a terminal may receive, from a currently accessed AP, a beacon frame including load information of the AP. According to an embodiment of the disclosure, the load information of the AP may include basic service set (BSS) load information. The BSS load information may include the number of accessed terminals for each frequency band of the AP, |S(k,B)|, a channel load quantity for each frequency band of the AP, L(k,B), and an average spectral efficiency (SE) of the AP, Γ(k,B). Herein, S(k,B) may mean a set of terminals accessing a band B of an AP-k (B∈{2.4 GHz, 5 GHz}), and L(k,B) may mean a channel load quantity in the frequency band B of the AP-k.
L(k,B) may be defined as follows:

    L(k,B) = \sum_{i \in S(k,B)} \frac{\alpha_{k,i}}{C_{k,i}}    [Equation 1]

Herein, \alpha_{k,i} may mean a traffic arrival rate of an STA-i associated with the AP-k, C_{k,i} may mean a transmission speed of a link between the AP-k and the STA-i, and S(k,B) may mean a set of terminals accessing the band B of the AP-k, B∈{2.4 GHz, 5 GHz}, as described above. Γ(k,B) may be defined as follows:

    \Gamma(k,B) = \frac{|S(k,B)|}{\sum_{i \in S(k,B)} \frac{1}{f(SNR_{k,i})}}    [Equation 2]

Herein, |S(k,B)| may mean the number of accessed terminals for each frequency band of an AP, as described above, and f(SNR_{k,i}) may mean an SE between the AP-k and the STA-i, which may be defined as follows:

    f(SNR_{k,i}) = \min\bigl(2.7, \log_2(1 + 0.25 \cdot SNR_{k,i})\bigr)    [Equation 3]

Herein, SNR_{k,i} may mean a signal-to-noise ratio (SNR) between the AP-k and the STA-i. In operation420, the terminal may determine whether to maintain access to a currently accessed radio channel, based on the load information of the currently accessed AP. According to an embodiment of the disclosure, when a channel load quantity L(k,B) of a currently accessed frequency band of a currently accessed AP is greater than or equal to a threshold value L_th, the terminal may determine whether to maintain access to the currently accessed radio channel, based on the number of terminals accessing a currently accessed band of the currently accessed AP. According to an embodiment of the disclosure, the terminal may determine not to maintain access to the currently accessed radio channel with a probability 1/|S(k,B)|, the reciprocal of the number |S(k,B)| of terminals accessing the currently accessed band of the currently accessed AP. In operation430, when the terminal determines not to maintain the access to the currently accessed radio channel in operation420, the terminal may scan an AP near the terminal and receive a beacon frame including load information of the AP near the terminal. In operation440, the terminal may determine a radio channel to be accessed, based on the received load information of the AP near the terminal and channel interference information. This will be described in more detail with reference toFIG.6. In operation450, the terminal may access the radio channel determined in operation440. FIG.5is a view for describing a method, performed by a terminal, of accessing a radio channel by using a distributed UA scheme, according to an embodiment of the disclosure. Referring toFIG.5, in operation510, an AP503that a terminal501currently accesses may transmit a beacon frame including load information of the currently accessed AP503, and the terminal501may receive the beacon frame. Operations520through550ofFIG.5are the same as operations420through450ofFIG.4, and thus will be described in brief. In operation520, the terminal501may determine whether to maintain access to a currently accessed radio channel, based on load information of the currently accessed AP503. In operation530, when the terminal501determines not to maintain the access to the currently accessed radio channel, the terminal501may scan an AP near the terminal501and receive a beacon frame including load information of the AP near the terminal501. In operation540, the terminal501may determine a radio channel to be accessed, based on the received load information of the AP near the terminal501and channel interference information. In operation550, the terminal501may access the determined radio channel.
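Equations 1 to 3 and the probabilistic decision of operation420translate directly into the following Python sketch; the linear-SNR inputs and the example numbers are placeholders chosen only for illustration.

import math, random

def f_se(snr_linear: float) -> float:
    """Equation 3: per-link spectral efficiency, capped at 2.7 bit/s/Hz."""
    return min(2.7, math.log2(1 + 0.25 * snr_linear))

def channel_load(arrival_rates: list, link_speeds: list) -> float:
    """Equation 1: L(k,B) as a sum of per-STA traffic-to-speed ratios."""
    return sum(a / c for a, c in zip(arrival_rates, link_speeds))

def average_se(snrs: list) -> float:
    """Equation 2: harmonic-mean spectral efficiency over the associated STAs."""
    return len(snrs) / sum(1.0 / f_se(s) for s in snrs)

def leave_current_channel(load: float, load_threshold: float, n_stas: int) -> bool:
    """Operation 420: once L(k,B) >= L_th, leave with probability 1/|S(k,B)|."""
    return load >= load_threshold and random.random() < 1.0 / n_stas

print(channel_load([1e6, 2e6], [20e6, 40e6]), average_se([10.0, 20.0, 40.0]))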
FIG.6is a flowchart illustrating a method of determining a radio channel that a terminal according to an embodiment of the disclosure is to access, based on load information of an AP near the terminal and channel interference information. With reference toFIG.6, operation440ofFIG.4will be described in more detail. In operation430ofFIG.4, the terminal receives the beacon frame including the load information of the AP near the terminal. In operation610, the terminal may measure a received signal strength indicator (RSSI) of a signal received from the AP near the terminal. In operation620, the terminal may determine channel interference (or inter-channel interference) of the AP near the terminal, based on the RSSI of the signal received from the AP near the terminal and an adjacent channel power leakage ratio. Unlike in the frequency band of 5 GHz, where non-overlapped channels are mostly used, in the frequency band of 2.4 GHz, channels other than channels #1, #6, and #11 are overlapping channels. Thus, between channels in the frequency band of 2.4 GHz, adjacent channel interference (ACI) may occur, affecting network performance, such that a radio channel to be accessed may have to be determined based on an interference level resulting from ACI. In an embodiment of the disclosure, a product of the RSSI of the signal received from the AP near the terminal and the adjacent channel power leakage ratio may be measured as an interference level resulting from ACI. The adjacent channel power leakage ratio may mean a ratio of the power leaking into an adjacent channel to the power transmitted in the current channel. That is, as the adjacent channel power leakage ratio increases, the interference level affecting the adjacent channel increases. The adjacent channel power leakage ratio with respect to a channel in 2.4 GHz may be as shown in Table 1.

TABLE 1
Channel distance           1      2      3      4      5
Power leakage ratio (%)    79.06  52.67  26.51  0.627  0.121

In the following embodiment of the disclosure, a detailed description will be made of a method in which the terminal determines channel interference of the AP near the terminal, based on the RSSI of the signal received from the AP near the terminal and the adjacent channel power leakage ratio. For example, for a channel of the band B of the AP-k, which is any one AP among the APs near the terminal, the terminal may determine, as a radio channel interfering with the channel of the band B of the AP-k, an adjacent radio channel using the same channel as the channel of the band B of the AP-k for which the RSSI measured by the terminal is greater than or equal to −82 dBm (a minimum threshold value to be detected by carrier sense-based clear channel assessment (CCA)). The terminal may also determine, as a radio channel interfering with the channel of the frequency band B of the AP-k, an adjacent radio channel using an overlapped channel with the channel of the frequency band B of the AP-k for which the product of the adjacent channel power leakage ratio and the RSSI measured by the terminal is greater than or equal to −62 dBm (a minimum threshold value to be detected by energy detection-based CCA). When there is a radio channel interfering with the channel of the band B of the AP-k, the terminal may determine that there is channel interference of the AP-k, i.e., the AP near the terminal. According to the above-described embodiment of the disclosure, the terminal may determine interference between the channel of the band B of the AP-k, which is any one AP among the APs near the terminal, and other radio channels.
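The interference test of operation620can be rendered as a short Python sketch. This is a hypothetical illustration, not the disclosure's implementation: the LEAKAGE dictionary restates Table 1 as ratios, and the two thresholds follow the −82 dBm and −62 dBm CCA values given above.

import math

# Adjacent-channel power leakage ratio by channel distance (Table 1).
LEAKAGE = {1: 0.7906, 2: 0.5267, 3: 0.2651, 4: 0.00627, 5: 0.00121}

SAME_CHANNEL_THRESHOLD_DBM = -82.0   # carrier sense-based CCA
OVERLAP_THRESHOLD_DBM = -62.0        # energy detection-based CCA

def interferes(rssi_dbm: float, channel_distance: int) -> bool:
    # Return True if the neighbor's signal counts as ACI on our channel.
    if channel_distance == 0:
        return rssi_dbm >= SAME_CHANNEL_THRESHOLD_DBM
    ratio = LEAKAGE.get(channel_distance, 0.0)
    if ratio == 0.0:
        return False  # beyond distance 5: negligible leakage assumed
    # Multiplying the RSSI by the leakage ratio means adding 10*log10(ratio) in dB.
    leaked_dbm = rssi_dbm + 10 * math.log10(ratio)
    return leaked_dbm >= OVERLAP_THRESHOLD_DBM

# Example matching FIG. 8 below: -58 dBm received at channel distance 1
# gives -58 + 10*log10(0.7906) = -59.02 dBm >= -62 dBm, so interference.
print(interferes(-58.0, 1))  # True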
In operation630, based on whether channel interference of the AP near the terminal occurs, the terminal may obtain a sum of channel load quantities of channels interfering with a radio channel when the terminal accesses the radio channel. For example, when the terminal accesses the channel of the frequency band B of the AP-k, the terminal may obtain a sum of channel load quantities of radio channels interfering with the channel of the band B of the AP-k, based on whether interference between the channel of the band B of the AP-k and other radio channels occurs, as determined in operation620. The sum of the channel load quantities may be defined as below.

\sum_{m \in \{k\} \cup \mathcal{A}_k(B)} L(m,B)   [Equation 4]

Herein, 𝒜_k(B) may mean a set of APs interfering with the band-B of the AP-k, and L(m,B) may mean a load quantity in the band-B of an AP-m. In operation640, the terminal may obtain a weight value with respect to the terminal's access to the radio channel, based on the above-described sum of channel load quantities and the channel load quantity of the radio channel. For example, the terminal may obtain the weight value based on the channel load quantity of the radio channel when the terminal accesses the channel of the band-B of the AP-k and on the sum of the channel load quantities of the radio channels interfering with the channel of the band-B of the AP-k. The weight value may be defined as follows:

w_i(k,B) = \frac{1}{1 + \frac{\alpha_{k,i}}{C_{k,i}} + \sum_{m \in \{k\} \cup \mathcal{A}_k(B)} L(m,B)}   [Equation 5]

Herein, α_{k,i} may mean a traffic arrival rate of the STA-i associated with the AP-k, and C_{k,i} may mean a transmission speed of a link between the AP-k and the STA-i. In operation650, the terminal may obtain an expected value of an average SE of a radio channel with respect to the terminal's access to the radio channel, based on the average SE of the radio channel. For example, the terminal may obtain the expected value of the average SE of the band-B of the AP-k with respect to the terminal's access to the channel of the band-B of the AP-k, based on the average SE Γ(k,B) of the band-B of the AP-k. The expected value of the average SE may be defined as below.

\Gamma_i(k,B) = \frac{1 + |S(k,B)|}{\frac{1}{f(\mathrm{SNR}_{k,i})} + \frac{|S(k,B)|}{\Gamma(k,B)}}   [Equation 6]

Herein, Γ(k,B) may mean an average SE of the band-B of the AP-k, |S(k,B)| may mean the number of accessed terminals for each frequency band of the AP, and f(SNR_{k,i}) may mean an SE between the AP-k and the STA-i. In operation660, the terminal may obtain a resource efficiency with respect to the terminal's access to the radio channel, based on the weight value and the expected value of the average SE. For example, the terminal may obtain the resource efficiency with respect to the terminal's access to the channel of the band-B of the AP-k, based on the weight value w_i(k,B) and the expected value Γ_i(k,B) of the average SE. The resource efficiency may be defined as follows:

RE_i(k,B) = w_i(k,B) \cdot \Gamma_i(k,B)   [Equation 7]

In operation670, the terminal may determine a radio channel to be accessed, based on the resource efficiency. For example, the terminal may determine, as the radio channel to be accessed, the AP-k* and the band B* in which the resource efficiency is highest, based on the resource efficiency RE_i(k,B) with respect to the terminal's access to the channel of the band-B of the AP-k. The radio channel to be accessed may be determined as below.

(k^*, B^*) = \underset{k \in \mathcal{A}_i(B),\ B \in \{2.4\ \mathrm{GHz},\ 5\ \mathrm{GHz}\}}{\arg\max}\ RE_i(k,B)   [Equation 8]

Herein, 𝒜_i(B) may mean a set of APs using a band-B near the STA-i. According to the disclosed method, in a situation where dual-band APs are highly dense, the terminal may perform load distribution between APs and bands based on whether radio channel interference occurs, thus improving performance of the WLAN.
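The following Python sketch ties operations630through670together. It is an illustrative rendering only; the Candidate structure and all numbers are hypothetical, with each candidate representing one (AP-k, band-B) pair together with its advertised load, average SE, station count, and the loads of the channels found to interfere with it in operation620.

import math
from dataclasses import dataclass, field

def se(snr: float) -> float:
    # f(SNR) = min(2.7, log2(1 + 0.25*SNR))  [Equation 3]
    return min(2.7, math.log2(1 + 0.25 * snr))

@dataclass
class Candidate:
    ap: str
    band: str
    load: float            # L(k,B) from the candidate's beacon
    gamma: float           # Gamma(k,B), advertised average SE of the band
    n_stations: int        # |S(k,B)|
    snr: float             # SNR between this AP and the joining STA-i
    interfering_loads: list = field(default_factory=list)  # L(m,B), m in A_k(B)

def resource_efficiency(c: Candidate, alpha: float, rate: float) -> float:
    load_sum = c.load + sum(c.interfering_loads)            # [Equation 4]
    w = 1.0 / (1.0 + alpha / rate + load_sum)               # [Equation 5]
    gamma_i = (1 + c.n_stations) / (1.0 / se(c.snr) + c.n_stations / c.gamma)  # [Equation 6]
    return w * gamma_i                                      # [Equation 7]

def pick_channel(cands, alpha: float, rate: float) -> Candidate:
    # Operation 670 / Equation 8: argmax of RE_i(k,B) over all candidates.
    return max(cands, key=lambda c: resource_efficiency(c, alpha, rate))

cands = [
    Candidate("AP-1", "2.4 GHz", load=0.40, gamma=2.0, n_stations=5, snr=25.0,
              interfering_loads=[0.2]),
    Candidate("AP-3", "5 GHz", load=0.10, gamma=2.4, n_stations=2, snr=40.0),
]
best = pick_channel(cands, alpha=2.0, rate=20.0)
print(best.ap, best.band)  # the lightly loaded, interference-free 5 GHz band wins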
FIG.7is a view for describing a method, performed by a terminal, of accessing a radio channel based on a distributed UA scheme, according to an embodiment of the disclosure. In operation701, a terminal710currently accesses an AP-2720with a saturated channel load quantity. The AP-2720may transmit a beacon frame including load information of the AP-2720, and the terminal710may receive the beacon frame. The terminal710may determine whether to maintain access to the currently accessed AP-2720, based on the load information of the currently accessed AP-2720. When the terminal710determines not to maintain the access, the terminal710proceeds to operation702. In operation702, the terminal710may receive a beacon frame including load information of each AP from each of the APs near the terminal710, e.g., an AP-1740, the AP-2720, and an AP-3730, by scanning the APs near the terminal710. The terminal710may determine and access a radio channel based on the received load information of the APs near the terminal710and channel interference information. Operation703shows that the terminal710accesses the determined AP-3730. FIG.8is a view for describing a method of determining whether channel interference occurs between APs according to an embodiment of the disclosure. Referring toFIG.8, a terminal810may determine whether channel interference occurs between APs near the terminal810. For example, the terminal810may determine whether interference occurs between a channel 1 of a 2.4 GHz band of an AP-1820and a channel 2 of the 2.4 GHz band of an AP-2830, in which the AP-1820and the AP-2830are near the terminal810. The channel 1 and the channel 2 are adjacent overlapped channels, and a product of an RSSI of a signal received from the AP-2830and the adjacent channel power leakage ratio of the channel 1 and the channel 2, 0.7906 (see Table 1), that is, −59.02 dBm, is greater than −62 dBm, such that the terminal810may determine that the channel 2 of the 2.4 GHz band of the AP-2830is a radio channel that interferes with the channel 1 of the 2.4 GHz band of the AP-1820.
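As a check of the arithmetic in this example (the RSSI of the signal received from the AP-2830is not stated explicitly in the text; a value of −58 dBm is assumed here because it is what the −59.02 dBm result implies), note that multiplying a received power by the leakage ratio corresponds to adding the ratio's decibel value:

10\log_{10}(0.7906) \approx -1.02\ \mathrm{dB}, \qquad -58\ \mathrm{dBm} + (-1.02\ \mathrm{dB}) = -59.02\ \mathrm{dBm} \geq -62\ \mathrm{dBm}

so the energy detection-based CCA threshold is exceeded and the channel is counted as interfering.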
FIG.9is a view for describing a method, performed by a terminal, of accessing a radio channel by using a centralized UA scheme, according to an embodiment of the disclosure. In operation910, a terminal901may receive a beacon frame including load information of an AP near the terminal901by periodically scanning APs near the terminal901. The terminal901may also receive a beacon frame including load information of an AP near the terminal901by aperiodically scanning the APs near the terminal901. The load information of the AP included in the beacon frame may be as disclosed in the description made with reference toFIG.4. That is, the load information of the AP may include BSS load information, which includes the number of accessed terminals for each frequency band of the AP, |S(k,B)|, a channel load quantity for each frequency band of the AP, L(k,B), and an average SE of the AP, Γ(k,B). In operation915, the terminal901may measure an RSSI of a signal received from an AP near the terminal901. In operation920, the terminal901may transmit a beacon frame including load information of the AP near the terminal901and the RSSI of the signal received from the AP near the terminal901to a centralized AP903. The centralized AP903may transmit the received information to a central controller905. A period PR in which the terminal901scans the APs near the terminal901and transmits the information through operations910through920may be, for example, 1 minute. In operation930, the centralized AP903may transmit a transmission speed and traffic information for each terminal accessing the centralized AP903to the central controller905. The traffic information may include a traffic arrival rate of the terminal901. In operation940, the central controller905may command the centralized AP903to periodically perform channel scanning. A scanning period PC may be, for example, 12 hours. The central controller905may also command the centralized AP903to aperiodically perform channel scanning. In operation945, the centralized AP903may scan an AP near the centralized AP903according to a channel scan command of the central controller905. The centralized AP903may obtain the RSSI of a signal transmitted and received between APs near the centralized AP903through channel scanning. In operation950, the centralized AP903may transmit the RSSI of the signal transmitted and received between the APs near the centralized AP903to the central controller905. In operation960, the central controller905may determine a total resource efficiency of a network to determine a radio channel to be accessed for each terminal, based on a beacon frame of an AP near the terminal901received from the centralized AP903, a transmission speed and traffic information for each terminal accessing the centralized AP903, and the RSSI of a signal transmitted and received between APs near the centralized AP903. A more detailed description will be made in the following embodiment of the disclosure. Hereinbelow, a description will be made of a method in which the central controller905determines whether interference occurs between APs near the centralized AP903. The central controller905may determine, based on the RSSI of a signal transmitted and received between APs near the centralized AP903and an adjacent channel power leakage ratio, whether inter-channel interference occurs between the APs near the centralized AP903. In an embodiment of the disclosure, a product of the RSSI of the signal transmitted and received between APs near the centralized AP903and the adjacent channel power leakage ratio may be measured as an interference level resulting from ACI. For example, for the channel of the band-B of the AP-k, which is any one AP among the APs near the centralized AP903, the central controller905may determine, as a radio channel interfering with the channel of the band-B of the AP-k, an adjacent radio channel using the same channel as the channel of the band-B of the AP-k for which the RSSI measured by the AP-k is greater than or equal to −82 dBm (a minimum threshold value to be detected by carrier sense-based clear channel assessment (CCA)).
The central controller905may also determine, as a radio channel interfering with the channel of the band-B of the AP-k, an adjacent radio channel using an overlapped channel with the channel of the band-B of the AP-k for which the product of the adjacent channel power leakage ratio and the RSSI measured by the AP-k is greater than or equal to −62 dBm (a minimum threshold value to be detected by energy detection-based CCA). According to the above-described embodiment of the disclosure, the central controller905may determine interference between the channel of the band-B of the AP-k, which is any one AP among the APs near the centralized AP, and other radio channels. In the disclosure, an STA (terminal) may be classified into a legacy STA and a non-legacy STA. The legacy STA (hereinafter, an L-STA) may be a general STA supporting a WLAN. The non-legacy STA (hereinafter, an NL-STA) may be an STA having embedded therein software or hardware implementing the disclosed method. The NL-STA may be expressed as one of two types depending on the type of AP the NL-STA currently accesses. The NL-STA accessing the centralized AP may be expressed as a C-STA, and the NL-STA accessing the stand-alone AP may be expressed as a U-STA. This may be summarized as shown in Table 2.

TABLE 2
Type                            Description                                Whether UA Control S/W is Installed
Legacy STA (L-STA)              General legacy STA                         X
Non-legacy STA (NL-STA): C-STA  NL-STA associated with a centralized AP    O
Non-legacy STA (NL-STA): U-STA  NL-STA associated with a stand-alone AP    O

Hereinbelow, a description will be made of a method in which the central controller905obtains a total channel load quantity applied to the band-B of the AP-k, which is any one AP among the APs existing in the network. A load quantity of the band-B of the AP-k, L(k,B), may be defined as below.

L(k,B) = \sum_{i \in S_C(k,B)} x_i(k,B) \cdot \frac{\alpha_{k,i}}{C_{k,i}} + \sum_{j \in S_{NC}(k,B)} \frac{\alpha_{k,j}}{C_{k,j}}   [Equation 9]

Herein, x_i(k,B) may mean a binary indicator indicating association of the STA-i with respect to the band-B of the AP-k, S_C(k,B) may mean a set of C-STAs associated with the band-B of the AP-k, S_NC(k,B) may mean a set of non-C-STAs (i.e., U-STAs and L-STAs) associated with the band-B of the AP-k, α_{k,i} may mean a traffic arrival rate of the STA-i associated with the AP-k, and C_{k,i} may mean a transmission speed of a link between the AP-k and the STA-i. Next, the total channel load quantity applied to the band-B of the AP-k may be L_tot(k,B), which may be defined as a result of adding the load quantities of the radio channels interfering with the above-described channel to the channel load quantity of the band-B of the AP-k. This may be defined as follows:

L_{tot}(k,B) := \sum_{m \in \{k\} \cup \mathcal{A}_k(B)} L(m,B)   [Equation 10]

Herein, 𝒜_k(B) may mean a set of APs interfering with the band-B of the AP-k.
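A minimal Python sketch of Equations 9 and 10, with hypothetical names and values: the load on the band-B of the AP-k combines controllable C-STAs, gated by the binary association indicator x_i(k,B), with non-controllable U-/L-STAs, and L_tot(k,B) then adds the loads of the interfering (AP, band) channels.

def band_load(c_stas, nc_stas, x):
    # L(k,B): c_stas and nc_stas are lists of (alpha, C) pairs; x[i] in {0, 1}
    # is the association indicator x_i(k,B) of the i-th C-STA  [Equation 9].
    load_c = sum(xi * alpha / c for xi, (alpha, c) in zip(x, c_stas))
    load_nc = sum(alpha / c for alpha, c in nc_stas)
    return load_c + load_nc

def total_load(own_load, interfering_loads):
    # L_tot(k,B) = L(k,B) plus the loads of interfering channels  [Equation 10].
    return own_load + sum(interfering_loads)

# Example: two C-STAs (only the first currently associated) plus one L-STA.
own = band_load(c_stas=[(2.0, 20.0), (1.0, 10.0)], nc_stas=[(3.0, 30.0)], x=[1, 0])
print(own, total_load(own, interfering_loads=[0.15, 0.05]))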
Hereinbelow, a description will be made of a method in which the central controller905determines a total resource efficiency of the network based on a beacon frame of an AP near the terminal901received from the centralized AP903, a transmission speed and traffic information for each terminal accessing the centralized AP903, an RSSI of a signal transmitted and received between APs near the centralized AP903, an adjacent channel power leakage ratio, and whether inter-channel interference of the AP near the centralized AP903occurs. The total resource efficiency of the network may be defined as follows:

\sum_{B} \sum_{k \in \mathcal{A}(B)} RE(k,B)   [Equation 11]

Herein, RE(k,B) may mean a resource efficiency in the band-B of the AP-k, and may be defined as below.

RE(k,B) = w(k,B) \cdot \Gamma(k,B)   [Equation 12]

Herein, w(k,B) may mean a weight value based on the channel load quantity applied to the band-B of the AP-k and may be defined as below.

w(k,B) = \{1 + L_{tot}(k,B)\}^{-1}   [Equation 13]

In Equation 12, Γ(k,B) may mean an average SE in the band-B of the AP-k, and may be defined as below.

\Gamma(k,B) = \frac{\sum_{i \in S_C(k,B)} x_i(k,B) + |S_{NC}(k,B)|}{\sum_{i \in S_C(k,B)} \frac{x_i(k,B)}{f(\mathrm{SNR}_{k,i})} + \sum_{j \in S_{NC}(k,B)} \frac{1}{f(\mathrm{SNR}_{k,j})}}   [Equation 14]

Herein, x_i(k,B) may mean a binary indicator indicating association of the STA-i with respect to the band-B of the AP-k, S_C(k,B) may mean a set of C-STAs associated with the band-B of the AP-k, S_NC(k,B) may mean a set of non-C-STAs (i.e., U-STAs and L-STAs) associated with the band-B of the AP-k, and f(SNR_{k,i}) may mean an SE between the AP-k and the STA-i. The central controller905may determine a radio channel that each terminal is to access so as to maximize the total resource efficiency of the network. This may be defined as follows:

\text{maximize} \sum_{B} \sum_{k \in \mathcal{A}(B)} RE(k,B)   [Equation 15]

Herein, 𝒜(B) may mean a set of all APs existing in the network. Constraints of Equation 15 may be defined by Equation 16 and Equation 17. Equation 16 may mean, as a constraint for association of the STA-i, that the STA-i is associated with a particular band of exactly one AP in the network.

\sum_{B} \sum_{k \in \mathcal{A}(B)} x_i(k,B) = 1, \quad \forall\, x_i(k,B) \in \{0,1\}   [Equation 16]

Herein, 𝒜(B) may mean a set of all APs existing in the network, and x_i(k,B) may mean a binary indicator indicating association of the STA-i with respect to the band-B of the AP-k. Equation 17 is a constraint for the channel load quantity applied to the band-B of the AP-k, L_tot(k,B), which may mean that L_tot(k,B) should be less than a predefined channel load quantity threshold value L_th.

L_{tot}(k,B) \leq L_{th}   [Equation 17]

Equation 15 is a mixed integer quadratic fractional programming (MIQFP) problem and is NP-hard. Thus, the following description will be made of a method of obtaining a solution by transforming this problem into a soluble problem. As shown in Equation 13, w(k,B) has a fractional form, such that the objective function of Equation 15, which is a sum of products of w(k,B) and Γ(k,B), is an MIQFP problem having a fractional form. To transform this problem into a soluble mixed integer quadratic programming (MIQP) problem, an additional parameter has to be introduced using a parametric technique. In this way, the MIQFP problem having a fractional form may be transformed into an MIQP problem having a quadratic form. An equation obtained by transforming the original problem, Equation 15, into the MIQP problem by introducing a parameter λ(k,B) may be expressed as below.

\text{maximize} \sum_{B} \sum_{k \in \mathcal{A}(B)} \left\{ \sum_{i \in S_C(k,B)} x_i(k,B) + |S_{NC}(k,B)| - \frac{\lambda(k,B)}{w(k,B)} \cdot \left( \sum_{i \in S_C(k,B)} \frac{x_i(k,B)}{f(\mathrm{SNR}_{k,i})} + \sum_{j \in S_{NC}(k,B)} \frac{1}{f(\mathrm{SNR}_{k,j})} \right) \right\}   [Equation 18]

Constraints of Equation 18 may be defined by Equation 16 and Equation 17. According to Equation 18, the original problem, Equation 15, may be transformed into the MIQP problem, Equation 18, which is optimally soluble by a branch-and-bound technique given a value of λ(k,B). A solution to this problem may be obtained by using a CPLEX MIQP solver, as described below. To obtain a solution to the problem of Equation 18, λ(k,B) should be defined first. According to Dinkelbach's method, an initial λ(k,B) may be set to 0 and then iteratively updated. The algorithm ends when the difference in λ(k,B) before and after an update is less than a predefined threshold value ϵ. The algorithm for obtaining the solution to the problem of Equation 18 is as below.

[Equation 19]
Algorithm 1: Optimal UA Algorithm
  for all B ∈ {2.4 GHz, 5 GHz} and k ∈ 𝒜(B): λ(k,B) ← 0, old_λ(k,B) ← 1
  ϵ ← 10^−5, stop ← false
  while stop = false do
      solve the problem of Equation 18 using the MIQP solver, given λ(k,B)
      stop ← true
      for all B ∈ {2.4 GHz, 5 GHz} and k ∈ 𝒜(B) do
          if |λ(k,B) − old_λ(k,B)| > ϵ then
              old_λ(k,B) ← λ(k,B)
              λ(k,B) ← w(k,B) · (Σ_{i∈S_C(k,B)} x_i(k,B) + |S_NC(k,B)|) / (Σ_{i∈S_C(k,B)} x_i(k,B)/f(SNR_{k,i}) + Σ_{j∈S_NC(k,B)} 1/f(SNR_{k,j}))
              stop ← false
          end if
      end for
  end while
  return the optimal association results x_i(k,B), i ∈ S_C(k,B)
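The following Python sketch mirrors the structure of Algorithm 1. It is an illustration under stated assumptions, not the disclosure's implementation: because the CPLEX MIQP solver referenced above is a commercial tool, a brute-force enumeration over the binary indicators x_i(k,B) stands in for it here (feasible only at toy scale), the toy network ignores inter-AP interference so that L_tot(k,B) reduces to each candidate's own load, and every numeric value is made up.

import itertools
import math

def f_snr(snr):
    # f(SNR) = min(2.7, log2(1 + 0.25*SNR))  [Equation 3]
    return min(2.7, math.log2(1 + 0.25 * snr))

BANDS = ["AP1-2.4GHz", "AP2-2.4GHz"]   # candidate (AP, band) pairs
ALPHA_OVER_C = [[0.10, 0.20],          # C-STA-0: alpha/C toward each candidate
                [0.15, 0.05]]          # C-STA-1
SNR = [[30.0, 10.0], [8.0, 40.0]]      # SNR of each C-STA toward each candidate
NC_LOAD = [0.10, 0.05]                 # load from U-/L-STAs per candidate
NC_INV_SE = [0.5, 0.3]                 # their sum of 1/f(SNR) per candidate
NC_COUNT = [1, 1]                      # |S_NC(k,B)| per candidate

def terms(assign):
    # Per-candidate (w, N, D): the weight of Equation 13 and the numerator and
    # denominator of Gamma(k,B) in Equation 14, for a given assignment tuple.
    out = []
    for b in range(len(BANDS)):
        stas = [i for i, a in enumerate(assign) if a == b]
        load = NC_LOAD[b] + sum(ALPHA_OVER_C[i][b] for i in stas)
        w = 1.0 / (1.0 + load)
        n = len(stas) + NC_COUNT[b]
        d = NC_INV_SE[b] + sum(1.0 / f_snr(SNR[i][b]) for i in stas)
        out.append((w, n, d))
    return out

def solve_given_lambda(lam):
    # Stand-in for the CPLEX MIQP solver: maximize Equation 18 given lambda(k,B)
    # by enumerating every assignment of the two C-STAs to the two candidates.
    def objective(assign):
        return sum(n - (lam[b] / w) * d
                   for b, (w, n, d) in enumerate(terms(assign)))
    return max(itertools.product(range(len(BANDS)), repeat=2), key=objective)

lam, old = [0.0, 0.0], [1.0, 1.0]      # Algorithm 1 initialization
assign = None
while any(abs(l - o) > 1e-5 for l, o in zip(lam, old)):
    assign = solve_given_lambda(lam)
    old = lam[:]
    lam = [w * n / d for (w, n, d) in terms(assign)]   # Dinkelbach update
print("association:", assign, "lambda:", lam)

On this toy data the loop settles after a few iterations on the assignment that sends each C-STA to the candidate where its SNR is strongest, which is what the fixed point of the Dinkelbach update should do.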
In operation965, the central controller905may transmit a result of determining a radio channel to be accessed for each terminal to the centralized AP903, which may transmit the received result to the terminal901. The central controller905may periodically determine a radio channel to be accessed for each terminal, and the determination period (PD) may be, for example, 5 minutes. The central controller905may also aperiodically determine a radio channel to be accessed for each terminal. In operation970, the terminal901may receive the result of determining a radio channel to be accessed for each terminal and access the determined radio channel based on the result. According to the disclosed method, in a situation where dual-band APs are highly dense, the terminal may perform load distribution between APs and bands based on whether interference occurs between radio channels, thus improving performance of the WLAN. The terminal may not only adjust an access to existing centralized APs based on a current network condition (channel load quantity and radio channel quality), but may also access a stand-alone AP by considering a network condition of the stand-alone AP together. FIG.10is a view for describing a method, performed by a terminal, of accessing a radio channel by using a centralized UA scheme, according to an embodiment of the disclosure. In operation1001, a terminal1010currently accesses an AP-31030. The terminal1010may receive a beacon frame including load information of an AP near the terminal1010by periodically scanning APs near the terminal1010. The terminal1010may also receive a beacon frame including load information of an AP near the terminal1010by aperiodically scanning the APs near the terminal1010. InFIG.10, the terminal1010may receive a beacon frame including load information of each AP from each of the APs near the terminal1010, e.g., an AP-1, an AP-2, and the AP-31030. The terminal1010may measure an RSSI of a signal received from the AP-1, the AP-2, and the AP-31030. The centralized AP, the AP-31030, may be controlled by a central controller1020. In operation1002, the central controller1020may receive a beacon frame including load information of an AP near the terminal1010and the RSSI of the signal received from the AP near the terminal1010, from the terminal1010through the AP-31030. In operation1001or1002, the AP-31030may transmit and receive information to and from the central controller1020under control of the central controller1020. This has been described in detail with reference to operations920through950ofFIG.9. In operation1003, the central controller1020may determine a total resource efficiency of a network based on information of the AP-31030received from the AP-31030and information received by the AP-31030from the terminal1010, determine a radio channel to be accessed for each terminal based on the total resource efficiency, and transmit an access determination result to the AP-31030. The AP-31030may transmit the access determination result to the terminal1010.
A method of determining the access determination result has been described in detail with reference to operation960ofFIG.9. In operation1004, the terminal1010may access an AP-11040to which an access is determined. FIG.11is a view for describing a method, performed by a terminal, of accessing a radio channel by using a distributed UA scheme and a centralized UA scheme, according to an embodiment of the disclosure. Referring toFIG.11, a terminal1101may access a radio channel to which an access is determined, by using the distributed UA scheme described with reference toFIG.4and the centralized UA scheme described with reference toFIG.9. Operations1110through1160ofFIG.11are the same as operations510through550ofFIG.5and operations910through970ofFIG.9, and thus will be described in brief. Operations1110through1120show a method in which the terminal1101may obtain information about an AP near the terminal1101, transmit the information to a centralized AP1105, and transmit it to a central controller1107through the centralized AP1105, according to the centralized UA scheme. In operation1110, the terminal1101may receive a beacon frame including load information of an AP by periodically scanning APs near the terminal1101. The terminal1101may also receive a beacon frame including load information of an AP near the terminal1101by aperiodically scanning the APs near the terminal1101. In operation1115, the terminal1101may measure an RSSI of a signal received from an AP near the terminal1101. In operation1120, the terminal1101may transmit a beacon frame including the load information of the AP near the terminal1101and the RSSI of the signal received from the AP near the terminal1101to the centralized AP1105, and transmit the beacon frame and the RSSI to the central controller1107through the centralized AP1105. Operation1130shows a method in which the terminal1101obtains information about a currently accessed AP1103according to the distributed UA scheme. In operation1130, the terminal1101may receive a beacon frame including load information of the currently accessed AP1103from the currently accessed AP1103. InFIG.11, it is illustrated that the terminal1101obtains information about an AP near the terminal1101and transmits the information to the centralized AP1105by using the centralized UA scheme in operations1110through1120, and then receives a beacon frame including load information of the currently accessed AP1103from the currently accessed AP1103by using the distributed UA scheme in operation1130. However, the order of operations1110through1120and operation1130is not limited to that shown in the figure. For example, the terminal1101may first receive a beacon frame including load information of the currently accessed AP1103from the currently accessed AP1103by using the distributed UA scheme in operation1130, and then obtain information about an AP near the terminal1101, transmit the information to the centralized AP1105, and transmit the information to the central controller1107through the centralized AP1105by using the centralized UA scheme according to operations1110through1120. Operations1110through1120ofFIG.11may be within operation1140, and likewise, operation1130may be within operation1150. Operation1140shows a method in which the terminal1101determines a radio channel to be accessed, according to the distributed UA scheme. Operation1150shows a method in which the terminal1101determines a radio channel to be accessed, according to the centralized UA scheme.
Depending on the two methods, the radio channels to which access of the terminal1101is determined may be the same as or different from each other. InFIG.11, it is illustrated that a radio channel that the terminal1101is to access is first determined in operation1140and thus the terminal1101accesses the radio channel in operation1145, and then a radio channel to be accessed is determined in operation1150and the terminal1101newly accesses the radio channel in operation1160, such that the access to the radio channel in operation1145may be maintained or changed depending on a result of operation1150. However, the order of operations1140and1150is not limited to that shown in the figure. For example, a radio channel that the terminal1101is to access may first be determined in operation1150and thus the terminal1101accesses the radio channel in operation1160, and then a radio channel to be accessed is determined in operation1140and the terminal1101newly accesses the radio channel in operation1145, such that the access to the radio channel in operation1160may be maintained or changed depending on a result of operation1140. When the centralized UA scheme based on the central controller1107is performed too frequently, an effective connection time may be reduced, causing overhead. Thus, a centralized UA determination period PD may need to be set to a relatively long period (e.g., several tens of seconds to several minutes). As such, in a system in which a UA is determined in every period, the terminal1101may have difficulty in actively dealing with a change in a network condition (a change in a traffic volume or radio channel quality) occurring between a previous determination period and a next determination period. In the current embodiment of the disclosure, by using the distributed UA scheme together with the centralized UA scheme, this problem may be mitigated. By using the distributed UA scheme, the terminal1101may continuously recognize a load condition of a currently associated AP through beacon reception. When a load quantity of the currently associated AP increases over a certain level between determination periods of the centralized UA scheme, the terminal1101may access a nearby AP having a small load quantity and a good transmission efficiency. FIG.12illustrates an internal structure of a terminal according to an embodiment of the disclosure. The terminal described above with reference toFIGS.1through11may correspond to a terminal1200ofFIG.12. Referring toFIG.12, the terminal1200may include a communicator1220, a memory1230, and a processor1210. The communicator1220, the memory1230, and the processor1210of the terminal1200may operate according to the above-described communication method of the terminal1200. However, components of the terminal1200are not limited to the above-described example. For example, the terminal1200may include more or fewer components than those described above. The communicator1220, the memory1230, and the processor1210may be implemented in a single chip form. The communicator1220may transmit and receive a signal to and from an AP. To this end, the communicator1220may include a radio frequency (RF) transmitter and an RF receiver. However, this is merely an example of the communicator1220, components of which are not limited to the RF transmitter and the RF receiver. The communicator1220may be referred to as a transceiver.
The communicator1220may receive a signal through a radio channel and output the received signal to the processor1210, and may transmit a signal output from the processor1210through the radio channel. A program, data, and one or more instructions needed for an operation of the terminal1200may be stored in the memory1230. Control information or data included in a signal obtained by the terminal1200may also be stored in the memory1230. The memory1230may include a storage medium such as read only memory (ROM), random access memory (RAM), a hard disk, compact disc (CD)-ROM, digital versatile disc (DVD), etc., or a combination thereof. The memory1230may also include a plurality of memories. The processor1210may execute one or more instructions stored in the memory1230to control a series of processes such that the terminal1200may operate according to the above-described embodiments of the disclosure. According to an embodiment of the disclosure, the processor1210may receive a beacon frame including load information of a currently accessed AP, determine whether to maintain an access to a currently accessed radio channel based on the load information of the currently accessed AP, receive a beacon frame including load information of an AP near the terminal1200by scanning the AP near the terminal1200when determining not to maintain the access to the currently accessed radio channel, determine a radio channel to be accessed based on the load information of the AP near the terminal1200and inter-channel interference information, and access the determined radio channel. According to an embodiment of the disclosure, the load information of the channel of the AP, received by the processor1210, may include BSS load information including the number of accessed terminals for each frequency band of the AP, a channel load quantity for each frequency band of the AP, and an average spectral efficiency of the AP. According to an embodiment of the disclosure, the processor1210may determine an AP to be accessed, a frequency band to be used, and a channel number to be used. According to an embodiment of the disclosure, the processor1210may determine whether to maintain the access to the currently accessed radio channel based on the number of terminals accessing a currently accessed band of the currently accessed AP, when a channel load quantity of the currently accessed radio channel is greater than or equal to a threshold value. According to an embodiment of the disclosure, the inter-channel interference information used by the processor1210to determine the radio channel to be accessed may include inter-channel interference information of the AP near the terminal1200, and the processor1210may measure an RSSI of a signal received from the AP near the terminal1200and determine whether inter-channel interference of the AP near the terminal1200occurs, based on the RSSI of the signal received from the AP near the terminal1200and an adjacent channel power leakage ratio.
According to an embodiment of the disclosure, the processor1210may obtain a sum of channel load quantities of channels interfering with the radio channel with respect to the terminal's access to the radio channel, based on whether the inter-channel interference of the AP near the terminal1200occurs; obtain a weight value based on the sum of the channel load quantities and a channel load quantity of the radio channel with respect to the terminal's access to the radio channel; obtain an expected value of an average spectral efficiency of the radio channel with respect to the terminal's access to the radio channel, based on an average spectral efficiency of the radio channel; obtain a resource efficiency with respect to the terminal's access to the radio channel, based on the weight value and the expected value of the average spectral efficiency; and determine the radio channel to be accessed, based on the resource efficiency. According to an embodiment of the disclosure, the processor1210may receive a beacon frame including the load information of the AP near the terminal1200by periodically scanning the AP near the terminal1200, measure the RSSI of the signal received from the AP near the terminal1200, transmit the beacon frame including the load information of the AP near the terminal1200and the RSSI of the signal received from the AP near the terminal1200to a centralized AP that is controlled by a central controller, receive an access determination result from the centralized AP, and access the radio channel to which the access of the terminal1200is determined, based on the access determination result. According to an embodiment of the disclosure, the access determination result received by the processor1210from the centralized AP may include a result obtained in such a way that the central controller determines a total resource efficiency of a network based on the beacon frame of the AP near the terminal1200, a transmission speed and traffic information for each of the terminals accessing the centralized AP, and an RSSI of an AP near the centralized AP, and determines a radio channel to be accessed for each terminal based on the determined total resource efficiency of the network, in which the beacon frame, the transmission speed and traffic information for each terminal, and the RSSI are received by the central controller from the centralized AP. According to an embodiment of the disclosure, the access determination result received by the processor1210from the centralized AP may include the total resource efficiency of the network determined in such a way that the central controller determines whether inter-channel interference occurs between APs near the centralized AP based on the RSSI of the AP near the centralized AP and an adjacent channel power leakage ratio and determines the total resource efficiency of the network based on whether inter-channel interference occurs between the APs near the centralized AP. Herein, a description has been made using only some operations of the above-described embodiments of the disclosure as examples in relation to an operation of the processor1210, but the processor1210may control all processes such that the terminal1200may operate according to all or a part of the embodiments of the disclosure described above. The methods according to the embodiments of the disclosure described in the claims or the specification of the disclosure may be executed or implemented by hardware, software, or a combination thereof.
When the methods are implemented by software, a computer-readable storage medium having stored therein one or more programs (software modules), or a computer program product including a recording medium having stored therein a program, may be provided. The one or more programs stored in the computer-readable storage medium or computer program product may be configured for execution by one or more processors in an electronic device. The one or more programs include instructions that cause the electronic device to execute the methods according to the embodiments of the disclosure described in the claims or the specification of the disclosure. These programs (software modules and software) may be stored in random access memories (RAMs), nonvolatile memories including flash memories, read only memories (ROMs), electrically erasable programmable ROMs (EEPROMs), magnetic disc storage devices, compact disc-ROMs (CD-ROMs), digital versatile discs (DVDs), other types of optical storage devices, or magnetic cassettes. The programs may be stored in a memory configured by a combination of some or all of such storage devices. Also, each of the memories may be provided in plurality. The programs may also be stored in an attachable storage device of the electronic device accessible via a communication network such as the Internet, an intranet, a LAN, a WLAN, or a storage area network (SAN), or via a communication network configured by combining these networks. The storage device may access a device performing an embodiment of the disclosure through an external port. Furthermore, a separate storage device in a communication network may access a device performing an embodiment of the disclosure. In the detailed embodiments of the disclosure, components included in the disclosure have been expressed as singular or plural according to the provided detailed embodiment of the disclosure. However, singular or plural expressions have been selected merely for convenience of description, and the disclosure is not limited to singular or plural components; components expressed as plural may be configured as a single component, and a component expressed as singular may also be configured as plural components. Meanwhile, the embodiments of the disclosure disclosed in the present specification and drawings have been provided to easily describe the disclosure and to help understanding of the disclosure, and are not intended to limit the scope of the disclosure. In other words, it is apparent to one of ordinary skill in the art that various changes may be made thereto without departing from the scope of the disclosure. In addition, the embodiments of the disclosure may be used in combination when necessary. For example, a base station and a terminal may be managed by combining an embodiment of the disclosure with some parts of another embodiment of the disclosure. In addition, other modifications based on the technical spirit of the above-described embodiments of the disclosure may also be carried out in other communication systems.
63,694
11943660
DETAILED DESCRIPTION The following description, along with the accompanying drawings, sets forth certain specific details in order to provide a thorough understanding of various disclosed embodiments. However, one skilled in the relevant art will recognize that the disclosed embodiments may be practiced in various combinations, without one or more of these specific details, or with other methods, components, devices, materials, etc. In other instances, well-known structures or components that are associated with the environment of the present disclosure, including but not limited to the communication systems and networks, have not been shown or described in order to avoid unnecessarily obscuring descriptions of the embodiments. Additionally, the various embodiments may be methods, systems, media, or devices. Accordingly, the various embodiments may be entirely hardware embodiments, entirely software embodiments, or embodiments combining software and hardware aspects. Throughout the specification, claims, and drawings, the following terms take the meaning explicitly associated herein, unless the context clearly dictates otherwise. The term “herein” refers to the specification, claims, and drawings associated with the current application. The phrases “in one embodiment,” “in another embodiment,” “in various embodiments,” “in some embodiments,” “in other embodiments,” and other variations thereof refer to one or more features, structures, functions, limitations, or characteristics of the present disclosure, and are not limited to the same or different embodiments unless the context clearly dictates otherwise. As used herein, the term “or” is an inclusive “or” operator, and is equivalent to the phrases “A or B, or both” or “A or B or C, or any combination thereof,” and lists with additional elements are similarly treated. The term “based on” is not exclusive and allows for being based on additional features, functions, aspects, or limitations not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meaning of “a,” “an,” and “the” includes singular and plural references. FIG.1illustrates a context diagram of an environment100in which UPF load balancing may be implemented in accordance with embodiments described herein. UEs110, such as cellular telephones or other Internet-of-Things (IoT) devices, use 5G wireless cellular telecommunication technology defined by standards set by 3GPP and the International Telecommunications Union (ITU) to get data connectivity between applications on the UE and Data Networks (DNs) such as the Internet or private corporate networks. Almost all applications running on the UE, including voice, require such data connectivity. A Protocol Data Unit (PDU) session provides connectivity between applications on a UE and a DN. The UE receives services through a PDU session, which is a logical connection between the UE and the DN. A DN is identified by a Data Network Name (DNN). PDU sessions can provide different types of transport services corresponding to the nature of the PDU(s) carried over the PDU session. In various embodiments, a PDU session may be associated with a single DNN and with a single slice identified by Single-Network Slice Selection Assistance Information (S-NSSAI). The UPF is one of the network functions (NFs) of the 5GC.
The UPF, comprising UPF1104and UPF2106in the present example, is responsible for packet routing and forwarding, packet inspection, quality of service (QoS) handling, and interconnecting external PDU sessions with the DN. Although two UPFs (UPF1104and UPF2106) are shown in the present example, additional UPFs may be utilized in various other embodiments. Each UPF (e.g., UPF1104and UPF2106) is a virtual network function responsible for PDU sessions between the UEs110and the DN by anchoring the PDU sessions of various UEs110on the individual UPF. The SMF102is also one of the NFs of the 5GC and is primarily responsible for interacting with the decoupled data plane: creating, updating, and removing PDU sessions; selecting particular UPFs on which to anchor PDU sessions when new UEs appear on the network; and managing session context with the UPF. Many of such functions are described in the 3GPP TS 23.501 specification. A network function, such as the SMF102and the UPF (e.g., UPF1104and UPF2106), can be implemented either as a network element on dedicated hardware, as a software instance running on dedicated hardware, or as a virtualized function instantiated on an appropriate platform, e.g., a cloud infrastructure. In the present example, UPF1104is implemented at data center 1 and UPF2106is implemented at data center 2, which is geographically separated from data center 1. The SMF102sends messages to the UPF (comprising UPF1104and UPF2106in the present example) over the N4 reference interface using the Packet Forwarding Control Protocol (PFCP). The PFCP may employ UDP port 8805 and is defined to support Control and User Plane Separation (CUPS). With other control plane functions decoupled from the user plane, and together with the 5G Core Access and Mobility Management Function (AMF) (not shown), the SMF102performs the role of a Dynamic Host Control Protocol (DHCP) server and an Internet Protocol (IP) Address Management (IPAM) system. Together with the UPF, the SMF102maintains a record of PDU session state by means of a 24-bit PDU Session ID. The SMF102sets configuration parameters in the UPF that define traffic steering parameters and ensure the appropriate routing of packets while guaranteeing the delivery of incoming packets through a Downlink (DL) data notification. In the present example embodiment, each of UPF1104and UPF2106may have the ability to establish network connectivity and anchor PDU sessions of any UE on the network via various cellular telecommunication base stations and associated antennas108. To maximize network performance, PDU sessions are by default anchored on the UPF at the data center that is closest geographically to the UE, as illustrated by most of the dashed lines inFIG.1for UEs110(and an operator defines a service area for each UPF). However, each UPF (e.g., UPF1104and UPF2106) has a maximum network capacity to handle PDU sessions anchored thereon and the associated network traffic. Thus, PDU sessions anchored on a particular UPF (e.g., UPF1104) and their associated network traffic may cause the UPF to approach its maximum capacity or become overloaded. UPF load balancing may then cause the PDU session of the next new UE appearing on the network (e.g., UE112) to be anchored on a UPF at a data center (e.g., UPF2106) that is further away than the data center that is closest geographically to the UE.
In the present example, UPF1104is at or near its maximum capacity with the PDU sessions of all the other UEs currently anchored on it, so UE112has a PDU session anchored on UPF2106(as shown by dashed line114) instead of UPF1104, even though data center 2 of UPF2106is further away from the UE112than data center 1 of UPF1104. In various embodiments described herein, there are different particular scenarios and rules in which UPF load balancing may cause the PDU session of the next new UE appearing on the network to be anchored on a UPF at a data center that is further away than the data center that is closest geographically to the UE, which improves overall UPF load balancing and network performance. FIG.2illustrates a logical flow diagram showing one embodiment of a process200for load balancing based on current UPF load and thresholds that depend on UPF capacity in accordance with embodiments described herein. At202, the SMF102maintains load thresholds for each user plane function (UPF) of a plurality of UPFs in a cellular telecommunication network. The plurality of UPFs serve as anchor points between UE in the cellular telecommunication network and a DN. Each UPF of the plurality of UPFs is a virtual network function responsible for interconnecting PDU sessions between the UE and the DN by anchoring the PDU sessions on individual UPFs. The load thresholds for each UPF depend on a respective capacity of each UPF to have PDU sessions anchored thereon. In the present example embodiment, the amount of load put on a UPF by a UE appearing in the cellular telecommunication network is assumed to be identical for all UEs appearing in the cellular telecommunication network. At204, the SMF102receives a request to anchor, on a UPF, a PDU session of a new UE appearing on the cellular telecommunication network. At206, the SMF102selects a UPF of the plurality of UPFs on which to anchor the PDU session based on a location of the new UE and determined load-regions for each UPF of the plurality of UPFs defined by the load thresholds. At208, the SMF102anchors the PDU session of the new UE to the selected UPF. FIG.3illustrates a logical flow diagram showing one embodiment of a process300for selecting the UPF based on generated weights, which is useful in the process200ofFIG.2in accordance with embodiments described herein. At302, the SMF102generates weights for selecting the UPF based on the determined load-regions. At304, the SMF102selects the UPF based on the generated weights. FIG.4illustrates a logical flow diagram showing one embodiment of a process400for selecting the UPF based on the determined load-regions for a plurality of UPFs and the weights generated based on the determined load-regions, which is useful in the process300ofFIG.3in accordance with embodiments described herein. At402, the SMF102generates multiple load-regions. Each load-region corresponds to a different range of current load of a UPF defined by one or more of lower and upper threshold percentages of UPF load capacity. At404, the SMF102receives the request to anchor the PDU session. At406, the SMF102determines a load-region from the multiple load-regions that a current load of the UPF falls within. At408, the SMF102determines whether there are additional UPFs in the plurality of UPFs on which the PDU session may be anchored.
If it is determined there are additional UPFs on which the PDU session may be anchored, then the process400proceeds back to406to determine a load-region from the multiple load-regions that a current load of the additional UPF falls within. If it is determined there are no additional UPFs on which the PDU session may be anchored, then the process400proceeds to410. At410, the SMF102selects a UPF of the plurality of UPFs based on the determined load-regions for the plurality of UPFs and the weights generated based on the determined load-regions. In an example embodiment, the SMF102generates a lowest load-region indicating a current UPF load less than a first threshold percentage of UPF capacity; generates one or more intermediate non-overlapping load-regions, each defined by respective lower and upper threshold percentages of UPF capacity and indicating a current load greater than the lowest load-region; and generates a highest load-region indicating a current UPF load greater than a second threshold percentage of UPF capacity and greater than the intermediate non-overlapping load-regions. In the present example embodiment, each UPF is associated with a different respective geographic UPF service area. Selecting the UPF based on the generated weights and the determined load-regions for the UPFs may include determining that a particular UPF has a respective geographic area (i.e., is at a data center in that area) within which the location of the new UE falls (i.e., is at the data center that is closest geographically to the UE compared to the data centers of other UPFs). The particular UPF may then be selected in response to the determined load-region of the particular UPF being a load-region indicating that its current load is below a threshold capacity. In some embodiments, the SMF102determines that a particular UPF has a respective geographic area within which the location of the new UE falls. The SMF102then determines whether the determined load-region of the particular UPF indicates a higher current load than the determined load-region of another UPF. In response to this determination, the SMF102weights the selection of a UPF. In particular, the UPF selection by the SMF for load balancing is based on weighted scheduling of load (UEs) on the UPFs. This weighted scheduling may be credit/token-based (e.g., weighted round robin) or probability-based (e.g., using statistical scheduling algorithms based on probability). For example, the SMF102may weight the selection of a UPF such that a probability that the particular UPF is selected is lower than a probability of selection of the other UPF. In some embodiments, the selection of the UPF is weighted by using credit/token-based weighted scheduling or probability-based weighted scheduling such that the frequency of selection of the particular UPF decreases as a difference between a higher current load of the particular UPF and a lower current load of at least one UPF of the plurality of UPFs increases, as indicated by the load-regions determined for each UPF of the plurality of UPFs.
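The load-region scheme of processes200through400can be illustrated with the following Python sketch. It is a hypothetical rendering, not the disclosure's implementation: the region boundaries, the per-region selection weights, and the rule that a lightly loaded nearest UPF is used directly are all assumptions chosen for illustration.

import random
from dataclasses import dataclass

# Hypothetical load-regions as (lower, upper) percentages of UPF capacity,
# paired with a selection weight that shrinks as the region's load grows.
REGIONS = [(0, 50, 1.0), (50, 75, 0.5), (75, 90, 0.2), (90, 100, 0.0)]

@dataclass
class Upf:
    name: str
    load_pct: float      # current load as % of capacity
    distance_km: float   # distance from the new UE to the UPF's data center

def region_weight(load_pct: float) -> float:
    for lo, hi, weight in REGIONS:
        if lo <= load_pct < hi or (hi == 100 and load_pct >= lo):
            return weight
    return 0.0

def select_upf(upfs: list) -> Upf:
    # Prefer the geographically nearest UPF while its load-region still carries
    # full weight; otherwise fall back to probability-based weighted selection.
    nearest = min(upfs, key=lambda u: u.distance_km)
    if region_weight(nearest.load_pct) >= 1.0:
        return nearest
    weights = [region_weight(u.load_pct) for u in upfs]
    if sum(weights) == 0:
        return nearest  # everything saturated; degrade gracefully
    return random.choices(upfs, weights=weights, k=1)[0]

upfs = [Upf("UPF1", 92.0, 10.0), Upf("UPF2", 40.0, 80.0)]
print(select_upf(upfs).name)  # UPF2, despite being farther: UPF1 is in the highest region

A credit/token-based variant (e.g., weighted round robin) would consume the same per-region weights as token budgets instead of sampling probabilities.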
FIG.5illustrates a logical flow diagram showing one embodiment of a process500for UPF load balancing using predicted throughput of a new UE on the network based on network data analytics in accordance with embodiments described herein. At502, the SMF102maintains load thresholds for each user plane function (UPF) of a plurality of UPFs in a cellular telecommunication network. The plurality of UPFs serve as anchor points between UE in the cellular telecommunication network and a DN. Each UPF of the plurality of UPFs is a virtual network function responsible for interconnecting PDU sessions between the UE and the DN by anchoring the PDU sessions on individual UPFs. The load thresholds for each UPF depend on a respective capacity of each UPF to have PDU sessions anchored thereon. However, in the present example embodiment, the amount of load put on a UPF by a UE appearing in the cellular telecommunication network is not assumed to be identical for all UEs appearing in the cellular telecommunication network. At504, the SMF102receives a request to anchor, on a UPF, a PDU session of a new UE appearing on the cellular telecommunication network. At506, the SMF102selects a UPF of the plurality of UPFs on which to anchor the PDU session based on a location of the new UE, determined load-regions for each UPF of the plurality of UPFs defined by the load thresholds, and predicted throughput of the new UE based on network data analytics. In an example embodiment, the network data analytics is provided via a network data analytics function (NWDAF) of a 5G mobile network of which the cellular telecommunication network is comprised. At508, the SMF102anchors the PDU session of the new UE to the selected UPF. FIG.6illustrates a logical flow diagram showing one embodiment of a process600for selecting the UPF, which is useful in the process500ofFIG.5in accordance with embodiments described herein. At602, in selecting the UPF, the SMF102uses the network data analytics to predict throughput of the new UE, and thus the load that the new UE appearing on the cellular telecommunication network will put on a UPF, based on the predicted throughput. At604, the SMF102selects a UPF of the plurality of UPFs on which to anchor the PDU session based on a location of the new UE, load-regions for each UPF of the plurality of UPFs defined by the load thresholds, and the predicted load of the new UE on a UPF. FIG.7illustrates a logical flow diagram showing one embodiment of a process700for selecting the UPF using artificial intelligence (AI) or machine learning (ML) algorithms to perform predictive analysis of throughput, which is useful in the process600ofFIG.6in accordance with embodiments described herein. At702, in using the network data analytics to predict throughput of the new UE and load on a UPF, the SMF102uses artificial intelligence (AI) or machine learning (ML) algorithms to perform predictive analysis of the throughput of the new UE, and the resulting load on a UPF, based on historical activity of the new UE appearing on the cellular telecommunication network. At704, the SMF102implements a weighted scheduling of load on UPFs to achieve UPF load balancing based on the predicted throughput of the new UE and the resulting predicted load on a UPF. This weighted scheduling can be implemented using credit/token-based scheduling algorithms or statistical (probability-based) scheduling algorithms. The SMF102may weight selection of a particular UPF of the plurality of UPFs based on the predicted throughput of the new UE and the resulting predicted load on a UPF of the new UE by using credit/token-based weighted scheduling or probability-based weighted scheduling.
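As a concrete illustration of such prediction-weighted scheduling, consider the Python sketch below. It is hypothetical throughout: the moving-average predictor is a toy stand-in for the AI/ML predictive analysis described above, and the headroom-based weights, names, and numbers are assumptions, not the disclosure's method.

from dataclasses import dataclass

@dataclass
class Upf:
    name: str
    load_pct: float        # current load as % of capacity
    capacity_mbps: float   # total capacity of the UPF

def predict_load_mbps(history_mbps: list) -> float:
    # Toy predictor: average of the UE's recent observed throughputs,
    # standing in for the NWDAF-backed AI/ML prediction.
    return sum(history_mbps) / len(history_mbps) if history_mbps else 1.0

def selection_weights(upfs: list, predicted_mbps: float) -> list:
    weights = []
    for u in upfs:
        # Load the UPF would carry if this UE were anchored on it.
        projected = u.load_pct + 100.0 * predicted_mbps / u.capacity_mbps
        weights.append(max(0.0, 100.0 - projected))  # remaining headroom
    return weights

upfs = [Upf("UPF1", 70.0, 1000.0), Upf("UPF2", 50.0, 500.0)]
ue_history = [40.0, 55.0, 60.0]  # Mbps seen in this UE's past sessions (made up)
w = selection_weights(upfs, predict_load_mbps(ue_history))
print(dict(zip((u.name for u in upfs), w)))

Feeding these weights into a probability-based or weighted round-robin scheduler steers a heavy UE away from UPFs whose projected load would leave little headroom, rather than treating all UEs as equal load.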
For example, in one embodiment, the SMF102changes a probability of whether a particular UPF of the plurality of UPFs will be selected based on the predicted throughput of the new UE and resulting predicted load on a UPF of the new UE. In an example embodiment, the SMF102weights selection of the particular UPF to not overload other UPFs of the plurality of UPFs as compared to the particular UPF in response to a current load of the particular UPF being currently in a particular load-region as compared to other UPFs of the plurality of UPFs and the predicted load being at a particular level. For example, the SMF102may increase a probability that a particular UPF will be selected in response to a current load of the particular UPF being currently in a particular load-region as compared to other UPFs and the predicted load being at a particular level. In an example embodiment, the SMF102may weight selection of the particular UPF in the plurality of UPFs to not overload the particular UPF beyond a threshold amount compared to other UPFs in the plurality of UPFs based on the predicted load by using credit/token-based weighted scheduling or probability-based weighted scheduling based on the predicted load. For example, the SMF102may decrease a probability that the particular UPF will be overloaded beyond a threshold amount compared to other UPFs based on the predicted load by changing the probability of whether the particular UPF will be selected based on the predicted load. Such load balancing may instead be achieved using credit/token based scheduling (e.g., weighted round robin). FIG.8illustrates a logical flow diagram showing one embodiment of a process800for UPF load balancing based on special considerations for low latency traffic in accordance with embodiments described herein. At802, the SMF102maintains load thresholds for each user plane function (UPF) of a plurality of UPFs in a cellular telecommunication network. The plurality of UPFs serve as anchor points between UE in the cellular telecommunication network and a DN. Each UPF of the plurality of UPFs is a virtual network function responsible for interconnecting PDU sessions between the UE and the DN by anchoring the PDU sessions on individual UPFs. The load thresholds for each UPF depend on a respective capacity of each UPF to have PDU sessions anchored thereon. In the present example embodiment, an amount of load put on a UPF by a UE appearing in the cellular telecommunication network is assumed to be identical for all UEs appearing in the cellular telecommunication network. In some embodiments, the load thresholds may be reduced by a percentage amount of capacity dedicated for low-latency network traffic. For example, a percentage amount of capacity dedicated for low-latency network traffic may be 10% and thus the load thresholds for non-low latency traffic (such as the thresholds maintained in the process200ofFIG.2) may be reduced by 10%. At804, the SMF102receives a request to anchor on a UPF a PDU session of a new UE newly appearing on the cellular telecommunication network. At806, the SMF102selects a UPF of the plurality of UPFs on which to anchor the PDU session based on whether traffic of the PDU session is identified as low latency and a location of the new UE. At808, the SMF102anchors the PDU session of the new UE to the selected UPF.
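To make the branching at806concrete, the following is a minimal Python sketch of one way the low-latency selection described above could look. The Upf class, the reduced 20% threshold, and the fallback to the least-loaded UPF are illustrative assumptions, not details fixed by the disclosure.

```python
# A minimal sketch of the low-latency selection branch of process 800.
# All names (Upf, select_upf, LOW_LATENCY_RESERVED) are illustrative
# assumptions, not identifiers from the disclosure.
from dataclasses import dataclass

LOW_LATENCY_RESERVED = 0.10  # fraction of capacity dedicated to low-latency traffic

@dataclass
class Upf:
    name: str
    location: tuple          # simplistic (x, y) stand-in for a data-center location
    load: float              # current load as a fraction of total capacity

def distance(a: tuple, b: tuple) -> float:
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def select_upf(upfs: list, ue_location: tuple, is_low_latency: bool) -> Upf:
    """Pick the closest UPF for low-latency PDU sessions; otherwise use the
    load thresholds reduced by the reserved low-latency capacity."""
    if is_low_latency:
        return min(upfs, key=lambda u: distance(u.location, ue_location))
    # Non-low-latency traffic: prefer a UPF still below the reduced "Low"
    # threshold (30% - 10% = 20%), else fall back to the least-loaded UPF
    # as a stand-in for the weighted scheduling described above.
    low_threshold = 0.30 - LOW_LATENCY_RESERVED
    candidates = [u for u in upfs if u.load < low_threshold]
    pool = candidates or upfs
    return min(pool, key=lambda u: (u.load, distance(u.location, ue_location)))

upfs = [Upf("UPF1", (0, 0), 0.25), Upf("UPF2", (5, 5), 0.05)]
print(select_upf(upfs, (1, 1), is_low_latency=True).name)   # closest wins: UPF1
print(select_upf(upfs, (1, 1), is_low_latency=False).name)  # reduced threshold: UPF2
```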
FIG.9illustrates a logical flow diagram showing one embodiment of a process900for selecting the UPF based on the location of the new UE and load-regions for each UPF defined by load thresholds for non-low latency traffic, which is useful in the process800ofFIG.8in accordance with embodiments described herein. At902, the SMF102receives a request to anchor on a UPF a PDU session of a new UE newly appearing on the cellular telecommunication network. At904, the SMF102determines whether the traffic of the PDU session is identified as low latency. In the present example embodiment, the selection of the UPF is based on dedicating a percentage of capacity of each UPF of the plurality of UPFs to low-latency traffic of PDU sessions. Latency may be measured as the time elapsed from when the client sends the first byte of a request to the moment the server receives it, or it may be measured by the total journey time for a packet to travel to the server and then back to the client. In the present example, on the downlink, the latency is measured from the time that the UPF receives the packet until the time that the packet is delivered to the UE. On the uplink, the latency is measured from the time that the UE sends the packet until the time that the packet is received by the UPF. For example, low latency network traffic may support operations that require near real-time access to rapidly changing data. Low latency is desirable in a wide range of use cases. In a general sense, lower latency is nearly always an improvement over slower packet transport. Low latency is desirable in online gaming as it contributes to a more realistic gaming environment. The term low latency is often used to describe specific business use cases, in particular high-frequency trading in capital markets. If traffic of the PDU session is identified as low latency, then the process900proceeds to906. If traffic of the PDU session is not identified as low latency, then the process900proceeds to908. At906, the SMF102selects a UPF having a closest associated location to a current location of the new UE. At908, the SMF102selects a UPF based on the location of the new UE and load-regions for each UPF of the plurality of UPFs defined by the load thresholds for non-low latency traffic. FIG.10illustrates a logical flow diagram showing one embodiment of a process1000for selecting the UPF based on whether the network traffic is identified as low latency, which is useful in the process900ofFIG.9in accordance with embodiments described herein. At1002, the SMF102receives a request to anchor on a UPF a PDU session of a new UE newly appearing on the cellular telecommunication network. At1004, the SMF102determines whether the traffic of the PDU session is identified as low latency. If traffic of the PDU session is identified as low latency, then the process1000proceeds to1006. If traffic of the PDU session is not identified as low latency, then the process1000proceeds to1008. At1006, the SMF102selects a UPF having a closest associated location to a current location of the new UE. At1008, the SMF102selects a UPF based on weights for selecting the UPF generated based on the load-regions, wherein each load-region corresponds to a different range of current load of the UPF defined by one or more of lower and upper threshold percentages of load capacity of the UPF.
In some embodiments, if each UPF of the plurality of UPFs is identified as currently having a current load falling within a low load-region defined by a current load below a particular threshold, then the SMF102selects a UPF having a closest associated location to a current location of the new UE. FIG.11illustrates a logical flow diagram showing one embodiment of a process1100for UPF load balancing supporting multiple slices, maintaining several load-thresholds for each UPF and each slice depending on the UPF and network slice capacity in accordance with embodiments described herein. At1102, the SMF102maintains load thresholds for each network slice of a plurality of network slices. In the present example embodiment, each network slice of each respective set of network slices comprises a set of virtual network resources and network traffic flows associated with the network slice and represents an independent virtualized instance of a network defined by allocation of a subset of available network resources in the cellular telecommunication network. The “user plane” of each network slice of the plurality of network slices is supported by a respective user plane function (UPF) of a plurality of UPFs in a cellular telecommunication network. The plurality of UPFs serve as anchor points between user equipment (UE) in the cellular telecommunication network and a data network (DN). Each UPF of the plurality of UPFs is a virtual network function responsible for interconnecting packet data unit (PDU) sessions between the user equipment (UE) and the DN by anchoring the PDU sessions on individual UPFs. The load thresholds for each network slice depend on a respective capacity of each network slice and total capacity of each UPF supporting each network slice to have PDU sessions anchored thereon. An amount of load put on a network slice by a UE appearing in the cellular telecommunication network is assumed to be identical for all UEs appearing in the cellular telecommunication network. At1104, the SMF102receives a request to anchor on a UPF a PDU session of a new UE newly appearing on the cellular telecommunication network. At1106, the SMF102selects a network slice of the plurality of network slices on which to anchor the PDU session based on a location of the new UE and determined load-regions for each network slice of the plurality of network slices defined by the load thresholds. At1108, the SMF102anchors the PDU session of the new UE to the selected network slice and the respective UPF supporting the selected network slice. FIG.12illustrates a logical flow diagram showing one embodiment of a process1200for selecting the network slice based on generated weights, which is useful in the process1100ofFIG.11in accordance with embodiments described herein. At1202, the SMF102generates weights for selecting the network slice based on the determined load-regions. At1204, the SMF102selects the network slice based on the generated weights. FIG.13illustrates a logical flow diagram showing one embodiment of a process1300for selecting the network slice based on determined load-regions for each slice and weights generated based on the determined load regions, which is useful in the process1200ofFIG.12in accordance with embodiments described herein. At1302, the SMF102generates multiple load-regions. Each load-region corresponds to a different range of current load of a network slice defined by one or more of lower and upper threshold percentages of network slice load capacity.
For example, in one embodiment, the SMF102may generate a lowest load-region indicating a current network slice load less than a first threshold percentage of network slice capacity; generate one or more intermediate non-overlapping load-regions each defined by respective lower and upper threshold percentages of network slice capacity and indicating a current load greater than the lowest load-region; and generate a highest load-region indicating a current network slice load greater than a second threshold percentage of network slice capacity and greater than the intermediate load-region(s). At1304, the SMF102receives the request to anchor the PDU session. At1306, the SMF102, in response to receiving the request to anchor the PDU session, determines a load region from the multiple load-regions that a current load of a network slice falls within. At1308, the SMF102determines whether there are any additional network slices on which the PDU session may be anchored. If it is determined there are additional network slices on which the PDU session may be anchored, then process1300proceeds back to1306to determine a load region from the multiple load-regions that a current load of the additional network slice falls within. If it is determined there are not additional network slices on which the PDU session may be anchored, then the process1300proceeds to1310. At1310, the SMF102selects a network slice of the plurality of network slices based on the determined load-regions for the plurality of network slices and the weights generated based on the determined load regions. In some embodiments, each network slice of the plurality of network slices is associated with a respective geographic area of the respective UPF supporting the network slice (i.e., the geographic area of the data center of the UPF). Selecting a network slice based on the generated weights and the determined load-regions for the plurality of network slices may include determining a particular network slice of the plurality of network slices is associated with a respective geographic area within which the location of the new UE falls. The SMF102may then select the particular network slice in response to the determined load-region of the particular network slice being a load-region indicating a current load of the particular network slice is below a threshold capacity. In some embodiments, selecting a network slice based on the generated weights and the determined load-regions for the plurality of network slices may include determining whether the particular network slice has a determined load region indicating a current load of the particular network slice is in a different load region indicating a higher current load of the particular network slice than a current load of another network slice. In response to this, the SMF102may weight the selection of a network slice of the plurality of network slices such that the particular network slice is not overloaded compared to the other network slice by using credit/token-based weighted scheduling or probability-based weighted scheduling. In some embodiments, the selection of a network slice includes determining a particular network slice of the plurality of network slices is associated with a respective geographic area within which the location of the new UE falls.
The selection of the network slice is then weighted by using credit/token-based weighted scheduling or probability-based weighted scheduling such that the frequency of selection of the particular network slice decreases as a difference between a higher current load of the particular network slice and a lower current load of at least one network slice of the plurality of network slices increases, as indicated by the load regions determined for each network slice of the plurality of network slices. FIG.14illustrates a logical flow diagram showing one embodiment of a process1400for UPF load balancing using predicted CPU utilization and/or predicted memory utilization of new UE on the network based on network data analytics in accordance with embodiments described herein. At1402, the SMF102maintains load thresholds for each UPF of a plurality of UPFs in a cellular telecommunication network. The plurality of UPFs serve as anchor points between UE in the cellular telecommunication network and a DN. Each UPF of the plurality of UPFs is a virtual network function responsible for interconnecting PDU sessions between the UE and the DN by anchoring the PDU sessions on individual UPFs. The load thresholds for each UPF depend on a respective capacity of each UPF to have PDU sessions anchored thereon. At1404, the SMF102receives a request to anchor on a UPF a PDU session of a new UE newly appearing on the cellular telecommunication network. At1406, the SMF102selects a UPF of the plurality of UPFs on which to anchor the PDU session based on a location of the new UE, load-regions for each UPF of the plurality of UPFs defined by the load thresholds and one or more of predicted CPU utilization and predicted memory utilization of the new UE based on network data analytics. In some embodiments, the network data analytics may be provided via an NWDAF of a 5G mobile network of which the cellular telecommunication network is comprised. FIG.15illustrates a logical flow diagram showing one embodiment of a process1500for selecting the UPF, which is useful in the process1400ofFIG.14in accordance with embodiments described herein. At1502, the SMF102uses the network data analytics to predict one or more of CPU utilization and memory utilization of the UE and load on a UPF of the new UE appearing on the cellular telecommunication network based on one or more of the predicted CPU utilization and predicted memory utilization. At1504, the SMF102selects a UPF of the plurality of UPFs on which to anchor the PDU session based on a location of the new UE, load-regions for each UPF of the plurality of UPFs defined by the load thresholds and the predicted load of the new UE on a UPF. FIG.16illustrates a logical flow diagram showing one embodiment of a process1600for selecting the UPF using AI or machine learning (ML) algorithms to perform predictive analysis of CPU utilization and/or predicted memory utilization, which is useful in the process1500ofFIG.15in accordance with embodiments described herein. At1602, the SMF102uses artificial intelligence (AI) or machine learning (ML) algorithms to perform predictive analysis of one or more of CPU utilization and memory utilization of the new UE and resulting load on a UPF of the new UE appearing on the cellular telecommunication network based on historical activity of the new UE appearing on the cellular telecommunication network.
At1604, the SMF102implements a weighted scheduling of load on UPFs to achieve UPF load-balancing based on one or more of the predicted CPU utilization and predicted memory utilization of the new UE and resulting predicted load on a UPF of the new UE. This weighted scheduling can be implemented using credit/token based scheduling algorithms or statistical based scheduling algorithms (using probability). For example, in one embodiment, the SMF102may increase a probability that a particular UPF will be selected in response to a current load of the particular UPF being currently in a particular load-region as compared to other UPFs of the plurality of UPFs and the resulting predicted load resulting from one or more of the predicted CPU utilization and predicted memory utilization being at a particular level. The SMF102may decrease a probability that the particular UPF in the plurality of UPFs will be overloaded beyond a threshold amount compared to other UPFs in the plurality of UPFs based on the resulting predicted load by changing the probability of whether the particular UPF will be selected based on the resulting predicted load resulting from one or more of the predicted CPU utilization and predicted memory utilization. Such load balancing may instead be achieved using credit/token based scheduling (e.g., weighted round robin). FIG.17illustrates a chart1700showing an example of possible load-regions that a current load of a UPF may be determined to fall within that may be used in the processes ofFIGS.2-7and15-17in accordance with embodiments described herein. The chart1700indicates the load region1702and the range of percentage of total load capacity1704for each load-region1702defined by respective load thresholds indicated in the chart1700. In the present example, the “Low” load region indicates a current UPF load of less than 30% of the total load capacity. The “Medium” load region indicates a current UPF load of greater than or equal to 30% and less than 55% of the total load capacity. The “High” load region indicates a current UPF load of greater than or equal to 55% and less than 75% of the total load capacity. The “Very High” load region indicates a current UPF load of greater than or equal to 75% of the total load capacity. There may be additional or different load regions in various other embodiments. The chart1700may be stored or represented as a data structure in computer memory and may be maintained and/or accessible by the SMF102. FIG.18illustrates a timeline1800showing an example of possible UPF load balancing that may occur according to the processes ofFIGS.2-4with two UPFs determined to fall within particular different load-regions of those shown in the chart1700ofFIG.17at particular times in accordance with embodiments described herein. For each example point in time t=1, t=2, t=3 and t=4, shown in the timeline1800is the determined UPF1 load1802, the determined UPF2 load1806and the corresponding UPF1 load balancing action1804and UPF2 load balancing action1808. Referring now also toFIG.1, in the present example, at time t=1, the SMF102determines that the current load of UPF1104is 20% of the total load capacity for UPF1104and thus determines the load region of UPF1104is “low” based on the load thresholds in the chart1700ofFIG.17. At time t=1, the SMF102also determines that the current load of UPF2106is 2% of the total load capacity for UPF2106and thus determines the load region of UPF2106is also “low” based on the load thresholds in the chart1700ofFIG.17.
Based on the determined load regions for UPF1104and UPF2106, any new UE in the region of UPF1104(e.g., the region of UPF1104being a respective geographic area associated with data center 1) are anchored on UPF1104and any new UE in the region of UPF2106(e.g., the region of UPF2106being a respective geographic area associated with data center 2) are anchored on UPF2106. At time t=2, the SMF102determines that the current load of UPF1104is 45% of the total load capacity for UPF1104and thus determines the load region of UPF1104is “medium” based on the load thresholds in the chart1700ofFIG.17. At time t=2, the SMF102also determines that the current load of UPF2106is 10% of the total load capacity for UPF2106and thus determines the load region of UPF2106is “low” based on the load thresholds in the chart1700ofFIG.17. Based on the determined load regions for UPF1104and UPF2106, the weights determined by the SMF102for UPF1104load balancing are 1 & 2. In particular, for every three new UEs in the region of UPF1104(data center 1), one is anchored on UPF1104and the other two are anchored on UPF2106. In various embodiments, the load balancing can be done using credit/token based scheduling algorithms (using weighted round robin) or done randomly (using statistical based scheduling algorithms using probability): with probability 1/3 a new UE is anchored on UPF1104and with probability 2/3 the new UE is anchored on UPF2106. Any new UE in the region of UPF2106are anchored on UPF2106(data center 2). At time t=3, the SMF102determines that the current load of UPF1104is 65% of the total load capacity for UPF1104and thus determines the load region of UPF1104is “high” based on the load thresholds in the chart1700ofFIG.17. At time t=3, the SMF102also determines that the current load of UPF2106is 19% of the total load capacity for UPF2106and thus determines the load region of UPF2106is “low” based on the load thresholds in the chart1700ofFIG.17. Based on the determined load regions for UPF1104and UPF2106, the weights determined by the SMF102for UPF1104load balancing are 1 & 4. In particular, for every five new UEs in the region of UPF1 (data center 1), one is anchored on UPF1 and the other four are anchored on UPF2. In various embodiments, the load balancing can be done by the SMF102using weighted round robin or done randomly (with probability 1/5 a new UE is anchored on UPF1 and with probability 4/5 the new UE is anchored on UPF2). Any new UE in the region of UPF2106(e.g., the region of UPF2106being a respective geographic area associated with data center 2) are anchored on UPF2106(data center 2). At time t=4, the SMF102determines that the current load of UPF1104is 75% of the total load capacity for UPF1104and thus determines the load region of UPF1104is “very high” based on the load thresholds in the chart1700ofFIG.17. At time t=4, the SMF102also determines that the current load of UPF2106is 25% of the total load capacity for UPF2106and thus determines the load region of UPF2106is “low” based on the load thresholds in the chart1700ofFIG.17. Based on the determined load regions for UPF1104and UPF2106, any new UE in the region of UPF1104(data center 1) are anchored on UPF2106and any new UE in the region of UPF2106(data center 2) are also anchored on UPF2106. In some embodiments, the scheduling method disclosed herein is based on a probabilistic model. Alternatively, credit/token based scheduling, such as weighted round robin scheduling, can be used in various other embodiments.
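As a sketch of how the weighting in this timeline could be realized, the following Python fragment maps pairs of load-regions (per the example thresholds of chart1700) to the example weights above and draws the anchor probabilistically. The WEIGHTS table and function names are assumptions built from the example, not a mapping defined by the disclosure, and a credit/token-based weighted round robin could replace the random draw.

```python
# A minimal sketch of probability-based weighted UPF selection for a new UE
# appearing in the local UPF's region, using the illustrative thresholds of
# chart 1700 and the example weights from the FIG.18 timeline.
import random

def load_region(load_pct: float) -> str:
    if load_pct < 30:
        return "low"
    if load_pct < 55:
        return "medium"
    if load_pct < 75:
        return "high"
    return "very_high"

# Hypothetical (local_UPF, remote_UPF) weights, indexed by
# (local_region, remote_region); only the combinations exercised by the
# timeline example are listed.
WEIGHTS = {
    ("low", "low"): (1, 0),        # t=1: keep traffic local
    ("medium", "low"): (1, 2),     # t=2: 1/3 local, 2/3 remote
    ("high", "low"): (1, 4),       # t=3: 1/5 local, 4/5 remote
    ("very_high", "low"): (0, 1),  # t=4: all new UEs go remote
}

def anchor_for_new_ue(local_load: float, remote_load: float) -> str:
    w_local, w_remote = WEIGHTS[(load_region(local_load), load_region(remote_load))]
    # Probability-based variant; a credit/token weighted round robin would
    # instead consume w_local + w_remote credits per scheduling cycle.
    return random.choices(["local", "remote"], weights=[w_local, w_remote])[0]

# t=3 in the timeline: UPF1 at 65% ("high"), UPF2 at 19% ("low") -> 1/5 vs 4/5.
picks = [anchor_for_new_ue(65, 19) for _ in range(10_000)]
print(picks.count("local") / len(picks))  # ~0.2
```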
FIG.19illustrates a timeline1900showing an example of possible UPF load balancing that may occur according to the processes ofFIGS.5-7with two UPFs determined to fall within particular different load-regions of those shown inFIG.17at particular times in accordance with embodiments described herein. For each example point in time t=1, t=2, t=3 and t=4, shown in the timeline1900is the determined UPF1 load1902, the determined UPF2 load1906, the corresponding UPF1 load balancing action1904and UPF2 load balancing action1908, as well as example new UE predicted loads1910. For example, the predicted load may be based on data from the network data analytics function (NWDAF). The predicted load may be measured in units based on throughput (e.g., packets per second, bytes per second, and/or bits per second), CPU utilization (e.g., CPU clock cycles, clock ticks, CPU time, CPU time per second, process time, percentage of CPU capacity utilization) and/or memory utilization (megabytes of memory and/or percentage of memory capacity utilization), or any combination thereof. Referring now also toFIG.1, in the present example, at time t=1, the SMF102determines that the current load of UPF1104is 20% of the total load capacity for UPF1104and thus determines the load region of UPF1104is “low” based on the load thresholds in the chart1700ofFIG.17. At time t=1, the SMF102also determines that the current load of UPF2106is 2% of the total load capacity for UPF2106and thus determines the load region of UPF2106is also “low” based on the load thresholds in the chart1700ofFIG.17. In this case, based on the determined load regions for UPF1104and UPF2106, regardless of any predicted load of the new UE, any new UE in the region of UPF1104(data center 1) are anchored on UPF1104and any new UE in the region of UPF2106(data center 2) are anchored on UPF2106. At time t=2, the SMF102determines that the current load of UPF1104is 45% of the total load capacity for UPF1104and thus determines the load region of UPF1104is “medium” based on the load thresholds in the chart1700ofFIG.17. At time t=2, the SMF102also determines that the current load of UPF2106is 10% of the total load capacity for UPF2106, and thus determines the load region of UPF2106is “low” based on the load thresholds in the chart1700ofFIG.17. Based on the determined load regions for UPF1104and UPF2106, the weights determined by the SMF102for UPF1104load balancing are 1 & 2. For example, based on the detected predicted load of the new UE appearing on the network, if the SMF102determines that the predicted load of the new UE is 2 units of load, the SMF102will perform load balancing such that there is a 43% probability the SMF102anchors the UE to UPF1104and a 57% probability the SMF102anchors the UE to UPF2106. If the SMF102determines that the predicted load of the new UE is 1 unit of load, the SMF102will perform load balancing such that there is a 25% probability the SMF102anchors the UE to UPF1104and a 75% probability the SMF102anchors the UE to UPF2106. Any new UE in the region of UPF2106(data center 2) are anchored on UPF2106. At time t=3, the SMF102determines that the current load of UPF1104is 65% of the total load capacity for UPF1104and thus determines the load region of UPF1104is “high” based on the load thresholds in the chart1700ofFIG.17. At time t=3, the SMF102also determines that the current load of UPF2106is 19% of the total load capacity for UPF2106and thus determines the load region of UPF2106is “low” based on the load thresholds in the chart1700ofFIG.17.
Based on the determined load regions for UPF1104and UPF2106, the weights determined by the SMF102for UPF1104load balancing are 1 & 4. For example, based on the detected predicted load of the new UE appearing on the network, if the SMF102determines that the predicted load of the new UE is 2 units of load, the SMF102will perform load balancing such that there is a 10% probability the SMF102anchors the UE to UPF1104and a 90% probability the SMF102anchors the UE to UPF2106. If the SMF102determines that the predicted load of the new UE is 1 unit of load, the SMF102will perform load balancing such that there is a 28.6% probability the SMF102anchors the UE to UPF1104and a 71.4% probability the SMF102anchors the UE to UPF2106. Any new UE in the region of UPF2106(data center 2) are anchored on UPF2106. At time t=4, the SMF102determines that the current load of UPF1104is 75% of the total load capacity for UPF1104and thus determines the load region of UPF1104is “very high” based on the load thresholds in the chart1700ofFIG.17. At time t=4, the SMF102also determines that the current load of UPF2106is 25% of the total load capacity for UPF2106and thus determines the load region of UPF2106is “low” based on the load thresholds in the chart1700ofFIG.17. In this case, based on the determined load regions for UPF1104and UPF2106, regardless of any predicted load of the new UE, any new UE in the region of UPF1104(data center 1) are anchored on UPF2106and any new UE in the region of UPF2106(data center 2) are also anchored on UPF2106. FIG.20illustrates a chart2000showing an example of possible load-regions for non-low latency traffic that a current load of a UPF may be determined to fall within that may be used in the processes ofFIGS.8-10in accordance with embodiments described herein. The chart2000indicates the load region2002and the range of percentage of total load capacity2004for each load-region2002defined by respective load thresholds indicated in the chart2000. The load-regions shown in the chart2000are used for UPF load balancing based on special considerations for low latency traffic. The load thresholds are reduced compared to those of chart1700inFIG.17by a percentage amount of capacity dedicated for low-latency network traffic. In the present example, the percentage amount of capacity dedicated for low-latency network traffic is 10% and thus the load thresholds for non-low latency traffic are reduced by 10%. In particular, the “Low” load region indicates a current UPF load of less than 20% of the total load capacity. The “Medium” load region indicates a current UPF load of greater than or equal to 20% and less than 45% of the total load capacity. The “High” load region indicates a current UPF load of greater than or equal to 45% and less than 65% of the total load capacity. The “Very High” load region indicates a current UPF load of greater than or equal to 65% of the total load capacity. There may be additional or different load regions in various other embodiments. The chart2000may be stored or represented as a data structure in computer memory and may be maintained and/or accessible by the SMF102.
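Since the charts may be stored as data structures accessible to the SMF102, one plausible representation is sketched below in Python, with the chart2000thresholds derived from the chart1700thresholds by subtracting the 10% of capacity reserved for low-latency traffic. The dictionary layout is an assumption for illustration, not a format given by the disclosure.

```python
# A sketch of how charts 1700 and 2000 might be held in memory, with the
# non-low-latency chart derived by reducing each threshold by the reserved
# low-latency capacity (10% in the example).
CHART_1700 = {            # region: (lower_pct_inclusive, upper_pct_exclusive)
    "Low": (0, 30),
    "Medium": (30, 55),
    "High": (55, 75),
    "Very High": (75, 100),
}

def reduce_for_low_latency(chart: dict, reserved_pct: int) -> dict:
    """Shift each threshold down by the reserved percentage, clamping at 0
    and leaving the upper bound of the last region at full capacity."""
    regions = list(chart.items())
    out = {}
    for i, (name, (lo, hi)) in enumerate(regions):
        new_lo = max(0, lo - reserved_pct)
        new_hi = hi if i == len(regions) - 1 else max(0, hi - reserved_pct)
        out[name] = (new_lo, new_hi)
    return out

CHART_2000 = reduce_for_low_latency(CHART_1700, 10)
print(CHART_2000)
# {'Low': (0, 20), 'Medium': (20, 45), 'High': (45, 65), 'Very High': (65, 100)}
```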
In one embodiment, in performing UPF load balancing based on special considerations for low latency traffic, the SMF102may perform such load balancing in the manner described with respect to the processes ofFIGS.8-10and the example herein described with respect toFIG.18, but uses the non-low latency load thresholds in chart2000instead, which are reduced to dedicate 10% of the total load capacity for low-latency network traffic. Also, a chart and/or corresponding data structure similar to that ofFIG.17using load thresholds shown therein may be generated or used by the SMF102to perform UPF load balancing supporting multiple slices as described herein with respect to the processes ofFIGS.11-13. In such an embodiment, the chart and/or corresponding data structure also or instead indicates such load-thresholds for each slice depending on the UPF and network slice capacity. For example, these load thresholds may be percentage based, such as 30%, 55%, and 75% for slice 1 and 30%, 45%, and 65% for slice 2. Also, depending on the UPFs' slice-based load-regions, the SMF102creates slice-based weights for a weighted load balancing between slices of different UPFs. The SMF102maintains the load-regions of the UPFs and, depending on the UPFs' slice loads, creates the slice weights for the weighted load balancing of the UEs/PDU sessions among the UPFs. In various embodiments, there may be additional UPFs and additional corresponding data centers. Additional load regions may be determined for each UPF, and the weighting and load balancing may be performed, in one example embodiment as described herein, to adjust the probabilities that a new UE appearing in the network is anchored on a particular UPF based on determined load regions for each UPF and the location of the UE. Such load balancing may instead be achieved using credit/token based scheduling (e.g., weighted round robin). FIG.21shows a system diagram that describes various implementations of computing systems2100for implementing embodiments described herein. The SMF102and the UPF, such as UPF1104and UPF2106, can be implemented either as network elements on dedicated hardware, as a software instance running on dedicated hardware, or as a virtualized function instantiated on an appropriate platform, e.g., a cloud infrastructure. In some embodiments, such NFs may be completely software-based and designed as cloud-native, meaning that they are agnostic to the underlying cloud infrastructure, allowing higher deployment agility and flexibility. However,FIG.21illustrates an example of underlying hardware on which the SMF102and the UPF, such as UPF1104and UPF2106, may be implemented. For example, SMF102may be implemented using SMF computer system(s)2101. In some embodiments, one or more special-purpose computing systems may be used to implement SMF102. Accordingly, various embodiments described herein may be implemented in software, hardware, firmware, or in some combination thereof. SMF computer system(s)2101may include memory2102, one or more central processing units (CPUs)2114, I/O interfaces2118, other computer-readable media2120, and network connections2122. Memory2102may include one or more various types of non-volatile and/or volatile storage technologies.
Examples of memory2102may include, but are not limited to, flash memory, hard disk drives, optical drives, solid-state drives, various types of random access memory (RAM), various types of read-only memory (ROM), other computer-readable storage media (also referred to as processor-readable storage media), or the like, or any combination thereof. Memory2102may be utilized to store information, including computer-readable instructions that are utilized by CPU2114to perform actions, including embodiments described herein. Memory2102may have stored thereon SMF module2104. The SMF module2104is configured to implement and/or perform some or all of the functions of the SMF102described herein. Memory2102may also store other programs and data2110, which may include load thresholds, load-regions, databases, load-balancing rules, AI or ML programs to perform predictive analysis of UPF load based on predicted UE throughput, CPU utilization and/or memory utilization using data from the NWDAF, user interfaces, operating systems, other network management functions, other NFs, etc. Network connections2122are configured to communicate with other computing devices to facilitate the load balancing described herein. In various embodiments, the network connections2122include transmitters and receivers (not illustrated) to send and receive data as described herein, such as sending data to and receiving data from UPFs, UEs and other NFs to send and receive instructions, commands and data to implement the processes described herein. I/O interfaces2118may include video interfaces, other data input or output interfaces, or the like. Other computer-readable media2120may include other types of stationary or removable computer-readable media, such as removable flash drives, external hard drives, or the like. In some embodiments, one or more special-purpose computing systems may be used to implement a UPF, such as UPF1104and UPF2106. Accordingly, various embodiments described herein may be implemented in software, hardware, firmware, or in some combination thereof. UPF computer system(s)2112is an example of a computer system that may implement a UPF, such as UPF1104and UPF2106. For example, computer system(s)2112may be present in data center 1 to implement UPF1104or present in data center 2 to implement UPF2106. Computer system(s)2112may include memory2130, one or more central processing units (CPUs)2144, I/O interfaces2148, other computer-readable media2150, and network connections2152. Memory2130may include one or more various types of non-volatile and/or volatile storage technologies similar to memory2102. Memory2130may be utilized to store information, including computer-readable instructions that are utilized by CPU2144to perform actions, including embodiments described herein. Memory2130may have stored thereon UPF module2124. The UPF module2124receives the messages or instructions from the SMF module2104to perform the load balancing operations as described herein. Memory2130may also store other programs and data2138, which may include load thresholds, load-regions, databases, load-balancing rules, AI or ML programs to perform predictive analysis of UPF load based on predicted UE throughput, CPU utilization and/or memory utilization using data from the NWDAF, user interfaces, operating systems, other network management functions, other NFs, etc. Network connections2152are configured to communicate with other computing devices, such as SMF computer system(s)2101.
In various embodiments, the network connections2152include transmitters and receivers (not illustrated) to send and receive data as described herein. I/O interfaces2148may include one or more other data input or output interfaces. Other computer-readable media2150may include other types of stationary or removable computer-readable media, such as removable flash drives, external hard drives, or the like. The various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
55,611
11943661
DETAILED DESCRIPTION In some wireless communications systems, a user equipment (UE) may perform communications through a base station, such as a voice or video call, where the UE and base station may perform the call in accordance with a bitrate. In some cases, conditions may change that may impact the communications. In such cases, a base station may determine to change the bitrate being used for the call to improve or mitigate degradation of the call. Upon determining a change of bitrate, the base station may transmit a bitrate recommendation to the UE. Accordingly, the UE may receive the bitrate recommendation, and the UE may change the bitrate being used for the call in accordance with the recommendation. In some cases, the UE, the base station, or both may start a timer based in part on receiving the bitrate recommendation. Upon expiry of the timer, the UE may transmit a request (e.g., query) to the base station requesting whether the UE is to continue using the last recommended bitrate, or the UE may request to use a different bitrate. In some cases, the UE may be configured to only transmit the bitrate request based on expiry of the timer, which may only be prompted by the bitrate recommendation from the base station. Accordingly, the UE may be limited by the base station sending the bitrate recommendation. As a result, the bitrate recommendation may not be based on any request from the UE and the UE may be limited in opportunity to transmit the bitrate request. To improve reliability and flexibility of a bitrate adjustment procedure, a UE may be configured to transmit a bitrate request regardless of the timer and bitrate recommendation. Therefore, a UE may calculate one or more parameters associated with a channel between the UE and the base station during a call (e.g., the channel associated with performing the call). Based on the one or more parameters (e.g., jitter, reference signal received quality (RSRQ), signal-to-interference-plus-noise ratio (SINR), delay) the UE may determine whether the bitrate being used for the call should be adjusted (e.g., to improve or maintain quality of the call). If the UE determines that the bitrate should be adjusted, then the UE may transmit a bitrate request to the base station (prior to ever receiving a bitrate recommendation from the base station, and accordingly, prior to the start or the expiry of the timer). The base station may consider the requested bitrate. In some cases, the base station may accept the requested bitrate or decline the requested bitrate. The base station may transmit a bitrate recommendation to the UE, where the recommended bitrate may be the same as or different from the requested bitrate. Accordingly, the base station may determine the recommended bitrate based on information from the UE to improve the reliability of the call. Particular aspects of the subject matter described herein may be implemented to realize one or more advantages. The described techniques may support improvements in updating a bitrate being used for communications between a UE and base station by improving flexibility of a bitrate adjustment procedure. Improving the flexibility of the bitrate adjustment procedure may improve reliability and decrease latency associated with the communications, among other advantages. As such, supported techniques may include improved network operations and, in some examples, may promote network efficiencies, among other benefits.
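As a rough illustration of the UE-side behavior just described, the following Python sketch samples channel parameters and decides whether to issue a bitrate request without waiting for a recommendation or a timer expiry. Every threshold, name, and bitrate value here is an illustrative assumption, not a value taken from the disclosure or from any specification.

```python
# A minimal sketch of a UE deciding, from its own channel measurements,
# whether to request a bitrate change during a call.
from dataclasses import dataclass

@dataclass
class ChannelParams:
    jitter_ms: float
    rsrq_db: float
    sinr_db: float
    delay_ms: float

def desired_bitrate_kbps(p: ChannelParams, current_kbps: int) -> int:
    """Decide whether the call bitrate should move down, up, or stay.
    Thresholds are invented for illustration."""
    degraded = p.jitter_ms > 30 or p.sinr_db < 5 or p.rsrq_db < -15 or p.delay_ms > 150
    healthy = p.jitter_ms < 10 and p.sinr_db > 15 and p.delay_ms < 50
    if degraded:
        return max(current_kbps // 2, 64)   # back off to protect call quality
    if healthy:
        return min(current_kbps * 2, 2048)  # headroom to improve quality
    return current_kbps

current = 512
params = ChannelParams(jitter_ms=42.0, rsrq_db=-17.0, sinr_db=3.5, delay_ms=180.0)
wanted = desired_bitrate_kbps(params, current)
if wanted != current:
    # In the scheme above, this would be carried to the base station in a
    # bitrate request message rather than printed.
    print(f"UE requests bitrate change: {current} -> {wanted} kbps")
```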
Aspects of the disclosure are initially described in the context of wireless communications systems. Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to techniques for configuring a bitrate request. FIG.1illustrates an example of a wireless communications system100that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The wireless communications system100may include one or more base stations105, one or more UEs115, and a core network130. In some examples, the wireless communications system100may be a Long Term Evolution (LTE) network, an LTE-Advanced (LTE-A) network, an LTE-A Pro network, or a New Radio (NR) network. In some examples, the wireless communications system100may support enhanced broadband communications, ultra-reliable (e.g., mission critical) communications, low latency communications, communications with low-cost and low-complexity devices, or any combination thereof. The base stations105may be dispersed throughout a geographic area to form the wireless communications system100and may be devices in different forms or having different capabilities. The base stations105and the UEs115may wirelessly communicate via one or more communication links125. Each base station105may provide a coverage area110over which the UEs115and the base station105may establish one or more communication links125. The coverage area110may be an example of a geographic area over which a base station105and a UE115may support the communication of signals according to one or more radio access technologies. The UEs115may be dispersed throughout a coverage area110of the wireless communications system100, and each UE115may be stationary, or mobile, or both at different times. The UEs115may be devices in different forms or having different capabilities. Some example UEs115are illustrated inFIG.1. The UEs115described herein may be able to communicate with various types of devices, such as other UEs115, the base stations105, or network equipment (e.g., core network nodes, relay devices, integrated access and backhaul (IAB) nodes, or other network equipment), as shown inFIG.1. The base stations105may communicate with the core network130, or with one another, or both. For example, the base stations105may interface with the core network130through one or more backhaul links120(e.g., via an S1, N2, N3, or other interface). The base stations105may communicate with one another over the backhaul links120(e.g., via an X2, Xn, or other interface) either directly (e.g., directly between base stations105), or indirectly (e.g., via core network130), or both. In some examples, the backhaul links120may be or include one or more wireless links. One or more of the base stations105described herein may include or may be referred to by a person having ordinary skill in the art as a base transceiver station, a radio base station, an access point, a radio transceiver, a NodeB, an eNodeB (eNB), a next-generation NodeB or a giga-NodeB (either of which may be referred to as a gNB), a Home NodeB, a Home eNodeB, or other suitable terminology. A UE115may include or may be referred to as a mobile device, a wireless device, a remote device, a handheld device, or a subscriber device, or some other suitable terminology, where the “device” may also be referred to as a unit, a station, a terminal, or a client, among other examples.
A UE115may also include or may be referred to as a personal electronic device such as a cellular phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a personal computer. In some examples, a UE115may include or be referred to as a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, or a machine type communications (MTC) device, among other examples, which may be implemented in various objects such as appliances, vehicles, or meters, among other examples. The UEs115described herein may be able to communicate with various types of devices, such as other UEs115that may sometimes act as relays as well as the base stations105and the network equipment including macro eNBs or gNBs, small cell eNBs or gNBs, or relay base stations, among other examples, as shown inFIG.1. The UEs115and the base stations105may wirelessly communicate with one another via one or more communication links125over one or more carriers. The term “carrier” may refer to a set of radio frequency spectrum resources having a defined physical layer structure for supporting the communication links125. For example, a carrier used for a communication link125may include a portion of a radio frequency spectrum band (e.g., a bandwidth part (BWP)) that is operated according to one or more physical layer channels for a given radio access technology (e.g., LTE, LTE-A, LTE-A Pro, NR). Each physical layer channel may carry acquisition signaling (e.g., synchronization signals, system information), control signaling that coordinates operation for the carrier, user data, or other signaling. The wireless communications system100may support communication with a UE115using carrier aggregation or multi-carrier operation. A UE115may be configured with multiple downlink component carriers and one or more uplink component carriers according to a carrier aggregation configuration. Carrier aggregation may be used with both frequency division duplexing (FDD) and time division duplexing (TDD) component carriers. Signal waveforms transmitted over a carrier may be made up of multiple subcarriers (e.g., using multi-carrier modulation (MCM) techniques such as orthogonal frequency division multiplexing (OFDM) or discrete Fourier transform spread OFDM (DFT-S-OFDM)). In a system employing MCM techniques, a resource element may consist of one symbol period (e.g., a duration of one modulation symbol) and one subcarrier, where the symbol period and subcarrier spacing are inversely related. The number of bits carried by each resource element may depend on the modulation scheme (e.g., the order of the modulation scheme, the coding rate of the modulation scheme, or both). Thus, the more resource elements that a UE115receives and the higher the order of the modulation scheme, the higher the data rate may be for the UE115. A wireless communications resource may refer to a combination of a radio frequency spectrum resource, a time resource, and a spatial resource (e.g., spatial layers or beams), and the use of multiple spatial layers may further increase the data rate or data integrity for communications with a UE115. The time intervals for the base stations105or the UEs115may be expressed in multiples of a basic time unit which may, for example, refer to a sampling period of Ts = 1/(Δfmax·Nf) seconds, where Δfmax may represent the maximum supported subcarrier spacing, and Nf may represent the maximum supported discrete Fourier transform (DFT) size.
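As a worked example of the sampling-period formula above, the snippet below evaluates Ts = 1/(Δfmax·Nf) for two commonly cited parameter sets (Δfmax = 480 kHz with Nf = 4096, and Δfmax = 15 kHz with Nf = 2048); treating these as the values intended here is an assumption for illustration.

```python
# Worked example of the basic time unit Ts = 1/(Δfmax · Nf).
def basic_time_unit(delta_f_max_hz: float, n_f: int) -> float:
    return 1.0 / (delta_f_max_hz * n_f)

# NR-style numerology: Δfmax = 480 kHz, Nf = 4096
print(f"{basic_time_unit(480e3, 4096) * 1e9:.3f} ns")  # ~0.509 ns
# LTE-style numerology: Δfmax = 15 kHz, Nf = 2048
print(f"{basic_time_unit(15e3, 2048) * 1e9:.2f} ns")   # ~32.55 ns
```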
Time intervals of a communications resource may be organized according to radio frames each having a specified duration (e.g., 10 milliseconds (ms)). Each radio frame may be identified by a system frame number (SFN) (e.g., ranging from 0 to 1023). Each frame may include multiple consecutively numbered subframes or slots, and each subframe or slot may have the same duration. In some examples, a frame may be divided (e.g., in the time domain) into subframes, and each subframe may be further divided into a number of slots. Alternatively, each frame may include a variable number of slots, and the number of slots may depend on subcarrier spacing. Each slot may include a number of symbol periods (e.g., depending on the length of the cyclic prefix prepended to each symbol period). In some wireless communications systems100, a slot may further be divided into multiple mini-slots containing one or more symbols. Excluding the cyclic prefix, each symbol period may contain one or more (e.g., Nf) sampling periods. The duration of a symbol period may depend on the subcarrier spacing or frequency band of operation. A subframe, a slot, a mini-slot, or a symbol may be the smallest scheduling unit (e.g., in the time domain) of the wireless communications system100and may be referred to as a transmission time interval (TTI). In some examples, the TTI duration (e.g., the number of symbol periods in a TTI) may be variable. Additionally or alternatively, the smallest scheduling unit of the wireless communications system100may be dynamically selected (e.g., in bursts of shortened TTIs (sTTIs)). Physical channels may be multiplexed on a carrier according to various techniques. A physical control channel and a physical data channel may be multiplexed on a downlink carrier, for example, using one or more of time division multiplexing (TDM) techniques, frequency division multiplexing (FDM) techniques, or hybrid TDM-FDM techniques. A control region (e.g., a control resource set (CORESET)) for a physical control channel may be defined by a number of symbol periods and may extend across the system bandwidth or a subset of the system bandwidth of the carrier. One or more control regions (e.g., CORESETs) may be configured for a set of the UEs115. For example, one or more of the UEs115may monitor or search control regions for control information according to one or more search space sets, and each search space set may include one or multiple control channel candidates in one or more aggregation levels arranged in a cascaded manner. An aggregation level for a control channel candidate may refer to a number of control channel resources (e.g., control channel elements (CCEs)) associated with encoded information for a control information format having a given payload size. Search space sets may include common search space sets configured for sending control information to multiple UEs115and UE-specific search space sets for sending control information to a specific UE115. In some examples, a base station105may be movable and therefore provide communication coverage for a moving geographic coverage area110. In some examples, different geographic coverage areas110associated with different technologies may overlap, but the different geographic coverage areas110may be supported by the same base station105. In other examples, the overlapping geographic coverage areas110associated with different technologies may be supported by different base stations105. 
The wireless communications system100may include, for example, a heterogeneous network in which different types of the base stations105provide coverage for various geographic coverage areas110using the same or different radio access technologies. The wireless communications system100may be configured to support ultra-reliable communications or low-latency communications, or various combinations thereof. For example, the wireless communications system100may be configured to support ultra-reliable low-latency communications (URLLC) or mission critical communications. The UEs115may be designed to support ultra-reliable, low-latency, or critical functions (e.g., mission critical functions). Ultra-reliable communications may include private communication or group communication and may be supported by one or more mission critical services such as mission critical push-to-talk (MCPTT), mission critical video (MCVideo), or mission critical data (MCData). Support for mission critical functions may include prioritization of services, and mission critical services may be used for public safety or general commercial applications. The terms ultra-reliable, low-latency, mission critical, and ultra-reliable low-latency may be used interchangeably herein. In some examples, a UE115may also be able to communicate directly with other UEs115over a device-to-device (D2D) communication link135(e.g., using a peer-to-peer (P2P) or D2D protocol). One or more UEs115utilizing D2D communications may be within the geographic coverage area110of a base station105. Other UEs115in such a group may be outside the geographic coverage area110of a base station105or be otherwise unable to receive transmissions from a base station105. In some examples, groups of the UEs115communicating via D2D communications may utilize a one-to-many (1:M) system in which each UE115transmits to every other UE115in the group. In some examples, a base station105facilitates the scheduling of resources for D2D communications. In other cases, D2D communications are carried out between the UEs115without the involvement of a base station105. The core network130may provide user authentication, access authorization, tracking, Internet Protocol (IP) connectivity, and other access, routing, or mobility functions. The core network130may be an evolved packet core (EPC) or 5G core (5GC), which may include at least one control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management function (AMF)) and at least one user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)). The control plane entity may manage non-access stratum (NAS) functions such as mobility, authentication, and bearer management for the UEs115served by the base stations105associated with the core network130. User IP packets may be transferred through the user plane entity, which may provide IP address allocation as well as other functions. The user plane entity may be connected to IP services150for one or more network operators. The IP services150may include access to the Internet, Intranet(s), an IP Multimedia Subsystem (IMS), or a Packet-Switched Streaming Service. Some of the network devices, such as a base station105, may include subcomponents such as an access network entity140, which may be an example of an access node controller (ANC). 
Each access network entity140may communicate with the UEs115through one or more other access network transmission entities145, which may be referred to as radio heads, smart radio heads, or transmission/reception points (TRPs). Each access network transmission entity145may include one or more antenna panels. In some configurations, various functions of each access network entity140or base station105may be distributed across various network devices (e.g., radio heads and ANCs) or consolidated into a single network device (e.g., a base station105). The wireless communications system100may operate using one or more frequency bands, typically in the range of 300 megahertz (MHz) to 300 gigahertz (GHz). Generally, the region from 300 MHz to 3 GHz is known as the ultra-high frequency (UHF) region or decimeter band because the wavelengths range from approximately one decimeter to one meter in length. The UHF waves may be blocked or redirected by buildings and environmental features, but the waves may penetrate structures sufficiently for a macro cell to provide service to the UEs115located indoors. The transmission of UHF waves may be associated with smaller antennas and shorter ranges (e.g., less than 100 kilometers) compared to transmission using the smaller frequencies and longer waves of the high frequency (HF) or very high frequency (VHF) portion of the spectrum below 300 MHz. The wireless communications system100may utilize both licensed and unlicensed radio frequency spectrum bands. For example, the wireless communications system100may employ License Assisted Access (LAA), LTE-Unlicensed (LTE-U) radio access technology, or NR technology in an unlicensed band such as the 5 GHz industrial, scientific, and medical (ISM) band. When operating in unlicensed radio frequency spectrum bands, devices such as the base stations105and the UEs115may employ carrier sensing for collision detection and avoidance. In some examples, operations in unlicensed bands may be based on a carrier aggregation configuration in conjunction with component carriers operating in a licensed band (e.g., LAA). Operations in unlicensed spectrum may include downlink transmissions, uplink transmissions, P2P transmissions, or D2D transmissions, among other examples. A base station105or a UE115may be equipped with multiple antennas, which may be used to employ techniques such as transmit diversity, receive diversity, multiple-input multiple-output (MIMO) communications, or beamforming. The antennas of a base station105or a UE115may be located within one or more antenna arrays or antenna panels, which may support MIMO operations or transmit or receive beamforming. For example, one or more base station antennas or antenna arrays may be co-located at an antenna assembly, such as an antenna tower. In some examples, antennas or antenna arrays associated with a base station105may be located in diverse geographic locations. A base station105may have an antenna array with a number of rows and columns of antenna ports that the base station105may use to support beamforming of communications with a UE115. Likewise, a UE115may have one or more antenna arrays that may support various MIMO or beamforming operations. Additionally or alternatively, an antenna panel may support radio frequency beamforming for a signal transmitted via an antenna port. 
Beamforming, which may also be referred to as spatial filtering, directional transmission, or directional reception, is a signal processing technique that may be used at a transmitting device or a receiving device (e.g., a base station105, a UE115) to shape or steer an antenna beam (e.g., a transmit beam, a receive beam) along a spatial path between the transmitting device and the receiving device. Beamforming may be achieved by combining the signals communicated via antenna elements of an antenna array such that some signals propagating at particular orientations with respect to an antenna array experience constructive interference while others experience destructive interference. The adjustment of signals communicated via the antenna elements may include a transmitting device or a receiving device applying amplitude offsets, phase offsets, or both to signals carried via the antenna elements associated with the device. The adjustments associated with each of the antenna elements may be defined by a beamforming weight set associated with a particular orientation (e.g., with respect to the antenna array of the transmitting device or receiving device, or with respect to some other orientation). A UE115may calculate one or more parameters (e.g., jitter, reference signal received quality (RSRQ), signal-to-interference-plus-noise ratio (SINR), delay, or a combination thereof) associated with a quality of communications between the UE115and a base station105. The UE115may determine a bitrate for performing the communications based on the one or more calculated parameters prior to receiving a bitrate recommendation from the base station105, and the UE115may transmit, prior to receiving the bitrate recommendation from the base station105, a request to perform the communications in accordance with the determined bitrate. The base station105may determine whether to accept the requested bitrate, and perform the communications with the UE115in accordance with the determination. FIG.2illustrates an example of a wireless communications system200that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The wireless communications system200may include base station105-aand UE115-a, which may be examples of a base station105and a UE115as described with reference toFIG.1. Base station105-amay serve a geographic coverage area110-a. In some cases, base station105-aand UE115-amay implement a bitrate adjustment procedure for communications between UE115-aand base station105-a. Additionally or alternatively, other wireless devices may implement the same or a similar procedure. A UE115, such as UE115-a, may perform communications via a base station105, such as base station105-a. In some cases, UE115-amay perform a voice call, a video call, or both (e.g., a multimedia telephony service (MMTel) call with voice and/or video), via base station105-a. For example, UE115-aand base station105-amay communicate in accordance with a call via communications link205-a(e.g., an uplink communications link and/or a downlink communications link). To perform the voice and/or video call, the base station105may recommend, to the UE115, a bitrate to use to perform the call, where the recommendation may be based on channel conditions (e.g., wireless channel conditions). For example, base station105-amay detect a radio link condition between UE115-aand base station105-aand may transmit a bitrate recommendation to UE115-a.
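By way of illustration only, the link-condition-based recommendation described above might be expressed as in the following minimal sketch. The sketch is not part of the described techniques; the SINR thresholds, the candidate bitrate values, and the function name are hypothetical assumptions chosen for illustration.

```python
# Hypothetical sketch: map an observed radio link condition to a
# recommended bitrate. Thresholds and bitrate values are illustrative.

# Candidate physical-layer bitrates in bits per second (illustrative).
CANDIDATE_BITRATES_BPS = [13_200, 24_400, 48_000, 96_000]

def recommend_bitrate(sinr_db: float) -> int:
    """Recommend a higher allowed bitrate when the link condition is
    above a threshold (good), and a lower one when it is degrading."""
    if sinr_db >= 20.0:
        return CANDIDATE_BITRATES_BPS[3]
    if sinr_db >= 10.0:
        return CANDIDATE_BITRATES_BPS[2]
    if sinr_db >= 0.0:
        return CANDIDATE_BITRATES_BPS[1]
    return CANDIDATE_BITRATES_BPS[0]
```

In an actual deployment the mapping would depend on the codecs and scheduler state in use; the tiered structure simply mirrors the higher-when-good, lower-when-degrading behavior described above.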
In some cases, the recommended bitrate may be directional such that the recommended bitrate is for uplink communications, downlink communications, or both. Accordingly, the base station105may recommend multiple different bitrates, such as one for use in the uplink and one for use in the downlink. In some cases, base station105-amay send the bitrate recommendation message to UE115-abased on UE115-asupporting reception of the bitrate recommendation message. The base station105may be configured to transmit a higher allowed bitrate to the UE115when the radio link condition between the UE115and base station105is above a threshold (e.g., is good) in a specific direction, and may be configured to transmit a lower allowed bitrate to the UE115when the radio link condition in a specific direction is degrading (e.g., getting worse). In some cases, the base station105may include the recommended bitrate in a message, such as a medium access control (MAC) control element (MAC-CE) message. In some cases, the UE115, the base station105, or both may be configured with a bitrate map that indicates an association of bitrates to index values. In some cases, the index value may be a logical channel identifier (LCID) value included in a MAC-CE (e.g., a MAC-CE sub-header). For example, the base station105may indicate the recommended bitrate as a set of bits in the LCID field of a MAC-CE. Accordingly, upon determining a recommended bitrate, the base station105may include an index value in the message that maps to the corresponding recommended bitrate. The recommended bitrate may be a physical layer bitrate including the payload (e.g., the actual media such as audio and/or video traffic payload) and overhead (e.g., RTP, UDP, IP, and modem higher layer PDCP, RLC, MAC overheads). When the UE115receives a recommended bitrate (e.g., via a radio access network (RAN) message), the UE115may save recommended bitrate information (e.g., save the uplink and/or downlink physical bitrate information as “RAN context”). For example, in the audio call scenario (e.g., audio codec in Voice over LTE (VoLTE) and/or Voice over NR (VoNR) voice call scenario), the UE115may use a selected mode and bitrate (e.g., Adaptive Multi-Rate Wideband (AMR-WB) 23.85 kbit/s, Enhanced Voice Services (EVS) 24.4 kbit/s, etc.). The UE115may then adapt the selected mode and bitrate (e.g., the audio codec mode and bitrate) according to the saved bitrate information (e.g., the RAN context) to improve the call quality as recommended by the base station105for better user experience. The UE115may perform a similar procedure for video calls (e.g., video streaming). In some implementations, a UE115may be capable of transmitting a bitrate request (e.g., query) to the base station105to increase or decrease the previously received recommended bitrate (e.g., in the uplink, downlink, or both). Similar to the recommended bitrate, in some cases, the UE115may include the bitrate request in a message, such as a MAC-CE message. In some cases, the UE115, the base station105, or both may be configured with a bitrate map that indicates an association of bitrates to index values. In some cases, the index value may be an LCID value included in a MAC-CE (e.g., a MAC-CE sub-header). For example, the UE115may indicate the requested bitrate as a set of bits in the LCID field of a MAC-CE. Accordingly, upon determining a requested bitrate, the UE115may include an index value in the message that maps to the corresponding requested bitrate.
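A minimal sketch of the index-to-bitrate mapping described above follows. The particular map contents, the direction encoding, and the dictionary-based message representation are assumptions for illustration; they are not a normative MAC-CE or LCID layout.

```python
# Illustrative bitrate map associating index values with bitrates.
UPLINK, DOWNLINK = 0, 1

# Hypothetical map: index value -> physical-layer bitrate (bit/s).
BITRATE_MAP = {0: 13_200, 1: 24_400, 2: 48_000, 3: 96_000}
INDEX_BY_BITRATE = {v: k for k, v in BITRATE_MAP.items()}

def encode_bitrate_message(bitrate_bps, direction):
    """Build a MAC-CE-like message carrying a bitrate index and direction."""
    return {"lcid_index": INDEX_BY_BITRATE[bitrate_bps],
            "direction": direction}

def decode_bitrate_message(message):
    """Recover the bitrate and direction from the message."""
    return BITRATE_MAP[message["lcid_index"]], message["direction"]

# Example: a recommendation (or request) of 24.4 kbit/s for the uplink.
msg = encode_bitrate_message(24_400, UPLINK)
assert decode_bitrate_message(msg) == (24_400, UPLINK)
```

Because both the recommendation and the request may use the same configured bitrate map, the same encode and decode sketch applies to messages in either direction.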
Accordingly, a base station105may determine a recommended bitrate for a UE115to use for communications between the UE115and base station105. The base station105may transmit a message (e.g., a MAC-CE) including an indication of a recommended bitrate (e.g., LCID, a direction field (indicating uplink or downlink), a bitrate index). In some cases, the MAC layer (e.g., MAC layer 2) of the UE115may receive the bitrate recommendation, and the MAC layer may relay the bitrate recommendation to a data services (DS) layer of the UE115(e.g., via a BitRate_MAC message including an evolved packet system (EPS) bearer ID, direction field, bitrate index, robust header compression (RoHC) engage/disengage information). The DS may be referred to as an intermediate layer of the UE115. The DS of the UE115may relay the bitrate recommendation to the IMS of the UE115(e.g., via a BitRate_DS message including a media ID, direction field, bitrate index, RoHC engage/disengage information), where the IMS processes the bitrate to be used for the call. In some cases, the bitrate may be relayed to an upper layer of the UE or to the IMS directly. Upon receiving the recommended bitrate, the UE115may be configured to start a timer (e.g., bitRateQueryProhibitTimer), where the timer may define a duration the UE115is to wait before transmitting a bitrate request. In some cases, the UE115may start the timer when the recommended bitrate is received by the IMS of the UE115. The UE115may be configured to refrain from transmitting a bitrate request to the base station105until the timer expires. Accordingly, if the UE115determines that the bitrate should be increased or decreased from the recommended bitrate, then the IMS of the UE115may transmit a bitrate request to the DS of the UE115(e.g., via a BitRateQ_IMS message including a media ID, direction field, and bitrate index). The DS may relay the bitrate request to the MAC layer of the UE115(e.g., via a BitRateQ_DS message including an EPS bearer ID, direction field, and bitrate index). Upon receiving the bitrate request, the MAC layer may determine whether to transmit the bitrate request to the base station105based on whether the timer is running. If the timer is still running, the MAC layer may refrain from transmitting the bitrate request to the base station105. If, however, the timer has expired, then the MAC layer may transmit the bitrate request to the base station105(e.g., via a MAC-CE message including an LCID, direction field, and bitrate index). The bitrate request may query whether the UE115is to continue using the last recommended bitrate, or the UE115may request to use a different bitrate. The bitrate request may request that the base station105increase or decrease the bitrate from the last recommended bitrate, and may include an indication of a direction (e.g., uplink, downlink, or both) for which the bitrate request applies. For example, the UE115may request that the bitrate increase or decrease in the uplink, the downlink, or both. However, the UE115may be configured to only transmit the bitrate request based on expiry of the timer, which may only be started by receipt of the bitrate recommendation from the base station105. Accordingly, the ability of the UE115to transmit the bitrate request may be gated by the base station105sending the bitrate recommendation. As such, the bitrate recommendation may not be based on any request from the UE115, and the UE115may have limited opportunities to transmit the bitrate request.
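To make the gating concrete, the following minimal sketch models the legacy, timer-gated path just described: bitRateQueryProhibitTimer is started upon receipt of a bitrate recommendation, and the MAC layer refrains from sending a bitrate request while the timer is running (or before it has ever been started). The class structure, method names, and use of a monotonic clock are assumptions for illustration.

```python
import time

class MacLayer:
    """Sketch of the UE MAC layer gating bitrate requests (legacy path)."""

    def __init__(self, prohibit_duration_s: float):
        self.prohibit_duration_s = prohibit_duration_s
        self.timer_deadline = None  # bitRateQueryProhibitTimer not started

    def on_bitrate_recommendation(self, recommendation: dict) -> None:
        # Receiving a recommendation starts bitRateQueryProhibitTimer; the
        # recommendation would also be relayed toward the IMS (the
        # BitRate_MAC / BitRate_DS path described above).
        self.timer_deadline = time.monotonic() + self.prohibit_duration_s

    def timer_running(self) -> bool:
        return (self.timer_deadline is not None
                and time.monotonic() < self.timer_deadline)

    def on_bitrate_request_from_ds(self, request: dict) -> bool:
        # Refrain while no recommendation has been received (the timer was
        # never started) or while the prohibit timer is still running.
        if self.timer_deadline is None or self.timer_running():
            return False
        self.transmit_mac_ce(request)  # e.g., a MAC-CE carrying an LCID
        return True

    def transmit_mac_ce(self, request: dict) -> None:
        pass  # hand-off to lower layers is not modeled in this sketch
```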
To improve reliability and flexibility of a bitrate adjustment procedure, a UE115may be configured to transmit a bitrate request regardless of the timer (e.g., bitRateQueryProhibitTimer) and the bitrate recommendation. Rather, a UE115may determine a quality of communications between the UE115and a base station, and if the UE115determines to increase or decrease the bitrate, then the UE115may transmit a bitrate request to the base station105regardless of whether the UE115has received a bitrate recommendation or whether the timer is running. Accordingly, UE115-amay be performing a call via base station105-aover communications link205-a. UE115-amay calculate one or more parameters (e.g., jitter, RSRQ, SINR, delay) associated with a channel (e.g., communications link205-a) between UE115-aand base station105-aduring a call. UE115-amay be configured to calculate the one or more parameters in the IMS stack during the call. UE115-amay be configured to perform the calculations periodically or continuously while UE115-ais performing the call. UE115-amay calculate the parameters before or after receiving a bitrate recommendation from base station105-a. In some cases, UE115-amay be preconfigured with a calculation configuration (e.g., parameters to calculate, frequency of calculation), or UE115-amay receive the calculation configuration in a message (e.g., radio resource control (RRC), MAC-CE, downlink control information (DCI)) from a network device, such as base station105-a. Based on the one or more parameters, UE115-amay determine whether the bitrate being used for the call should be adjusted by increasing or decreasing the current bitrate (e.g., to improve or maintain the quality of the call). If UE115-adetermines that the bitrate should be adjusted, then UE115-amay transmit a bitrate request210to base station105-avia communications link205-b. For example, UE115-amay transmit (e.g., the MAC layer may transmit) the bitrate request210to the base station105prior to ever receiving a bitrate recommendation from base station105-a, and accordingly, prior to the start and/or expiry of the timer. The bitrate request210may indicate a specific bitrate, or may include an indication to increase or decrease the bitrate from the current bitrate. In some cases, UE115-amay request a directional bitrate. For example, the UE115may request multiple different bitrates, such as one for use in the uplink and one for use in the downlink. Base station105-amay consider the requested bitrate. In some cases, base station105-amay accept the requested bitrate or decline the requested bitrate. If base station105-aaccepts the bitrate request210, base station105-amay transmit a message to UE115-aindicating that base station105-aaccepts the request. Additionally or alternatively, if base station105-aaccepts the bitrate request, base station105-amay transmit a bitrate recommendation to UE115-a, where the bitrate recommendation may indicate a recommended bitrate equal to the requested bitrate. If base station105-arejects the bitrate request210, base station105-amay transmit a message to UE115-aindicating that base station105-arejects the request. Additionally or alternatively, if base station105-arejects the bitrate request, base station105-amay refrain from transmitting a response to the bitrate request210. Additionally or alternatively, if base station105-arejects the bitrate request, base station105-amay transmit a bitrate recommendation to UE115-a, where the bitrate recommendation may indicate a recommended bitrate different from the requested bitrate. Accordingly, base station105-amay determine the recommended bitrate based on information from UE115-ato improve the reliability of the call.
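By contrast with the gated path above, the proactive, UE-side decision logic might be sketched as follows. This is a minimal sketch only; the parameter thresholds, names, and message representation are hypothetical assumptions for illustration, not a definitive implementation of the IMS-stack calculation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class LinkQuality:
    """Parameters the UE may calculate during a call (illustrative units)."""
    jitter_ms: float
    delay_ms: float
    rsrq_db: float
    sinr_db: float

def decide_bitrate_change(q: LinkQuality) -> Optional[str]:
    """Return 'increase', 'decrease', or None to keep the current bitrate."""
    if (q.sinr_db < 5.0 or q.rsrq_db < -15.0
            or q.jitter_ms > 40.0 or q.delay_ms > 150.0):
        return "decrease"  # protect call quality on a degrading link
    if q.sinr_db > 15.0 and q.jitter_ms < 10.0 and q.delay_ms < 50.0:
        return "increase"  # headroom available for a higher bitrate
    return None

def maybe_send_bitrate_request(q: LinkQuality,
                               send_mac_ce: Callable[[dict], None]) -> None:
    # Unlike the legacy path, there is no prohibit-timer or
    # prior-recommendation check: the request may be sent before any
    # bitrate recommendation has ever been received.
    change = decide_bitrate_change(q)
    if change is not None:
        send_mac_ce({"request": change, "direction": "uplink"})
```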
UE115-amay be pre-configured to transmit bitrate request210regardless of a bitrate recommendation, or may receive a message (e.g., RRC, DCI, MAC-CE) from the base station105indicating that the UE115is to operate in such a manner. In some cases, upon transmitting a bitrate request, UE115-amay be configured to start a bitrate request timer, where the bitrate request timer may define a duration for UE115-ato wait before transmitting a second bitrate request. The bitrate request timer may be different from the bitRateQueryProhibitTimer, and may expire after a duration or based on an event, such as UE115-areceiving a bitrate recommendation. For example, UE115-amay start a bitrate request timer upon transmitting the bitrate request210, and refrain from transmitting a second request while the bitrate request timer is running. If the bitrate does not change while the timer is running (e.g., UE115-adoes not receive a bitrate recommendation), UE115-amay determine to request the same or a different bitrate (compared to the previously transmitted bitrate request). Upon expiry of the bitrate request timer, UE115-amay transmit the second bitrate request to perform the communications in accordance with the determined bitrate. UE115-amay be pre-configured with the bitrate request timer, or may receive a message (e.g., RRC, DCI, MAC-CE) from the base station105indicating that UE115-ais to use the bitrate request timer and the parameters associated with the bitrate request timer (e.g., when to start the timer, the duration of the timer, what to do after expiry of the timer). FIG.3illustrates an example of a process flow300that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The process flow300may illustrate an example bitrate adjustment procedure. For example, UE115-bmay determine to increase or decrease a bitrate being used for communications between UE115-band base station105-b, and UE115-bmay transmit a bitrate request to base station105-b. Base station105-band UE115-bmay be examples of the corresponding wireless devices described with reference toFIGS.1and2. In some cases, instead of UE115-bimplementing the bitrate adjustment procedure, a different type of device (e.g., a base station105) may perform the same or a similar procedure. Alternative examples of the following may be implemented, where some steps are performed in a different order than described or are not performed at all. In some cases, steps may include additional features not mentioned below, or further steps may be added. At305, UE115-bmay calculate one or more parameters associated with a quality of communications between UE115-band base station105-b. Calculating one or more parameters may include calculating a jitter, a delay, a reference signal received quality, a signal-to-interference-plus-noise ratio, or a combination thereof associated with communications between UE115-band base station105-b. At310, UE115-bmay determine a bitrate for performing the communications based on the one or more calculated parameters prior to receiving a bitrate recommendation from base station105-b.
At315, UE115-bmay transmit, prior to receiving the bitrate recommendation from base station105-b, a request to perform the communications in accordance with the determined bitrate. The request may include an indication to increase a current bitrate or decrease the current bitrate being used for communications between UE115-band base station105-b. Transmitting the request may include transmitting a medium access control message including a logical channel identifier, where the logical channel identifier may indicate the request. In some cases, UE115-bmay start a bitrate request timer upon transmitting the request, and refrain from transmitting a second request while the bitrate request timer is running. UE115-bmay identify that the bitrate request timer has expired, and may transmit the second request to perform the communications in accordance with the determined bitrate. At320, base station105-bmay determine whether to accept the requested bitrate. In some cases, UE115-bmay receive, from base station105-b, an indication of an acceptance by base station105-bto use the determined bitrate. Receiving the indication of the acceptance may include receiving the bitrate recommendation after transmitting the request, where the bitrate recommendation may include the determined bitrate. Receiving the bitrate recommendation may include receiving a medium access control message including a logical channel identifier, where the logical channel identifier indicates the determined bitrate. In some cases, UE115-bmay receive, from base station105-bafter transmitting the request, the bitrate recommendation, where the bitrate recommendation may be different than the determined bitrate. Receiving the bitrate recommendation may include receiving a medium access control message including a logical channel identifier, where the logical channel identifier may indicate the bitrate recommendation. In some implementations, UE115-bmay receive, from base station105-b, an indication that base station105-brejected the determined bitrate. In some cases, base station105-bmay refrain from transmitting a message to UE115-bindicating that base station105-brejected the requested bitrate based on determining to reject the requested bitrate. UE115-bmay identify a failure by UE115-bto receive an indication that base station105-baccepts the determined bitrate, and UE115-bmay refrain from switching bitrates to the determined bitrate based on the identification. At325, UE115-band base station105-bmay perform communications (e.g., voice call, video call) in accordance with the determination. FIG.4shows a block diagram400of a device405that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The device405may be an example of aspects of a UE115as described herein. The device405may include a receiver410, a transmitter415, and a communications manager420. The device405may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver410may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to techniques for configuring a bitrate request). Information may be passed on to other components of the device405. The receiver410may utilize a single antenna or a set of multiple antennas. 
The transmitter415may provide a means for transmitting signals generated by other components of the device405. For example, the transmitter415may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to techniques for configuring a bitrate request). In some examples, the transmitter415may be co-located with a receiver410in a transceiver module. The transmitter415may utilize a single antenna or a set of multiple antennas. The communications manager420, the receiver410, the transmitter415, or various combinations thereof or various components thereof may be examples of means for performing various aspects of techniques for configuring a bitrate request as described herein. For example, the communications manager420, the receiver410, the transmitter415, or various combinations or components thereof may support a method for performing one or more of the functions described herein. In some examples, the communications manager420, the receiver410, the transmitter415, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry). The hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure. In some examples, a processor and memory coupled with the processor may be configured to perform one or more of the functions described herein (e.g., by executing, by the processor, instructions stored in the memory). Additionally or alternatively, in some examples, the communications manager420, the receiver410, the transmitter415, or various combinations or components thereof may be implemented in code (e.g., as communications management software or firmware) executed by a processor. If implemented in code executed by a processor, the functions of the communications manager420, the receiver410, the transmitter415, or various combinations or components thereof may be performed by a general-purpose processor, a DSP, a central processing unit (CPU), an ASIC, an FPGA, or any combination of these or other programmable logic devices (e.g., configured as or otherwise supporting a means for performing the functions described in the present disclosure). In some examples, the communications manager420may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver410, the transmitter415, or both. For example, the communications manager420may receive information from the receiver410, send information to the transmitter415, or be integrated in combination with the receiver410, the transmitter415, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager420may support wireless communications at a UE in accordance with examples as disclosed herein. For example, the communications manager420may be configured as or otherwise support a means for calculating one or more parameters associated with a quality of communications between the UE and a base station. 
The communications manager420may be configured as or otherwise support a means for determining a bitrate for performing the communications based on the one or more calculated parameters prior to receiving a bitrate recommendation from the base station. The communications manager420may be configured as or otherwise support a means for transmitting, prior to receiving the bitrate recommendation from the base station, a request to perform the communications in accordance with the determined bitrate. By including or configuring the communications manager420in accordance with examples as described herein, the device405(e.g., a processor controlling or otherwise coupled to the receiver410, the transmitter415, the communications manager420, or a combination thereof) may support techniques for more efficient utilization of communication resources. FIG.5shows a block diagram500of a device505that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The device505may be an example of aspects of a device405or a UE115as described herein. The device505may include a receiver510, a transmitter515, and a communications manager520. The device505may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver510may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to techniques for configuring a bitrate request). Information may be passed on to other components of the device505. The receiver510may utilize a single antenna or a set of multiple antennas. The transmitter515may provide a means for transmitting signals generated by other components of the device505. For example, the transmitter515may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to techniques for configuring a bitrate request). In some examples, the transmitter515may be co-located with a receiver510in a transceiver module. The transmitter515may utilize a single antenna or a set of multiple antennas. The device505, or various components thereof, may be an example of means for performing various aspects of techniques for configuring a bitrate request as described herein. For example, the communications manager520may include a parameter calculation manager525, a bitrate determination manager530, a bitrate request manager535, or any combination thereof. The communications manager520may be an example of aspects of a communications manager420as described herein. In some examples, the communications manager520, or various components thereof, may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver510, the transmitter515, or both. For example, the communications manager520may receive information from the receiver510, send information to the transmitter515, or be integrated in combination with the receiver510, the transmitter515, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager520may support wireless communications at a UE in accordance with examples as disclosed herein. 
The parameter calculation manager525may be configured as or otherwise support a means for calculating one or more parameters associated with a quality of communications between the UE and a base station. The bitrate determination manager530may be configured as or otherwise support a means for determining a bitrate for performing the communications based on the one or more calculated parameters prior to receiving a bitrate recommendation from the base station. The bitrate request manager535may be configured as or otherwise support a means for transmitting, prior to receiving the bitrate recommendation from the base station, a request to perform the communications in accordance with the determined bitrate. FIG.6shows a block diagram600of a communications manager620that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The communications manager620may be an example of aspects of a communications manager420, a communications manager520, or both, as described herein. The communications manager620, or various components thereof, may be an example of means for performing various aspects of techniques for configuring a bitrate request as described herein. For example, the communications manager620may include a parameter calculation manager625, a bitrate determination manager630, a bitrate request manager635, a bitrate request acceptance manager640, a bitrate recommendation manager645, a bitrate request rejection manager650, a bitrate timer manager655, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The communications manager620may support wireless communications at a UE in accordance with examples as disclosed herein. The parameter calculation manager625may be configured as or otherwise support a means for calculating one or more parameters associated with a quality of communications between the UE and a base station. The bitrate determination manager630may be configured as or otherwise support a means for determining a bitrate for performing the communications based on the one or more calculated parameters prior to receiving a bitrate recommendation from the base station. The bitrate request manager635may be configured as or otherwise support a means for transmitting, prior to receiving the bitrate recommendation from the base station, a request to perform the communications in accordance with the determined bitrate. In some examples, the bitrate request acceptance manager640may be configured as or otherwise support a means for receiving, from the base station, an indication of an acceptance by the base station to use the determined bitrate. In some examples, to support receiving the indication of the acceptance, the bitrate recommendation manager645may be configured as or otherwise support a means for receiving the bitrate recommendation after transmitting the request, where the bitrate recommendation includes the determined bitrate. In some examples, to support receiving the bitrate recommendation, the bitrate recommendation manager645may be configured as or otherwise support a means for receiving a medium access control message including a logical channel identifier, where the logical channel identifier indicates the determined bitrate. 
In some examples, the bitrate recommendation manager645may be configured as or otherwise support a means for receiving, from the base station after transmitting the request, the bitrate recommendation, where the bitrate recommendation is different than the determined bitrate. In some examples, to support receiving the bitrate recommendation, the bitrate recommendation manager645may be configured as or otherwise support a means for receiving a medium access control message including a logical channel identifier, where the logical channel identifier indicates the bitrate recommendation. In some examples, the bitrate request rejection manager650may be configured as or otherwise support a means for receiving, from the base station, an indication that the base station rejected the determined bitrate. In some examples, the bitrate request rejection manager650may be configured as or otherwise support a means for identifying a failure by the UE to receive an indication that the base station accepts the determined bitrate. In some examples, the bitrate request rejection manager650may be configured as or otherwise support a means for refraining from switching bitrates to the determined bitrate based on the identification. In some examples, the bitrate timer manager655may be configured as or otherwise support a means for starting a bitrate request timer upon transmitting the request. In some examples, the bitrate request manager635may be configured as or otherwise support a means for refraining from transmitting a second request while the bitrate request timer is running. In some examples, the bitrate timer manager655may be configured as or otherwise support a means for identifying that the bitrate request timer has expired. In some examples, the bitrate request manager635may be configured as or otherwise support a means for transmitting the second request to perform the communications in accordance with the determined bitrate. In some examples, to support transmitting the request, the bitrate request manager635may be configured as or otherwise support a means for transmitting a medium access control message including a logical channel identifier, where the logical channel identifier indicates the request. In some examples, to support calculating one or more parameters, the parameter calculation manager625may be configured as or otherwise support a means for calculating a jitter, a delay, a reference signal received quality, a signal-to-interference-plus-noise ratio, or a combination thereof associated with communications between the UE and the base station. In some examples, the request includes an indication to increase a current bitrate or decrease the current bitrate being used for communications between the UE and the base station. FIG.7shows a diagram of a system700including a device705that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The device705may be an example of or include the components of a device405, a device505, or a UE115as described herein. The device705may communicate wirelessly with one or more base stations105, UEs115, or any combination thereof. The device705may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager720, an input/output (I/O) controller710, a transceiver715, an antenna725, a memory730, code735, and a processor740. 
These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more buses (e.g., a bus745). The I/O controller710may manage input and output signals for the device705. The I/O controller710may also manage peripherals not integrated into the device705. In some cases, the I/O controller710may represent a physical connection or port to an external peripheral. In some cases, the I/O controller710may utilize an operating system such as iOS®, ANDROID®, MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, LINUX®, or another known operating system. Additionally or alternatively, the I/O controller710may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller710may be implemented as part of a processor, such as the processor740. In some cases, a user may interact with the device705via the I/O controller710or via hardware components controlled by the I/O controller710. In some cases, the device705may include a single antenna725. However, in some other cases, the device705may have more than one antenna725, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The transceiver715may communicate bi-directionally, via the one or more antennas725, wired, or wireless links as described herein. For example, the transceiver715may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver715may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas725for transmission, and to demodulate packets received from the one or more antennas725. The transceiver715, or the transceiver715and one or more antennas725, may be an example of a transmitter415, a transmitter515, a receiver410, a receiver510, or any combination thereof or component thereof, as described herein. The memory730may include random access memory (RAM) and read-only memory (ROM). The memory730may store computer-readable, computer-executable code735including instructions that, when executed by the processor740, cause the device705to perform various functions described herein. The code735may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. In some cases, the code735may not be directly executable by the processor740but may cause a computer (e.g., when compiled and executed) to perform functions described herein. In some cases, the memory730may contain, among other things, a basic I/O system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor740may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor740may be configured to operate a memory array using a memory controller. In some other cases, a memory controller may be integrated into the processor740. The processor740may be configured to execute computer-readable instructions stored in a memory (e.g., the memory730) to cause the device705to perform various functions (e.g., functions or tasks supporting techniques for configuring a bitrate request). 
For example, the device705or a component of the device705may include a processor740and memory730coupled to the processor740, the processor740and memory730configured to perform various functions described herein. The communications manager720may support wireless communications at a UE in accordance with examples as disclosed herein. For example, the communications manager720may be configured as or otherwise support a means for calculating one or more parameters associated with a quality of communications between the UE and a base station. The communications manager720may be configured as or otherwise support a means for determining a bitrate for performing the communications based on the one or more calculated parameters prior to receiving a bitrate recommendation from the base station. The communications manager720may be configured as or otherwise support a means for transmitting, prior to receiving the bitrate recommendation from the base station, a request to perform the communications in accordance with the determined bitrate. By including or configuring the communications manager720in accordance with examples as described herein, the device705may support techniques for improved communication reliability, reduced latency, more efficient utilization of communication resources, and improved coordination between devices. In some examples, the communications manager720may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the transceiver715, the one or more antennas725, or any combination thereof. Although the communications manager720is illustrated as a separate component, in some examples, one or more functions described with reference to the communications manager720may be supported by or performed by the processor740, the memory730, the code735, or any combination thereof. For example, the code735may include instructions executable by the processor740to cause the device705to perform various aspects of techniques for configuring a bitrate request as described herein, or the processor740and the memory730may be otherwise configured to perform or support such operations. FIG.8shows a block diagram800of a device805that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The device805may be an example of aspects of a base station105as described herein. The device805may include a receiver810, a transmitter815, and a communications manager820. The device805may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver810may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to techniques for configuring a bitrate request). Information may be passed on to other components of the device805. The receiver810may utilize a single antenna or a set of multiple antennas. The transmitter815may provide a means for transmitting signals generated by other components of the device805. For example, the transmitter815may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to techniques for configuring a bitrate request). 
In some examples, the transmitter815may be co-located with a receiver810in a transceiver module. The transmitter815may utilize a single antenna or a set of multiple antennas. The communications manager820, the receiver810, the transmitter815, or various combinations thereof or various components thereof may be examples of means for performing various aspects of techniques for configuring a bitrate request as described herein. For example, the communications manager820, the receiver810, the transmitter815, or various combinations or components thereof may support a method for performing one or more of the functions described herein. In some examples, the communications manager820, the receiver810, the transmitter815, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry). The hardware may include a processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure. In some examples, a processor and memory coupled with the processor may be configured to perform one or more of the functions described herein (e.g., by executing, by the processor, instructions stored in the memory). Additionally or alternatively, in some examples, the communications manager820, the receiver810, the transmitter815, or various combinations or components thereof may be implemented in code (e.g., as communications management software or firmware) executed by a processor. If implemented in code executed by a processor, the functions of the communications manager820, the receiver810, the transmitter815, or various combinations or components thereof may be performed by a general-purpose processor, a DSP, a CPU, an ASIC, an FPGA, or any combination of these or other programmable logic devices (e.g., configured as or otherwise supporting a means for performing the functions described in the present disclosure). In some examples, the communications manager820may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver810, the transmitter815, or both. For example, the communications manager820may receive information from the receiver810, send information to the transmitter815, or be integrated in combination with the receiver810, the transmitter815, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager820may support wireless communications at a base station in accordance with examples as disclosed herein. For example, the communications manager820may be configured as or otherwise support a means for receiving, from a UE, prior to transmitting a bitrate recommendation to the UE, a requested bitrate for communications between the UE and the base station. The communications manager820may be configured as or otherwise support a means for determining whether to accept the requested bitrate. The communications manager820may be configured as or otherwise support a means for performing the communications with the UE in accordance with the determination. 
By including or configuring the communications manager820in accordance with examples as described herein, the device805(e.g., a processor controlling or otherwise coupled to the receiver810, the transmitter815, the communications manager820, or a combination thereof) may support techniques for more efficient utilization of communication resources. FIG.9shows a block diagram900of a device905that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The device905may be an example of aspects of a device805or a base station105as described herein. The device905may include a receiver910, a transmitter915, and a communications manager920. The device905may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver910may provide a means for receiving information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to techniques for configuring a bitrate request). Information may be passed on to other components of the device905. The receiver910may utilize a single antenna or a set of multiple antennas. The transmitter915may provide a means for transmitting signals generated by other components of the device905. For example, the transmitter915may transmit information such as packets, user data, control information, or any combination thereof associated with various information channels (e.g., control channels, data channels, information channels related to techniques for configuring a bitrate request). In some examples, the transmitter915may be co-located with a receiver910in a transceiver module. The transmitter915may utilize a single antenna or a set of multiple antennas. The device905, or various components thereof, may be an example of means for performing various aspects of techniques for configuring a bitrate request as described herein. For example, the communications manager920may include a bitrate recommendation reception component925, a bitrate request evaluation component930, a communications performing component935, or any combination thereof. The communications manager920may be an example of aspects of a communications manager820as described herein. In some examples, the communications manager920, or various components thereof, may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the receiver910, the transmitter915, or both. For example, the communications manager920may receive information from the receiver910, send information to the transmitter915, or be integrated in combination with the receiver910, the transmitter915, or both to receive information, transmit information, or perform various other operations as described herein. The communications manager920may support wireless communications at a base station in accordance with examples as disclosed herein. The bitrate recommendation reception component925may be configured as or otherwise support a means for receiving, from a UE, prior to transmitting a bitrate recommendation to the UE, a requested bitrate for communications between the UE and the base station. The bitrate request evaluation component930may be configured as or otherwise support a means for determining whether to accept the requested bitrate. 
The communications performing component935may be configured as or otherwise support a means for performing the communications with the UE in accordance with the determination. FIG.10shows a block diagram1000of a communications manager1020that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The communications manager1020may be an example of aspects of a communications manager820, a communications manager920, or both, as described herein. The communications manager1020, or various components thereof, may be an example of means for performing various aspects of techniques for configuring a bitrate request as described herein. For example, the communications manager1020may include a bitrate recommendation reception component1025, a bitrate request evaluation component1030, a communications performing component1035, a bitrate request acceptance component1040, a bitrate recommendation transmission component1045, a bitrate request rejection component1050, or any combination thereof. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses). The communications manager1020may support wireless communications at a base station in accordance with examples as disclosed herein. The bitrate recommendation reception component1025may be configured as or otherwise support a means for receiving, from a UE, prior to transmitting a bitrate recommendation to the UE, a requested bitrate for communications between the UE and the base station. The bitrate request evaluation component1030may be configured as or otherwise support a means for determining whether to accept the requested bitrate. The communications performing component1035may be configured as or otherwise support a means for performing the communications with the UE in accordance with the determination. In some examples, the bitrate request acceptance component1040may be configured as or otherwise support a means for transmitting, to the UE, an indication of an acceptance by the base station to use the determined bitrate based on determining to accept the requested bitrate. In some examples, to support transmitting the indication of the acceptance, the bitrate recommendation transmission component1045may be configured as or otherwise support a means for transmitting the bitrate recommendation after receiving the requested bitrate, where the bitrate recommendation includes the requested bitrate. In some examples, to support transmitting the bitrate recommendation, the bitrate recommendation transmission component1045may be configured as or otherwise support a means for transmitting a medium access control message including a logical channel identifier, where the logical channel identifier indicates the requested bitrate. In some examples, the bitrate recommendation transmission component1045may be configured as or otherwise support a means for transmitting, to the UE after receiving the requested bitrate, the bitrate recommendation, where the bitrate recommendation is different than the requested bitrate. In some examples, to support transmitting the bitrate recommendation, the bitrate recommendation transmission component1045may be configured as or otherwise support a means for transmitting a medium access control message including a logical channel identifier, where the logical channel identifier indicates the bitrate recommendation. 
In some examples, the bitrate request rejection component1050may be configured as or otherwise support a means for transmitting, to the UE, an indication that the base station rejected the requested bitrate based on determining to reject the requested bitrate. In some examples, the bitrate request rejection component1050may be configured as or otherwise support a means for refraining from transmitting a message to the UE indicating that the base station rejected the requested bitrate based on determining to reject the requested bitrate. In some examples, to support receiving the requested bitrate, the bitrate recommendation reception component1025may be configured as or otherwise support a means for receiving a medium access control message including a logical channel identifier, where the logical channel identifier indicates the requested bitrate. In some examples, the requested bitrate includes an indication to increase a current bitrate or decrease the current bitrate being used for communications between the UE and the base station. FIG.11shows a diagram of a system1100including a device1105that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The device1105may be an example of or include the components of a device805, a device905, or a base station105as described herein. The device1105may communicate wirelessly with one or more base stations105, UEs115, or any combination thereof. The device1105may include components for bi-directional voice and data communications including components for transmitting and receiving communications, such as a communications manager1120, a network communications manager1110, a transceiver1115, an antenna1125, a memory1130, code1135, a processor1140, and an inter-station communications manager1145. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more buses (e.g., a bus1150). The network communications manager1110may manage communications with a core network130(e.g., via one or more wired backhaul links). For example, the network communications manager1110may manage the transfer of data communications for client devices, such as one or more UEs115. In some cases, the device1105may include a single antenna1125. However, in some other cases the device1105may have more than one antenna1125, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The transceiver1115may communicate bi-directionally, via the one or more antennas1125, wired, or wireless links as described herein. For example, the transceiver1115may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver1115may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas1125for transmission, and to demodulate packets received from the one or more antennas1125. The transceiver1115, or the transceiver1115and one or more antennas1125, may be an example of a transmitter815, a transmitter915, a receiver810, a receiver910, or any combination thereof or component thereof, as described herein. The memory1130may include RAM and ROM. The memory1130may store computer-readable, computer-executable code1135including instructions that, when executed by the processor1140, cause the device1105to perform various functions described herein. 
The code1135may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. In some cases, the code1135may not be directly executable by the processor1140but may cause a computer (e.g., when compiled and executed) to perform functions described herein. In some cases, the memory1130may contain, among other things, a BIOS which may control basic hardware or software operation such as the interaction with peripheral components or devices. The processor1140may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor1140may be configured to operate a memory array using a memory controller. In some other cases, a memory controller may be integrated into the processor1140. The processor1140may be configured to execute computer-readable instructions stored in a memory (e.g., the memory1130) to cause the device1105to perform various functions (e.g., functions or tasks supporting techniques for configuring a bitrate request). For example, the device1105or a component of the device1105may include a processor1140and memory1130coupled to the processor1140, the processor1140and memory1130configured to perform various functions described herein. The inter-station communications manager1145may manage communications with other base stations105, and may include a controller or scheduler for controlling communications with UEs115in cooperation with other base stations105. For example, the inter-station communications manager1145may coordinate scheduling for transmissions to UEs115for various interference mitigation techniques such as beamforming or joint transmission. In some examples, the inter-station communications manager1145may provide an X2 interface within an LTE/LTE-A wireless communications network technology to provide communication between base stations105. The communications manager1120may support wireless communications at a base station in accordance with examples as disclosed herein. For example, the communications manager1120may be configured as or otherwise support a means for receiving, from a UE, prior to transmitting a bitrate recommendation to the UE, a requested bitrate for communications between the UE and the base station. The communications manager1120may be configured as or otherwise support a means for determining whether to accept the requested bitrate. The communications manager1120may be configured as or otherwise support a means for performing the communications with the UE in accordance with the determination. By including or configuring the communications manager1120in accordance with examples as described herein, the device1105may support techniques for improved communication reliability, reduced latency, more efficient utilization of communication resources, and improved coordination between devices. In some examples, the communications manager1120may be configured to perform various operations (e.g., receiving, monitoring, transmitting) using or otherwise in cooperation with the transceiver1115, the one or more antennas1125, or any combination thereof. 
Although the communications manager1120is illustrated as a separate component, in some examples, one or more functions described with reference to the communications manager1120may be supported by or performed by the processor1140, the memory1130, the code1135, or any combination thereof. For example, the code1135may include instructions executable by the processor1140to cause the device1105to perform various aspects of techniques for configuring a bitrate request as described herein, or the processor1140and the memory1130may be otherwise configured to perform or support such operations. FIG.12shows a flowchart illustrating a method1200that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The operations of the method1200may be implemented by a UE or its components as described herein. For example, the operations of the method1200may be performed by a UE115as described with reference toFIGS.1through7. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the described functions. Additionally or alternatively, the UE may perform aspects of the described functions using special-purpose hardware. At1205, the method may include calculating one or more parameters associated with a quality of communications between the UE and a base station. The operations of1205may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1205may be performed by a parameter calculation manager625as described with reference toFIG.6. At1210, the method may include determining a bitrate for performing the communications based on the one or more calculated parameters prior to receiving a bitrate recommendation from the base station. The operations of1210may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1210may be performed by a bitrate determination manager630as described with reference toFIG.6. At1215, the method may include transmitting, prior to receiving the bitrate recommendation from the base station, a request to perform the communications in accordance with the determined bitrate. The operations of1215may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1215may be performed by a bitrate request manager635as described with reference toFIG.6. FIG.13shows a flowchart illustrating a method1300that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The operations of the method1300may be implemented by a UE or its components as described herein. For example, the operations of the method1300may be performed by a UE115as described with reference toFIGS.1through7. In some examples, a UE may execute a set of instructions to control the functional elements of the UE to perform the described functions. Additionally or alternatively, the UE may perform aspects of the described functions using special-purpose hardware. At1305, the method may include calculating one or more parameters associated with a quality of communications between the UE and a base station. The operations of1305may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1305may be performed by a parameter calculation manager625as described with reference toFIG.6. 
At1310, the method may include determining a bitrate for performing the communications based on the one or more calculated parameters prior to receiving a bitrate recommendation from the base station. The operations of1310may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1310may be performed by a bitrate determination manager630as described with reference toFIG.6. At1315, the method may include transmitting, prior to receiving the bitrate recommendation from the base station, a request to perform the communications in accordance with the determined bitrate. The operations of1315may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1315may be performed by a bitrate request manager635as described with reference toFIG.6. At1320, the method may include identifying a failure by the UE to receive an indication that the base station accepts the determined bitrate. The operations of1320may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1320may be performed by a bitrate request rejection manager650as described with reference toFIG.6. At1325, the method may include refraining from switching bitrates to the determined bitrate based on the identification. The operations of1325may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1325may be performed by a bitrate request rejection manager650as described with reference toFIG.6. FIG.14shows a flowchart illustrating a method1400that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. The operations of the method1400may be implemented by a base station or its components as described herein. For example, the operations of the method1400may be performed by a base station105as described with reference toFIGS.1through3and8through11. In some examples, a base station may execute a set of instructions to control the functional elements of the base station to perform the described functions. Additionally or alternatively, the base station may perform aspects of the described functions using special-purpose hardware. At1405, the method may include receiving, from a UE, prior to transmitting a bitrate recommendation to the UE, a requested bitrate for communications between the UE and the base station. The operations of1405may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1405may be performed by a bitrate recommendation reception component1025as described with reference toFIG.10. At1410, the method may include determining whether to accept the requested bitrate. The operations of1410may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1410may be performed by a bitrate request evaluation component1030as described with reference toFIG.10. At1415, the method may include performing the communications with the UE in accordance with the determination. The operations of1415may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1415may be performed by a communications performing component1035as described with reference toFIG.10. FIG.15shows a flowchart illustrating a method1500that supports techniques for configuring a bitrate request in accordance with aspects of the present disclosure. 
The operations of the method1500may be implemented by a base station or its components as described herein. For example, the operations of the method1500may be performed by a base station105as described with reference toFIGS.1through3and8through11. In some examples, a base station may execute a set of instructions to control the functional elements of the base station to perform the described functions. Additionally or alternatively, the base station may perform aspects of the described functions using special-purpose hardware. At1505, the method may include receiving, from a UE, prior to transmitting a bitrate recommendation to the UE, a requested bitrate for communications between the UE and the base station. The operations of1505may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1505may be performed by a bitrate recommendation reception component1025as described with reference toFIG.10. At1510, the method may include determining whether to accept the requested bitrate. The operations of1510may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1510may be performed by a bitrate request evaluation component1030as described with reference toFIG.10. At1515, the method may include transmitting, to the UE, an indication of an acceptance by the base station to use the determined bitrate based on determining to accept the requested bitrate. The operations of1515may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1515may be performed by a bitrate request acceptance component1040as described with reference toFIG.10. At1520, the method may include performing the communications with the UE in accordance with the determination. The operations of1520may be performed in accordance with examples as disclosed herein. In some examples, aspects of the operations of1520may be performed by a communications performing component1035as described with reference toFIG.10. The following provides an overview of aspects of the present disclosure: Aspect 1: A method for wireless communications at a UE, comprising: calculating one or more parameters associated with a quality of communications between the UE and a base station; determining a bitrate for performing the communications based at least in part on the one or more calculated parameters prior to receiving a bitrate recommendation from the base station; and transmitting, prior to receiving the bitrate recommendation from the base station, a request to perform the communications in accordance with the determined bitrate. Aspect 2: The method of aspect 1, further comprising: receiving, from the base station, an indication of an acceptance by the base station to use the determined bitrate. Aspect 3: The method of aspect 2, wherein receiving the indication of the acceptance further comprises: receiving the bitrate recommendation after transmitting the request, wherein the bitrate recommendation comprises the determined bitrate. Aspect 4: The method of aspect 3, wherein receiving the bitrate recommendation further comprises: receiving a medium access control message comprising a logical channel identifier, wherein the logical channel identifier indicates the determined bitrate. 
Aspect 5: The method of any of aspects 1 through 4, further comprising: receiving, from the base station after transmitting the request, the bitrate recommendation, wherein the bitrate recommendation is different than the determined bitrate. Aspect 6: The method of aspect 5, wherein receiving the bitrate recommendation further comprises: receiving a medium access control message comprising a logical channel identifier, wherein the logical channel identifier indicates the bitrate recommendation. Aspect 7: The method of any of aspects 1 through 6, further comprising: receiving, from the base station, an indication that the base station rejected the determined bitrate. Aspect 8: The method of any of aspects 1 through 7, further comprising: identifying a failure by the UE to receive an indication that the base station accepts the determined bitrate; and refraining from switching bitrates to the determined bitrate based at least in part on the identification. Aspect 9: The method of any of aspects 1 through 8, further comprising: starting a bitrate request timer upon transmitting the request; and refraining from transmitting a second request while the bitrate request timer is running. Aspect 10: The method of aspect 9, further comprising: identifying that the bitrate request timer has expired; and transmitting the second request to perform the communications in accordance with the determined bitrate. Aspect 11: The method of any of aspects 1 through 10, wherein transmitting the request further comprises: transmitting a medium access control message comprising a logical channel identifier, wherein the logical channel identifier indicates the request. Aspect 12: The method of any of aspects 1 through 11, wherein calculating one or more parameters further comprises: calculating a jitter, a delay, a reference signal received quality, a signal-to-interference-plus-noise ratio, or a combination thereof associated with communications between the UE and the base station. Aspect 13: The method of any of aspects 1 through 12, wherein the request comprises an indication to increase a current bitrate or decrease the current bitrate being used for communications between the UE and the base station. Aspect 14: A method for wireless communications at a base station, comprising: receiving, from a UE, prior to transmitting a bitrate recommendation to the UE, a requested bitrate for communications between the UE and the base station; determining whether to accept the requested bitrate; and performing the communications with the UE in accordance with the determination. Aspect 15: The method of aspect 14, further comprising: transmitting, to the UE, an indication of an acceptance by the base station to use the determined bitrate based at least in part on determining to accept the requested bitrate. Aspect 16: The method of aspect 15, wherein transmitting the indication of the acceptance further comprises: transmitting the bitrate recommendation after receiving the requested bitrate, wherein the bitrate recommendation comprises the requested bitrate. Aspect 17: The method of aspect 16, wherein transmitting the bitrate recommendation further comprises: transmitting a medium access control message comprising a logical channel identifier, wherein the logical channel identifier indicates the requested bitrate. 
Aspect 18: The method of any of aspects 14 through 17, further comprising: transmitting, to the UE after receiving the requested bitrate, the bitrate recommendation, wherein the bitrate recommendation is different than the requested bitrate. Aspect 19: The method of aspect 18, wherein transmitting the bitrate recommendation further comprises: transmitting a medium access control message comprising a logical channel identifier, wherein the logical channel identifier indicates the bitrate recommendation. Aspect 20: The method of any of aspects 14 through 19, further comprising: transmitting, to the UE, an indication that the base station rejected the requested bitrate based at least in part on determining to reject the requested bitrate. Aspect 21: The method of any of aspects 14 through 20, further comprising: refraining from transmitting a message to the UE indicating that the base station rejected the requested bitrate based at least in part on determining to reject the requested bitrate. Aspect 22: The method of any of aspects 14 through 21, wherein receiving the requested bitrate further comprises: receiving a medium access control message comprising a logical channel identifier, wherein the logical channel identifier indicates the requested bitrate. Aspect 23: The method of any of aspects 14 through 22, wherein the requested bitrate comprises an indication to increase a current bitrate or decrease the current bitrate being used for communications between the UE and the base station. Aspect 24: An apparatus for wireless communications, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 1 through 13. Aspect 25: An apparatus for wireless communications, comprising at least one means for performing a method of any of aspects 1 through 13. Aspect 26: A non-transitory computer-readable medium storing code for wireless communications, the code comprising instructions executable by a processor to perform a method of any of aspects 1 through 13. Aspect 27: An apparatus for wireless communications, comprising a processor; memory coupled with the processor; and instructions stored in the memory and executable by the processor to cause the apparatus to perform a method of any of aspects 14 through 23. Aspect 28: An apparatus for wireless communications, comprising at least one means for performing a method of any of aspects 14 through 23. Aspect 29: A non-transitory computer-readable medium storing code for wireless communications, the code comprising instructions executable by a processor to perform a method of any of aspects 14 through 23. It should be noted that the methods described herein describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, aspects from two or more of the methods may be combined. Although aspects of an LTE, LTE-A, LTE-A Pro, or NR system may be described for purposes of example, and LTE, LTE-A, LTE-A Pro, or NR terminology may be used in much of the description, the techniques described herein are applicable beyond LTE, LTE-A, LTE-A Pro, or NR networks. 
For example, the described techniques may be applicable to various other wireless communications systems such as Ultra Mobile Broadband (UMB), Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, as well as other systems and radio technologies not explicitly mentioned herein. Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and components described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, a CPU, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein may be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable ROM (EEPROM), flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that may be used to carry or store desired program code means in the form of instructions or data structures and that may be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. 
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of computer-readable medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. As used herein, including in the claims, “or” as used in a list of items (e.g., a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” The term “determine” or “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (such as via looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (such as receiving information), accessing (such as accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and other such similar actions. In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label, or other subsequent reference label. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “example” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. The description herein is provided to enable a person having ordinary skill in the art to make or use the disclosure. 
Various modifications to the disclosure will be apparent to a person having ordinary skill in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
103,689
11943662
DESCRIPTION OF EMBODIMENTS The following describes technical solutions of this application with reference to the accompanying drawings. The technical solutions of the embodiments of this application may be applied to various communications systems, such as: a global system for mobile communication (GSM) system, a code division multiple access (CDMA) system, a wideband code division multiple access (WCDMA) system, a general packet radio service (GPRS), a long term evolution (LTE) system, an LTE frequency division duplex (FDD) system, an LTE time division duplex (TDD) system, a universal mobile telecommunications system (UMTS), a worldwide interoperability for microwave access (WiMAX) communications system, a future 5th generation (5G) system, or a new radio system. FIG.1is a schematic architectural diagram of a system to which an embodiment of this application is applied. As shown inFIG.1, the system100includes a session management network element and a first device. Optionally, the first device includes a terminal and/or a user plane network element. The system100may be configured to perform a link quality obtaining method in the embodiments of this application. In a possible implementation, the session management network element is configured to determine a monitoring link, where the monitoring link is used to detect quality of service of a service transmitted between the session management network element and the terminal. The session management network element is further configured to send a first link quality reporting request to the first device, where the first link quality reporting request is used to indicate the first device to report quality of service information of the service path when a reporting policy is met. The first device is configured to send a first link quality notification message to the session management network element, where the first link quality notification message includes the quality of service information and an identifier of the monitoring link, so that the first device reports the quality of service information of the service path when the reporting policy is met, thereby helping a network side obtain network performance. Optionally, the system100further includes an application network element. In a possible implementation, the application network element is configured to send a second link quality reporting request to the session management network element, where the second link quality reporting request includes an identifier of a service corresponding to the service path. Optionally, the session management network element is further configured to send a second link quality notification message to the application network element, where the second link quality notification message includes the quality of service information. In this way, the application network element can also learn of transmission performance of the link in a timely manner, and make corresponding adjustments promptly when a fault occurs.
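Before turning to the optional refinements below, the following minimal Python sketch illustrates, under hypothetical message layouts and field names (nothing here is prescribed by this application), how the first link quality reporting request and the first link quality notification message described above might be represented, with the notification carrying the quality of service information together with the identifier of the monitoring link.

    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class FirstLinkQualityReportingRequest:
        monitoring_link_id: str                            # identifier of the monitoring link
        reporting_period_ms: Optional[int] = None          # link quality reporting period
        latency_threshold_ms: Optional[float] = None
        packet_loss_rate_threshold: Optional[float] = None
        jitter_threshold_ms: Optional[float] = None

    @dataclass
    class FirstLinkQualityNotification:
        monitoring_link_id: str            # identifier of the monitoring link
        qos_information: Dict[str, float]  # e.g. {"latency_ms": 3.2, "packet_loss_rate": 0.001}

    def on_reporting_policy_met(request: FirstLinkQualityReportingRequest,
                                measured: Dict[str, float]) -> FirstLinkQualityNotification:
        # First-device side: once the reporting policy carried in the request
        # is met, report the QoS information of the service path, tagged with
        # the monitoring link identifier so the session management network
        # element can correlate it (and, optionally, forward it onward to the
        # application network element in a second link quality notification).
        return FirstLinkQualityNotification(request.monitoring_link_id, measured)

    request = FirstLinkQualityReportingRequest("link-1", reporting_period_ms=100, latency_threshold_ms=5.0)
    print(on_reporting_policy_met(request, {"latency_ms": 6.1, "packet_loss_rate": 0.002}))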
Optionally, when the first link quality reporting request includes one or more of a link quality reporting period, a latency threshold, a packet loss rate threshold, and a jitter threshold, the session management network element is further configured to: determine that the reporting policy needs to be updated; and send a first update message to the first device, where the first update message is used to indicate to update the reporting policy, and the first update message includes one or more of a link quality reporting period update value, a latency threshold update value, a packet loss rate threshold update value, and a jitter threshold update value. Therefore, the session management network element may update the reporting policy in a timely manner, to meet a quality of service requirement of the service. Optionally, the application network element is further configured to send a second update message to the session management network element, where the second update message carries the identifier of the service, and the second update message includes one or more of a link quality reporting period update value, a latency threshold update value, a packet loss rate threshold update value, and a jitter threshold update value. Therefore, the session management network element may determine, based on an indication of the application network element, to update the reporting policy. Such an update manner is relatively flexible. It should be noted that the session management network element, the first device, the application network element, and the like inFIG.1are merely names, and the names constitute no limitation on the devices. In a 5G network and another future network, network elements or entities corresponding to the session management network element, the first device, and the application network element may have other names. This is not specifically limited in this embodiment of this application. For example, the session management network element may alternatively be replaced by an SMF function entity, the user plane network element may alternatively be replaced by a UPF function entity, the application network element may alternatively be replaced by an application function AF entity, and so on. This is uniformly described herein, and details are not described below. Optionally, the session management network element, the user plane network element, and the application network element in the system100may each be a separate network element, or may be jointly implemented by a plurality of network elements, or may be used as a function module in a network element. This is not specifically limited in this embodiment of this application. It may be understood that the foregoing function may be a network element in a hardware device, or may be a software function running on dedicated hardware, or may be a virtualization function instantiated on a platform (for example, a cloud platform). The terminal in the embodiments of this application may also be referred to as user equipment (UE), an access terminal, a terminal in V2X communications, a subscriber unit, a subscriber station, a mobile station, a mobile console, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, a user apparatus, or the like.
The terminal may be a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device having a wireless communication function, a computing device, another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a future 5G network, or a terminal device in a future evolved public land mobile network (PLMN). This is not limited in the embodiments of this application. The terminal may further include a V2X device, for example, a vehicle or an on-board unit (OBU) in a vehicle. The terminal in the embodiments of this application is connected to a radio access network (RAN) device in a wireless manner, and the radio access network device is connected to a core network device in a wireless or wired manner (not shown inFIG.1). The core network device and the radio access network device may be different independent physical devices, or functions of the core network device and logical functions of the radio access network device may be integrated into a same physical device, or some functions of the core network device and some functions of the radio access network device may be integrated into one physical device. The terminal may be at a fixed location, or may be movable. The radio access network device is an access device used by the terminal to access the mobile communications system in a wireless manner, and may be a NodeB, an evolved NodeB (eNodeB), a gNodeB (gNB) in a 5G mobile communications system, a base station in a future mobile communications system, an access node in a wireless fidelity (WiFi) system, or the like, or may be a radio controller in a cloud radio access network (CRAN) scenario. Alternatively, the access network device may be a relay station, an access point, an in-vehicle device, a wearable device, a network device in a future 5G network, a network device in a future evolved PLMN network, or the like. A specific technology and a specific device form that are used by the radio access network device are not limited in this embodiment of this application. The core network device may include, for example, a mobility management entity (MME), a broadcast/multicast service center (BMSC), or may include a corresponding function entity in a 5G system, for example, a core network control plane (CP) or a user plane (UP) network function, for example, a session management network function (SMF), an access and mobility management function (AMF), or the like. The core network control plane may also be understood as a core network control plane function (CPF) entity. V2X communication means that a vehicle may obtain road condition information or receive information in a timely manner through vehicle to vehicle communication (V2V), vehicle to infrastructure communication (V2I), vehicle to pedestrian communication (V2P), or vehicle to network communication (V2N), or in another manner. The most common of these, V2V and V2I, are used as an example. A vehicle may broadcast its own information, such as a vehicle speed, a driving direction, a specific location, or whether an emergency brake is applied, to a nearby vehicle through V2V communication. The nearby vehicle obtains such information, so that a driver can better sense the traffic status, judge a dangerous situation early, and take timely evasive action.
Optionally, for V2I communication, in addition to interaction of the foregoing security information, the roadside infrastructure may further provide various types of service information, data network access, and the like for the vehicle. Functions such as no-stop charging and in-vehicle entertainment greatly improve traffic intelligence. Generally, a network used for V2X communication is referred to as an Internet of Vehicles. The radio access network device and the terminal may be deployed on land, including indoors or outdoors, or in a handheld or vehicle-mounted manner; or may be deployed on the water; or may be deployed on an airplane, a balloon, or a satellite in the air. Application scenarios of the radio access network device and the terminal are not limited in the embodiments of this application. The embodiments of this application are applicable to downlink signal transmission, or uplink signal transmission, or device-to-device (D2D) signal transmission. For downlink signal transmission, a sending device is a radio access network device, and a corresponding receiving device is a terminal. For uplink signal transmission, a sending device is a terminal, and a corresponding receiving device is a radio access network device. For D2D signal transmission, a sending device is a terminal, and a corresponding receiving device is also a terminal. A signal transmission direction is not limited in the embodiments of this application. Communication may be performed between a radio access network device and a terminal and between terminals by using a licensed spectrum, or by using an unlicensed spectrum, or by using both a licensed spectrum and an unlicensed spectrum. Communication may be performed between the radio access network device and the terminal and between the terminals by using a spectrum below 6 GHz, by using a spectrum above 6 GHz, or by using both a spectrum below 6 GHz and a spectrum above 6 GHz. A spectrum resource used between the radio access network device and the terminal is not limited in the embodiments of this application. Optionally, the system100shown inFIG.1may be applied to a 5G network and another possible future network. This is not specifically limited in this embodiment of this application. The system100shown inFIG.1is applied to a 5G network. In this case, as shown inFIG.2, for example, the session management network element may be an SMF202in 5G, the user plane network element may be a UPF208in 5G, the terminal may be UE in 5G, and the application network element may be an AF210in 5G. FIG.2is a diagram of a scenario to which an embodiment of this application is applied. As shown inFIG.2, the system200includes an AMF201, a session management function device (SMF)202, a radio access network (RAN)203, an authentication server function (AUSF)204, a unified data management device (UDM)205, a policy control function device (PCF)206, a data network (DN)207, a user plane function device (UPF)208, user equipment (UE)209, and an application function (AF)210. The UE209is connected to the AMF201by using an N1 interface, and the UE209is connected to the RAN203by using a radio resource control (RRC) protocol. The RAN203is connected to the AMF201by using an N2 interface, and the RAN203is connected to the UPF208by using an N3 interface. A plurality of UPFs208are connected to each other by using an N9 interface, the UPF208is connected to the DN207by using an N6 interface, and the UPF208is connected to the SMF202by using an N4 interface.
The SMF202is connected to the PCF206by using an N7 interface, the SMF202is connected to the UDM205by using an N10 interface, and the SMF202is connected to the AMF201by using an N11 interface. A plurality of AMFs201are connected to each other by using an N14 interface, the AMF201is connected to the UDM205by using an N8 interface, the AMF201is connected to the AUSF204by using an N12 interface, and the AMF201is connected to the PCF206by using an N15 interface. The AUSF204is connected to the UDM205by using an N13 interface. The AMF201and the SMF202obtain user subscription data from the UDM205by using the N8 interface and the N10 interface respectively, and obtain policy data from the PCF206by using the N15 interface and the N7 interface respectively. The AF210is connected to the PCF206by using an N5 interface. The SMF202controls the UPF208by using the N4 interface. It should be noted that the naming of each network element (such as the SMF202, the AF210, or the UPF208) included inFIG.2is only a name, and the name does not constitute any limitation on a function of the network element. In a 5G network and another future network, the foregoing network elements may alternatively have other names. This is not specifically limited in the embodiments of this application. For example, in a 6G network, some or all of the foregoing network elements may still use terms in 5G, or may use other names, or the like. This is uniformly described herein. Details are not described in the following. For specific working processes and beneficial effects of the network elements in the systems inFIG.1andFIG.2, refer to descriptions in the following method embodiments. FIG.3is a schematic block diagram of a computer device300(or a link quality obtaining apparatus) to which an embodiment of this application is applied. The session management network element, the first device (including the terminal and/or the user plane network element), or the application network element inFIG.1may be implemented by the computer device inFIG.3. Alternatively, the SMF202, the UPF208, the AF210, or the UE209inFIG.2may be implemented by the computer device inFIG.3. As shown inFIG.3, the computer device includes a processor301, a memory302, and a transceiver303. The processor301, the memory302, and the transceiver303communicate with each other through an internal connection path, and transfer control and/or data signals. It may be understood that, although not shown, the computer device300may further include another apparatus, such as an input apparatus, an output apparatus, or a battery. Optionally, in some embodiments, the memory302may store an executable instruction for performing the method in the embodiments of this application. The processor301may execute the instruction stored in the memory302in combination with other hardware (such as the transceiver303) to complete the steps performed in the method shown below. For a specific working process and beneficial effects, refer to descriptions in the following method embodiments. The method disclosed in the embodiments of this application may be applied to a processor, or implemented by a processor. The processor may be an integrated circuit chip and has a signal processing capability. In an implementation process, the steps in the foregoing method may be implemented by using a hardware integrated logical circuit in the processor, or by using instructions in a form of software. 
The processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logical device, a discrete gate or transistor logic device, or a discrete hardware component. It may implement or perform the methods, the steps, and logical block diagrams that are disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. Steps of the methods disclosed with reference to the embodiments of this application may be directly executed and accomplished by means of a hardware decoding processor, or may be executed and accomplished by using a combination of hardware and software modules in the decoding processor. A software module may be located in a mature storage medium in the art, such as a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable read-only memory, an electrically erasable programmable memory, a register, or the like. The storage medium is located in the memory, and a processor reads instructions in the memory and completes the steps in the foregoing method in combination with hardware of the processor. The computer device300may be a general-purpose computer device or a dedicated computer device. In a specific implementation, the computer device300may be a desktop computer, a portable computer, a network server, a palmtop computer (PDA), a mobile phone, a tablet computer, a wireless terminal device, a communications device, an embedded device, or a device having a structure similar to that inFIG.3. A type of the computer device300is not limited in this embodiment of this application. FIG.4is a schematic flowchart of a link quality obtaining method400according to an embodiment of this application. As shown inFIG.4, the method400includes the following steps. S410. A session management network element determines a monitoring link, where the monitoring link is used to detect quality of service of a service path between a first communications device and a second communications device. Optionally, the monitoring link may be replaced by: a QoS monitoring link, a QoS detection link, a QoS monitoring connection, a QoS detection connection, a QoS detection session, a QoS monitoring session, an LQAP connection, an LQAP session, a network control protocol (NCP) link, an NCP monitoring link, an NCP connection, an NCP session, and other similar expressions that can detect the quality of service. This is not limited in this embodiment of this application. Optionally, the service path may be equivalently replaced by a QoS flow, a packet data unit (PDU) session, or a service flow; or another similar expression that can be derived, such as an (end-to-end) transmission path or a transmission resource corresponding to or used for the QoS flow, the PDU session, or the service flow. This is not limited in this embodiment of this application. 
Optionally, the quality of service of the service path may be equivalently replaced by: quality of the service path, transmission quality or quality of service of a service corresponding to the service path, transmission quality of the service path, quality of service performance of a service transmitted on the service path, transmission quality of a service transmitted on the service path (for example, quality of service performance of a data packet transmitted on a service path on which a detection packet is actually transmitted), and other similar expressions that can be derived. This is not limited in this embodiment of this application. Optionally, "detect" in the sentence that the monitoring link is used to detect quality of service of a service path between a first communications device and a second communications device may be replaced by another similar expression that has a function of obtaining quality of service of a link, such as survey, monitor, supervise, measure, calculate, or decide. This is not limited in this embodiment of this application. Optionally, the session management network element may be the SMF202inFIG.2, or a device or a function entity that has a session management function. This is not limited. Optionally, the first communications device and the second communications device are uplink and downlink devices for each other. For example, the first communications device may be understood as a terminal. The second communications device may be understood as a user plane network element (for example, a UPF). Alternatively, the first communications device may be understood as a user plane network element, and the second communications device may be understood as a terminal. Optionally, a function of the monitoring link is to detect the quality of service of the service path between the first communications device and the second communications device. It should be noted that the monitoring link may be understood as a logical detection link established between the first communications device and the second communications device on a transmission link of a service or a service flow (for example, a URLLC service), and the logical detection link and the service use a same end-to-end transmission and processing resource, including a processing resource in a base station, a processing resource in the UPF, a transmission path between UE and the base station, and a transmission path between the base station and the UPF. Optionally, a network side control plane may indicate the first communications device and the second communications device to establish the logical detection link, or the first communications device and the second communications device may spontaneously establish the logical detection link. This is not limited. It should be noted that the monitoring link may be specially used to detect the quality of service. Alternatively, the monitoring link may be understood as a general transmission path, and may further perform another function in addition to QoS detection. For example, for a path corresponding to a network control protocol NCP, in addition to QoS monitoring, the NCP protocol may be further used for a function such as bicasting. This is not limited in this embodiment of this application. Alternatively, the monitoring link may be a service transmission path, and in this case, an identifier of the monitoring link is a service identifier. Therefore, the detection packet and a service packet may use a same end-to-end transmission and processing resource.
Optionally, the first communications device and the second communications device may detect the quality of service information by sending detection packets to each other. Specifically, on the monitoring link, the first communications device and the second communications device send detection packets to each other, and the quality of service of the monitoring link and transmission quality of a pipe in which the monitoring link is located can be determined based on an arrival status of the detection packet. The pipe corresponds to the end-to-end transmission and processing resource described above. In other words, the quality of service of the monitoring link that is determined by using the detection packet may reflect quality of service of the service. A 5G network introduces a link quality awareness protocol (LQAP) for the URLLC service. Optionally, the monitoring link may be an LQAP logical monitoring link established between the first communications device and the second communications device, and the monitoring link is identified by using an LQAP identifier (ID). For ease of understanding, the following uses an uplink case as an example to describe a process of sending a detection packet. For example, the first communications device is a terminal, and the second communications device is a UPF. After establishing an LQAP connection to the UPF, the terminal can obtain a context of the LQAP connection. The context includes an LQAP ID, a sending rule (for example, periodic sending) of a detection data packet, a construction manner of the detection data packet, and the like. Correspondingly, the UPF can obtain the LQAP ID and an expected acceptance rule for the detection data packet. Then, the terminal sends an LQAP detection packet to the UPF based on the context of the LQAP connection, where the LQAP detection packet carries the LQAP ID. After receiving the LQAP detection packet, the UPF locates the context of the LQAP connection based on the LQAP ID, and obtains the expected acceptance rule from the context. Then, the UPF can determine quality of service of the LQAP connection by comparing an actual arrival status of the LQAP detection packet with the expected acceptance rule. S420. The session management network element sends a first link quality reporting request to a first device, where the first link quality reporting request is used to indicate the first device to report quality of service information of the service path when a reporting policy is met, and the first device includes the first communications device and/or the second communications device. Correspondingly, the first device receives the first link quality reporting request. Optionally, the quality of service information includes the quality of service parameter and/or a link status notification message, and the link status notification message is used to indicate that the quality of service parameter of the service path meets the reporting policy. The link status notification message may specifically indicate that the quality of service parameter detected by the first device meets a corresponding threshold. In other words, the first device may not only report a specific quality of service parameter to the session management network element, but also report, to the session management network element, information indicating that a detected quality of service parameter meets a corresponding threshold. This is not limited. 
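As a concrete (and purely hypothetical) illustration of the uplink detection-packet procedure just described, the short Python sketch below models a UPF-side LQAP context: the context is located by the LQAP ID carried in each detection packet, and the actual arrival status is compared against an expected acceptance rule (here, a simple expected arrival period) to estimate a packet loss rate. The field names and the acceptance rule are assumptions made for illustration.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LqapContext:
        lqap_id: int
        expected_period_s: float         # expected acceptance rule: one packet per period
        last_arrival: Optional[float] = None
        received: int = 0
        missed: int = 0

    def on_detection_packet(ctx: LqapContext, lqap_id: int, now: float) -> None:
        # UPF side: the LQAP ID in the packet locates the connection context.
        assert lqap_id == ctx.lqap_id, "detection packet routed to wrong LQAP context"
        if ctx.last_arrival is not None:
            gap = now - ctx.last_arrival
            # An inter-arrival gap of roughly k periods implies k-1 lost packets.
            ctx.missed += max(0, round(gap / ctx.expected_period_s) - 1)
        ctx.received += 1
        ctx.last_arrival = now

    def packet_loss_rate(ctx: LqapContext) -> float:
        total = ctx.received + ctx.missed
        return ctx.missed / total if total else 0.0

    ctx = LqapContext(lqap_id=12, expected_period_s=0.01)
    for t in (0.00, 0.01, 0.04):     # packets expected at t=0.02 and t=0.03 never arrived
        on_detection_packet(ctx, lqap_id=12, now=t)
    print(packet_loss_rate(ctx))     # 2 missed out of 5 expected -> 0.4

Because the detection packets traverse the same end-to-end transmission and processing resources as the service packets, a loss rate estimated in this way can stand in for the quality of service of the service path.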
In this embodiment of this application, a link quality reporting request (for example, the first link quality reporting request) may be replaced by another description having a similar function, such as a link quality event reporting message, a link quality subscription message, a link quality subscription notification, a link quality notification request, or a link quality event subscription. This is not limited in this embodiment of this application. It should be noted that the quality of service parameter refers to some parameters reflecting quality of service (QoS) of the service path, and is used to represent real-time transmission performance of the service path. For example, the quality of service parameter includes one or more of a packet loss rate, a jitter parameter, a latency parameter, a jitter level, and a bandwidth requirement. Optionally, the quality of service of the service path may refer to transmission performance of the detection packet. Optionally, the quality of service information is obtained by the first communications device or the second communications device by sending a detection packet. For example, the first communications device and the second communications device send detection packets to each other. Herein, for a process of sending the detection packet, refer to the foregoing specific description. It should be noted that a bottom-layer transmission resource or a transmission pipe used for the detection packet is the same as a transmission resource or a transmission pipe used for the service packet. Therefore, the quality of service of the service path may reflect the quality of service of the service. Optionally, the reporting policy may be sent by an application network element to the session management network element. This is not limited. Specifically, the first link quality reporting request includes a link quality reporting period. The reporting policy indicates that the first device reports the quality of service information based on the link quality reporting period. In other words, the session management network element may deliver the link quality reporting period (for example, T is used to represent the link quality reporting period) to the first device, so that the first device performs reporting based on the link quality reporting period. In this way, compared with the solution in the prior art, the session management network element in this embodiment of this application can obtain network performance in real time. It should be understood that periodic reporting is used herein as an example because it is the most likely implementation, but other possibilities are not excluded. For example, a reporting time interval gradually increases or decreases, or has a specific gradient rule. This is not limited in this embodiment of this application. For example, the reporting time interval is 1 ms, 2 ms, 3 ms, 4 ms, or the like. It should be noted that, in a case in which the first device reports the quality of service information based on two factors (the link quality reporting period and the threshold corresponding to the quality of service parameter), when the first device reports the quality of service information based on the link quality reporting period, it may alternatively report, to the session management network element, a notification message indicating whether the quality of service parameter of the service path meets the threshold corresponding to the quality of service parameter.
Specifically, when the link quality reporting period arrives, if the first device detects that the quality of service parameter of the service path in this case meets the threshold corresponding to the quality of service parameter, the quality of service information includes a link status notification message, and the link status notification message is used to indicate that the quality of service parameter of the service path meets the threshold corresponding to the quality of service parameter; or if the first device detects that the quality of service parameter of the service path in this case does not meet the threshold corresponding to the quality of service parameter, the quality of service information includes a link status notification message, and the link status notification message is used to indicate that the quality of service parameter of the service path does not meet the threshold corresponding to the quality of service parameter, so that the session management network element learns of the quality of service information of the service path in real time. Specifically, when the quality of service information is the quality of service parameter of the service path, that the first device meets the reporting policy indicates that the first device detects that the quality of service parameter meets one or more of the following conditions: a latency parameter of the service path is greater than or equal to a latency threshold, where the quality of service parameter includes the latency parameter; a packet loss rate of the service path is greater than or equal to a packet loss rate threshold, where the quality of service parameter includes the packet loss rate; and a jitter parameter of the service path is greater than or equal to a jitter threshold, where the quality of service parameter includes the jitter parameter. In other words, the session management network element may deliver the threshold of the quality of service parameter to the first device, so that the first device performs reporting based on the threshold of the quality of service parameter. This helps the session management network element learn of the quality of service parameter (one or more of the jitter parameter, the packet loss rate, and the latency parameter) of the service path. It should be understood that the foregoing two reporting manners (including period-based reporting and threshold-based reporting) may coexist, or only one of the two reporting manners may exist. This is not limited in this embodiment of this application. In this embodiment of this application, the first device may determine the threshold of the quality of service parameter based on actual requirements of different services. Alternatively, the session management network element may deliver the threshold of the quality of service parameter to the first device. For example, in this embodiment of this application, the latency threshold, the packet loss rate threshold, or the jitter threshold may be determined by the first device based on a service requirement, or may be delivered by the session management network element to the first device. Optionally, the first link quality reporting request includes one or more of the latency threshold, the packet loss rate threshold, and the jitter threshold. The following describes a possible specific case in which the first device reports the quality of service information. 
The following describes possible specific cases in which the first device reports the quality of service information.

(1) If the quality of service information is the quality of service parameter, when the link quality reporting period arrives, the first device reports the quality of service parameter based on the link quality reporting period.

(2) If the quality of service information includes the quality of service parameter and the link status notification message, when the link quality reporting period arrives, the first device reports the quality of service parameter based on the link quality reporting period, and may further report the link status notification message. The link status notification message is used to indicate whether the quality of service parameter exceeds the threshold corresponding to the quality of service parameter.

(3) If the quality of service information includes the quality of service parameter and the link status notification message, and the threshold of the quality of service parameter is delivered by the session management network element to the first device, correspondingly, the first device reports the link status notification message to the session management network element when the link quality reporting period arrives. The link status notification message is used to indicate whether the quality of service parameter meets the threshold of the quality of service parameter.

(4) If the quality of service information includes the quality of service parameter and the link status notification message, and the threshold of the quality of service parameter is delivered by the session management network element to the first device, correspondingly, when detecting that the quality of service parameter exceeds the threshold of the quality of service parameter, the first device reports the link status notification message to the session management network element. The link status notification message is used to indicate that the quality of service parameter exceeds the threshold of the quality of service parameter.

(5) If the quality of service information is the link status notification message, and the threshold of the quality of service parameter is determined by the first device based on a service requirement, when the quality of service parameter exceeds the threshold of the quality of service parameter, the first device sends the link status notification message to the session management network element. The link status notification message is used to indicate that the quality of service parameter exceeds the threshold of the quality of service parameter.

(6) If the quality of service information includes the quality of service parameter and the link status notification message, and the threshold of the quality of service parameter is determined by the first device based on a service requirement, when the quality of service parameter exceeds the threshold of the quality of service parameter, the first device reports the quality of service parameter and the link status notification message to the session management network element. The link status notification message is used to indicate that the quality of service parameter exceeds the threshold of the quality of service parameter.
(7) If the quality of service information includes the quality of service parameter and the link status notification message, after detecting the quality of service parameter, the first device may directly determine whether the quality of service parameter meets a service requirement, and then report, to the session management network element, the quality of service parameter and an indication indicating whether the quality of service parameter meets the service requirement.

It should be understood that the quality of service parameter may be one or more of the latency parameter, the packet loss rate, and the jitter parameter. Correspondingly, the threshold of the quality of service parameter is one or more of the latency threshold, the packet loss rate threshold, and the jitter threshold.

It should be further understood that the foregoing lists only seven possible cases, which do not constitute a limitation on this embodiment of this application. A person skilled in the art may change or deduce a plurality of related implementations based on the foregoing cases, and those implementations also fall within the protection scope of this embodiment of this application.

Optionally, the latency threshold may include an uplink latency threshold and/or a downlink latency threshold. The uplink latency threshold is delivered to the user plane network element, and the downlink latency threshold is delivered to the terminal. In this way, if the user plane network element receives the uplink latency threshold, the user plane network element may perform reporting with reference to the detection packet and the uplink latency threshold. If the terminal receives the downlink latency threshold, the terminal may perform reporting with reference to the detection packet and the downlink latency threshold. Optionally, the uplink latency threshold and the downlink latency threshold may be the same or different. If the uplink latency threshold is the same as the downlink latency threshold, it may be understood that there is only one latency threshold.

Optionally, the packet loss rate threshold may include an uplink packet loss rate threshold and/or a downlink packet loss rate threshold. The uplink packet loss rate threshold is delivered to the user plane network element, and the downlink packet loss rate threshold is delivered to the terminal. In this way, if the user plane network element receives the uplink packet loss rate threshold, the user plane network element may perform reporting with reference to the detection packet and the uplink packet loss rate threshold. If the terminal receives the downlink packet loss rate threshold, the terminal may perform reporting with reference to the detection packet and the downlink packet loss rate threshold. Optionally, the uplink packet loss rate threshold and the downlink packet loss rate threshold may be the same or different. If the uplink packet loss rate threshold is the same as the downlink packet loss rate threshold, it may be understood that there is only one packet loss rate threshold.
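The uplink/downlink split described above can be pictured with the following hypothetical sketch, in which thresholds prefixed "ul_" are delivered to the user plane network element and thresholds prefixed "dl_" to the terminal; the prefixes are an assumption made for illustration:

def split_thresholds(thresholds):
    uplink = {k: v for k, v in thresholds.items() if k.startswith("ul_")}
    downlink = {k: v for k, v in thresholds.items() if k.startswith("dl_")}
    return uplink, downlink

to_upf, to_terminal = split_thresholds({
    "ul_latency_ms": 10.0, "dl_latency_ms": 8.0,
    "ul_packet_loss": 0.01, "dl_packet_loss": 0.01,  # same value: effectively one threshold
})
print(to_upf)       # delivered to the user plane network element
print(to_terminal)  # delivered to the terminal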
Optionally, the jitter parameter threshold may include an uplink jitter parameter threshold and/or a downlink jitter parameter threshold. The uplink jitter parameter threshold is delivered to the user plane network element, and the downlink jitter parameter threshold is delivered to the terminal. In this way, if the user plane network element receives the uplink jitter parameter threshold, the user plane network element may perform reporting with reference to the detection packet and the uplink jitter parameter threshold. If the terminal receives the downlink jitter parameter threshold, the terminal may perform reporting with reference to the detection packet and the downlink jitter parameter threshold. Optionally, the uplink jitter parameter threshold and the downlink jitter parameter threshold may be the same or different. If the uplink jitter parameter threshold is the same as the downlink jitter parameter threshold, it may be understood that there is only one jitter parameter threshold.

S430. When the reporting policy is met, the first device sends a first link quality notification message to the session management network element, where the first link quality notification message includes the quality of service information and an identifier of the monitoring link. Correspondingly, the session management network element receives the first link quality notification message from the first device.

Optionally, the identifier of the monitoring link may be an LQAP ID, but a possibility that the identifier of the monitoring link is a service identifier is not excluded.

Optionally, a link quality notification message (for example, the first link quality notification message) may be replaced by another description having a similar function, such as a link quality response message, a link quality subscription response notification, a link quality notification response, or a link quality event subscription response.

In this embodiment of this application, the session management network element determines the monitoring link, where the monitoring link is used to detect the quality of service of the service path between the first communications device and the second communications device, and sends the first link quality reporting request to the first device, so that the first device reports the quality of service information of the service path when the reporting policy is met, thereby helping the network side obtain network performance.

Optionally, in a first possible implementation, S410includes: determining, by the session management network element based on a quality of service requirement of a service, an identifier of the service corresponding to the service path; and determining, by the session management network element, the monitoring link based on the identifier of the service.

In this embodiment of this application, the service corresponding to the service path may be understood as a service transmitted on the service path. In other words, the service is transmitted by using the service path. It should be understood that the service may be only one of the services transmitted on the service path rather than all of them. This is not limited in this embodiment of this application.

In other words, the session management network element may learn of the identifier of the service based on the quality of service requirement of the current service, and then search, based on the identifier of the service, for context information corresponding to the service. The session management network element determines the identifier (for example, the LQAP ID) of the monitoring link based on the context information, and determines the monitoring link based on the identifier of the monitoring link.
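Returning briefly to S430, one possible shape for the first link quality notification message is sketched below; the field names are hypothetical and only reflect that the message carries the quality of service information together with the identifier of the monitoring link (for example, an LQAP ID):

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class LinkQualityNotification:
    monitoring_link_id: str                       # e.g. an LQAP ID
    qos_parameters: Dict[str, float] = field(default_factory=dict)
    link_status: Optional[str] = None             # optional link status notification

msg = LinkQualityNotification(
    monitoring_link_id="LQAP-42",
    qos_parameters={"latency_ms": 12.0},
    link_status="threshold_met",
)
print(msg)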
Optionally, in a second possible implementation, S410includes: receiving, by the session management network element, reporting indication information from a policy control network element, where the reporting indication information is used to indicate to report the quality of service information; determining, by the session management network element based on the reporting indication information, an identifier of a service corresponding to the service path; and determining, by the session management network element, the monitoring link based on the identifier of the service. Optionally, the reporting indication information carries the identifier of the service. Herein, the policy control network element may be the PCF206inFIG.2.

In other words, the session management network element may learn of the identifier of the service based on an indication of the policy control network element, and then search, based on the identifier of the service, for context information corresponding to the service. The session management network element determines the identifier (for example, the LQAP ID) of the monitoring link based on the context information, and determines the monitoring link based on the identifier of the monitoring link.

Optionally, in a third possible implementation, as shown inFIG.5, the method400further includes:

S411. The session management network element receives a second link quality reporting request from an application network element, where the second link quality reporting request includes an identifier of a service corresponding to the service path.

S410includes: determining, by the session management network element, the monitoring link based on the identifier of the service. Herein, the service is a service transmitted between the first communications device and the second communications device. The identifier of the service may include information such as authentication quintuple information and a QoS flow identifier.

Specifically, the session management network element obtains the identifier of the service based on the second link quality reporting request of the application network element, and determines the monitoring link based on the identifier of the service. In other words, the session management network element determines the monitoring link based on an indication of the application network element. Herein, the application network element may send the second link quality reporting request to the session management network element based on a service requirement and a status of the application network element.

For example, a service corresponding to the foregoing application network element relates to a scenario such as remote robot control.FIG.6is a schematic diagram of an application scenario according to an embodiment of this application. As shown inFIG.6, a remote robot control scenario may be classified into a state 1, a state 2, and a state 3. The state 1 may be understood as an initial startup state. Specifically, a robot intermittently interacts with a network in a startup process, to implement functions such as authentication and authorization, and has a relatively low requirement on a latency. The state 2 may be understood as a non-operation state of the robot or an operation state that has a relatively low requirement on transmission performance such as a latency. In this state, the robot interacts with the network periodically, so that a remote controller learns that the robot is in the non-operation state.
The non-operation state has a low requirement on a latency. Periodic interaction is used to enable a server to learn that the robot is available. The state 3 may be understood as an operation state. The robot performs a corresponding operation (for example, a remote operation such as remote surgery) by receiving a remote instruction. In the operation state, a requirement on network transmission is extremely high. Once the remote controller needs to send an instruction to the robot, the corresponding instruction needs to be sent to the robot within a specified time period. Otherwise, there is a high probability of an accident. Therefore, once the application network element enters the state 3, a requirement on link quality is extremely high in this case.

Before entering the state 3 or when the state 3 has been entered, the application network element may subscribe to a link quality event from the network side, that is, specifically, send the second link quality reporting request to the session management network element, to obtain a real-time quality of service parameter of the robot. Optionally, for another state (the state 1 or the state 2) that does not have a high requirement on link quality, whether a quality of service parameter needs to be subscribed to may be determined based on a requirement. This is not limited.

It should be understood that the technical solutions in the embodiments of this application may be applied to a scenario in which link quality needs to be obtained in real time, for example, a scenario in which link quality or a link event needs to be learned of in real time, such as telemedicine, industrial control, robot control, intelligent control, or automatic communication or control. This is not limited.

In the foregoing three possible implementations, the session management network element determines, based on the identifier of the service, the context information corresponding to the service, and the session management network element determines the monitoring link based on the context information. Specifically, the session management network element may locate, based on the identifier of the service, a local context corresponding to the service, and then obtain the identifier of the monitoring link, for example, the LQAP ID, from information about the local context.

Optionally, if no monitoring link exists, the session management network element needs to establish a monitoring link. For example, if no LQAP ID exists in the local context, the session management network element allocates an LQAP ID to the first communications device and the second communications device, and initiates LQAP link establishment, so that an LQAP link is established between the first communications device and the second communications device. In this way, the first communications device and the second communications device may feed back the quality of service parameter based on the LQAP link.

Optionally, the first link quality reporting request may be sent to the user plane network element in a process of establishing the monitoring link, or may be sent after the monitoring link is established. This is not limited in this embodiment of this application.
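A minimal sketch of the lookup-or-establish behaviour described above, assuming a hypothetical in-memory context store and LQAP ID allocator:

import itertools

_lqap_ids = itertools.count(1)
service_contexts = {"svc-001": {"lqap_id": None}}  # local context per service

def find_or_establish_monitoring_link(service_id):
    ctx = service_contexts[service_id]   # locate the local context of the service
    if ctx["lqap_id"] is None:           # no LQAP ID in the context: no link yet
        ctx["lqap_id"] = "LQAP-%d" % next(_lqap_ids)
        # ...here the session management network element would initiate LQAP
        # link establishment between the two communications devices...
    return ctx["lqap_id"]

print(find_or_establish_monitoring_link("svc-001"))  # LQAP-1 (newly established)
print(find_or_establish_monitoring_link("svc-001"))  # LQAP-1 (reused)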
Optionally, a specific representation form of the identifier of the service may be one or more of the following: an IP 5-tuple, a terminal address, an application address, an application identifier, a terminal identifier, a service flow identifier, a service aggregation flow identifier, a packet data unit (PDU) session ID, and a QoS flow ID.

Optionally, the session management network element may send, to the application network element, the quality of service information reported by the first device, so that the application network element learns of the performance of the link. As shown inFIG.5, the method400further includes the following steps.

S440. The session management network element sends a second link quality notification message to the application network element, where the second link quality notification message includes the quality of service information. Correspondingly, the application network element receives the second link quality notification message.

That is, after receiving the quality of service information (including the first link quality notification message) reported by the first device, the session management network element may send, to the application network element, the quality of service information reported by the first device, so that the application network element perceives the link quality event in real time. In this way, the application network element can highly cooperate with the 5G network.

Optionally, after receiving the quality of service information, the application network element may also perform a corresponding adjustment measure. For example, if the application network element finds that a link is faulty or congested, one or more of the following adjustment measures may be performed:

(1) adjusting a sending rate, for example, reducing the sending rate to reduce impact on bandwidth, to effectively avoid a fault, where for a specific operation, reference may be made to the congestion control technology in the existing transmission control protocol (TCP);

(2) adjusting a codec rate, for example, reducing the codec rate to reduce a requirement of a service on bandwidth, thereby reducing impact on the network by sacrificing a part of quality of service, where for details, reference may be made to an adaptive video picture quality adjustment technology in a video call (for example, a WeChat video call); and

(3) adjusting a non-critical service, for example, disabling a non-critical service to reduce bandwidth usage.

It should be understood that the foregoing three manners are merely examples of some adjustments that may be performed by the application network element, and do not constitute a limitation on this embodiment of this application.
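Purely as an illustration of how an application network element might choose among these three measures, the following sketch uses hypothetical trigger thresholds; the actual decision logic is left open above:

def choose_adjustments(qos):
    actions = []
    if qos.get("packet_loss", 0.0) > 0.05:
        actions.append("reduce sending rate")   # cf. TCP congestion control
    if qos.get("latency_ms", 0.0) > 50.0:
        actions.append("reduce codec rate")     # trade picture quality for bandwidth
    if len(actions) == 2:                       # hypothetical escalation rule
        actions.append("disable non-critical services")
    return actions

print(choose_adjustments({"packet_loss": 0.08, "latency_ms": 60.0}))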
In addition, when receiving the first link quality notification message, the session management network element may learn of a network status of the first device. For example, the network status may be any one of wireless handover (or air interface handover), user plane function UPF reselection, and packet data unit PDU session establishment or reestablishment. The session management network element determines, based on the network status, not to send the second link quality notification message to the application network element.

Specifically, the session management network element determines, based on the network status, that a change of the quality of service parameter is a transient deviation or a normal case (for example, a wireless handover causes a specific latency that is restored to normal after a period of time), and determines not to report the second link quality notification message to the application network element. Certainly, if the session management network element determines, based on the network status, that the change of the quality of service parameter exceeds a normal range, the session management network element needs to report the second link quality notification message to the application network element. In this case, the session management network element may add network status indication information to the second link quality notification message, and feed back the current network status to the application network element, so that the application network element determines, based on the network status, whether the reporting policy needs to be adjusted (for example, adjusting a period or a threshold).

In the foregoing description, when receiving the first link quality notification message from the first device, the session management network element can directly learn of the network status of the first device. In another possible case, the first device adds network status indication information to the first link quality notification message, where the network status indication information is used to indicate the network status of the first device, so that the session management network element learns of the status of the first device based on the network status indication information. This is not limited.
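A sketch of this forwarding decision, assuming the three network statuses named above are modelled as transient states; the names and the normal-range check are hypothetical:

TRANSIENT_STATES = {
    "wireless_handover", "upf_reselection", "pdu_session_reestablishment",
}

def should_forward_to_af(network_status, change_within_normal_range):
    if network_status in TRANSIENT_STATES and change_within_normal_range:
        return False  # expected, temporary deviation: do not notify the AF
    return True       # otherwise forward, optionally with a status indication

print(should_forward_to_af("wireless_handover", True))   # False
print(should_forward_to_af("wireless_handover", False))  # True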
In this embodiment of this application, when the first link quality reporting request includes one or more of the link quality reporting period, the latency threshold, the packet loss rate threshold, and the jitter threshold, the session management network element may further determine whether the reporting policy needs to be updated or adjusted. As shown inFIG.5, the method specifically includes the following steps.

S450. The session management network element determines that the reporting policy needs to be updated.

S460. The session management network element sends a first update message to the first device, where the first update message is used to indicate to update the reporting policy, and the first update message includes one or more of a link quality reporting period update value, a latency threshold update value, a packet loss rate threshold update value, and a jitter threshold update value. Correspondingly, the first device receives the first update message.

The first update message carries the identifier of the monitoring link. This is intended to help the first device search for the monitoring link whose reporting policy needs to be updated, or search for the local context that is of the monitoring link and that corresponds to the identifier of the monitoring link.

It should be understood that which update value is specifically included in the first update message may directly depend on the specific content included in the first link quality reporting request. Certainly, this does not constitute a limitation on this embodiment of this application. An update value included in the first update message may be specific to only some of the content included in the first link quality reporting request.

For example, the first link quality reporting request includes the link quality reporting period, the latency threshold, the packet loss rate threshold, and the jitter threshold, but the first update message may include the link quality reporting period update value, the latency threshold update value, the packet loss rate threshold update value, and the jitter threshold update value, or may include only the latency threshold update value, the packet loss rate threshold update value, and the jitter threshold update value. Herein, which thresholds need to be updated may be determined based on an actual situation of the quality of service information reported by the first device or an actual requirement of the service. This is not limited in this embodiment of this application.

The "update" includes one or more of adding, modifying, and deleting. For example, a threshold corresponding to a quality of service parameter that needs to be reported is added: thresholds corresponding to the reporting policy before the update include a jitter threshold and a latency threshold, and thresholds corresponding to the updated reporting policy include a jitter threshold, a latency threshold, a packet loss rate threshold, and a link quality reporting period. For another example, the reporting policy may be modified. For another example, a threshold corresponding to a quality of service parameter that needs to be reported is deleted: thresholds corresponding to the reporting policy before the update include a jitter threshold, a latency threshold, a packet loss rate threshold, and a link quality reporting period, and thresholds corresponding to the updated reporting policy include a latency threshold and a link quality reporting period.

It should be understood that the examples herein are merely for ease of understanding by a person skilled in the art, and do not constitute any limitation on the embodiments of this application. A person skilled in the art may obtain different solutions through transformation based on the foregoing examples, and all the transformed solutions fall within the protection scope of the embodiments of this application.

Specifically, when determining that the reporting policy needs to be updated, the session management network element sends the first update message to the first device. The first device modifies the specific content corresponding to the reporting policy based on the first update message. This includes one or more of the following updates: updating the link quality reporting period based on the link quality reporting period update value; updating the latency threshold based on the latency threshold update value; updating the packet loss rate threshold based on the packet loss rate threshold update value; and updating the jitter threshold based on the jitter threshold update value.

For example, if the link quality reporting period before the update is 2 milliseconds (ms), and the session management network element changes, based on a QoS requirement, reporting once every 2 ms to reporting once every 5 ms, the updated link quality reporting period is 5 ms. For another example, the session management network element may modify a specific threshold based on a change of a QoS requirement, including modifying one or more of the latency threshold, the packet loss rate threshold, and the jitter threshold. In this way, the first device may report the quality of service information of the service path based on the updated reporting policy.
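A minimal sketch of applying such a first update message, assuming the reporting policy and the update are plain dictionaries; fields absent from the update leave the current policy unchanged:

def apply_update(policy, update):
    for key in ("period_ms", "latency_ms", "packet_loss", "jitter_ms"):
        if key in update:        # the update may cover only some of the content
            policy[key] = update[key]
    return policy

policy = {"period_ms": 2, "latency_ms": 10.0}
print(apply_update(policy, {"period_ms": 5}))  # the period changes from 2 ms to 5 ms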
It should be understood that which content is specifically updated by the first device may depend on the content carried in the first update message.

Further, as shown inFIG.5, the method400further includes:

S451. The session management network element receives a second update message from the application network element, where the second update message carries the identifier of the service, and the second update message includes one or more of a link quality reporting period update value, a latency threshold update value, a packet loss rate threshold update value, and a jitter threshold update value.

The determining, by the session management network element, that the reporting policy needs to be updated includes: determining, by the session management network element, the monitoring link based on the identifier of the service; and determining, by the session management network element, that the reporting policy of the monitoring link needs to be updated.

Specifically, the session management network element may determine, based on an indication of the application network element, to update the reporting policy. Herein, to locate the monitoring link whose reporting policy needs to be updated, the application network element needs to add the identifier of the service to the second update message. In this way, the session management network element can locate, based on the identifier of the service, a local context corresponding to the service, and then obtain the identifier of the monitoring link, for example, the LQAP ID, from information about the local context, to determine the monitoring link, and determine that the reporting policy of the monitoring link needs to be updated.

Optionally, the method400further includes: evaluating, by the session management network element, link quality of the monitoring link based on the first link quality notification message; and sending, by the session management network element, a repair indication to the first device based on an evaluation result, where the repair indication is used to indicate the first device to optimize or repair the service path, for example, indicate the first device to perform bicasting. Herein, the service path that needs to be optimized or repaired may be understood as a transmission link or a bearer corresponding to a service.

Specifically, the session management network element may evaluate the quality of service of the monitoring link based on the quality of service information reported by the first device and a corresponding quality of service requirement. For example, when the quality of service information is a link status notification message, if the link status notification message indicates that the quality of service parameter meets a corresponding threshold, the session management network element performs link repair. If the link status notification message indicates that the quality of service parameter does not meet the corresponding threshold, the session management network element may determine that the quality of service parameter of the service path is normal, and may determine, based on an actual situation of a service requirement, whether to perform link repair. For another example, when the quality of service information is a quality of service parameter, if the reported quality of service parameter exceeds a corresponding threshold, the session management network element performs link repair.
If the reported quality of service parameter does not exceed the corresponding threshold but approaches it, the session management network element makes a comprehensive decision based on the network status. If it determines that the quality of service parameter is normal, link repair is not performed. If, however, the network status is normal at this time and no handover is performed, the session management network element considers that the quality of service parameter is abnormal and that a link fault may subsequently be caused, and therefore initiates link repair.

That the session management network element initiates link repair means that the session management network element sends a repair indication to the first device, where the repair indication is used to indicate the first device to optimize or repair the service path. For example, if the service already has a plurality of connections but a standby connection is not enabled, the session management network element notifies the first device to enable the standby connection to perform bicasting. If the service is currently in a single-connection scenario, the session management network element establishes a new transmission path for the service, so that the first device transmits data of the service by using the current path and the new transmission path together, for example, in a dual-connection scenario. Alternatively, the session management network element performs another effective repair measure, provided that normal transmission of the service is not affected. This is not limited in this embodiment of this application.
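The evaluation logic above can be condensed into the following sketch for a single scalar parameter; the "approaches the threshold" margin of 90 percent is a hypothetical choice, not a value given in this embodiment:

def decide_repair(value, threshold, network_status_normal, margin=0.9):
    if value >= threshold:
        return True                    # threshold exceeded: initiate link repair
    if value >= margin * threshold:    # approaches the threshold
        # With no handover or other explaining event, treat as abnormal.
        return network_status_normal
    return False                       # clearly normal: no repair

print(decide_repair(9.5, 10.0, network_status_normal=True))   # True
print(decide_repair(9.5, 10.0, network_status_normal=False))  # False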
The following describes, with reference to a specific example, a case in which the quality of service information of the service path is the quality of service parameter of the service path. It should be understood that the example herein is only intended to help a person skilled in the art understand the technical solutions of the embodiments of this application, and does not constitute any limitation on the embodiments of this application. In the following example, an SMF is used as the session management network element, the first device includes a terminal and a UPF, an AF is used as the application network element, and a PCF is used as the policy control network element. This is uniformly described herein.

As shown inFIG.7, the method includes the following steps.

701. The AF sends a second link quality reporting request to the SMF. Correspondingly, the SMF receives the second link quality reporting request. The second link quality reporting request includes an identifier of a service corresponding to a service path. Optionally, the second link quality reporting request may include a reporting policy, including threshold-based reporting and/or period-based reporting. For a description of the reporting policy, refer to the foregoing description.

702. The SMF determines a monitoring link. The SMF may determine the monitoring link based on the identifier of the service in step701, or may determine the monitoring link based on a quality of service requirement of the service. This is not limited. Herein, for a process of determining the monitoring link, refer to the description in the foregoing embodiment. Details are not described herein again.

703. The SMF sends a first link quality reporting request to the UPF. Correspondingly, the UPF receives the first link quality reporting request from the SMF. The first link quality reporting request is used to indicate the UPF to report a quality of service parameter of the service path when the reporting policy is met. Optionally, the first link quality reporting request includes a link quality reporting period. Optionally, the first link quality reporting request includes one or more of a link quality reporting period, an uplink latency threshold, an uplink packet loss rate threshold, and an uplink jitter parameter threshold.

704. The SMF sends the first link quality reporting request to the terminal. Correspondingly, the terminal receives the first link quality reporting request from the SMF. The first link quality reporting request is used to indicate the terminal to report the quality of service parameter of the service path when the reporting policy is met. Optionally, the first link quality reporting request includes a link quality reporting period. Optionally, the first link quality reporting request includes one or more of a link quality reporting period, a downlink latency threshold, a downlink packet loss rate threshold, and a downlink jitter parameter threshold.

It should be understood that a message format of the first link quality reporting request in step703may be the same as a message format of the first link quality reporting request in step704, and the carried content may not be completely the same. For example, an uplink threshold is sent for the UPF, and a downlink threshold is sent for the terminal, but both requests carry an ID of the monitoring link, for example, an LQAP ID. It should be further understood that the foregoing description is provided only by using an example in which the uplink threshold is sent for the UPF and the downlink threshold is sent for the terminal. Alternatively, both the uplink threshold and the downlink threshold may be sent to the UPF or the terminal. This is not limited.

705. The terminal and the UPF perform link quality detection. Herein, that the terminal and the UPF perform link quality detection includes: the terminal and the UPF send detection packets to each other, and determine link quality based on arrival statuses of the detection packets. Specifically, for example, the terminal is a transmit end and the UPF is a receive end. When a monitoring link is established between the terminal and the UPF, the terminal sends a detection packet to the UPF. The UPF receives, based on an expected acceptance rule, the detection packet sent by the terminal. The UPF may determine the quality of the monitoring link by comparing the arrival status of the detection packet with the expected acceptance rule. It should be understood that the transmit end and the receive end may be interchanged, that is, both the terminal and the UPF can detect the quality of the monitoring link. This is not limited in this embodiment of this application.

706. The terminal sends a first link quality notification message to the SMF. Correspondingly, the SMF receives the first link quality notification message from the terminal. The first link quality notification message includes one or more of a downlink latency parameter, a downlink packet loss rate, and a downlink jitter parameter. Optionally, the specific content reported by the terminal to the SMF may correspond to the thresholds received in step704. Optionally, the terminal performs periodic reporting based on the link quality reporting period.
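Before step707, the detection of step705can be pictured with the following sketch, which derives a packet loss rate and an average latency by comparing the received detection packets with an expected acceptance rule (here simply: one packet per sequence number); all names are hypothetical:

def evaluate_detection(sent_seq_nums, received):
    """received maps a sequence number to the measured latency in ms."""
    lost = [s for s in sent_seq_nums if s not in received]
    loss_rate = len(lost) / len(sent_seq_nums)
    avg_latency = sum(received.values()) / len(received) if received else None
    return {"packet_loss": loss_rate, "latency_ms": avg_latency}

print(evaluate_detection([1, 2, 3, 4], {1: 5.0, 2: 6.0, 4: 5.5}))
# {'packet_loss': 0.25, 'latency_ms': 5.5}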
707. The UPF sends a first link quality notification message to the SMF. Correspondingly, the SMF receives the first link quality notification message from the UPF. The first link quality notification message includes one or more of an uplink latency parameter, an uplink packet loss rate, and an uplink jitter parameter. Optionally, the specific content reported by the UPF to the SMF may correspond to the thresholds received in step703. Optionally, the UPF performs periodic reporting based on the link quality reporting period.

708. The SMF sends a second link quality notification message to the AF. Correspondingly, the AF receives the second link quality notification message. The SMF may send the content received in step706and/or step707to the AF, so that the AF learns of the quality of service parameter.

709. The SMF determines that a reporting policy needs to be updated. The SMF may determine, based on a quality of service requirement of a service, that the reporting policy needs to be updated. Alternatively, the SMF may determine, based on an indication of the AF, that the reporting policy needs to be updated. Optionally, before step709, the SMF receives the second update message sent by the AF, and then determines, based on the second update message, that the reporting policy needs to be updated.

710. The AF sends a second update message to the SMF. Correspondingly, the SMF receives the second update message. The second update message carries the identifier of the service. The second update message includes one or more of a reporting period update value, an uplink latency threshold update value, an uplink packet loss rate threshold update value, and an uplink jitter threshold update value.

711. The SMF sends a first update message to the UPF. Correspondingly, the UPF receives the first update message. The first update message is used to indicate the UPF to update the reporting policy. Optionally, the first update message includes one or more of a link quality reporting period update value, an uplink latency threshold update value, an uplink packet loss rate threshold update value, and an uplink jitter threshold update value.

712. The SMF sends a first update message to the terminal. Correspondingly, the terminal receives the first update message. The first update message is used to indicate the terminal to update the reporting policy. Optionally, the first update message includes one or more of a link quality reporting period update value, a downlink latency threshold update value, a downlink packet loss rate threshold update value, and a downlink jitter threshold update value.

It should be understood that a message format of the first update message in step711may be the same as a message format of the first update message in step712, and the carried content may not be completely the same. For example, an uplink threshold is sent for the UPF, and a downlink threshold is sent for the terminal, but both messages carry an ID of the monitoring link, for example, an LQAP ID.

It should be noted that the SMF may determine the monitoring link based on the indication of the AF (step701), or the SMF may determine the monitoring link based on an indication of the PCF. For example, before step702, step713is performed, that is, the PCF sends reporting indication information to the SMF. The reporting indication information is used to indicate to report the quality of service parameter. The reporting indication information includes the identifier of the service corresponding to the service path.
Optionally, the SMF may determine the monitoring link based on the reporting indication information sent by the PCF.

In this embodiment of this application, the SMF determines the monitoring link, and notifies the UE and the UPF to report the quality of service parameter of the monitoring link when the reporting policy is met, so that network performance can be learned of in real time. Further, the SMF may update the reporting policy. The SMF may report, to the AF, the quality of service parameter reported by the UE and the UPF, so that the AF can also learn of the quality of service parameter in real time. This helps improve the capability of the AF to perceive network performance, so that the AF can highly cooperate with a 5G network.

FIG.8is a schematic diagram of another example according to an embodiment of this application. As shown inFIG.8, the actions performed in steps701to707and713are the same as those performed in the corresponding steps inFIG.7. For brevity, details are not described herein again. A difference lies in that the foregoing example may further include the following steps.

714. The SMF evaluates link quality. The SMF may evaluate the link quality of the monitoring link based on the quality of service parameter reported by the terminal and/or the UPF.

715. The SMF sends a repair indication to the UPF. Correspondingly, the UPF receives the repair indication from the SMF. The repair indication is used to indicate the UPF to optimize or repair the service path.

716. The SMF sends a repair indication to the terminal. Correspondingly, the terminal receives the repair indication from the SMF. The repair indication is used to indicate the terminal to optimize or repair the service path. For a specific repair process, refer to the foregoing description.

It should be understood that the examples inFIG.7andFIG.8are merely intended to help a person skilled in the art understand the embodiments of this application, but are not intended to limit the embodiments of this application to the specific scenarios of the examples. A person skilled in the art can clearly make various equivalent modifications or changes based on the examples shown inFIG.7andFIG.8, and such modifications or changes also fall within the scope of the embodiments of this application.

In this embodiment of this application, the SMF evaluates the link quality of the monitoring link, to optimize a transmission link or a bearer between the UE and the UPF, thereby ensuring normal service transmission.

It should be understood that the solutions in the embodiments of this application may be combined for use, and explanations or descriptions of terms in the embodiments may be cross-referenced or used to explain one another. This is not limited.

The foregoing describes the link quality obtaining method according to the embodiments of this application, and the following describes an apparatus according to the embodiments of this application.

FIG.9is a schematic block diagram of a link quality obtaining apparatus900according to an embodiment of this application. Optionally, a specific form of the apparatus900may be a general-purpose computer device or a chip in a general-purpose computer device. This is not limited in this embodiment of this application.
The apparatus900is a session management network element, and the apparatus900includes: a determining module910, configured to determine a monitoring link, where the monitoring link is used to detect quality of service of a service path between a first communications device and a second communications device; and a transceiver module920, configured to send a first link quality reporting request to a first device, where the first link quality reporting request is used to indicate the first device to report quality of service information of the service path when a reporting policy is met, and the first device includes the first communications device and/or the second communications device.

The transceiver module920is further configured to receive a first link quality notification message from the first device, where the first link quality notification message includes the quality of service information and an identifier of the monitoring link.

Optionally, the quality of service information includes a quality of service parameter and/or a link status notification message, and the link status notification message is used to indicate that the quality of service parameter of the service path meets the reporting policy.

Optionally, the quality of service information is obtained by the first communications device or the second communications device by sending a detection packet.

Optionally, the first link quality reporting request includes a link quality reporting period. The reporting policy indicates that the first device reports the quality of service information based on the link quality reporting period.

Optionally, when the quality of service information is the quality of service parameter of the service path, that the first device meets the reporting policy indicates that the first device detects that the quality of service parameter meets one or more of the following conditions: a latency parameter of the service path is greater than or equal to a latency threshold, where the quality of service parameter includes the latency parameter; a packet loss rate of the service path is greater than or equal to a packet loss rate threshold, where the quality of service parameter includes the packet loss rate; and a jitter parameter of the service path is greater than or equal to a jitter threshold, where the quality of service parameter includes the jitter parameter.

Optionally, the latency threshold, the packet loss rate threshold, or the jitter threshold is determined by the first device based on a service requirement. Alternatively, the first link quality reporting request includes one or more of the latency threshold, the packet loss rate threshold, and the jitter threshold.

Optionally, the latency threshold includes an uplink latency threshold and/or a downlink latency threshold. The packet loss rate threshold includes an uplink packet loss rate threshold and/or a downlink packet loss rate threshold. The jitter threshold includes an uplink jitter threshold and/or a downlink jitter threshold.

Optionally, the transceiver module920is further configured to receive a second link quality reporting request from an application network element, where the second link quality reporting request includes an identifier of a service corresponding to the service path. Correspondingly, that the determining module is configured to determine a monitoring link specifically includes: determining the monitoring link based on the identifier of the service.
Optionally, the transceiver module920is further configured to send a second link quality notification message to the application network element, where the second link quality notification message includes the quality of service information.

Optionally, the determining module910is further configured to: when the first link quality notification message is received, determine that a network status is any one of wireless handover, user plane function UPF reselection, and packet data unit PDU session establishment or reestablishment; and determine not to send the second link quality notification message to the application network element.

Optionally, when the first link quality reporting request includes one or more of a link quality reporting period, a latency threshold, a packet loss rate threshold, and a jitter threshold, the determining module910is further configured to determine that the reporting policy needs to be updated. Correspondingly, the transceiver module920is further configured to send a first update message to the first device, where the first update message is used to indicate to update the reporting policy, and the first update message includes one or more of a link quality reporting period update value, a latency threshold update value, a packet loss rate threshold update value, and a jitter threshold update value.

Optionally, the transceiver module920is further configured to receive a second update message from the application network element, where the second update message carries the identifier of the service, and the second update message includes one or more of a link quality reporting period update value, a latency threshold update value, a packet loss rate threshold update value, and a jitter threshold update value. Correspondingly, that the determining module910is configured to determine that the reporting policy needs to be updated specifically includes: determining the monitoring link based on the identifier of the service; and determining that the reporting policy of the monitoring link needs to be updated.

Optionally, that the determining module910is configured to determine a monitoring link specifically includes: determining, based on a quality of service requirement of a service, an identifier of a service corresponding to the service path; and determining the monitoring link based on the identifier of the service.

Optionally, the transceiver module920is further configured to receive reporting indication information from a policy control network element, where the reporting indication information is used to indicate to report the quality of service information. Correspondingly, that the determining module is configured to determine a monitoring link specifically includes: determining, based on the reporting indication information, an identifier of a service corresponding to the service path; and determining the monitoring link based on the identifier of the service.

Optionally, that the determining module is configured to determine the monitoring link based on the identifier of the service specifically includes: determining, based on the identifier of the service, context information corresponding to the service; and determining the monitoring link based on the context information.

Optionally, the apparatus900further includes: an evaluation module, configured to evaluate link quality of the monitoring link based on the first link quality notification message.
Correspondingly, the transceiver module920is configured to send a repair indication to the first device based on an evaluation result, where the repair indication is used to indicate the first device to optimize the service path.

It should be understood that the link quality obtaining apparatus900according to this embodiment of this application may correspond to the method of the session management network element in the foregoing method embodiment, and the foregoing and other operations and/or functions of the modules in the apparatus900are respectively intended to implement the corresponding steps of the method of the session management network element in the foregoing method embodiment, and therefore can also achieve the beneficial effects of the foregoing method embodiment. For brevity, details are not described herein.

It should also be understood that, in this embodiment, the apparatus900is presented in a form of function modules. The "module" herein may be an application-specific integrated circuit (ASIC), a circuit, a processor that executes one or more software or firmware programs, a memory, an integrated logic circuit, and/or another component that may provide the foregoing functions.

In a simple embodiment, a person skilled in the art can figure out that the apparatus900may be in the form shown inFIG.3. The determining module910may be implemented by using the processor301and the memory302shown inFIG.3. The transceiver module920may be implemented by using the transceiver303shown inFIG.3. Specifically, the functions of the modules are implemented by the processor executing a computer program stored in the memory.

Optionally, when the apparatus900is a chip, a function and/or an implementation process of the transceiver module920may alternatively be implemented by using a pin or a circuit. Optionally, the memory is a storage unit in the chip, for example, a register or a cache. The storage unit may alternatively be a storage unit, such as the memory302shown inFIG.3, that is in the computer device and that is located outside the chip.

A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of this application.

FIG.10is a schematic block diagram of a link quality obtaining apparatus1000according to an embodiment of this application. Optionally, a specific form of the apparatus1000may be a general-purpose computer device or a chip in a general-purpose computer device. This is not limited in this embodiment of this application.
The apparatus1000is a first device, and the apparatus1000includes: a transceiver module1010, configured to receive a first link quality reporting request from a session management network element, where the first link quality reporting request is used to indicate the first device to report quality of service information of a service path when it is detected that a monitoring link meets a reporting policy, and the monitoring link is used to detect quality of service of the service path between the first device and a peer end of the first device.

The transceiver module1010is further configured to send a first link quality notification message to the session management network element when the reporting policy is met, where the first link quality notification message includes the quality of service information and an identifier of the monitoring link.

Optionally, the quality of service information includes a quality of service parameter and/or a link status notification message, and the link status notification message is used to indicate that the quality of service parameter of the service path meets the reporting policy.

Optionally, the apparatus1000further includes a determining module1020, configured to determine the quality of service information by sending a detection packet to the peer end of the first device.

Optionally, the first link quality reporting request includes a link quality reporting period. That the transceiver module1010sends a first link quality notification message to the session management network element when the reporting policy is met specifically includes: sending the first link quality notification message to the session management network element based on the link quality reporting period.

Optionally, the quality of service information is a quality of service parameter of the service path, and that the transceiver module1010sends a first link quality notification message to the session management network element when the reporting policy is met specifically includes: sending the first link quality notification message to the session management network element when the first device detects that the quality of service parameter meets one or more of the following conditions: a latency parameter of the service path is greater than or equal to a latency threshold, where the quality of service parameter includes the latency parameter; a packet loss rate of the service path is greater than or equal to a packet loss rate threshold, where the quality of service parameter includes the packet loss rate; and a jitter parameter of the service path is greater than or equal to a jitter threshold, where the quality of service parameter includes the jitter parameter.

Optionally, the latency threshold, the packet loss rate threshold, or the jitter threshold is determined by the first device based on a service requirement. Alternatively, the first link quality reporting request includes one or more of the latency threshold, the packet loss rate threshold, and the jitter threshold.

Optionally, the latency threshold includes an uplink latency threshold and/or a downlink latency threshold. The packet loss rate threshold includes an uplink packet loss rate threshold and/or a downlink packet loss rate threshold. The jitter threshold includes an uplink jitter threshold and/or a downlink jitter threshold.
Optionally, when the first link quality reporting request includes one or more of a link quality reporting period, a latency threshold, a packet loss rate threshold, and a jitter threshold, the transceiver module1010is further configured to receive a first update message from the session management network element, where the first update message is used to instruct the first device to update the reporting policy, and the first update message includes one or more of a link quality reporting period update value, a latency threshold update value, a packet loss rate threshold update value, and a jitter threshold update value.

Optionally, the transceiver module1010is further configured to receive a repair indication from the session management network element, where the repair indication is used to instruct the first device to optimize the service path. The apparatus further includes an optimization module, configured to optimize the service path.

Optionally, the apparatus1000is a terminal or a user plane function UPF network element.

It should be understood that the link quality obtaining apparatus1000according to this embodiment of this application may correspond to the method of the first device in the foregoing method embodiment, and the foregoing and other management operations and/or functions of the modules in the apparatus1000are respectively intended to implement the corresponding steps of the method of the first device in the foregoing method embodiment, and can therefore also achieve the beneficial effects of the foregoing method embodiment. For brevity, details are not described herein again.

It should also be understood that, in this embodiment, the apparatus1000is presented in the form of function modules. The “module” herein may be an application-specific integrated circuit ASIC, a circuit, a processor that executes one or more software or firmware programs, a memory, an integrated logic circuit, and/or another component that can provide the foregoing functions.

In a simple embodiment, a person skilled in the art can figure out that the apparatus1000may be in the form shown inFIG.3. The determining module1020may be implemented by using the processor301and the memory302shown inFIG.3. The transceiver module1010may be implemented by using the transceiver303shown inFIG.3. Specifically, the functions of the processor are implemented by executing a computer program stored in the memory.

Optionally, when the apparatus1000is a chip, a function and/or an implementation process of the transceiver module1010may alternatively be implemented by using a pin or a circuit. Optionally, the memory is a storage unit in the chip, for example, a register or a cache. The storage unit may alternatively be a storage unit, such as the memory302shown inFIG.3, that is in the computer device and that is located outside the chip.

A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that such an implementation goes beyond the scope of this application.
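Tying back to the first update message described at the start of this passage, the sketch below illustrates how a first device might merge received update values into its stored reporting policy, overwriting only the fields actually carried in the message. The dictionary layout and field names are assumptions made for illustration only.

```python
# Sketch of applying a first update message to an existing reporting
# policy; only the update values present in the message are applied.
def apply_update_message(policy: dict, update: dict) -> dict:
    """Overwrite only the thresholds/period carried in the update."""
    updatable = ("reporting_period_s", "latency_threshold_ms",
                 "packet_loss_threshold", "jitter_threshold_ms")
    for key in updatable:
        if key in update:
            policy[key] = update[key]
    return policy


if __name__ == "__main__":
    policy = {"reporting_period_s": 10, "latency_threshold_ms": 20.0}
    update = {"latency_threshold_ms": 30.0}      # latency threshold update value
    print(apply_update_message(policy, update))  # period kept, latency replaced
```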
Unless otherwise specified, an expression used in this application similar to an expression that “an item includes at least one of the following: A, B, and C” usually means that the item may be any one of the following cases: A; B; C; A and B; A and C; B and C; A, B, and C; A and A; A, A, and A; A, A, and B; A, A, and C; A, B, and B; A, C, and C; B and B; B, B, and B; B, B, and C; C and C; C, C, and C; and other combinations of A, B, and C. The foregoing uses a total of three elements A, B, and C as an example to describe the optional cases of the item. When the expression is “the item includes at least one of the following: A, B, . . . , and X”, in other words, more elements are included in the expression, the cases to which the item is applicable may also be obtained according to the foregoing rule.

It should be understood that sequence numbers of the foregoing processes do not mean execution sequences in the various embodiments of this application. The execution sequences of the processes should be determined based on the functions and internal logic of the processes, and should not be construed as any limitation on the implementation processes of the embodiments of this application.

It should also be understood that the terms “first” and “second” in the embodiments of this application are only for distinguishing different objects, for example, distinguishing different “link quality requests” or distinguishing different “communications devices”, and do not constitute any limitation on the embodiments of this application.

A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that such an implementation goes beyond the scope of this application.

It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, refer to the corresponding process in the foregoing method embodiments; details are not described herein again.

In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely an example. For example, the unit division is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one position, or may be distributed on a plurality of network units.
Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments. In addition, functional units in the embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit.

When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or some of the technical solutions, may be implemented in the form of a software product. The software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory ROM, a random access memory RAM, a magnetic disk, or an optical disc.

The foregoing descriptions are merely specific implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
11943663
DETAILED DESCRIPTION OF EMBODIMENTS

FIG.1shows a device100for providing a quality of service (QoS) function for a communication service111of an application entity110in a communication network1, according to an embodiment of the disclosure. The communication network may further include the user equipment120and a plurality of network entities112,113, and114.

In some embodiments, the device and/or the application entity and/or the network entities and/or the user equipment120may be configured to communicate with each other, may be located in one or more communication networks, may be located in one or more PLMNs, etc., without limiting the present disclosure to a specific network configuration. Moreover, the plurality of network entities112,113, and114may be based on Radio Access Network (RAN) nodes (e.g., base stations) or Core Network (CN) entities (e.g., NWDAF, SMF, PCF, UPF, Access and Mobility management Function (AMF), etc.).

The device100is configured to transmit a monitoring request101to one or more of: the application entity110, at least one network entity (e.g.,112inFIG.1), and the user equipment120. For example, the monitoring request101may be transmitted to different network nodes, and may further be used for collection of information for determining the QoS of the communication service, the change in the QoS, etc. Moreover, the monitoring requests101may be transmitted to the application entity110and/or the UE120, for example, directly or via another network entity.

The device100is further configured to obtain a monitoring response102from one or more of: the application entity110, the at least one network entity112, and the user equipment120. As discussed, the device100may transmit the monitoring request101to the application entity110(e.g., which may be a vehicle). Moreover, the device100may further obtain the monitoring response102from the application entity110, which may be, e.g., the path of the vehicle, the trajectory of the vehicle, etc. In addition, the device100may transmit the monitoring request101to the UE120(e.g., a vehicle). Moreover, the device100may further obtain the monitoring response102from the UE120, which may be, e.g., the path of the vehicle, the trajectory of the vehicle, etc.

The device100is further configured to determine a change in quality of service103of the at least one communication service111of the at least one application entity110, based on the obtained monitoring response102. For example, the device may obtain the monitoring response from one or more of the network entities to which the monitoring request is transmitted. Moreover, for example, the QoS of the communication service, a change in the QoS of the communication service, a change in the coverage, etc., may be determined.

The device100is further configured to transmit the determined change in the QoS103to the at least one application entity110and/or at least one of the network entities112,113, and114in the communication network1. For example, the device may determine the change in the QoS, and it may further transmit the change in the QoS (e.g., a QoS, a change in QoS, a coverage level, etc.) to the application entity and/or one or more of the network entities, etc.
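The four operations of the device100(transmit a monitoring request, obtain monitoring responses, determine the change in QoS, transmit the determined change) can be pictured as one cycle. The following Python sketch abstracts all transport behind simple callables and reduces the QoS to a single latency figure; every name in it is a hypothetical stand-in rather than an interface defined by the embodiment.

```python
# Illustrative cycle of device 100: request -> responses -> determine
# change in QoS -> notify. Transport and QoS models are assumptions.
from typing import Callable, Iterable


def qos_function_cycle(
    targets: Iterable[str],                 # application entity, network entities, UE
    send_request: Callable[[str], None],
    collect_response: Callable[[str], dict],
    previous_qos: dict,
    notify: Callable[[dict], None],
) -> dict:
    for target in targets:
        send_request(target)                # monitoring request 101
    collected = [collect_response(t) for t in targets]  # monitoring responses 102
    current = {"latency_ms": max(r.get("latency_ms", 0.0) for r in collected)}
    change = {k: current[k] - previous_qos.get(k, current[k]) for k in current}
    notify(change)                          # determined change in QoS 103
    return current


if __name__ == "__main__":
    responses = {"AF": {"latency_ms": 12.0}, "RAN": {"latency_ms": 18.5}}
    qos_function_cycle(
        targets=responses,
        send_request=lambda t: None,        # no-op transport stub
        collect_response=responses.get,
        previous_qos={"latency_ms": 15.0},
        notify=print,                       # prints e.g. {'latency_ms': 3.5}
    )
```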
FIG.2shows a schematic view of a system200including a device100for activating and providing a quality of service (QoS) function for a communication service of an application entity110, according to an embodiment of the disclosure.

The device100in the system200performs several procedures that may be needed in order to enable the QoS function (e.g., the P-QoS functionality). The system200includes the application entity110, which sends a subscription request to the QoS function of the device100. The device100performs activation of prediction for all types of links and/or services, in response to the subscription request of the application entity110. By activating the subscription request, for example, the QoS and/or the changes in the QoS of the communication service of the application entity may be determined.

The device100further performs activation of monitoring reports to the appropriate nodes for all types of links and/or services, in order to enable and/or to provide the QoS function. For example, the device100transmits the monitoring request to the RAN201and several network entities including NWDAF, SMF, PCF, Access and Mobility management Function (AMF) and UPF, in the CN202of the communication network1. Furthermore, the device100obtains a monitoring response from the RAN201, and from one or more of the network entities including NWDAF, SMF, PCF, AMF and UPF in the CN202.

The device100further determines a change in the QoS of the communication service of the application entity110, based on the obtained monitoring response, and transmits the determined change in the QoS to the application entity110. For example, the device100notifies the application entity by providing a notification of QoS change and/or coverage using timing, location, and probabilistic information, etc.

The system200is based on a multi-operator scenario in which a first mobile network operator MNO1 interacts with a second mobile network operator MNO2. The interactions may be based on roaming and non-roaming cases. In the embodiment ofFIG.2, the device100, the network entities in the CN202, the application entity110, and the RAN201are included in the first MNO1, without limiting the present disclosure to a specific location of the different entities. The device100(and/or the system200) further performs the configuration of the P-QoS, for example, via the management plane.

In the following, several exemplary procedures for enabling and providing the QoS function (e.g., a network QoS prediction function) are described using, as an example, a 5G communication system, without limiting the present disclosure to a specific type of communication system.

FIG.3shows a flow diagram of an exemplary procedure for activation of the QoS function, configuration of the communication network and providing a QoS function. Entities shown inFIG.3are the UE300, the AN301, the AMF302, the SMF303, the UPF304, the NWDAF305, the device100which is configured to provide the QoS function (e.g., the P-QoS functionality), the PCF306, the Network Exposure Function (NEF)307, and the application entity in the form of an application function110. The UE300may be similar to and/or may function similarly to the UE120ofFIG.1, without limiting the present disclosure. For example, the UE120ofFIG.1and the UE300ofFIG.3may be based on an identical vehicle or different types of vehicles, etc.

For providing the QoS function (e.g., P-QoS functionality), the V2X application entity subscribes with the 5G communication network to support the predictive QoS (P-QoS) functionality. The application entity sends a P-QoS subscription parameter to the 5G system.
The subscription parameter includes one or more of:
A protocol data unit (PDU) session identification number (ID).
A vehicle-to-everything (V2X) communication service ID.
A single or a group of affected vehicle-to-everything user equipment (V2X-UE) IDs.
A timing window and/or a frequency of the subscription request to the QoS function.
A time horizon for the determination of the change in the QoS of the communication service (e.g., a QoS guarantee that the communication network provides for the next x seconds, the QoS downgrade or the QoS upgrade will take place in x seconds, etc.).
A predefined geographical area for the determination of the change in the QoS of the communication service (e.g., the QoS downgrade/upgrade will take place in x meters or at the location with coordinates x, y, z, etc.).
A threshold value of the change in the QoS (e.g., thresholds for being notified) of the at least one communication service.
A duration time of the communication service.
A specific prediction capability (e.g., QoS change, out of coverage, 5G to LTE transition, etc.).
A segment-based subscription or an end-to-end (E2E) subscription to the QoS function (this information may be required, for example, for identifying the requirement for a local or a global prediction).
A required capability and a monitoring exposure to the at least one application entity (this information may be used by the communication network to determine whether the subscription request can be activated or not).
A required QoS for the communication service.
At least one or more alternative QoS levels that could be used when it is determined that a primary QoS is not available.
The type of the link, including the sidelink (PC5) and/or the cellular communication (Uu).

In the embodiment ofFIG.3, the activation of the QoS function (e.g., the P-QoS functionality activation) is described as a feature which can be activated, e.g., in response to a subscription request of an application entity implemented as an application function (AF), and for a V2X service considering the 5G system. Note that the procedure can be applied more generally to all types of application entities. For example, the application entity may be based on an application server, an application function, a middleware application, etc. Moreover, the communication system may also be any type of (e.g., cellular) communication system.
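The subscription parameters listed above can be thought of as one record of optional fields. The sketch below models them as a Python dataclass; the field names, types and defaults are illustrative assumptions only, since the embodiment leaves the encoding open.

```python
# Hypothetical container for the P-QoS subscription parameters; any
# subset of fields may be present in a real subscription request.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class PQosSubscription:
    pdu_session_id: Optional[int] = None
    v2x_service_id: Optional[str] = None
    v2x_ue_ids: List[str] = field(default_factory=list)
    request_window_s: Optional[float] = None        # timing window / frequency
    time_horizon_s: Optional[float] = None          # "QoS guaranteed for next x s"
    geographical_area: Optional[Tuple[float, float, float]] = None  # x, y, z
    change_threshold: Optional[float] = None        # notify when exceeded
    service_duration_s: Optional[float] = None
    prediction_capability: Optional[str] = None     # e.g. "qos_change", "out_of_coverage"
    end_to_end: bool = True                         # False == segment-based
    required_qos: Optional[str] = None              # e.g. a 5QI value
    alternative_qos_levels: List[str] = field(default_factory=list)
    link_types: List[str] = field(default_factory=lambda: ["Uu"])   # and/or "PC5"


if __name__ == "__main__":
    sub = PQosSubscription(v2x_service_id="tod-video",
                           time_horizon_s=5.0,
                           alternative_qos_levels=["5QI-3"],
                           link_types=["Uu", "PC5"])
    print(sub)
```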
At step3a, initially the AF110sends a service request message which includes:
One or more V2X service IDs.
One or more of the subscription parameters.
For example, the AF110sends a service ID, and time, area and periodicity information, to the NEF307and the PCF306, respectively.

Next, at step3b, the PCF306(or any relevant network functionality which may be responsible for the policy control and charging), after receiving the request of the AF110, sends a P-QoS activation request message to the device100(e.g., its P-QoS logical unit). The P-QoS activation request message includes:
One or more V2X service IDs.
One or more V2X UE IDs.
One or more of the P-QoS subscription parameters (e.g., parameters related to the time, geographic area, and the periodicity or monitoring requirement).

At step3c, the device100(for example, its P-QoS unit) initiates the monitoring activation/subscription procedure (which will be described in detail below with reference toFIG.4). The monitoring activation/subscription procedure may include on-demand analytics activation messages to the NWDAF305, as well as to the RAN via the AMF302and to the UE300(for example, for real-time UE monitoring).

Next, at step3d, the device100(e.g., its P-QoS admission control), possibly together with the PCF306, activates the subscription of the AF110to the QoS function, in order to ensure that the communication network can support these features, and the monitoring response is acknowledged (ACK). As the result of the admission control, in step3d, the P-QoS functionality of the device100sends a P-QoS activation response message to the application entity110, directly or via the NEF307. This message may include an ACK (in case of acceptance), a NACK (in case of rejection), or a negotiation of network parameters, for example, in cases where not all P-QoS parameters can be fulfilled by the system.

At step3e, the AF110responds with an ACK/NACK, for example, in the case of negotiating the P-QoS subscription parameters.

Moreover, at step3f, since the Policy and Charging Control (PCC) rules are sent by the Policy Control Function (PCF)306(either dynamically or by pre-configuring the SMF303with rules), the device100(e.g., the P-QoS function) may undertake the role of overriding or extending the PCC rules by sending a message to the PCF306and/or to the SMF303. For example, a pre-configured extended PCC rule message may be transmitted from the P-QoS of the device100to the SMF303, and/or a dynamic extended PCC rules message may be transmitted from the P-QoS of the device100to the PCF306. Moreover, both messages may include the P-QoS activation parameters. In some embodiments, depending on the location of the P-QoS function, e.g., at the PCF306or the SMF303or as a new function, etc., different interfaces may be impacted.

Next, at step3g, after updating the PCC rules and the activation of monitoring on demand, the P-QoS function of the device100may be operated, for example, in a service- or session-based manner. The communication network may notify the application entity110when the prediction service is not supported any more (e.g., change of the Radio Access Technology (RAT) or the cell), or in cases when any change in the configuration of the prediction service (i.e., the QoS function) is needed. Therefore, the prediction service may be, e.g., released, modified, and updated either by the application entity and/or by the device and/or by the communication network, etc. Alternatively, the subscription request for activating the prediction service may be sent by an application entity of a UE (or a group of UEs) to the network using control plane signaling (Radio Resource Control (RRC), Non-access stratum (NAS)), or to an application function (e.g., AF of 5GS) using user plane signaling.
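The admission control at step3d has three possible outcomes: ACK, NACK, or a negotiation of network parameters when only part of the request can be fulfilled. It might be sketched as follows; the flat key-value capability model is a deliberate simplification assumed purely for illustration.

```python
# Sketch of the step-3d admission decision: answer a P-QoS activation
# request with ACK, NACK, or a negotiation proposal containing the
# subset of parameters the network can actually fulfil.
def admission_control(requested: dict, supported: dict) -> dict:
    fulfillable = {k: v for k, v in requested.items()
                   if supported.get(k) == v}
    if fulfillable == requested:
        return {"result": "ACK"}
    if not fulfillable:
        return {"result": "NACK"}
    # Partial support: propose the subset that can be fulfilled.
    return {"result": "NEGOTIATE", "proposal": fulfillable}


if __name__ == "__main__":
    request = {"time_horizon_s": 5.0, "prediction_capability": "out_of_coverage"}
    network = {"time_horizon_s": 5.0}    # out-of-coverage prediction unsupported
    print(admission_control(request, network))
    # -> {'result': 'NEGOTIATE', 'proposal': {'time_horizon_s': 5.0}}
```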
FIG.4shows a flow diagram of an exemplary procedure for activation of monitoring for the QoS function. At step4a, the device100(e.g., its P-QoS function) transmits a monitoring request to a network entity, for example, to the AN301in the communication network, and/or the application entity110, and/or the UE300. According to the P-QoS subscription request (e.g., the P-QoS subscription parameters, the type of the V2X service), different monitoring subscription or configuration requests may be required, transmitted and collected, e.g., by different network nodes (e.g., gNB, UPF), depending on the type of the V2X service and the agreed P-QoS requirements.

These requests may allow collection of different types of information that may be needed in order to, e.g., activate the subscription request, determine the QoS, determine the change in the QoS and/or enable the prediction of the QoS change, etc. The different types of information may be one or more of:
General information per node (BS, UPF) or per link (e.g., N3 in 5G) that can be retrieved via the NWDAF or directly via the corresponding network node (e.g., average cell load, average bit rate, reliability, load, coverage information).
Specific information per QoS flow that can be retrieved via the corresponding node (RAN, CN, UE), e.g., packet delay information.
Analytics or historic data, e.g., statistics on handover failure rate, rejected PDU sessions, etc.
Specific information per UE (e.g., RRC measurement reports, radio link quality, UE speed, UE mobility information) that can be retrieved via the same or other neighboring UEs.
Application layer information provided either by the application entity or a third party (e.g., vehicle planned route/path/trajectory, application behavior and/or configuration, road traffic information, road infrastructure information).
Events that can be reported by the corresponding node that monitors and detects the events (e.g., UE reachability, communication failure).

In addition, the procedure to identify the key QoS parameters and the prediction events may be affected by, for example, the type of service, the type of the communication link that is used (e.g., cellular (Uu), sidelink (PC5)) and the configuration of the P-QoS subscription request. The device (e.g., its P-QoS function) may transmit (e.g., directly or via another CN function) the appropriate monitoring subscription request messages to the appropriate nodes (e.g., UE, RAN node, CN node, V2X application entity). The transmitted monitoring request may include one or more of:
The monitoring parameters: a parameter (e.g., latency, data rate, packet error rate, bit rate, reliability, jitter, Signal-to-Interference-Plus-Noise Ratio (SINR), coverage) and/or a measure type (e.g., average, actual value, etc.).
The monitoring level: the QoS flow, the link, the UE, and the network node.
The time granularity of the reporting: a one-time report, a periodic (value) report, a condition-based report (e.g., having a value above/below a threshold value).
The area of reporting: specific cells, and the geographic location.

At step4b, the node that has received the monitoring request for the prediction service responds with an ACK or NACK. In the case of a NACK, the node may indicate an alternative monitoring configuration, if one is available.

At step4c, after the successful subscription or activation of the monitoring phase, the actual monitoring (collection of information) may be initiated; for example, a monitoring start message may be sent by the P-QoS function of the device100(e.g., directly or via another CN function). Moreover, the monitoring may be stopped or paused (e.g., with a monitoring reject message), and the monitoring configuration may be modified during the lifetime of the V2X service.

In addition, the device100and/or its P-QoS function may transmit an on-demand analytics activation request message. For example, the P-QoS function100may transmit the on-demand analytics activation request message to the NWDAF305. Moreover, the NWDAF305may provide an analytics activation response (e.g., ACK or NACK) to the P-QoS function of the device100.
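Putting the four groups of monitoring request fields together with the condition-based reporting rule ("a value above/below a threshold value"), one possible sketch looks like this; the class and field names are assumptions, not a defined message format.

```python
# Sketch of a monitoring subscription request and of the condition-
# based reporting decision it configures. Names are illustrative only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class MonitoringRequest:
    parameter: str            # e.g. "latency", "data_rate", "SINR"
    measure_type: str         # e.g. "average" or "actual"
    level: str                # "qos_flow", "link", "ue" or "network_node"
    reporting: str            # "one_time", "periodic" or "condition"
    threshold: Optional[float] = None   # used for condition-based reporting
    above: bool = True                  # report when above (else below)
    area: Optional[str] = None          # specific cells / geographic location


def should_report(req: MonitoringRequest, value: float) -> bool:
    if req.reporting != "condition" or req.threshold is None:
        return True   # one-time and periodic reports are sent unconditionally
    return value > req.threshold if req.above else value < req.threshold


if __name__ == "__main__":
    req = MonitoringRequest(parameter="latency", measure_type="actual",
                            level="qos_flow", reporting="condition",
                            threshold=20.0, above=True)
    print(should_report(req, 25.0))   # True  -> latency above threshold
    print(should_report(req, 12.0))   # False -> stay silent
```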
As discussed, the device100may transmit the monitoring request to the application entity (i.e., the AF110inFIG.4). Moreover, the device100may further obtain the monitoring response from the AF110, which may be, e.g., the path of the vehicle, the trajectory of the vehicle, etc. In addition, the device100may transmit the monitoring request to the UE300, which may be a vehicle. Moreover, the device100may further obtain the monitoring response from the UE300, which may be, e.g., the path of the vehicle, the trajectory of the vehicle, etc. Moreover, the device100and/or its P-QoS function may transmit a mobility stats subscription, for example, to the AMF302, and the AMF302may further transmit a mobility stats response to the device100.

FIG.5shows a flow diagram of an exemplary procedure for activation of monitoring for a local prediction of the QoS function. In some embodiments, the P-QoS function (or any other CN function) may be based on “local” predictions of, e.g., the changes in the QoS or the events, which may be requested by and/or allocated to individual network entities (e.g., Base Station (BS), UPF) or the UE. For instance, the prediction of the changes in the QoS for a PC5/sidelink communication may take place faster at the RAN side. The node that has received the prediction subscription request message may respond with an ACK or NACK. Moreover, in the case of a NACK, the node may indicate an alternative prediction configuration, if possible.

At step5a, the device100and/or its P-QoS function transmits a local prediction message to the application entity AF110and/or to a network entity. At step5b, the AN301transmits a prediction notification message (e.g., expected degradation of radio quality, increase of path loss of the Uu and/or PC5 radio link) to the P-QoS function of the device100.

The prediction subscription request message may include the type of the parameter that should be predicted by the recipient node (or a local P-QoS functionality), the type of reporting (e.g., per flow, node, session, etc.), the time granularity of the prediction and the reporting, the location of the prediction and the reporting, etc. After the (successful) subscription or activation of the “local” prediction, the actual “local” prediction (collection of information) may be initiated; for example, a prediction start message is transmitted by the P-QoS function (directly or via another CN function). The “local” prediction may further be stopped or paused, while the “local” prediction configuration may be modified during the lifetime of a V2X service.

The “local” prediction results (i.e., the prediction notification message) may be sent to one or more of:
A central P-QoS entity for generating the “e2e” prediction outcome for a service or a communication link.
The SMF in 5G networks (which is close to the existing 3GPP spirit).
Directly to a V2X application entity (e.g., to monitor or to predict the UL latency value for a specific service), when a quick notification is needed.

The determined change in the QoS may be transmitted to the application entity and/or one of the network entities. For example, the device100may provide a notification to the application entity110and/or one of the network entities, based on the type of the communication service, including at least one of a sidelink (PC5) communication and a cellular communication (Uu).
The outcome of the QoS function (e.g., the P-QoS functionality), regardless of whether it is based on a centralized P-QoS function or is distributed among different network entities, may be transmitted to the V2X application entity (e.g., the UE, the application server, etc.) and/or one or more of the network entities. Moreover, the type and the description of the notification may depend on the initial P-QoS subscription. For example, the notification may be:
The UE1 will be out of coverage in X seconds.
The UE1 will be out of coverage in the location with the coordinates of (x, y, z).
The QoS (e.g., the latency) of the UE1 will be downgraded in X seconds.
The QoS (e.g., the latency) of the UE1 will be downgraded in the location with the coordinates of (x, y, z), and in X seconds.
The QoS of a group communication (group-cast) will be downgraded at cell number x.
The QoS (e.g., the latency) of the UE1 will be downgraded in the location with the coordinates of (x, y, z), and with a probability of X % (or the confidence interval).
Potential change in QoS (e.g., bit rate) with a certain probability and/or confidence interval.

Moreover, the QoS function (e.g., the prediction functionality) may provide the notification of the changes in the QoS based on the P-QoS outcome, and the notification may further include one or more of:
The PDU session ID (per PDU session P-QoS), PDU session type, Flow ID.
The vehicle-to-everything (V2X) communication service ID (per service/QoS flow).
The type of the link, including the sidelink (PC5) and/or the cellular communication (Uu).
The single or the group of the affected vehicle-to-everything user equipment (V2X-UE) IDs.
A QoS parameter (e.g., packet delay budget, bit rate) and/or 5QI ID that will change, and/or an event that will be triggered (e.g., out of coverage).
A proposal for a new level of the QoS that could be supported/provided.
Timing information of the change in the QoS or of the triggered event that may be applied.
Location information of the change in the QoS and/or the cell information about the change of the QoS and/or the identified event that may be applied.
A level of the QoS per user equipment (UE), e.g., <current QoS, new1 QoS, new2 QoS, etc.>.
Timing information of the UE, e.g., <start1, end1, start2, end2>.
Probabilistic information for the change in the QoS and/or the coverage.
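The notification examples above share a common shape: an affected UE, an event, and optional timing, location, probability and proposed-QoS elements. A minimal sketch of composing such a message, with an assumed dictionary layout, could be:

```python
# Sketch of composing a P-QoS change notification of the kinds listed
# above ("UE1 will be out of coverage in X seconds", a downgrade at a
# location with a probability, ...). The message layout is assumed.
from typing import Optional, Tuple


def build_notification(ue_id: str,
                       event: str,                 # "qos_downgrade", "out_of_coverage", ...
                       in_seconds: Optional[float] = None,
                       location: Optional[Tuple[float, float, float]] = None,
                       probability: Optional[float] = None,
                       proposed_qos: Optional[str] = None) -> dict:
    msg = {"v2x_ue_id": ue_id, "event": event}
    if in_seconds is not None:
        msg["timing_s"] = in_seconds          # timing information of the change
    if location is not None:
        msg["location_xyz"] = location        # location information of the change
    if probability is not None:
        msg["probability"] = probability      # probabilistic information
    if proposed_qos is not None:
        msg["proposed_qos"] = proposed_qos    # new level that could be supported
    return msg


if __name__ == "__main__":
    print(build_notification("UE1", "qos_downgrade",
                             in_seconds=4.0,
                             location=(48.1, 11.5, 0.0),
                             probability=0.9,
                             proposed_qos="5QI-7"))
```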
FIGS.6a,6b,7aand7billustrate different forms of implementation for providing the notification from the device100and/or its P-QoS function to the V2X application entity using either control plane and/or user plane messages. The notification may include, for example, the service ID, the V2X-UE ID, and the QoS parameter (e.g., an old value of the QoS, a new value of the QoS, an event, timing information indicating when the change in the QoS applies, the geographical area where the change in the QoS applies, the probability of the QoS change, etc.). The notification response may be, for example, an ACK, a NACK, a negotiation of the parameters, etc.

FIG.6aandFIG.6bschematically illustrate two examples of signaling options via the NEF601in the 5G networks. The notified destination node is the V2X application entity located at the AF110, which can be a PLMN-owned application or a third party application. FIG.7aandFIG.7bschematically illustrate two examples of signaling options where the notified destination node is the V2X application entity located at the UE300. In this case the notification may be sent either via the AF110or via the RAN interfaces (e.g., NAS, RRC).

In the MSC example ofFIG.7b, the SMF303is able to notify the PCF306if the QoS targets for a QoS flow cannot be fulfilled (in accordance with “3GPP-23503”), and may further update it with early notification information, etc. For all of the above examples, the P-QoS functionality may notify the SMF303entity, and the latter may forward the notification for any change of the QoS or the coverage levels to the PCF306. Then, the PCF306undertakes to transmit the notification message to the NEF601, the AMF302, and/or any other function or network entity.

For all potential signaling options, the receiving V2X application entity may respond with a notification response message; for example, it may acknowledge the notification and/or accept or reject the newly proposed QoS level (i.e., if a new proposed QoS level is provided). In the case of a rejection, a negotiation of the QoS values between the network side and the V2X application entity may be initiated, and the latter may propose a preferable alternative QoS level.

Alternatively, the outcomes of the P-QoS notification about the expected QoS and/or coverage change may be transmitted to the network entities (e.g., BS, UPF, AMF, SMF, PCF, V2X-CF of the 5G system, etc.). Moreover, the appropriate re-configuration action may be triggered and decided by the network, for example, based on the type of the notification, and in order to maintain the initially agreed QoS level and/or optimize the network performance, etc.
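The accept/reject/negotiate behavior of the receiving V2X application entity might be sketched as follows, where the list of acceptable QoS levels and the preference order are assumptions made for illustration.

```python
# Sketch of the notification response: acknowledge, accept or reject
# the proposed QoS level, and optionally propose a preferable
# alternative when rejecting (starting a negotiation).
def notification_response(notification: dict,
                          acceptable_levels: list) -> dict:
    proposed = notification.get("proposed_qos")
    if proposed is None:
        return {"result": "ACK"}                       # plain acknowledgement
    if proposed in acceptable_levels:
        return {"result": "ACK", "accepted_qos": proposed}
    # Rejection starts a negotiation: propose the preferred alternative.
    return {"result": "NACK", "preferred_qos": acceptable_levels[0]}


if __name__ == "__main__":
    note = {"event": "qos_downgrade", "proposed_qos": "5QI-9"}
    print(notification_response(note, acceptable_levels=["5QI-7", "5QI-8"]))
    # -> {'result': 'NACK', 'preferred_qos': '5QI-7'}
```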
FIG.8schematically illustrates a procedure for the configuration of the P-QoS function by the management plane, and particularly for the scenario of a slice-based architecture (i.e., a V2X slice). As mentioned, the P-QoS function may be a feature which is required, for example, on demand, or it may be activated by default for a particular service. Network slicing is a key 5G requirement for enabling the verticals to operate on an end-to-end logical sub-network, e.g., after an agreement with the network operator. The automotive sector is a key vertical for the 5G system, and 5G-V2X is envisioned as a key slicing scenario. The operator may provide the required network features in order to meet the customized service requirements of the third party or the customer (e.g., OEM). The network features may be required for meeting the slice KPIs, and the associated functions and resources. The P-QoS function may be one particular feature, which may be provided by default or on demand for a V2X slice.

Initially, the third party801(or customer) transmits some service requirements to the CSMF802, which is the entity that translates the service requirements into network requirements from the management perspective. Next, the Network Slice Management Function (NSMF)800creates and/or modifies the network slice instances, and it further maps them to policies (e.g., radio resource management (RRM) policy, Network Function (NF) placement, scale in/out, slice coverage, etc.). The Communication Service Management Function (CSMF)802, the NSMF800, and the Network Slice Subnet Management Function (NSSMF)803are the slice management functionalities which comprise the slice management system as defined in the technical specification document with the reference number “3GPP-28530”.

The process can be summarized as follows:
At step8a, the Network Slice Instance (NSI) is created based on the application requirement, having the P-QoS feature.
At step8b, the NSMF800initially activates, e.g., on demand, the required analytics from the NWDAF305(as an activation msg0).
At step8c, the NWDAF305collects the analytics from the related network functions (core and/or RAN control functions that are relevant to monitor the QoS, the resource situation, and the events).
At step8d, the NSMF800receives the on-demand analytics, and it configures the different segments of the 5GS (e.g., the RAN, the TN, the CN) accordingly.
At step8e, the NSMF800sends the per-segment configuration to the NSSMF803for enabling the P-QoS, which includes parameters for the P-QoS activation and management.
At step8f, the NSSMF803sends this information to the managed Network Element (NE)804, e.g., the control plane (CP) function. The NE804can be the PCF, which is responsible (e.g., from the control-plane perspective) for providing policies for different domains (e.g., the CN, the RAN).
At step8g(not shown), the NE804or the PCF applies the configuration or the policies to the underlying 5GS.

FIG.9schematically illustrates a procedure for a configuration of the PC5 parameters for the P-QoS function. In some embodiments, for example in the case of V2V communications, a V2X Control Function (V2XCF)900is provided. The V2XCF900is a network function for provisioning of the sidelink (PC5) parameters to the UE300. In addition, the QoS function (e.g., the QoS model) for the PC5 includes the proximity services (ProSe) per packet priorities (PPPP) and reliability (PPPR) provisioning to the UE300for the sidelink operation.

In the following, an embodiment for the configuration of the PC5 parameters for the QoS function (e.g., the P-QoS functionality) is described. Moreover, there might be an interworking of the Uu and the PC5, and a unified QoS model may be provided. To this end, the P-QoS function provisions the P-QoS parameters, which include the predicted or expected changes (QoS characteristics/PPPPs/5QIs and the mapping to the QoS attributes) for the PC5 session. A message is transmitted from the device100(e.g., its P-QoS function) to the V2XCF900, and/or the application server901and/or the UE300(via the V2XCF900). The message is representative of enhanced provisioning policies for the P-QoS message, and it includes the PC5 provisioning parameters as defined in the technical specification documents with the reference numbers “3GPP-24386” and “3GPP-23786”. The transmitted message includes one or more of:
The sidelink, PC5, parameter.
A mapping of the proximity services, ProSe, per packet priorities quality of service, PPPP/QoS, class to the packet delay budget, PDB, and packet error rate, PER, for the current QoS and the predicted change in the QoS.
A radio parameter for the current QoS and the determined change in the QoS.
Timer information, Txxxx, indicating when the UE changes the radio parameters or the mapping of the proximity services, ProSe, per packet priorities, PPPP, for the vehicle-to-everything, V2X, communication service over the sidelink, PC5.
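The PC5 provisioning message above essentially carries two PPPP-to-(PDB, PER) mappings (current and predicted), radio parameters, and a switch timer. A hypothetical container for it, with assumed field names and example values, could look like this:

```python
# Illustrative layout of the enhanced PC5 provisioning message for
# P-QoS. Field names and the example values are assumptions; the real
# encoding follows the referenced 3GPP specifications.
from dataclasses import dataclass, field
from typing import Dict, Tuple


@dataclass
class Pc5Provisioning:
    # PPPP class -> (packet delay budget in ms, packet error rate)
    current_mapping: Dict[int, Tuple[float, float]] = field(default_factory=dict)
    predicted_mapping: Dict[int, Tuple[float, float]] = field(default_factory=dict)
    current_radio_params: Dict[str, float] = field(default_factory=dict)
    predicted_radio_params: Dict[str, float] = field(default_factory=dict)
    switch_timer_s: float = 0.0   # "Txxxx": when the UE applies the new values


if __name__ == "__main__":
    msg = Pc5Provisioning(
        current_mapping={1: (20.0, 1e-3)},
        predicted_mapping={1: (50.0, 1e-2)},   # expected PC5 QoS downgrade
        switch_timer_s=3.0,
    )
    print(msg)
```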
FIG.10schematically illustrates a procedure for providing a P-QoS function in a multi-MNO environment. In some embodiments, a UE (e.g., a vehicle) may enable the P-QoS functionality. Furthermore, its subscription to the QoS function (e.g., the P-QoS functionality) of a V2X service may be activated, and/or it may be connected to an MNO with a QoS function. The UE may further move to a second MNO (e.g., a national MNO, a cross-border MNO). In such a case, it may be required, for example before the actual roaming (for the service continuity, the reliability, and the availability of the P-QoS function), to request from the second MNO (i.e., the target MNO) whether the same P-QoS capabilities and/or configuration are supported. In other words, an extension of the subscription may be requested. The interaction between the MNOs may take place, for example:
either via the NEF interfaces that both MNOs have, or
via the AFs dedicated for this purpose, or
via the inter-PLMN control plane signaling.

FIG.10schematically illustrates a procedure for transmitting a P-QoS subscription request based on an inter-NEF interaction, in a multi-MNO environment, using a roaming service. As discussed, a subscription request of the application entity in the first communication network may be transmitted to a network entity in the second communication network. The subscription request may include the QoS service request message for determining, upon a roaming of the application entity from the first communication network to the second communication network, whether the QoS function for the communication service of the application entity is supported in the second communication network. InFIG.10, the PLMN/MNO1 asks via a P-QoS service request message whether the P-QoS subscription parameters for a V2X service of one or more UEs could be supported when the UE moves to the second PLMN/MNO2. The PLMN 1 includes the Policy Control Function (PCF)1000(or P-QoS 1) and the Network Exposure Function (NEF)1001(NEF PLMN1). Moreover, the PLMN 2 includes the PCF1002(or P-QoS 2) and the NEF1003(NEF PLMN2). The PLMN/MNO2 responds with an ACK or a NACK. In the case of a NACK (i.e., a P-QoS service response message), it may propose an optional alternative supported configuration. As discussed, the interaction between the MNOs may take place via the NEF or the AFs.

FIG.11aandFIG.11bschematically illustrate a procedure for transmitting a P-QoS subscription request based on an inter-PLMN control plane, in a multi-MNO environment, using a roaming service. InFIG.11aandFIG.11b, the interactions between the MNOs for the request of the P-QoS capabilities and/or the configuration that could be supported, i.e., by the second MNO, are performed via the inter-PLMN interfaces. For instance, inFIG.11a, the N24 and the N32 reference points of the 5G system architecture, according to the technical specification document with the reference number “3GPP-23501”, may be used for, e.g., the P-QoS service request, the response messages, and the subscription of the P-QoS capabilities to another MNO/PLMN. In addition, inFIG.11b, the N27 (between NRFs) may be used for the inter-PLMN P-QoS service discovery function (e.g., the query, the response), for example in cases where the PLMN/MNO1 needs to check if the PLMN/MNO2 provides any P-QoS service.
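The pre-roaming exchange might be sketched as a discovery query followed by a subscription-extension request, answered with an ACK, or with a NACK plus an optional alternative supported configuration. The registry dictionary below stands in for the real inter-PLMN interfaces (NEF, N24/N32, N27) and is purely an assumption.

```python
# Sketch of the pre-roaming check: first discover whether the target
# PLMN exposes any P-QoS service (cf. the N27 query), then ask whether
# the UE's subscription parameters can be kept.
def discover_pqos(target_registry: dict) -> bool:
    """Inter-PLMN service discovery: does the target offer P-QoS at all?"""
    return target_registry.get("pqos_service", False)


def extend_subscription(subscription: dict, target_registry: dict) -> dict:
    if not discover_pqos(target_registry):
        return {"result": "NACK", "reason": "no P-QoS service in target PLMN"}
    supported = target_registry.get("supported_parameters", {})
    kept = {k: v for k, v in subscription.items() if supported.get(k) == v}
    if kept == subscription:
        return {"result": "ACK"}
    return {"result": "NACK", "alternative_configuration": kept or None}


if __name__ == "__main__":
    target = {"pqos_service": True,
              "supported_parameters": {"time_horizon_s": 2.0}}
    print(extend_subscription({"time_horizon_s": 5.0}, target))
    # -> NACK, with no matching parameters left to offer as alternative
```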
In some embodiments, the QoS function (e.g., the prediction of the QoS) may be provided, for example, when two or more UEs are attached to different MNOs and they communicate via the cellular (Uu) and/or the sidelink (PC5) interfaces (i.e., with identical or different frequency bands). For instance, vehicle-to-network-to-vehicle (V2N2V) communication may be performed between two or more vehicles attached to different MNOs, e.g., for sensor sharing, cooperative maneuvers, etc.

Moreover, the prediction of the changes in the QoS may require an exchange of the prediction results (e.g., by notifications) for a session of the V2X service that involves vehicles which are attached to different MNOs. For example, the PLMN1 may detect an expected change of the QoS based on its individual P-QoS calculations. Moreover, the PLMN1 may notify both of the vehicles that are attached to its own network, and it may also notify the other involved vehicles that are attached to the PLMN2. The messages representing the notification of the changes in the QoS may include a proposed supported QoS level. A detailed description of the notification of the QoS changes is given above (e.g., forFIG.6a,FIG.6b,FIG.7a, andFIG.7b). Similarly, the other vehicles (of the same or the other MNOs) may accept or reject, and/or negotiate the new QoS that could be supported, e.g., by triggering the corresponding MNO.

FIG.12a,FIG.12b, andFIG.12cschematically illustrate message sequence charts for providing the QoS function in a multi-MNO environment, where the application entities are attached to different MNOs. InFIG.12a,FIG.12b, andFIG.12c, three alternative implementation forms of the MSC are illustrated for the interaction between different MNOs, and consequently, for the transmission of the notification messages to the V2X application entities (e.g., in UEs) that are attached to different MNOs.

FIG.12aillustrates an interaction between different MNOs based on the NEF interfaces or AFs. The PLMN 1, including the PCF1200(or P-QoS 1) and the NEF1201(NEF PLMN1), communicates with the PLMN 2, which includes the PCF1202(or P-QoS 2) and the NEF1203(NEF PLMN2). The communication between the MNOs is performed based on the NEF interfaces.

FIG.12billustrates an interaction between different MNOs based on the control plane interfaces, which may be the N24 and the N32 (i.e., reference points of the 5G system architecture “3GPP-23501”, as described forFIG.11a). The control plane interfaces may be used for the notification and the notification response messages to the other MNO/PLMN. For example, the second MNO may undertake to forward the notifications to the appropriate V2X application entities (e.g., of the UEs) that are attached to the first MNO. InFIG.12b, the PLMN 1, including the P-QoS11204and the PCF1205, communicates with the PLMN 2, which includes the P-QoS21206and the PCF1207. The communication between the MNOs is performed based on the control plane interfaces.

FIG.12cillustrates an interaction between different MNOs based on the sidelink interface (PC5). The vehicles exchange the notification messages that they have received from their MNOs (i.e., the MNO that they are attached to). Similarly, the corresponding notification responses are transmitted via the sidelink interface using either sidelink control plane (e.g., RRC) and/or user plane messages. In some embodiments, the three implementation forms discussed forFIG.12a,FIG.12b, andFIG.12cmay be used either for the cellular Uu (V2N2V) or the sidelink/PC5 communication between two or more UEs.

FIG.13shows a method1300according to an embodiment of the disclosure for providing a quality of service (QoS) function for a communication service111of an application entity110in a communication network1comprising a plurality of network entities112,113,114. The method1300may be carried out by the device100, as described above.
The method1300comprises a step1301of transmitting a monitoring request to one or more of: the at least one application entity, at least one network entity, and at least one user equipment. The method1300further comprises a step1302of obtaining a monitoring response from one or more of: the at least one application entity, the at least one network entity, and the at least one user equipment. The method1300further comprises a step1303of determining a change in quality of service (QoS) of the at least one communication service of the at least one application entity, based on the obtained monitoring response. The method1300further comprises a step1304of transmitting the determined change in the QoS to the at least one application entity and/or at least one of the network entities in the communication network.

The present disclosure has been described in conjunction with various embodiments as examples as well as implementations. However, other variations can be understood and effected by those persons skilled in the art who practice the claimed disclosure, from studies of the drawings, this disclosure and the independent claims. In the claims as well as in the description, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single element or other unit may fulfill the functions of several entities or items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used in an advantageous implementation.
11943664
DETAILED DESCRIPTION

For the scenario of transportation vehicles equipped with wireless communication modules that provide connectivity to public communication networks, but also provide a direct communication capability for exchanging information among the road participants, wireless communication offers the opportunity to implement a wide range of applications. A lot of research concerns the fields of cooperative and autonomous driving. Direct communication between transportation vehicles is often referred to as vehicle-to-vehicle communication V2V. Also possible is communication from and to a transportation vehicle with infrastructure communication stations such as a road side unit RSU, also called V2I communication (vehicle-to-infrastructure). Both types of communication are very often referred to as V2X communication (vehicle-to-everything), comprising both V2V and V2I communication.

Within the concept of predicted Quality of Service (pQoS), the application supported by a communications system adapts its settings to the foreseen quality of service (QoS). The quality of the communication link is, therefore, critical, as the performance of the application is strongly dependent on it. To allow the application to cope with variations of the quality of service, pQoS provides information on the future quality of the link. This information comes with a prediction horizon, that is, the delta time in the future for which the predicted value is applicable. On the other hand, the ability to predict the future QoS is an enabler for these applications, as they gain the ability to cope with a drop in QoS performance in advance.

Predicting future QoS in a network is facilitated by the static feature of one of the communication nodes, the base station. Known solutions use the study of the static (slow fading) properties of the surroundings of the base station. A well-known technique to provide coverage in out-of-coverage areas is so-called relaying, where one user at the edge of a cell (but still in coverage) relays a link from the BS to a user which is out of coverage.

Currently the following mobile communication technologies are applicable for bringing connectivity to a transportation vehicle: 3GPP-based UMTS, HSPA, LTE, and the upcoming 5G standards. For the V2V or V2X communication the following technologies are readily available: LTE-V sidelink communication, also called the PC5 interface, 5G PC5 communication, and WLAN p communication (IEEE 802.11p).

Communication standards define performance metrics for communication technologies, such as minimums, maximums, averages, etc. of some key performance indicators KPIs. The indicators, such as latency τ of a data communication, throughput Th, data rate DR, packet error rate PER, and packet inter-reception time PIR, vary within and around these values, sometimes drastically dropping or increasing. This variation can drastically affect the quality of applications. For safety-related applications, such as some applications of cooperative automated driving, the average achievable latency with a best-effort policy does not comply with the quality requirements of the automotive industry, for instance. Especially when it comes to V2V and V2X and time-critical safety-related applications, this potential variation and this absence of a guarantee of quality of service QoS seriously affect the potential use of such technologies. One application that requires a high QoS is tele-operated driving, hereinafter abbreviated ToD.
In the field of communication prediction, QoS and radio maps are state-of-the-art tools that enable an adaptation to QoS variations. These maps may be generated by making use of knowledge about the environment as well as knowledge from statistical/historical data. Knowledge about the environment may be some shadowing effect prediction, white spot and static scatterer mapping, and Doppler shift prediction caused by dynamic scatterers like trucks, buses or other transportation vehicles forming an obstacle for the direct communication to another transportation vehicle. Historical data on the effective QoS can be gathered by managing nodes, such as the base station eNodeB of the LTE mobile communication system, and be mapped to the environment knowledge.

From US 2018/0343598 A1 an electronic device and a wireless communication method in a wireless communication system are known. The method comprises: acquiring scenario identification information, comprising first link information that indicates the quality of a link between the electronic device and a user equipment (UE), second link information that indicates the quality of a link between the electronic device and a base station, serving cell received power change rate information, and neighboring cell received power change rate information; and determining scenario information based on the scenario identification information, to inform the UE, so as to assist the UE to execute a relay reselection process, or to assist the electronic device to execute a relay selection process. By using the electronic device and the wireless communication method of the present disclosure, a remote UE is enabled to acquire the scenario in which the electronic device is located, so that the remote UE can better perform relay reselection or the electronic device can better execute relay selection, thereby increasing the system performance and reducing overheads of an X2 interface.

From US 2019/0124015 A1 a transmitting device for transmitting vehicular data via a sidelink interface to one or more receiving devices is known. The transmitting device performs autonomous radio resource allocation for transmitting the vehicular data via the sidelink interface. An application layer generates the vehicular data and forwards the vehicular data together with a priority indication and one or more QoS parameters to a transmission layer responsible for transmission of the vehicular data via the sidelink interface. The transmission layer performs autonomous radio resource allocation based on the received priority indication and the one or more QoS parameters. The transmission layer transmits the vehicular data via the sidelink interface to the one or more receiving devices according to the performed autonomous radio resource allocation.

US 2009/0117851 A1 discloses a quality of service map for a wireless network. The map comprises several layers of information visible at the same time. A first layer is a diagram showing physical features within the space where communications are provided by a service provider. Additional layers indicate the value of respective quality of service metrics at locations indicated by the first layer. Users of mobile wireless devices within the network contract with the service provider to have one or more selected communications services delivered to the mobile device. The users also contract with the service provider to have the selected services provided at respective selected service levels.
The service provider, or the user, or both, use information from the map to enable provision of the selected communications services at the respective selected service levels.

US 2019/0281644 A1 discloses a base station for cellular communication with a plurality of communication devices configured for D2D communication using a D2D communication channel. The base station comprises a communication interface configured to receive a request from the transmitter communication device, and a processor configured to select a subset of the plurality of relay communication devices for relaying the communication message to the at least one receiver communication device and to configure the subset of relay communication devices to relay the communication message using one of a plurality of relay modes.

The article R. M. Panthangi et al.: "Online Learning Framework for V2V Link Quality Prediction", 2019 IEEE Global Communications Conference (GLOBECOM 2019), discloses an approach for addressing the problem of predicting channel quality between transportation vehicles in terms of path loss, which exhibits strong fluctuations over time due to highly dynamic vehicular environments. The approach makes use of a framework for data-driven path loss prediction models that are obtained from datasets comprising information related to message transmissions and the communication scenario. By combining a changepoint detection method and online learning, the proposed framework adapts the current prediction model based on its performance, thus accounting for the dynamics in the environment and the cost of re-training.

EP 3 253 126 A1 discloses a method for route selection in a mobile communication system. The mobile communication system comprises a base station, relay nodes, and user equipment nodes. A user equipment node sends measurement reports to the base station concerning the channel quality of the direct communication path to the base station. The base station determines a route for the communication with the reporting user equipment based on the measurement reports.

Autonomous driving is on the rise. An automated transportation vehicle requires a certain QoS for an application (e.g., ToD) on a path via the Uu link. A threshold may be provided for performing a certain application, e.g., ToD. Applying this threshold to a predicted QoS profile, it can easily be seen whether the threshold will be violated, in which case the application could not run properly. What is needed is a solution for the problem of how to improve the communication performance for a communication between a base station and a moving communication partner, and a solution for the problem of how to make a QoS prediction for the communication between the transportation vehicle and the base station.

Disclosed embodiments provide a method and apparatus for managing a communication between a base station of a cellular mobile communication system and at least one moving communication partner, a corresponding computer program and a transportation vehicle. At least one disclosed embodiment concerns a method for managing a communication between a base station of a cellular mobile communication system and a first moving communication partner.
The method comprises: collecting service quality reporting messages from a plurality of moving communication partners registered in a communication cell managed by the base station, the service quality reporting messages comprising service quality reporting messages for direct communications between two of the plurality of moving communication partners; generating quality of service maps for the communications between the base station and the first moving communication partner, as well as for the direct communications between two of the moving communication partners; receiving planned trajectories from the plurality of moving communication partners; calculating predicted quality of service profiles for the communications between the base station and a moving communication partner as well as predicted quality of service profiles for the direct communications between the moving communication partners based on the generated quality of service maps and the planned trajectories, the predicted quality of service profiles representing a temporal evolution of at least one quality of service parameter; receiving a request from the first moving communication partner for a communication between the base station and the first moving communication partner, the request demanding a high quality of service; determining, based on the predicted quality of service profiles, if the high quality of service demanded by the request can be satisfied along the planned trajectory of the first moving communication partner by a direct communication between the base station and the first moving communication partner or by relaying the communication via one of the other moving communication partners; and establishing the communication between the base station and the first moving communication partner accordingly along the planned trajectory. The extension of the QoS map generation to direct communication between the moving communication partners allows for an improvement of the service quality for a communication between the base station and a moving communication partner. This is particularly helpful for applications with highly demanding QoS requirements, such as the application of tele-operated driving where video and audio plus a control stream need to be communicated. For determining if the high quality of service demanded by the request can be satisfied along the planned trajectory of the first moving communication partner by a direct communication between the base station and the first moving communication partner, the at least one quality of service parameter represented by the predicted quality of service profile for the communication between the base station and the first moving communication partner is compared with a threshold representing a minimum requirement which the quality of service parameter should satisfy for the requested high demanding communication. This makes it possible to decide whether the direct communication between the base station and the first moving communication partner is able to provide the required quality of service.
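The determination described above lends itself to a compact illustration. The following Python sketch is purely illustrative and not part of the claimed method: all names (profile_satisfies, choose_link, THRESHOLD) and the representation of a profile as (time, value) samples are assumptions introduced here. It merely shows one conceivable way a base station computer could choose between direct and relayed communication once predicted profiles are available.

# Minimal sketch of the decision between direct and relayed communication.
# All names, the threshold value and the data structures are illustrative
# assumptions, not part of the disclosed method.

THRESHOLD = -90.0  # hypothetical minimum receiving signal power in dBm

def profile_satisfies(profile, threshold=THRESHOLD):
    """A predicted QoS profile is a list of (time, value) samples; it
    satisfies the request only if every sample stays above the threshold."""
    return all(value >= threshold for _, value in profile)

def choose_link(uu_profile, relay_candidates):
    """Return ('direct', None) if the Uu link suffices along the planned
    trajectory, otherwise the first relay whose combined profile suffices."""
    if profile_satisfies(uu_profile):
        return "direct", None
    for relay_id, combined_profile in relay_candidates.items():
        if profile_satisfies(combined_profile):
            return "relayed", relay_id
    return "rejected", None

# Example: the Uu link drops below the threshold, relaying via V3 does not.
uu1 = [(0, -95.0), (1, -85.0), (2, -97.0)]
candidates = {"V3": [(0, -88.0), (1, -84.0), (2, -86.0)]}
print(choose_link(uu1, candidates))  # -> ('relayed', 'V3')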
For determining if the high quality of service demanded by the request can be satisfied along the planned trajectory of the first moving communication partner by relaying the communication via one of the other moving communication partners, a moving communication partner in the vicinity of the first moving communication partner is selected, a combined predicted quality of service profile is calculated, the combined predicted quality of service profile representing a temporal evolution of at least one quality of service parameter when the communication is relayed via the selected moving communication partner, and the at least one quality of service parameter represented by the combined predicted quality of service profile is compared with the threshold. This allows for quickly deciding if relaying is an option to guarantee the required quality of service for highly demanding communication applications. The method further comprises recording a position or time information from which point on the relayed communication should be started. This allows seamless switching between direct communication and relayed communication if the switching point is registered beforehand. The disclosed embodiments may be used in the field of V2X communication, where the moving communication partners are transportation vehicles equipped with an on-board communication module capable of performing transportation vehicle to everything communication V2X, including performing communication to the base station via Uu-link and direct communication from transportation vehicle to transportation vehicle via sidelink, e.g., PC5-link. In at least one disclosed embodiment, the request demanding a high quality of service corresponds to a requested tele-operated driving session. The disclosed embodiments also concern an apparatus for managing a communication between a base station of a cellular mobile communication system and a first moving communication partner, wherein the apparatus comprises a processing device which is adapted to perform a disclosed method. Such an apparatus may be exemplified with a base station computer. Disclosed embodiments further concern a computer program comprising program code, which, when run in a processing device, causes the processing device to perform the disclosed method. Another exemplary embodiment concerns a transportation vehicle comprising a communication module and a processing device. The processing device is adapted to form a message for requesting a communication from or to the base station, the request demanding a high quality of service. The communication module is adapted to send the formed message to the base station and to receive a message from the base station informing about the need of a relayed communication for the requested communication demanding the high quality of service. Furthermore, the processing device is adapted to register the need of relayed communication in a memory, where the registering information includes a position information or time information from which point on the relayed communication needs to be provided. The present description illustrates the principles of the present disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure.
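The disclosure does not prescribe a formula for the combined predicted quality of service profile. One plausible model, used in the following illustrative Python sketch, is to take per time step the weaker of the two hops, since an end-to-end relayed link can be no better than its worst segment; the sketch also derives the time information from which point on the relayed communication should be started. All names and the per-sample minimum rule are assumptions.

# Illustrative combination of two per-hop pQoS profiles (e.g., Uu3 and PC5).
# Taking the per-sample minimum is an assumption: the end-to-end relayed
# link is modelled as limited by its weaker hop.

def combine_profiles(uu_hop, pc5_hop):
    """Both profiles are lists of (time, value) with identical time grids."""
    return [(t1, min(v1, v2))
            for (t1, v1), (t2, v2) in zip(uu_hop, pc5_hop)]

def relay_start_time(direct_profile, threshold):
    """Time information from which point on the relayed communication
    should start: the first sample where the direct link violates the
    threshold."""
    for t, v in direct_profile:
        if v < threshold:
            return t
    return None  # direct link suffices everywhere

uu3 = [(0, -80.0), (1, -92.0), (2, -85.0)]
pc5 = [(0, -84.0), (1, -83.0), (2, -95.0)]
print(combine_profiles(uu3, pc5))   # [(0, -84.0), (1, -92.0), (2, -95.0)]
print(relay_start_time([(0, -95.0), (1, -85.0)], -90.0))  # 0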
All examples and conditional language recited herein are intended for educational purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Thus, for example, it will be appreciated by those skilled in the art that the diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. The functions of the various elements shown in the figures may be provided by the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, read only memory (ROM) for storing software, random access memory (RAM), and nonvolatile storage. Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context. In the claims hereof, any element expressed as a way for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The disclosure as defined by such claims resides in the fact that the functionalities provided by the various recited methods or mechanisms are combined and brought together in the way the claims call for. It is thus regarded that any method or mechanism that can provide those functionalities is equivalent to those shown herein. FIG.1shows the system architecture for the proposal. Reference number10denotes a user device. The depicted user device is exemplified as a transportation vehicle and, more particularly, as a passenger car. In other examples it may be differently exemplified, e.g., as a smart phone, a smart watch, a tablet computer, a notebook or laptop computer or the like. If exemplified with a transportation vehicle, it may be any type of vehicle. Examples of other types of vehicles are: buses, motorcycles, commercial vehicles, in particular trucks, agricultural machinery, construction machinery, rail vehicles, etc. The use of the disclosed embodiments would be generally possible in land vehicles, rail vehicles, watercraft and aircraft. The transportation vehicle10is equipped with an on-board connectivity module160including a corresponding antenna such that the transportation vehicle10can participate in any form of a mobile communication service.FIG.1illustrates that transportation vehicle10may transmit and receive signals to and from a base station210of a mobile communication service provider.
Such base station210may be, e.g., an eNodeB base station of an LTE (Long Term Evolution) or 5G mobile communication service provider. The base station210and the corresponding equipment are part of a mobile communication network with a plurality of network cells, where each cell is served by one base station210. The base station210inFIG.1is positioned close to one or a plurality of roads on which the transportation vehicles10are driving. Of course, other transportation vehicles may also drive on the road. In the terminology of LTE, a mobile terminal corresponds to a UE, which allows a user to access network services, connecting to the UTRAN or Evolved-UTRAN via the radio interface. Typically, such UE corresponds to a smart phone. Of course, mobile terminals are also used in the transportation vehicles10. The cars10are equipped with the on-board connectivity module OCU160. This OCU corresponds to an LTE or 5G communication module with which the transportation vehicle10can receive mobile data in downstream direction and can send such data in upstream direction. In terms of the LTE mobile communication system, the Evolved UMTS Terrestrial Radio Access Network E-UTRAN of LTE consists of a plurality of eNodeBs, providing the E-UTRA user plane (PDCP/RLC/MAC/PHY) and control plane (RRC) protocol terminations towards the UE. The eNodeBs are interconnected with each other by the so-called X2 interface. The eNodeBs are also connected by the so-called S1 interface to the EPC (Evolved Packet Core)200, more specifically to the MME (Mobility Management Entity) by the S1-MME and to the Serving Gateway (S-GW) by the S1-U interface. From this general architecture,FIG.1shows that eNodeB210is connected to the EPC200via the S1 interface and that EPC200is connected to the Internet300. A backend server320, to which the transportation vehicles10may send messages and from which they may receive messages, is also connected to the Internet300. In the field of cooperative and autonomous driving, the backend server320typically is located in a Control Center (CC). This also includes the application of tele-operated driving. “Tele-operated driving” means in this context that an external operator controls the transportation vehicle remotely. The external operator is located in the control center. There may be a large distance between the control center and the transportation vehicle. Control center and transportation vehicle are connected via a radio communication system and their backhaul. Primarily, the radio communication system is part of a public mobile communication system such as LTE or 5G. Tele-operated driving belongs to safety-related time-critical applications and the requirements for the exchange of information are low latency, high data rate and high reliability. ToD is seen as an enabler of automated driving, because it will solve deadlock situations in which the automated transportation vehicle gets caught. Finally, an infrastructure network component is also shown. This may be exemplified by RSU310. For the ease of implementation, it is considered that all components have been assigned an Internet address, typically as an IPv6 address, such that the packets transporting messages between the components can be routed, correspondingly. The various interfaces of the LTE or 5G network architecture are standardized. Reference is made in particular to the various LTE and 5G specifications, which are publicly available, for the sake of sufficiently disclosing further implementation details.
FIG.2shows an example scenario for a ToD session for one transportation vehicle in a mobile communication cell of a base station210.FIG.2depicts a suburban scenario, with a plurality of streets having intersections and buildings in between. There are three transportation vehicles depicted driving on the streets. The three transportation vehicles are labelled V1to V3. Transportation vehicle V1is experiencing a blocking situation, which does not allow the automated driving function inside transportation vehicle V1to move on. The reason for this is a truck12parking on the road for unloading goods. The street is quite narrow, such that there is not enough width left for the transportation vehicle V1to go forward and pass the truck. The autonomous driving function in transportation vehicle V1is not permitted to drive the transportation vehicle over the sidewalk to pass the narrow point. Hence, it is a deadlock situation for transportation vehicle V1. This calls for a ToD session demanded by transportation vehicle V1. Its future path is labelled with reference sign15. All transportation vehicles V1to V3are registered at base station210. The base station210is informed about all paths (dashed lines) from the three transportation vehicles V1to V3. Such information is consistently exchanged among autonomous transportation vehicles and base station210. This also includes the exchange of position information, heading information, and acceleration information, as will be explained in more detail later. FIG.3illustrates direct communication links Uu1to Uu3for communications between a base station210of a communication cell and three transportation vehicles V1to V3and two sidelink communication links PC5 between the transportation vehicles V1to V3for serving as relaying node towards a first transportation vehicle V1. FIG.4shows schematically a block diagram of the transportation vehicle's10board electronics system. Part of the board electronics system is an infotainment system which comprises: the touch-sensitive display unit20, a computing device40, an input unit50, and a memory60. The display unit20includes both a display area for displaying variable graphical information and an operator interface (a touch-sensitive layer arranged above the display area) for inputting commands by a user. The memory device60is connected to the computing device40via a further data line80. In the memory60, a pictogram directory and/or symbol directory is stored with the pictograms and/or symbols for possible overlays of additional information. The other parts of the infotainment system such as camera150, radio140, navigation device130, telephone120and instrument cluster110are connected via the data bus100with the computing device40. The high-speed variant of the CAN bus according to ISO standard 11898-2 is taken into consideration as data bus100. Alternatively, an Ethernet-based bus system such as IEEE 802.3cg may be used. Bus systems in which data are transmitted via optical fibers are also usable. Examples are the MOST Bus (Media Oriented System Transport) or the D2B Bus (Domestic Digital Bus). For inbound and outbound wireless communication, the transportation vehicle10is equipped with an on-board communication module160. This communication module160is often referred to as an on-board unit (OBU). It can be used for mobile communication, e.g., mobile communication according to the LTE standard or 5G standard. Reference numeral172denotes an engine control unit.
The reference numeral174corresponds to an ESC control unit corresponding to electronic stability control and the reference numeral176denotes a transmission control unit. The networking of such control units, all of which are allocated to the category of the drive train, typically occurs with the CAN bus system (controller area network)104. Since various sensors are installed in the transportation vehicle and these are no longer only connected to individual control units, such sensor data are also distributed via the bus system104to the individual control devices. However, the modern transportation vehicle can also have further components such as further surroundings scanning sensors like a LIDAR (Light Detection and Ranging) sensor186or RADAR (Radio Detection and Ranging) sensor182and more video cameras, e.g., a front camera, rear camera or side camera. Such sensors are used more and more in transportation vehicles for surroundings observation. Further control devices, such as an automatic driving control unit ADC184, etc., may be provided in the transportation vehicle. The RADAR182and LIDAR sensors186could be used for scanning a range up to 250 m or 150 m and the cameras cover a range from 30 to 120 m. The components182to186are connected to another communication bus102. The Ethernet-Bus is a choice for this communication bus102due to its higher bandwidth for data transport. One Ethernet-Bus adapted to the special needs of car communication is standardized in the IEEE 802.1Q specification. Moreover, a lot of information for surroundings observation may be received via V2V communication from other road participants. Particularly for those road participants that are not in line of sight (LOS) of the observing transportation vehicle, it is very beneficial to receive the information about their position and motion via V2V communication. Reference number190denotes an on-board diagnosis interface. For the purpose of transmitting the vehicle-relevant sensor data via the communication module160to another transportation vehicle or to a central computer320, the gateway30is provided. This is connected to the different bus systems100,102,104and106. The gateway30is adapted to convert the data it receives via the one bus into the transmission format of the other bus so that it can be distributed in the packets specified there. For the forwarding of this data to the outside, i.e., to another transportation vehicle or to control center computer320, the on-board unit160is equipped with the communication interface to receive these data packets and, in turn, to convert them into the transmission format of the correspondingly used mobile radio standard. The gateway30performs all the necessary format conversions if data are to be exchanged between the different bus systems. Under the considered scenario of cooperative or autonomous driving, the transportation vehicles broadcast so-called Cooperative Awareness Messages CAM, Collective Perception Messages CPM and Decentralized Environmental Notification Messages DENM periodically such that they are aware which other transportation vehicles are in the vicinity. Cooperative awareness messages contain important status information from a sending transportation vehicle, such as position, speed, heading, accelerating data, etc. Since CAM messages are standardized, more detailed information about CAM messages is provided in the ETSI standard ETSI EN 302 637-2. CAM information provides information about the traffic flow.
They are compressed and transmitted to the traffic control center320. Also a planned travel route or a section of a planned travel route may be delivered to the control center in one or more CAM messages. By aggregating these data, average values for the speed or the number of stops can be calculated. In one example application, the traffic lights can be controlled in a traffic-dependent manner. CPM messages are specified in ETSI TS 103 324, see also ETSI TR 103 562 V2.1.1 (2019-12). In the CPM messages, V2X vehicles equipped with local perception sensors broadcast their locally perceived objects in the surroundings derived from the analysis of the sensor data. Since the environment sensors deliver image information, the typical analysis algorithms correspond to image processing algorithms such as object recognition algorithms. DENM messages are specified in ETSI EN 302 637-3. Such a message contains a standardized warning, for example detailed information about a danger point or the traffic situation in the context of V2X communication. The base station210computer can fill a database with all this information from the plurality of transportation vehicles. The base station210computer further can predict the QoS for the communications of Uu links to certain transportation vehicles for sections of their travel routes. In the following, the process of data aggregation and QoS prediction on the side of the base station210is explained in more detail with the help of the flow chart ofFIG.5. The start of the computer program is labelled with reference sign S1. To enable the base station210to do the job, it needs to have channel quality reporting messages like CQI/PMI/RI reports, which are transmitted periodically at a certain interval from the registered subscribers in the mobile communication cell. Here, CQI means Channel Quality Indicator, PMI means Pre-coding Matrix Indicator, and RI means Rank Indicator. With CQI a subscriber reports to the base station210which modulation scheme and coding scheme it can support. To predict the downlink channel condition, the CQI report feedback from a UE is an input. CQI reporting can be based on PMI and RI messages. The higher the CQI value (from 0 to 15) reported by the UE, the higher the modulation order (from QPSK to 64QAM) and the higher the coding rate that will be used by the base station to achieve a higher efficiency. With a PMI report, a UE indicates to the base station210which precoding matrix should be used for downlink transmission, the number of transmission layers being determined by the RI report. In an RI report, a UE indicates to the base station the number of layers that should be used for downlink transmission to the UE. Furthermore, in addition to the above-mentioned classical LTE and 5G reporting messages, the transportation vehicles V1to V3transmit channel quality reporting messages about the sidelink transmissions, i.e., the transmissions via PC5 link. To date, these reporting messages concern classical network metrics, such as packet delivery ratio (PDR) and received signal strength indication (RSSI), which are frequently used for this purpose. Moreover, messages with information about the maximum allowed latency requirements may be reported to the base station210. This defines the maximum duration of time allowable between when information is available for transmission (by the sender) and when it is received by the receiver, e.g., 100 ms is a typical value for this. All these messages are received by the base station computer212.
A plurality of messages will be aggregated in operation at S2. In operation at S3the aggregated messages will be evaluated to update a base station owned database214. One example of such a database corresponds to a coverage map that informs about a certain QoS parameter such as receiving signal strength, signal to noise ratio or the like for the different Uu links or PC5 links. An example of a coverage map for this purpose is described in the reference “Nonparametric Radio Maps Reconstruction via Elastic Net Regularization with Multi-Kernels” from M. Gutierrez-Estevez, R. Cavalcante and S. Stanczak in 2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). This article describes the calculation of a coverage map for the signal to noise ratio SNR, evaluated with the quality measure NMSE, where NMSE means Normalized Mean Squared Error. The coverage map described in this paper can be calculated based on link layer information. An example for a higher layer-based QoS parameter with which a coverage map could also be calculated is the packet inter-reception (PIR) time. The PIR time is defined as the interval of time elapsed between two successful beacon receptions, and is promoted in the literature as a metric that describes the level of “situation-awareness” achieved onboard transportation vehicles more accurately than other parameters. With the coverage map determined in operation at S3, the base station computer calculates in operation at S4the predicted QoS parameters for the different Uu and PC5 links in its cell. For this purpose, the planned trajectories, which have also been delivered to the base station210in the data aggregation phase, need to be used. The pQoS parameters can be predicted for the uplink and downlink directions separately. It is noted that different frequency ranges may be used for uplink and downlink direction as well as different modulation schemes, etc. It is noted that the skilled person often distinguishes between trajectory information and future path information, e.g., as a navigation route. Here, the trajectory information is considered to be a very accurate description in time and space of where the object will be located in the future. In the field of transportation vehicle driving maneuvers, a trajectory typically is valid for the next 10 s. A future path like a GPS track is not that accurate in space and time but lasts longer, i.e., it may have a validity of several minutes or hours. Therefore, for the prediction of QoS parameters it makes a difference in terms of accuracy whether the trajectory information or the future path information is used. With regard to the scenario depicted inFIG.2, the base station computer212in operation at S4predicts the coverage which transportation vehicle V1will experience on the planned trajectory15and it does the same for transportation vehicles V2and V3.FIG.6Ashows a pQoS profile for the Uu1link.FIG.6Bshows a pQoS profile for the Uu3link.FIG.6Cshows a pQoS profile for the PC5 link for direct communication between transportation vehicles V3and V1. The pQoS parameter noted along the ordinate is the receiving signal power. It is however noted that a different QoS parameter may be used alternatively, such as Doppler compensation information, latency information, data rate information, throughput information, packet error rate information, signal to noise ratio, and packet inter-reception time (PIR). The time information is noted along the abscissa.
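Operations at S2 to S4 can be pictured with a toy grid-based coverage map that averages reported values per cell and is then sampled along a planned trajectory. The following Python fragment is only a sketch under simplifying assumptions (uniform grid, per-cell averaging, invented names); the SPAWC reference cited above uses a considerably more elaborate multi-kernel reconstruction.

# Illustrative grid-based coverage map: aggregate reported signal power
# per cell (S2/S3) and sample the map along a planned trajectory (S4).
# Grid resolution and averaging are assumptions made for brevity.
from collections import defaultdict

CELL = 50.0  # hypothetical grid resolution in metres

class CoverageMap:
    def __init__(self):
        self._sums = defaultdict(float)
        self._counts = defaultdict(int)

    def _key(self, x, y):
        return (int(x // CELL), int(y // CELL))

    def add_report(self, x, y, rx_power_dbm):
        k = self._key(x, y)
        self._sums[k] += rx_power_dbm
        self._counts[k] += 1

    def lookup(self, x, y):
        k = self._key(x, y)
        return self._sums[k] / self._counts[k] if self._counts[k] else None

def predict_profile(cov_map, trajectory):
    """trajectory: list of (time, x, y); returns the pQoS profile."""
    return [(t, cov_map.lookup(x, y)) for t, x, y in trajectory]

m = CoverageMap()
m.add_report(10.0, 10.0, -85.0)
m.add_report(120.0, 30.0, -95.0)
print(predict_profile(m, [(0, 12.0, 14.0), (1, 130.0, 33.0)]))
# -> [(0, -85.0), (1, -95.0)]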
In operation at S5it is checked if from any of the transportation vehicles V1to V3registered at the base station210a request for support by a ToD session has been received. If not, the program is ended in operation at S11. If yes, it will be checked for the transportation vehicle demanding the ToD session whether the pQoS profile is above a threshold Th. In the scenario ofFIG.2, the transportation vehicle V1will send the request for a ToD session as explained above. Here, the ToD application requires a minimum receiving signal power or a minimum requirement for a different pQoS parameter such as PIR time, etc. If this threshold might be violated based on the pQoS profile, then the base station210evaluates whether it can establish relaying support via other nodes (transportation vehicles, road side units310, traffic lights, etc.) to increase the received power. InFIG.6A, it is seen that the pQoS profile for the Uu1link violates the minimum requirement. The curve for the pQoS profile drops below the threshold Th for a certain time interval, increases again and stays above the threshold Th for another time period and drops below the threshold for the rest of the pQoS profile.FIG.6Bdepicts the Uu3link pQoS profile. Since transportation vehicle V3has the greatest distance to the base station210, the pQoS profile does not fulfill the minimum requirement at the beginning but is good enough for the rest of the time since according to the planned trajectory15transportation vehicle V3approaches the base station210.FIG.6Cshows a pQoS profile for the sidelink communication from transportation vehicle V3to V1. It shows a similar form with a drop at the beginning and an increase to sufficient QoS thereafter, since both transportation vehicles are approaching each other. Of course, the environments of the approaching transportation vehicles also play a role. But this can also be taken into consideration for QoS prediction based on the information in CPM and DENM messages as mentioned above. The checking of the Uu link pQoS profile for the requesting transportation vehicle happens in operation at S6. If the Uu link profile fulfills the requirement, it will be decided to use the Uu link for the requested ToD session. A corresponding entry will be set in a register memory in operation at S10. In the same operation, a corresponding message is sent to the demanding transportation vehicle V1to inform it about this selection. If it has been found in operation at S6that the pQoS profile does not fulfill the minimum requirements, operation at S7follows, in which the possibility of using relaying to fulfill the minimum requirement for the requested ToD session is tested. The base station210needs to know if the sidelink communication between V1and another transportation vehicle may be used to guarantee the minimum requirement for the ToD session. This is not always possible. Particularly, since with relaying some more latency will be added to the communication process, it might be possible that relaying is not an option even though the receiving signal power is good enough. For this reason, the higher layer indicator packet inter-reception time is a better choice for determining a pQoS profile. Optionally, the operation at S7includes an operation of calculating a combined pQoS profile out of the profiles for the Uu3and PC5 (V3, V1) link.FIG.6Dshows a resulting pQoS profile for the combination of Uu3and PC5 (V3, V1) link.
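The check performed in operations at S6 to S8 can be summarized as follows: relaying is accepted only if the combined profile stays above the threshold Th at every point where the direct Uu1profile violates it. The following sketch expresses this with the same hypothetical (time, value) profile representation used in the earlier sketches; it is illustrative only.

# Illustrative check for operation S8: relaying is only accepted if the
# combined profile stays above the threshold at every sample where the
# direct profile violates it. Names and data layout are assumptions.

def profiles_complement(direct, combined, threshold):
    """Both profiles share the same time grid as (time, value) pairs."""
    return all(vc >= threshold
               for (_, vd), (_, vc) in zip(direct, combined)
               if vd < threshold)

uu1      = [(0, -95.0), (1, -85.0), (2, -97.0)]
combined = [(0, -88.0), (1, -91.0), (2, -86.0)]
print(profiles_complement(uu1, combined, -90.0))  # True: the gaps of the
# direct profile (t=0 and t=2) are covered by the combined profile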
As seen inFIGS.6A and6D, where the Uu1link profile drops below the threshold Th, the combined profile exceeds the threshold Th. This shows that relaying could help to maintain the ToD session for the section of the trajectory15for which transportation vehicle V1has requested ToD support. In operation at S8it is checked whether the combined profile for relaying might be used at the trajectory section where the Uu1pQoS profile drops below the threshold Th. If not, the program is ended in operation at S11. If the result of query S8is that both profiles complement each other, a corresponding entry is set in the register memory in operation at S9and a message is sent to the demanding transportation vehicle V1. The ToD session will be invoked by transportation vehicle V1by sending a ToD session request message to the backend server320in the control center. FIG.7shows the message exchange between the control center with backend server320, the base station and the involved transportation vehicles V1and V3. First, the transportation vehicle V1in need of ToD support sends a request to base station210for a predicted QoS profile. The base station210answers either with a message to use the Uu link for the ToD session communication, if the pQoS profile is good enough, or with a message to use relaying support, if it is not. After that the transportation vehicle V1invokes the ToD session by sending a message to the backend server320in the control center. The backend server320starts sending ToD messages to the base station210. The base station210uses the Uu link or the relaying support for exchanging the ToD messages with transportation vehicle V1. It is to be understood that the proposed method and apparatus may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. Special purpose processors may include application specific integrated circuits (ASICs), reduced instruction set computers (RISCs) and/or field programmable gate arrays (FPGAs). Optionally, the proposed method and apparatus is implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage device. The application program may be uploaded to and executed by a machine comprising any suitable architecture. Optionally, the machine is implemented on a computer platform having hardware such as one or more central processing units (CPU), a random access memory (RAM), and input/output (I/O) interface(s). The computer platform also includes an operating system and microinstruction code. The various processes and functions described herein may either be part of the microinstruction code or part of the application program (or a combination thereof), which is executed via the operating system. In addition, various other peripheral devices may be connected to the computer platform such as an additional data storage device and a printing device. It should be understood that the elements shown in the figures may be implemented in various forms of hardware, software or combinations thereof. Optionally, these elements are implemented in a combination of hardware and software on one or more appropriately programmed general-purpose devices, which may include a processor, memory and input/output interfaces. Herein, the phrase “coupled” is defined to mean directly connected to or indirectly connected with through one or more intermediate components.
Such intermediate components may include both hardware and software based components. It is to be further understood that, because some of the constituent system components and method operations depicted in the accompanying figures may be implemented in software, the actual connections between the system components (or the process operations) may differ depending upon the way in which the proposed method and apparatus is programmed. Given the teachings herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the proposed method and apparatus.
REFERENCE SIGN LIST
10 Transportation vehicle
12 Truck
15 Planned trajectory
20 Touch Screen
30 Gateway
40 Computing Device
50 Operation Element Unit
60 Memory Unit
70 Data Line to Display Unit
80 Data Line to Memory Unit
90 Data Line to Operation Element Unit
100 1st Communication Bus
102 2nd Communication Bus
104 3rd Communication Bus
106 4th Communication Bus
110 Multifunction Display
120 Telephone
130 Navigation System
140 Radio
150 Camera
160 On-Board Communication Unit
172 Engine Control Unit
174 Electronic Stability Control Unit
176 Transmission Control Unit
182 RADAR Sensor
184 Automatic Driving Control Unit
186 LIDAR Sensor
190 On-Board Diagnosis Unit
200 Evolved Packet Core
210 Base Station
212 Base Station Computer
300 Internet
310 Road Side Unit
320 Backend Server
PC5 (V2, V1) 1st PC5 Communication Link
PC5 (V3, V1) 2nd PC5 Communication Link
Uu1-Uu3 Uu Communication Links
V1-V3 Transportation vehicles
S1-S11 Various Method Operations of a Computer Program
46,315
11943665
To facilitate understanding, identical reference numerals have been used to designate elements having substantially the same or similar structure and/or substantially the same or similar function. DETAILED DESCRIPTION The description and drawings illustrate the principles of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its scope. Furthermore, all examples recited herein are principally intended expressly to be for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor(s) to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Additionally, the term, “or,” as used herein, refers to a non-exclusive or (i.e., and/or), unless otherwise indicated (e.g., “or else” or “or in the alternative”). Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments. FIG.1is a block diagram of an example wireless local area network (WLAN)10, according to an embodiment. Such a WLAN10may need to be able to update operating parameters across a range of different versions of Wi-Fi or IEEE 802.11. An access point (AP)14-1includes a host processor15coupled to a network interface16. The network interface16includes a medium access control (MAC) processing unit18and a physical layer (PHY) processing unit20. The PHY processing unit20includes a plurality of transceivers21, and the transceivers21are coupled to a plurality of antennas24. Although three transceivers21and three antennas24are illustrated inFIG.1, the AP14may include different numbers (e.g., 1, 2, 4, 5, etc.) of transceivers21and antennas24in other embodiments. The WLAN10may include multiple APs14-1,14-2,14-3as shown, but any number of APs14may be included in WLAN10. The WLAN10includes a plurality of client stations (STA)25. Although four client stations25are illustrated inFIG.1, the WLAN10may include different numbers (e.g., 1, 2, 3, 5, 6, etc.) of client stations25in various scenarios and embodiments. The WLAN10may also include an AP multi-link device (MLD), where one AP MLD includes multiple affiliated APs, and client STA multi-link devices (MLDs), where one STA MLD includes multiple affiliated STAs. Two or more of the STAs of an STA MLD25are configured to receive corresponding data streams that are transmitted simultaneously by the AP14. Additionally, two or more of the STAs of an STA MLD25are configured to transmit corresponding data streams to one AP MLD14such that the AP MLD14simultaneously receives the data streams. Also, the STAs of a client station MLD25are configured to receive data streams that are transmitted simultaneously by multiple APs of one AP MLD14. Likewise, the STAs of an STA MLD25may transmit data streams simultaneously to the multiple APs of an AP MLD14. A client station25-1includes a host processor26coupled to a network interface27. The network interface27includes a MAC processing unit28and a PHY processing unit29. The PHY processing unit29includes a plurality of transceivers30, and the transceivers30are coupled to a plurality of antennas34. Although three transceivers30and three antennas34are illustrated inFIG.1, the client station25-1may include different numbers (e.g., 1, 2, 4, 5, etc.)
of transceivers30and antennas34in other embodiments. In an embodiment, one or more of the client stations25-2,25-3, and25-4has a structure the same as or similar to the client station25-1. In these embodiments, the client stations25structured like the client station25-1have the same or a different number of transceivers and antennas. For example, the client station25-2has only two transceivers and two antennas (not shown), according to an embodiment. In an embodiment, the APs14and the client stations25contend for the communication medium using the carrier sense multiple access with collision avoidance (CSMA/CA) protocol or another suitable medium access protocol. Further, in an embodiment, the APs14or a client station25dynamically selects a bandwidth for a transmission based on channels available for the transmission. In an embodiment, the APs14are configured to simultaneously transmit different orthogonal frequency division multiplexing (OFDM) units to different client stations25by forming an OFDM access (OFDMA) data unit that includes the different OFDM data units modulated in respective sub-channel blocks of the OFDMA data unit. In an embodiment, the AP14allocates different sub-channels to different client stations and forms the OFDMA data unit that includes the OFDM data units directed to the different client stations by modulating them in the sub-channel blocks corresponding to the sub-channels assigned to the client stations. In an embodiment, the APs14are configured to simultaneously transmit different OFDM units to different client stations25by transmitting the different OFDM data units via different space time streams of a MU-MIMO communication channel. In an embodiment, the AP14allocates different sub-channels (i.e., space time streams) to different client stations, forms the OFDM data units and modulates the different OFDM data units onto the space time streams corresponding to the sub-channels assigned to the client stations. Various iterations of the 802.11 specification are referred to herein. IEEE 802.11ac is referred to as very high throughput (VHT). IEEE 802.11ax is referred to as high efficiency (HE). IEEE 802.11be is referred to as extreme high throughput (EHT). The terms VHT, HE, and EHT will be used in the descriptions found herein. As described above, an AP MLD operates on multiple links where each link has one AP affiliated with the AP MLD. This may be accomplished by having multiple different radios. A STA MLD operates on one or multiple links where each link has one STA affiliated with the STA MLD. One way to implement the STA MLD is using two or more radios, where each radio is associated with a specific link. Another way to implement the STA MLD is using a single radio in two different bands. Each band may be associated with a specific link. In this case only one link is available at a time. In yet another implementation, an enhanced single-radio (ESR) STA MLD may be used. The ESR STA MLD uses two radios in different bands to implement the STA. For example, one radio may be a lower cost radio with lesser capabilities and the other radio may be a fully functional radio supporting the latest protocols. The ESR STA MLD may dynamically switch its working link, while it can only transmit or receive through one link at any time. The ESR STA MLD may monitor two links simultaneously, for example, detecting medium idle/busy status of each link, or receiving a PHY Protocol Data Unit (PPDU) on each link.
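Returning to the OFDMA allocation described above, the following toy sketch assigns one sub-channel per client station and groups the per-station payloads into a single downlink data unit. It is a deliberately simplified model with invented names; actual resource unit sizes and scheduling policies in HE/EHT are far richer.

# Toy sketch of OFDMA downlink allocation: each client station gets one
# sub-channel, and the per-station payloads travel in a single OFDMA data
# unit. Sub-channel names and the data layout are illustrative only.

def build_ofdma_unit(subchannels, pending):
    """subchannels: list of identifiers; pending: {station: payload}."""
    allocation = {}
    for subch, (station, payload) in zip(subchannels, pending.items()):
        allocation[subch] = (station, payload)
    return allocation

unit = build_ofdma_unit(
    ["RU-1", "RU-2", "RU-3"],
    {"STA-25-1": b"video", "STA-25-2": b"voice"},
)
for subch, (station, payload) in unit.items():
    print(subch, "->", station, payload)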
Each radio may have its own backoff time, and when the backoff counter for one of the radios becomes zero, that radio and link may be used for transmission. For example, if an AP wants to use the fully functional radio, it may send a control frame that is long enough for the ESR STA MLD to switch from the lesser capable radio to the fully functional radio, which may then transmit data to the AP. Unscheduled automatic power save delivery (U-APSD) is defined in current IEEE 802.11 for power save operation. STAs may use U-APSD to have some or all of their buffer units (BUs) delivered during unscheduled service periods (SPs). If there is no unscheduled SP in progress, the unscheduled SP begins when the AP receives a trigger frame from a STA, which is a quality of service (QoS) Data or QoS Null frame using an access category (AC) the STA has configured to be trigger-enabled. An aggregated MAC protocol data unit (A-MPDU) that contains one or more trigger frames acts as a trigger frame. An unscheduled SP ends after the AP has attempted to transmit at least one BU using a delivery-enabled AC and destined for the STA, but no more than the number indicated in the Max SP Length field of the QoS Capability element of the STA's (Re)Association Request frame, if the field has a nonzero value. At every beacon interval, the AP shall assemble a partial virtual bitmap containing the buffer status per destination for STAs in the PS mode and shall send this out in the traffic indication map (TIM) element of the Beacon frame. At every beacon interval, the APSD-capable AP shall assemble the partial virtual bitmap containing the buffer status of nondelivery-enabled ACs (if there exists at least one nondelivery-enabled AC) per destination for STAs in PS mode, and the APSD-capable AP shall send this out in the TIM element of the Beacon frame. When all ACs are delivery-enabled, the APSD-capable AP shall assemble the partial virtual bitmap containing the buffer status for all ACs per destination. When the STA detects that the bit corresponding to its AID is 1 in the TIM, the STA shall issue a PS-Poll frame or a trigger frame (if the STA is using U-APSD and all ACs are delivery-enabled) to retrieve the buffered BU. At each unscheduled SP for a STA, the AP shall attempt to transmit at least one BU from delivery-enabled ACs destined for the STA, but no more than the value specified in the Max SP Length field in the QoS Capability element. During the standardization of the IEEE 802.11be protocol, the following ideas have been discussed. A bit in a partial virtual bitmap of a traffic indication map (TIM) element (hereinafter TIM bit) that corresponds to a non-AP MLD is set to 1 if any individually addressed BUs for the non-AP MLD are buffered by the AP MLD. When a non-AP MLD makes a multi-link setup with an AP MLD, one association ID (AID) is assigned to the non-AP MLD across all links. Each STA of an MLD may independently select and manage its operational parameters unless specified otherwise in the IEEE 802.11be standard. The use of MLD devices raises issues with specifying QoS. The QoS capability element includes a QoS info field.FIG.2illustrates the QoS info field200. The QoS info field200includes four one bit AC flags: AC voice (AC_VO)205, AC video (AC_VI)210, AC background (AC_BK)215, and AC best effort (AC_BE)220. The QoS info field200further includes a one bit Q-Ack field225, a two bit Max SP Length field230, and a one bit More Data Ack field235. This is the QoS Info field sent by a non-AP STA.
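The single-link TIM rule described above reduces to a small amount of logic: the bit for a STA is set when BUs of nondelivery-enabled ACs are buffered, or when any BU is buffered and all ACs are delivery-enabled. The following Python sketch (illustrative names only, not standard-defined code) captures this rule.

# Sketch of the single-link TIM rule: the bit for a STA is set when BUs of
# nondelivery-enabled ACs are buffered, or when any BU is buffered and all
# ACs are delivery-enabled. Names and set representation are assumptions.

def tim_bit(buffered_acs, delivery_enabled_acs, all_acs):
    """buffered_acs: set of ACs with buffered BUs for this STA."""
    nondelivery = set(all_acs) - set(delivery_enabled_acs)
    if nondelivery:
        return int(bool(buffered_acs & nondelivery))
    return int(bool(buffered_acs))  # all ACs delivery-enabled

ACS = {"AC_VO", "AC_VI", "AC_BK", "AC_BE"}
print(tim_bit({"AC_VI"}, {"AC_VO"}, ACS))           # 1: AC_VI nondelivery
print(tim_bit({"AC_VO"}, {"AC_VO", "AC_VI"}, ACS))  # 0: buffered BU is
# for a delivery-enabled AC while nondelivery-enabled ACs exist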
Each of the AC U-APSD Flag subfields AC_VO205, AC_VI210, AC_BK215, and AC_BE220is set to 1 in (Re)Association Request frames to indicate that the corresponding AC is both trigger-enabled and delivery-enabled. They are set to 0 in (Re)Association Request frames to indicate that the corresponding AC is neither trigger-enabled nor delivery-enabled. If a non-AP MLD's QoS capability on different links is set independently, it is possible that U-APSD flags for ACs on different links may be set in different ways. A TIM bit corresponding to a non-AP MLD indicates buffered BU status for a non-AP MLD. However, for each AC, if U-APSD flags are set differently for different links, the TIM indication may result in conflicting information. An operation example will now be described showing such a conflict. A non-AP MLD operates on link1and link2. On link1, a first AC is set to a delivery-enabled AC while some other ACs are set to nondelivery-enabled ACs. On link2, the first AC is set to a nondelivery-enabled AC. If a serving AP MLD has a buffered BU for the non-AP MLD, whose access category is the first AC: from the link1perspective, as the first AC is a delivery-enabled AC, the AP MLD is not supposed to set the TIM bit corresponding to the non-AP MLD to 1; however from the link2perspective, as the first AC is a nondelivery-enabled AC, the AP MLD is supposed to set the TIM bit corresponding to the non-AP MLD to 1. As there is only one TIM indication for the non-AP MLD, the AP cannot decide how to set the TIM bit for the non-AP MLD in this situation because different links require the TIM bit to be set differently. Various options for solving this problem will now be described. In a first option, for a non-AP MLD, U-APSD related QoS capabilities shall be set to the same values across all links. More specifically, if a STA is affiliated with a non-AP MLD, the non-AP MLD shall have the same U-APSD Flag value for each AC across all setup links. For example, if a first AC is set to a trigger-enabled AC on a first link of the non-AP MLD, the first AC shall be set to a trigger-enabled AC on all other links of the non-AP MLD that a multi-link has set up with an AP MLD. If a first AC is set to a nontrigger-enabled AC on a first link of the non-AP MLD, the first AC shall be set to a nontrigger-enabled AC on all other links of the non-AP MLD that a multi-link is set up with an AP MLD. If a first AC is set to a delivery-enabled AC on a first link of the non-AP MLD, the first AC shall be set to a delivery-enabled AC on all other links of the non-AP MLD that a multi-link is set up with an AP MLD. If a first AC is set to a nondelivery-enabled AC on a first link of the non-AP MLD, the first AC shall be set to a nondelivery-enabled AC on all other links of the non-AP MLD that a multi-link is set up with an AP MLD. When the non-AP MLD transmits a first frame including one or more QoS capability elements, wherein the one or more QoS capability elements indicates the non-AP MLD's QoS capability on multiple links that the non-AP MLD supports, the following subfields of the QoS capability element for different links may be set to the same value across all the multiple links: the AC_VO U-APSD Flag; the AC_VI U-APSD Flag; the AC_BK U-APSD Flag; the AC_BE U-APSD Flag; (the Max SP Length); and (the More Data Ack). In one embodiment, the first frame is an Association Request frame and/or Reassociation Request frame.
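The first option can be read as a consistency constraint on the per-link QoS capability: every U-APSD related subfield must carry one common value on all setup links. The following sketch, with hypothetical field names, validates such a constraint; it is illustrative only.

# Sketch of the first option: U-APSD related QoS capability subfields of a
# non-AP MLD must be identical across all setup links. Field names below
# are illustrative assumptions.

UAPSD_FIELDS = ("AC_VO", "AC_VI", "AC_BK", "AC_BE",
                "MaxSPLength", "MoreDataAck")

def uapsd_consistent(per_link_caps):
    """per_link_caps: {link_id: {field: value}}; True if every U-APSD
    subfield carries one common value on all setup links."""
    links = list(per_link_caps.values())
    return all(len({caps[f] for caps in links}) == 1 for f in UAPSD_FIELDS)

caps = {
    "link1": dict(AC_VO=1, AC_VI=1, AC_BK=0, AC_BE=0,
                  MaxSPLength=2, MoreDataAck=0),
    "link2": dict(AC_VO=1, AC_VI=0, AC_BK=0, AC_BE=0,
                  MaxSPLength=2, MoreDataAck=0),
}
print(uapsd_consistent(caps))  # False: AC_VI differs between the links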
In another embodiment, the QoS capability elements for different links transmitted by the non-AP MLD shall have the same value across all the multiple links. When the non-AP MLD transmits an add traffic stream (ADDTS) Request frame for an AC to the AP MLD, the APSD subfield and the Schedule subfield value for different links shall be set to the same value. In one embodiment, a single APSD subfield indicates the APSD status for all links and a single Schedule subfield indicates the scheduling status for all links of the non-AP MLD. In another embodiment, a single APSD/Schedule subfield indicates the APSD/scheduling status for one link and the APSD subfields of all different links are set to the same value. In another embodiment, a TS (Traffic Stream) is set up per non-AP MLD. The same TS parameters are applied to all links that the non-AP MLD set up during the multi-link setup with the AP MLD. The same traffic classification (TCLAS)/traffic specification (TSPEC) are applied to all links that the non-AP MLD set up during the multi-link setup with the AP MLD. In a second option, an AP MLD sets a TIM bit corresponding to a non-AP STA to 1 if the conditions for setting the TIM bit to 1 are satisfied on any link for the non-AP STA. The TIM bit is set to 1 if the following conditions for a link are met on any link within the set of links of the multi-link setup for the non-AP STA: the AP MLD has buffered BUs of nondelivery-enabled ACs for the non-AP MLD if there exists at least one nondelivery-enabled AC on the link; and the AP MLD has buffered BUs for the non-AP MLD when all ACs are delivery-enabled. For each link within the set of links, each AC can be categorized as either a delivery-enabled AC or a nondelivery-enabled AC independently. In a third option, an AP MLD sets a TIM bit corresponding to a non-AP STA to 1 if the conditions for setting the TIM bit to 1 are satisfied on all links for the non-AP STA. The TIM bit is set to 1 if the following conditions for a link are met on all links within the set of links of the multi-link setup for the non-AP STA: the AP MLD has buffered BUs of nondelivery-enabled ACs for the non-AP MLD if there exists at least one nondelivery-enabled AC on the link; and the AP MLD has buffered BUs for the non-AP MLD when all ACs are delivery-enabled. For each link within the set of links, each AC can be categorized as either a delivery-enabled AC or a nondelivery-enabled AC independently. Issues that arise with the listen interval during multi-link operation will now be described. The Listen Interval field is used to indicate to the AP how often a non sub 1 GHz (non-S1G) STA in power save mode wakes to listen to Beacon frames. An AP may delete buffered BUs for implementation dependent reasons, including the use of an aging function and availability of buffers. The AP may base the aging function on the listen interval indicated by the STA in its (Re)Association Request frame or the wireless network monitoring (WNM) sleep interval specified by the non-AP STA in the WNM Sleep Mode Request frame. Also, the WNM Sleep Interval field indicates to the AP how often a STA in WNM sleep mode wakes to receive Beacon frames, defined as the number of delivery traffic indication map (DTIM) intervals. STAs in WNM sleep mode can wake up as infrequently as once every WNM sleep interval to check whether the corresponding TIM bit is set or group addressed traffic is pending.
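The second and third options differ only in whether the single-link condition must hold on any link or on all links of the setup. Reusing the per-link rule from the earlier sketch, both options reduce to an any/all combination, as the following illustrative fragment shows (names are assumptions).

# Sketch of the second and third options: evaluate the single-link TIM
# condition per setup link, then combine with any() (option 2) or all()
# (option 3). The helper mirrors the earlier single-link sketch.

def link_condition(buffered_acs, delivery_enabled_acs, all_acs):
    nondelivery = set(all_acs) - set(delivery_enabled_acs)
    if nondelivery:
        return bool(buffered_acs & nondelivery)
    return bool(buffered_acs)

def tim_bit_mld(per_link, all_acs, mode="any"):
    """per_link: {link: (buffered_acs, delivery_enabled_acs)}."""
    results = (link_condition(b, d, all_acs) for b, d in per_link.values())
    return int(any(results) if mode == "any" else all(results))

ACS = {"AC_VO", "AC_VI", "AC_BK", "AC_BE"}
links = {"link1": ({"AC_VO"}, {"AC_VO"}),  # AC_VO delivery-enabled here
         "link2": ({"AC_VO"}, set())}      # AC_VO nondelivery-enabled here
print(tim_bit_mld(links, ACS, "any"))  # 1 (option 2)
print(tim_bit_mld(links, ACS, "all"))  # 0 (option 3)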
However, for a non-AP MLD for which a multi-link setup exists with an AP MLD, as the non-AP MLD's PS-Poll/Trigger frame transmission can happen on any of the setup links, it is possible that the non-AP MLD triggers the buffered BU transmission on other links before the listen interval on a link expires. Various solutions regarding the use of a listen interval during multi-link operation will now be described. In a first solution, a non-AP MLD maintains one listen interval value across all links of the multi-link setup with an AP MLD; or, a non-AP MLD maintains one listen interval value for a TID (or AC) across all links to which the TID (or AC) is mapped. More specifically, when a (re)association is for a multi-link (re)setup, the Listen Interval field is used to indicate to the AP MLD how often at least one STA affiliated with a non-AP MLD wakes to listen to Beacon frames if all STAs affiliated with the non-AP MLD and associated with the multi-link (re)setup are in power save mode. An AP MLD uses the listen interval in determining the lifetime of frames that it buffers for a non-AP MLD. Any AP MLD aging function shall not cause the buffered BUs to be discarded after any period that is shorter than that indicated by the non-AP MLD for which the BUs are buffered in the Listen Interval field of its (Re)Association Request frame. If all STAs operating on enabled links and affiliated with the non-AP MLD that is associated with the multi-link (re)setup are in power save mode, at least one of these STAs shall wake up to receive at least one Beacon frame scheduled for transmission within the interval of duration equal to the listen interval indicated by the non-AP MLD in its (Re)Association Request frame, starting from the last target beacon transmission time (TBTT) for which another STA or the same STA affiliated with the MLD was awake. The following variations may be implemented as well. When a non-AP MLD transmits a frame that includes one or more Listen Interval fields for a traffic ID (TID) (or AC), the one or more Listen Interval field(s) indicate to the AP MLD how often the non-AP MLD in power save mode wakes to listen to Beacon frames. The frame may be an Association Request frame and/or Reassociation Request frame. The frame may include only one Listen Interval field, and the Listen Interval field indicates how often the non-AP MLD in power save mode wakes to listen to Beacon frames. The Beacon frames may be Beacon frames on any active link of the multi-link setup that the non-AP MLD made with an AP MLD. Alternatively, the frame includes more than one Listen Interval field, and the minimum value of the more than one Listen Interval fields indicates how often the non-AP MLD in power save mode wakes to listen to Beacon frames. Alternatively, the frame includes more than one Listen Interval field, and the maximum value of the more than one Listen Interval fields indicates how often the non-AP MLD in power save mode wakes to listen to Beacon frames. The value of the Listen Interval field is in units of a reference beacon interval. In one variation, the beacon intervals of all links are the same. In another variation, the reference beacon interval is the beacon interval of the link on which the frame is transmitted. In another variation, each of the more than one Listen Interval fields corresponds to one link within the set of links of the multi-link setup with the AP MLD, and the reference beacon interval for each Listen Interval field for a link is the beacon interval of that link.
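Most of the listen interval variations above amount to deriving one effective value from the per-link fields. The following sketch (names and units are assumptions) derives the effective wake interval under the minimum and maximum variations, together with the latest time by which one affiliated STA must wake.

# Sketch of the listen-interval variations: a (Re)Association Request may
# carry several Listen Interval fields; the effective MLD-level value is
# their minimum or maximum, in units of a reference beacon interval.
# Names and units here are illustrative assumptions.

def effective_listen_interval(fields, variation="min"):
    """fields: list of per-link Listen Interval values (beacon intervals)."""
    if len(fields) == 1:
        return fields[0]
    return min(fields) if variation == "min" else max(fields)

def next_wake_time(last_tbtt, beacon_interval_ms, listen_interval):
    """Latest time by which at least one affiliated STA must wake up to
    receive a Beacon frame, per the MLD-level rule described above."""
    return last_tbtt + listen_interval * beacon_interval_ms

fields = [10, 4, 8]
li = effective_listen_interval(fields, "min")
print(li)                          # 4
print(next_wake_time(0, 100, li))  # 400 ms after the last TBTT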
In a second solution that is similar to the first solution, a non-AP MLD indicates a listen interval value of one or more links to an AP MLD, and when the non-AP MLD is in power save mode on the one or more links, the non-AP MLD listens, at least once within the listen interval value, to Beacon frames of the AP MLD on any link that indicates the information of the one or more links. In a third solution for implementing a listen interval, a non-AP MLD maintains one WNM sleep interval value across all links of the multi-link setup with an AP MLD. More specifically, the WNM sleep interval advertised by a STA of a non-AP MLD is applied at the MLD level and the WNM procedures are performed at the MLD level and apply to all the STAs affiliated with the MLD. All STAs affiliated with an MLD shall advertise the same WNM Sleep Mode capability. The WNM sleep state is maintained at the MLD level and WNM sleep mode procedures are performed at the MLD level and apply to all the STAs affiliated with the MLD. The AP MLD may delete buffered BUs for implementation dependent reasons, including the use of an aging function and availability of buffers, where the aging function is based on the listen interval indicated by the non-AP MLD in its (Re)Association Request frame or the WNM sleep interval specified by the non-AP MLD in the WNM Sleep Mode Request frame. For example, when a non-AP MLD transmits a frame that includes one or more WNM Sleep Interval subfields, the one or more WNM Sleep Interval subfields indicate to the AP MLD how often the non-AP MLD in WNM sleep mode wakes to receive Beacon frames. In one variation, the frame is a (possibly modified version of the) WNM Sleep Mode Request frame. In another variation, the Beacon frames may be Beacon frames on any active link of the multi-link setup that the non-AP MLD made with an AP MLD. In another variation, the frame includes only one WNM Sleep Interval subfield, and this subfield indicates how often the non-AP MLD in WNM sleep mode wakes to listen to Beacon frames. In another variation, the frame includes more than one WNM Sleep Interval subfield, and the minimum value of the more than one WNM Sleep Interval subfields indicates how often the non-AP MLD in WNM sleep mode wakes to listen to Beacon frames. In another variation, the frame includes more than one WNM Sleep Interval subfield, and the maximum value of the more than one WNM Sleep Interval subfields indicates how often the non-AP MLD in WNM sleep mode wakes to listen to Beacon frames. In another variation, the value of the WNM Sleep Interval subfield is in units of a reference DTIM interval. The DTIM intervals of all links may be the same; the reference DTIM interval may be the DTIM interval of the link on which the frame is transmitted; or each of the more than one WNM Sleep Interval subfields corresponds to one link within the set of links of the multi-link setup with the AP MLD, and the reference DTIM interval for each WNM Sleep Interval subfield for a link is the DTIM interval of that link. In another variation, when the non-AP MLD is in WNM sleep mode on one or more links, the non-AP MLD checks whether the corresponding TIM bit is set or group addressed traffic is pending from the AP MLD at intervals not longer than the WNM sleep interval value.
During the standardization of IEEE 802.11be, multi-link devices (MLDs) with different capabilities were defined: a single link/radio non-AP MLD is a non-AP MLD that supports operation on more than one link but receives or transmits frames only on one link at a time; a nonsimultaneous transmit and receive (NSTR) link pair (NSTR MLD) is a pair of links for which a STA of an MLD has indicated a nonsimultaneous transmit and receive relationship as defined in 35.3.12.3 of IEEE 802.11be (each link of such a pair is a member of the NSTR link pair); enhanced multi-link single radio (eMLSR) operation is as defined in 35.3.13 of the IEEE 802.11be_D0.1 standard; and enhanced multi-link multi radio (eMLMR) operation is as defined in 35.3.14 of the IEEE 802.11be_D0.1 standard. Issues related to implementing power save operation for an NSTR MLD will now be described. If an AP MLD has a buffered BU for a non-AP MLD that is an NSTR MLD on a pair of links, a corresponding bit in a virtual bitmap of a TIM element in a Beacon frame on a link from the AP MLD is set to 1. When the non-AP MLD identifies that the corresponding bit is equal to 1 in the virtual bitmap of the TIM element, STAs on one or more links (within the pair of links) are supposed to send a PS-Poll frame or a U-APSD trigger frame to the serving AP(s) to indicate that the one or more STAs are awake and ready to receive the buffered BU. However, because of the NSTR property of the non-AP MLD, it is hard for the non-AP MLD to send the PS-Poll or the U-APSD trigger frame on multiple links. FIG. 3 illustrates an operation example of the issue with power save mode using MLDs. A first link (link1) 310 is set up between AP1 of an AP MLD and STA1 of a non-AP MLD. Likewise, a second link (link2) 340 is set up between AP2 of the AP MLD and STA2 of the non-AP MLD. The non-AP MLD (STA1 on link1 310 and STA2 on link2 340) is an NSTR MLD on link1 310 and link2 340. The AP MLD (AP1 on link1 310 and AP2 on link2 340) indicates in a Beacon frame 311 on link1 310 that the AP MLD has buffered BUs for the non-AP MLD. The non-AP MLD intends to transmit trigger frames 320 on both link1 310 and link2 340. On link1 310, the channel is idle and backoff ends. However, on link2 340, the channel is busy 341. The non-AP MLD transmits trigger frame Trg1 320 on link1 310 first. AP1 then sends an acknowledgement (ACK) frame 312 followed by a request to send (RTS) frame 313. In response, STA1 sends a clear to send (CTS) frame 321. Due to the transmission of frame Trg1 320 on link1 310, link2 340 becomes blind. Therefore, the non-AP MLD holds off transmission on link2 340 for a predetermined time (T0). Before the non-AP MLD transmits a trigger frame on link2 340, the AP MLD transmits DL frame 314 on link1 310 only. Due to the Tx/Rx on link1 310, link2 340 becomes blind again, and thus, the non-AP MLD cannot transmit the trigger frame until the end of the DL frame exchange, which ends when STA1 sends a block acknowledgement frame (BA) 322. Therefore, the non-AP MLD can only use link1 310 for the DL frame exchange. This operation example illustrates the issues that arise from an AP MLD perspective. When the AP MLD receives a trigger frame on a link from a non-AP MLD that is an NSTR MLD, several different operation scenarios are possible for the non-AP MLD. In a first scenario, the non-AP MLD intends to wake up on the link only, and the non-AP MLD stays in the Doze state on the other link. In this case, the AP MLD should transmit DL frames on the link only. In a second scenario, the non-AP MLD intends to wake up on both links.
However, the backoff counter becomes zero on only one link, and the backoff counter takes more time to expire on the other link. In this case, the AP MLD should wait further for the trigger frame on the other link. As the AP MLD cannot identify which scenario the non-AP MLD is in, it is not clear how the AP MLD should operate when the AP MLD receives a trigger frame only on one link from the non-AP MLD. Four different solutions to overcome these issues will now be described. In a first solution, when a non-AP MLD that is an NSTR MLD intends to transmit a trigger frame on more than one link, the start times of the trigger frame transmissions on the more than one link are aligned. This first solution may include the following different variations: when an AP MLD receives a trigger frame on one link from the non-AP MLD, the AP MLD considers that the non-AP MLD intends to be in the Awake state on one link only; when an AP MLD receives a trigger frame on a set of links from the non-AP MLD simultaneously, the AP MLD considers that the non-AP MLD intends to be in the Awake state on the set of links; when an AP MLD receives a trigger frame on one link from the non-AP MLD, the AP MLD initiates a DL frame transmission to the non-AP MLD without waiting for the reception of the trigger frame on other link(s) from the non-AP MLD; or when the non-AP MLD intends to transmit the trigger frame on more than one link, any kind of backoff mechanism that enables the non-AP MLD to align the start time of the trigger frame transmission can be used. FIGS. 4 and 5 illustrate two different operational examples of the first solution of Tx time alignment for the trigger frame. In FIGS. 4 and 5, similar numbers are used to describe similar elements as those found in FIG. 3. A non-AP MLD (STA1 on link1 410 or 510 and STA2 on link2 440 or 540) is an NSTR MLD on link1 410 or 510 and link2 440 or 540. The AP MLD (AP1 on link1 410 or 510 and AP2 on link2 440 or 540) indicates in a Beacon frame 411 or 511 on link1 410 or 510 that the AP MLD has a buffered BU for the non-AP MLD. In FIG. 4, link2 440 of the NSTR MLD is busy at the beginning, and the NSTR MLD waits until the backoff counter values of both link1 410 and link2 440 become zero before transmitting trigger frames 420 and 442 on both link1 410 and link2 440. Each link then goes through the ACK, RTS, CTS, DATA, and BA frame sequence as described with respect to FIG. 3. As a result, DL transmission occurs on both link1 410 and link2 440. In FIG. 5, link2 540 of the NSTR MLD is busy at the beginning, and the NSTR MLD decides to transmit the trigger frame 520 on link1 510 only, as in FIG. 3. Thus, DL transmission happens on link1 510 only.
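A minimal sketch of the first solution follows: the non-AP MLD defers until the backoff counters of all intended links have reached zero, so the trigger frames start simultaneously, as in FIG. 4. The per-slot loop and the link object methods are illustrative assumptions rather than an 802.11be API.

    # Sketch: align the start time of trigger frame transmission (first solution).
    def transmit_triggers_aligned(links):
        # Decrement each link's backoff only while its medium is idle; hold
        # links that finished early until every counter reaches zero.
        while any(link.backoff_counter > 0 for link in links):
            for link in links:
                if link.backoff_counter > 0 and link.medium_idle():
                    link.backoff_counter -= 1  # one idle slot elapses
        for link in links:
            link.send_trigger()  # start times aligned across the set of links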
In a second solution, when a non-AP MLD that is an NSTR MLD transmits a trigger frame on a link, the trigger frame indicates whether or not the non-AP MLD is in the Awake state on the other link(s) among a set of links (e.g., the NSTR link pair). This second solution may include the following different variations: when an AP MLD receives a trigger frame on the link from the non-AP MLD, wherein the trigger frame indicates that there is no other link switched to the Awake state, the AP MLD considers that the non-AP MLD intends to be in the Awake state on the link only; when an AP MLD receives a trigger frame on the link from the non-AP MLD, wherein the trigger frame indicates that there is no other link switched to the Awake state, the AP MLD initiates a DL frame transmission sequence on the link without further waiting for trigger frame reception on other link(s) from the non-AP MLD; when an AP MLD receives a trigger frame on the link from the non-AP MLD, wherein the trigger frame indicates that the non-AP MLD switched to the Awake state on other link(s), the AP MLD initiates a DL frame transmission sequence on multiple links; and when an AP MLD receives a trigger frame on the link from the non-AP MLD, wherein the trigger frame indicates that the non-AP MLD is not in the Awake state on other link(s), the AP MLD initiates a DL frame transmission sequence on the link only, without further waiting for trigger frame reception on other link(s) from the non-AP MLD. The indication of whether or not the non-AP MLD is in the Awake state on the other link(s) among a set of links is included in a MAC header part of the trigger frame. In one embodiment, the indication is a link bitmap, wherein each different bit in the link bitmap indicates whether or not a link corresponding to the bit is in the Awake state. In another embodiment, the size of the indication is one bit, and the indication is set to a state (e.g., "1") if the non-AP MLD is in the Awake state on the other link within an NSTR link pair. In a third solution, when an AP MLD receives a trigger frame on a link from a non-AP MLD that is an NSTR MLD, the AP MLD waits for a predetermined time before initiating a DL frame transmission sequence to the non-AP MLD so that the non-AP MLD can transmit another trigger frame on another link(s) before the DL frame transmission sequence. This third solution may include the following different variations: the predetermined time is defined by the standard; the predetermined time is a value that the non-AP MLD indicates to the AP MLD as the non-AP MLD's capability when the non-AP MLD associates with the AP MLD; the predetermined time is determined based on a negotiation between the non-AP MLD and the AP MLD; the predetermined time is a value that the AP MLD indicates as the AP MLD's capability; the predetermined time starts from the time that the AP MLD receives the trigger frame; the predetermined time starts from the time that the AP MLD sends back an acknowledgement frame to the trigger frame; the predetermined time starts from the time that the AP MLD receives the first trigger frame from the non-AP MLD when the non-AP MLD is in the Doze state on all NSTR links; and the trigger frame further includes an indication that the AP MLD can initiate the DL frame transmission sequence without waiting for the predetermined time.
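Combining the second and third solutions at the AP side, the following minimal sketch shows how an AP MLD might act on a received trigger frame: a per-link awake bitmap from the trigger frame's MAC header selects immediate single-link DL, multi-link DL, or a wait of up to the predetermined time T0 for further trigger frames. The bitmap layout, the "start now" flag, and the method names are illustrative assumptions.

    # Sketch: AP MLD reaction to a trigger frame (second and third solutions).
    def on_trigger_received(ap_mld, link_id, awake_bitmap, start_now, t0):
        awake_links = [i for i in range(ap_mld.num_links) if awake_bitmap & (1 << i)]
        if start_now or awake_links == [link_id]:
            # Only this link is awake (or waiting is waived): start DL now.
            ap_mld.start_dl_sequence([link_id])
        elif len(awake_links) > 1:
            # The non-AP MLD is awake on several links: DL on multiple links.
            ap_mld.start_dl_sequence(awake_links)
        else:
            # Unclear intent: wait up to T0 for trigger frames on other links.
            ap_mld.wait_for_triggers(timeout=t0)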
FIGS. 6 and 7 illustrate two different operational examples of the third solution of the AP MLD waiting a predetermined period of time. In FIGS. 6 and 7, similar numbers are used to describe similar elements as those found in FIGS. 3, 4, and 5. A non-AP MLD (STA1 on link1 610 or 710 and STA2 on link2 640 or 740) is an NSTR MLD on link1 610 or 710 and link2 640 or 740. The AP MLD (AP1 on link1 610 or 710 and AP2 on link2 640 or 740) indicates in a Beacon frame 611 or 711 on link1 610 or 710 that the AP MLD has a buffered BU for the non-AP MLD. In FIG. 6, both STA1 and STA2 send trigger frames 620 and 642 during the predetermined time (T0). Each link then goes through the ACK, RTS, CTS, DATA, and BA frame sequence as described with respect to FIGS. 3 and 4. As a result, DL transmission occurs on both link1 610 and link2 640. In FIG. 7, STA1 indicates that the AP MLD can initiate DL Tx without waiting for T0 (a "Start now" indication). Thus, DL transmission happens on link1 710 only. In a fourth solution, when a link of a non-AP MLD that is an NSTR MLD becomes blind due to the non-AP MLD's transmission on another link, the non-AP MLD starts a first timer at the end of the non-AP MLD's transmission on the other link, and the non-AP MLD can transmit a trigger frame while the first timer is running. This fourth solution may include the following different variations: if, during a transmission of a STA (STA1) of a non-STR non-AP MLD, another STA (STA2) of the same MLD cannot detect its medium state when required (due to STA1's UL transmission interference), STA2 shall start a MediumSyncDelay timer at the end of STA1's transmission, unless STA2 ended a transmission at the same time; the MediumSyncDelay timer expires after a duration value that is either assigned by the AP or specified in the standard, or when at least one of the following events happens, whichever occurs first: any received PPDU with a valid MPDU, or a received PPDU with a valid TxOP_duration; while the MediumSyncDelay timer is running, the STA is only allowed to attempt to initiate up to a number of transmit opportunities (TxOPs) assigned by the AP (at least 1), or a trigger frame transmission, and shall attempt to initiate that TxOP with the transmission of an RTS frame using regular EDCA backoff using baseline CCA but a TBD ED threshold value; the TBD ED threshold value has a default value specified in the standard (e.g., -62 dBm) but can also be assigned by the AP MLD within a limited range such as between -82 dBm and -62 dBm; the format of the trigger frame that is allowed to be transmitted while the first timer is running is a QoS Null frame; the format of the trigger frame that is allowed to be transmitted while the first timer is running is a PS-Poll frame; the length of the trigger frame that is allowed to be transmitted while the first timer is running is less than or equal to a first value; in one embodiment, the first value is either assigned by the AP MLD or specified in the standard; in another embodiment, the length of the trigger frame is the size of the payload of the frame; and in another embodiment, the length of the trigger frame is the duration of a PPDU that carries the trigger frame; the non-AP MLD's transmission on the other link is a transmission of a trigger frame; while the first timer is running, the number of trigger frame transmissions is less than a second value; in one embodiment, the second value is either assigned by the AP MLD or specified in the standard; and when an AP MLD receives a trigger frame on the link from the non-AP MLD, the AP MLD waits for a predetermined time before initiating a DL frame transmission sequence to the non-AP MLD so that the non-AP MLD can transmit another trigger frame on the other link before the DL frame transmission sequence.
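A minimal sketch of the fourth solution's MediumSyncDelay bookkeeping at the blind STA follows; the timer duration, ED threshold range, and permitted trigger frame constraints mirror the variations above, while the class shape and names are illustrative assumptions.

    # Sketch: MediumSyncDelay state at the STA that became blind (fourth solution).
    class MediumSyncDelay:
        def __init__(self, duration_us, ed_threshold_dbm=-62, max_attempts=1):
            self.remaining_us = duration_us            # assigned by the AP or fixed by the spec
            self.ed_threshold_dbm = ed_threshold_dbm   # default -62 dBm; AP may set within [-82, -62]
            self.attempts_left = max_attempts          # TXOP/trigger attempts allowed (at least 1)

        def on_rx_event(self, event):
            # The timer also ends early on a valid MPDU or a valid TXOP_duration.
            if event in ("valid_mpdu", "valid_txop_duration"):
                self.remaining_us = 0

        def may_send_trigger(self, frame_len, first_value):
            # e.g. a QoS Null or PS-Poll frame no longer than the first value.
            return self.remaining_us > 0 and self.attempts_left > 0 and frame_len <= first_value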
FIG. 8 illustrates an operational example of the fourth solution of the trigger frame being allowed during MediumSyncDelay. In FIG. 8, similar numbers are used to describe similar elements as those found in FIGS. 3, 4, 5, 6, and 7. A non-AP MLD (STA1 on link1 810 and STA2 on link2 840) is an NSTR MLD on link1 810 and link2 840. The AP MLD (AP1 on link1 810 and AP2 on link2 840) indicates in a Beacon frame 811 on link1 810 that the AP MLD has a buffered BU for the non-AP MLD. After STA1 transmits a trigger frame Trg1 820, MediumSyncDelay (T0) starts on link2 840. Then STA2 transmits a trigger frame Trg2 842 on link2 840 while the T0 timer is running. Each link then goes through the ACK, RTS, CTS, DATA, and BA frame sequence as described with respect to FIGS. 3 and 4. As a result, DL transmission occurs on both link1 810 and link2 840. MLDs in eMLMR operation may face similar problems as in NSTR MLD operation. If an AP MLD has a buffered BU for a non-AP MLD that is in eMLMR operation on a set of links, a corresponding bit in a virtual bitmap of a TIM element in a Beacon frame on a link from the AP MLD is set to 1. When the non-AP MLD identifies that the corresponding bit is equal to 1 in the virtual bitmap of the TIM element, STAs on one or more links (within the set of links) are supposed to send a trigger frame to the serving AP(s) to indicate that the one or more STAs are awake and ready to receive the buffered BU. However, if the AP MLD transmits a frame that initiates a DL frame exchange to the non-AP MLD when the AP MLD receives a trigger frame from the non-AP MLD on one link among the set of links, before the non-AP MLD transmits the trigger frame on another link among the set of links, the AP MLD cannot choose the better link for DL transmission among the set of links. The same solutions discussed above may be applied to eMLMR operation as well. In the first solution, when a non-AP MLD operating in eMLMR mode intends to transmit a trigger frame on more than one link, the start times of the trigger frame transmissions on the more than one link are aligned. In the second solution, when a non-AP MLD operating in eMLMR mode transmits a trigger frame on a link, the trigger frame indicates whether or not the non-AP MLD is in the Awake state on the other link(s) among a set of links. In the third solution, when an AP MLD receives a trigger frame on a link from a non-AP MLD operating in eMLMR mode, the AP MLD waits for a predetermined time before initiating a DL frame transmission sequence to the non-AP MLD so that the non-AP MLD can transmit another trigger frame on another link(s) before the DL frame transmission sequence. In the fourth solution, when a link of a non-AP MLD operating in eMLMR mode becomes blind due to the non-AP MLD's transmission on another link, the non-AP MLD starts a first timer at the end of the non-AP MLD's transmission on the other link, and the non-AP MLD may transmit a trigger frame while the first timer is running. Also, further embodiments described in the solutions discussed above may be applied to eMLMR operation. During the standardization of the EHT WLAN protocol in IEEE 802.11be, the concept of a single radio (SR) MLD has been introduced. A multi radio (MR) MLD is an MLD that has more than one radio such that the MLD can transmit frames on more than one link at a time and receive frames on more than one link at a time. An SR non-AP MLD is an MLD that transmits or receives frames on a single link to another MLD at a time. An enhanced SR (ESR) non-AP MLD is an MLD that transmits or receives (data/management) frames to another MLD on one link, and listens/monitors on one or more links.
Because an SR MLD can only transmit and receive on one link at a time, i.e., active mode is possible on one link only, the current power save operation causes problems such as the following: if the SR MLD is in active mode on more than one link, and if an AP MLD has a frame to transmit to the SR MLD, the AP MLD does not know on which link the SR MLD can receive the frame. Therefore, embodiments will be described that define mechanisms so that the AP MLD may identify on which link the SR MLD is ready to receive a DL frame. Three solutions to address the issue that only one link is in the active mode will be described. In a first solution, if an SR MLD is in Active mode on one link, the SR MLD shall be in Power Save (PS) mode on the other link. Further variations of this solution may include the following: an SR MLD shall not be in Active mode on more than one link; if the SR MLD is in Active mode on one link, the state of the PS mode on the other link is the Doze state; if the SR MLD sends a frame with the Power Management (PM) subfield set to 0 and receives an acknowledgement frame for the frame on one link, the SR MLD shall be in PS mode on the other link and the state of the PS mode on the other link is the Doze state; if the SR MLD is not in the Doze state before sending the frame on the other link, the SR MLD's state is changed to the Doze state when the SR MLD receives the acknowledgement frame; if the SR MLD sends a frame with the Power Management (PM) subfield set to 0 on one link, the SR MLD shall be in PS mode on the other link and the state of the PS mode on the other link is the Doze state; and if the SR MLD is not in the Doze state before sending the frame on the other link, the SR MLD's state is changed to the Doze state when the SR MLD transmits the frame. When an SR MLD is in Active mode on one link but the SR MLD transmits a frame on the other link, it is not clear for an AP MLD on which link the SR MLD can receive DL frames. FIG. 9 illustrates an operation example of uplink (UL) TX. An SR MLD is in Active mode on link1 910 (STA1 in Active mode) and in the PS mode Doze state on link2 940 (STA2 in PS Doze mode). When the SR MLD monitors link1 910 for UL transmission at T0, the link is busy 911. The SR MLD switches its link to link2 940 (STA2 in Active mode and STA1 in PS Doze mode) and initiates UL transmission 941 on link2 940. AP2 sends a BA frame 942 in response to the UL transmission 941. After finishing the UL transmission 941, the SR MLD may stay on link2 940. However, the AP MLD still considers that the SR MLD is in the Active mode on link1 910 and in the PS mode Doze state on link2 940. Therefore, when there is a DL frame to transmit at T1, the AP MLD will transmit the DL frame on link1 910 while the SR MLD is active on link2 940.
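The first solution above reduces to a simple invariant: Active mode on at most one link, with the PS mode Doze state forced everywhere else. A minimal sketch, with illustrative state labels, is:

    # Sketch: enforce the SR MLD single-Active-link rule (first solution).
    def set_active_link(link_states, active_link):
        for link in link_states:
            link_states[link] = "ACTIVE" if link == active_link else "PS_DOZE"
        return link_states

    # Example: after a frame with the PM subfield set to 0 is acknowledged on
    # link2, the SR MLD is Active on link2 and in the Doze state on link1.
    states = set_active_link({"link1": "ACTIVE", "link2": "PS_DOZE"}, "link2")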
In a second solution for addressing UL TX, if an SR MLD has a successful UL transmission on one link, the SR MLD shall be in the PS Doze state on the other link. Further variations of this second solution include: a successful UL transmission implies that an acknowledgement frame is received from the serving AP MLD for the UL transmission; if the SR MLD initiates a UL TXOP, the SR MLD shall be in the PS Doze state on the other link; if the SR MLD transmits a frame that initiates a UL TXOP and receives an acknowledgement frame for the frame, the SR MLD shall be in the PS Doze state on the other link; if the SR MLD's UL TXOP on the link ends, the SR MLD shall be in the PS Doze state on the other link; and when the SR MLD transmits a frame that initiates a UL TXOP and receives an acknowledgement frame for the frame, if the SR MLD is not in the PS Doze state on the other link (such as Active mode, PS Awake state, etc.), the SR MLD switches to the PS Doze state on the other link. A third solution includes the following variations: if an SR MLD is in Active mode on a link, the SR MLD shall not transmit a UL frame on the other link; if an SR MLD is in Awake mode on a link, the SR MLD shall not transmit a UL frame on the other link; if an SR MLD is in Active mode on a link, the SR MLD shall transmit a UL frame on the link only; if an SR MLD is in Awake mode on a link, the SR MLD shall transmit a UL frame on the link only; if an SR MLD is in Active mode on a link and the SR MLD intends to switch an operating link to the other link, the SR MLD shall transition to PS mode on the link first before switching the operating link to the other link; and if an SR MLD is in Active mode on a link and the SR MLD intends to transmit a UL frame on the other link, the SR MLD shall transition to the PS mode on the link first before transmitting the UL frame on the other link. FIG. 10 illustrates another operation example of UL TX. An SR MLD is in Active mode on link1 1010 (STA1 in Active mode) and in the PS mode Doze state on link2 1040 (STA2 in PS Doze mode). When the SR MLD monitors link1 1010 for UL transmission at T0, the link is busy 1011. The SR MLD switches its link to link2 1040 (STA2 in Active mode and STA1 in PS Doze mode) and initiates UL transmission 1041 on link2 1040. AP2 sends a BA frame 1042 in response to the UL transmission 1041. When the serving AP MLD receives the UL frame 1041 on link2 1040, the AP MLD identifies that the SR MLD's operating link is switched to link2. When the AP MLD has DL frames 1043 to send at T1, the AP MLD transmits the DL frames 1043 on link2 1040. STA2 then responds with a BA frame 1044. FIG. 11 illustrates another issue that arises: transition during the SP. If an SR MLD is in PS mode on both links 1110 and 1140, then when the SR MLD receives a Beacon frame where a bit in a partial virtual bitmap of a TIM element that corresponds to the SR MLD is set to 1, the SR MLD sends a PS-Poll or a U-APSD Trigger frame 1112 on one link to retrieve DL frames; the serving AP MLD then responds with an ACK frame 1114, initiates a DL transmission 1116 to the SR MLD on the link, and the STA sends a BA frame 1118. However, before all the buffered frames are transmitted to the SR MLD (EOSP=0 and/or MD=1), if the SR MLD switches an operating link from the link to the other link, then, as the serving AP MLD does not know about the SR MLD's link switching, the serving AP MLD keeps transmitting the remaining frames 1120 on the link, which results in transmission failure.
In the current IEEE 802.11 standard, an unscheduled SP ends after the AP has attempted to transmit at least one BU using a delivery-enabled AC and destined for the STA, but no more than the number indicated in the Max SP Length field of the QoS Capability element of the STA's (Re)Association Request frame if the field has a nonzero value. However, if the SR MLD switches its link during the SP, it is not clear how to apply the Max SP Length information, as the transmission involves multiple links. A solution for addressing a transition between links during an SP will now be described. If an SR MLD intends to switch an operating link from a first link to a second link during an SP, the SR MLD transmits a first frame on the first link to a serving AP MLD to indicate that the SR MLD's operating link will change from the first link to the second link. Further variations of this solution include the following: the SP is an unscheduled service period for U-APSD operation; after receiving an acknowledgement to the first frame from the serving AP MLD, the SR MLD switches its operating link to the second link in a predetermined time; the serving AP transmits remaining DL frames on the second link a predetermined time after the transmission of the acknowledgement to the first frame to the SR MLD; the first frame is a U-APSD trigger frame; the first frame includes a link bitmap, wherein each bit in the link bitmap indicates the power save state of the SR MLD on the corresponding link, wherein a first bit corresponding to the first link is set to a value indicating a Doze state and a second bit corresponding to the second link is set to a value indicating an Awake state; the first frame includes a one-bit indication, wherein a first value of the one-bit indication indicates that the SR MLD switches its operating link to another link that is not the current operating link (the first link); the number of BUs delivered during the SP shall be no more than the number indicated in the Max SP Length field for the first link; the number of BUs delivered during the SP on the first link shall be no more than the number indicated in the Max SP Length field for the first link; the number of BUs delivered during the SP shall be no more than the minimum of the numbers indicated in the Max SP Length field for the first link and in the Max SP Length field for the second link; the number of BUs delivered during the SP shall be no more than the maximum of the numbers indicated in the Max SP Length field for the first link and in the Max SP Length field for the second link; and if an SR MLD transmits a U-APSD trigger frame on a link, the operating link of the SR MLD shall be that link until the end of a service period that the U-APSD trigger frame initiates.
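The Max SP Length variations above differ only in which link's budget bounds the number of BUs delivered in the SP; a minimal sketch, with an assumed policy argument to show the alternatives side by side, is:

    # Sketch: BU budget for an SP that spans a link switch (Max SP Length variations).
    def sp_bu_budget(max_sp_len_first, max_sp_len_second, policy="first_link"):
        if policy == "first_link":
            return max_sp_len_first
        if policy == "min":
            return min(max_sp_len_first, max_sp_len_second)
        return max(max_sp_len_first, max_sp_len_second)  # policy == "max"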
FIG. 12 illustrates an operation example of a transition during an SP. The SR MLD is in PS mode on both links 1210 and 1240. When the SR MLD receives a Beacon frame where a bit in a partial virtual bitmap of a TIM element that corresponds to the SR MLD is set to 1, the SR MLD sends a U-APSD Trigger frame Trg1 1212 on link1 1210 to retrieve DL frames 1216. The serving AP MLD initiates a DL transmission Data1 1216 to the SR MLD on link1 1210. The AP STA responds with a BA frame 1218. Before all the buffered frames are transmitted to the SR MLD (EOSP=0 and/or MD=1), the SR MLD sends another Trigger frame Trg2 1222 to indicate switching of the operating link from link1 1210 to link2 1240. After receiving an Ack frame 1224 for Trg2 1222, the SR MLD switches its operating link to link2 1240 during the T0 period. At time T0 after transmitting the Ack frame, the AP MLD continues DL data frame transmission on link2 by transmitting DATA2 1242. The MLD STA responds with a BA frame 1244. The system and method described herein may be carried out using specific hardware to perform the actions, or software running on a processor may implement the embodiments. The processor may be connected to memory and storage, where the software instructions are stored in the storage. The processor may be any general purpose processor, a graphics processor, a signal processor, or any other type of specialized processor. Any combination of specific software running on a processor to implement the embodiments of the invention constitutes a specific dedicated machine. As used herein, the term "non-transitory machine-readable storage medium" will be understood to exclude a transitory propagation signal but to include all forms of volatile and non-volatile memory. It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Although the various exemplary embodiments have been described in detail with particular reference to certain exemplary aspects thereof, it should be understood that the invention is capable of other embodiments and its details are capable of modifications in various obvious respects. As is readily apparent to those skilled in the art, variations and modifications can be effected while remaining within the spirit and scope of the invention. Accordingly, the foregoing disclosure, description, and figures are for illustrative purposes only and do not in any way limit the invention, which is defined only by the claims.
52,520
11943666
DETAILED DESCRIPTION OF EMBODIMENTS FIG. 1 shows an apparatus 100 for a mobile network system according to an embodiment of the present disclosure, in particular for a 4G system or a 5G system. The apparatus 100 is configured to support multiple QoS levels 101 for a session 102 related to an application 103 or service. The different QoS levels 101 are here exemplarily labelled with a, b, c and d, respectively. The fact that the apparatus 100 supports these multiple QoS levels 101 means preferably that the apparatus 100 is able to switch from one (current) QoS level 101 for the session 102 to another QoS level 101 for the session 102, especially in case of a modification of the session 102. This advantageously allows the apparatus 100 to continue the application 103 and/or service with a different QoS level 101 than before the session modification, and may avoid a termination of the session 102, and thus of the application 103 or service. FIG. 2 shows an apparatus 100 according to an embodiment of the present disclosure, which builds on the apparatus 100 shown in FIG. 1. Accordingly, the apparatus 100 shown in FIG. 2 also supports multiple QoS levels 101 for a session 102. The same elements in FIG. 1 and FIG. 2 are accordingly labelled with the same reference signs and function likewise. The apparatus 100 shown in FIG. 2 includes, in particular, an AF 200 and a PCF 201. The AF 200 is configured to request (e.g. by a request message 202) the PCF 201 to configure the multiple QoS levels 101 for the session 102. This advantageously provides the AF 200 with a QoS influencing capability. The AF 200 may also be provided with a QoS monitoring capability. To this end, the PCF 201, and/or any other NF, and/or an AN of the apparatus 100 is preferably configured (e.g. by request of the AF 200) to notify the AF 200 of an intended and/or completed modification of the session 102. FIG. 3 shows a method 300 according to an embodiment of the present disclosure for a mobile network system, particularly for a 4G system or 5G system. The method 300 may be carried out by the apparatus 100 shown in FIG. 1 or in FIG. 2. In particular, the method 300 includes a step 301 of configuring, by an AF 200 in the mobile network system or in the apparatus 100, a PCF 201 in the mobile network system or in the apparatus 100 with multiple QoS levels 101 for a session 102 related to an application 103 or service. The QoS influencing capability and the QoS monitoring capability of the AF 200 may have to be activated. For instance, the PCF 201 of the apparatus 100 shown in FIG. 2 may be configured to provide a QoS level service operation, and the AF 200 may be configured to invoke this QoS level service operation, in order to subsequently request 202 the PCF 201 to configure the multiple QoS levels 101. Likewise, the AF 200 may also invoke the QoS level service operation, in order to request that it is subsequently notified of any intended and/or completed modification of the session 102. The activation of these new QoS capabilities is now described in more detail. The AF 200 can specifically activate the QoS influencing capability by invoking an "Npcf_QoS_Influence_Request/Response" service operation provided by the PCF 201. This service operation allows the AF 200 to configure the PCF 201 by preferably indicating one or more of: DNN and/or Network Slice identifiers (S-NSSAIs) and/or AF-Service-Identifiers for which the QoS influencing capability (i.e. the multiple QoS levels 101) shall apply; UE(s) identifiers for which the QoS influencing capability (i.e. the multiple QoS levels 101) shall apply; and a multiple QoS level mode of operation.
Two different QoS level modes of operation are illustrated with respect to FIG. 4. In particular, FIGS. 4(A) and (B) show different sessions 102 for an application 103 or service, both supporting multiple QoS levels 101. The first QoS level operation mode, illustrated with respect to FIG. 4(A), is a "Multi-QoS-Profile Flow Mode". In this mode of operation, the session 102 supports the multiple QoS levels 101 through multi-QoS-profile flows 400. A QoS flow 400 of the session 102 shown in FIG. 4(A) is associated with multiple QoS profiles for implementing the multiple QoS levels 101 (indicated as a, b, c and d, respectively). In particular, each of multiple QoS flows 400 of the session 102 could be associated with multiple QoS profiles, and the QoS profile of a QoS flow 400 may change according to network conditions (e.g. network load, radio link capacity etc.) and/or in case of a session modification. The second QoS operation mode, illustrated with respect to FIG. 4(B), is a "Multi-Flow Session Mode". In this mode of operation, the session 102 supports the multiple QoS levels 101 through multiple single-QoS-profile flows 400. A group of QoS flows 400 is established for the session 102, and each QoS flow 400 of the group is associated with one (different) QoS profile for implementing a determined QoS level 101 (the different QoS levels 101 are again indicated as a, b, c and d, respectively). A further item that the AF 200 may indicate in the request is the set of QoS profiles associated to a QoS flow 400 in case of the "Multi-QoS-Profile Flow Mode", or the group of QoS flows 400 and the corresponding QoS profile per QoS flow 400 in case of the "Multi-Flow Session Mode". Further, the AF 200 can specifically activate the QoS monitoring capability by invoking an "Nxxx_QoS_Monitoring_Request/Response" service operation (wherein "xxx" can be AN, AMF, SMF, UDM, PCF), provided by the AN, AMF, SMF, UDM and/or PCF. The QoS monitoring capability can be activated on each single NF (i.e. AN, AMF, SMF, UDM and/or PCF). The service operations allow the AF 200 to configure the AN, AMF, SMF, UDM, and/or PCF 201 by indicating one or more of: DNN and/or Network Slice identifiers (S-NSSAIs) and/or AF-Service-Identifiers for which the QoS monitoring capability shall apply (i.e. the notification of the AF 200 of an intended and/or completed modification of the session 102, in particular comprising information on the QoS level 101 change); UE(s) identifiers for which the QoS monitoring capability shall apply (i.e. the notification of the AF 200 of an intended and/or completed modification of the session 102, in particular comprising information on the QoS level 101 change); a QoS monitoring timing (e.g. early, late or both, as described below in more detail with respect to the operation of the QoS monitoring capability), the QoS monitoring timing being a QoS monitoring parameter; and a QoS monitoring mode (e.g. direct or indirect, as described below in more detail with respect to the operation of the QoS monitoring capability), the QoS monitoring mode being a QoS monitoring parameter. The operations of the new QoS capabilities are now described in more detail.
After the QoS influencing capability has been activated, the PCF 201 is configured to support a multi-QoS-level session 102 as indicated in the Npcf_QoS_Influence_Request. The session 102 relating to DNN and/or Network Slice identifiers (S-NSSAIs) and/or AF-Service-Identifiers and/or UE(s) indicated by the Nxxx_QoS_Influence_Request shall support the multiple QoS levels 101 as indicated in the Nxxx_QoS_Influence_Request. In case of the "Multi-QoS-Profile Flow Mode" (see FIG. 4(A)), whenever a session modification occurs, the at least one QoS flow 400 of the session 102 can switch its QoS profile according to the multiple QoS levels 101 indicated in the Nxxx_QoS_Influence_Request. In other words, the apparatus 100 can switch from a current QoS profile of the at least one QoS flow 400 of the session 102 to another QoS profile of the at least one QoS flow 400 of the session 102 in case of the modification of the session 102. In case of the "Multi-Flow Session Mode" (see FIG. 4(B)), for each requested QoS flow 400 as indicated in the Nxxx_QoS_Influence_Request, the group of QoS flows 400 is established at session establishment. Radio resource reservation is preferably applied only to the requested QoS flow 400. In case of a session modification, if the modification relates to the active QoS flow 400, no radio resource reservation is kept for the active QoS flow 400, and resource reservation is applied to another, previously inactive QoS flow 400 of the group. In other words, the apparatus 100 can switch from a currently active QoS flow 400 of the session 102 associated with a first QoS profile to a currently inactive QoS flow 400 of the session 102 associated with a second QoS profile in case of a modification of the session 102, and/or switch from a currently active QoS profile to a currently inactive QoS profile.
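The difference between the two modes can be made concrete with a minimal sketch of an assumed session data layout (this is not 3GPP message syntax): in the "Multi-QoS-Profile Flow Mode" a flow keeps several profiles and switches among them, while in the "Multi-Flow Session Mode" each flow has one fixed profile and the active flow is switched instead.

    # Sketch: the two multi-QoS-level modes of operation (assumed layout).
    session = {
        "mode": "multi_qos_profile_flow",               # or "multi_flow_session"
        "flows": [{"flow_id": 1,
                   "profiles": ["a", "b", "c", "d"],    # QoS levels 101
                   "active_profile": "a"}],
    }

    def on_session_modification(session):
        if session["mode"] == "multi_qos_profile_flow":
            # Switch the affected flow to the next configured QoS profile.
            flow = session["flows"][0]
            profiles = flow["profiles"]
            idx = profiles.index(flow["active_profile"])
            flow["active_profile"] = profiles[min(idx + 1, len(profiles) - 1)]
        else:
            # "multi_flow_session": release the reservation of the active flow
            # and reserve resources for an inactive flow of the group instead.
            pass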
After the QoS monitoring capability has been activated on any NF supporting it (AN, AMF, SMF, UDM, PCF 201), the NF shall notify the AF 200 of QoS changes relating to the session 102 as indicated in the Nxxx_QoS_Monitoring_Request. The NF shall notify any QoS changes for the PDU Session 102 relating to DNN and/or Network Slice identifiers (S-NSSAIs) and/or AF-Service-Identifiers and/or UE(s) indicated by the Nxxx_QoS_Monitoring_Request. If the QoS monitoring timing in the Nxxx_QoS_Monitoring_Request is set to "early", the NF shall notify the AF 200 on the occurrence of conditions for a session modification. If the QoS monitoring timing in the Nxxx_QoS_Monitoring_Request is set to "late", the NF shall notify the AF 200 on the completion of the session modification following the occurrence of conditions for a session modification. If the QoS monitoring timing in the Nxxx_QoS_Monitoring_Request is set to "both", the NF shall notify the AF 200 both on the occurrence of conditions for a session modification and on the completion of the session modification following the occurrence of those conditions. For AN, AMF, SMF, UDM and PCF 201, if the QoS monitoring mode in the Nxxx_QoS_Monitoring_Request is set to "direct", the NF shall notify the AF 200 of the QoS change directly via the Nxxx_EventExposure_Notify service operation. For AN, AMF, SMF and UDM, if the QoS monitoring mode in the Nxxx_QoS_Monitoring_Request is set to "indirect", the NF shall notify the PCF 201 of the QoS change via the Nxxx_EventExposure_Notify service operation, and the PCF 201 shall then notify the AF 200 of the QoS change via the Npcf_EventExposure_Notify service operation. In the following, an exemplary and simplified implementation of the solution of the present disclosure is illustrated. The implementation is in particular conceived as an incremental enhancement of the 5GS. The implementation allows the AF 200 of the apparatus 100 according to an embodiment of the present disclosure (e.g. as in FIG. 1 or FIG. 2) the following: to configure the PCF 201 on the preferred QoS level downgrades (preferably per QoS flow 400) to take place when the target QoS for an application 103 or service cannot be fulfilled during the lifetime of the corresponding QoS flow(s) 400, via the Npcf_QoS_Influence_Request message, where the Npcf_QoS_Influence_Request message includes the preferred QoS level downgrades for QoS attributes and values, e.g. 5QI (per flow), ARP (per flow), RQA (per flow), GFBR (UL and DL) (per flow), MFBR (UL/DL) (per flow), Session-AMBR (per PDU Session 102); and, whenever the conditions for a session (here PDU session 102) modification occur, to be notified of the PDU Session Modification initiation either directly by the NFs at which such conditions are detected (the NF can be either the AMF, or the UDM, or the SMF, or the AN) via the Nxxx_EventExposure_Notify message (xxx can be AN, AMF, UDM, SMF), or indirectly via the PCF Npcf_EventExposure_Notify message after the PCF 201 has been involved in the PDU Session Modification. FIG. 5 illustrates for this exemplary implementation an AF Request/Response procedure for activating the QoS monitoring capability and the QoS influencing capability, respectively. Step 1.a: The AF 200 invokes an Npcf_QoS_Influence_Request service operation. The request contains: either a DNN and possibly slicing information (S-NSSAI) or an AF-Service-Identifier; information of the UE(s) whose QoS needs to be influenced; and a preferred QoS level downgrade configuration message (message details shown later).
Step 2.a: The AF 200 sends its request to the PCF 201 directly or via the Network Exposure Function (NEF) 504. Step 3.a: The PCF 201 stores the preferred QoS level downgrade rules for the DNN, S-NSSAI, AF-Service-Identifier, and UE(s). Step 4.a: The PCF 201 invokes the Npcf_QoS_Influence_Response service operation. Step 1.b: The AF 200 invokes an Npcf_QoS_Monitoring_Request service operation. The request contains: either a DNN and possibly slicing information (S-NSSAI) or an AF-Service-Identifier; information of the UE(s) whose QoS needs to be monitored; the list of triggers for PDU Session Modification to be early monitored; and the PDU Session Modification triggers notification mode (QoS monitoring mode), i.e. direct notification or indirect notification. Step 2.b: The AF 200 sends its request 202 to at least one of the following NFs: AN 500, AMF 501, SMF 502, UDM 503, PCF 201 (depending on the list of triggers for PDU Session Modification to be early monitored), directly or via the NEF 504. Step 3.b: At least one of the following NFs: AN 500, AMF 501, SMF 502, UDM 503, PCF 201 (depending on the list of triggers for PDU Session Modification to be early monitored) invokes the Nxxx_QoS_Monitoring_Response service operation. FIG. 6 shows the PDU Session Modification procedure as per TS 23.502, highlighting in step 0 the activation of the QoS influencing capability, by which the AF 200 is able to request, in request 202, the PCF 201 to configure the different QoS levels 101 and, preferably, the preferred QoS level downgrade(s). The PDU Session Modification procedure will be influenced according to a Preferred QoS Downgrade Configuration message received by the PCF 201. This Preferred QoS Downgrade Configuration message preferably includes a "QoS Downgrade Mode parameter" and a "QoS Downgrade Rules table". The QoS Downgrade Mode parameter indicates how the QoS downgrade rules shall apply in case of multiple-flow PDU Sessions 102. The QoS Downgrade Mode can be set to one of "per flow independent" or "intertwined flows". If the QoS Downgrade Mode parameter is set to "per flow independent", the QoS downgrade rules apply independently on each QoS flow 400 according to the QoS Downgrade Rules table. In other words, the preferred QoS level downgrade is attempted QoS flow 400 by QoS flow 400, independently, according to the QoS Downgrade Rules table. If the QoS Downgrade Mode parameter is set to "intertwined flows", the QoS level downgrade rules apply jointly on all QoS flows 400 according to the QoS Downgrade Rules table. In other words, the preferred QoS level downgrade is attempted considering jointly all QoS flows 400 of the PDU Session 102, according to the QoS Downgrade Rules table and the relative values of other QoS parameters, including e.g. Priority Level, Packet Delay Budget, Packet Error Rate, Default Averaging Window, ARP. An exemplary QoS Downgrade Rules table is shown below and indicates the preferred QoS downgrade rule per QoS flow 400. In particular, for each 5QI value, the table indicates the preferred fallback 5QI. For each GBR 5QI value, the table also indicates the preferred GBR scaling factor (i.e. the preferred GBR value reduction after the PDU Session Modification completes).

    Current 5QI    Preferred Fallback 5QI    GBR Flow Scaling Factor
    1              PD1                       SF1
    2              PD2                       SF2
    3              PD3                       SF3
    4              PD4                       SF4
    65             PD65                      SF65
    66             PD66                      SF66
    75             PD75                      SF75
    5              PD5                       n.a.
    6              PD6                       n.a.
    7              PD7                       n.a.
    8              PD8                       n.a.
    9              PD9                       n.a.
    69             PD69                      n.a.
    70             PD70                      n.a.
    79             PD79                      n.a.
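Applied in the "per flow independent" mode, the table amounts to a per-5QI lookup. The following minimal sketch assumes a dictionary form of the table, with the operator-chosen PD*/SF* placeholders bound to concrete values; names and values are illustrative only.

    # Sketch: applying the QoS Downgrade Rules table per flow.
    DOWNGRADE_RULES = {
        # current 5QI: (preferred fallback 5QI, GBR scaling factor or None)
        1: (2, 0.5),    # GBR 5QI 1 -> PD1 = 2, SF1 = 0.5 (example values)
        9: (8, None),   # non-GBR 5QI 9 -> PD9 = 8, no scaling factor
    }

    def downgrade_flow(flow):
        fallback_5qi, gbr_scaling = DOWNGRADE_RULES[flow["5qi"]]
        flow["5qi"] = fallback_5qi
        if gbr_scaling is not None:     # GBR flow: reduce the GBR value as well
            flow["gfbr"] = flow["gfbr"] * gbr_scaling
        return flow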
The present disclosure has been described in conjunction with various embodiments as examples as well as implementations. However, other variations can be understood and effected by those persons skilled in the art and practicing the claimed invention, from a study of the drawings, this disclosure and the independent claims. In the claims as well as in the description, the word "comprising" does not exclude other elements or steps and the indefinite article "a" or "an" does not exclude a plurality. A single element or other unit may fulfill the functions of several entities or items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used in an advantageous implementation.
16,031
11943667
DETAILED DESCRIPTION For the purpose of explanation, details are set forth in the following description in order to provide a thorough understanding of the embodiments disclosed. It is apparent, however, to those skilled in the art that the embodiments may be implemented without these specific details or with an equivalent arrangement. FIG. 1 is a diagram showing an existing wireless communication system into which MU-MIMO is applicable. The wireless communication system may be a time division duplexing (TDD) system in LTE 4G or NR 5G. As shown, the wireless communication system comprises a base station and a plurality of user equipments (UEs). In the base station, the high layer (HL) conveys data and signaling with the physical layer (PHY). The HL refers to open system interconnection (OSI) layers above layer 2 and the PHY refers to OSI layer 1. The data buffer stores uplink (UL) and downlink (DL) data for UEs. The radio resource scheduler assigns radio resources to UEs. According to the allocation results, the data buffer prepares data for the PHY, and the PHY transmits data to or receives data from UEs. The PHY takes care of channel measurement and reports to the scheduler the measurement results, which are input arguments to scheduling algorithms. A UE shall follow the configuration and scheduling for radio resources from the base station. In practice, channel knowledge may not be perfect. Channel estimation error is introduced by interference and/or noise. Another impact is the time variation of channels. During the period between channel measurement and beamformed transmission, channels are still varying. If the channel varies much, the beamforming gain becomes negligible. FIG. 2 illustrates an existing implementation for dynamic beamforming. As shown, sounding reference signal (SRS) signals from different UEs are received by a base station. Then, channels for different UEs can be individually estimated. According to the channel estimates, the beamforming weights for UL and DL may be calculated respectively. Then, digital beamforming is applied for UL and DL. The reference signals used in FIG. 2 may raise potential limitations or performance loss for beamforming. On the other hand, channel measurement can be performed by measuring the physical uplink shared channel (PUSCH). The issue of channel time variance remains for downlink beamforming, since the downlink beams are calculated or selected based on the measurement over the uplink PUSCH. There may exist two cases. One is that only the downlink physical downlink shared channel (PDSCH) is scheduled for a certain user during a period within which no PUSCH occurs for this user. The other is that PUSCH is scheduled but different PRBs are used for PDSCH. In practical networks, one cell may accommodate hundreds of UEs. Voice services or text services are provided for many UEs. Due to the limitations on channel measurement mentioned above, SRS resources become precious and thus are allocated to UEs with big data buffers to increase cell throughput. It is not worthwhile to perform dynamic beamforming for a UE with low traffic demand. However, due to the limitations of the criteria for assigning SRS resources, there are not so many UEs with dense traffic demand. Thus, the gain of MU massive MIMO is very likely diluted.
For example, for massive MIMO downlink, suppose that: 1) the system allows for assigning the same radio time and frequency resource to up to 8 UEs; 2) there are 200 UEs and 8 of them have dense traffic and are assigned SRS resources, while the other UEs demand only small package transmission from time to time; 3) due to some hardware (HW) limitations, one base station can schedule up to 20 users per transmission opportunity per cell; 4) round-robin scheduling is adopted. Then all UEs may be scheduled once every (around) 10 times. Among them, an 8-layer gain may be achieved once by scheduling that MU group. The massive MIMO gain can be roughly estimated as: 8 (number of layers)/10 (times) + 1/10*9 = 1.7, which is much lower than the maximum gain of 8. Note that although this example is quite rough, it is used only to illustrate the average massive MIMO gain over time. The issue above is induced by the combination of SRS and UE traffic load. Although there have been some solutions which can resolve this issue, those solutions require large memory to store channel measurements and induce high complexity in system design. Moreover, if a UE demands only DL services but no UL services, base stations have no chance to measure the channel effectively without some particular design. The present disclosure proposes an improved solution for MU-MIMO. The basic idea is to propose a learning-based scheduling algorithm to realize the massive MIMO gain. The solution may be applied to a wireless communication system including a UE and a base station. The UE can communicate through a radio access communication link with the base station. The base station can provide radio access communication links to UEs that are within its communication service cell. The base station may be, for example, an evolved node B (eNB) in LTE or a gNB in NR. Note that the communications may be performed between the UE and the base station according to any suitable communication standards and protocols. The UE may also be referred to as, for example, a terminal device, access terminal, mobile station, mobile unit, subscriber station, or the like. It may refer to any end device that can access a wireless communication network and receive services therefrom. By way of example and not limitation, the UE may include a portable computer, an image capture terminal device such as a digital camera, a gaming terminal device, a music storage and playback appliance, a mobile phone, a cellular phone, a smart phone, a tablet, a wearable device, a personal digital assistant (PDA), or the like. In an Internet of things (IoT) scenario, a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network equipment. In this case, the UE may be a machine-to-machine (M2M) device, which may, in a 3rd generation partnership project (3GPP) context, be referred to as a machine-type communication (MTC) device. Particular examples of such machines or devices may include sensors, metering devices such as power meters, industrial machineries, bikes, vehicles, or home or personal appliances, e.g. refrigerators, televisions, personal wearables such as watches, and so on. Hereinafter, the solution will be described in detail with reference to FIGS. 3-9. FIG. 3 is a flowchart illustrating a method implemented at a base station according to an embodiment of the disclosure. At block 302, historical scheduling characteristic information is obtained.
The historical scheduling characteristic information may refer to the information that indicates the characteristic(s) or merit(s) of historical scheduling events occurring at the base station. For example, block 302 may be implemented as block 402 of FIG. 4. That is, the historical scheduling characteristic information may be determined based on historical information related to historical scheduling events at block 402. The historical information related to a historical scheduling event may include historical resource demand information and historical scheduling result information. The historical resource demand information indicates the resource demand in the historical scheduling event. For example, the historical resource demand information may include any one of or a combination of: the number of active users existing in a serving area of the base station, the buffer status of the active users, or the like. The historical scheduling result information indicates the scheduling result(s) in the historical scheduling event. For example, the historical scheduling result information may include any one of or a combination of: the number of users scheduled in the historical scheduling event, the number of MU-MIMO users among the scheduled users, the number of non-MU-MIMO users among the scheduled users, the number of CCE candidates assigned in the historical scheduling event, the number of PRBs allocated in the historical scheduling event, the number of PRBs allocated to MU-MIMO users in the historical scheduling event, the number of PRBs allocated to non-MU-MIMO users in the historical scheduling event, an instantaneous throughput achieved in the historical scheduling event, or the like. The historical information may be stored in a storage unit (e.g. a dedicated memory) of the base station. Optionally, the time duration for data effectiveness may be adjusted with respect to the available storage space and traffic scenarios. When there is no historical information for a base station (e.g. the base station is turned on for the first time), the base station may allocate resources by using various conventional scheduling algorithms. When sufficient historical information has become available, block 402 may be performed. The historical scheduling characteristic information determined at block 402 may vary depending on the requirements of block 304 or 403. The historical scheduling characteristic information may be determined by using various data processing techniques such as statistical processing, pattern recognition, machine learning, or the like. The historical scheduling characteristic information may be determined as any one of or a combination of: traffic model information, bottleneck information, statistical information about the historical scheduling result information, or the like. The traffic model information may include any one of or a combination of: the number of users demanding dense traffic, the number of users with voice services, the number of users with small package services, the number of users with middle package services, or the like. For illustrative purposes, an exemplary traffic model which may be obtained at block 302 is shown in FIG. 5. It is a typical traffic model from China Mobile Communications Group Co., Ltd (CMCC). For example, the traffic model information may be determined by performing statistical processing on the historical resource demand information over a predetermined period of time (e.g. a sliding time window).
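As a minimal sketch of this statistical processing (illustrative thresholds and record format, not part of the method's definition), traffic model counts can be derived from per-user buffer status records kept in a sliding window:

    # Sketch: deriving traffic model information from a sliding window.
    from collections import deque

    window = deque(maxlen=1000)  # historical per-user buffer status records (bytes)

    def traffic_model(window, dense_bytes=100_000, small_bytes=500):
        model = {"dense": 0, "small": 0, "middle": 0}
        for buffered_bytes in window:
            if buffered_bytes >= dense_bytes:
                model["dense"] += 1      # users demanding dense traffic
            elif buffered_bytes <= small_bytes:
                model["small"] += 1      # users with small package services
            else:
                model["middle"] += 1     # users with middle package services
        return model

Classifying voice-service users would additionally require service type information per record, which is omitted here for brevity.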
The bottleneck information indicates the bottleneck(s) or constraint(s) for scheduling users at the base station. For example, the bottleneck information may include any one of or a combination of: the maximum number of users capable of being scheduled by the base station (e.g. due to HW limitation), the maximum number of CCE candidates available for the base station, the maximum number of PRBs available for the base station, or the like. The statistical information about the historical scheduling result information may include any one of or a combination of: the average number of scheduled users, the average number of assigned CCE candidates, the average number of allocated PRBs, the average number of PRBs allocated to MU-MIMO users, the average number of PRBs allocated to non-MU-MIMO users, or the like. The above average numbers may be calculated over a predetermined period of time (e.g. a sliding time window). Note that the above examples for the statistical information are merely exemplary examples for illustration purposes and any other suitable statistical processing may be used instead. Referring back to FIG. 3, at block 304, resources are reserved for MU-MIMO users based at least on the historical scheduling characteristic information. In this way, unfairness is introduced among UEs to enhance cell throughput without deteriorating legacy key performance indicators (KPIs). For example, block 304 may be implemented as blocks 304-1 to 304-4 of FIG. 6. At block 304-1, the number of the MU-MIMO users for which the resources are reserved is determined. As an exemplary example, the number of the MU-MIMO users may be determined as a first predetermined margin multiplied by a difference between the maximum number of users capable of being scheduled and the average number of scheduled users. This may be represented as: nrOfUsersSchedRev=floor((nrOfUserCapSched−avgNrOfUserSched)*margin1), where nrOfUsersSchedRev denotes the number of the MU-MIMO users, nrOfUserCapSched denotes the maximum number of users capable of being scheduled, avgNrOfUserSched denotes the average number of scheduled users, and margin1 denotes the first predetermined margin, which may be a safe margin value belonging to (0, 1). At block 304-2, the number of CCE candidates reserved for the MU-MIMO users is determined. As an exemplary example, the number of the reserved CCE candidates may be determined as a second predetermined margin multiplied by a difference between the maximum number of CCE candidates and the average number of assigned CCE candidates. This may be represented as: nrOfCceCandiRev=floor((nrOfMaxCceCandi−avgNrOfCceAssined)*margin2), where nrOfCceCandiRev denotes the number of the reserved CCE candidates, nrOfMaxCceCandi denotes the maximum number of CCE candidates, avgNrOfCceAssined denotes the average number of assigned CCE candidates, and margin2 denotes the second predetermined margin, which may be a safe margin value belonging to (0, 1). At block 304-3, the number of PRBs reserved for the MU-MIMO users is determined. As an exemplary example, the number of the reserved PRBs may be determined as a third predetermined margin multiplied by a difference between the maximum number of PRBs and the average number of allocated PRBs.
This may be represented as: nrOfPrbRev=floor((nrOfAvailPrbs−avgNrOfPrbUsed)*margin3), where nrOfPrbRev denotes the number of the reserved PRBs, nrOfAvailPrbs denotes the maximum number of PRBs, avgNrOfPrbUsed denotes the average number of allocated PRBs, and margin3 denotes the third predetermined margin which may be a safe margin value belonging to (0, 1). In the above examples for blocks304-1to304-3, only the bottleneck information and statistical information are used to determine the reserved resources. However, the present disclosure is not limited to these examples and any suitable historical scheduling characteristic information mentioned above may be used to determine the reserved resources depending on the specific application scenario. At block304-4, CCE resources and PRBs are reserved based on the determined number of CCE candidates and the determined number of PRBs respectively. As an example, the PRBs within a fixed range corresponding to the determined number of PRBs may be reserved. For instance, if the determined number of PRBs is 40, then PRB 0 to PRB 39 may be reserved. Similarly, the CCE resources within a fixed range corresponding to the determined number of CCE candidates may be reserved. In this way, the MU and non-MU users can be multiplexed in a fixed frequency division instead of a random manner. The determination of the numbers of users, scheduling resources and radio resources at blocks304-1to304-3may be performed once for every time window and the resource reservation at block304-4may be performed per scheduling opportunity. Optionally, the resources may be reserved for the MU-MIMO users based further on current resource demand information, as shown in block404ofFIG.4. That is, the resources may be reserved for the MU-MIMO users based on the historical scheduling characteristic information and current resource demand information. For example, the number of the CCE candidates/PRBs to be used according to the current resource demand information may be compared with the average number of assigned CCE candidates/allocated PRBs which is determined based solely on the historical scheduling characteristic information. If the difference between the two is greater than a predetermined difference value, then the average number of assigned CCE candidates/allocated PRBs may be modified based on the number of the CCE candidates/PRBs to be used. For example, a weighted sum of the average number of assigned CCE candidates/allocated PRBs and the number of the CCE candidates/PRBs to be used may be used as the modified value. In this case, the determination of the numbers of users, scheduling resources and radio resources at blocks304-1to304-3may also be performed if the above difference between the two is greater than the predetermined difference value. Optionally, the base station may determine whether to use MU-MIMO based on at least one of the historical scheduling characteristic information and current resource demand information, as shown in block403ofFIG.4. The resources may be reserved for the MU-MIMO users when it is determined to use MU-MIMO, as shown in block404ofFIG.4.
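Before turning to examples of that determination below, the three formulas above can be summarized in a minimal Python sketch of blocks304-1to304-4; the function and parameter names mirror the notation above, while the margin defaults and the fixed PRB range starting at PRB 0 are illustrative assumptions rather than values fixed by the disclosure.

    import math

    def reserve_mu_mimo_resources(nr_of_user_cap_sched, avg_nr_of_user_sched,
                                  nr_of_max_cce_candi, avg_nr_of_cce_assigned,
                                  nr_of_avail_prbs, avg_nr_of_prb_used,
                                  margin1=0.8, margin2=0.8, margin3=0.8):
        # Blocks 304-1 to 304-3: scale the headroom between each bottleneck
        # and its historical average by a safety margin in (0, 1).
        nr_of_users_sched_rev = math.floor((nr_of_user_cap_sched - avg_nr_of_user_sched) * margin1)
        nr_of_cce_candi_rev = math.floor((nr_of_max_cce_candi - avg_nr_of_cce_assigned) * margin2)
        nr_of_prb_rev = math.floor((nr_of_avail_prbs - avg_nr_of_prb_used) * margin3)
        # Block 304-4: reserve a fixed range so that MU and non-MU users are
        # multiplexed in a fixed frequency division rather than randomly.
        reserved_prbs = range(0, nr_of_prb_rev)
        return nr_of_users_sched_rev, nr_of_cce_candi_rev, reserved_prbs

For example, with nr_of_avail_prbs=100, avg_nr_of_prb_used=50 and margin3=0.8, the sketch reserves floor((100−50)*0.8)=40 PRBs, i.e. PRB 0 to PRB 39, matching the instance given above.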
As a first example, the base station may determine to use MU-MIMO when the following conditions are satisfied: 1) the number of users demanding dense traffic is above a first predetermined threshold; 2) the average number of PRBs allocated to MU-MIMO users is below a second predetermined threshold; and 3) the number of users with small package services is below a third predetermined threshold (e.g., there are not too many users with small package services). Note that further conditions may be added in the future, when new traffic demands appear. The first predetermined threshold may be set such that the condition 1) is satisfied when there are several MU users demanding dense traffic. Thus, the prerequisite for showing massive MIMO gain is relaxed to the presence of a few UEs with large buffers. In this way, the limited number of SRS resources is not critical for letting massive MIMO benefit cell throughput. As a second example, the base station may determine not to use MU-MIMO when one or more of the following conditions are satisfied: the number of users demanding dense traffic is smaller than the first predetermined threshold (e.g., the present traffic is not dense, or there is no user demanding dense data traffic); the number of users with small package services is greater than the third predetermined threshold (e.g., there are too many users with small package services); and the number of active users is below a fourth predetermined threshold (e.g., there are only a few users accommodated in the serving cell of the base station). In the above two examples, only historical scheduling characteristic information is used to determine whether to use MU-MIMO. As a third example, the base station may determine not to use MU-MIMO when the current resource demand information indicates one or more of the following conditions: the number of users currently demanding dense traffic is smaller than the first predetermined threshold; the number of users currently with small package services is greater than the third predetermined threshold (e.g., there are too many users with small package services); and the number of currently active users is below a fourth predetermined threshold. As a fourth example, similar to the first example, the base station may determine to use MU-MIMO when the three conditions in the first example are satisfied. The difference only lies in that any one or more of the number of users demanding dense traffic, the average number of PRBs allocated to MU-MIMO users, and the number of users with small package services is modified based on the current resource demand information. Referring back toFIG.3, at block306, the reserved resources are allocated to the MU-MIMO users. For example, the reserved resources may be allocated to the MU users as often as possible and as much as possible. In this way, SRS resources can be released earlier and then assigned to the next users. In addition, the convergence of link adaptation (LA) for MU users can be accelerated.FIG.7illustrates an exemplary example for resource reservation according to an embodiment of the disclosure. Suppose the reserved resources at blocks304-1to304-3are represented as (nrOfUsersSchedRev, nrOfCceCandiRev, nrOfPrbRev). Then, in the example ofFIG.7, the reserved resources are represented as (8, 8, 40). In this case, the first 40 PRBs are always assigned to MU users and PRB 40 to PRB 99 are allocated to other user(s). If the rough calculation mentioned above is used, then the expected massive MIMO gain is: 40/100*8+60/100=3.8.
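To unpack the arithmetic of this figure: with the reservation (8, 8, 40), the first 40 of the 100 PRBs each carry 8 MU-MIMO users while the remaining 60 PRBs each carry a single user, so the average gain is 40/100*8+60/100*1=3.2+0.6=3.8.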
Thus, the cell throughput can be enhanced to 3.8/1.7*100%≈223.5% of its previous value. This means massive MIMO can be significantly enhanced and customers' experience can be improved. Optionally, at block308, at least part of the unreserved resources is allocated to one or more non-MU-MIMO users. It should be noted that two blocks shown in succession in the figures may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. FIG.8is a block diagram showing an apparatus suitable for use in practicing some embodiments of the disclosure. For example, the base station described above may be implemented through the apparatus800. As shown, the apparatus800may include a processor810, a memory820that stores a program, and a communication interface830for communicating data with other external devices through wired and/or wireless communication. The program includes program instructions that, when executed by the processor810, enable the apparatus800to operate in accordance with the embodiments of the present disclosure, as discussed above. That is, the embodiments of the present disclosure may be implemented at least in part by computer software executable by the processor810, or by hardware, or by a combination of software and hardware. The memory820may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, flash memories, magnetic memory devices and systems, optical memory devices and systems, fixed memories and removable memories. The processor810may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on multi-core processor architectures, as non-limiting examples. FIG.9is a block diagram showing a base station according to an embodiment of the disclosure. As shown, the base station900comprises an obtaining module902, a reservation module904and an allocation module906. The obtaining module902may be configured to obtain historical scheduling characteristic information, as described above with respect to block302. The reservation module904may be configured to reserve resources for MU-MIMO users based at least on the historical scheduling characteristic information, as described above with respect to block304. The allocation module906may be configured to allocate the reserved resources to the MU-MIMO users, as described above with respect to block306. Optionally, the reservation module904may be configured to reserve resources for MU-MIMO users based further on current resource demand information. The allocation module906may be further configured to allocate at least part of the unreserved resources to one or more non-MU-MIMO users. The base station900may further comprise a determination module configured to determine whether to use MU-MIMO based on at least one of the historical scheduling characteristic information and current resource demand information. The reservation module904may be configured to reserve the resources for the MU-MIMO users when the determination module determines to use MU-MIMO. The modules described above may be implemented by hardware, or software, or a combination of both. In general, the various exemplary embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the disclosure is not limited thereto. While various aspects of the exemplary embodiments of this disclosure may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof. As such, it should be appreciated that at least some aspects of the exemplary embodiments of the disclosure may be practiced in various components such as integrated circuit chips and modules. It should thus be appreciated that the exemplary embodiments of this disclosure may be realized in an apparatus that is embodied as an integrated circuit, where the integrated circuit may comprise circuitry (as well as possibly firmware) for embodying at least one or more of a data processor, a digital signal processor, baseband circuitry and radio frequency circuitry that are configurable so as to operate in accordance with the exemplary embodiments of this disclosure. It should be appreciated that at least some aspects of the exemplary embodiments of the disclosure may be embodied in computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The computer executable instructions may be stored on a computer readable medium such as a hard disk, optical disk, removable storage media, solid state memory, RAM, etc. As will be appreciated by one skilled in the art, the function of the program modules may be combined or distributed as desired in various embodiments. In addition, the function may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like. References in the present disclosure to “one embodiment”, “an embodiment” and so on, indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to implement such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. It should be understood that, although the terms “first”, “second” and so on may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of the disclosure. 
As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed terms. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “has”, “having”, “includes” and/or “including”, when used herein, specify the presence of stated features, elements, and/or components, but do not preclude the presence or addition of one or more other features, elements, components and/or combinations thereof. The terms “connect”, “connects”, “connecting” and/or “connected” used herein cover the direct and/or indirect connection between two elements. The present disclosure includes any novel feature or combination of features disclosed herein either explicitly or any generalization thereof. Various modifications and adaptations to the foregoing exemplary embodiments of this disclosure may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings. However, any and all modifications will still fall within the scope of the non-limiting and exemplary embodiments of this disclosure.
11943668
DETAILED DESCRIPTION Transmission of a reservation signal may be in accordance with a packet size. Although transmission of a reservation signal may reduce interference in wireless communications systems, there may be occasions where resource allocation related to reservation signals is wasted. For example, when a packet transmission is below a packet size threshold, the wireless communications device may determine to refrain from transmitting a reservation signal. That is, because a reservation signal may have a fixed overhead (e.g., resource allocation) per packet transmission, there may be occasions where resources may be unused. As a result, wireless communications devices may experience inefficient management of resources related to reservation signaling. The described techniques relate to improved methods, systems, devices, and apparatuses that support pre-reservation resource management. The described techniques may enable a wireless communications device in a wireless communications system that supports indirect or direct communications between wireless communications devices (e.g., direct communications between multiple UEs), such as a D2D system, a V2X system (or other systems such as V2V networks, C-V2X networks), and the like to reliably determine when to transmit a reservation signal and select resources for the reservation signal using resources either from a same resource pool as resources for normal packet transmission or from a dedicated pool. A reservation signal may be a short transmission that reserves the resource for one or many subsequent data transmissions. These special transmissions require a small amount of time and frequency resources (e.g., a resource block, a slot, a transmission time interval, etc.) and may be sent separately ahead of the main data transmissions. The present disclosure addresses managing resources used for reservation signals in coexistence with the resource pool used for normal data transmissions. This may be achieved by having UEs select resources for transmitting reservation signals from a shared resource pool. The shared resource pool may be related to resources allocated, reserved, and selected for packet transmissions (e.g., normal traffic). In another aspect, a separate resource pool may be dedicated for the transmission of reservation signals. In some cases, the resources dedicated for reservation signals may include unoccupied resources (e.g., when the reservation signals do not occupy all of the resources dedicated for the reservation signals). Accordingly, in some example implementations of the techniques described herein, UEs may use the unoccupied resources of the resources dedicated for reservation signals for their own transmissions (e.g., data transmissions), which may result in more efficient use of resources. Aspects of the disclosure are initially described in the context of wireless communications systems. Aspects are then described with respect to a process flow that supports pre-reservation resource management. Aspects of the disclosure are further illustrated by and described with reference to apparatus diagrams, system diagrams, and flowcharts that relate to pre-reservation resource management for wireless communications. FIG.1illustrates an example of a wireless communications system100that supports pre-reservation resource management in accordance with one or more aspects of the present disclosure.
The wireless communications system100includes base stations105(e.g., gNodeBs (gNBs), and/or radio heads (RHs)), UEs115, and a core network130. In some examples, the wireless communications system100may be an LTE network, an LTE-A network, an LTE-A Pro network, or an NR network. In some cases, wireless communications system100may support enhanced broadband communications, ultra-reliable (e.g., mission critical) communications, low latency communications, or communications with low-cost and low-complexity devices. Base stations105may wirelessly communicate with UEs115via one or more base station antennas. Base stations105described herein may include or may be referred to by those skilled in the art as a base transceiver station, a radio base station, an access point, a radio transceiver, a NodeB, an eNodeB (eNB), a next-generation NodeB or giga-NodeB (either of which may be referred to as a gNB), a Home NodeB, a Home eNodeB, or some other suitable terminology. Wireless communications system100may include base stations105of different types (e.g., macro or small cell base stations). The UEs115described herein may be able to communicate with various types of base stations105and network equipment including macro eNBs, small cell eNBs, gNBs, relay base stations, and the like. Each base station105may be associated with a particular geographic coverage area110in which communications with various UEs115are supported. Each base station105may provide communication coverage for a respective geographic coverage area110via communication links125, and communication links125between a base station105and a UE115may utilize one or more carriers. Communication links125shown in wireless communications system100may include uplink transmissions from a UE115to a base station105, or downlink transmissions from a base station105to a UE115. Downlink transmissions may also be called forward link transmissions while uplink transmissions may also be called reverse link transmissions. The geographic coverage area110for a base station105may be divided into sectors making up a portion of the geographic coverage area110, and each sector may be associated with a cell. For example, each base station105may provide communication coverage for a macro cell, a small cell, a hot spot, or other types of cells, or various combinations thereof. In some examples, a base station105may be movable and therefore provide communication coverage for a moving geographic coverage area110. In some examples, different geographic coverage areas110associated with different technologies may overlap, and overlapping geographic coverage areas110associated with different technologies may be supported by the same base station105or by different base stations105. The wireless communications system100may include, for example, a heterogeneous LTE/LTE-A/LTE-A Pro or NR network in which different types of base stations105provide coverage for various geographic coverage areas110. The term “cell” refers to a logical communication entity used for communication with a base station105(e.g., over a carrier), and may be associated with an identifier for distinguishing neighboring cells (e.g., a physical cell identifier (PCID), a virtual cell identifier (VCID)) operating via the same or a different carrier.
In some examples, a carrier may support multiple cells, and different cells may be configured according to different protocol types (e.g., machine-type communication (MTC), narrowband Internet-of-Things (NB-IoT), enhanced mobile broadband (eMBB), or others) that may provide access for different types of devices. In some cases, the term “cell” may refer to a portion of a geographic coverage area110(e.g., a sector) over which the logical entity operates. UEs115may be dispersed throughout the wireless communications system100, and each UE115may be stationary or mobile. A UE115may also be referred to as a mobile device, a wireless device, a remote device, a handheld device, or a subscriber device, or some other suitable terminology, where the “device” may also be referred to as a unit, a station, a terminal, or a client. A UE115may also be a personal electronic device such as a cellular phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a personal computer. In some examples, a UE115may also refer to a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, or an MTC device, or the like, which may be implemented in various articles such as appliances, vehicles, meters, or the like. Some UEs115, such as MTC or IoT devices, may be low cost or low complexity devices, and may provide for automated communication between machines (e.g., via Machine-to-Machine (M2M) communication). M2M communication or MTC may refer to data communication technologies that allow devices to communicate with one another or a base station105without human intervention. In some examples, M2M communication or MTC may include communications from devices that integrate sensors or meters to measure or capture information and relay that information to a central server or application program that can make use of the information or present the information to humans interacting with the program or application. Some UEs115may be designed to collect information or enable automated behavior of machines. Examples of applications for MTC devices include smart metering, inventory monitoring, water level monitoring, equipment monitoring, healthcare monitoring, wildlife monitoring, weather and geological event monitoring, fleet management and tracking, remote security sensing, physical access control, and transaction-based business charging. Some UEs115may be configured to employ operating modes that reduce power consumption, such as half-duplex communications (e.g., a mode that supports one-way communication via transmission or reception, but not transmission and reception simultaneously). In some examples half-duplex communications may be performed at a reduced peak rate. Other power conservation techniques for UEs115include entering a power saving “deep sleep” mode when not engaging in active communications, or operating over a limited bandwidth (e.g., according to narrowband communications). In some cases, UEs115may be designed to support critical functions (e.g., mission critical functions), and a wireless communications system100may be configured to provide ultra-reliable communications for these functions. In some cases, a UE115may also be able to communicate directly with other UEs115(e.g., using a peer-to-peer (P2P) or D2D protocol). One or more of a group of UEs115utilizing D2D communications may be within the geographic coverage area110of a base station105. 
Other UEs115in such a group may be outside the geographic coverage area110of a base station105, or be otherwise unable to receive transmissions from a base station105. In some cases, groups of UEs115communicating via D2D communications may utilize a one-to-many (1:M) system in which each UE115transmits to every other UE115in the group. In some cases, a base station105facilitates the scheduling of resources for D2D communications. In other cases, D2D communications are carried out between UEs115without the involvement of a base station105. Base stations105may communicate with the core network130and with one another. For example, base stations105may interface with the core network130through backhaul links132(e.g., via an S1, N2, N3, or other interface). Base stations105may communicate with one another over backhaul links134(e.g., via an X2, Xn, or other interface) either directly (e.g., directly between base stations105) or indirectly (e.g., via core network130). A UE115may communicate with the core network130through communication link135. The core network130may provide user authentication, access authorization, tracking, Internet Protocol (IP) connectivity, and other access, routing, or mobility functions. The core network130may be an evolved packet core (EPC), which may include at least one mobility management entity (MME), at least one serving gateway (S-GW), and at least one Packet Data Network (PDN) gateway (P-GW). The MME may manage non-access stratum (e.g., control plane) functions such as mobility, authentication, and bearer management for UEs115served by base stations105associated with the EPC. User IP packets may be transferred through the S-GW, which itself may be connected to the P-GW. The P-GW may provide IP address allocation as well as other functions. The P-GW may be connected to the network operator's IP services. The operator's IP services may include access to the Internet, Intranet(s), an IP Multimedia Subsystem (IMS), or a Packet-Switched (PS) Streaming Service. At least some of the network devices, such as a base station105, may include subcomponents such as an access network entity, which may be an example of an access node controller (ANC). Each access network entity may communicate with UEs115through a number of other access network transmission entities, which may be referred to as a radio head, a smart radio head, or a transmission/reception point (TRP). In some configurations, various functions of each access network entity or base station105may be distributed across various network devices (e.g., radio heads and access network controllers) or consolidated into a single network device (e.g., a base station105). Wireless communications system100may operate using one or more frequency bands, for example, in the range of 300 megahertz (MHz) to 300 gigahertz (GHz). The region from 300 MHz to 3 GHz is known as the ultra-high frequency (UHF) region or decimeter band, since the wavelengths range from approximately one decimeter to one meter in length. UHF waves may be blocked or redirected by buildings and environmental features. However, the waves may penetrate structures sufficiently for a macro cell to provide service to UEs115located indoors. Transmission of UHF waves may be associated with smaller antennas and shorter range (e.g., less than 100 km) compared to transmission using the lower frequencies and longer waves of the high frequency (HF) or very high frequency (VHF) portion of the spectrum below 300 MHz.
Wireless communications system100may also operate in a super high frequency (SHF) region using frequency bands from 3 GHz to 30 GHz, also known as the centimeter band. The SHF region includes bands such as the 5 GHz industrial, scientific, and medical (ISM) bands, which may be used opportunistically by devices that may be capable of tolerating interference from other users. Wireless communications system100may also operate in an extremely high frequency (EHF) region of the spectrum (e.g., from 30 GHz to 300 GHz), also known as the millimeter band. In some examples, wireless communications system100may support millimeter wave (mmW) communications between UEs115and base stations105, and EHF antennas of the respective devices may be even smaller and more closely spaced than UHF antennas. In some cases, this may facilitate use of antenna arrays within a UE115. However, the propagation of EHF transmissions may be subject to even greater atmospheric attenuation and shorter range than SHF or UHF transmissions. Techniques disclosed herein may be employed across transmissions that use one or more different frequency regions, and designated use of bands across these frequency regions may differ by country or regulating body. In some cases, wireless communications system100may utilize both licensed and unlicensed radio frequency spectrum bands. For example, wireless communications system100may employ License Assisted Access (LAA), LTE-Unlicensed (LTE-U) radio access technology, or NR technology in an unlicensed band such as the 5 GHz ISM band. When operating in unlicensed radio frequency spectrum bands, wireless devices such as base stations105and UEs115may employ listen-before-talk (LBT) procedures to ensure a frequency channel is clear before transmitting data. In some cases, operations in unlicensed bands may be based on a carrier aggregation configuration in conjunction with component carriers operating in a licensed band (e.g., LAA). Operations in unlicensed spectrum may include downlink transmissions, uplink transmissions, peer-to-peer transmissions, or a combination of these. Duplexing in unlicensed spectrum may be based on frequency division duplexing (FDD), time division duplexing (TDD), or a combination of both. In some examples, base station105or UE115may be equipped with multiple antennas, which may be used to employ techniques such as transmit diversity, receive diversity, multiple-input multiple-output (MIMO) communications, or beamforming. For example, wireless communications system100may use a transmission scheme between a transmitting device (e.g., a base station105) and a receiving device (e.g., a UE115), where the transmitting device is equipped with multiple antennas and the receiving device is equipped with one or more antennas. MIMO communications may employ multipath signal propagation to increase the spectral efficiency by transmitting or receiving multiple signals via different spatial layers, which may be referred to as spatial multiplexing. The multiple signals may, for example, be transmitted by the transmitting device via different antennas or different combinations of antennas. Likewise, the multiple signals may be received by the receiving device via different antennas or different combinations of antennas. Each of the multiple signals may be referred to as a separate spatial stream, and may carry bits associated with the same data stream (e.g., the same codeword) or different data streams. 
Different spatial layers may be associated with different antenna ports used for channel measurement and reporting. MIMO techniques include single-user MIMO (SU-MIMO) where multiple spatial layers are transmitted to the same receiving device, and multiple-user MIMO (MU-MIMO) where multiple spatial layers are transmitted to multiple devices. Beamforming, which may also be referred to as spatial filtering, directional transmission, or directional reception, is a signal processing technique that may be used at a transmitting device or a receiving device (e.g., a base station105or a UE115) to shape or steer an antenna beam (e.g., a transmit beam or receive beam) along a spatial path between the transmitting device and the receiving device. Beamforming may be achieved by combining the signals communicated via antenna elements of an antenna array such that signals propagating at particular orientations with respect to an antenna array experience constructive interference while others experience destructive interference. The adjustment of signals communicated via the antenna elements may include a transmitting device or a receiving device applying certain amplitude and phase offsets to signals carried via each of the antenna elements associated with the device. The adjustments associated with each of the antenna elements may be defined by a beamforming weight set associated with a particular orientation (e.g., with respect to the antenna array of the transmitting device or receiving device, or with respect to some other orientation). In one example, a base station105may use multiple antennas or antenna arrays to conduct beamforming operations for directional communications with a UE115. For instance, some signals (e.g., synchronization signals, reference signals, beam selection signals, or other control signals) may be transmitted by a base station105multiple times in different directions, which may include a signal being transmitted according to different beamforming weight sets associated with different directions of transmission. Transmissions in different beam directions may be used to determine (e.g., by the base station105or a receiving device, such as a UE115) a beam direction for subsequent transmission and/or reception by the base station105. Some signals, such as data signals associated with a particular receiving device, may be transmitted by a base station105in a single beam direction (e.g., a direction associated with the receiving device, such as a UE115). In some examples, the beam direction associated with transmissions along a single beam direction may be determined based at least in part on a signal that was transmitted in different beam directions. For example, a UE115may receive one or more of the signals transmitted by the base station105in different directions, and the UE115may report to the base station105an indication of the signal it received with a highest signal quality, or an otherwise acceptable signal quality. Although these techniques are described with reference to signals transmitted in one or more directions by a base station105, a UE115may employ similar techniques for transmitting signals multiple times in different directions (e.g., for determining a beam direction for subsequent transmission or reception by the UE115), or transmitting a signal in a single direction (e.g., for transmitting data to a receiving device).
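As a concrete, non-normative illustration of the weight application described above, the short Python sketch below forms a beamforming weight set for a uniform linear array under a simple narrowband model; the array geometry, the normalization, and all function names are assumptions for illustration only and are not taken from the disclosure.

    import cmath, math

    def steering_weights(n_elements, spacing_wavelengths, angle_deg):
        # One complex amplitude/phase offset per antenna element, chosen so
        # that signals toward angle_deg combine constructively while other
        # directions tend to interfere destructively.
        theta = math.radians(angle_deg)
        return [cmath.exp(-2j * math.pi * spacing_wavelengths * k * math.sin(theta)) / n_elements
                for k in range(n_elements)]

    def apply_weights(weights, element_signals):
        # Weighted combination across the antenna elements of the array.
        return sum(w * s for w, s in zip(weights, element_signals))

    # Example: an 8-element array with half-wavelength spacing steered to 20 degrees.
    w = steering_weights(8, 0.5, 20.0)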
A receiving device (e.g., a UE115, which may be an example of a mmW receiving device) may try multiple receive beams when receiving various signals from the base station105, such as synchronization signals, reference signals, beam selection signals, or other control signals. For example, a receiving device may try multiple receive directions by receiving via different antenna subarrays, by processing received signals according to different antenna subarrays, by receiving according to different receive beamforming weight sets applied to signals received at a plurality of antenna elements of an antenna array, or by processing received signals according to different receive beamforming weight sets applied to signals received at a plurality of antenna elements of an antenna array, any of which may be referred to as “listening” according to different receive beams or receive directions. In some examples, a receiving device may use a single receive beam to receive along a single beam direction (e.g., when receiving a data signal). The single receive beam may be aligned in a beam direction determined based at least in part on listening according to different receive beam directions (e.g., a beam direction determined to have a highest signal strength, highest signal-to-noise ratio, or otherwise acceptable signal quality based at least in part on listening according to multiple beam directions). In some cases, the antennas of a base station105or UE115may be located within one or more antenna arrays, which may support MIMO operations, or transmit or receive beamforming. For example, one or more base station antennas or antenna arrays may be co-located at an antenna assembly, such as an antenna tower. In some cases, antennas or antenna arrays associated with a base station105may be located in diverse geographic locations. A base station105may have an antenna array with a number of rows and columns of antenna ports that the base station105may use to support beamforming of communications with a UE115. Likewise, a UE115may have one or more antenna arrays that may support various MIMO or beamforming operations. In some cases, wireless communications system100may be a packet-based network that operates according to a layered protocol stack. In the user plane, communications at the bearer or Packet Data Convergence Protocol (PDCP) layer may be IP-based. A Radio Link Control (RLC) layer may perform packet segmentation and reassembly to communicate over logical channels. A Medium Access Control (MAC) layer may perform priority handling and multiplexing of logical channels into transport channels. The MAC layer may also use hybrid automatic repeat request (HARQ) to provide retransmission at the MAC layer to improve link efficiency. In the control plane, the Radio Resource Control (RRC) protocol layer may provide establishment, configuration, and maintenance of an RRC connection between a UE115and a base station105or core network130supporting radio bearers for user plane data. At the Physical layer, transport channels may be mapped to physical channels. In some cases, UEs115and base stations105may support retransmissions of data to increase the likelihood that data is received successfully. HARQ feedback is one technique for increasing the likelihood that data is received correctly over a communication link125. HARQ may include a combination of error detection (e.g., using a cyclic redundancy check (CRC)), forward error correction (FEC), and retransmission (e.g., automatic repeat request (ARQ)).
HARQ may improve throughput at the MAC layer in poor radio conditions (e.g., signal-to-noise conditions). In some cases, a wireless device may support same-slot HARQ feedback, where the device may provide HARQ feedback in a specific slot for data received in a previous symbol in the slot. In other cases, the device may provide HARQ feedback in a subsequent slot, or according to some other time interval. Time intervals in LTE or NR may be expressed in multiples of a basic time unit, which may, for example, refer to a sampling period of Ts=1/30,720,000 seconds. Time intervals of a communications resource may be organized according to radio frames each having a duration of 10 milliseconds (ms), where the frame period may be expressed as Tf=307,200 Ts. The radio frames may be identified by a system frame number (SFN) ranging from 0 to 1023. Each frame may include 10 subframes numbered from 0 to 9, and each subframe may have a duration of 1 ms. A subframe may be further divided into 2 slots each having a duration of 0.5 ms, and each slot may contain 6 or 7 modulation symbol periods (e.g., depending on the length of the cyclic prefix prepended to each symbol period). Excluding the cyclic prefix, each symbol period may contain 2048 sampling periods. In some cases, a subframe may be the smallest scheduling unit of the wireless communications system100, and may be referred to as a transmission time interval. In other cases, a smallest scheduling unit of the wireless communications system100may be shorter than a subframe or may be dynamically selected (e.g., in bursts of shortened transmission time intervals or in selected component carriers using shortened transmission time intervals). In some wireless communications systems, a slot may further be divided into multiple mini-slots containing one or more symbols. In some instances, a mini-slot or a symbol of a mini-slot may be the smallest unit of scheduling. Each symbol may vary in duration depending on the subcarrier spacing or frequency band of operation, for example. Further, some wireless communications systems may implement slot aggregation in which multiple slots or mini-slots are aggregated together and used for communication between a UE115and a base station105. The term “carrier” refers to a set of radio frequency spectrum resources having a defined physical layer structure for supporting communications over a communication link125. For example, a carrier of a communication link125may include a portion of a radio frequency spectrum band that is operated according to physical layer channels for a given radio access technology. Each physical layer channel may carry user data, control information, or other signaling. A carrier may be associated with a pre-defined frequency channel (e.g., an evolved universal mobile telecommunication system terrestrial radio access (E-UTRA) absolute radio frequency channel number (EARFCN)), and may be positioned according to a channel raster for discovery by UEs115. Carriers may be downlink or uplink (e.g., in an FDD mode), or be configured to carry downlink and uplink communications (e.g., in a TDD mode). In some examples, signal waveforms transmitted over a carrier may be made up of multiple sub-carriers (e.g., using multi-carrier modulation (MCM) techniques such as orthogonal frequency division multiplexing (OFDM) or DFT-S-OFDM). The organizational structure of the carriers may be different for different radio access technologies (e.g., LTE, LTE-A, LTE-A Pro, NR).
For example, communications over a carrier may be organized according to transmission time intervals or slots, each of which may include user data as well as control information or signaling to support decoding the user data. A carrier may also include dedicated acquisition signaling (e.g., synchronization signals or system information, etc.) and control signaling that coordinates operation for the carrier. In some examples (e.g., in a carrier aggregation configuration), a carrier may also have acquisition signaling or control signaling that coordinates operations for other carriers. Physical channels may be multiplexed on a carrier according to various techniques. A physical control channel and a physical data channel may be multiplexed on a downlink carrier, for example, using time division multiplexing (TDM) techniques, frequency division multiplexing (FDM) techniques, or hybrid TDM-FDM techniques. In some examples, control information transmitted in a physical control channel may be distributed between different control regions in a cascaded manner (e.g., between a common control region or common search space and one or more UE-specific control regions or UE-specific search spaces). A carrier may be associated with a particular bandwidth of the radio frequency spectrum, and in some examples the carrier bandwidth may be referred to as a “system bandwidth” of the carrier or the wireless communications system100. For example, the carrier bandwidth may be one of a number of predetermined bandwidths for carriers of a particular radio access technology (e.g., 1.4, 3, 5, 10, 15, 20, 40, or 80 MHz). In some examples, each served UE115may be configured for operating over portions or all of the carrier bandwidth. In other examples, some UEs115may be configured for operation using a narrowband protocol type that is associated with a predefined portion or range (e.g., set of subcarriers or RBs) within a carrier (e.g., “in-band” deployment of a narrowband protocol type). In a system employing MCM techniques, a resource element may consist of one symbol period (e.g., a duration of one modulation symbol) and one subcarrier, where the symbol period and subcarrier spacing are inversely related. The number of bits carried by each resource element may depend on the modulation scheme (e.g., the order of the modulation scheme). Thus, the more resource elements that a UE115receives and the higher the order of the modulation scheme, the higher the data rate may be for the UE115. In MIMO systems, a wireless communications resource may refer to a combination of a radio frequency spectrum resource, a time resource, and a spatial resource (e.g., spatial layers), and the use of multiple spatial layers may further increase the data rate for communications with a UE115. Devices of the wireless communications system100(e.g., base stations105or UEs115) may have a hardware configuration that supports communications over a particular carrier bandwidth, or may be configurable to support communications over one of a set of carrier bandwidths. In some examples, the wireless communications system100may include base stations105and/or UEs115that support simultaneous communications via carriers associated with more than one different carrier bandwidth. The wireless communications system100may support communication with a UE115on multiple cells or carriers, a feature which may be referred to as carrier aggregation or multi-carrier operation. 
A UE115may be configured with multiple downlink component carriers and one or more uplink component carriers according to a carrier aggregation configuration. Carrier aggregation may be used with both FDD and TDD component carriers. In some cases, wireless communications system100may utilize enhanced component carriers (eCCs). An eCC may be characterized by one or more features including wider carrier or frequency channel bandwidth, shorter symbol duration, shorter transmission time interval duration, or modified control channel configuration. In some cases, an eCC may be associated with a carrier aggregation configuration or a dual connectivity configuration (e.g., when multiple serving cells have a suboptimal or non-ideal backhaul link). An eCC may also be configured for use in unlicensed spectrum or shared spectrum (e.g., where more than one operator is allowed to use the spectrum). An eCC characterized by wide carrier bandwidth may include one or more segments that may be utilized by UEs115that are not capable of monitoring the whole carrier bandwidth or are otherwise configured to use a limited carrier bandwidth (e.g., to conserve power). In some cases, an eCC may utilize a different symbol duration than other component carriers, which may include use of a reduced symbol duration as compared with symbol durations of the other component carriers. A shorter symbol duration may be associated with increased spacing between adjacent subcarriers. A device, such as a UE115or base station105, utilizing eCCs may transmit wideband signals (e.g., according to frequency channel or carrier bandwidths of 20, 40, 60, 80 MHz, etc.) at reduced symbol durations (e.g., 16.67 microseconds). A transmission time interval in eCC may consist of one or multiple symbol periods. In some cases, the transmission time interval duration (that is, the number of symbol periods in a transmission time interval) may be variable. Wireless communications system100may be an NR system that may utilize any combination of licensed, shared, and unlicensed spectrum bands, among others. The flexibility of eCC symbol duration and subcarrier spacing may allow for the use of eCC across multiple spectrums. In some examples, NR shared spectrum may increase spectrum utilization and spectral efficiency, specifically through dynamic vertical (e.g., across the frequency domain) and horizontal (e.g., across the time domain) sharing of resources. In some deployments of the wireless communications system100, such as a V2X system (or other systems such as V2V networks, C-V2X networks, and the like), wireless communications devices may perform pre-reservation resource management. These wireless communications devices may be examples of UEs115. For example, a UE115may support direct communications with other UEs115, and may transmit a reservation signal, prior to a packet transmission, to UEs115in the wireless communications system100. A reservation signal may also be referred to herein as a pre-reservation signal. The reservation signal may provide an indication to the UEs115in the wireless communications system100of reserved resources for the packet transmission. As a result, any UE115within a threshold range (e.g., a distance) that receives the reservation signal may refrain from using resources that overlap with the reserved resources of the packet transmission. A reservation signal may consume a small amount of time and frequency resources (e.g., a resource block, a slot, a transmission time interval, etc.).
That is, a reservation signal may have a dedicated (e.g., a fixed) resource pool for selecting resources for transmission of the reservation signal. In some examples, UEs115may determine whether to transmit a reservation signal based in part on a parameter (e.g., a packet size, a packet priority, and the like). Therefore, in some examples, UEs115may refrain from transmitting a reservation signal when one or more packets for a packet transmission are below a packet size, a packet priority, and the like. In this example, UEs115may experience inefficient management of resources (e.g., wasted resources) related to reservation signaling because resources allocated for reservation signaling may go unused. By way of example, if a fixed resource pool is allocated for reservation signaling (e.g., transmission of a reservation signal), the allocated pool will be wasted when transmissions from UEs115are low (e.g., low traffic) or congested when transmissions from UEs115are high (e.g., high traffic). In some examples, overall resources for UEs115in the wireless communications system100may also be fragmented. To address challenges related to pre-reservation resource management, such as inefficiencies, among others, UEs115may select resources for transmitting reservation signals from a shared resource pool. The shared resource pool may be related to resources allocated, reserved, and selected for packet transmissions (e.g., normal traffic). As a result, UEs115in the wireless communications system100may benefit from improved efficiency and reduced latency associated with processes related to scheduling resources for packet transmission or packet re-transmission because UEs115may use resources otherwise dedicated largely for reservation signaling. In addition, selecting resources for transmitting reservation signals from a shared resource pool may not have any adverse effects on normal packet transmissions, since reservation signals may use limited resources. For example, each reservation signal may occupy one transmission time interval and less than one subchannel. In this example, one subchannel may be capable of fitting multiple non-overlapping reservation signals. Therefore, each UE115seeking to transmit a reservation signal may search for a subchannel and a transmission time interval (e.g., 1 subchannel by 1 transmission time interval) resource that is non-overlapping with any reserved resources (e.g., of other reservation signals associated with other UEs115, or packet transmissions of other UEs115). The UEs115may then select a resource location (e.g., a resource block) randomly within the subchannel and the transmission time interval resource. One or more of the base stations105may include a base station communication manager101, which may support distance-based resource exclusion. UEs115may include a UE communication manager102, which may support distance-based resource exclusion. For example, a UE communication manager102may determine a packet for transmission, determine whether to transmit a reservation signal prior to the transmission of the packet based in part on a condition, the reservation signal reserving resources for the transmission of the packet, and the reservation signal sharing resources from a same resource pool as the resources for the transmission of the packet, and refrain from transmitting the reservation signal based in part on determining whether to transmit the reservation signal prior to the transmission of the packet based in part on the condition.
The UE communication manager102may additionally, or alternatively, determine to transmit a reservation signal prior to a transmission of a packet based in part on a condition, allocate resources for the reservation signal from a same resource pool associated with the transmission of the packet or a dedicated resource pool based in part on a pre-reservation resource pattern, the reservation signal reserving one or more resources for the transmission of the packet, and determine whether to transmit the reservation signal using the allocated resources. The pre-reservation resource pattern may be a pattern of resource block locations where reservation signals can start. A dedicated resource pool may also be referred to herein as one or more resources dedicated for pre-reservation (e.g., a reservation signal). Accordingly, pre-reservation resource management may provide benefits and enhancements to the operation of UEs115. For example, by enabling UEs115to reliably determine when to transmit a reservation signal and select resources for the reservation signal using resources from a same resource pool as resources for normal packet transmission, operational characteristics, such as power consumption, processor utilization, and memory usage related to packet transmission, may be reduced. The pre-reservation resource management may also provide efficiency to UEs115by reducing latency associated with processes related to scheduling resources for packet transmission or packet re-transmission, and more specifically avoiding unexploited resources in the wireless communications system100. For example, UEs115may reduce latency when packet transmissions can be transmitted directly with a reservation signal, or improve reliability when there is pre-reservation and the packet transmission is protected from interference. FIG.2illustrates an example of a wireless communications system200that supports pre-reservation resource management for wireless communications in accordance with one or more aspects of the present disclosure. The wireless communications system200may include a base station105-a, a UE115-a, and a UE115-b, which may be examples of the corresponding devices described with reference toFIG.1. In some examples, the wireless communications system200may implement aspects of the wireless communications system100. For example, the wireless communications system200may enable the UE115-aand the UE115-bto reliably determine when to transmit a reservation signal and reserve resources for the reservation signal using resources from a same resource pool as resources for normal packet transmission. As a result, the UE115-aand the UE115-bmay experience improved efficiency by reducing latency associated with processes related to scheduling resources for packet transmission or packet re-transmission, and more specifically avoiding unexploited resources in the wireless communications system200. In some examples, the wireless communications system200may be a 4G system, such as an LTE, LTE-A, or LTE-A Pro system, or a 5G system, which may be referred to as an NR system. In this example, base station105-amay perform a connection procedure (e.g., an RRC procedure, such as a cell acquisition procedure, a random access procedure, an RRC connection procedure, an RRC configuration procedure) with the UE115-a, and establish a communication link205. Base station105-amay provide communication coverage for a respective geographic coverage area110-a.
In other examples, the wireless communications system200may additionally, or alternatively, support direct communications (e.g., between multiple UEs). Examples of direct communications may include, but are not limited to, D2D communications and vehicle-based communications, which may also be referred to as V2X networks, V2V networks, C-V2X networks, and the like. In this example, UE115-amay establish a communication link210via direct communications (e.g., D2D) with the UE115-b. The UE115-amay transmit reservation signals as well as packet transmissions to the base station105-aand the UE115-bvia communication links205,210. A UE115-amay determine a packet for transmission. For example, UE115-amay have one or more packets for transmission to base station105-a, UE115-b, or one or more other UEs (not shown). Prior to transmission of the packet, UE115-amay determine whether to transmit reservation signal(s)215-a,215-b. This determination may be based in part on a condition. For example, a condition may be a congestion level. In some examples, UE115-amay enable or disable reservation signaling according to a congestion level associated with the wireless communications system200. A congestion level may be per packet. For example, a congestion level may be based in part on a packet size, a priority of a packet, a number of available resources for reservation signaling, a number of reserved resources by other UEs, a number of reserved resources for a packet, a reliability requirement of a packet (e.g., a QoS), a number of re-transmissions of a packet, or a combination thereof. UE115-amay therefore enable or disable reservation signaling based in part on the congestion level being equal to or below, or above, a threshold. For example, if a congestion level associated with the wireless communications system200is equal to or below a threshold, UE115-amay enable reservation signaling. Otherwise, if the congestion level is above the threshold, UE115-amay disable reservation signaling. In further examples, a congestion level may be based in part on a traffic load in the wireless communications system200. As such, when a traffic load is high (e.g., above a threshold), transmission of the reservation signal(s)215-a,215-bmay further affect the traffic load of the wireless communications system200. Because transmission of the reservation signal(s)215-a,215-bmay be performed via a number of narrowband transmissions, the reservation signal(s)215-a,215-bmay fragment consecutive blocks of resources, which can otherwise be used for normal packet transmissions. To improve pre-reservation resource management in the wireless communications system200, the UE115-a(and/or the UE115-b) may be configured to enable or disable reservation signaling. Additionally, or alternatively, UE115-amay enable or disable reservation signaling according to a packet drop ratio. For example, if a packet drop ratio is equal to or above a threshold, UE115-amay disable reservation signaling. Otherwise, UE115-amay enable the reservation signaling. UE115-amay also monitor and determine a packet drop ratio based in part on a packet size, a priority of a packet, a number of available resources for reservation signaling, a number of reserved resources by other UEs, a number of reserved resources for a packet, a reliability requirement of a packet (e.g., a QoS), a number of re-transmissions of a packet, or a combination thereof.
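A minimal Python sketch of this gating logic is shown below; the threshold names and default values are hypothetical, since the disclosure does not fix concrete numbers, and the congestion and drop-ratio inputs would in practice be derived per packet from the factors listed above.

    def reservation_signaling_enabled(congestion_level, packet_drop_ratio,
                                      congestion_threshold=0.7,
                                      drop_ratio_threshold=0.1):
        # Disable reservation signaling under high congestion, since the
        # narrowband reservation transmissions would add to the traffic load.
        if congestion_level > congestion_threshold:
            return False
        # Disable when the monitored packet drop ratio is at or above its threshold.
        if packet_drop_ratio >= drop_ratio_threshold:
            return False
        # Otherwise, the congestion level is at or below the threshold: enable.
        return True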
In some examples, UE115-amay be configured to have reservation signaling continuously enabled or disabled irrespective of a congestion level or a packet drop ratio associated with the wireless communications system200. Returning to the congestion level examples, UE115-amay also determine a congestion index value according to a determined congestion level. For example, UE115-amay map a determined congestion level (e.g., based in part on a resource unavailability, a packet size, or a packet priority, and the like) to a congestion index value in a relational database, a bitmap, a table, or the like, that has a set of congestion index values. A relational database, a bitmap, a table, or the like may provide an indication to UE115-aon whether to enable or disable reservation signaling based in part on a congestion index value. For example, a first congestion level determined by UE115-afor a first packet (e.g., based in part on a resource unavailability, a packet size, or a packet priority, and the like) may map to a first congestion index value, which may indicate to UE115-ato enable reservation signaling for the first packet. In another example, a second congestion level determined by UE115-afor a second packet (e.g., based in part on a resource unavailability, a packet size, or a packet priority, and the like) may map to a second congestion index value, which may indicate to UE115-ato disable reservation signaling for the second packet. In some examples, the relational database, bitmap, table, or the like may be configured to indicate enabling or disabling reservation signaling when a congestion level is within a congestion level range. For example, the relational database, bitmap, table, or the like may be configured with a set of congestion level ranges (e.g., a first range including a first set of congestion index values, a second range including a second set of congestion index values, and the like). In this example, the range within which the UE115-a's mapped congestion index value falls may determine whether the UE115-aenables or disables reservation signaling. After UE115-adetermines to enable reservation signaling (e.g., that UE115-amay proceed with transmitting reservation signal(s)215-a,215-b), UE115-amay determine (e.g., identify) and select resources (e.g., time and frequency resources) for the reservation signaling. To address standing challenges related to pre-reservation resource management, such as inefficiencies, UE115-amay determine and select resources for reservation signaling from a shared resource pool. The shared resource pool may be related to resources allocated, reserved, and selected for packet transmissions (e.g., normal traffic) for UE115-aor one or more other UEs115(e.g., UE115-b). As a result, UE115-amay benefit from improved efficiency and reduced latency associated with processes related to scheduling resources for packet transmission or packet re-transmission because UE115-amay use resources originally dedicated to reservation signaling when reservation signaling is disabled. In addition, selecting resources for reservation signaling from a shared resource pool may not have any adverse effect on packet transmissions (e.g., normal traffic), since reservation signaling uses fewer resources than packet transmissions. UE115-amay subsequently select one or more resources for future data transmissions.
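The table-driven mapping described above may be illustrated with a short sketch; the congestion level ranges, index values, and the choice of which indices enable signaling are all invented for this example.

    # Hypothetical mapping of a determined congestion level to a congestion
    # index value, with a range of index values that enables signaling.
    CONGESTION_INDEX_TABLE = [
        # (lower bound, upper bound, congestion index value)
        (0.0, 0.3, 0),
        (0.3, 0.6, 1),
        (0.6, 0.8, 2),
        (0.8, 1.0, 3),
    ]
    ENABLED_INDICES = range(0, 2)  # indices 0 and 1 enable reservation signaling

    def congestion_index(level: float) -> int:
        for low, high, index in CONGESTION_INDEX_TABLE:
            if low <= level < high:
                return index
        return CONGESTION_INDEX_TABLE[-1][2]  # clamp to the highest index

    def reservation_enabled_for(level: float) -> bool:
        return congestion_index(level) in ENABLED_INDICES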
Selection of the one or more resources for a future data transmission may occur after a resource for pre-reservation has been identified. This prevents the possibility of a resource being stale by the time UE115-aperforms the data transmission. That is, if the one or more resources for the data transmission were selected before the resource for pre-reservation is identified, the selected resources may have become stale because another UE (e.g., UE115-b) may have already claimed those resources. In some examples, because each reservation signal may occupy one transmission time interval and less than one subchannel, to reduce (receiver) complexity for blind-decoding reservation signaling, resource selection for reservation signaling may be bound to at most one subchannel and one transmission time interval according to a pre-reservation resource pattern. In the wireless communications system200, UE115-a(and UE115-b) may be configured with resource (start) positions (e.g., resource block locations). For example, UE115-amay determine a resource start position for resource allocation of reservation signaling according to a resource allocation map (e.g., a bitmap). Therefore, if the resource blocks corresponding to the resource start position for resource allocation of reservation signaling are unavailable, UE115-amay buffer the reservation signaling and select resources for it in a subsequent resource (e.g., a subsequent slot or transmission time interval). In some examples, the pre-reservation resource pattern may hop from slot to slot to provide randomness for reservation signaling. UE115-amay be configured with the pre-reservation resource pattern, or a network device (e.g., base station105-a) may configure the UE115-awith it. As such, UE115-amay be aware of resource locations to use for reservation signaling. In turn, UEs receiving reservation signaling (e.g., UE115-bfrom UE115-a) may perform blind decoding to detect and receive the reservation signaling according to the configured resource locations. In some examples, UE115-amay use resources reserved for a reservation signal according to the pre-reservation resource pattern for the packet transmission when a congestion level or packet drop ratio satisfies a threshold (e.g., is above a threshold). Otherwise, UE115-amay refrain from using configured resources associated with reservation signaling for packet transmission. By way of example, UE115-amay determine that a congestion level or a packet drop ratio, or both, are below a first threshold, and refrain from allocating resources for the packet transmission from resources dedicated to the reservation signal in the shared (or dedicated) resource pool, based in part on the congestion level or the packet drop ratio, or both, being below the first threshold.
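The pattern-based start positions, the slot-to-slot hopping, and the fall back to a subsequent slot described above may be sketched as follows; the bitmap layout, the hop function, and the deferral bound are illustrative assumptions only.

    # Hypothetical selection of a reservation resource from configured
    # start positions, with per-slot hopping and deferral when occupied.
    from typing import List, Optional, Set, Tuple

    def start_position(pattern_bitmap: List[int], slot: int, hop_seed: int = 7) -> int:
        # Configured start positions are the set bits of the bitmap; the
        # pattern "hops" by indexing them differently in each slot.
        positions = [i for i, bit in enumerate(pattern_bitmap) if bit]
        if not positions:
            raise ValueError("pattern bitmap configures no start positions")
        return positions[(slot * hop_seed) % len(positions)]

    def select_reservation_resource(pattern_bitmap: List[int], slot: int,
                                    occupied: Set[Tuple[int, int]],
                                    max_defer: int = 4) -> Optional[Tuple[int, int]]:
        # If the resource blocks at this slot's start position are taken,
        # buffer the reservation and retry in a subsequent slot.
        for deferred in range(max_defer):
            s = slot + deferred
            candidate = (s, start_position(pattern_bitmap, s))
            if candidate not in occupied:
                return candidate
        return None

    def blind_decode_positions(pattern_bitmap: List[int], slot: int) -> List[int]:
        # A receiving UE only needs to test the configured start position(s)
        # for the slot, bounding blind-decoding complexity.
        return [start_position(pattern_bitmap, slot)]

Because both transmitter and receiver derive the same positions from the configured pattern, the receiver's blind-decoding search space stays small even as the pattern hops.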
Alternatively, UE115-amay determine that the congestion level or the packet drop ratio, or both, are above the first threshold, and allocate resources for the packet transmission from resources dedicated to the reservation signal in the shared (or dedicated) resource pool. In further examples, UE115-amay determine that a congestion level or a packet drop ratio, or both, are above a first threshold and below a second threshold, and allocate resources for packet transmission from resources dedicated to the reservation signal in the shared (or dedicated) resource pool. In this example, the allocated resources may be reserved based in part on a reservation signal or a preceding transmission. For example, UE115-amay determine and select a set of available resources during a transmission time interval for transmitting the reservation signals215-a,215-b, where the set of available resources may follow the pre-reservation resource pattern or may be from a shared (or dedicated) resource pool. Additionally, UE115-amay determine and reserve a set of available resources during a same or different transmission time interval for the packet transmission, where the set of available resources may also follow the pre-reservation resource pattern or may be from a shared (or dedicated) resource pool. UE115-amay transmit one or more packets to base station105-a, UE115-b, or one or more other UEs (not shown) using the preceding resource reservation scheme. Hence, pre-reservation resource management in the wireless communications system200may provide benefits and enhancements to the operation of UEs115-a,115-b. For example, by enabling UEs115-a,115-bto reliably determine when to transmit a reservation signal and reserve resources for the reservation signal using resources from a same resource pool as resources for normal packet transmission, operational characteristics, such as power consumption, processor utilization, and memory usage related to packet transmission, may be reduced. The pre-reservation resource management may also provide efficiency to UEs115-a,115-bby reducing latency associated with processes related to scheduling resources for packet transmission or packet re-transmission, and more specifically avoiding unexploited resources in the wireless communications system200, by allocating resources for the reservation signal from a same resource pool associated with the transmission of the packet or a dedicated resource pool based at least in part on a pre-reservation resource pattern. FIG.3illustrates an example of a process flow300that supports pre-reservation resource management for wireless communications in accordance with one or more aspects of the present disclosure. In some examples, the process flow300may implement aspects of wireless communications systems100or200. The process flow300may include a base station105-b, a UE115-c, and a UE115-d, which may be examples of the corresponding devices described with reference toFIGS.1and2. For example, process flow300may enable the UE115-cto reliably determine when to transmit a reservation signal and reserve resources for the reservation signal using resources from a same resource pool as resources for normal packet transmission. As a result, the UE115-cmay experience improved efficiency by reducing latency associated with processes related to scheduling resources for packet transmission or packet re-transmission.
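Before turning to the process flow ofFIG.3, the banded threshold behavior and the two-step resource selection described above with reference toFIG.2may be summarized in a short sketch. The band boundaries, the names, and the resolution of the bands into three non-overlapping intervals are illustrative assumptions, not part of the disclosure.

    # Hypothetical three-band policy: below the first threshold, refrain
    # from using reservation-dedicated resources for the packet; between
    # the thresholds, use them only if reserved beforehand; above the
    # second threshold, use them for the packet transmission.
    from enum import Enum

    class Band(Enum):
        BELOW_FIRST = 0
        BETWEEN = 1
        ABOVE_SECOND = 2

    def classify(metric: float, first: float = 0.3, second: float = 0.7) -> Band:
        if metric < first:
            return Band.BELOW_FIRST
        if metric < second:
            return Band.BETWEEN
        return Band.ABOVE_SECOND

    def plan_transmission(metric: float, reservation_tti: int, packet_tti: int) -> dict:
        band = classify(metric)
        return {
            "reservation_resources": {"tti": reservation_tti},  # first set
            "packet_resources": {"tti": packet_tti},            # second set, same or later TTI
            "use_dedicated_pool_for_packet": band is not Band.BELOW_FIRST,
            "requires_prior_reservation": band is Band.BETWEEN,
        }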
In the following description of the process flow300, the operations between the base station105-b, the UE115-c, and the UE115-dmay occur in a different order than the exemplary order shown, or the operations performed by the base station105-b, the UE115-c, and the UE115-dmay be performed in different orders or at different times. Certain operations may also be omitted from the process flow300, and/or other operations may be added to the process flow300. At305, the process flow300may (optionally) commence with the base station105-band the UE115-cperforming a connection procedure (e.g., an RRC procedure, such as a cell acquisition procedure, a random access procedure, an RRC connection procedure, or an RRC (re)configuration procedure) to establish a wired or wireless connection. At310, the process flow300may (optionally) continue with the UE115-cand the UE115-dperforming a connection procedure to establish direct communication. Examples of direct communications may include, but are not limited to, D2D communications and vehicle-based communications, which may also be referred to as V2X networks, V2V networks, C-V2X networks, and the like. At315, the UE115-cmay determine a packet for transmission. At320, the UE115-cmay determine whether to transmit a reservation signal prior to the transmission of the packet. For example, to improve pre-reservation resource management, the UE115-cmay be configured to enable or disable reservation signaling. In some examples, UE115-cmay enable or disable reservation signaling according to a congestion level, which may be based in part on a packet size, a priority of a packet, a number of available resources for reservation signaling, a number of reserved resources by other UEs, a number of reserved resources for a packet, a reliability requirement of a packet (e.g., a QoS), a number of re-transmissions of a packet, or a combination thereof. UE115-cmay therefore enable or disable reservation signaling based in part on a comparison of the congestion level with a threshold. For example, if a congestion level is equal to or below a threshold, UE115-cmay enable reservation signaling. Otherwise, if the congestion level is above the threshold, UE115-cmay disable reservation signaling. At325, UE115-cmay allocate resources for the reservation signal from a same resource pool associated with the transmission of the packet or a dedicated resource pool (e.g., for reservation signaling). For example, UE115-cmay determine a set of available resources during a transmission time interval, the set of available resources following a resource pattern or the set of available resources being from the dedicated resource pool (or shared resource pool). At330, the UE115-cmay transmit signaling including the reservation signal. For example, the UE115-cmay transmit a reservation signal to the UE115-dvia direct communications (e.g., D2D). Therefore, the present disclosure may provide improvements to pre-reservation resource management. Furthermore, the techniques described herein may provide benefits and enhancements to the operation of the UEs115-c,115-d. For example, by enabling UEs115-c,115-dto reliably determine when to transmit a reservation signal and select resources for the reservation signal using resources from a same resource pool as resources for normal packet transmission, operational characteristics, such as power consumption, processor utilization, etc., related to packet transmission may be reduced.
The pre-reservation resource management may also provide efficiency to UEs115-c,115-dby reducing latency associated with processes related to scheduling resources for packet transmission or packet re-transmission, and more specifically avoiding unexploited resources in the wireless communications system by allocating resources for the reservation signal from a same resource pool associated with the transmission of the packet or a dedicated resource pool based at least in part on a pre-reservation resource pattern. For example, UEs115may improve latency when packets can be transmitted directly with a reservation signal, or improve reliability when there is pre-reservation and the packet transmission is protected from interference. FIG.4shows a block diagram400of a device405that supports pre-reservation resource management in accordance with one or more aspects of the present disclosure. The device405may be an example of aspects of a device as described herein. The device405may include a receiver410, a communications manager415, and a transmitter420. The device405may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver410may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to pre-reservation resource management, etc.). Information may be passed on to other components of the device405. The receiver410may be an example of aspects of the transceiver720described with reference toFIG.7. The receiver410may utilize a single antenna or a set of antennas. The communications manager415may determine a packet for transmission, determine whether to transmit a reservation signal prior to the transmission of the packet based on a condition, the reservation signal reserving resources for the transmission of the packet, and the reservation signal sharing resources from a same resource pool as the resources for the transmission of the packet, and refrain from transmitting the reservation signal based on determining whether to transmit the reservation signal prior to the transmission of the packet based on the condition. The communications manager415may also determine whether to transmit a reservation signal prior to a transmission of a packet based on a condition and allocate, based on the determining, resources for the reservation signal from a same resource pool associated with the transmission of the packet or a dedicated resource pool according to a pre-reservation resource pattern, the reservation signal reserving one or more resources for the transmission of the packet. The communications manager415may be an example of aspects of the communications manager710described herein. The communications manager415, or its sub-components, may be implemented in hardware, code (e.g., software or firmware) executed by a processor, or any combination thereof. If implemented in code executed by a processor, the functions of the communications manager415, or its sub-components, may be executed by a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in the present disclosure.
The communications manager415, or its sub-components, may be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations by one or more physical components. In some examples, the communications manager415, or its sub-components, may be a separate and distinct component in accordance with various aspects of the present disclosure. In some examples, the communications manager415, or its sub-components, may be combined with one or more other hardware components, including but not limited to an input/output (I/O) component, a transceiver, a network server, another computing device, one or more other components described in the present disclosure, or a combination thereof in accordance with various aspects of the present disclosure. In some examples, the communications manager415may be implemented as an integrated circuit or chipset for a mobile device modem, and the receiver410and transmitter420may be implemented as analog components (e.g., amplifiers, filters, antennas) coupled with the mobile device modem to enable wireless transmission and reception over one or more bands. The communications manager415as described herein may be implemented to realize one or more potential advantages. One implementation may allow the device405to transmit a packet directly with a reservation signal or use a pre-reservation signal to protect a packet from interference, which may result in increased processing efficiency, as the device405may improve latency in some cases while, in other cases, using a pre-reservation signal to avoid retransmissions that might otherwise be necessary. Based on techniques for efficiently exploiting potential resources in the wireless communications system as described herein, a processor of a UE115(e.g., controlling the receiver410, the transmitter420, or a transceiver720as described with respect toFIG.7) may increase system efficiency and decrease unnecessary processing at a device, which may result in increased power savings and longer battery life. The transmitter420may transmit signals generated by other components of the device405. In some examples, the transmitter420may be collocated with a receiver410in a transceiver module. For example, the transmitter420may be an example of aspects of the transceiver720described with reference toFIG.7. The transmitter420may utilize a single antenna or a set of antennas. FIG.5shows a block diagram500of a device505that supports pre-reservation resource management in accordance with one or more aspects of the present disclosure. The device505may be an example of aspects of a device405or a device115as described herein. The device505may include a receiver510, a communications manager515, and a transmitter540. The device505may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses). The receiver510may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to pre-reservation resource management, etc.). Information may be passed on to other components of the device505. The receiver510may be an example of aspects of the transceiver720described with reference toFIG.7. The receiver510may utilize a single antenna or a set of antennas. The communications manager515may be an example of aspects of the communications manager415as described herein.
The communications manager515may include a packet component520, a condition component525, a signal component530, and a resource component535. The communications manager515may be an example of aspects of the communications manager710described herein. The packet component520may determine a packet for transmission. The condition component525may determine whether to transmit a reservation signal prior to the transmission of the packet based on a condition, the reservation signal reserving resources for the transmission of the packet, and the reservation signal sharing resources from a same resource pool as the resources for the transmission of the packet. The signal component530may refrain from transmitting the reservation signal based on determining whether to transmit the reservation signal prior to the transmission of the packet based on the condition. The condition component525may determine whether to transmit a reservation signal prior to a transmission of a packet based on a condition. The resource component535may allocate, based on the determining, resources for the reservation signal from a same resource pool associated with the transmission of the packet or a dedicated resource pool according to a pre-reservation resource pattern, the reservation signal reserving one or more resources for the transmission of the packet. The transmitter540may transmit signals generated by other components of the device505. In some examples, the transmitter540may be collocated with a receiver510in a transceiver module. For example, the transmitter540may be an example of aspects of the transceiver720described with reference toFIG.7. The transmitter540may utilize a single antenna or a set of antennas. FIG.6shows a block diagram600of a communications manager605that supports pre-reservation resource management in accordance with one or more aspects of the present disclosure. The communications manager605may be an example of aspects of a communications manager415, a communications manager515, or a communications manager710described herein. The communications manager605may include a packet component610, a condition component615, a signal component620, a threshold component625, a resource component630, and a mapping component635. Each of these modules may communicate, directly or indirectly, with one another (e.g., via one or more buses). The packet component610may determine a packet for transmission. The condition component615may determine whether to transmit a reservation signal prior to the transmission of the packet based on a condition, the reservation signal reserving resources for the transmission of the packet, and the reservation signal sharing resources from a same resource pool as the resources for the transmission of the packet. In some examples, the condition component615may determine whether to transmit a reservation signal prior to a transmission of a packet based on a condition. The condition component615may determine a congestion level related to traffic load in the wireless communications system, where the congestion level is based on a resource unavailability, a packet size, or a packet priority, or a combination thereof. 
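By way of illustration only, the division of labor among these components may be sketched as plain Python classes; the class and method names, and the single-threshold condition, are hypothetical and do not appear in the disclosure, which describes functional components rather than code.

    # Hypothetical decomposition mirroring the packet, condition, resource,
    # and signal components described above.
    class PacketComponent:
        def next_packet(self, queue):
            # Determine a packet for transmission.
            return queue[0] if queue else None

    class ConditionComponent:
        def should_reserve(self, congestion_level, threshold=0.5):
            # Determine whether to transmit a reservation signal
            # prior to the packet, based on a condition.
            return congestion_level <= threshold

    class ResourceComponent:
        def allocate(self, shared_pool):
            # Allocate resources for the reservation signal from the same
            # pool as packet transmissions (or from a dedicated pool).
            return shared_pool.pop(0) if shared_pool else None

    class SignalComponent:
        def maybe_transmit(self, reserve, resource):
            # Transmit the reservation signal, or refrain, per the decision.
            if reserve and resource is not None:
                return f"reservation sent on {resource}"
            return "refrained from reservation signaling"

    # Illustrative wiring of the components:
    packet = PacketComponent().next_packet(["pkt0"])
    decision = ConditionComponent().should_reserve(congestion_level=0.4)
    resource = ResourceComponent().allocate(shared_pool=[(0, 1)])
    outcome = SignalComponent().maybe_transmit(decision, resource)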
In some examples, the condition component615may monitor a packet drop ratio by the device in the wireless communications system, where the packet drop ratio is based on a resource unavailability, a packet size, or a packet priority, or a combination thereof, and where refraining from transmitting the reservation signal is based on the congestion level or the packet drop ratio, or a combination thereof. In some examples, the condition component615may determine that the congestion level or the packet drop ratio, or both, are below a first threshold. In some examples, the condition component615may determine that the congestion level or the packet drop ratio, or both, are above a first threshold and below a second threshold. In some examples, the condition component615may determine that the congestion level or the packet drop ratio, or both, are above a first threshold. The signal component620may refrain from transmitting the reservation signal based on the determining. In some examples, the signal component620may disable the transmission of the reservation signal prior to the transmission of the packet based on the congestion level or the packet drop ratio, or both, satisfying the threshold, where refraining from transmitting the reservation signal is based on the disabling. In some examples, the signal component620may include, in the reservation signal, information associated with the second set of available resources for the transmission of the packet. In some examples, the signal component620may transmit the reservation signal based on enabling the reservation signal. In some examples, the signal component620may refrain from transmitting the reservation signal based on disabling the reservation signal. In some examples, the signal component620may determine to perform the transmission of the packet using resources from the dedicated resource pool based on disabling the reservation signal. The resource component630may allocate, based on the determining, resources for the reservation signal from a same resource pool associated with the transmission of the packet or a dedicated resource pool according to a pre-reservation resource pattern, the reservation signal reserving one or more resources for the transmission of the packet. In some examples, the resource component630may determine a first set of available resources during a transmission time interval, the first set of available resources following the pre-reservation resource pattern or the first set of available resources being from the dedicated resource pool. The resource component630may select the first set of available resources to transmit the reservation signal during the transmission time interval and prior to the transmission of the packet. In some examples, the resource component630may determine a second set of available resources during the transmission time interval or a subsequent transmission time interval. The resource component630may reserve the second set of available resources for the transmission of the packet. In some examples, the resource component630may determine an absence of available resources during a transmission time interval. In some examples, the resource component630may determine available resources during a subsequent transmission time interval, the available resources following the pre-reservation resource pattern, and the available resources being from the dedicated resource pool.
The resource component630may select the available resources to transmit the reservation signal during the transmission time interval and prior to the transmission of the packet. In some examples, the resource component630may allocate resources for the transmission of the packet from the dedicated resource pool associated with the reservation signal based on the congestion level or the packet drop ratio, or both, being above the first threshold and below the second threshold, where the allocated resources are reserved based on a reservation signal or a preceding transmission. In some examples, the resource component630may allocate resources for the transmission of the packet from the dedicated resource pool associated with the reservation signal based on the congestion level or the packet drop ratio, or both, being above the first threshold. The threshold component625may determine that the congestion level or the packet drop ratio, or both, satisfy a threshold. The mapping component635may map the congestion level to a congestion index value in a table including a set of congestion index values, where each congestion index value correlates to a packet size, a QoS requirement, a packet priority, or a combination thereof. In some examples, the mapping component635may determine to enable the reservation signal based on the congestion index value. In some examples, the mapping component635may determine to disable the reservation signal based on the congestion index value. FIG.7shows a diagram of a system700including a device705that supports pre-reservation resource management in accordance with one or more aspects of the present disclosure. The device705may be an example of or include the components of device405, device505, or a device as described herein. The device705may include components for bi-directional voice and data communications including components for transmitting and receiving communications, including a communications manager710, an I/O controller715, a transceiver720, an antenna725, memory730, and a processor740. These components may be in electronic communication via one or more buses (e.g., bus745). The communications manager710may determine a packet for transmission, determine whether to transmit a reservation signal prior to the transmission of the packet based on a condition, the reservation signal reserving resources for the transmission of the packet, and the reservation signal sharing resources from a same resource pool as the resources for the transmission of the packet, and refrain from transmitting the reservation signal based on determining whether to transmit the reservation signal prior to the transmission of the packet based on the condition. The communications manager710may also determine whether to transmit a reservation signal prior to a transmission of a packet based on a condition and allocate, based on the determining, resources for the reservation signal from a same resource pool associated with the transmission of the packet or a dedicated resource pool according to a pre-reservation resource pattern, the reservation signal reserving one or more resources for the transmission of the packet. The I/O controller715may manage input and output signals for the device705. The I/O controller715may also manage peripherals not integrated into the device705. In some cases, the I/O controller715may represent a physical connection or port to an external peripheral. 
In some cases, the I/O controller715may utilize an operating system such as iOS, ANDROID, MS-DOS, MS-WINDOWS, OS/2, UNIX, LINUX, or another known operating system. In other cases, the I/O controller715may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller715may be implemented as part of a processor. In some cases, a user may interact with the device705via the I/O controller715or via hardware components controlled by the I/O controller715. The transceiver720may communicate bi-directionally, via one or more antennas, wired, or wireless links as described herein. For example, the transceiver720may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver720may also include a modem to modulate the packets and provide the modulated packets to the antennas for transmission, and to demodulate packets received from the antennas. In some cases, the device705may include a single antenna725. However, in some cases the device705may have more than one antenna725, which may be capable of concurrently transmitting or receiving multiple wireless transmissions. The memory730may include random-access memory (RAM) and read-only memory (ROM). The memory730may store computer-readable, computer-executable code735including instructions that, when executed, cause the processor to perform various functions described herein. In some cases, the memory730may contain, among other things, a basic input/output system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices. The code735may include instructions to implement aspects of the present disclosure, including instructions to support wireless communications. The code735may be stored in a non-transitory computer-readable medium such as system memory or other type of memory. In some cases, the code735may not be directly executable by the processor740but may cause a computer (e.g., when compiled and executed) to perform functions described herein. The processor740may include an intelligent hardware device, (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, a FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor740may be configured to operate a memory array using a memory controller. In other cases, a memory controller may be integrated into the processor740. The processor740may be configured to execute computer-readable instructions stored in a memory (e.g., the memory730) to cause the device705to perform various functions (e.g., functions or tasks supporting pre-reservation resource management). FIG.8shows a flowchart illustrating a method800that supports pre-reservation resource management in accordance with one or more aspects of the present disclosure. The operations of method800may be implemented by a device or its components as described herein. For example, the operations of method800may be performed by a communications manager as described with reference toFIGS.4through7. In some examples, a device may execute a set of instructions to control the functional elements of the device to perform the functions described herein. Additionally or alternatively, a device may perform aspects of the functions described herein using special-purpose hardware. At805, the device may determine a packet for transmission. 
The operations of805may be performed according to the methods described herein. In some examples, aspects of the operations of805may be performed by a packet component as described with reference toFIGS.4through7. At810, the device may determine whether to transmit a reservation signal prior to the transmission of the packet based on a condition, the reservation signal reserving resources for the transmission of the packet, and the reservation signal sharing resources from a same resource pool as the resources for the transmission of the packet. The operations of810may be performed according to the methods described herein. In some examples, aspects of the operations of810may be performed by a condition component as described with reference toFIGS.4through7. At815, the device may refrain from transmitting the reservation signal based on determining whether to transmit the reservation signal prior to the transmission of the packet based on the condition. The operations of815may be performed according to the methods described herein. In some examples, aspects of the operations of815may be performed by a signal component as described with reference toFIGS.4through7. FIG.9shows a flowchart illustrating a method900that supports pre-reservation resource management in accordance with one or more aspects of the present disclosure. The operations of method900may be implemented by a device or its components as described herein. For example, the operations of method900may be performed by a communications manager as described with reference toFIGS.4through7. In some examples, a device may execute a set of instructions to control the functional elements of the device to perform the functions described herein. Additionally or alternatively, a device may perform aspects of the functions described herein using special-purpose hardware. At905, the device may determine a packet for transmission. The operations of905may be performed according to the methods described herein. In some examples, aspects of the operations of905may be performed by a packet component as described with reference toFIGS.4through7. At910, the device may optionally determine a congestion level related to traffic load in a wireless communications system, where the congestion level is based on a resource unavailability, a packet size, or a packet priority, or a combination thereof. The operations of910may be performed according to the methods described herein. In some examples, aspects of the operations of910may be performed by a condition component as described with reference toFIGS.4through7. At915, the device may optionally monitor a packet drop ratio by the device in the wireless communications system, where the packet drop ratio is based on a resource unavailability, a packet size, or a packet priority, or a combination thereof. The operations of915may be performed according to the methods described herein. In some examples, aspects of the operations of915may be performed by a condition component as described with reference toFIGS.4through7. At920, the device may determine that the congestion level or the packet drop ratio, or both, satisfy a threshold. The operations of920may be performed according to the methods described herein. In some examples, aspects of the operations of920may be performed by a threshold component as described with reference toFIGS.4through7. 
At925, the device may disable the transmission of the reservation signal prior to the transmission of the packet based on the congestion level or the packet drop ratio, or both, satisfying the threshold. The operations of925may be performed according to the methods described herein. In some examples, aspects of the operations of925may be performed by a signal component as described with reference toFIGS.4through7. At930, the device may refrain from transmitting the reservation signal based on the disabling. The operations of930may be performed according to the methods described herein. In some examples, aspects of the operations of930may be performed by a signal component as described with reference toFIGS.4through7. FIG.10shows a flowchart illustrating a method1000that supports pre-reservation resource management in accordance with one or more aspects of the present disclosure. The operations of method1000may be implemented by a device or its components as described herein. For example, the operations of method1000may be performed by a communications manager as described with reference toFIGS.4through7. In some examples, a device may execute a set of instructions to control the functional elements of the device to perform the functions described herein. Additionally or alternatively, a device may perform aspects of the functions described herein using special-purpose hardware. At1005, the device may determine whether to transmit a reservation signal prior to a transmission of a packet based on a condition. The operations of1005may be performed according to the methods described herein. In some examples, aspects of the operations of1005may be performed by a condition component as described with reference toFIGS.4through7. At1010, the device may allocate, based on the determining, resources for the reservation signal from a same resource pool associated with the transmission of the packet or a dedicated resource pool according to a pre-reservation resource pattern, the reservation signal reserving one or more resources for the transmission of the packet. The operations of1010may be performed according to the methods described herein. In some examples, aspects of the operations of1010may be performed by a resource component as described with reference toFIGS.4through7. FIG.11shows a flowchart illustrating a method1100that supports pre-reservation resource management in accordance with one or more aspects of the present disclosure. The operations of method1100may be implemented by a device or its components as described herein. For example, the operations of method1100may be performed by a communications manager as described with reference toFIGS.4through7. In some examples, a device may execute a set of instructions to control the functional elements of the device to perform the functions described herein. Additionally or alternatively, a device may perform aspects of the functions described herein using special-purpose hardware. At1105, the device may determine a congestion level related to traffic load in a wireless communications system, where the congestion level is based on a resource unavailability, a packet size, or a packet priority, or a combination thereof. The operations of1105may be performed according to the methods described herein. In some examples, aspects of the operations of1105may be performed by a condition component as described with reference toFIGS.4through7. 
At1110, the device may optionally monitor a packet drop ratio by the device in the wireless communications system, where the packet drop ratio is based on the resource unavailability, the packet size, or the packet priority, or a combination thereof. The operations of1110may be performed according to the methods described herein. In some examples, aspects of the operations of1110may be performed by a condition component as described with reference toFIGS.4through7. At1115, the device may map the congestion level to a congestion index value in a table including a set of congestion index values, where each congestion index value correlates to a packet size, a QoS requirement, a packet priority, or a combination thereof. The operations of1115may be performed according to the methods described herein. In some examples, aspects of the operations of1115may be performed by a mapping component as described with reference toFIGS.4through7. At1120, the device may determine to enable the reservation signal based on the congestion index value. The operations of1120may be performed according to the methods described herein. In some examples, aspects of the operations of1120may be performed by a mapping component as described with reference toFIGS.4through7. At1125, the device may allocate, based on the determining, resources for the reservation signal from a same resource pool associated with the transmission of the packet or a dedicated resource pool according to a pre-reservation resource pattern, the reservation signal reserving one or more resources for the transmission of the packet. The operations of1125may be performed according to the methods described herein. In some examples, aspects of the operations of1125may be performed by a resource component as described with reference toFIGS.4through7. At1130, the device may transmit the reservation signal based on the allocation. The operations of1130may be performed according to the methods described herein. In some examples, aspects of the operations of1130may be performed by a signal component as described with reference toFIGS.4through7. FIG.12shows a flowchart illustrating a method1200that supports pre-reservation resource management in accordance with one or more aspects of the present disclosure. The operations of method1200may be implemented by a device or its components as described herein. For example, the operations of method1200may be performed by a communications manager as described with reference toFIGS.4through7. In some examples, a device may execute a set of instructions to control the functional elements of the device to perform the functions described herein. Additionally or alternatively, a device may perform aspects of the functions described herein using special-purpose hardware. At1205, the device may determine a congestion level related to traffic load in a wireless communications system, where the congestion level is based on a resource unavailability, a packet size, or a packet priority, or a combination thereof. The operations of1205may be performed according to the methods described herein. In some examples, aspects of the operations of1205may be performed by a condition component as described with reference toFIGS.4through7. At1210, the device may optionally monitor a packet drop ratio by the device in the wireless communications system, where the packet drop ratio is based on the resource unavailability, the packet size, or the packet priority, or a combination thereof. 
The operations of1210may be performed according to the methods described herein. In some examples, aspects of the operations of1210may be performed by a condition component as described with reference toFIGS.4through7. At1215, the device may map the congestion level to a congestion index value in a table including a set of congestion index values, where each congestion index value correlates to a packet size, a QoS requirement, a packet priority, or a combination thereof. The operations of1215may be performed according to the methods described herein. In some examples, aspects of the operations of1215may be performed by a mapping component as described with reference toFIGS.4through7. At1220, the device may determine to disable the reservation signal based on the congestion index value. The operations of1220may be performed according to the methods described herein. In some examples, aspects of the operations of1220may be performed by a mapping component as described with reference toFIGS.4through7. At1225, the device may refrain from transmitting the reservation signal based on disabling the reservation signal. The operations of1225may be performed according to the methods described herein. In some examples, aspects of the operations of1225may be performed by a signal component as described with reference toFIGS.4through7. It should be noted that the methods described herein describe possible implementations, and that the operations may be rearranged or otherwise modified and that other implementations are possible. Further, aspects from two or more of the methods may be combined. Example 1 A method for wireless communication, comprising: determining a packet for transmission; determining whether to transmit a reservation signal prior to the transmission of the packet based on a condition, the reservation signal reserving resources for the transmission of the packet, and the reservation signal sharing resources from a same resource pool as the resources for the transmission of the packet; and refraining from transmitting the reservation signal based on determining whether to transmit the reservation signal prior to the transmission of the packet. Example 2 The method of example 1, further comprising: determining a congestion level related to traffic load in the wireless communications system, wherein the congestion level is based on a resource unavailability, a packet size, or a packet priority, or a combination thereof; and monitoring a packet drop ratio by the device in the wireless communications system, wherein the packet drop ratio is based on a resource unavailability, a packet size, or a packet priority, or a combination thereof, wherein refraining from transmitting the reservation signal is based on the congestion level or the packet drop ratio, or a combination thereof. Example 3 The method of example 2, further comprising: determining that the congestion level or the packet drop ratio, or both, satisfy a threshold; and disabling the transmission of the reservation signal prior to the transmission of the packet based on the congestion level or the packet drop ratio, or both, satisfying the threshold, wherein refraining from transmitting the reservation signal is based on the disabling. 
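Before turning to the second family of examples, the method of examples 1 to 3 may be condensed into a brief sketch; the function name and the shared threshold value are invented for illustration.

    # Hypothetical end-to-end sketch of examples 1 to 3: determine a
    # packet, evaluate the condition, and refrain from transmitting the
    # reservation signal when the condition satisfies the threshold.
    def method_of_examples_1_to_3(queue, congestion_level, packet_drop_ratio,
                                  threshold=0.5):
        packet = queue[0] if queue else None      # determine a packet (example 1)
        if packet is None:
            return "no packet to transmit"
        disabled = (congestion_level >= threshold or
                    packet_drop_ratio >= threshold)  # condition (examples 2 and 3)
        if disabled:
            return f"transmit {packet} without a reservation signal"  # refrain
        return f"reserve shared-pool resources, then transmit {packet}"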
Example 4 A method of wireless communication, comprising: determining whether to transmit a reservation signal prior to a transmission of a packet based on a condition; and allocating, based on the determining, resources for the reservation signal from a same resource pool associated with the transmission of the packet or a dedicated resource pool according to a pre-reservation resource pattern, the reservation signal reserving one or more resources for the transmission of the packet. Example 5 The method of example 4, further comprising: determining a first set of available resources during a transmission time interval, the first set of available resources following the pre-reservation resource pattern or the first set of available resources being from the dedicated resource pool; and selecting the first set of available resources to transmit the reservation signal during the transmission time interval and prior to the transmission of the packet. Example 6 The method of any of examples 4 or 5, further comprising: determining a second set of available resources during the transmission time interval or a subsequent transmission time interval; reserving the second set of available resources for the transmission of the packet; and including, in the reservation signal, information associated with the second set of available resources for the transmission of the packet. Example 7 The method of any of examples 4 to 6, further comprising: determining an absence of available resources during a transmission time interval; determining available resources during a subsequent transmission time interval, the available resources following the pre-reservation resource pattern, and the available resources being from the dedicated resource pool; and selecting the available resources to transmit the reservation signal during the transmission time interval and prior to the transmission of the packet. Example 8 The method of example 7, further comprising: determining a second set of available resources during the transmission time interval or a subsequent transmission time interval; reserving the second set of available resources for the transmission of the packet; and including, in the reservation signal, information associated with the second set of available resources for the transmission of the packet. Example 9 The method of any of examples 4 to 8, further comprising: determining a congestion level related to traffic load in the wireless communications system, wherein the congestion level is based at least in part on a resource unavailability, a packet size, or a packet priority, or a combination thereof; and monitoring a packet drop ratio by the device in the wireless communications system, wherein the packet drop ratio is based on the resource unavailability, the packet size, or the packet priority, or a combination thereof. Example 10 The method of example 9, further comprising: mapping the congestion level to a congestion index value in a table comprising a set of congestion index values, wherein each congestion index value correlates to a packet size, a QoS requirement, a packet priority, or a combination thereof; determining to enable the reservation signal based on the congestion index value; and transmitting the reservation signal based on enabling the reservation signal.
Example 11 The method of example 9, further comprising: mapping the congestion level to a congestion index value in a table comprising a set of congestion index values, wherein each congestion index value correlates to a packet size, a QoS requirement, a packet priority, or a combination thereof; determining to disable the reservation signal based on the congestion index value; and refraining from transmitting the reservation signal based on disabling the reservation signal. Example 12 The method of any of examples 9 to 11, further comprising: determining to perform the transmission of the packet using resources from the dedicated resource pool based on disabling the reservation signal, wherein the dedicated resource pool comprises one or more resources dedicated for pre-reservation associated with the reservation signal. Example 13 The method of example 9, further comprising: determining that the congestion level or the packet drop ratio, or both, are below a first threshold; and refraining from allocating one or more resources for the transmission of the packet from the dedicated resource pool associated with the reservation signal based on the congestion level or the packet drop ratio, or both, being below the first threshold. Example 14 The method of example 9, further comprising: determining that the congestion level or the packet drop ratio, or both, are above a first threshold and below a second threshold; and allocating resources for the transmission of the packet from the dedicated resource pool associated with the reservation signal based on the congestion level or the packet drop ratio, or both, being above the first threshold and below the second threshold, wherein the allocated resources are reserved based on a reservation signal or a preceding transmission. Example 15 The method of example 9, further comprising: determining that the congestion level or the packet drop ratio, or both, are above a first threshold; and allocating resources for the transmission of the packet from the dedicated resource pool associated with the reservation signal based on the congestion level or the packet drop ratio, or both, being above the first threshold. Example 16 An apparatus for wireless communications comprising a processor, memory coupled to the processor, the processor and memory configured to perform a method of any of examples 1 to 3. Example 17 An apparatus for wireless communications comprising a processor, memory coupled to the processor, the processor and memory configured to perform a method of any of examples 4 to 15. Example 18 An apparatus comprising at least one means for performing a method of any of examples 1 to 3. Example 19 An apparatus comprising at least one means for performing a method of any of examples 4 to 15. Example 20 A non-transitory computer-readable medium storing code for wireless communications, the code comprising instructions executable by a processor to perform a method of any of examples 1 to 3. Example 21 A non-transitory computer-readable medium storing code for wireless communications, the code comprising instructions executable by a processor to perform a method of any of examples 4 to 15. Techniques described herein may be used for various wireless communications systems such as CDMA, TDMA, FDMA, OFDMA, single carrier frequency division multiple access (SC-FDMA), and other systems. A CDMA system may implement a radio technology such as CDMA2000, Universal Terrestrial Radio Access (UTRA), etc. CDMA2000 covers IS-2000, IS-95, and IS-856 standards.
IS-2000 Releases may be commonly referred to as CDMA2000 1×, 1×, etc. IS-856 (TIA-856) is commonly referred to as CDMA2000 1×EV-DO, High Rate Packet Data (HRPD), etc. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA. A TDMA system may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA system may implement a radio technology such as Ultra Mobile Broadband (UMB), E-UTRA, Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, etc. UTRA and E-UTRA are part of Universal Mobile Telecommunications System (UMTS). LTE, LTE-A, and LTE-A Pro are releases of UMTS that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A, LTE-A Pro, NR, and GSM are described in documents from the organization named “3rd Generation Partnership Project” (3GPP). CDMA2000 and UMB are described in documents from an organization named “3rd Generation Partnership Project 2” (3GPP2). The techniques described herein may be used for the systems and radio technologies mentioned herein as well as other systems and radio technologies. While aspects of an LTE, LTE-A, LTE-A Pro, or NR system may be described for purposes of example, and LTE, LTE-A, LTE-A Pro, or NR terminology may be used in much of the description, the techniques described herein are applicable beyond LTE, LTE-A, LTE-A Pro, or NR applications. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscriptions with the network provider. A small cell may be associated with a lower-powered base station, as compared with a macro cell, and a small cell may operate in the same or different (e.g., licensed, unlicensed, etc.) frequency bands as macro cells. Small cells may include pico cells, femto cells, and micro cells according to various examples. A pico cell, for example, may cover a small geographic area and may allow unrestricted access by UEs with service subscriptions with the network provider. A femto cell may also cover a small geographic area (e.g., a home) and may provide restricted access by UEs having an association with the femto cell (e.g., UEs in a closed subscriber group (CSG), UEs for users in the home, and the like). An eNB for a macro cell may be referred to as a macro eNB. An eNB for a small cell may be referred to as a small cell eNB, a pico eNB, a femto eNB, or a home eNB. An eNB may support one or multiple (e.g., two, three, four, and the like) cells, and may also support communications using one or multiple component carriers. A gNB for a macro cell may be referred to as a macro gNB. A gNB for a small cell may be referred to as a small cell gNB, a pico gNB, a femto gNB, or a home gNB. A gNB may support one or multiple (e.g., two, three, four, and the like) cells (e.g., component carriers). The wireless communications systems described herein may support synchronous or asynchronous operation. For synchronous operation, the base stations may have similar frame timing, and transmissions from different base stations may be approximately aligned in time. For asynchronous operation, the base stations may have different frame timing, and transmissions from different base stations may not be aligned in time. The techniques described herein may be used for either synchronous or asynchronous operations. Information and signals described herein may be represented using any of a variety of different technologies and techniques. 
For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA, or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable ROM (EEPROM), flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media. 
As used herein, including in the claims, “or” as used in a list of items (e.g., a list of items prefaced by a phrase such as “at least one of” or “one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary operation that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label, or other subsequent reference label. The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term “exemplary” used herein means “serving as an example, instance, or illustration,” and not “preferred” or “advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples. The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
It should be appreciated by those skilled in the art that any block diagram herein represents conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and executed by a computer or processor, whether or not such computer or processor is explicitly shown. DETAILED DESCRIPTION In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed; on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure. The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by "comprises . . . a" does not, without more constraints, preclude the existence of other elements or additional elements in the system or apparatus. FIG.1Aillustrates an existing Voice over Long-Term Evolution (VoLTE) architecture100for a VoLTE communication between a User Agent Client (UAC)101and a User Agent Server (UAS)103over an IP Multimedia System (IMS) network102. The IMS network102is a standardised architectural framework for delivering Internet Protocol (IP) multimedia services. The IMS network102uses Session Initiation Protocol (SIP) as a core protocol. SIP is a signaling protocol used for initiating, maintaining, and terminating real-time sessions that include voice, video, messaging applications, and the like. The UAC101is an entity that sends a SIP request for initiating the session and receives a SIP response. For example, the UAC101is a terminal associated with a sender in the VoLTE communication. The UAS103is an entity that receives the SIP request and sends the SIP response. For example, the UAS103is a terminal associated with a receiver in the VoLTE communication. The Proxy Call Session Control Function (P-CSCF) acts as the end user's (also referred to as the UAC101) entry point to the IMS network102for registration and call setup signaling messages. The IMS Application-Level Gateway (ALG) is part of the P-CSCF and provides control of the IMS Access Gateway (AGW) for media plane handling, monitoring and management of media flows, and the like. The Serving Call Session Control Function (S-CSCF) acts as a SIP server in the IMS network102. The S-CSCF provides session management functionality and routes SIP messages between various endpoints. The Home Subscriber Server (HSS) stores the subscriber's IMS profile database, including security parameters located in the end user's Subscriber Identity Module (SIM). 
The Interrogating Call Session Control Function (I-CSCF) interrogates the HSS and routes the SIP requests to the appropriate S-CSCF. The Domain Name System (DNS) assigns domain names and maps the names to Internet resources by designating authoritative name servers for each domain. A Telephony Application Server (TAS) is a component used in the core network of a telecom network operator to provide telephony applications and additional multimedia functions. The IP Short Message Gateway (IP-SMGW) is an IMS Application Server which handles SIP-based messaging services for IMS subscribers. FIG.1Bshows a flow diagram illustrating an existing VoLTE call signaling flow in a VoLTE communication network. The VoLTE architecture comprises a Long-Term Evolution (LTE)/Evolved Packet Core (EPC) network104and the IMS server102. The LTE network104includes an eNodeB, a Mobility Management Entity (MME)/Serving Gateway (SGW), and a Packet data network Gateway (PGW). The IMS server102includes the TAS, the P-CSCF, and the S-CSCF. The UAC101initiates the VoLTE call to communicate with the UAS103by sending a SIP INVITE. The UAS103responds to the UAC101by sending a183Session Progress notification through the Policy and Charging Rules Function (PCRF) of the LTE network104for setting up the bearer for voice media. Furthermore, the UAS103sends a Provisional Response Acknowledgement (PRACK) along with a 200 OK notification of the selected codec to the UAC101. Upon completion of this process, the UAS103starts ringing and sends a notification confirming completion of setting up the bearer for voice media. The UAS103answers the call and sends the 200 OK acknowledgement notification, by which the session gets established. The Real Time Transport Protocol (RTP) codes multimedia data streams such as audio or video, divides them into packets, and transmits them over the IMS network102. 200 OK is the response generated soon after a receiver at the UAS103answers the call. The RTP packets (conversations) start flowing from both ends. In the figures and corresponding description of the present invention, the VoLTE communication network is considered as an example to explain voice communication networks, for understanding purposes only, and this should not be considered as limiting. The present disclosure is applicable to other voice communication networks, such as Voice over New Radio (VoNR), with a similar implementation in which voice is packetized over Internet Protocol. Embodiments of the present disclosure relate to a method of preventing call drop in a voice communication network, a terminal, and a server. A session is established for a voice call initiated between a first terminal and a second terminal. Media inactivity timers monitor inactivity of packet transmission when the voice call is active. The inactivity of packet transmission in a low-signal area causes a timeout of the media inactivity timers. This leads to a call drop. The present disclosure overcomes the problem by notifying the first and second terminals that the first terminal is entering the low-signal area. The media inactivity timers are disabled to avoid a timeout, thereby retaining the session, when a user associated with the first terminal is in the low-signal area. Network parameters associated with the first terminal are stored. Further, the first and second terminals are notified when the first terminal is exiting the low-signal area. The voice call is resumed on the same session using the stored network parameters. Hence, the user can control the call status in the low-signal area, to avoid the call drop. 
With such a pause-resume kind of feature, the user experience is boosted. Further, network resources are saved since the voice call is resumed on the same session. Also, the network parameters are stored to avoid re-configuration of the network parameters when the user exits the low-signal area. When the session is released, the voice call is re-initiated on a new session based on the stored network parameters. This ensures that the call is not dropped even when the session is released. Hence, the user can continue the call when the user exits the low-signal area, and the experience is further enhanced. Further, this avoids the users associated with the first terminal and the second terminal calling each other at the same time when re-attempting the call, which can cause the call to fail permanently. Further embodiments of the present disclosure relate to a method of preventing call drop in a voice communication network, in which a first terminal supports dual Subscriber Identity Modules (SIMs). A first call is initiated between the first terminal and a second terminal. The first terminal may detect a second call on the second SIM from a third terminal. Parallel active calls, or an active call and an internet data session, on two different SIMs may lead to degradation of call quality, and the call may eventually get dropped. In the present disclosure, one or more media inactivity timers associated with the first call are disabled when the second call is detected on the second SIM. Furthermore, the network parameters associated with the first call are stored. The first call is resumed or re-initiated on the existing session or a new session, respectively, through quick service recovery using the stored network parameters. FIG.2illustrates an exemplary environment200for preventing call drop in a voice communication network, in accordance with some embodiments of the present disclosure. The exemplary environment200comprises a first terminal201, a server203, and a second terminal202. The first terminal201may refer to a User Equipment (UE) associated with a sender. The sender may be a user initiating a VoLTE call to a receiver. The first terminal201is also referred to as the UAC in the present description. The second terminal202may refer to a UE associated with the receiver. The receiver may be a user receiving the call from the sender. The second terminal202is also referred to as the UAS in the present description. The server203is an IMS server. The server203uses SIP for initiating, maintaining, and terminating sessions for the voice call between the first terminal201and the second terminal202. The call initiated between the first terminal201and the second terminal202may be a voice call, a video call, and the like. The first terminal201and the second terminal202may be handheld devices, such as smartphones, associated with the sender and the receiver, respectively. The first terminal201and the second terminal202may be any computing device such as a laptop computer, a desktop computer, a Personal Computer (PC), a notebook, a smartphone, a tablet, an e-book reader, a server, a network server, a cloud-based server, and the like. The terminologies associated with the VoLTE communication network in the present description are used for understanding purposes only and should not be considered as limiting. The voice call between the first terminal201and the second terminal202may drop due to various reasons. 
The call drop is defined as the fraction of calls which, due to technical reasons, were cut off before the users had finished their conversation and before one of them had hung up. The present disclosure relates to preventing the call drop in the voice communication network. FIG.3shows a flow diagram300illustrating a call drop scenario in the voice communication network, in accordance with some embodiments of the present disclosure. InFIG.3, VoLTE communication is considered as an example. In step1, the VoLTE call is active between the first terminal201and the second terminal202through the IMS server203. The IMS server203has established a session for the VoLTE call between the first terminal201and the second terminal202. The VoLTE call initiated between the first terminal201and the second terminal202may be a voice call, a video call, and the like. In step2, RTP packets are flowing from the first terminal201to the second terminal202on the Uplink (UL) channel. RTP stands for Real-Time Transport Protocol, used to code multimedia data streams such as audio or video, divide them into packets, and transmit them over an IP network. At the transport level, RTP typically uses the connectionless User Datagram Protocol (UDP). The RTP packets are flowing from the second terminal202to the first terminal201on the Downlink (DL) channel. The RTP packets are coded multimedia packets such as voice/video packets, i.e., the first terminal201and the second terminal202are exchanging voice/video packets. In step3, a first user (sender) associated with the first terminal201enters a low-signal area such as an elevator, a basement, an underground area, and the like when the VoLTE call is active or in progress. The transmission and reception of the RTP packets fail due to non-availability of good signal conditions as shown in steps4and5. The RTP packets are not received at the second terminal202. The media inactivity timer associated with the second terminal202is activated. For example, the media inactivity timer is an RTP timer. The media inactivity timer is used to indicate that the RTP packets have stopped flowing for a configured amount of time or that silence suppression packets are not sent. When the RTP packets stop flowing for the configured amount of time, an RTP timeout occurs. In an example, the RTP timeout may occur after 10 seconds or 20 seconds; a minimal watchdog sketch of this timer is given after this passage. Since the RTP packets are not received at the second terminal202, the RTP timeout occurs at the second terminal202as shown in step6. Similarly, the RTP packets are not received at the first terminal201, which causes the RTP timeout at the first terminal201. This leads to a call drop when the first user associated with the first terminal201enters the low-signal area, i.e., the first terminal201is in an Out-of-Service (OOS) area. Further, the first user may continue to attempt the call by transmitting the SIP INVITE to the second terminal202, which fails due to non-availability of good signal as shown in step9and step10. While the first terminal201is in a no-service state, the first terminal201reinitiates protocol procedures to recover the lost signal as shown in step11. Further, recovery depends on network conditions and the availability of a different Radio Access Technology (RAT). As part of recovery, the first terminal201may be searching for a network on different RATs and frequencies. Also, during recovery, if the user exits the elevator, any call attempt by the first user will also lead to failure, as the UE has not yet recovered. The said operations are shown in step12and step13. 
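The media-inactivity timeout described above can be pictured with a small sketch. The following is a minimal, illustrative Python watchdog, assuming a simple polling loop; the class name, callback, and the 10-20 second window are drawn from the example values in the text, not from any actual UE implementation.

```python
# Minimal sketch of an RTP media-inactivity watchdog; names are illustrative.
import time
import threading

class RtpInactivityTimer:
    """Fires a timeout callback when no RTP packet arrives for `timeout_s`."""

    def __init__(self, timeout_s: float, on_timeout):
        self.timeout_s = timeout_s          # e.g. 10 or 20 seconds, per the text
        self.on_timeout = on_timeout        # invoked once when media stops flowing
        self._last_packet = time.monotonic()
        self._enabled = True
        self._lock = threading.Lock()

    def packet_received(self):
        """Call on every incoming RTP packet to reset the inactivity window."""
        with self._lock:
            self._last_packet = time.monotonic()

    def disable(self):
        """Suspend timeout detection (e.g. while the peer is in a low-signal area)."""
        with self._lock:
            self._enabled = False

    def enable(self):
        with self._lock:
            self._enabled = True
            self._last_packet = time.monotonic()

    def poll(self):
        """Periodic check; returns True if a timeout fired."""
        with self._lock:
            if self._enabled and time.monotonic() - self._last_packet > self.timeout_s:
                self._enabled = False       # fire once
                self.on_timeout()
                return True
        return False
```

In this sketch a timeout that fires while the timer is enabled corresponds to the RTP timeout of steps6-8, which is exactly the event the disclosed disable/enable control is meant to suppress.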
Further, in step14, the first terminal201may detect a different network or RAT depending on the RAT and the frequency scanned at that moment. Since it is a fresh registration and camping attempt, the first terminal201might even encounter a reject from the network and further initiate protocol procedures as per the reject handling. This would further delay the recovery. Finally, the first terminal201recovers on the RAT at step15. The first user associated with the first terminal201establishes the VoLTE call with the second terminal202as shown in steps16-18. Several call attempts by the first user in the low-signal area lead to a bad user experience and are annoying to both the first user and the second user (receiver). FIG.4Ashows an exemplary flow chart illustrating method steps for preventing call drop in the voice communication network, in accordance with some embodiments of the present disclosure. As illustrated inFIG.4A, the method400may comprise one or more steps. The method400may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types. The order in which the method400is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof. At step401, the first terminal201and the second terminal202receive a first indication that the first terminal201is entering the low-signal area. The voice call is active between the first terminal201and the second terminal202. A session is established for the voice call initiated between the first terminal201and the second terminal202. The first terminal201may enter the low-signal area when the voice call is active. For example, the low-signal area may be an elevator. The first user associated with the first terminal201may enter the elevator when in the voice call with the second user associated with the second terminal202. In an embodiment, the first terminal201may receive the first indication from a user interface of the first terminal201. For example, an option may be provided on a dialer window on the first terminal201along with other call options such as speaker, mute, hold, record, and the like. The option may be displayed as "Elevator mode ON". The option may be displayed in any other form. For example, an elevator icon may be displayed. In another example, a low-signal icon may be displayed. The first user may select the option to provide the first indication to the first terminal201. In another embodiment, the first indication may be generated automatically based on different sensor and peripheral inputs, such as location services over the Global Positioning System (GPS), a barometer, and other vertical location detectors. Referring to flow diagram407, steps1-3show the active call established between the first terminal201and the second terminal202and the transmission and reception of the RTP packets at the first terminal201and the second terminal202. At step4, the first terminal201receives the first indication that the first terminal201is entering the low-signal area. 
In an embodiment, the second terminal202may receive the first indication that the first terminal201is entering the low-signal area, from the first terminal201via the server203. The voice communication network may be the VoLTE communication network. The server203(IMS server) supports the SIP INFO method. The SIP INFO method is designed to transmit application-level control information along the SIP signaling path. The first indication may be provided by the first terminal201to the second terminal202using a SIP INFO message. For example, the first indication may be carried with the content type application/text and information such as "I'm entering a low-signal area". The text message is included in the SIP INFO message; an illustrative sketch of such a message is given after this passage. Referring to the flow diagram407, step6shows the first indication provided to the second terminal202by the first terminal201using the SIP INFO method. Referring back toFIG.4A, at step402, the first terminal201and the second terminal202disable their respective one or more media inactivity timers, based on the first indication, for retaining the session. The first terminal201and the second terminal202may be associated with one or more media inactivity timers to monitor inactivity of the packet transmission during the active voice call. The one or more media inactivity timers may be an RTP timer and a Real-time Transport Control Protocol (RTCP) timer. RTCP is used to send control packets to participants in a call. The primary function of the RTCP is to provide feedback on the quality of service being provided by the RTP. When the first terminal201is in the low-signal area, the RTP packets are not transmitted or received, which causes an RTP/RTCP timeout. The timeout leads to release of the session and hence a call drop. In the embodiment of the present disclosure, the one or more media inactivity timers associated with the first terminal201and the second terminal202are disabled, for retaining the session. Further, monitoring timers are implemented in the first terminal201and the second terminal202. The one or more media inactivity timers are disabled for a time duration of the respective monitoring timer. The respective monitoring timers are enabled when the respective one or more media inactivity timers are disabled. For example, the time duration may be a minute. Referring again to the flow diagram407ofFIG.4B, at step5, the media inactivity timers are disabled, and the monitoring timer (referred to as a proprietary timer in the Figures) is enabled at the first terminal201. Similarly, the media inactivity timers are disabled, and the monitoring timer (referred to as a proprietary timer in the Figures) is enabled at the second terminal202, upon receiving the first indication as shown in step7. The steps5-7may occur in any order. For example, the first terminal201may disable the media inactivity timers and enable the monitoring timer, and then transmit the SIP INFO message to the second terminal202. The second terminal202may disable the one or more media inactivity timers and enable the monitoring timer, upon receiving the SIP INFO message. In another example, the first terminal201may first transmit the SIP INFO message to the second terminal202. Then, the first terminal201and the second terminal202may disable their respective media inactivity timers and enable their respective monitoring timers. The second terminal202may transmit a 200 OK response after disabling the one or more media inactivity timers and enabling the monitoring timer. 
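As an illustration of the SIP INFO payload described above, the following Python sketch assembles a message with the application/text content type and the example body text from the disclosure; the header framing, the URIs, and the helper name are assumptions, not a normative message format.

```python
# Hypothetical sketch of the SIP INFO message carrying the low-signal
# indication; only Content-Type and the body text come from the text above.
def build_sip_info(call_id: str, from_uri: str, to_uri: str, entering: bool) -> str:
    body = ("I'm entering a low-signal area" if entering
            else "I'm exiting from a low-signal area")
    return (
        f"INFO {to_uri} SIP/2.0\r\n"
        f"Call-ID: {call_id}\r\n"
        f"From: <{from_uri}>\r\n"
        f"To: <{to_uri}>\r\n"
        "Content-Type: application/text\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        f"{body}"
    )

# Example usage: the first terminal signals entry into the low-signal area.
msg = build_sip_info("abc123@201.example", "sip:alex@201.example",
                     "sip:bob@202.example", entering=True)
```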
The session is retained, and the voice call is suspended when the first terminal201is in the low-signal area as shown in step9. Referring back toFIG.4A, at step403, the first terminal201stores one or more network parameters associated with the first terminal201and the second terminal202, both when the session is retained and when the session is released. The one or more network parameters may comprise a Registered Public Land Mobile Network (RPLMN), a Radio Access Technology (RAT), and a frequency of the respective camped cell associated with the first terminal201and the second terminal202. Each of the first terminal201and the second terminal202is camped on a cell to access the voice communication network, before entering the low-signal area. The RPLMN is a Public Land Mobile Network (PLMN) on which a user terminal has performed a location registration successfully. The RAT is the underlying physical connection method for a radio-based communication network. For example, the first terminal201and the second terminal202may be connected to an LTE network. Multiple frequencies or frequency bands may be associated with each RAT. The first terminal201and the second terminal202typically search constantly for a RAT across all supported bands to connect to the voice communication network. The first terminal201and the second terminal202store the one or more network parameters to avoid a different network selection and to enable speedy recovery. Also, the delay and processing complexity in searching for the RAT and the frequency are reduced. Referring again to the flow diagram409, the first terminal201and the second terminal202store the one or more network parameters when the session is retained. In an embodiment, the session may be released by the first terminal201or the server203. For example, the server203may release the session upon detecting inactivity of the packet transmission. The server203may remove the bearer, and the first terminal201may recover services by making several call attempts and connect to a different RAT. To avoid this, call information associated with the voice call is stored by the first terminal201and the second terminal202before the session is released. The dialer session and the display on the UE are also maintained for the user to know that the session and the call are maintained. The call information is stored to re-initiate the voice call with the same contact; a sketch of such a stored snapshot is given after this passage. Flow diagram408ofFIG.4Cillustrates a scenario of session release by the first terminal201. Steps1-4illustrate the active voice call between the first terminal201and the second terminal202and the first terminal201entering the low-signal area. Steps1-4are not explained in detail again. As shown in steps5-7, the call information is stored at the first terminal201and the second terminal202. In step8, the second terminal202sends a 200 OK response upon storing the call information. The first terminal201releases the session using the SIP BYE method as shown in step9. The SIP BYE method is used to terminate an established session in the VoLTE communication network. At step11, the session is released from the lower layers of the voice communication network. However, the dialer window on the user interface (UX) will remain in the same state. This improves the user experience, since a pause-resume kind of experience is provided to the first user and the second user when the session is released and re-established in the lower layers of the voice communication network. 
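A rough sketch of the snapshot described in step403 is given below, assuming a hypothetical `modem` interface; the field and method names are illustrative only, chosen to mirror the parameters the text lists (RPLMN, RAT, camped frequency, and the call information used later for re-initiation).

```python
# Sketch of the per-terminal snapshot stored before suspension/release.
# All names on the `modem` object are assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class NetworkSnapshot:
    rplmn: str          # Registered PLMN, e.g. "310-260"
    rat: str            # e.g. "LTE"
    earfcn: int         # camped-cell frequency (channel number)

@dataclass
class CallInfo:
    call_id: str        # first call ID, used later to map old and new sessions
    peer_name: str
    peer_number: str

def snapshot_before_outage(modem):
    """Store parameters prior to suspension/release so recovery can reuse
    the same RPLMN/RAT/frequency instead of re-scanning all bands."""
    return (
        NetworkSnapshot(modem.rplmn(), modem.rat(), modem.earfcn()),
        CallInfo(modem.active_call_id(), modem.peer_name(), modem.peer_number()),
    )
```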
Referring back toFIG.4A, at step404, the first terminal201and the second terminal202receive a second indication that the first terminal201has exited the low-signal area. For example, the first user associated with the first terminal201may exit the elevator. In an embodiment, the first terminal201may receive the second indication from the user interface of the first terminal201. For example, an option may be provided on a dialer window on the first terminal201. The option may be displayed as "Elevator mode OFF". In another example, the option may be displayed as "Resume call". The option may be displayed in any other form. The first user may select the option to provide the second indication to the first terminal201. In another embodiment, the second indication may be generated automatically based on different sensor and peripheral inputs, such as location services over the Global Positioning System (GPS), a barometer, and other vertical location detectors. Referring to flow diagram407ofFIG.4B, step10shows the second indication received by the first terminal201that the first terminal201has exited the low-signal area. Similarly, step12in the flow diagram408shows the second indication received by the first terminal201. In an embodiment, the second terminal202may receive the second indication that the first terminal201has exited the low-signal area, from the first terminal201via the server203. The second indication may be provided by the first terminal201to the second terminal202using the SIP INFO message. For example, the second indication may be a text message such as "I'm exiting from a low-signal area". The text message may be included in the SIP INFO message. Referring to the flow diagram407, step11shows the second indication provided to the second terminal202by the first terminal201using the SIP INFO method. Referring back toFIG.4A, at step405, the first terminal201and the second terminal202may resume the voice call on the session, based on the stored network parameters, when the session is retained. When the first terminal201exits the low-signal area, the first terminal201enables the one or more media inactivity timers. The first terminal201resumes the voice call on the same session. The stored network parameters are used for quick recovery, and the first terminal201is connected to the same RPLMN, RAT, and frequency. Further, the second terminal202resumes the voice call on the same session upon receiving the SIP INFO message from the first terminal201. The first terminal201and the second terminal202disable their respective monitoring timers. Referring to the flow diagram407ofFIG.4B, at step12, the second terminal202enables the one or more media inactivity timers and disables the monitoring timer associated with the second terminal202. At step13, the second terminal202sends a 200 OK response to the first terminal201. At step14, the first terminal201enables the one or more media inactivity timers and disables the monitoring timer associated with the first terminal201. The voice call is resumed using the stored network parameters as shown in step15. The first terminal201and the second terminal202will continue the call and transmit the RTP packets as shown in steps16and17. Hence, the first user and the second user can resume the same call by selecting pause-resume options on the respective user interfaces. This improves the user experience. Also, call attempts by the first user and the second user in the low-signal area are avoided. 
In an example, the first user may select the option to indicate entering the low-signal area. The voice call may be suspended by disabling the one or more media inactivity timers and enabling the monitoring timers at the first terminal201and the second terminal202. The first terminal201may then exit the low-signal area. The first user associated with the first terminal201may not select the resume option (second indication) to indicate exiting the low-signal area. The one or more media inactivity timers would then remain in the disabled state. To avoid this, the monitoring timer is implemented. When the one or more media inactivity timers remain in the disabled state, the one or more media inactivity timers are enabled after the time duration of the monitoring timer. For example, the time duration may be two minutes. The one or more media inactivity timers may be enabled after two minutes, when the second indication is not received from the first user. Referring back toFIG.4A, at step406, the first terminal201and the second terminal202re-initiate the voice call on a new session, based on the stored network parameters, when the session is released. The first terminal201and the second terminal202may re-initiate the call with the same contact on a new session, using the stored call information. Each of the first terminal201and the second terminal202obtains a first call Identity (ID) associated with the session. The call information may comprise a name and a mobile number associated with the first user along with the first call ID. Further, a second call ID is assigned to the new session. The information related to the session and the new session may be mapped based on the respective call IDs to re-initiate the voice call with the same contact. Further, the first terminal201may include a SIP Replaces header to indicate to the second terminal202that the call is being re-initiated on the new session. The SIP Replaces header is used to logically replace an existing SIP dialog with a new SIP dialog. On receiving the SIP INVITE message, the second terminal202checks the SIP Replaces header and maps the information related to the session and the new session based on the respective call IDs. The voice call is re-initiated on the new session; a hypothetical sketch of this call-ID mapping is given after this passage. Referring again to the flow diagram408, steps13-18illustrate re-initiating the voice call on the new session by mapping the information related to the session and the new session based on the respective call IDs. The first terminal201and the second terminal202will continue the call and transmit the RTP packets as shown in steps19and20. FIG.5Ashows an exemplary flow chart illustrating method steps for preventing call drop in the voice communication network, in accordance with some embodiments of the present disclosure. As illustrated inFIG.5A, the method500may comprise one or more steps. The method500may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types. The order in which the method500is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the scope of the subject matter described herein. 
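The call-ID mapping and Replaces-header usage described in step406 can be sketched as follows. Note that a real Replaces header also carries to-tag and from-tag parameters (RFC 3891), so the framing here is deliberately simplified, and the helper names are assumptions rather than the disclosed implementation.

```python
# Simplified sketch: re-initiating the call on a new session and letting the
# peer map the old dialog to the new one by the stored first call ID.
def build_reinvite(new_call_id: str, first_call_id: str,
                   to_uri: str, from_uri: str) -> str:
    return (
        f"INVITE {to_uri} SIP/2.0\r\n"
        f"Call-ID: {new_call_id}\r\n"
        f"From: <{from_uri}>\r\n"
        f"To: <{to_uri}>\r\n"
        f"Replaces: {first_call_id}\r\n"   # lets the peer map old -> new session
        "\r\n"
    )

def map_sessions(stored: dict, replaces_call_id: str, new_call_id: str) -> bool:
    """Peer side: look up the stored session by the first call ID and rebind
    its call information to the new session; returns False if unknown."""
    if replaces_call_id in stored:
        stored[new_call_id] = stored.pop(replaces_call_id)
        return True
    return False
```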
Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof. At step501, the server203receives the first indication that the first terminal201is entering the low-signal area, from the first terminal201. The server203has established a session for the voice call between the first terminal201and the second terminal202. The server comprises one or more processors configured to perform the steps illustrated inFIG.5A. A new IMS feature tag may be supported by the server203. For example, the IMS feature tag may be called "Elevator mode". The first terminal201may add the said IMS feature tag in the contact header of the SIP REGISTER message during the IMS registration procedure. The SIP REGISTER message is used to create bindings between addresses-of-record (AORs) and contact addresses where the first terminal201can be reached. When the server203supports the IMS feature, the server203will include the IMS feature tag in the SIP REGISTER response. This confirms the support of the feature between the first terminal201and the server203; a hypothetical sketch of this tag exchange is given after this passage. During the call, when the first terminal201receives the first indication due to moving into the low-signal area, the first terminal201disables the one or more media inactivity timers and enables the monitoring timer. The first terminal201transmits the SIP INFO message to the server203. The SIP INFO message includes the IMS feature tag and a payload. The server203transmits the SIP INFO message to the second terminal202. Referring to flow diagram506ofFIG.5B, step1shows the IMS registration procedure and the inclusion of the IMS feature tag in the contact header of the SIP REGISTER message. At step2, the voice call is active between the first terminal201and the second terminal202. At step3, the first terminal201enters the low-signal area. At step4, the first terminal201disables the one or more media inactivity timers and enables the monitoring timer. At step5, the first terminal201sends the SIP INFO message to the server203. Referring back toFIG.5A, at step502, the server203intercepts a handover procedure for the first terminal201, upon receiving the first indication. The server203holds the dedicated bearers upon receiving the first indication. Further, the server203may intercept the handover procedure. The handover procedure may be a Single Radio Voice Call Continuity (SRVCC)/Inter Radio Access Technology (IRAT) procedure. SRVCC is a scheme that enables inter-RAT handover as well as a handover from packet-switched to circuit-switched voice calls. This will ensure that the first terminal201is not connected to a different RAT when the call is resumed. Referring again to the flow diagram506ofFIG.5B, at step6, the server203decodes the SIP INFO message and holds the dedicated bearers. Referring back toFIG.5A, at step503, the server203retains the session when the first terminal201is disconnected from the RAT associated with the first terminal201. The server203may disable the media inactivity timers associated with the server203, or set them to a large value, to avoid a call drop. Further, the server203may transmit the SIP INFO message to the second terminal202. The second terminal202may disable the one or more media inactivity timers and enable the monitoring timer associated with the second terminal202. Referring again to the flow diagram506ofFIG.5B, at step7, the server203transmits the SIP INFO message to the second terminal202. At step8, the second terminal202disables the one or more media inactivity timers and enables the monitoring timer. 
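The registration-time feature-tag exchange might look as follows. The tag string itself is hypothetical (the disclosure only names the feature "Elevator mode" and does not define a registered IMS media feature tag), and the helper functions are assumptions.

```python
# Illustrative sketch of advertising a hypothetical "elevator mode" feature
# tag in the REGISTER Contact header and checking the server's echo of it.
FEATURE_TAG = "+g.3gpp.elevator-mode"   # hypothetical tag name, not registered

def contact_header(contact_uri: str, supports_elevator_mode: bool) -> str:
    """Build the Contact header the terminal puts in its SIP REGISTER."""
    tag = f";{FEATURE_TAG}" if supports_elevator_mode else ""
    return f"Contact: <{contact_uri}>{tag}"

def feature_confirmed(register_response_contact: str) -> bool:
    """The server includes the tag in its REGISTER response only if it
    supports the feature, confirming support between terminal and server."""
    return FEATURE_TAG in register_response_contact

# Example usage:
hdr = contact_header("sip:alex@201.example", supports_elevator_mode=True)
assert FEATURE_TAG in hdr
```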
At step9, the second terminal202transmits a 200 OK response to the server203. At step10, the server203forwards the 200 OK response to the first terminal201. At step11, the voice call is suspended. Referring back toFIG.5A, at step504, the server203receives the second indication that the first terminal201has exited the low-signal area, from the first terminal201. The first terminal201may receive the second indication that the first terminal201has exited the low-signal area. The first terminal201may transmit the second indication in the form of the SIP INFO message to the server203. The server203may transmit the SIP INFO message to the second terminal202. Referring to the flow diagram506ofFIG.5B, at step12, the first terminal201receives the second indication that the first terminal201has exited the low-signal area. At step13, the first terminal201transmits the SIP INFO message to the server203. At step14, the server203transmits the SIP INFO message to the second terminal202. Referring back toFIG.5A, at step505, the server203resumes the voice call on the session, based on the second indication. The first terminal201and the second terminal202enable their respective one or more media inactivity timers and disable their respective monitoring timers upon receiving the second indication. The voice call is resumed. Referring to the flow diagram506ofFIG.5B, at step15, the second terminal202enables the one or more media inactivity timers and disables the monitoring timer. At steps16and17, the second terminal202transmits the 200 OK response to the first terminal201via the server203. At step18, the first terminal201enables the one or more media inactivity timers and disables the monitoring timer. At step19, the voice call is resumed. At steps19and20, the first terminal201and the second terminal202transmit the RTP packets and continue with the voice call. Reference is now made toFIG.5Cillustrating changes in the user interface (UX) of the first terminal201and the second terminal202.507shows the UX of the first terminal201. An option to indicate the first terminal201entering the low-signal area is provided along with other options on the dialer window. For example, the option is displayed as "Elevator" with an elevator icon. The first user selects the option.508shows the UX of the second terminal202. An additional text "Device entered into Elevator mode" may be displayed on the UX of the second terminal202. Further, a tone may be played on the second terminal202to indicate that the first terminal201is in the low-signal area. For example, an elevator tone may be played. FIG.6Ashows a flow diagram600illustrating a call drop scenario in the voice communication network, in accordance with some embodiments of the present disclosure. In an embodiment, the first terminal201may comprise a first Subscriber Identity Module (SIM) and a second SIM. The first terminal201may be associated with the first user named "Alex". Steps1-4show the first terminal201establishing a first call with the second terminal202using the first SIM. A session is established for the first call between the first terminal201and the second terminal202. A second user named "Bob" is associated with the second terminal202. At steps5and6, the first terminal201receives a second call from a third user named "Kim" associated with a third terminal. The first terminal201accepts the second call. At steps7-9, the second user is kept on hold by sending a SIP RE-INVITE message to the second terminal202. 
At steps10-14, the second call is established on the second SIM between the first terminal201and the third terminal, and the RTP packets are transmitted. Steps15-17illustrate the first call maintained on hold by transmitting and receiving RTCP packets. For example, the RTCP packets may be silence suppression packets. The first terminal201may be a Dual Reception Dual Standby (DRDS)/Dual Receive Dual VoLTE Dual Standby (DR-DVDS) device. DRDS devices support dual reception for simultaneous downlink reception but share a single transmission chain for the uplink. Since there are two parallel active connections (the first call and the second call), users at each terminal face frequent problems with either the quality of the active call or a drop of the active/held call. When the first call is on hold on the first SIM, the voice quality of the active call on the second SIM is degraded. This is due to frequent Transmission (TX) arbitration for RTCP packet transfer. This problem may increase in low-signal areas, since TX arbitration may further increase between the two SIMs. This may lead to the first call dropping as shown in steps18-20. FIG.6Bshows an exemplary flow chart illustrating method steps for preventing call drop in the voice communication network to overcome the call drop scenario explained inFIG.6A, in accordance with some embodiments of the present disclosure. As illustrated inFIG.6B, the method601may comprise one or more steps. The method601may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types. The order in which the method601is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof. At step602, the first terminal201detects the second call on the second SIM from the third terminal, when the session is established for the first call between the first terminal201and the second terminal202. The first terminal201and the second terminal202may be on the first call using the first SIM. The first terminal201may detect the second call from the third terminal on the second SIM. The first terminal201sends a SIP RE-INVITE message to the second terminal202to put the first call on hold. The first terminal201includes a proprietary MIME (Multipurpose Internet Mail Extensions) body in the SIP RE-INVITE message to indicate to the second terminal202to disable the one or more media inactivity timers. Optionally, existing parameters in the Session Description Protocol (SDP) body may be updated to the value 0 to disable RTCP transmission during the on-hold condition; a sketch of such an SDP body follows this passage. Referring to the flow diagram608, steps1and2illustrate the first call established between the first terminal201and the second terminal202. Step3shows the second call detected from the third terminal. At step4, the first user associated with the first terminal201accepts the second call from the third terminal. At step6, the first terminal201sends the SIP RE-INVITE message along with the proprietary MIME body to the second terminal202. 
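One standard way to zero out RTCP in SDP is to set the RR and RS bandwidth modifiers to 0 (RFC 3556); whether the disclosed implementation uses exactly these lines is an assumption, and the sketch below is illustrative only.

```python
# Sketch of a hold-time SDP body with RTCP bandwidth zeroed out; the session
# fields are placeholders, and RR/RS = 0 is one standard option, not the
# confirmed mechanism of the disclosure.
def hold_sdp(session_ip: str, audio_port: int) -> str:
    return "\r\n".join([
        "v=0",
        f"o=- 0 1 IN IP4 {session_ip}",
        "s=-",
        f"c=IN IP4 {session_ip}",
        "t=0 0",
        f"m=audio {audio_port} RTP/AVP 96",
        "a=sendonly",        # hold direction
        "b=RR:0",            # zero RTCP receiver-report bandwidth (RFC 3556)
        "b=RS:0",            # zero RTCP sender-report bandwidth (RFC 3556)
    ]) + "\r\n"

# Example usage: body carried in the RE-INVITE that places the call on hold.
sdp_body = hold_sdp("192.0.2.10", 49170)
```

Suppressing RTCP for the held call removes the periodic uplink transmissions that would otherwise compete with the active call for the shared transmit chain.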
Referring back toFIG.6B, at step603, the first terminal201and the second terminal202disable their respective one or more media inactivity timers associated with the first call, based on the detection, for retaining the session as shown in steps5-8in the flow diagram608inFIG.6C. Further, the respective monitoring timers are enabled. Referring back toFIG.6B, at step604, the first terminal201and the second terminal202store the one or more network parameters associated with the first call, both when the session is retained and when the session is released, as shown in step9in the flow diagram608inFIG.6C. The first call is in the on-hold state. Steps10-12illustrate the second call active between the first terminal201and the third terminal. Step14illustrates that no RTCP packets are transmitted to the second terminal202from the first terminal201. Referring back toFIG.6B, at step605, the first terminal201and the second terminal202detect termination of the second call on the second SIM. In an example, the first terminal201may put the second call with the third terminal on hold to resume the first call with the second terminal202on the first SIM as shown in steps15-18in the flow diagram608inFIG.6C. Referring back toFIG.6B, at step606, the first terminal201and the second terminal202resume the first call on the session, based on the stored network parameters, when the session is retained. When the first terminal201terminates the second call or puts the second call in the on-hold state, the first terminal201enables the one or more media inactivity timers. The first terminal201resumes the first call on the same session. Further, the second terminal202resumes the first call on the same session upon receiving the SIP INFO message from the first terminal201. The first terminal201and the second terminal202disable their respective monitoring timers. Step19in the flow diagram608inFIG.6Cillustrates the resuming of the first call between the first terminal201and the second terminal202. Referring back toFIG.6B, at step607, the first terminal201and the second terminal202re-initiate the first call on a new session, based on the stored network parameters, when the session is released. The first terminal201and the second terminal202may re-initiate the call with the same contact on a new session, using the stored call information. The information related to the session and the new session may be mapped based on the respective call IDs to re-initiate the voice call with the same contact. Further, the first terminal201may include a SIP Replaces header to indicate to the second terminal202that the call is being re-initiated on the new session. The second terminal202maps the information related to the session and the new session based on the respective call IDs. The voice call is re-initiated on the new session. Referring again to the flow diagram608, steps20-28illustrate re-initiating the first call on the new session by mapping the information related to the session and the new session based on the respective call IDs. The first terminal201and the second terminal202will continue the call and transmit the RTP packets as shown in step29. In an embodiment, a data session on the first SIM is switched to the second SIM when the second call is detected on the second SIM. In Dual Mobile Data (DMD) mode, Radio Frequency (RF) transmission arbitration is reduced by moving internet or general packet services to the stack with the active call (the second call). This aids in quick re-establishment when switching between the held call and the active call. 
When the active call is on the second SIM, the mobile data is switched to the second SIM when the DMD mode is ON. This will allow the voice call and mobile data to continue on the second SIM, thereby achieving minimal RF transmission arbitration. This allows switching of the data session to the second SIM during an active call on the second SIM, and switching the data back to the default SIM (for example, the first SIM) once the call on the second SIM has ended. The switching of the data session to the second SIM is based on one or more parameters associated with the second SIM. The one or more parameters may comprise, but are not limited to: the ability of the second SIM to provide similar throughput and quality of service; that services or applications (for example, a file download in RCS on the internet PDN) are not disconnected; and that the data subscription is valid on the second SIM. A decision-function sketch of these conditions is given at the end of this passage. FIG.7illustrates the overview architecture and interfaces in the terminal for preventing voice call drop in the communication network, in accordance with some embodiments of the present disclosure. The terminal may comprise a communication interface, one or more processors, and a memory (not shown in the Figures). In some embodiments, the memory may be communicatively coupled to the one or more processors. The one or more processors may comprise at least one data processor for executing program components for executing user or system-generated requests. The memory stores instructions, executable by the one or more processors, which, on execution, may cause the one or more processors to prevent voice call drop in the communication network. The communication interface is configured to transmit and receive messages/signals/control signaling. For example, the communication interface may be configured to transmit and receive the SIP signaling messages. FIG.7explains the architecture of the wireless protocol stack at a high level. Sub-modules of the terminal such as the User Interface (UI), IMS Stack, IMS SVC, and OOS recovery are enhanced to support the prevention of voice call drop. Further, UX changes are made on the mobile operating system call-dialer interface (OS: Android, iOS, etc.) to support the prevention of voice call drop (for example, the elevator mode). Inputs and configurations received on the UX are forwarded to the 3GPP protocol modules on the terminal. The IMS stack constructs a new header and payload using this information to communicate with peer entities. A new IMS feature tag is included to indicate elevator mode support to the network. The underlying IMS protocol may be modified to configure the RTP/RTCP timer value and send the SIP INFO message to the peer entity indicating a network outage possibility leading to mute or voice distortion. Further, the Non-Access Stratum (NAS) may take an educated decision to defer handover procedures to another Radio Access Network (RAN) (3G/LTE), as they might fail during ongoing transient network outage conditions. On receiving indications of the terminal exiting the low-signal area, the NAS can optionally resume the deferred handover procedures. FIGS.8A and8Billustrate a call setup and call drop comparison between existing systems and the present invention. The call setup time in the existing systems (A) is greater than the call setup time in the present invention (B). FIG.8Cshows an RTP graph in which an audio session resumed in 2.1 s, while the traditional method delays recovery and the user had to re-initiate the call. 
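The DMD switch conditions listed above can be condensed into a small decision sketch; the predicate and object names mirror the listed conditions and are assumptions about any actual implementation.

```python
# Sketch of the DMD data-switch check; every method on the SIM objects is a
# hypothetical placeholder for the conditions named in the text.
def should_switch_data_to_second_sim(sim2) -> bool:
    return (
        sim2.supports_comparable_throughput_and_qos()
        and not sim2.would_disconnect_active_services()   # e.g. RCS file download on internet PDN
        and sim2.has_valid_data_subscription()
    )

def on_second_call_state(dmd_on: bool, call_active_on_sim2: bool, sim1, sim2):
    """Move mobile data to the SIM carrying the active call while DMD is ON,
    and restore it to the default SIM when the call on SIM 2 ends."""
    if dmd_on and call_active_on_sim2 and should_switch_data_to_second_sim(sim2):
        sim2.attach_data_session()
    elif not call_active_on_sim2:
        sim1.attach_data_session()          # default SIM restore
```

Keeping voice and data on the same SIM while the call is active is what minimizes the RF transmission arbitration the text describes.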
The present disclosure allows the user to control the call status in the low-signal area, to avoid the call drop. With such a pause-resume kind of feature, the user experience is boosted. Further, network resources are saved since the voice call is resumed on the same session. Also, the network parameters are stored to avoid re-configuration of the network parameters when the user exits the low-signal area. When the session is released, the voice call is re-initiated on the new session based on the stored network parameters. This ensures that the call is not dropped even when the session is released. Hence, the user can continue the call when the user exits the low-signal area, and the experience is further enhanced. Further, this avoids the users associated with the first terminal and the second terminal calling each other at the same time when re-attempting the call, which can cause the call to fail permanently. Also, multiple call attempts by users are avoided. The session is released from the lower layers of the voice communication network. The dialer window on the UX will remain in the same state. This improves the user experience, since a pause-resume kind of experience is provided to the first user and the second user when the session is released and re-established in the lower layers of the voice communication network. In DMD mode, RF transmission arbitration is reduced by moving internet or general packet services to the stack with the active call (the second call). This aids in quick re-establishment when switching between the held call and the active call. When the active call is on the second SIM, the mobile data is switched to the second SIM when the DMD mode is ON. This will allow the voice call and mobile data to continue on the second SIM, thereby achieving minimal RF transmission arbitration. This allows switching of the data session to the second SIM during an active call on the second SIM, and switching the data back to the default SIM once the call on the second SIM has ended. The switching of the data session to the second SIM is based on one or more parameters associated with the second SIM. The one or more parameters may comprise, but are not limited to: the ability of the second SIM to provide similar throughput and quality of service; that services or applications (for example, a file download in RCS on the internet PDN) are not disconnected; and that the data subscription is valid on the second SIM. The terms "an embodiment", "embodiment", "embodiments", "the embodiment", "the embodiments", "one or more embodiments", "some embodiments", and "one embodiment" mean "one or more (but not all) embodiments of the invention(s)" unless expressly specified otherwise. The terms "including", "comprising", "having" and variations thereof mean "including but not limited to", unless expressly specified otherwise. The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms "a", "an" and "the" mean "one or more", unless expressly specified otherwise. A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention. When a single device or article is described herein, it will be readily apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. 
Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device/article may be used in place of the more than one device or article, or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself. The illustrated operations ofFIGS.4A,5A, and6Bshow certain events occurring in a certain order. In alternative embodiments, certain operations may be performed in a different order, modified, or removed. Moreover, steps may be added to the above-described logic and still conform to the described embodiments. Further, operations described herein may occur sequentially or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units. Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims. While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope being indicated by the following claims.

REFERRAL NUMERALS

Referral number    Description
100                VoLTE architecture
101                UAC
102                IMS
103                UAS
104                LTE network
200                Environment
201                First terminal
202                Second terminal
203                Server
54,588
11943670
DETAILED DESCRIPTION In the aforementioned situation, the UE may perform data transmission or reception (UL or DL data transmission and reception) to or from the source BS via the protocol layers of the plurality of first bearers and simultaneously perform a random access procedure on the target BS via a protocol layer (e.g., the MAC layer) of the plurality of second bearers. The random access procedure may include transmission of a preamble, reception of a random access response, transmission of a message3, reception of a message4(e.g., reception of a contention resolution MAC control element (CE) or a UL transmit resource), or the like. In the aforementioned situation, the UE may perform data transmission or reception to or from the source BS via the protocol layers of the plurality of first bearers and simultaneously complete the random access procedure on the target BS via the protocol layer (e.g., the MAC layer) of the plurality of second bearers and transmit a handover complete message to the target BS via the protocol layers of the plurality of second bearers. In the aforementioned situation, the UE may perform data transmission or reception to or from the source BS via the protocol layers of the plurality of first bearers and simultaneously complete the random access procedure on the target BS via the protocol layer (e.g., the MAC layer) of the plurality of second bearers, transmit the handover complete message to the target BS via the protocol layers of the plurality of second bearers, and perform data transmission and reception (UL or DL). In the aforementioned situation, when the UE successfully completes the random access procedure with respect to the target BS and then initially receives a UL transmit resource from the target BS, the UE may discontinue data transmission to the source BS via the protocol layers of the plurality of first bearers, switch UL transmission, and then transmit data to the target BS via the plurality of second bearers. In the aforementioned situation, when the UE receives a handover command message, the UE may continuously perform data transmission or reception (UL or DL data transmission and reception) to or from the source BS via the protocol layers of the plurality of first bearers, and perform a random access procedure on the target BS via the protocol layers of the plurality of second bearers, and, when the UE successfully completes the random access procedure and then initially receives a UL transmit resource from the target BS, the UE may discontinue data transmission to the source BS via the protocol layers of the plurality of first bearers, and perform UL data transmission to the target BS only via the protocol layers of the plurality of second bearers. Also, the UE may continuously receive DL data from the source BS via the protocol layers of the plurality of first bearers, and also continuously receive DL data from the target BS via the protocol layers of the plurality of second bearers. In the aforementioned situation, a first bearer and a second bearer may constitute a second PDCP layer architecture, and, in the second PDCP layer architecture, the first bearer (e.g., a RLC layer, a MAC layer, or a PHY layer) for the source BS and the second bearer (e.g., a RLC layer, a MAC layer, or a PHY layer) for the target BS may all be connected to one PDCP layer, and UL data may be transmitted via one bearer from among the first bearer and the second bearer of the PDCP layer.
That is, before the UE performs a random access procedure on the target BS, successfully completes the random access procedure, and initially receives a UL transmit resource from the target BS, the UE may transmit UL data via the first bearer, and, when the UE performs a random access procedure on the target BS, successfully completes the random access procedure, and initially receives a UL transmit resource from the target BS, the UE may discontinue data transmission via the first bearer, may switch the data transmission, and thus may transmit UL data to the target BS via the second bearer. However, the UE in the second PDCP layer architecture may receive DL data from the source BS or the target BS via the first bearer or the second bearer. Hereinafter, in the disclosure, provided are efficient handover procedures without a data interruption time, based on the aforementioned features. FIG.1Ais a diagram of a structure of an LTE system according to an embodiment of the disclosure. Referring toFIG.1A, a radio access network (RAN) of the LTE system includes evolved node Bs (hereinafter, referred to as ENBs, node Bs or base stations)1a-05,1a-10,1a-15, and1a-20, a mobility management entity (MME)1a-25, and a serving-gateway (S-GW)1a-30. A user equipment (UE)1a-35(also referred to as a terminal) accesses an external network via the ENB1a-05,1a-10,1a-15, or1a-20and the S-GW1a-30. InFIG.1A, the ENB1a-05,1a-10,1a-15, or1a-20corresponds to an existing node B of a universal mobile telecommunication system (UMTS). The ENB1a-05,1a-10,1a-15, or1a-20is connected to the UE1a-35through a radio channel and performs complex functions compared to the existing node B. In the LTE system, because all user traffic including a real-time service such as voice over internet protocol (VoIP) is provided via a shared channel, an entity that schedules UEs1a-35by gathering state information such as buffer states, available transmit power states, and channel states of the UEs1a-35may be necessary, and the ENB1a-05,1a-10,1a-15, or1a-20may operate as the entity. A single ENB1a-05,1a-10,1a-15, or1a-20generally controls multiple cells. For example, the LTE system uses radio access technology such as Orthogonal Frequency Division Multiplexing (OFDM) at a bandwidth of 20 MHz to achieve a data rate of 100 Mbps. Also, the LTE system uses an Adaptive Modulation & Coding (AMC) scheme to determine a modulation scheme and a channel coding rate in accordance with a channel state of the UE1a-35. The S-GW1a-30is an entity for providing data bearers and generates or removes the data bearers under the control of the MME1a-25. The MME1a-25is an entity for performing a mobility management function and various control functions for the UE1a-35and is connected to the ENBs1a-05,1a-10,1a-15, and1a-20. FIG.1Bis a diagram of a radio protocol architecture in an LTE system, according to an embodiment of the disclosure. Referring toFIG.1B, the radio protocol architecture of the LTE system includes packet data convergence protocol (PDCP) layers1b-05and1b-40, radio link control (RLC) layers1b-10and1b-35, and media access control (MAC) layers1b-15and1b-30respectively for a UE and an eNB. The PDCP layer1b-05or1b-40is in charge of IP header compression/decompression, etc. 
Main functions of the PDCP layer1b-05or1b-40are summarized as below.
- Header compression and decompression: ROHC only
- Transfer of user data
- In-sequence delivery of upper layer PDUs at PDCP re-establishment procedure for RLC AM
- For split bearers in DC (only support for RLC AM): PDCP PDU routing for transmission and PDCP PDU reordering for reception
- Duplicate detection of lower layer SDUs at PDCP re-establishment procedure for RLC AM
- Retransmission of PDCP SDUs at handover and, for split bearers in DC, of PDCP PDUs at PDCP data-recovery procedure, for RLC AM
- Ciphering and deciphering
- Timer-based SDU discard in uplink
The RLC layer1b-10or1b-35performs an automatic repeat request (ARQ) operation by reconfiguring PDCP Packet Data Units (PDUs) to appropriate sizes. Main functions of the RLC layer1b-10or1b-35are summarized as below.
- Transfer of upper layer PDUs
- Error Correction through ARQ (only for AM data transfer)
- Concatenation, segmentation and reassembly of RLC SDUs (only for UM and AM data transfer)
- Re-segmentation of RLC data PDUs (only for AM data transfer)
- Reordering of RLC data PDUs (only for UM and AM data transfer)
- Duplicate detection (only for UM and AM data transfer)
- Protocol error detection (only for AM data transfer)
- RLC SDU discard (only for UM and AM data transfer)
- RLC re-establishment
The MAC layer1b-15or1b-30is connected to multiple RLC layers configured for a single UE and multiplexes RLC PDUs into a MAC PDU and demultiplexes the RLC PDUs from the MAC PDU. Main functions of the MAC layer1b-15or1b-30are summarized as below.
- Mapping between logical channels and transport channels
- Multiplexing/demultiplexing of MAC SDUs belonging to one or different logical channels into/from transport blocks (TB) delivered to/from the physical layer on transport channels
- Scheduling information reporting
- Error correction through HARQ
- Priority handling between logical channels of one UE
- Priority handling between UEs by means of dynamic scheduling
- MBMS service identification
- Transport format selection
- Padding
A physical (PHY) layer1b-20or1b-25channel-codes and modulates upper layer data into OFDM symbols and transmits the OFDM symbols through a radio channel, or demodulates OFDM symbols received through a radio channel and channel-decodes and delivers the OFDM symbols to an upper layer. FIG.1Cis a diagram of a structure of a next-generation wireless communication system, according to an embodiment of the disclosure. Referring toFIG.1C, a RAN of the next-generation wireless communication system (e.g., a new radio (NR) or 5G system) includes a new radio node B (hereinafter, referred to as a NR gNB or an NR base station)1c-10and a new radio core network (NR CN)1c-05. A new radio user equipment (NR UE) or UE1c-15accesses an external network via the NR gNB1c-10and the NR CN1c-05. InFIG.1C, the NR gNB1c-10corresponds to an eNB of an existing LTE system. The NR gNB1c-10may be connected to the NR UE1c-15through radio channels and may provide superior services compared to an existing node B. In the next-generation wireless communication system, because all user traffic is provided via a shared channel, an entity that schedules UEs (e.g., NR UE1c-15) by gathering state information such as buffer states, available transmit power states, and channel states of the UEs (e.g., NR UE1c-15) is necessary, and the NR gNB1c-10operates as the entity. A single NR gNB1c-10generally controls multiple cells.
The NR or 5G communication system may have a bandwidth greater than an existing maximum bandwidth to achieve an ultrahigh data rate, compared to a current LTE system, and may use OFDM as radio access technology and may additionally use beamforming technology. Also, the NR or 5G communication system uses an Adaptive Modulation & Coding (AMC) scheme to determine a modulation scheme and a channel coding rate in accordance with a channel state of the NR UE1c-15. The NR CN1c-05performs functions such as mobility support, bearer configuration, and quality of service (QoS) configuration. The NR CN1c-05is an entity for performing a mobility management function and various control functions for the NR UE1c-15, and is connected to multiple base stations. The next-generation wireless communication system may cooperate with the existing LTE system, and the NR CN1c-05is connected to an MME1c-25through a network interface. The MME1c-25is connected to an eNB1c-30as an existing base station. FIG.1Dis a diagram of a radio protocol architecture of a next-generation wireless communication system, according to an embodiment of the disclosure. Referring toFIG.1D, the radio protocol architecture of the next-generation wireless communication system includes NR service data adaptation protocol (SDAP) layers1d-01and1d-45, NR PDCP layers1d-05and1d-40, NR RLC layers1d-10and1d-35, and NR MAC layers1d-15and1d-30respectively for a UE and an NR gNB. Main functions of the NR SDAP layer1d-01or1d-45may include at least some of the following functions.
- Transfer of user plane data
- Mapping between a QoS flow and a DRB for both DL and UL
- Marking QoS flow ID in both DL and UL packets
- Reflective QoS flow to DRB mapping for the UL SDAP PDUs
With regard to such an SDAP layer, the UE may be configured, via an RRC message, whether to use a header of the SDAP layer or use a function of the SDAP layer for each PDCP layer, each bearer, or each logical channel. In addition, with regard to such an SDAP layer, when an SDAP header is configured, a non-access stratum (NAS) reflective QoS 1-bit indicator and an access stratum (AS) reflective QoS 1-bit indicator of the SDAP header may indicate the UE to update or reconfigure mapping information regarding the data bearer and the QoS flow of UL and DL. The SDAP header may include QoS flow ID indicating QoS. The QoS information may be used as data processing priority information, scheduling information, etc. for supporting a smooth service. Main functions of the NR PDCP layer1d-05or1d-40may include at least some of the following functions.
- Header compression and decompression: ROHC only
- Transfer of user data
- In-sequence delivery of upper layer PDUs
- Out-of-sequence delivery of upper layer PDUs
- PDCP PDU reordering for reception
- Duplicate detection of lower layer SDUs
- Retransmission of PDCP SDUs
- Ciphering and deciphering
- Timer-based SDU discard in uplink
The reordering function of the NR PDCP layer1d-05or1d-40refers to a function of sequentially reordering PDCP PDUs received from a lower layer, on a PDCP sequence number (SN) basis. The reordering function of the NR PDCP layer1d-05or1d-40may include a function of delivering the reordered data to an upper layer in order or immediately delivering the reordered data out of order, a function of recording missing PDCP PDUs by reordering the PDCP PDUs, a function of reporting a status of the missing PDCP PDUs to a transmitter, or a function of requesting to retransmit the missing PDCP PDUs.
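As a rough illustration of the reordering behavior just described, the sketch below buffers out-of-order PDUs by COUNT, delivers consecutive runs in order, and reports the missing COUNT values. It is a simplification for exposition (for instance, the reordering timer and window wrap-around of the actual PDCP procedure are omitted), and all class and method names are invented for this example.

```python
# Simplified sketch of COUNT-based PDCP reordering: buffer out-of-order
# PDUs, deliver consecutively numbered data to the upper layer, and
# track missing PDUs for a status report. Illustration only, not the
# full 3GPP PDCP state machine.

class PdcpReorderingBuffer:
    def __init__(self):
        self.next_count = 0          # next COUNT expected in order
        self.buffer = {}             # COUNT -> PDU held out of order

    def receive(self, count: int, pdu: bytes) -> list:
        """Stores one PDU; returns any PDUs now deliverable in order."""
        if count < self.next_count or count in self.buffer:
            return []                # duplicate of an already-handled PDU
        self.buffer[count] = pdu
        delivered = []
        while self.next_count in self.buffer:    # drain consecutive run
            delivered.append(self.buffer.pop(self.next_count))
            self.next_count += 1
        return delivered

    def missing(self) -> list:
        """COUNT values below the highest buffered PDU still not received."""
        if not self.buffer:
            return []
        top = max(self.buffer)
        return [c for c in range(self.next_count, top) if c not in self.buffer]

# Usage: PDUs 0 and 2 arrive; 2 is held until 1 fills the gap.
rx = PdcpReorderingBuffer()
assert rx.receive(0, b"a") == [b"a"]
assert rx.receive(2, b"c") == []        # gap at COUNT 1
assert rx.missing() == [1]
assert rx.receive(1, b"b") == [b"b", b"c"]
```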
Main functions of the NR RLC layer1d-10or1d-35may include at least some of the following functions.
- Transfer of upper layer PDUs
- In-sequence delivery of upper layer PDUs
- Out-of-sequence delivery of upper layer PDUs
- Error Correction through ARQ
- Concatenation, segmentation and reassembly of RLC SDUs
- Re-segmentation of RLC data PDUs
- Reordering of RLC data PDUs
- Duplicate detection
- Protocol error detection
- RLC SDU discard
- RLC re-establishment
The in-sequence delivery of the NR RLC layer1d-10or1d-35refers to a function of sequentially delivering RLC SDUs received from a lower layer to an upper layer. When a single RLC SDU is segmented into multiple RLC SDUs and the multiple RLC SDUs are received, the in-sequence delivery of the NR RLC layer1d-10or1d-35may include at least some of the following functions.
- A function of reassembling the multiple RLC SDUs and delivering the reassembled RLC SDU
- A function of reordering received RLC PDUs on an RLC SN or PDCP SN basis
- A function of recording missing RLC PDUs by reordering the RLC PDUs
- A function of reporting a status of the missing RLC PDUs to a transmitter
- A function of requesting retransmission of the missing RLC PDUs
- A function of delivering only the RLC SDUs previous to a missing RLC SDU to the upper layer in order, when the missing RLC SDU exists
- A function of delivering all RLC SDUs received before a timer is started to the upper layer in order, when a certain timer has expired although a missing RLC SDU exists
- A function of delivering all RLC SDUs received up to a current time to the upper layer in order, when a certain timer has expired although a missing RLC SDU exists
The NR RLC layer1d-10or1d-35may process the RLC PDUs in order of reception regardless of the order of sequence numbers and deliver the RLC PDUs to the NR PDCP layer1d-05or1d-40(out-of-sequence delivery), and, when the NR RLC layer1d-10or1d-35receives segments, the NR RLC layer1d-10or1d-35may receive the segments received later or stored in a buffer, reconfigure the received segments into a whole RLC PDU, and then process and deliver the whole RLC PDU to the NR PDCP layer1d-05or1d-40. The NR RLC layer1d-10or1d-35may not have a concatenation function, and the concatenation function may be performed by the NR MAC layer1d-15or1d-30or be replaced with a multiplexing function of the NR MAC layer1d-15or1d-30. The out-of-sequence delivery of the NR RLC layer1d-10or1d-35refers to a function of directly delivering RLC SDUs received from a lower layer, to an upper layer out of order. The out-of-sequence delivery of the NR RLC layer1d-10or1d-35may include a function of, when a single RLC SDU is segmented into multiple RLC SDUs and the multiple RLC SDUs are received, reassembling and delivering the multiple RLC SDUs, or may include a function of storing RLC SNs or PDCP SNs of received RLC PDUs and recording missing RLC PDUs by ordering the RLC PDUs.
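The segment handling described above can be pictured with the following simplified sketch, in which segments of one RLC SDU are identified by a byte offset and a last-segment flag and are reassembled once every byte is present. The structure and names are assumptions for illustration, not the 3GPP-specified RLC procedure.

```python
# Illustrative sketch (assumed simplification): reassembling one RLC
# SDU from segments that may arrive in any order; segments are held in
# a buffer until the SDU is complete.

from typing import Optional

class RlcSduReassembly:
    def __init__(self):
        self.segments = {}       # offset -> payload bytes
        self.total_len = None    # known once the last segment arrives

    def add_segment(self, offset: int, data: bytes, is_last: bool):
        self.segments[offset] = data
        if is_last:
            self.total_len = offset + len(data)

    def try_reassemble(self) -> Optional[bytes]:
        """Returns the whole SDU if every byte is present, else None."""
        if self.total_len is None:
            return None
        out = bytearray()
        for offset in sorted(self.segments):
            if offset != len(out):       # gap: a segment is still missing
                return None
            out += self.segments[offset]
        return bytes(out) if len(out) == self.total_len else None

# Usage: the middle segment arrives last; reassembly completes only then.
sdu = RlcSduReassembly()
sdu.add_segment(0, b"he", is_last=False)
sdu.add_segment(4, b"o", is_last=True)
assert sdu.try_reassemble() is None      # bytes 2..3 missing
sdu.add_segment(2, b"ll", is_last=False)
assert sdu.try_reassemble() == b"hello"
```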
The NR MAC layer1d-15or1d-30may be connected to multiple NR RLC layers configured for a single UE, and main functions of the NR MAC layer1d-15or1d-30may include at least some of the following functions.
- Mapping between logical channels and transport channels
- Multiplexing/demultiplexing of MAC SDUs
- Scheduling information reporting
- Error correction through HARQ
- Priority handling between logical channels of one UE
- Priority handling between UEs by means of dynamic scheduling
- MBMS service identification
- Transport format selection
- Padding
An NR PHY layer1d-20or1d-25may channel-code and modulate upper layer data into OFDM symbols and transmit the OFDM symbols through a radio channel, or demodulate OFDM symbols received through a radio channel and channel-decode and deliver the OFDM symbols to an upper layer. FIG.1Eis a diagram illustrating a procedure in which a UE switches from an RRC idle mode to an RRC connected mode and configures a connection with a network, according to an embodiment of the disclosure. InFIG.1E, when the UE configured to transmit or receive data in the RRC connected mode does not transmit or receive data for a certain reason or for a certain time, a gNB may transmit an RRCConnectionRelease message to the UE such that the UE switches to the RRC idle mode (operation1e-01). Afterward, when the UE that is not currently configured for connection (hereinafter, also referred to as an idle-mode UE) has data to be transmitted, the UE may perform an RRC connection establishment procedure on the gNB. The UE establishes uplink transmission synchronization with the gNB via a random access process and transmits an RRCSetupRequest message to the gNB (operation1e-05). The RRCSetupRequest message may include, for example, an identifier of the UE and an establishment cause for establishing a connection (e.g., establishmentCause). The gNB transmits an RRCConnectionSetup message to the UE so that the UE configures an RRC connection (operation1e-10). The RRCConnectionSetup message includes configuration information for each service/bearer/RLC layer or each logical channel or each bearer, and may include information about whether to use ROHC for each bearer/logical channel, ROHC configuration information (e.g., a ROHC version, initial information, etc.), statusReportRequired information (information with which the gNB indicates a PDCP Status report to the UE), and drb-ContinueROHC information (configuration information indicating to continue and changelessly use ROHC configuration information, which may be transmitted by being included in PDCP layer configuration information (pdcp-config)). The RRCConnectionSetup message may also include RRC connection configuration information. A bearer for RRC connection is called a signaling radio bearer (SRB) and is used in transmission and reception of an RRC message that is a control message between the UE and the gNB. The UE that has configured RRC connection transmits an RRCConnectionSetupComplete message to the gNB (operation1e-15). The RRCConnectionSetupComplete message may include a control message such as a SERVICE REQUEST message indicating that the UE requests an access mobility management function (AMF) or a mobility management entity (MME) for bearer configuration for a certain service. The gNB transmits the SERVICE REQUEST message included in the RRCConnectionSetupComplete message to the AMF or the MME (operation1e-20), and the AMF or the MME determines whether to provide the service requested by the UE.
When it is determined that the service requested by the UE is to be provided, the MME or the AMF transmits an INITIAL CONTEXT SETUP REQUEST message to the gNB (operation1e-25). The INITIAL CONTEXT SETUP REQUEST message includes, for example, QoS information that is to be applied during DRB configuration, and security-related information (e.g., a Security Key and a Security Algorithm) that is to be applied to the DRB. When the gNB does not receive UE capability information from the MME or the AMF, the gNB may transmit a UE capability information request message to the UE so as to check the UE capability information (operation1e-26). When the UE receives the UE capability information request message, the UE may configure, generate, and report a UE capability information message to the gNB (operation1e-27). The UE capability information message may include information about which types of handover methods are supported by the UE. For example, the UE may report a UE capability to the gNB via an indicator indicating whether or not the UE supports an efficient handover method (i.e., a Dual Active Protocol Stack (DAPS) handover method) proposed in the disclosure. When the gNB checks the UE capability information and then indicates handover to the UE, the gNB may define an indicator indicating handover in a handover command message, according to each of the handover methods, and may indicate the handover to the UE. For example, the gNB may indicate the efficient handover method (the DAPS handover method) proposed in the disclosure to the UE, or may configure the DAPS handover method for each bearer (a DRB or a SRB) of the UE. When the gNB configures the DAPS handover method to the UE, the gNB also indicates other handover methods (e.g., a conditional handover method (method in which, when configurations of a plurality of target cells and a plurality of conditions are configured to the UE and the UE satisfies the conditions in a cell selection procedure or a cell reselection procedure, the UE performs a handover procedure on one target cell) or a handover method without a random access procedure), thereby preventing data loss or a transmission delay which may occur in handover. The UE may perform a handover procedure on a target gNB according to the handover method indicated in the handover command message. To configure security with the UE, the gNB exchanges a SecurityModeCommand message (operation1e-30) and a SecurityModeComplete message (operation1e-35). When security configuration is complete, the gNB transmits an RRCConnectionReconfiguration message to the UE (operation1e-40). The RRCConnectionReconfiguration message includes configuration information for each service/bearer/RLC layer or each logical channel or each bearer, and may include information about whether to use ROHC for each bearer/logical channel, ROHC configuration information (e.g., a ROHC version, initial information, etc.), statusReportRequired information (information with which the gNB indicates a PDCP Status report to the UE), and drb-ContinueROHC information (configuration information indicating to continue and changelessly use ROHC configuration information, which may be transmitted by being included in PDCP layer configuration information (pdcp-config)). The RRCConnectionReconfiguration message may also include RRC connection configuration information. 
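For illustration, the per-DRB PDCP configuration fields named above (ROHC usage, statusReportRequired, drb-ContinueROHC) might be modeled as in the sketch below; the field and class names loosely follow the text and are only approximations, not the actual RRC ASN.1 definitions.

```python
# Rough sketch of the per-DRB PDCP configuration (pdcp-config) fields
# named above, modeled as plain data classes rather than the real RRC
# ASN.1 structure; names are approximations for this example.

from dataclasses import dataclass
from typing import Optional

@dataclass
class RohcConfig:
    version: str                  # e.g., an assumed ROHC version label
    initial_info: Optional[dict] = None

@dataclass
class PdcpConfig:
    drb_id: int
    use_rohc: bool                        # whether ROHC applies to this DRB
    rohc: Optional[RohcConfig] = None
    status_report_required: bool = False  # gNB requests a PDCP status report
    drb_continue_rohc: bool = False       # keep ROHC context unchanged

def apply_pdcp_config(cfg: PdcpConfig) -> str:
    """Returns a human-readable summary of how the DRB would be set up."""
    parts = [f"DRB{cfg.drb_id}"]
    if cfg.use_rohc and cfg.rohc:
        parts.append("continue ROHC context" if cfg.drb_continue_rohc
                     else f"start ROHC ({cfg.rohc.version})")
    if cfg.status_report_required:
        parts.append("send PDCP status report")
    return ", ".join(parts)

print(apply_pdcp_config(PdcpConfig(drb_id=1, use_rohc=True,
                                   rohc=RohcConfig(version="ROHCv1"),
                                   drb_continue_rohc=True,
                                   status_report_required=True)))
```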
A bearer for RRC connection is called a signaling radio bearer (SRB) and is used in transmission and reception of an RRC message that is a control message between the UE and the gNB. The RRCConnectionReconfiguration message may include configuration information of a DRB in which user data is to be processed, and the UE configures a DRB by applying the configuration information and transmits an RRCConnectionReconfigurationComplete message to the gNB (operation1e-45). When configuration of the DRB with the UE is complete, the gNB transmits an INITIAL CONTEXT SETUP COMPLETE message to the MME or the AMF (operation1e-50). In response to the INITIAL CONTEXT SETUP COMPLETE message, the MME or the AMF exchanges an S1BEARER SETUP message and an S1BEARER SETUP RESPONSE message with an S-GW to configure an S1bearer (operations1e-55and1e-60). The S1bearer is a connection for data transmission, which is configured between the S-GW and the gNB, and corresponds to the DRB in a one-to-one manner. When all of these processes are completed, the UE transmits or receives data to or from the gNB via the S-GW (operations1e-65and1e-70). As such, a general data transmission process roughly includes three operations of RRC connection configuration, security configuration, and DRB configuration. The gNB may transmit an RRCConnectionReconfiguration message to the UE in order to renew, add, or change configuration for a certain reason (operation1e-75). In the disclosure, a bearer may include an SRB and a DRB, wherein the SRB means a signaling radio bearer and the DRB means a data radio bearer. The SRB is mainly used to transmit and receive an RRC message of an RRC layer, and the DRB is mainly used to transmit and receive a plurality of items of user layer data (or user plane data). An unacknowledged mode (UM) DRB means a DRB that uses an RLC layer operating in a UM, and an acknowledged mode (AM) DRB means a DRB that uses an RLC layer operating in an AM. FIG.1Fis a diagram illustrating signaling procedures for performing handover in a next-generation wireless communication system, according to an embodiment of the disclosure. A UE1f-01in an RRC connected mode state reports a cell measurement report to a current source gNB1f-02in a periodic manner or when a particular event is satisfied (operation1f-05). The source gNB1f-02determines, based on the cell measurement report, whether the UE1f-01is to perform handover to an adjacent cell. The handover refers to a technology of switching a source BS to another BS (or another cell in the same BS), the source BS providing a service to a UE in a connected mode state. When the source gNB1f-02determines handover, the source gNB1f-02requests the handover by transmitting a handover request message (e.g., a handover preparation information message) to a new BS to provide a service to the UE1f-01, that is, to a target gNB1f-03(operation1f-10). When the target gNB1f-03accepts the handover request, the target gNB1f-03transmits a handover request acknowledgement (Ack) message (e.g., a handover command message) to the source gNB1f-02(operation1f-15). In response to the handover request Ack message, the source gNB1f-02transmits, to the UE1f-01, the handover command message (an RRCReconfiguration message included in a Dedicated Control Channel (DCCH) of the handover request Ack message) (operation1f-20).
The source gNB1f-02extracts the handover command message from the handover request Ack message received from the target gNB1f-03and transmits the handover command message to the UE1f-01by using an RRC Connection Reconfiguration message (operation1f-20). In the disclosure, provided is a method of determining an efficient DAPS handover method by using the handover preparation information message (operation1f-10) and the handover command message (operation1f-15) when the source gNB1f-02transmits the handover preparation information message (operation1f-10) and, in response thereto, the target gNB1f-03transmits the handover command message (operation1f-15) to the source gNB1f-02. Embodiment 1 of determining an efficient DAPS handover method, which is provided in the disclosure, will now be described. In Embodiment 1 of the disclosure, an entity for determining a DAPS handover method may be a source BS. Also, in Embodiment 1 of the disclosure, in a case where the source BS requests a target BS for the DAPS handover method, the target BS may always indicate or perform the DAPS handover method. The source BS may indicate, to the target BS and by defining a new indicator in the handover preparation information message, that the source BS is to perform the DAPS handover method proposed in the disclosure, and may request the DAPS handover method. The handover preparation information message may include current bearer configuration information of a UE, security key information, cell group configuration information, UE capability information, or the like. The source BS is configured to pre-share a capability of the target BS and thus may know in advance whether the target BS supports the DAPS handover method. The source BS may indicate, to the target BS, that the source BS is to perform the DAPS handover method, may indicate, to the target BS, that the source BS may perform data forwarding early or quickly, and may indicate the target BS to prepare to receive the forwarded data and process it quickly. The source BS may make a request for the DAPS handover method for each bearer (a DRB or an SRB). In a case where the target BS receives the handover preparation information message and identifies that an indicator requesting the DAPS handover method is included therein, when the target BS configures an RRCReconfiguration message to indicate handover to the UE, the target BS may add, to the RRCReconfiguration message, an indicator requesting the DAPS handover method, bearer configuration information required for the UE to perform the DAPS handover method, bearer configuration information, security key information, cell group configuration information, or system information. Also, the target BS may add the RRCReconfiguration message to a DL-DCCH message of a handover command message and may transmit the handover command message to the source BS. The target BS may perform indication of the DAPS handover method for each bearer (a DRB or an SRB). When the source BS receives the handover command message, the source BS may extract the RRCReconfiguration message included in the handover command message and may transmit the RRCReconfiguration message to the UE, and thus may indicate handover. The source BS may identify the indicated DAPS handover method for each bearer, and may perform the DAPS handover method for each bearer (a DRB or a SRB). Embodiment 2 of determining an efficient DAPS handover method, which is provided in the disclosure, will now be described.
In Embodiment 2 of the disclosure, an entity for determining a DAPS handover method may be a target BS. Also, in Embodiment 2 of the disclosure, in a case where a source BS requests the target BS for the DAPS handover method through an indicator, the target BS may reject or accept the request or may indicate another handover method to the source BS via a handover command message indicating the other handover method. The source BS may indicate, to the target BS and by defining a new indicator in the handover preparation information message, that the source BS is to perform the DAPS handover method proposed in the disclosure, and may request the DAPS handover method. The handover preparation information message may include current bearer configuration information of a UE, security key information, cell group configuration information, UE capability information, or the like. The source BS is configured to pre-share a capability of the target BS and thus may know in advance whether the target BS supports the DAPS handover method. The source BS may indicate, to the target BS, that the source BS is to perform the DAPS handover method, may indicate, to the target BS, that the source BS may perform early data forwarding quickly, and may indicate the target BS to prepare to receive the forwarded data and process it quickly. The source BS may make a request for the DAPS handover method for each bearer (a DRB or an SRB). In a case where the target BS receives the handover preparation information message and identifies that an indicator requesting the DAPS handover method is included therein, the target BS may reject or accept the request for the DAPS handover method or may indicate another handover method to the source BS, based on whether the target BS can support the DAPS handover method, an amount of current transmit resources, or scheduling. The target BS may add, to a handover command message, an indicator to reject the request for the DAPS handover method, an indicator to accept the request for the DAPS handover method, or an indicator to indicate the other handover method, and may transmit the handover command message. In a case where the target BS accepts the DAPS handover request when configuring an RRCReconfiguration message to indicate handover to the UE, the target BS may configure the RRCReconfiguration message by including the indicator indicating the DAPS handover method in the RRCReconfiguration message. In a case where the target BS rejects the DAPS handover request, the target BS may configure the RRCReconfiguration message by including an indicator indicating another handover method in the RRCReconfiguration message and including, in the RRCReconfiguration message, bearer configuration information necessary for the UE to perform the DAPS handover method or the other handover method, bearer configuration information, security key information, cell group configuration information, or system information. Also, the target BS may add the RRCReconfiguration message to a DL-DCCH message of a handover command message and may transmit the handover command message to the source BS. The target BS may perform indication of the DAPS handover method for each bearer (a DRB or an SRB). When the source BS receives the handover command message, the source BS may check an indicator included in the handover command message and thus may identify whether the request for the DAPS handover method is accepted or rejected.
When the request for the DAPS handover method is accepted, the source BS may also perform the DAPS handover method, may extract the RRCReconfiguration message included in the handover command message, may transmit the RRCReconfiguration message to the UE, and thus may indicate handover. When the source BS checks the indicator included in the handover command message and the request for the DAPS handover method is rejected or another handover method is indicated, the source BS may perform the other handover method indicated by the target BS. The source BS may extract the RRCReconfiguration message included in the handover command message and may transmit the RRCReconfiguration message to the UE, and thus may indicate handover. As another method, even when a separate indicator is not present in the handover command message, the source BS may check a type of a handover message indicated by the target BS by reading the RRCReconfiguration message included in the handover command message, and may identify whether the request for the DAPS handover method is accepted or rejected. The source BS may also perform a handover method (e.g., the DAPS handover method or the other handover method) indicated in the RRCReconfiguration message. The source BS may identify the indicated DAPS handover method for each bearer, and may perform the DAPS handover method for each bearer (a DRB or a SRB). Embodiment 3 of determining an efficient DAPS handover method, which is provided in the disclosure, will now be described. In Embodiment 3 of the disclosure, an entity for determining a DAPS handover method is a target BS. Also, in Embodiment 3 of the disclosure, the target BS may check the capability of a UE, and may determine a handover method (e.g., a DAPS handover method) according to whether the target BS can support the DAPS handover method, an amount of current transmit resources, or scheduling. A source BS may add, to the handover preparation information message, current bearer configuration information of a UE, security key information, cell group configuration information, UE capability information, or the like, and may transmit the handover preparation information message to request the target BS for handover. The source BS is configured to pre-share a capability of the target BS and thus may know in advance whether the target BS supports the DAPS handover method. When the target BS indicates that the DAPS handover method is to be performed, the source BS may perform data forwarding early or quickly. The target BS may receive the handover preparation information message, and may determine the handover method (e.g., the DAPS handover method) according to UE capability information, whether the target BS can support the DAPS handover method, an amount of current transmit resources, or scheduling. When the target BS determines the DAPS handover method for the handover command message, the target BS may add, to the handover command message, an indicator indicating the DAPS handover method, and may transmit the handover command message. In a case where the target BS determines the DAPS handover when configuring an RRCReconfiguration message to indicate handover to the UE, the target BS may configure the RRCReconfiguration message by including the indicator indicating the DAPS handover method in the RRCReconfiguration message.
In a case where the target BS determines another handover method different from the DAPS handover, the target BS may configure the RRCReconfiguration message by including an indicator indicating the other handover method in the RRCReconfiguration message and including, in the RRCReconfiguration message, bearer configuration information necessary for the UE to perform the DAPS handover method or the other handover method, bearer configuration information, security key information, cell group configuration information, or system information. Also, the target BS may add the RRCReconfiguration message to a DL-DCCH message of a handover command message and may transmit the handover command message to the source BS. The target BS may perform indication of the DAPS handover method for each bearer (a DRB or an SRB). When the source BS receives the handover command message, the source BS may check an indicator included in the handover command message and thus may identify whether the DAPS handover method is determined. When the DAPS handover method is indicated, the source BS may perform the DAPS handover method, may extract the RRCReconfiguration message included in the handover command message, may transmit the RRCReconfiguration message to the UE, and thus may indicate handover. When the source BS checks the indicator included in the handover command message and the DAPS handover method is not determined or another handover method is indicated, the source BS may perform the other handover method indicated by the target BS. The source BS may extract the RRCReconfiguration message included in the handover command message and may transmit the RRCReconfiguration message to the UE, and thus may indicate handover. As another method, even when a separate indicator is not present in the handover command message, the source BS may check a type of a handover message indicated by the target BS by reading the RRCReconfiguration message included in the handover command message, and may identify whether the DAPS handover method is determined. When the other handover method is indicated, the source BS may perform the indicated other handover method. The source BS may identify the indicated DAPS handover method for each bearer, and may perform the DAPS handover method for each bearer (a DRB or a SRB). A new embodiment may be derived by combining methods of Embodiment 1, Embodiment 2, or Embodiment 3 of determining an efficient DAPS handover method proposed in the disclosure. A BS may indicate, via the RRCReconfiguration message, an efficient handover method (DAPS handover method) proposed in the disclosure to the UE, or in another method, the BS may configure the DAPS handover method for each bearer (a DRB or an SRB) of the UE. When the BS configures the DAPS handover method to the UE, the BS also indicates other handover methods (e.g., a conditional handover method (a method in which, when configurations of a plurality of target cells and a plurality of conditions are configured to the UE and the UE satisfies the conditions in a cell selection procedure or a cell reselection procedure, the UE performs a handover procedure on one target cell) or a handover method without a random access procedure), thereby preventing data loss or a transmission delay which may occur in handover. In response to the RRCReconfiguration message, the UE1f-01discontinues data transmission and reception to and from the source gNB1f-02and starts the T304timer.
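The role of the T304timer, including the expiry behavior elaborated in the next paragraph, can be sketched as follows; the timeout value and callback names are illustrative assumptions, not values taken from the specification.

```python
# Minimal sketch of T304 handover supervision: the timer starts when
# the handover command is received, stops on successful completion
# toward the target cell, and on expiry the UE reverts to its original
# configuration and falls back to RRC idle. Illustration only.

import threading

class T304:
    def __init__(self, timeout_s: float, on_expiry):
        self.timeout_s = timeout_s
        self.on_expiry = on_expiry
        self._timer = None

    def start(self):                     # on reception of handover command
        self.stop()
        self._timer = threading.Timer(self.timeout_s, self.on_expiry)
        self._timer.start()

    def stop(self):                      # on successful RAR / HO completion
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

def handover_failed():
    print("T304 expired: revert to source configuration, enter RRC idle")

t304 = T304(timeout_s=0.15, on_expiry=handover_failed)
t304.start()                  # handover command received
# ... random access to the target cell proceeds ...
t304.stop()                   # RAR received in time: handover succeeds
```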
When the UE1f-01cannot succeed in handover to the target gNB1f-03for a preset time, the UE1f-01returns to its original configuration, and the UE1f-01transitions to an RRC idle state. The source gNB1f-02provides a sequence number (SN) status of UL/DL data, and, when DL data is present, the source gNB1f-02transmits the DL data or the UL data to the target gNB1f-03(operations1f-30and1f-35). The UE1f-01attempts a random access to a target cell (e.g., target gNB1f-03) indicated by the source gNB1f-02(operation1f-40). The UE1f-01performs the random access to notify switching of the UE1f-01to the target cell via the handover and simultaneously to match UL synchronization. For the random access, the UE1f-01transmits, to the target cell, a preamble that corresponds to a preamble ID provided by the source gNB1f-02or corresponds to a randomly-selected preamble. After a certain number of subframes after the preamble is transmitted, the UE1f-01monitors whether a Random Access Response (RAR) message is transmitted from the target cell. A time interval for monitoring the RAR message is called a RAR window. When the RAR message is received during the RAR window (operation1f-45), the UE1f-01transmits a handover complete message in an RRC Reconfiguration Complete message to the target gNB1f-03(operation1f-55). When the UE1f-01successfully receives the RAR message from the target gNB1f-03, the UE1f-01stops the T304timer (operation1f-50). To switch a path of bearers which is configured for the source gNB1f-02, the target gNB1f-03requests a core network1f-04(e.g., MME/S-GW/AMF) for a path switch of the bearers (operations1f-60and1f-65), and indicates the source gNB1f-02to discard UE context of the UE1f-01(operation1f-70). The target gNB1f-03may transmit an RRC message (e.g., an RRCReconfiguration message1f-71) to the UE1f-01and may indicate, by using an indicator, the UE1f-01to release connection with the source gNB1f-02. As another method, the target gNB1f-03may transmit MAC control information, RLC control information, or PDCP control information to the UE1f-01and thus may indicate the UE1f-01to release connection with the source gNB1f-02. Accordingly, the UE1f-01attempts, at a start point of the RAR window, to receive data from the target gNB1f-03, and after the RAR message is received, the UE1f-01transmits an RRC Reconfiguration Complete message and receives a DL transmit resource or a UL transmit resource, thereby starting data transmission and reception to and from the target gNB1f-03. In the disclosure, provided are non-interruption handover methods capable of minimizing a data interruption time due to handover or making the data interruption time become 0 ms in the next-generation wireless communication system. A UE may configure a plurality of first bearers with a source BS and may perform data transmission and reception (UL or DL data transmission and reception) via protocol layers (a PHY layer, a MAC layer, a RLC layer, a PDCP layer, or the like) of each of the plurality of first bearers. However, in the disclosure, for convenience of description, it is assumed, in drawings and descriptions, that the UE has one bearer. FIG.1Gillustrates particular operations of Embodiment 1 of the efficient handover method for minimizing a data interruption time due to handover, according to an embodiment of the disclosure.
In Embodiment 1 of the efficient handover method ofFIG.1G, a UE1g-20may transmit or receive data to or from a source BS1g-05in first operation1g-01and then receive a handover command message from the source BS1g-05. Upon receiving the handover command message (e.g., an RRCReconfiguration message), the UE1g-20may, according to the handover method indicated by the handover command message, release connection with the source BS1g-05, perform a random access procedure on a target BS1g-10, and perform a handover procedure. According to an embodiment, to minimize a data interruption time occurring during handover based on the indicated handover method, the UE1g-20may continuously transmit and receive data to and from the source BS1g-05. According to Embodiment 1 of the efficient handover method ofFIG.1G, in second operation1g-02, when the UE1g-20performs the random access procedure on the target BS1g-10by using the handover method indicated by the handover command message received from the source BS1g-05, transmits a preamble to the target BS1g-10, or initially transmits data in a UL transmit resource by using a PUCCH or PUSCH transmit resource, the UE1g-20may discontinue data transmission and reception (UL data transmission and DL data reception) to and from the source BS1g-05. According to Embodiment 1 of the efficient handover method ofFIG.1G, in third operation1g-03, the UE1g-20may complete the random access procedure with respect to the target BS1g-10, may transmit a handover complete message to the target BS1g-10, and may start data transmission and reception (UL data transmission and DL data reception) to and from the target BS1g-10. FIG.1Hillustrates particular operations of Embodiment 2 of the efficient handover method for minimizing a data interruption time due to handover, according to an embodiment of the disclosure. In Embodiment 2 of the efficient handover method ofFIG.1H, a UE1h-20may transmit or receive data to or from a source BS1h-05in first operation1h-01and then receive a handover command message from the source BS1h-05. When the source BS1h-05indicates, in the handover command message, the efficient handover method according to Embodiment 2 of the disclosure (e.g., a DAPS handover method) or indicates the efficient handover method for each bearer, even when the UE1h-20has received the handover command message, the UE1h-20may continuously transmit and receive data to and from the source BS1h-05via protocol layers1h-22of a first bearer so as to minimize a data interruption time occurring during handover. When the RRC layer of the UE1h-20identifies, in the handover command message, an indication with respect to the efficient handover method according to Embodiment 2 of the disclosure (e.g., the DAPS handover method) or identifies an identifier with respect to the DAPS handover method for each bearer, the RRC layer may provide the indicator to a PDCP layer corresponding to each bearer or a bearer for which the DAPS handover method is indicated. In response to the indicator, the PDCP layer may switch a first PDCP layer architecture1i-11or1i-12(seeFIG.1I) to a second PDCP layer architecture1i-20(seeFIG.1I). First operation1h-01ofFIG.1Hmay be described as an operation in which the UE1h-20receives a handover command message (RRCReconfiguration message) from a BS.
When the PDCP layer transitions to the second PDCP layer architecture1i-20according to configuration included in the received handover command message, protocol layers (a PHY layer, a MAC layer, a RLC layer, or a PDCP layer)1h-21of a second bearer for a target BS1h-10may be pre-configured or pre-established, a security key for the target BS1h-10may be derived and updated, and header (or data) compression context for the target BS1h-10may be configured. The UE1h-20may receive the handover command message from the source BS1h-05. When the handover command message indicates the DAPS handover method proposed in the disclosure, when the handover command message indicates a DAPS handover method for particular bearers, or when a PDCP realignment timer value is newly configured, the UE1h-20may switch a PDCP layer from the first PDCP layer architecture or function1i-11or1i-12to the second PDCP layer architecture or function1i-20proposed in the disclosure with respect to each bearer or a bearer for which the DAPS handover method is indicated. In this case, the UE1h-20may update a variable for realignment to a PDCP SN or COUNT value which is predicted to be received next, may stop a realignment timer, and may restart the realignment timer. The handover command message may be configured and established such that a second bearer has the same identifier as a first bearer so that a data interruption time does not occur in each bearer. In Embodiment 2 of the disclosure, a PDCP layer of a first bearer and a PDCP layer of a second bearer may logically operate as one PDCP layer, and detailed descriptions about the operation will now be provided with reference toFIG.1I. In Embodiment 2 of the disclosure, when the UE1h-20is configured to transmit UL data to both the source BS1h-05and the target BS1h-10, to avoid a coverage reduction problem due to insufficient transmission power of the UE1h-20or to prevent link selection by which, when the UE1h-20transmits UL data, the UE1h-20has to determine to which BS the UE1h-20has to request a transmit resource and to transmit the UL data, the transmission of the UL data in Embodiment 2 of the disclosure may be performed toward only one of the source BS1h-05and the target BS1h-10. Therefore, the UE1h-20may perform a scheduling request to only one of the source BS1h-05or the target BS1h-10, may transmit a report (e.g., a buffer status report) about a size of a plurality of items of data to be transmitted by the PDCP layer to only one of the source BS1h-05or the target BS1h-10, may receive a UL transmit resource, and thus may transmit UL data to only one BS. Also, even when the UE1h-20receives a handover command message from the source BS1h-05, the UE1h-20may not initialize a MAC layer of a first bearer, so as to prevent data loss by continuing data transmission and reception through HARQ retransmission. Also, a RLC layer in an AM mode may continuously perform RLC retransmission. In Embodiment 2 of the efficient handover method ofFIG.1H, in second operation1h-02, even when performing the random access procedure on the target BS1h-10indicated by the handover command message via the protocol layers of the second bearer, the UE1h-20may continue data transmission or reception (UL data transmission or DL data reception) to or from the source BS1h-05via the protocol layers1h-22of the first bearer.
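The single-UL, dual-DL behavior described above might be sketched as follows: one logical PDCP entity sits on two bearer legs, accepts DL data from either leg, but issues scheduling requests, buffer status reports, and UL data toward exactly one leg at a time. The classes below are illustrative stand-ins, not the patented implementation.

```python
# Sketch (assumed names) of one logical PDCP entity over two bearer
# legs: DL is accepted from both legs, UL and buffer status go to
# exactly one leg, and the UL leg is switched when the first condition
# is satisfied.

class Leg:
    """Stub for one bearer leg (the RLC/MAC/PHY stack toward one BS)."""
    def __init__(self, name):
        self.name = name
    def transmit(self, count, sdu):
        print(f"{self.name}: UL PDU {count}")
    def buffer_status(self, volume):
        print(f"{self.name}: buffer status {volume}")

class DapsPdcp:
    def __init__(self, source_leg, target_leg):
        self.legs = {"source": source_leg, "target": target_leg}
        self.ul_leg = "source"        # UL stays on the source leg at first
        self.dl_pdus = {}             # COUNT -> PDU (reordering omitted)

    def send(self, count, sdu):
        """UL goes to the single active leg only."""
        self.legs[self.ul_leg].transmit(count, sdu)

    def report_data_volume(self, volume):
        """Scheduling request / buffer status goes only to the UL leg."""
        self.legs[self.ul_leg].buffer_status(volume)

    def deliver_dl(self, count, pdu):
        """DL is accepted from either leg into one shared receive state."""
        self.dl_pdus.setdefault(count, pdu)

    def switch_ul_to_target(self):
        """Called when the first condition (first UL grant) is satisfied."""
        self.legs[self.ul_leg].buffer_status(0)  # tell source: nothing left
        self.ul_leg = "target"

pdcp = DapsPdcp(Leg("source"), Leg("target"))
pdcp.send(0, b"sdu0")            # goes to the source leg
pdcp.switch_ul_to_target()       # first condition satisfied
pdcp.send(1, b"sdu1")            # subsequent UL goes to the target leg
```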
Second operation1h-02may be described as an operation in which the UE1h-20performs a cell selection procedure or a cell reselection procedure, and performs a random access procedure on a target cell indicated by the handover command message (an RRCReconfiguration message) received from the source BS1h-05. In Embodiment 2 of the efficient handover method ofFIG.1H, when the first condition to be described below is satisfied in third operation1h-03, the UE1h-20may discontinue UL data transmission to the source BS1h-05via the protocol layers1h-22of the first bearer and may transmit the UL data to the target BS1h-10via the protocol layers1h-21of the second bearer. In this regard, the UE1h-20may continuously receive DL data from the source BS1h-05and the target BS1h-10via the protocol layers1h-22of the first bearer and the protocol layers1h-21of the second bearer. Third operation1h-03may be an operation in which the UE1h-20satisfies the first condition and thus switches UL transmission from the source BS1h-05to the target BS1h-10. In detail, the UE1h-20may transmit UL data to the source BS1h-05via the first bearer until the UE1h-20satisfies the first condition, and, when the UE1h-20satisfies the first condition, the UE1h-20may discontinue transmission of the UL data to the source BS1h-05via the first bearer, and start transmission of the UL data to the target BS1h-10via the second bearer. Also, as in the PDCP layer structure proposed with reference toFIG.1I, a reception PDCP layer1h-21of the second bearer and a reception PDCP layer1h-22of the first bearer may operate as one entity, and the PDCP layer structure proposed with reference toFIG.1Imay continuously receive data from the source BS1h-05or the target BS1h-10without interruption by using stored transceived data, SN information, or information such as header compression and decompression context. The first condition may be one of the conditions below. The first conditions below propose a UL data transmission switching time point at which a transmit resource may be used most efficiently and a data interruption time may be minimized.
- It may be determined that the first condition is satisfied in a case where the UE successfully completes the random access procedure on the target BS via the layers (e.g., the MAC layer) of the second bearer and receives allocation of a first UL transmit resource from the target BS, or a case where a UL transmit resource is first indicated to the UE. For example, in a case where the UE receives a handover command message from the source BS and receives an indication of a random access to the target BS, when the indicated random access is a Contention Free Random Access (CFRA) (e.g., when a predefined preamble or a UE-cell identifier (e.g., a Cell-Radio Network Temporary Identifier (C-RNTI)) is allocated), it may be determined that the random access procedure is successfully completed when the UE transmits the predefined preamble to a cell of the target BS and receives a RAR message. Therefore, when the UE receives a first UL transmit resource allocated, or included, or indicated in the RAR message, it may be determined that the first condition is satisfied. As another method, when the UE first receives a UL transmit resource after the UE receives the RAR message, it may be determined that the first condition is satisfied.
- In a case where the UE receives a handover command message from the source BS and receives an indication of a random access to the target BS, when the indicated random access is a Contention-Based Random Access (CBRA) (e.g., when a predefined preamble or a UE-cell identifier (e.g., a C-RNTI) is not allocated), it may be determined that the random access procedure on the target BS is successfully completed when the UE transmits a preamble (e.g., a random preamble) to a cell of the target BS, receives a RAR message, transmits a message3(e.g., a handover complete message) to the target BS by using a UL transmit resource allocated, or included, or indicated in the RAR message, and receives, from the target BS, a Contention Resolution MAC CE indicating resolution of contention. Therefore, when the UE monitors the PDCCH and first receives or is first indicated with the UL transmit resource via the PDCCH corresponding to the C-RNTI of the UE, it may be determined that the first condition is satisfied. As another method, when a size of the UL transmit resource allocated in the RAR message is sufficient and thus the UE can transmit the message3and additionally transmit UL data, the UE may determine that the UE first receives a UL transmit resource and thus may determine that the first condition is satisfied. In other words, when the UE receives a RAR message, the UE may determine that the UE first receives the UL transmit resource and thus may determine that the first condition is satisfied.
- When a handover method (RACH-less handover) that does not require a random access procedure is also indicated in the handover command message received by the UE, and when the handover command message includes a UL transmit resource with respect to the target BS, the UE transmits a message3(e.g., a handover complete message or an RRCReconfigurationComplete message) by using the UL transmit resource with respect to the target BS, and when the UE receives, from the target BS, a UE Identity Confirmation MAC CE, it may be determined that a random access procedure is successfully completed and the first condition is satisfied. As another method, when the random access procedure is successfully completed and then the UE performs PDCCH monitoring and receives a first UL transmit resource via the PDCCH corresponding to the C-RNTI of the UE, it may be determined that the first condition is satisfied. When the handover command message does not include the UL transmit resource with respect to the target BS, the UE performs PDCCH monitoring on the target BS (or a cell), and when the UE receives a UL transmit resource via the PDCCH corresponding to the C-RNTI of the UE, or transmits a message3(e.g., a handover complete message or an RRCReconfigurationComplete message) by using the UL transmit resource and receives a UE Identity Confirmation MAC CE from the target BS, it may be determined that a random access procedure is successfully completed and the first condition is satisfied. As another method, when the random access procedure is successfully completed and then the UE performs PDCCH monitoring and receives a first UL transmit resource via the PDCCH corresponding to the C-RNTI of the UE, it may be determined that the first condition is satisfied.
Hereinafter, provided is a method of efficiently switching UL data from a source BS to a target BS, the method being performed when the DAPS handover method proposed in the disclosure is performed.
A MAC layer of a second bearer for the target BS may check or identify whether the first condition corresponding to the second bearer is satisfied, by using one or a combination of the methods to be described below.First method: For example, when an RRCReconfiguration message received by the UE indicates DAPS handover, the UE may configure the MAC layer for the target BS corresponding to the second bearer, and the MAC layer may perform a random access procedure and may identify whether the first condition is satisfied. When the first condition is satisfied, the MAC layer may indicate, by using an indicator, an upper layer (e.g., a PDCP layer) to switch UL data transmission from the source BS via a first bearer to the target BS via the second bearer in the DAPS handover method proposed in the disclosure.Second method: As another method, for example, when an RRCReconfiguration message received by the UE indicates DAPS handover, the UE may configure the MAC layer for the target BS corresponding to the second bearer, and the MAC layer may perform a random access procedure and may identify whether the first condition is satisfied. When the first condition is satisfied, the MAC layer may indicate, to an upper layer (e.g., an RRC layer), that the first condition is satisfied. The upper layer (e.g., the RRC layer) may indicate, by using an indicator, a lower layer (e.g., a PDCP layer) to switch UL data transmission from the source BS via a first bearer to the target BS via the second bearer in the DAPS handover method proposed in the disclosure.Third method: When an RRCReconfiguration message received by the UE indicates DAPS handover, the UE may configure the MAC layer for the target BS corresponding to the second bearer, and when the RRC layer of the UE indicates, by using an indicator, a lower layer (e.g., the MAC layer) to perform the DAPS handover, the MAC layer may perform a random access procedure and may check whether the first condition is satisfied. When the first condition is satisfied, the MAC layer may indicate, by using an indicator, an upper layer (e.g., a PDCP layer) to switch UL data transmission from the source BS via a first bearer to the target BS via the second bearer in the DAPS handover method proposed in the disclosure.Fourth method: As another method, when an RRCReconfiguration message received by the UE indicates DAPS handover, the UE may configure the MAC layer for the target BS corresponding to the second bearer, and when the RRC layer of the UE indicates, by using an indicator, a lower layer (e.g., the MAC layer) to perform the DAPS handover, the MAC layer may perform a random access procedure and may check whether the first condition is satisfied. When the first condition is satisfied, the MAC layer may indicate, to an upper layer (e.g., an RRC layer), that the first condition is satisfied. The upper layer (e.g., the RRC layer) may indicate, by using an indicator, a lower layer (e.g., a PDCP layer) to switch UL data transmission from the source BS via a first bearer to the target BS via the second bearer in the DAPS handover method proposed in the disclosure. 
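For illustration only, the four indication paths above differ mainly in whether the MAC layer notifies the PDCP layer directly (first and third methods) or relays the indication through the RRC layer (second and fourth methods). The following is a minimal sketch of that signalling in Python; the class and method names are assumptions made for this sketch and are not part of the disclosure or of any 3GPP-defined interface.

class PdcpLayer:
    def __init__(self):
        self.uplink_target = "source"   # UL data initially goes to the source BS

    def switch_uplink_to_target(self):
        # Switch UL data transmission from the first bearer (source BS)
        # to the second bearer (target BS).
        self.uplink_target = "target"

class RrcLayer:
    """Relay used by the second and fourth methods."""
    def __init__(self, pdcp):
        self.pdcp = pdcp

    def on_first_condition_satisfied(self):
        # The RRC layer forwards the indication down to the PDCP layer.
        self.pdcp.switch_uplink_to_target()

class MacLayer:
    """MAC entity of the second bearer for the target BS."""
    def __init__(self, pdcp, rrc, indicate_via_rrc):
        self.pdcp = pdcp
        self.rrc = rrc
        self.indicate_via_rrc = indicate_via_rrc  # False: methods 1/3, True: methods 2/4

    def on_first_ul_grant_after_random_access(self):
        # The first condition is satisfied once the random access procedure
        # completes and a first UL grant is received (or first indicated).
        if self.indicate_via_rrc:
            self.rrc.on_first_condition_satisfied()   # second / fourth method
        else:
            self.pdcp.switch_uplink_to_target()       # first / third method

pdcp = PdcpLayer()
rrc = RrcLayer(pdcp)
mac = MacLayer(pdcp, rrc, indicate_via_rrc=False)
mac.on_first_ul_grant_after_random_access()
assert pdcp.uplink_target == "target"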
When the PDCP layer receives an indicator indicating that the first condition is satisfied or an indicator indicating switching of UL data transmission from the source BS to the target BS, from the upper layer (e.g., the RRC layer) or the lower layer (e.g., the MAC layer) according to the first method, the second method, the third method, or the fourth method (e.g., when the DAPS handover method is indicated), the PDCP layer may perform a protocol layer operation proposed below so as to efficiently perform switching of UL data transmission, and may perform one or more operations from among the operations below so as to prevent data loss due to the UL data transmission switching. The operations below may be applied to the PDCP layer connected to an AM DRB or a UM DRB (a RLC layer operating in an AM mode or a RLC layer operating in a UM mode). Before the first condition is satisfied or before the indicator indicating that the first condition is satisfied is received, the PDCP layer may indicate, to the MAC layer of the first bearer for the source BS, that there is data to be transmitted by indicating a size or amount (e.g., a PDCP data volume) of the data to be transmitted when a buffer stores the data to be transmitted, and may perform UL data transmission to the source BS. Then, the MAC layer of the first bearer for the source BS may perform a scheduling request or a buffer status report procedure to receive allocation of a UL transmit resource from the source BS. However, when the first condition is satisfied or the indicator indicating that the first condition is satisfied is received, UL data transmission may be switched to the target BS in a manner described below.To switch UL data transmission from the first bearer for the source BS to the second bearer for the target BS, the PDCP layer may indicate, to the MAC layer of the first bearer for the source BS, that a size or amount of data to be transmitted is 0 (or none). In other words, the PDCP layer may indicate, to the MAC layer of the first bearer, that a data volume (a PDCP data volume) of the PDCP layer is 0, thereby indicating that there is no more data to be transmitted (even when the buffer actually stores a plurality of items of data to be transmitted, in order to switch UL data transmission, the PDCP layer may indicate, to the MAC layer of the first bearer for the source BS, that there is no more data to be transmitted).The PDCP layer connected to an AM DRB (a RLC layer operating in an AM mode) (where all pre-stored PDCP PDUs are discarded (e.g., PDCP SDUs are not discarded, to prevent loss of original data)) may perform, based on header context for the target BS, a new header compression procedure on a plurality of items of data (PDCP SDUs of the buffer) in ascending order of COUNT values (or PDCP SNs) allocated before the first condition is satisfied or the indicator indicating that the first condition is satisfied is received, the ascending order starting from first data (e.g., a PDCP SDU) for which successful delivery is not acknowledged by lower layers (e.g., the RLC layer corresponding to the first bearer for the source BS). The PDCP layer may re-perform, by applying security keys for the target BS, an integrity procedure or a ciphering procedure on the plurality of items of data on which the new header compression procedure has been performed, may configure a PDCP header, and may provide the resulting data to a lower layer (the RLC layer of the second bearer for the target BS), thereby performing retransmission or transmission.
In other words, the PDCP layer performs accumulated retransmission on data starting from first data for which successful delivery is not acknowledged. As another method, when the PDCP layer performs retransmission, the PDCP layer may perform retransmission only on a plurality of items of data for which successful delivery is not acknowledged by lower layers (e.g., the RLC layers of the first bearer for the source BS). In detail, the PDCP layer connected to the AM DRB (a RLC layer operating in the AM mode) (where PDCP PDUs that are stored to be transmitted to the source BS via a first protocol layer previously connected to the PDCP layer are all discarded (e.g., PDCP SDUs may not be discarded, to prevent loss of original data)) may perform, by applying header compression (or data compression) protocol context or security key corresponding to the target BS, a new header or data compression procedure on only a plurality of items of data (e.g., the PDCP SDUs) for which successful delivery is not acknowledged by lower layers (e.g., the RLC layers) that are the first protocol layer for the source BS, based on COUNT values (or PDCP SNs) allocated before the first condition is satisfied or the indicator indicating that the first condition is satisfied is received. The PDCP layer may re-perform an integrity procedure or a ciphering procedure on the plurality of items of data on which the new header or data compression procedure has been performed, may configure a PDCP header, and may provide the resulting data to a lower layer that is a second protocol layer for transmission to the target BS, thereby performing retransmission or transmission. In other words, to prevent waste of transmit resources, the PDCP layer may perform selective retransmission only on the plurality of items of data for which successful delivery is not acknowledged. As another method, the transmission or the retransmission may be performed after lower layers (e.g., a transmission or reception RLC layer or MAC layer) that are the first protocol layer for transmitting data to the source BS are released.When the buffer stores data to be transmitted, the PDCP layer may indicate, to the MAC layer of the second bearer for the target BS, that there is data to be transmitted by indicating a size or amount (e.g., a PDCP data volume) of the data to be transmitted, and may perform switching of UL data transmission to the target BS. Then, the MAC layer of the second bearer for the target BS may perform a scheduling request or a buffer status report procedure to receive allocation of a UL transmit resource from the target BS. According to Embodiment 2 of the efficient handover method (e.g., the DAPS handover method) proposed in the disclosure, even after the UE receives a handover command message (e.g., an RRCReconfiguration message), the UE may continuously receive DL data from the source BS or the target BS via the protocol layers of the first bearer for the source BS or the second bearer for the target BS. Also, according to Embodiment 2 of the disclosure, to allow the UE to smoothly receive DL data from the source BS (or the target BS) or to allow the source BS (or the target BS) to smoothly transmit DL data to the UE, for AM bearers, the UE may be allowed to continuously perform UL transmission of a RLC status report, not data, on the source BS (or the target BS) via the protocol layers of the first bearer (or the second bearer).
In other words, even when the first condition is satisfied and thus the UE switches UL data transmission to the target BS, when the UE has to transmit the RLC status report, HARQ ACK or NACK, or PDCP control data (a PDCP ROHC feedback or a PDCP status report) to the source BS, the UE may be allowed to perform data transmission via the first bearer for the source BS. This is because, in the case of the AM bearers, when a transmitting end transmits data and successful delivery is not then indicated by using a RLC status report (i.e., when the RLC status report is not received), data cannot be continuously transmitted thereafter. In detail, even when the first condition is satisfied in third operation1h-03in Embodiment 2 of the efficient handover method ofFIG.1H and thus the UE1h-20discontinues UL data transmission to the source BS1h-05via the protocol layers1h-22of the first bearer, performs switching, and then starts UL data transmission to the target BS1h-10via the protocol layers1h-21of the second bearer, the UE1h-20may continuously transmit HARQ ACK or HARQ NACK information, a RLC status report (ACK or NACK information), or PDCP control data (e.g., a PDCP status report or PDCP ROHC feedback information) via the protocol layers of the first bearer (or the second bearer) so as to smoothly receive DL data from the source BS1h-05(or the target BS1h-10) or to allow the source BS1h-05(or the target BS1h-10) to smoothly transmit DL data. In detail, in third operation1h-03in Embodiment 2 of the efficient handover method ofFIG.1H, even when the first condition is satisfied and thus the UE1h-20discontinues UL data transmission to the source BS1h-05via the protocol layers1h-22of the first bearer, performs switching, and then starts UL data transmission to the target BS1h-10via the protocol layers1h-21of the second bearer, the UE1h-20may continuously perform data transmission due to HARQ retransmission by the MAC layer or data transmission due to retransmission by the RLC layer in the AM mode so as to prevent loss of data to the source BS1h-05. In detail, in third operation1h-03in Embodiment 2 of the efficient handover method ofFIG.1H, when the first condition is satisfied and thus the UE1h-20discontinues UL data transmission to the source BS1h-05via the protocol layers1h-22of the first bearer, performs switching, and then starts UL data transmission to the target BS1h-10via the protocol layers1h-21of the second bearer, the source BS1h-05or the target BS1h-10may divide time and may allocate transmit resources to the UE1h-20so as to prevent collision between a UL transmit resource to the target BS1h-10and a UL transmit resource to the source BS1h-05. When the UL transmit resource to the target BS1h-10collides with and thus overlaps with the UL transmit resource to the source BS1h-05, the UE1h-20may perform data transmission to the source BS1h-05by giving priority to the UL transmit resource to the source BS1h-05so as to maintain transmission of DL data or continuously receive the DL data from the source BS1h-05without a problem. As another method, when the UL transmit resource to the target BS1h-10collides with and thus overlaps with the UL transmit resource to the source BS1h-05, the UE1h-20may perform data transmission to the target BS1h-10by giving priority to the UL transmit resource to the target BS1h-10so as to maintain transmission of DL data from the target BS1h-10.
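The collision-handling rule just described admits either priority choice. Below is a minimal illustrative sketch of that decision, assuming each grant is represented as an inclusive (start, end) interval in some common time unit; the function names and the grant representation are assumptions made for this sketch, not part of the disclosure or of any 3GPP specification.

def grants_overlap(a, b):
    # Each grant is an (inclusive start, inclusive end) interval in a
    # common time unit (e.g., symbols or slots).
    return a[0] <= b[1] and b[0] <= a[1]

def choose_grant(source_grant, target_grant, prefer_source=True):
    """Pick one UL grant when grants from the source BS and the target BS
    overlap in time. prefer_source=True protects DL reception from the
    source BS; prefer_source=False protects DL reception from the target BS.
    """
    if source_grant and target_grant and grants_overlap(source_grant, target_grant):
        return ("source", source_grant) if prefer_source else ("target", target_grant)
    return None  # no collision: both grants can be used as allocated

# Example: overlapping grants, with the source BS prioritized.
print(choose_grant((10, 13), (12, 15), prefer_source=True))  # ('source', (10, 13))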
In detail, when the UE receives a handover command message in which handover (the DAPS handover method) corresponding to Embodiment 2 of the disclosure is indicated or is indicated for each bearer, the UE1h-20or a bearer for which the DAPS handover is indicated may perform a scheduling request via a first protocol layer, may receive a UL transmit resource by transmitting a buffer status report to the source BS1h-05, may transmit UL data, and may receive DL data from the source BS1h-05until the first condition is satisfied. However, when the first condition is satisfied, the UE1h-20no longer transmits data to the source BS, and may perform a scheduling request via a second protocol layer by switching the UL, may receive a UL transmit resource by transmitting a buffer status report to the target BS1h-10, and may transmit UL data to the target BS1h-10. According to an embodiment of the disclosure, the UE1h-20may continuously receive DL data from the source BS1h-05, and, even after UL transmission switching, the UE1h-20may continuously transmit HARQ ACK or HARQ NACK, a RLC status report, or PDCP control data (e.g., a PDCP status report or ROHC feedback information) which corresponds to the DL data. Also, the UE1h-20may continuously receive DL data from the source BS1h-05or the target BS1h-10even when the first condition is satisfied. When a second condition is satisfied in fourth operation1h-04in Embodiment 2 of the efficient handover method ofFIG.1H, the UE1h-20may discontinue DL data reception from the source BS1h-05via the protocol layers1h-22of the first bearer or may release connection to the source BS1h-05. The second condition may be one of the conditions below. Also, the PDCP layer1h-21of the second bearer may continuously perform data transmission or reception without interruption to or from the target BS by using data to be transmitted or data to be received, SN information, or header compression and decompression context, which is stored in the PDCP layer1h-22of the first bearer.When the UE1h-20performs a random access procedure on the target BS via the layers1h-21of the second bearer and receives a RAR message, it may be determined that the second condition is satisfied.When the UE1h-20performs a random access procedure on the target BS via the layers1h-21of the second bearer, receives a RAR message, and configures and transmits a handover complete message to the target BS, it may be determined that the second condition is satisfied.When the UE1h-20performs a random access procedure on the target BS via the layers1h-21of the second bearer, and first transmits data by using a PUCCH or PUSCH UL transmit resource or first receives the PUCCH or PUSCH UL transmit resource, it may be determined that the second condition is satisfied.When a BS configures a separate timer for a UE by using an RRC message and the separate timer expires, it may be determined that the second condition is satisfied.
The separate timer may start when the UE receives a handover command message from a source BS, the UE starts a random access to a target BS (transmits a preamble), the UE receives a RAR message from the target BS, the UE transmits a handover complete message to the target BS, or the UE first transmits data by using a PUCCH or PUSCH UL transmit resource.When the UE performs a random access procedure on the target BS via protocol layers of a second bearer, receives a RAR message, configures and transmits a handover complete message to the target BS, and then receives acknowledgement with respect to successful delivery of the handover complete message via a MAC layer (HARQ ACK) or a RLC layer (RLC ACK), it may be determined that the second condition is satisfied.When the UE performs a random access procedure on the target BS via the protocol layers of the second bearer, receives a RAR message or configures and transmits a handover complete message to the target BS, and then first receives allocation of a UL transmit resource from the target BS or first receives an indication of the UL transmit resource, it may be determined that the second condition is satisfied.When the source BS performs the efficient handover proposed in the disclosure, the source BS may determine when to discontinue transmission of DL data to the UE or when to release connection to the UE. For example, the source BS may determine when to discontinue transmission of DL data or when to release connection to the UE, according to a certain method (e.g., when a certain timer expires (the timer may start after handover is indicated) or when the source BS receives, from the target BS, an indication indicating that the UE has successfully performed handover to the target BS). When the UE does not receive DL data from the source BS for a certain time period, the UE may determine that the second condition is satisfied, and may determine that connection to the source BS is released and thus may release the connection.When the UE receives, from the target BS, an indication (e.g., an RRC message (e.g., an RRCReconfiguration message)) indicating a release of connection to the source BS, or receives, from the target BS, a MAC CE, a RLC control PDU, or a PDCP control PDU, the UE may determine that the second condition is satisfied.When the UE receives, from the source BS, an indication (e.g., an RRC message (e.g., an RRCReconfiguration message)) indicating a release of connection to the source BS, or receives, from the source BS, a MAC CE, a RLC control PDU, or a PDCP control PDU, the UE may determine that the second condition is satisfied.When the UE does not receive DL data from the source BS for a certain time period, the UE may determine that the second condition is satisfied.When the UE successfully completes a random access procedure on the target BS via the layers of the second bearer and then receives allocation of a first UL transmit resource from the target BS or first receives an indication of a UL transmit resource, it may be determined that the second condition is satisfied.
For example, in a case where the UE receives a handover command message from the source BS and receives an indication of a random access to the target BS, when the indicated random access is a Contention Free Random Access (CFRA) (e.g., when a predefined preamble or a UE-cell identifier (e.g., a Cell-Radio Network Temporary Identifier (C-RNTI)) is allocated), it may be determined that the random access procedure is successfully completed when the UE transmits the predefined preamble to a cell of the target BS and receives a RAR message. Therefore, when the UE receives a first UL transmit resource allocated, or included, or indicated in the RAR message, it may be determined that the second condition is satisfied. As another method, when the UE first receives a UL transmit resource after the UE receives the RAR message, it may be determined that the second condition is satisfied. In a case where the UE receives a handover command message from the source BS and receives an indication of a random access to the target BS, when the indicated random access is a Contention-Based Random Access (CBRA) (e.g., when a predefined preamble or a UE-cell identifier (e.g., C-RNTI) is not allocated), it may be determined that the random access procedure on the target BS is successfully completed when the UE transmits a preamble (e.g., a random preamble) to a cell of the target BS and receives a RAR message, transmits a message3(e.g., a handover complete message) to the target BS by using a UL transmit resource allocated, or included, or indicated in the RAR message, and receives, from the target BS, a Contention resolution MAC CE indicating resolution of contention. Therefore, when the UE monitors the PDCCH and first receives or is first indicated with the UL transmit resource via the PDCCH corresponding to the C-RNTI of the UE, it may be determined that the second condition is satisfied. As another method, when a size of the UL transmit resource allocated in the RAR message is sufficient and thus the UE can transmit the message3and additionally transmit UL data, the UE may determine that the UE first receives a UL transmit resource and thus may determine that the second condition is satisfied. In other words, when the UE receives a RAR message, the UE may determine that the UE first receives the UL transmit resource and thus may determine that the second condition is satisfied.When a handover method (RACH-less handover) that does not require a random access procedure is also indicated in the handover command message received by the UE, and when the handover command message includes a UL transmit resource with respect to the target BS, the UE transmits a message3(e.g., a handover complete message or a RRCReconfigurationComplete message) by using the UL transmit resource with respect to the target BS, and when the UE receives, from the target BS, a UE Identity Confirmation MAC CE, it may be determined that a random access procedure is successfully completed and the second condition is satisfied. As another method, when the random access procedure is successfully completed and then the UE performs PDCCH monitoring and receives a first UL transmit resource via the PDCCH corresponding to the C-RNTI of the UE, it may be determined that the second condition is satisfied.
When the handover command message does not include the UL transmit resources with respect to the target BS, the UE performs PDCCH monitoring on the target BS (or a cell) and when the UE receives a UL transmit resource via the PDCCH corresponding to the C-RNTI of the UE, or transmits a message3(e.g., a handover complete message or a RRCReconfigurationComplete message) by using the UL transmit resource, and receives a UE Identity Confirmation MAC CE from the target BS, it may be determined that a random access procedure is successfully completed and the second condition is satisfied. As another method, when the random access procedure is successfully completed and then the UE performs PDCCH monitoring and receives a first UL transmit resource via the PDCCH corresponding to the C-RNTI of the UE, it may be determined that the second condition is satisfied. In a case where the UE performs Embodiment 2 of the efficient handover method (e.g., the DAPS handover method) proposed in the disclosure, when it is identified that the RRC layer, the MAC layer, or the RLC layer of the first bearer of the UE for the source BS, and the RRC layer, the MAC layer, or the RLC layer of the second bearer of the UE for the target BS satisfy the second condition proposed in the disclosure, an indicator indicating that the second condition is satisfied may be indicated to a PDCP layer of the UE or a bearer which performs the DAPS handover method. When the PDCP layer of the UE receives, from a lower layer or an upper layer, the indicator indicating that the second condition is satisfied, the UE may perform one or more procedures below, thereby performing Embodiment 2 of the efficient handover method proposed in the disclosure.The UE may release the first bearer for the source BS and may release connection to the source BS.When the UE releases connection to the source BS, in order to report, to the target BS, a reception status of a plurality of items of DL data received from the source BS, the UE may trigger a PDCP status report procedure, may configure a PDCP status report, and may transmit the PDCP status report to the target BS.When the second condition is satisfied, the UE may switch a second PDCP layer architecture or function1i-20(seeFIG.1I) to a first PDCP layer architecture or function1i-11or1i-12(seeFIG.1I) proposed in the disclosure with respect to each bearer or a bearer for which the DAPS handover method is indicated. The UE may initialize a variable for realignment, may stop and reset the realignment timer, may perform a deciphering procedure or header (or data) decompression by applying a security key or header decompression context for the source BS to a plurality of items of data (e.g., a plurality of items of data received from the source BS) stored for reordering in the buffer, and may discard the security key or the header decompression context for the source BS. The UE may provide the plurality of items of processed data to the upper layer in ascending order. In other words, when the second condition is satisfied, the UE may perform the deciphering procedure or header (or data) decompression by applying the security key or header decompression context for the source BS to the plurality of items of data (e.g., the plurality of items of data received from the source BS) stored for reordering in the buffer, and may discard the security key or the header decompression context for the source BS. 
As another method, when the second condition is satisfied, the UE may switch the second PDCP layer architecture or function1i-20to a third PDCP layer architecture or function1i-30(seeFIG.1I) proposed in the disclosure with respect to each bearer or a bearer for which the DAPS handover method is indicated. Also, the UE may neither stop nor initialize but may continuously use the variable for realignment and the realignment timer. However, the UE may perform the deciphering procedure or header (or data) decompression by applying the security key or header decompression context for the source BS to the plurality of items of data (e.g., the plurality of items of data received from the source BS) stored for reordering in the buffer, and may discard the security key or the header decompression context for the source BS. The UE may provide the plurality of items of processed data to the upper layer in ascending order. In other words, when the second condition is satisfied, the UE may perform the deciphering procedure or header (or data) decompression by applying the security key or header decompression context for the source BS to the plurality of items of data (e.g., the plurality of items of data received from the source BS) stored for reordering in the buffer, and may discard the security key or the header decompression context for the source BS. The UE may release QoS mapping information of the SDAP layer for the source BS, security key information of the PDCP layer for the source BS, header (or data) compression context information for the source BS, or the RLC layer or the MAC layer for the source BS. When the source gNB1f-02inFIG.1Fof the disclosure transmits the handover command message to the UE1f-01(operation1f-20), the source gNB1f-02may define indicators related to embodiments of the disclosure in the handover command message (e.g., an RRCReconfiguration message), and may indicate, to the UE1f-01, which handover procedure corresponding to which embodiment is to be triggered. The UE1f-01may perform a handover procedure according to a handover method indicated in the handover command message. For example, the UE1f-01may perform handover to the target gNB1f-03in a manner that the UE1f-01minimizes a data interruption time by performing Embodiment 2 (the DAPS handover method) of the efficient handover method proposed in the disclosure. As another method, the source gNB1f-02may define indicators for respective bearers, the indicators being related to embodiments of the disclosure, in the handover command message, and may further particularly indicate which embodiment is to be applied to which bearer in handover. For example, the source gNB1f-02may indicate, via the handover command message, to apply Embodiment 2 of the disclosure only to the AM bearer in which the RLC layer operating in the AM mode is active, or may extensively apply Embodiment 2 to the UM bearer in which the RLC layer operating in the UM mode is active. It is assumed that embodiments of the disclosure are applied to a DRB. However, when required (e.g., in a case where the UE fails to perform handover to the target BS while the UE maintains a SRB with respect to the source BS, and thus the UE can report a handover failure message via the SRB with respect to the source BS or can recover a connection to the source BS), embodiments of the disclosure may be extensively applied to the SRB.
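As an illustration of the release steps above, the following toy sketch flushes the reordering buffer using the source-BS security key and header decompression context, discards that context, releases the first bearer, and triggers a PDCP status report toward the target BS. The class, the buffer representation, and all helper names are assumptions made for this sketch, not the normative procedure.

class DapsUe:
    """Toy model of the UE-side actions when the second condition is satisfied."""
    def __init__(self):
        self.reordering_buffer = {5: b"pdu5", 3: b"pdu3"}  # COUNT -> buffered PDU
        self.source_key = b"source-key"
        self.source_rohc_ctx = object()
        self.delivered = []

    def decipher(self, pdu, key):
        return pdu  # placeholder for the real deciphering procedure

    def decompress(self, pdu, ctx):
        return pdu  # placeholder for header (or data) decompression

    def release_first_bearer(self):
        pass  # release the RLC/MAC layers of the first bearer for the source BS

    def send_pdcp_status_report(self, to):
        pass  # trigger, configure, and transmit a PDCP status report

    def on_second_condition(self):
        # 1. Process data still buffered for reordering with the *source*
        #    security key and header decompression context, and deliver it
        #    to the upper layer in ascending COUNT order.
        for count in sorted(self.reordering_buffer):
            pdu = self.reordering_buffer.pop(count)
            self.delivered.append(
                self.decompress(self.decipher(pdu, self.source_key),
                                self.source_rohc_ctx))
        # 2. Discard everything kept for the source BS and release the bearer.
        self.source_key = None
        self.source_rohc_ctx = None
        self.release_first_bearer()
        # 3. Report the DL reception status to the target BS so it can
        #    retransmit anything the source BS never delivered.
        self.send_pdcp_status_report(to="target")

ue = DapsUe()
ue.on_second_condition()
assert ue.delivered == [b"pdu3", b"pdu5"] and ue.source_key is None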
In embodiments of the disclosure, when the UE performs data transmission and reception to and from the source BS via the protocol layers of the first bearer and performs data transmission and reception to and from the target BS via the protocol layers of the second bearer, the MAC layer of the first bearer and the MAC layer of the second bearer may each operate a discontinuous reception (DRX) period, thereby reducing battery consumption in the UE. In other words, even after the UE receives the handover command message, the UE may continuously apply the DRX period of the MAC layer that was applied when transmitting and receiving data via the protocol layers of the first bearer, and may discontinue the DRX period according to the first condition or the second condition. Also, the UE may manage, in response to indication from the target BS, whether to separately apply the DRX period to the MAC layer of the second bearer. In the disclosure, the UE discontinuing UL transmission to the source BS via the protocol layers of the first bearer and discontinuing DL data reception from the source BS may mean that the UE re-establishes, initializes, or releases the protocol layers (the PHY layer, the MAC layer, the RLC layer, or the PDCP layer) of the first bearer. In embodiments of the disclosure, for convenience of description, it is described that the UE configures the first bearer for the source BS or the second bearer for the target BS, and embodiments of the disclosure may be easily extended and equally applied to a case in which the UE configures a plurality of first bearers for the source BS or a plurality of second bearers for the target BS. Also, embodiments of the disclosure may be extended and equally applied to a case in which a plurality of bearers for a plurality of target BSs are configured. For example, the UE may configure second bearers while performing a handover procedure on a first target BS, and when handover fails, the UE configures second bearers while performing a handover procedure on a second target BS, such that the UE may autonomously detect and determine cells satisfying a certain condition (e.g., a signal whose strength is equal to or greater than a certain value) from among a plurality of cells, may select one cell, and then may perform a handover procedure on the cell. FIG.1Iillustrates architectures of an efficient PDCP layer which are to be applied to the DAPS handover method that is Embodiment 2 of the efficient handover method proposed in the disclosure, and a method of applying the architectures, according to an embodiment of the disclosure. InFIG.1I, the disclosure proposes particular architectures and functions of the efficient PDCP layer which are to be applied to the DAPS handover method that is Embodiment 2 of the efficient handover method proposed in the disclosure, and when a DAPS handover procedure is performed, the different PDCP layer architectures proposed below may be applied to each bearer at different time points. For example, before the UE receives a handover command message from a BS, the UE may process and transmit or receive data by applying the first PDCP layer architecture and functions1i-11or1i-12proposed in the disclosure to each bearer (operation1i-01).
However, when the UE receives a handover command message from the BS, and the DAPS handover method proposed in the disclosure is indicated in the handover command message or the DAPS handover method is indicated for particular bearers, the UE may process and transmit or receive data by applying the second PDCP layer architecture and function1i-20proposed in the disclosure with respect to each bearer or bearers for which the DAPS handover method is indicated (operation1i-02). In other words, when the UE receives the handover command message from the BS, and the DAPS handover method proposed in the disclosure is indicated in the handover command message or the DAPS handover method is indicated for particular bearers, the UE may switch the first PDCP layer architecture or function1i-11or1i-12, which is used for each bearer, to the second PDCP layer architecture or function1i-20proposed in the disclosure with respect to each bearer or a bearer for which the DAPS handover method is indicated. As another method, when the first condition proposed in the disclosure is satisfied, the UE may switch the first PDCP layer architecture or function1i-11or1i-12, which is used for each bearer, to the second PDCP layer architecture or function1i-20proposed in the disclosure with respect to each bearer or a bearer for which the DAPS handover method is indicated (operation1i-02). Also, in a case where the UE receives the handover command message from the BS, and the DAPS handover method proposed in the disclosure is indicated in the handover command message, the DAPS handover method is indicated for particular bearers, or a PDCP realignment timer value is newly set, when the UE switches the first PDCP layer architecture or function1i-11or1i-12to the second PDCP layer architecture or function1i-20proposed in the disclosure with respect to each bearer or a bearer for which the DAPS handover method is indicated, the UE may update a variable for realignment to a PDCP SN or a COUNT value, which is predicted to be received next, and may stop and restart a realignment timer. When the second condition proposed in the disclosure is satisfied when the UE performs the DAPS handover method proposed in the disclosure, the UE may release, from first bearers for the source BS, the second PDCP layer architecture and function1i-20applied to each bearer or a bearer for which the DAPS handover method is indicated, and may switch back to the first PDCP layer architecture and function1i-11or1i-12and may apply the first PDCP layer architecture and function1i-11or1i-12to each bearer. In a case where the second condition is satisfied, when the UE switches the second PDCP layer architecture or function1i-20(seeFIG.1I) to the first PDCP layer architecture or function1i-11or1i-12(seeFIG.1I) proposed in the disclosure with respect to each bearer or a bearer for which the DAPS handover method is indicated, the UE may initialize a variable for realignment, may stop and reset the realignment timer, may perform a deciphering procedure or header (or data) decompression by applying a security key or header decompression context for the source BS to a plurality of items of data (e.g., a plurality of items of data received from the source BS) stored for reordering in the buffer, and may discard the security key or the header decompression context for the source BS. The UE may provide the plurality of items of processed data to the upper layer in ascending order. 
In other words, when the second condition is satisfied, the UE may perform the deciphering procedure or header (or data) decompression by applying the security key or header decompression context for the source BS to the plurality of items of data (e.g., the plurality of items of data received from the source BS) stored for reordering in the buffer, and may discard the security key or the header decompression context for the source BS. As another method, when the second condition proposed in the disclosure is satisfied when the UE performs the DAPS handover method proposed in the disclosure, the UE may release, from bearers for the source BS, the second PDCP layer architecture and function1i-20applied to each bearer or a bearer for which the DAPS handover method is indicated, and may switch to the third PDCP layer architecture and function1i-30and may apply the third PDCP layer architecture and function1i-30to each bearer. In a case where the second condition is satisfied, when the UE switches the second PDCP layer architecture or function1i-20to the third PDCP layer architecture or function1i-30proposed in the disclosure with respect to each bearer or a bearer for which the DAPS handover method is indicated, the UE may neither stop nor initialize but may continuously use the variable for realignment and the realignment timer. However, the UE may perform the deciphering procedure or header (or data) decompression by applying the security key or header decompression context for the source BS to the plurality of items of data (e.g., the plurality of items of data received from the source BS) stored for reordering in the buffer, and may discard the security key or the header decompression context for the source BS. The UE may provide the plurality of items of processed data to the upper layer in ascending order. In other words, when the second condition is satisfied, the UE may perform the deciphering procedure or header (or data) decompression by applying the security key or header decompression context for the source BS to the plurality of items of data (e.g., the plurality of items of data received from the source BS) stored for reordering in the buffer, and may discard the security key or the header decompression context for the source BS. As proposed inFIG.1Iof the disclosure, the UE may apply, to each bearer, the first PDCP layer architecture or function1i-11or1i-12, the second PDCP layer architecture or function1i-20, or the third PDCP layer architecture or function1i-30, which are different from each other, at different time points, such that data loss may be prevented and a data interruption time may be minimized when handover is performed. The first PDCP layer architecture1i-11or1i-12proposed inFIG.1Imay have a1-1PDCP layer architecture, a1-2PDCP layer architecture, a1-3PDCP layer architecture, or a1-4PDCP layer architecture, which are proposed in the disclosure, and may have characteristics to be described below. -1>(When it is the1-1PDCP layer architecture) for example, when the UE applies the first PDCP layer architecture and function1i-11to a PDCP layer (e.g., E-UTRA PDCP layer or LTE PDCP layer) connected to an AM RLC layer (e.g., E-UTRA AM RLC layer), the PDCP layer may have characteristics below. *2>The reception PDCP layer may first perform detection of out-of-window data or duplicate data on a plurality of items of received data.
(Retransmission may occur in RLC AM, and sizes of LTE RLC SN and PDCP SN may be different, such that the duplicate data or the out-of-window data may be received. In the above, window indicates a range of PDCP SNs or COUNT values, in which valid data is received.) #3>Before the UE discards the out-of-window data or the duplicate data, the UE performs a deciphering procedure and a header decompression procedure and then performs a discard operation. (Because the data may include useful information (e.g., initialization and refresh (IR) packet or header compression information) for the header decompression procedure, the UE may check and then discard the data.) *2>The PDCP layer may immediately decipher a plurality of items of data without ordering, the data being received without being discarded, and may perform a header decompression procedure. This is because the E-UTRA AM RLC layer performs ordering on the plurality of items of data and provides the plurality of items of data to the PDCP layer. *2>Then, the PDCP layer provides the plurality of items of data to an upper layer in ascending order of COUNT values. -1>(When it is the1-2PDCP layer architecture) for example, when the UE applies the first PDCP layer architecture and function1i-11to a PDCP layer (e.g., E-UTRA PDCP layer or LTE PDCP layer) connected to a UM RLC layer (e.g., E-UTRA UM RLC layer), the PDCP layer may have characteristics below. *2>The PDCP layer may not perform a procedure of detecting out-of-window data or duplicate data. This is because the UM E-UTRA RLC layer does not perform a retransmission procedure. *2>Then, the PDCP layer may immediately perform a deciphering procedure and then a header decompression procedure on the plurality of items of received data. *2>Then, the PDCP layer may perform a reordering procedure and may provide the plurality of items of data to the upper layer (e.g., in ascending order). -1>(When it is the1-3PDCP layer architecture) for example, when the UE applies the first PDCP layer architecture1i-11to the PDCP layer (e.g., the E-UTRA PDCP layer or the LTE PDCP layer) configured with a split bearer, a packet duplication bearer, or a LTE WLAN Aggregation (LWA) bearer, a reordering procedure and a realignment timer may always be applied and the PDCP layer may have characteristics below. *2>The PDCP layer may first perform detection of out-of-window data or duplicate data on a plurality of items of received data. (Retransmission may occur in RLC AM, data may be received at different time points from different RLC layers, and sizes of LTE RLC SN and PDCP SN may be different, such that the out-of-window data or the duplicate data may be received.) #3>The PDCP layer performs a deciphering procedure. However, the PDCP layer may not perform a header decompression procedure. (This is because the E-UTRA PDCP layer cannot configure a header compression protocol to the split bearer or the LWA bearer). #3>When an integrity protection or verification procedure is configured, the PDCP layer may perform the integrity protection or verification procedure and then may discard the data. When the integrity verification procedure fails, the PDCP layer may discard the data and may report the failure to an upper layer. #3>The PDCP layer discards the out-of-window data or the duplicate data. *2>When the data is not discarded, the PDCP layer may immediately perform a deciphering procedure without reordering on a plurality of items of received data.
Afterward, when the integrity protection or verification procedure is configured, the PDCP layer may perform the integrity verification procedure on the data, and when the integrity verification procedure fails, the PDCP layer may discard the data and may report the failure to an upper layer. *2>Afterward, the PDCP layer may perform reordering on a plurality of items of received data, and when PDCP SNs or COUNT values are sequentially aligned in ascending order without a gap therebetween, the PDCP layer may perform a header decompression procedure (when the header compression or decompression procedure is configured) and may provide the data to the upper layer in ascending order. *2>In a case where a realignment timer is running, #3>when the data corresponding to the COUNT value equal to the value of the variable for realignment minus 1 is provided to the upper layer, or when the plurality of items of data are all provided to the upper layer without a gap between PDCP SNs (COUNT values), 4>the PDCP layer stops and resets the realignment timer. *2>In a case where the realignment timer is not running, #3>when a buffer stores data that is not provided to the upper layer, or when there is a gap between PDCP SNs (COUNT values), 4>the PDCP layer starts the realignment timer. 4>Then, the PDCP layer updates the variable for realignment to the PDCP SN or the COUNT value which is predicted to be received next. *2>In a case where the realignment timer expires, #3>for a plurality of items of stored data whose PDCP SNs or COUNT values are smaller than the variable for realignment, the PDCP layer performs the header decompression procedure (when a header decompression procedure is configured) in ascending order of PDCP SNs or COUNT values and provides the data to the upper layer. #3>For a plurality of items of stored data whose PDCP SNs or COUNT values are equal to or greater than the variable for realignment, the PDCP layer performs the header decompression procedure (when a header decompression procedure is configured) in ascending order of PDCP SNs or COUNT values and provides the data to the upper layer. #3>Then, the PDCP layer updates the variable storing the PDCP SN or the COUNT value of the data most recently provided to the upper layer. #3>When a buffer stores data that is not provided to the upper layer, or when there is a gap between PDCP SNs (COUNT values), 4>the PDCP layer starts the realignment timer. 4>Then, the PDCP layer updates the variable for realignment to the PDCP SN or the COUNT value which is predicted to be received next. -1>(When it is the1-4PDCP layer architecture) for example, when the UE applies the first PDCP layer architecture and function1i-12to an NR PDCP layer, the PDCP layer may always apply a reordering procedure and a realignment timer and may have characteristics below. *2>The PDCP layer may first perform a deciphering procedure on a plurality of items of received data. *2>When an integrity protection or verification procedure is configured, the PDCP layer may perform the integrity protection or verification procedure on the received data, and when the integrity verification procedure fails, the PDCP layer may discard the data and may report the failure to an upper layer.
*2>The PDCP layer performs detection of out-of-window data or duplicate data on the received data. (The disclosure may be characterized in that the deciphering procedure is first performed and then the detection of out-of-window data or duplicate data is performed. As another method, the deciphering procedure may be performed only when the integrity protection or verification procedure is configured. In a case where the detection of out-of-window data or duplicate data is performed but the integrity protection or verification procedure is not configured, the deciphering procedure may be performed only on a plurality of items of data on which the detection of out-of-window data or duplicate data is performed and that are not discarded.) #3>The PDCP layer discards the out-of-window data or the duplicate data. *2>When the data is not discarded, the PDCP layer may perform reordering on a plurality of items of received data, and when PDCP SNs or COUNT values are sequentially aligned in ascending order without a gap therebetween, the PDCP layer may perform a header decompression procedure (when the header compression or decompression procedure is configured) and may provide the data to the upper layer in ascending order. *2>Then, the UE provides the plurality of items of data to an upper layer in ascending order of COUNT values. *2>In a case where a realignment timer is running, #3>when the data corresponding to the COUNT value equal to the value of the variable for realignment minus 1 is provided to the upper layer, when the plurality of items of data are all provided to the upper layer without a gap between PDCP SNs (COUNT values), or when a value of a variable storing a PDCP SN or a COUNT value of data to be provided to the upper layer is equal to or greater than a value of a variable for realignment, 4>the PDCP layer stops and resets the realignment timer. *2>In a case where the realignment timer is not running, #3>when a buffer stores data that is not provided to the upper layer, when there is a gap between PDCP SNs (COUNT values), or when a value of a variable storing a COUNT value of first data that is not provided to the upper layer is smaller than a value of a variable for realignment, 4>the PDCP layer updates the variable for realignment to the PDCP SN or the COUNT value which is predicted to be received next. 4>The PDCP layer starts the realignment timer. *2>In a case where the realignment timer expires, #3>for a plurality of items of stored data whose PDCP SNs or COUNT values are smaller than the variable for realignment, the PDCP layer performs the header decompression procedure (when a header decompression procedure is configured) in ascending order of PDCP SNs or COUNT values and provides the data to the upper layer. #3>For a plurality of items of stored data whose PDCP SNs or COUNT values are equal to or greater than the variable for realignment, the PDCP layer performs the header decompression procedure (when a header decompression procedure is configured) in ascending order of PDCP SNs or COUNT values and provides the data to the upper layer. #3>Then, the PDCP layer updates the variable storing the PDCP SN or the COUNT value of the first data that has not been provided to the upper layer.
#3>When a buffer stores data that is not provided to the upper layer, when there is a gap between PDCP SNs (COUNT values), or when a value of a variable storing a COUNT value of first data that is not provided to the upper layer is smaller than a value of a variable for realignment, 4>the PDCP layer updates the variable for realignment to the PDCP SN or the COUNT value which is predicted to be received next. 4>The PDCP layer starts the realignment timer. The second PDCP layer architecture or function1i-20proposed inFIG.1Imay have a2-1PDCP layer architecture or a2-2PDCP layer architecture, which are proposed in the disclosure, and may have characteristics to be described below. In the disclosure, provided is the second PDCP layer architecture1i-20which is efficient in handover. The second PDCP layer architecture may be applied to Embodiment 2 of the efficient handover method for minimizing a data interruption time, which is proposed in the disclosure. In the second PDCP layer architecture, the UE may perform data transmission or reception from or to a source BS1i-21via protocol layers (e.g., a SDAP layer, a PDCP layer, a RLC layer, or a MAC layer) of a first bearer, and may perform data transmission or reception from or to a target BS1i-22via protocol layers (e.g., a SDAP layer, a PDCP layer, a RLC layer, or a MAC layer) of a second bearer. The PDCP layer of the first bearer and the PDCP layer of the second bearer may each be configured in the UE but may logically operate as one PDCP layer as shown in1i-20. In detail, by distinguishing between functions of a PDCP layer, the one PDCP layer may be implemented as functions (e.g., an SN allocation function, a realignment function, an in-sequence delivery function, or a duplicate detection function) of an upper PDCP layer and functions (e.g., a deciphering or ciphering function, a header (or data) compression or decompression function, an integrity protection or verification function, or a duplicate detection function) of two lower PDCP layers respectively for the source BS and the target BS. Also, as proposed above, when the DAPS handover method is performed, the UE may perform UL data transmission to the source BS, and when the first condition is satisfied, the UE may switch to the target BS and may continuously receive DL data from the source BS and the target BS. Therefore, only one header (or data) compression protocol context for the source BS or the target BS may be maintained and applied to a UL, and two contexts for the source BS and the target BS may be maintained and applied to a DL. The2-1PDCP layer architecture (e.g., an E-UTRA PDCP layer for the DAPS handover method) proposed in the disclosure, based on the second PDCP layer architecture, may have characteristics below. An upper transmit PDCP layer function may serve to allocate PDCP SNs to a plurality of items of data received from an upper layer. Two lower transmit PDCP layer functions1i-21and1i-22respectively for the source BS and the target BS may apply, to data to be transmitted to the source BS, header (or data) compression context or security key configured with the source BS, by using a separate security key configured with each of the source BS and the target BS, and apply, to data to be transmitted to the target BS, header (or data) compression context or security key configured with the target BS, and may apply a header (or data) compression procedure when the header (or data) compression procedure is configured.
Also, when integrity protection is configured, the lower transmit PDCP layer functions1i-21and1i-22may apply a ciphering procedure by applying an integrity protection procedure to a PDCP header and data (PDCP SDU), may provide the data to be transmitted to the source BS to a transmit RLC layer of the first bearer, and may provide the data to be transmitted to the target BS to a transmit RLC layer of the second bearer, thereby performing transmission. In order to accelerate a data processing speed, the two lower transmit PDCP layer functions1i-21and1i-22may perform parallel processing to perform header compression, integrity protection, or a ciphering procedure in parallel. Also, the two lower transmit PDCP layer functions1i-21and1i-22may perform the integrity protection or the ciphering procedure by using different security keys. Also, the two lower transmit PDCP layer functions1i-21and1i-22may perform compression, integrity protection, or a ciphering procedure on different data by applying different compression contexts, different security keys, or different security algorithms in a logically-one transmit PDCP layer. A receive PDCP layer function, namely, the lower receive PDCP layer functions1i-21and1i-22for the source BS or the target BS, may each independently perform an out-of-window data detection or duplicate detection procedure on data respectively received from lower layers, in particular, a plurality of items of data received from two RLC layers for each of the source BS and the target BS, based on PDCP SNs or COUNT values. As another method, for convenience of implementation, the receive PDCP layer function may perform the out-of-window data detection or duplicate detection procedure on all received data, without distinguishing between the RLC layers, based on PDCP SNs or COUNT values. As another method, for more accurate duplicate detection, the receive PDCP layer function may perform the out-of-window data detection based on PDCP SNs or COUNT values on all received data, without distinguishing between the RLC layers, and may separately perform the duplicate detection procedure on a plurality of items of data received from each of the RLC layers. As another method, when data received from different BSs overlap with each other, in order to prevent data loss for a header compression protocol, the receive PDCP layer function may perform the out-of-window data detection based on PDCP SNs or COUNT values on all received data without distinguishing between the RLC layers, and may perform the duplicate detection procedure on all data after a deciphering procedure, an integrity protection procedure, or a header (or data) decompression procedure is performed on data received from each of the RLC layers. When a deciphering procedure is immediately applied to a plurality of items of received data by using the separate header (or data) compression context or security key separately configured with the source BS and the target BS, and integrity protection is configured, sub-functions of the receive PDCP layer may apply an integrity protection procedure to the PDCP header and the data (PDCP SDU).
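To make the split described above concrete, the sketch below models the logically-one transmit PDCP layer: one upper function allocates SNs from a single SN space, while two lower functions apply per-BS compression context, integrity protection, and ciphering, optionally in parallel. The class, the context dictionaries, and their callables are illustrative assumptions made for this sketch, not 3GPP-defined interfaces.

from concurrent.futures import ThreadPoolExecutor

class DapsTxPdcp:
    """Toy model of the logically-one DAPS transmit PDCP layer."""
    def __init__(self, source_ctx, target_ctx):
        self.next_sn = 0
        self.legs = {"source": source_ctx, "target": target_ctx}

    def allocate_sn(self):
        # Single upper PDCP function: one SN space shared by both legs.
        sn = self.next_sn
        self.next_sn += 1
        return sn

    @staticmethod
    def process_for_leg(sdu, sn, ctx):
        # Lower per-leg chain: header compression -> integrity -> ciphering,
        # each step using the context or key configured with that BS.
        compressed = ctx["compress"](sdu)
        protected = ctx["protect"](sn, compressed)
        return sn, ctx["cipher"](sn, protected)

    def submit_batch(self, sdus, leg):
        # Consecutive SDUs may be processed in parallel to accelerate the
        # data processing speed, as the text above permits.
        ctx = self.legs[leg]
        jobs = [(sdu, self.allocate_sn()) for sdu in sdus]
        with ThreadPoolExecutor() as pool:
            return list(pool.map(lambda j: self.process_for_leg(j[0], j[1], ctx), jobs))

# Example with identity placeholders standing in for real ROHC and ciphering.
identity = {"compress": lambda x: x,
            "protect": lambda sn, x: x,
            "cipher": lambda sn, x: x}
tx = DapsTxPdcp(source_ctx=identity, target_ctx=identity)
print(tx.submit_batch([b"pkt0", b"pkt1"], leg="target"))  # [(0, b'pkt0'), (1, b'pkt1')]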
In the2-1PDCP layer architecture, a header (or data) decompression procedure may be immediately performed, without reordering, on a plurality of items of data received from RLC layers of the first bearer for the source BS, and a header (or data) decompression procedure may be immediately performed, without reordering, on a plurality of items of data received from RLC layers of the second bearer for the target BS. Also, to distinguish between the data received from the RLC layers of the first bearer for the source BS and the data received from the RLC layers of the second bearer for the target BS, an indicator is defined for each of the received data such that it is possible to identify whether the PDCP layer received data from the source BS or received data from the target BS. As another method, a 1-bit indicator is defined in a PDCP header, a SDAP header, or a RLC header, such that it is possible to identify whether the PDCP layer received data from the source BS or received data from the target BS. Also, the PDCP layer may perform the duplicate detection procedure based on a PDCP SN or a COUNT value (a procedure in which only one item of data (including pre-received data or data provided to the upper layer) is allocated for each PDCP SN or each COUNT value and the others are all discarded) on all of the data received from RLC layers of the first bearer for the source BS and the data received from RLC layers of the second bearer for the target BS, wherein the header (or data) decompression procedure has been completed with respect to the data. Then, the PDCP layer may perform a realignment procedure on all of the data received from RLC layers of the first bearer for the source BS and the data received from RLC layers of the second bearer for the target BS, in ascending order, based on PDCP SNs or COUNT values, and may sequentially provide the data to the upper layer. Because one PDCP layer can receive data in any order from different BSs, i.e., from the first bearer or the second bearer, the PDCP layer may have to always perform the realignment procedure. As described above, to accelerate a data processing speed, the two lower transmit PDCP layer functions1i-21and1i-22may perform parallel processing to perform header compression, integrity protection, or a ciphering procedure in parallel, based on each PDCP SN or each COUNT value. Also, the two lower PDCP layer functions1i-21and1i-22may perform the integrity protection, the ciphering procedure, or the header decompression procedure by using different header (or data) compression contexts or different security keys. Also, the two lower PDCP layer functions1i-21and1i-22may perform the integrity protection, the ciphering procedure, or the decompression procedure on different data by applying different header (or data) compression contexts, different security keys, or different security algorithms in a logically-one PDCP layer. Also, the two lower receive PDCP layer functions1i-21and1i-22may perform an out-of-sequence deciphering or integrity verification procedure on each of a plurality of items of data received, without relation to the order of PDCP SNs or COUNT values.
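A minimal sketch of the 2-1 receive path just described, under the assumption of one COUNT space shared by both legs: data from either leg is header-decompressed immediately on arrival, and a shared buffer then performs duplicate detection and realignment so that SDUs reach the upper layer in ascending COUNT order. The class and callback names are illustrative only, and the realignment timer that releases stalled data is omitted.

```python
# Illustrative only; a real receiver would bound state with a reordering window
# and run a realignment timer for stalled gaps.
class RealignRxPdcp:
    def __init__(self):
        self.buffer = {}        # COUNT -> SDU, at most one entry per COUNT
        self.delivered = set()  # COUNT values already provided to the upper layer
        self.next_count = 0     # lowest COUNT not yet delivered

    def on_pdu(self, count: int, sdu: bytes, deliver) -> None:
        # duplicate detection: keep one SDU per COUNT, whichever leg it came from
        if count in self.buffer or count in self.delivered:
            return
        self.buffer[count] = sdu  # decompression already done per leg in 2-1
        # realignment: flush in ascending COUNT order as gaps close
        while self.next_count in self.buffer:
            deliver(self.buffer.pop(self.next_count))
            self.delivered.add(self.next_count)
            self.next_count += 1
```

For example, a PDU with COUNT 2 is held until COUNTs 0 and 1 have arrived from either bearer, matching the always-realign behaviour noted above.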
When the one PDCP layer distinguishes layers of the first bearer from layers of the second bearer, the PDCP layer may distinguish the layers (or a first RLC layer) of the first bearer from the layers (or a second RLC layer) of the second bearer, by taking into consideration that the layer of the first bearer and the layer of the second bearer are connected to different MAC layers, that the layers have different logical channel identifiers or are different RLC layers connected to different MAC layers, or that the layers use different ciphering keys. Accordingly, the PDCP layer may perform a ciphering procedure or a deciphering procedure on UL data and DL data by using different security keys, and may compress or decompress the UL data and the DL data by using different compression protocol contexts. The2-2PDCP layer architecture (e.g., an NR PDCP layer for the DAPS handover method) proposed in the disclosure, based on the second PDCP layer architecture, may have characteristics below. An upper transmit PDCP layer function may serve to allocate PDCP SNs to a plurality of items of data received from an upper layer. Two lower transmit PDCP layer functions1i-21and1i-22respectively for the source BS and the target BS may apply, to data to be transmitted to the source BS, header (or data) compression context or security key configured with the source BS, by using a separate security key configured with each of the source BS and the target BS, and apply, to data to be transmitted to the target BS, header (or data) compression context or security key configured with the target BS, and may apply a header (or data) compression procedure when the header (or data) compression procedure is configured. Also, when integrity protection is configured, the lower transmit PDCP layer functions1i-21and1i-22may apply a ciphering procedure by applying an integrity protection procedure to a PDCP header and data (PDCP SDU), may provide the data to be transmitted to the source BS to a transmit RLC layer of the first bearer, and may provide the data to be transmitted to the target BS to a transmit RLC layer of the second bearer, thereby performing transmission. In order to accelerate a data processing speed, the two lower transmit PDCP layer functions1i-21and1i-22may perform parallel processing to perform header compression, integrity protection, or a ciphering procedure in parallel. Also, the two lower transmit PDCP layer functions1i-21and1i-22may perform the integrity protection or the ciphering procedure by using different security keys. Also, the two lower transmit PDCP layer functions1i-21and1i-22may perform compression, integrity protection, or a ciphering procedure on different data by applying different compression contexts, different security keys, or different security algorithms in a logically-one transmit PDCP layer. A receive PDCP layer function, namely, the lower receive PDCP layer functions1i-21and1i-22for the source BS or the target BS, may each independently perform an out-of-window data detection or duplicate detection procedure on data respectively received from lower layers, in particular, a plurality of items of data received from two RLC layers for each of the source BS and the target BS, based on PDCP SNs or COUNT values. As another method, for convenience of implementation, the receive PDCP layer function may perform the out-of-window data detection or duplicate detection procedure on all received data, without distinguishing between the RLC layers, based on PDCP SNs or COUNT values. 
As another method, for more accurate duplicate detection, the receive PDCP layer function may perform the out-of-window data detection based on PDCP SNs or COUNT values on all received data, without distinguishing between the RLC layers, and may separately perform the duplicate detection procedure on a plurality of items of data received from each of the RLC layers. As another method, when data received from different BSs overlap with each other, in order to prevent data loss for a header compression protocol, the receive PDCP layer function may perform the out-of-window data detection based on PDCP SNs or COUNT values on all received data without distinguishing between the RLC layers, and may perform the duplicate detection procedure on all data after a deciphering procedure, an integrity protection procedure, or a header (or data) decompression procedure is performed with respect to data received from each of the RLC layers. When a deciphering procedure is immediately applied to a plurality of items of received data by using the separate header (or data) compression context or security key separately configured with the source BS and the target BS, and integrity protection is configured, sub-functions of the receive PDCP layer may apply an integrity verification procedure to the PDCP header and the data (PDCP SDU). In the2-2PDCP layer architecture, a reordering procedure may be performed on a plurality of items of data received from RLC layers of the first bearer for the source BS and a plurality of items of data received from RLC layers of the second bearer for the target BS, and then a header (or data) decompression procedure may be performed on the plurality of items of data received from each BS (the source BS or the target BS) in ascending order of PDCP SNs or COUNT values, by applying the header (or data) compression context of each BS (the source BS or the target BS). Also, to distinguish between the data received from the RLC layers of the first bearer for the source BS and the data received from the RLC layers of the second bearer for the target BS, an indicator is defined for each of the received data such that it is possible to identify whether the PDCP layer received data from the source BS or received data from the target BS. As another method, a 1-bit indicator is defined in a PDCP header, a SDAP header, or a RLC header, such that it is possible to identify whether the PDCP layer received data from the source BS or received data from the target BS. Also, the PDCP layer may perform the duplicate detection procedure based on a PDCP SN or a COUNT value (a procedure in which only one item of data (including pre-received data or data provided to the upper layer) is allocated for each PDCP SN or each COUNT value and the others are all discarded) on all of the data received from RLC layers of the first bearer for the source BS and the data received from RLC layers of the second bearer for the target BS, wherein the header (or data) decompression procedure has been completed with respect to the data. Then, the PDCP layer may sequentially provide, to the upper layer, all of the data received from RLC layers of the first bearer for the source BS and the data received from RLC layers of the second bearer for the target BS, in ascending order, based on PDCP SNs or COUNT values. Because one PDCP layer can receive data in any order from different BSs, i.e., from the first bearer or the second bearer, the PDCP layer may have to always perform the realignment procedure.
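The ordering difference between the two architectures can be made concrete with a small sketch: in contrast to the 2-1 case above, the 2-2 receiver below buffers still-compressed data first and decompresses only at delivery time, in ascending COUNT order, using the context of the BS each PDU came from. The NullContext stand-in and all names are hypothetical.

```python
# Hypothetical sketch of reorder-first, decompress-later (2-2 architecture).
class NullContext:
    """Stand-in for a per-BS header (or data) compression context, e.g. ROHC."""
    def decompress(self, sdu: bytes) -> bytes:
        return sdu  # no-op placeholder

class ReorderThenDecompressRx:
    def __init__(self):
        self.ctx = {"source": NullContext(), "target": NullContext()}
        self.buffer = {}     # COUNT -> (originating leg, still-compressed SDU)
        self.next_count = 0  # lowest COUNT not yet delivered

    def on_pdu(self, count: int, leg: str, compressed_sdu: bytes, deliver) -> None:
        if count < self.next_count or count in self.buffer:
            return           # duplicate detection on the shared COUNT space
        self.buffer[count] = (leg, compressed_sdu)
        while self.next_count in self.buffer:
            leg, sdu = self.buffer.pop(self.next_count)
            # decompression happens here, in ascending order, with that BS's context
            deliver(self.ctx[leg].decompress(sdu))
            self.next_count += 1
```

Decompressing strictly in ascending order per BS is what keeps each header compression context consistent, which is the reason the 2-2 architecture reorders before decompression.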
As described above, to accelerate a data processing speed, the two lower transmit PDCP layer functions1i-21and1i-22may perform parallel processing to perform header compression, integrity protection, or a ciphering procedure in parallel, based on each PDCP SN or each COUNT value. Also, the two lower PDCP layer functions1i-21and1i-22may perform the integrity protection, the ciphering procedure, or the header decompression procedure by using different header (or data) compression contexts or different security keys. Also, the two lower PDCP layer functions1i-21and1i-22may perform the integrity protection, the ciphering procedure, or the decompression procedure on different data by applying different header (or data) compression contexts, different security keys, or different security algorithms in a logically-one PDCP layer. Also, the two lower receive PDCP layer functions1i-21and1i-22may perform an out-of-sequence deciphering or integrity verification procedure on each of a plurality of items of data received, without relation to the order of PDCP SNs or COUNT values. When the one PDCP layer distinguishes layers of the first bearer from layers of the second bearer, the PDCP layer may distinguish the layers (or a first RLC layer) of the first bearer from the layers (or a second RLC layer) of the second bearer, by taking into consideration that the layer of the first bearer and the layer of the second bearer are connected to different MAC layers, that the layers have different logical channel identifiers or are different RLC layers connected to different MAC layers, or that the layers use different ciphering keys. Accordingly, the PDCP layer may perform a ciphering procedure or a deciphering procedure on UL data and DL data by using different security keys, and may compress or decompress the UL data and the DL data by using different compression protocol contexts. In the disclosure, provided is the third PDCP layer architecture1i-30which is efficient in handover. The third PDCP layer architecture may be applied to Embodiment 2 of the efficient handover method for minimizing a data interruption time, which is proposed in the disclosure. A PDCP layer function in the third PDCP layer architecture proposed in the disclosure may be equal to that in the second PDCP layer architecture proposed in the disclosure. However, the third PDCP layer architecture may correspond to an architecture from which the first bearer for the source BS in the second PDCP layer architecture is released. In detail, the third PDCP layer architecture proposed in the disclosure may have the same functions as those of the second PDCP layer architecture but may have an architecture from which the first bearer (e.g., the SDAP layer, the PDCP layer, the RLC layer, or the MAC layer) for the source BS in the second PDCP layer architecture is released. Therefore, the third PDCP layer architecture may be characterized in that QoS mapping information of the SDAP layer for the source BS, security key information for the PDCP layer for the source BS, header (or data) compression context information for the source BS, or the RLC layer or the MAC layer for the source BS is released. FIG.1Jis a flowchart of operations of a UE, according to embodiments of the disclosure. InFIG.1J, the UE may perform data transmission or reception to or from a source BS for each bearer via a first PDCP layer architecture (operation1j-01). Then, the UE may receive a handover command message (operation1j-50).
When the handover command message indicates the DAPS handover method of Embodiment 2 proposed in the disclosure, or indicates the DAPS handover method for each bearer, the UE may switch, for each bearer (or for the bearers) for which the DAPS handover method is indicated, to the second PDCP layer architecture for the target BS indicated in the handover command message. Even when the UE configures and establishes protocol layers of a second bearer and performs a random access procedure on the target BS via the established protocol layers (operations1j-10and1j-15), the UE may continuously perform data transmission or reception (UL data transmission and DL data reception) to or from the source BS via protocol layers of a first bearer (operation1j-20). When the UE satisfies the first condition of the disclosure (operation1j-25), the UE may discontinue UL data transmission to the source BS via the protocol layers of the first bearer, may transmit UL data to the target BS via the protocol layers of the second bearer by switching the UL data transmission, and may continuously receive DL data from the source BS and the target BS via the protocol layers of the first and second bearers (operation1j-30). Also, a PDCP layer of the second bearer may continuously perform data transmission or reception without interruption to or from the target BS by using data to be transmitted or data to be received, SN information, or header compression and decompression context, which is stored in a PDCP layer of the first bearer. When the UE does not satisfy the first condition, the UE may continuously check the first condition while continuously performing an ongoing procedure (operation1j-35). When the UE satisfies the second condition (operation1j-40), the UE may discontinue DL data reception from the source BS via the protocol layers of the first bearer (operation1j-45). Also, the PDCP layer of the second bearer may continuously perform data transmission or reception without interruption to or from the target BS by using data to be transmitted or data to be received, SN information, or header compression and decompression context, which is stored in the PDCP layer of the first bearer. When the UE does not satisfy the second condition (operation1j-40), the UE may continuously check the second condition while continuously performing an ongoing procedure (operation1j-50). According to an embodiment of the disclosure, a PDCP layer proposed in the disclosure may perform different procedures according to the type of handover indicated in a handover command message received by a UE. When the handover indicated in the handover command message the UE receives from a source BS is the handover (e.g., a normal handover method) of Embodiment 1, the UE may perform a PDCP re-establishment procedure on the PDCP layer according to each bearer. When the handover indicated in the handover command message the UE receives from the source BS is the handover of Embodiment 2 (or the handover is indicated in the handover command message for each bearer), the UE may perform, on each bearer (or a bearer for which Embodiment 2 is indicated), the procedures proposed in the disclosure on condition that the first condition is satisfied. When the source BS indicates, to the UE, handover to which embodiments proposed in the disclosure are applied, the source BS may start data forwarding to a target BS when a third condition below is satisfied.
The third condition may mean that one or a plurality of conditions from among the conditions below is satisfied.
In a case where the UE receives, from the target BS, an indication indicating that handover is successfully completed
In a case where the source BS transmits a handover command message to the UE
In a case where the source BS transmits a handover command message to the UE and acknowledges successful delivery (HARQ ACK or NACK or RLC ACK or NACK) of the handover command message
In a case where the source BS receives, from the UE, an indication (e.g., an RRC message (e.g., an RRCReconfiguration message)) indicating that connection to the source BS is to be released, or receives a MAC CE, an RLC control PDU, or a PDCP control PDU from the UE
In a case where the source BS transmits a handover command message to the UE and drives a certain timer, and then the timer expires
In a case where acknowledgement (HARQ ACK or NACK or RLC ACK or NACK) with respect to successful delivery of DL data is not received from the UE for a certain time
FIG.1Kis a block diagram of a configuration of a UE to which an embodiment of the disclosure is applicable. Referring toFIG.1K, the UE may include a radio frequency (RF) processor1k-10, a baseband processor1k-20, a storage1k-30, and a controller1k-40. However, this is only an embodiment, and the components included in the UE are not limited thereto. The RF processor1k-10may perform functions for transmitting and receiving a signal via a radio channel, such as band conversion and amplification of the signal. In other words, the RF processor1k-10may up-convert a baseband signal provided from the baseband processor1k-20to an RF band signal and transmit the RF band signal through an antenna, and down-convert an RF band signal received through an antenna to a baseband signal. For example, the RF processor1k-10may include a transmit filter, a receive filter, an amplifier, a mixer, an oscillator, a digital-to-analog converter (DAC), and an analog-to-digital converter (ADC). Although only a single antenna is illustrated inFIG.1K, the UE may include multiple antennas. The RF processor1k-10may include a plurality of RF chains. The RF processor1k-10may perform beamforming. For the beamforming, the RF processor1k-10may adjust respective phases and amplitudes of signals transmitted or received through multiple antennas or antenna elements. The RF processor1k-10may perform a MIMO operation and may receive several layers in the MIMO operation. The RF processor1k-10may perform received beam sweeping by appropriately configuring multiple antennas or antenna elements, or may adjust a direction and a beam width of the received beam such that the received beam coordinates with a transmit beam, under the control of the controller1k-40. The baseband processor1k-20may perform conversion between a baseband signal and a bitstream, based on physical layer specifications of a system. For example, for data transmission, the baseband processor1k-20may generate complex symbols by encoding and modulating a transmit bitstream. For data reception, the baseband processor1k-20may reconstruct a received bitstream by demodulating and decoding a baseband signal provided by the RF processor1k-10.
For example, according to an orthogonal frequency division multiplexing (OFDM) scheme, for data transmission, the baseband processor1k-20may generate complex symbols by encoding and modulating a transmit bitstream, map the complex symbols to subcarriers, and then configure OFDM symbols by performing inverse fast Fourier transform (IFFT) and cyclic prefix (CP) insertion. For data reception, the baseband processor1k-20may split a baseband signal provided from the RF processor1k-10, in OFDM symbol units, reconstruct signals mapped to subcarriers by performing fast Fourier transformation (FFT), and then reconstruct a received bitstream by demodulating and decoding the signals. The baseband processor1k-20and the RF processor1k-10transmit and receive signals as described above. Accordingly, each of the baseband processor1k-20and the RF processor1k-10may also be called a transmitter, a receiver, a transceiver, or a communicator. At least one of the baseband processor1k-20or the RF processor1k-10may include multiple communication modules to support multiple different radio access technologies. Also, at least one of the baseband processor1k-20or the RF processor1k-10may include multiple different communication modules to process signals of different frequency bands. For example, the different radio access technologies may include an LTE network, an NR network, etc. The different frequency bands may include a super high frequency (SHF) (e.g., 2.5 GHz and 5 GHz) band and a millimeter wave (mmWave) (e.g., 60 GHz) band. The storage1k-30stores data for operations of the UE, e.g., basic programs, application programs, and configuration information. The storage1k-30provides the stored data upon request by the controller1k-40. The controller1k-40controls all operations of the UE. For example, the controller1k-40may transmit and receive signals through the baseband processor1k-20and the RF processor1k-10. The controller1k-40writes and reads data to and from the storage1k-30. To this end, the controller1k-40may include at least one processor. For example, the controller1k-40may include a communication processor (CP) performing control for communication, and an application processor (AP) controlling an upper layer, such as an application program. FIG.1Lis a block diagram of a configuration of a BS in a wireless communication system, to which an embodiment of the disclosure is applicable. Referring toFIG.1L, the BS may include an RF processor1I-10, a baseband processor1I-20, a communicator1I-30, a storage1I-40, and a controller1I-50. However, this is only an embodiment, and the components included in the BS are not limited thereto. The RF processor1I-10may perform functions for transmitting and receiving a signal via a radio channel, such as a band conversion, amplification, and the like of the signal. In other words, the RF processor1I-10may up-convert a baseband signal provided from the baseband processor1I-20, to an RF band signal and transmit the RF band signal through an antenna, and down-convert an RF band signal received through an antenna, to a baseband signal. For example, the RF processor1I-10may include a transmit filter, a receive filter, an amplifier, a mixer, an oscillator, a DAC, an ADC, or the like. Although only a single antenna is illustrated inFIG.1L, the BS may include multiple antennas. The RF processor1I-10may include a plurality of RF chains. The RF processor1I-10may perform beamforming. 
For beamforming, the RF processor1I-10may adjust phases and amplitudes of signals transmitted or received through multiple antennas or antenna elements. The RF processor1I-10may perform a downlink (DL) MIMO operation by transmitting at least one layer. The baseband processor1I-20may perform conversion between a baseband signal and a bitstream, based on physical layer specifications of a first radio access technology. For example, for data transmission, the baseband processor1I-20may generate complex symbols by encoding and modulating a transmit bitstream. For data reception, the baseband processor1I-20may reconstruct a received bitstream by demodulating and decoding a baseband signal provided by the RF processor1I-10. For example, according to an OFDM scheme, for data transmission, the baseband processor1I-20may generate complex symbols by encoding and modulating a transmit bitstream, map the complex symbols to subcarriers, and then configure OFDM symbols by performing IFFT and CP insertion. For data reception, the baseband processor1I-20may split a baseband signal provided from the RF processor1I-10, in OFDM symbol units, reconstruct signals mapped to subcarriers by performing FFT, and then reconstruct a received bitstream by demodulating and decoding the signals. The baseband processor1I-20and the RF processor1I-10transmit and receive signals as described above. Accordingly, each of the baseband processor1I-20and the RF processor1I-10may also be called a transmitter, a receiver, a transceiver, a communicator, or a wireless communicator. The communicator1I-30may provide an interface for communicating with other nodes in a network. The storage1I-40stores data for operations of the BS, e.g., basic programs, application programs, and configuration information. In particular, the storage1I-40may store information about bearers allocated for a connected UE, a measurement report transmitted from the connected UE, etc. The storage1I-40may store criteria information used to determine whether to provide or release multi-connectivity to or from the UE. The storage1I-40provides the stored data upon request by the controller1I-50. The controller1I-50controls all operations of the BS. For example, the controller1I-50may transmit and receive signals through the baseband processor1I-20and the RF processor1I-10or through the communicator1I-30. The controller1I-50writes and reads data to and from the storage1I-40. To this end, the controller1I-50may include at least one processor. In the disclosure, provided are various efficient handover methods for preventing occurrence of a data interruption time due to handover when the handover is performed in a wireless communication system, such that a service without data interruption may be supported.
DETAILED DESCRIPTION
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In addition, in the description of the present invention, if it is determined that a detailed description of a related known function or configuration may unnecessarily obscure the gist of the present invention, the detailed description thereof will be omitted. In addition, the terms to be described later are terms defined in consideration of functions in the present invention, which may vary according to intentions or customs of users and operators. Therefore, the definitions should be made based on the content throughout this specification. The terms used in the following description for indicating access nodes, network entities, messages, interfaces between network entities, and diverse identity information are provided for convenience of explanation. Accordingly, the terms used in the following description are not limited to specific meanings but may be replaced by other terms equivalent in technical meanings. In the following descriptions, the terms and definitions given in the 3GPP standards are used for convenience of explanation. However, the present disclosure is not limited by use of these terms and definitions, and other arbitrary terms and definitions may be employed instead. Table 1 lists the acronyms used throughout the present disclosure.
TABLE 1
5GC: 5G Core Network
ACK: Acknowledgement
AM: Acknowledged Mode
AMF: Access and Mobility Management Function
ARQ: Automatic Repeat Request
AS: Access Stratum
ASN.1: Abstract Syntax Notation One
BSR: Buffer Status Report
BWP: Bandwidth Part
CA: Carrier Aggregation
CAG: Closed Access Group
CG: Cell Group
C-RNTI: Cell RNTI
CSI: Channel State Information
DCI: Downlink Control Information
DRB: (user) Data Radio Bearer
DRX: Discontinuous Reception
HARQ: Hybrid Automatic Repeat Request
IE: Information element
LCG: Logical Channel Group
MAC: Medium Access Control
MIB: Master Information Block
NAS: Non-Access Stratum
NG-RAN: NG Radio Access Network
NR: NR Radio Access
PBR: Prioritised Bit Rate
PCell: Primary Cell
PCI: Physical Cell Identifier
PDCCH: Physical Downlink Control Channel
PDCP: Packet Data Convergence Protocol
PDSCH: Physical Downlink Shared Channel
PDU: Protocol Data Unit
PHR: Power Headroom Report
PLMN: Public Land Mobile Network
PRACH: Physical Random Access Channel
PRB: Physical Resource Block
PSS: Primary Synchronisation Signal
PUCCH: Physical Uplink Control Channel
PUSCH: Physical Uplink Shared Channel
RACH: Random Access Channel
RAN: Radio Access Network
RAR: Random Access Response
RA-RNTI: Random Access RNTI
RAT: Radio Access Technology
RB: Radio Bearer
RLC: Radio Link Control
RNA: RAN-based Notification Area
RNAU: RAN-based Notification Area Update
RNTI: Radio Network Temporary Identifier
RRC: Radio Resource Control
RRM: Radio Resource Management
RSRP: Reference Signal Received Power
RSRQ: Reference Signal Received Quality
RSSI: Received Signal Strength Indicator
SCell: Secondary Cell
SCS: Subcarrier Spacing
SDAP: Service Data Adaptation Protocol
SDU: Service Data Unit
SFN: System Frame Number
S-GW: Serving Gateway
SI: System Information
SIB: System Information Block
SpCell: Special Cell
SRB: Signalling Radio Bearer
SRS: Sounding Reference Signal
SS: Search Space
SSB: SS/PBCH block
SSS: Secondary Synchronisation Signal
SUL: Supplementary Uplink
TM: Transparent Mode
UCI: Uplink Control Information
UE: User Equipment
UM: Unacknowledged Mode
CRP: Cell Reselection Priority
Table 2 lists the terminologies and their definitions used throughout the present disclosure.
TABLE 2
Carrier frequency: center frequency of the cell.
Cell: combination of downlink and optionally uplink resources. The linking between the carrier frequency of the downlink resources and the carrier frequency of the uplink resources is indicated in the system information transmitted on the downlink resources.
Cell Group: in dual connectivity, a group of serving cells associated with either the MeNB or the SeNB.
Cell reselection: A process to find a better suitable cell than the current serving cell based on the system information received in the current serving cell.
Cell selection: A process to find a suitable cell either blindly or based on the stored information.
Cell Reselection Priority: Priority of a carrier frequency regarding cell reselection. System Information Block 2 and System Information Block 3 provide the CRP of the serving frequency and CRPs of inter-frequencies respectively. The UE considers a higher priority frequency for cell reselection if the channel condition of the frequency is better than a specific threshold, even if the channel condition of a lower priority frequency is better than that of the higher priority frequency.
Dedicated signalling: Signalling sent on the DCCH logical channel between the network and a single UE.
Field: The individual contents of an information element are referred to as fields.
Frequency layer: set of cells with the same carrier frequency.
Global cell identity: An identity to uniquely identify an NR cell. It consists of cellIdentity and plmn-Identity of the first PLMN-Identity in plmn-IdentityList in SIB1.
gNB: node providing NR user plane and control plane protocol terminations towards the UE, and connected via the NG interface to the 5GC.
Handover: procedure that changes the serving cell of a UE in RRC_CONNECTED.
Information element: A structural element containing single or multiple fields is referred to as an information element.
L: The Length field in a MAC subheader indicates the length of the corresponding MAC SDU or of the corresponding MAC CE.
LCID: 6-bit logical channel identity in a MAC subheader to denote which logical channel traffic or which MAC CE is included in the MAC subPDU.
Logical channel: a logical path between a RLC entity and a MAC entity. There are multiple logical channel types depending on what type of information is transferred, e.g. CCCH (Common Control Channel), DCCH (Dedicated Control Channel), DTCH (Dedicated Traffic Channel), PCCH (Paging Control Channel).
NR: NR radio access
PCell: SpCell of a master cell group.
registered PLMN: PLMN which the UE has registered to.
selected PLMN: PLMN which the UE has selected to perform the registration procedure.
equivalent PLMN: PLMN which is equivalent to the registered PLMN. The UE is informed of the list of EPLMNs by the AMF during the registration procedure.
PLMN ID Check: the process that checks whether a PLMN ID is the RPLMN identity or an EPLMN identity of the UE.
Primary Cell: The MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure.
Radio Bearer: Logical path between a PDCP entity and the upper layer (i.e. SDAP entity or RRC).
RLC bearer: RLC and MAC logical channel configuration of a radio bearer in one cell group.
RLC bearer configuration: The lower layer part of the radio bearer configuration comprising the RLC and logical channel configurations.
Serving Cell: For a UE in RRC_CONNECTED not configured with CA/DC there is only one serving cell, comprising the primary cell. For a UE in RRC_CONNECTED configured with CA/DC the term 'serving cells' is used to denote the set of cells comprising the Special Cell(s) and all secondary cells.
SpCell: primary cell of a master or secondary cell group.
Special Cell: For Dual Connectivity operation the term Special Cell refers to the PCell of the MCG or the PSCell of the SCG; otherwise the term Special Cell refers to the PCell.
SRB: Signalling Radio Bearers (SRBs) are defined as Radio Bearers (RBs) that are used only for the transmission of RRC and NAS messages.
SRB0: SRB0 is for RRC messages using the CCCH logical channel.
SRB1: SRB1 is for RRC messages (which may include a piggybacked NAS message) as well as for NAS messages prior to the establishment of SRB2, all using the DCCH logical channel.
SRB2: SRB2 is for NAS messages and for RRC messages which include logged measurement information, all using the DCCH logical channel. SRB2 has a lower priority than SRB1 and may be configured by the network after AS security activation.
SRB3: SRB3 is for specific RRC messages when the UE is in (NG)EN-DC or NR-DC, all using the DCCH logical channel.
SRB4: SRB4 is for RRC messages which include application layer measurement reporting information, all using the DCCH logical channel.
DCCH: DCCH is a logical channel to transfer RRC messages after RRC connection establishment.
Suitable cell: A cell on which a UE may camp. The following criteria apply: the cell is part of either the selected PLMN or the registered PLMN or a PLMN of the Equivalent PLMN list; the cell is not barred; the cell is part of at least one TA that is not part of the list of "Forbidden Tracking Areas for Roaming" (TS 22.011 [18]), which belongs to a PLMN that fulfils the first criterion above; and the cell selection criterion S is fulfilled (i.e. RSRP and RSRQ are better than specific values).
In the present invention, "trigger" or "triggered" and "initiate" or "initiated" may be used with the same meaning. In the present invention, a terminal with reduced capability and a RedCap UE may be used with the same meaning. FIG.1Ais a diagram illustrating the architecture of a 5G system and a NG-RAN to which the disclosure may be applied. The 5G system consists of NG-RAN1A-01and 5GC1A-02. An NG-RAN node is either: a gNB, providing NR user plane and control plane protocol terminations towards the UE; or an ng-eNB, providing E-UTRA user plane and control plane protocol terminations towards the UE. The gNBs1A-05or1A-06and ng-eNBs1A-03or1A-04are interconnected with each other by means of the Xn interface. The gNBs and ng-eNBs are also connected by means of the NG interfaces to the 5GC, more specifically to the AMF (Access and Mobility Management Function) and to the UPF (User Plane Function). AMF1A-07and UPF1A-08may be realized as a physical node or as separate physical nodes. A gNB1A-05or1A-06or an ng-eNB1A-03or1A-04hosts the functions listed below.
Functions for Radio Resource Management such as Radio Bearer Control, Radio Admission Control, Connection Mobility Control, Dynamic allocation of resources to UEs in uplink, downlink and sidelink (scheduling); and IP and Ethernet header compression, uplink data decompression and encryption of user data stream; and Selection of an AMF at UE attachment when no routing to an AMF can be determined from the information provided by the UE; and Routing of User Plane data towards UPF; and Scheduling and transmission of paging messages; and Scheduling and transmission of broadcast information (originated from the AMF or O&M); and Measurement and measurement reporting configuration for mobility and scheduling; and Session Management; and QoS Flow management and mapping to data radio bearers; and Support of UEs in RRC_INACTIVE state; and Radio access network sharing; and Tight interworking between NR and E-UTRA; and Support of Network Slicing. The AMF1A-07hosts functions such as NAS signaling, NAS signaling security, AS security control, SMF selection, Authentication, Mobility management and positioning management. The UPF1A-08hosts functions such as packet routing and forwarding, transport level packet marking in the uplink, QoS handling in the downlink, mobility anchoring for mobility, etc. FIG.1Bis a diagram illustrating a wireless protocol architecture in a 5G system to which the disclosure may be applied. The user plane protocol stack consists of SDAP1B-01or1B-02, PDCP1B-03or1B-04, RLC1B-05or1B-06, MAC1B-07or1B-08and PHY1B-09or1B-10. The control plane protocol stack consists of NAS1B-11or1B-12, RRC1B-13or1B-14, PDCP, RLC, MAC and PHY. Each protocol sublayer performs functions related to the operations listed in Table 3.
TABLE 3
NAS: authentication, mobility management, security control, etc.
RRC: System Information, Paging, Establishment, maintenance and release of an RRC connection, Security functions, Establishment, configuration, maintenance and release of Signalling Radio Bearers (SRBs) and Data Radio Bearers (DRBs), Mobility, QoS management, Detection of and recovery from radio link failure, NAS message transfer, etc.
SDAP: Mapping between a QoS flow and a data radio bearer, Marking QoS flow ID (QFI) in both DL and UL packets.
PDCP: Transfer of data, Header compression and decompression, Ciphering and deciphering, Integrity protection and integrity verification, Duplication, Reordering and in-order delivery, Out-of-order delivery, etc.
RLC: Transfer of upper layer PDUs, Error Correction through ARQ, Segmentation and re-segmentation of RLC SDUs, Reassembly of SDU, RLC re-establishment, etc.
MAC: Mapping between logical channels and transport channels, Multiplexing/demultiplexing of MAC SDUs belonging to one or different logical channels into/from transport blocks (TB) delivered to/from the physical layer on transport channels, Scheduling information reporting, Priority handling between UEs, Priority handling between logical channels of one UE, etc.
PHY: Channel coding, Physical-layer hybrid-ARQ processing, Rate matching, Scrambling, Modulation, Layer mapping, Downlink Control Information, Uplink Control Information, etc.
A reduced capability UE or RedCap UE has lower performance than a general UE and is used in limited scenarios such as IoT.
Compared to a typical terminal having a bandwidth of 100 MHz, a transmission/reception speed of several Gbps, and four or more Rx processing units (Rx branches), RedCap terminals have a bandwidth of 20 MHz, a transmission/reception speed of several tens of Mbps, and two or fewer Rx processing units. The present invention provides a method and apparatus for a RedCap UE to access a cell supporting RedCap, receive system information, and perform necessary operations. In particular, the terminal applies search space 0 (hereinafter SS #0) and control resource set 0 (hereinafter CORESET #0) in the initial bandwidth part (IBWP) to obtain system information. FIG.2Ais a diagram illustrating an example of a bandwidth part. With Bandwidth Adaptation (BA), the receive and transmit bandwidth of a UE need not be as large as the bandwidth of the cell and can be adjusted: the width can be ordered to change (e.g. to shrink during a period of low activity to save power); the location can move in the frequency domain (e.g. to increase scheduling flexibility); and the subcarrier spacing can be ordered to change (e.g. to allow different services). A subset of the total cell bandwidth of a cell is referred to as a Bandwidth Part (BWP), and BA is achieved by configuring the UE with BWP(s) and telling the UE which of the configured BWPs is currently the active one. FIG.2Adescribes a scenario where 3 different BWPs are configured:
BWP1 with a width of 40 MHz and subcarrier spacing of 15 kHz (2A-11or2A-19);
BWP2 with a width of 10 MHz and subcarrier spacing of 15 kHz (2A-13or2A-17);
BWP3 with a width of 20 MHz and subcarrier spacing of 60 kHz (2A-15).
FIG.2Bis a diagram illustrating an example of a search space and a control resource set. A plurality of SSs may be configured in one BWP. The UE monitors PDCCH candidates according to the SS configuration of the currently active BWP. One SS consists of an SS identifier, a CORESET identifier indicating the associated CORESET, the period and offset of the slot to be monitored, the slot unit duration, the symbol to be monitored in the slot, the SS type, and the like. The information may be explicitly and individually configured or may be configured by a predetermined index related to predetermined values. One CORESET consists of a CORESET identifier, frequency domain resource information, symbol unit duration, TCI state information, and the like. Basically, it can be understood that a CORESET provides the frequency domain information to be monitored by the UE, and an SS provides the time domain information to be monitored by the UE. CORESET #0 and SS #0 may be configured in the IBWP. One CORESET and a plurality of SSs may be additionally configured in the IBWP. Upon receiving the MIB2B-01, the UE recognizes CORESET #02B-02and SS #02B-03for receiving SIB1 using predetermined information included in the MIB. The UE receives SIB12B-05through CORESET #02B-02and SS #02B-03. SIB1 may include information constituting CORESET #02B-06and SS #02B-07, and information constituting another CORESET, for example, CORESET #n2B-11and SS #m2B-13. Before entering the RRC_CONNECTED state, the terminal receives necessary information from the base station, such as SIB2, paging, and random access response messages, by using the CORESETs and SSs configured in SIB1.
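As a rough illustration of the MIB-to-SIB1 bootstrap described above, the sketch below resolves the two 4-bit MIB indices against predefined tables to obtain the first CORESET #0 and first SS #0 on which SIB1 is then monitored. The table rows here are made-up placeholders, not the actual entries predefined in the standard specification.

```python
# Placeholder tables; the real 16-entry tables are fixed in the specification.
CORESET0_TABLE = {0: {"num_rbs": 24, "num_symbols": 2, "rb_offset": 0}}
SS0_TABLE = {0: {"slot_period": 20, "slot_offset": 0, "first_symbol": 0}}

def sib1_monitoring_config(mib: dict) -> tuple[dict, dict]:
    """Map the 4-bit MIB indices to the first CORESET #0 and first SS #0."""
    coreset0 = CORESET0_TABLE[mib["controlResourceSetZero"]]  # frequency domain
    ss0 = SS0_TABLE[mib["searchSpaceZero"]]                   # time domain
    return coreset0, ss0

# The UE would then monitor PDCCH for SIB1 on the returned resources:
coreset, ss = sib1_monitoring_config({"controlResourceSetZero": 0,
                                      "searchSpaceZero": 0})
```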
CORESET #02B-02configured in the MIB and CORESET #02B-06configured in SIB1 may be different from each other; the former is called a first CORESET #0 and the latter is called a second CORESET #0. SS #02B-03configured in the MIB and SS #02B-07configured in SIB1 may be different from each other; the former is referred to as a first SS #0 and the latter is referred to as a second SS #0. SS #0 and CORESET #0 configured for the RedCap terminal are referred to as a third SS #0 and a third CORESET #0. The first SS #0, the second SS #0, and the third SS #0 may be the same as or different from each other. The first CORESET #0, the second CORESET #0, and the third CORESET #0 may be the same as or different from each other. SS #0 and CORESET #0 are each indicated by a 4-bit index. The 4-bit index indicates a configuration predetermined in the standard specification. Except for SS #0 and CORESET #0, the detailed configuration of the remaining SSs and CORESETs is indicated by each individual information element. When the RRC connection is established, additional BWPs may be configured for the UE. FIG.3is a diagram illustrating operations of a terminal and a base station according to an embodiment of the present disclosure. In a network consisting of a RedCap UE3A-01, a base station3A-03and an AMF3A-05, the RedCap UE receives system information, determines whether to bar a cell, performs cell reselection, monitors a paging message, selects and applies cell common configuration information, and transmits and receives RRC control messages. In step3A-11, the RedCap UE camps on a cell managed by the base station by performing cell selection or cell reselection. The RedCap UE selects a cell having a good reception signal from among cells of the highest priority frequency in consideration of cell reselection priority and the like. In step3A-13, the RedCap UE receives the MIB in the selected cell. The MIB includes controlResourceSetZero, which is a 4-bit index indicating the configuration of the first CORESET #0, and searchSpaceZero, which is a 4-bit index indicating the configuration of the first SS #0. The UE receives SIB1 by applying the frequency domain and time pattern indicated by the first CORESET #0 and the first SS #0. The MIB includes cellBarred, which is 1-bit information indicating whether or not the cell is barred. cellBarred indicates either barred or notBarred. The UE uses cellBarred to determine whether to bar the cell. The MIB includes a first intraFreqReselection that is 1-bit information for controlling intra-frequency cell reselection. The first intraFreqReselection is defined as Enumerated {allowed, notAllowed}. It is also called IFRI_MIB. In step3A-15, the RedCap UE receives SIB1. The RedCap UE stores the acquired SIB1. SIB1 includes ServingCellConfigCommon, which is common configuration information of a serving cell, and a second intraFreqReselection. The second intraFreqReselection is defined as Enumerated with one of allowed and notAllowed. It is also called IFRI_SIB. In step3A-16, the RedCap UE selects one of a plurality of common configuration information included in ServingCellConfigCommon. The servingCellConfigCommon of SIB1 includes the following information.
TABLE 4
DownlinkConfigCommon: This is the common downlink configuration of the serving cell. It consists of subfields such as frequencyInfoDL, initialDownlinkBWP, bcch-Config, and pcch-Config.
frequencyInfoDL: It is a basic parameter of the downlink carrier. It consists of subfields such as a frequency band list and carrier bandwidth for each SCS.
initialDownlinkBWP: This is the configuration of the second downlink IBWP. It consists of subfields such as BWP, PDCCH-ConfigCommon, and PDSCH-ConfigCommon. The first IBWP has a frequency domain corresponding to the first CORESET#0 of the MIB and has the subcarrier spacing indicated by the MIB. The first IBWP is the IBWP indicated by the MIB and used for receiving SIB1; the second IBWP is the IBWP indicated by SIB1 and used for receiving SIB2, paging, the random access response message, and the like.
BWP: It is the IE that configures general parameters of a BWP. It consists of subfields such as locationAndBandwidth indicating the bandwidth and location of the BWP, and subcarrierSpacing indicating the SCS of the BWP.
PDCCH-ConfigCommon: These are the cell-specific PDCCH parameters of the BWP. It consists of subfields such as controlResourceSetZero, commonControlResourceSet, searchSpaceZero, commonSearchSpaceList, searchSpaceOtherSystemInformation, pagingSearchSpace, and ra-SearchSpace.
controlResourceSetZero: It is defined as an integer between 0 and 15. Indicates one of the predefined CORESET#0 configurations. The controlResourceSetZero included in the MIB corresponds to the first CORESET#0, and the controlResourceSetZero included in the PDCCH-ConfigCommon of the servingCellConfigCommon of SIB1 corresponds to the second CORESET#0.
searchSpaceZero: It is defined as an integer between 0 and 15. Indicates one of the predefined SS#0 configurations. The searchSpaceZero included in the MIB corresponds to the first SS#0, and the searchSpaceZero included in the PDCCH-ConfigCommon of the servingCellConfigCommon of SIB1 corresponds to the second SS#0.
commonControlResourceSet: A common CORESET defined by the ControlResourceSet IE. Defines an additional CORESET that can be used for paging reception, random access response reception, system information reception, etc.
commonSearchSpaceList: List of common SSs. The common SSs may be used for paging reception, random access response reception, system information reception, and the like.
searchSpaceOtherSystemInformation: Defined by the SS identifier IE. If it is 0, the second SS#0 is indicated, and if it is a value other than 0, one of the SSs defined in commonSearchSpaceList is indicated.
pagingSearchSpace: Defined by the SS identifier IE. If it is 0, the second SS#0 is indicated, and if it is a value other than 0, one of the SSs defined in commonSearchSpaceList is indicated.
ra-SearchSpace: Defined by the SS identifier IE. If it is 0, the second SS#0 is indicated. If it is a value other than 0, one of the SSs defined in commonSearchSpaceList is indicated.
PDSCH-ConfigCommon: Cell-specific PDSCH parameters of this BWP. It consists of a pdsch-TimeDomainAllocationList. The pdsch-TimeDomainAllocationList is a list composed of a plurality of pdsch-TimeDomainAllocations.
pdsch-TimeDomainAllocation: A time domain relationship between the PDCCH and the PDSCH. It consists of subfields such as K0 and startSymbolAndLength. K0 is the slot offset between the DCI and the scheduled PDSCH. startSymbolAndLength is an index indicating a valid start symbol and length combination.
pcch-Config: Configuration related to paging. It consists of subfields such as the base station paging period, PF-related parameters, and PO-related parameters.
bcch-Config: It is a configuration related to system information. It consists of subfields such as modificationPeriodCoeff indicating the length of the modification period.
UplinkConfigCommonSIB: This is the common uplink configuration of the serving cell. It consists of subfields such as frequencyInfoUL, initialUplinkBWP, and timeAlignmentTimerCommon.
frequencyInfoUL: It is a basic parameter of the uplink carrier. It consists of subfields such as a frequency band list and carrier bandwidth for each SCS.
initialUplinkBWP: This is the configuration of the second uplink IBWP. It consists of subfields such as BWP, rach-ConfigCommon, pusch-ConfigCommon, and pucch-ConfigCommon.
rach-ConfigCommon: These are the cell-specific random access parameters of the BWP. It consists of subfields such as prach-ConfigurationIndex, msg1-FrequencyStart, preambleReceivedTargetPower, ra-ResponseWindow, preambleTransMax, msg1-SubcarrierSpacing, rsrp-ThresholdSSB, and ra-ContentionResolutionTimer.
prach-ConfigurationIndex: PRACH configuration index. One PRACH configuration corresponds to pattern information on a PRACH transmission opportunity in the time domain (information indicating in which symbol in which slot of which radio frame PRACH transmission is possible), a transmission format of a preamble, and the like.
msg1-FrequencyStart: The offset from PRB0 of the lowest PRACH transmission opportunity. Information indicating a PRACH transmission resource in the frequency domain. PRB0 is the lowest frequency PRB among the PRBs of the corresponding carrier.
preambleReceivedTargetPower: This is the target power level of the network receiving end. It is a parameter related to transmission power control during the random access procedure.
ra-ResponseWindow: The length of the random access response window expressed in the number of slots.
preambleTransMax: The maximum number of random access preamble transmissions.
msg1-SubcarrierSpacing: It is the SCS of the PRACH. It is commonly applied to general terminals and RedCap UEs.
rsrp-ThresholdSSB: SSB selection criterion. The UE performs random access by selecting a preamble corresponding to the selected SSB.
ra-ContentionResolutionTimer: This is the initial value of the contention resolution timer. Indicates the number of subframes.
pusch-ConfigCommon: Cell-specific PUSCH parameters of this BWP. It consists of subfields like pusch-TimeDomainAllocationList. The pusch-TimeDomainAllocationList is a list composed of a plurality of pusch-TimeDomainAllocations.
pusch-TimeDomainAllocation: A time domain relationship between the PDCCH and the PUSCH. It consists of subfields such as K2 and startSymbolAndLength. K2 is the slot offset between the DCI and the scheduled PUSCH. startSymbolAndLength is an index indicating a valid combination of start symbol and length.
pucch-ConfigCommon: These are the cell-specific PUCCH parameters of the BWP. It consists of subfields such as pucch-ResourceCommon and p0-norminal.
pucch-ResourceCommon: It is an index corresponding to cell-specific PUCCH resource parameters. One index corresponds to a PUCCH format, a PUCCH time period, a PUCCH frequency period, a PUCCH code, and the like.
p0-norminal: This is a power offset applied during PUCCH transmission. Defined as an integer between -202 and 24 in increments of 2. The unit is dBm.
timeAlignmentTimerCommon: This is a timer applied when the UE performs random access for the RRC connection establishment procedure and the RRC connection re-establishment procedure. When the UE receives the RAR, it starts the timer, and stops the timer when contention resolution fails.
tdd-UL-DL-ConfigurationCommon: Cell-specific TDD UL/DL configuration. It consists of subfields such as referenceSubcarrierSpacing, pattern1, and pattern2.
referenceSubcarrierSpacing: This is the reference SCS used to determine the time domain boundary in the UL-DL pattern.
pattern1, pattern2: TDD Uplink Downlink Pattern. It consists of subfields such as dl-UL-TransmissionPeriodicity, nrofDownlinkSlots, nrofDownlinkSymbols, nrofUplinkSlots, and nrofUplinkSymbols.
dl-UL-TransmissionPeriodicity: Indicates the period of the DL-UL pattern.
nrofDownlinkSlots: Indicates the number of consecutive full DL slots in each DL-UL pattern.
nrofDownlinkSymbols: Indicates the number of consecutive DL symbols from the beginning of the slot following the last full DL slot.
nrofUplinkSlots: Indicates the number of consecutive full UL slots in each DL-UL pattern.
nrofUplinkSymbols: Indicates the number of consecutive UL symbols at the end of the slot preceding the first full UL slot.
ServingCellConfigCommon may also include the following information for the RedCap UE.
TABLE 5
controlResourceSetZero_RedCap: It is defined as an integer between 0 and 15. Indicates one of the predefined CORESET#0 configurations. It corresponds to the third CORESET #0.
searchSpaceZero_RedCap: It is defined as an integer between 0 and 15. Indicates one of the predefined SS#0 configurations. It corresponds to the third SS#0.
searchSpaceOtherSystemInformation_RedCap: Defined by the SS identifier IE. If it is 0, the third SS#0 is indicated; if not 0, one of the SSs defined in commonSearchSpaceList is indicated.
ra-SearchSpace_RedCap: Defined by the SS identifier IE. If it is 0, the third SS#0 is indicated; if not 0, one of the SSs defined in commonSearchSpaceList is indicated.
prach-ConfigurationIndex_RedCap: PRACH configuration index for RedCap.
msg1-FrequencyStart_RedCap: PRACH transmission resource information on the frequency domain for RedCap.
preambleReceivedTargetPower_RedCap: The target power level of the network receiver for RedCap.
ra-ResponseWindow_RedCap: Length of the random access response window for RedCap.
preambleTransMax_RedCap: Maximum number of random access preamble transmissions for RedCap.
rsrp-ThresholdSSB_RedCap: SSB selection criterion for RedCap.
ra-ContentionResolutionTimer_RedCap: Initial value of the contention resolution timer for RedCap.
intraFreqReselection_RedCap: Controls cell selection/reselection within the frequency of the RedCap UE when the highest-priority cell is barred. It is 1-bit information and is defined as Enumerated {allowed, notAllowed}. It is also called IFRI_SIB1.
IFRI_MIB is defined to be mandatorily present and IFRI_SIB1 is defined to be optionally present. This is to ensure backward compatibility of SIB1. Instead of defining IEs for RedCap UEs in units of individual IEs, it is also possible to define configuration information related to RedCap UEs in units of IE sets as follows. ServingCellConfigCommon of SIB1 includes downlink IBWP configuration information and uplink IBWP configuration information. The downlink IBWP configuration information includes PDCCH-ConfigCommon and PDCCH-ConfigCommon2. PDCCH-ConfigCommon is used by general terminals and RedCap UEs, and PDCCH-ConfigCommon2 is used by RedCap UEs. The RedCap UE uses PDCCH-ConfigCommon when only PDCCH-ConfigCommon is included in the downlink IBWP configuration information, and uses PDCCH-ConfigCommon2 when both PDCCH-ConfigCommon and PDCCH-ConfigCommon2 are included. PDCCH-ConfigCommon includes controlResourceSetZero, commonControlResourceSet, searchSpaceZero, commonSearchSpaceList, searchSpaceOtherSystemInformation, pagingSearchSpace, and ra-SearchSpace.
PDCCH-ConfigCommon2 includes controlResourceSetZero_RedCap, commonControlResourceSet_RedCap, searchSpaceZero_RedCap, commonSearchSpaceList_RedCap, and ra-SearchSpace_RedCap. The RedCap UE uses the controlResourceSetZero and searchSpaceZero of PDCCH-ConfigCommon if controlResourceSetZero_RedCap and searchSpaceZero_RedCap are not included in PDCCH-ConfigCommon2; that is, the third SS #0 is considered to be configured with the same value as the second SS #0, and the third CORESET #0 with the same value as the second CORESET #0. The RedCap UE uses the values indicated in the MIB when controlResourceSetZero_RedCap and searchSpaceZero_RedCap are not included in PDCCH-ConfigCommon2 and controlResourceSetZero and searchSpaceZero are not included in PDCCH-ConfigCommon; that is, the third SS #0 is considered to be configured with the same value as the first SS #0, and the third CORESET #0 with the same value as the first CORESET #0. The RedCap UE uses the ra-SearchSpace of PDCCH-ConfigCommon if ra-SearchSpace_RedCap is not included in PDCCH-ConfigCommon2; that is, ra-SearchSpace_RedCap is considered to be set to the same value as ra-SearchSpace. The RedCap UE performs the random access procedure by applying the third SS #0 and the third CORESET #0.

The uplink IBWP configuration information includes PUCCH-ConfigCommon and PUCCH-ConfigCommon2. PUCCH-ConfigCommon is used by general UEs and RedCap UEs, and PUCCH-ConfigCommon2 is used by RedCap UEs. The RedCap UE uses PUCCH-ConfigCommon when only PUCCH-ConfigCommon is included in the uplink IBWP configuration information, and uses PUCCH-ConfigCommon2 when both PUCCH-ConfigCommon and PUCCH-ConfigCommon2 are included. The uplink IBWP configuration information likewise includes PUSCH-ConfigCommon and PUSCH-ConfigCommon2: PUSCH-ConfigCommon contains pusch-TimeDomainAllocationList, and PUSCH-ConfigCommon2 contains pusch-TimeDomainAllocationList_RedCap.

The uplink IBWP configuration information includes RACH-ConfigCommon and RACH-ConfigCommon2. RACH-ConfigCommon is used by general terminals and RedCap UEs, and RACH-ConfigCommon2 is used by RedCap UEs. The RedCap UE uses RACH-ConfigCommon when only RACH-ConfigCommon is included in the uplink IBWP configuration information, and uses RACH-ConfigCommon2 when both RACH-ConfigCommon and RACH-ConfigCommon2 are included. RACH-ConfigCommon includes prach-ConfigurationIndex, msg1-FrequencyStart, preambleReceivedTargetPower, ra-ResponseWindow, preambleTransMax, msg1-SubcarrierSpacing, rsrp-ThresholdSSB, and ra-ContentionResolutionTimer. RACH-ConfigCommon2 includes prach-ConfigurationIndex_RedCap, msg1-FrequencyStart_RedCap, preambleReceivedTargetPower_RedCap, ra-ResponseWindow_RedCap, preambleTransMax_RedCap, rsrp-ThresholdSSB_RedCap, and ra-ContentionResolutionTimer_RedCap. The msg1-SubcarrierSpacing included in RACH-ConfigCommon is applied to both normal UEs and RedCap UEs. In other words, when applying Msg1 frequency-related information, the RedCap UE applies the msg1-FrequencyStart included in RACH-ConfigCommon2 together with the msg1-SubcarrierSpacing included in RACH-ConfigCommon.
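The fallback rule above follows one pattern: prefer the RedCap-specific field when SIB1 provides it, otherwise fall back to the shared field, and finally to the MIB. The sketch below illustrates this for CORESET #0 and SS #0; the Python class layout and field names are illustrative stand-ins for the IEs, not the actual ASN.1 structures.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PdcchConfigCommon:
    # 4-bit indexes into the predefined CORESET#0 / SS#0 configurations;
    # None models an absent optional field.
    control_resource_set_zero: Optional[int] = None
    search_space_zero: Optional[int] = None

def third_coreset0_and_ss0(mib_coreset0: int, mib_ss0: int,
                           common: PdcchConfigCommon,
                           common2: Optional[PdcchConfigCommon]) -> Tuple[int, int]:
    """Resolve the third CORESET#0/SS#0 that a RedCap UE applies.

    Preference order: PDCCH-ConfigCommon2 (RedCap-specific), then
    PDCCH-ConfigCommon (shared), then the values indicated in the MIB.
    The sketch assumes CORESET#0 and SS#0 are always configured as a pair.
    """
    if common2 is not None and common2.control_resource_set_zero is not None:
        return common2.control_resource_set_zero, common2.search_space_zero
    if common.control_resource_set_zero is not None:
        return common.control_resource_set_zero, common.search_space_zero
    return mib_coreset0, mib_ss0
```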
If RACH-ConfigCommon2 does not contain prach-ConfigurationIndex_RedCap, msg1-FrequencyStart_RedCap, preambleReceivedTargetPower_RedCap, ra-ResponseWindow_RedCap, preambleTransMax_RedCap, msg1-SubcarrierSpacing_RedCap, rsrp-ThresholdSSB_RedCap, and ra-ContentionResolutionTimer_RedCap, the RedCap UE uses the values of prach-ConfigurationIndex, msg1-FrequencyStart, preambleReceivedTargetPower, ra-ResponseWindow, preambleTransMax, msg1-SubcarrierSpacing, rsrp-ThresholdSSB, and ra-ContentionResolutionTimer in RACH-ConfigCommon, respectively.

In another method, the ServingCellConfigCommon of SIB1 includes first downlink IBWP configuration information, first uplink IBWP configuration information, second downlink IBWP configuration information, second uplink IBWP configuration information, and tdd-UL-DL-ConfigurationCommon. The first downlink IBWP configuration information and the first uplink IBWP configuration information are information for a terminal with general capability, and the second downlink IBWP configuration information and the second uplink IBWP configuration information are information for a RedCap UE. tdd-UL-DL-ConfigurationCommon is information that is commonly applied to a UE with general capability and a RedCap UE. The first uplink IBWP configuration information includes pucch-ConfigCommon and timeAlignmentTimerCommon. The second uplink IBWP configuration information may include pucch-ConfigCommon_RedCap. The pucch-ConfigCommon may include a first pucch-ResourceCommon and a first p0-nominal. The pucch-ConfigCommon_RedCap may include a second pucch-ResourceCommon and a second p0-nominal. pucch-ConfigCommon is information for a normal UE; pucch-ConfigCommon_RedCap is information for a RedCap UE. timeAlignmentTimerCommon is information commonly applied to the normal UE and the RedCap UE. The RedCap UE transmits the preamble and starts timeAlignmentTimerCommon upon reception of the RAR. Upon receiving Msg 4, the UE transmits a HARQ ACK by applying a predetermined pucch-ResourceCommon and a predetermined p0-nominal. If both the second pucch-ResourceCommon and the first pucch-ResourceCommon exist, the time/frequency/code resource for transmitting the HARQ ACK is determined by applying the second pucch-ResourceCommon. If only the first pucch-ResourceCommon exists, the time/frequency/code resource for transmitting the HARQ ACK is determined by applying the first pucch-ResourceCommon. When both the second p0-nominal and the first p0-nominal exist, the second p0-nominal is applied to determine the power offset for the HARQ ACK. If only the first p0-nominal exists, the power offset for the HARQ ACK is determined by applying the first p0-nominal. If neither the second p0-nominal nor the first p0-nominal exists, the power offset for the HARQ ACK is determined by applying a predetermined value. The predetermined value may be, for example, 2 dBm.

In step 3A-17, the RedCap UE determines whether the current cell is a barred cell or an allowed cell, in consideration of the MIB and SIB1. Regarding cell barring, the RedCap UE determines that the current cell is not barred if all of the following conditions are satisfied. The conditions below are defined so that the RedCap UE camps on the cell only when it can operate properly in the cell (an illustrative sketch of this check follows Table 6 below).
<Cell Allowance Conditions>
0: The received MIB's cellBarred is set to notBarred.
1: IFRI_SIB1 exists (i.e., is included) in the received SIB1. The absence of IFRI_SIB1 means that the corresponding cell does not consider the operation of RedCap UEs, whereas its presence means that the cell does.
2: If the current cell is a TDD cell, the UE supports one or more of the frequency bands indicated in the frequencyBandList for downlink in the received SIB1; if it is an FDD cell, the UE supports one or more of the frequency bands indicated in the frequencyBandList for uplink in the received SIB1; and those bands are not downlink-only bands.
3: The UE supports an uplink channel bandwidth with a maximum transmission bandwidth configuration that is smaller than or equal to the uplink carrierBandwidth indicated in SIB1 and wider than or equal to the bandwidth of the initial uplink BWP.
4: The UE supports a downlink channel bandwidth with a maximum transmission bandwidth configuration that is smaller than or equal to the downlink carrierBandwidth indicated in SIB1 and wider than or equal to the bandwidth of the initial downlink BWP.
5: trackingAreaCode is provided in SIB1 for the selected PLMN, the registered PLMN, or a PLMN of the equivalent PLMN list.

For example, if trackingAreaCode x is included in SIB1 and the trackingAreaCode related to the registered PLMN of the terminal is also x, condition 5 is satisfied. The trackingAreaCode related to the PLMN is provided to the terminal by the AMF during the registration procedure with the terminal. The RedCap UE, having determined that the current cell is not barred, performs the following operations.

<Operation of Terminal after Receiving SIB1 in a Non-Barred Cell>
1: Apply the configuration included in servingCellConfigCommon. More specifically, the UE applies the TDD-UL-DL configuration to determine the downlink slots, uplink slots, downlink symbols, and uplink symbols; applies a PDSCH configuration selected from among a plurality of PDSCH-ConfigCommon to receive the PDSCH; and applies a PUSCH configuration selected from among a plurality of PUSCH-ConfigCommon to transmit the PUSCH.
2: Apply the specified PCCH configuration, which is no SDAP, no PDCP, and RLC TM. A paging message is received by applying this PCCH configuration.
3: If a valid SIB is stored, use the stored SIB; if a valid SIB is not stored, acquire the related system information message (SI message).

The UE also receives subsequent system information, for example SIB2, SIB3, and SIB4, in the non-barred cell. SIB2 includes parameters for intra-frequency cell reselection. SIB3 includes other parameters for intra-frequency cell reselection. SIB4 contains parameters for inter-frequency cell reselection. The RedCap UE regards the current serving cell as a barred cell in the cases listed in the table below and performs an appropriate operation according to the situation.

TABLE 6
Case 1. Situation: MIB reception failure. RedCap UE operation: The current cell is considered a barred cell and is excluded from cell selection/cell reselection candidates for 300 seconds. It is assumed that both IFRI_MIB and IFRI_SIB1 are allowed; that is, neighboring cells of the corresponding frequency may be included in the cell selection/cell reselection candidates.
Case 2. Situation: Successful reception of the MIB with cellBarred set to notBarred; SIB1 reception failure. RedCap UE operation: The current cell is considered a barred cell and is excluded from cell selection/cell reselection candidates for 300 seconds. If the received IFRI_MIB is allowed, IFRI_SIB1 is considered allowed, and neighboring cells of the corresponding frequency may be included in the cell selection/cell reselection candidates. If the received IFRI_MIB is notAllowed, IFRI_SIB1 is also considered notAllowed, and neighboring cells of the corresponding frequency are excluded from the cell selection/cell reselection candidates.
Case 3. Situation: Successful reception of the MIB with cellBarred set to barred. RedCap UE operation: The current cell is considered a barred cell and is excluded from cell selection/cell reselection candidates for 300 seconds. If the received IFRI_MIB is allowed, IFRI_SIB1 is considered allowed, and neighboring cells of the corresponding frequency may be included in the cell selection/cell reselection candidates. If the received IFRI_MIB is notAllowed, IFRI_SIB1 is also considered notAllowed, and neighboring cells of the corresponding frequency are excluded from the cell selection/cell reselection candidates. The general terminal does not receive SIB1. The RedCap UE may instead receive SIB1 rather than referring to IFRI_MIB, and may include or exclude neighboring cells of the corresponding frequency in the cell selection/cell reselection candidates according to the received value of IFRI_SIB1.
Case 4. Situation: Successful MIB reception with cellBarred set to notBarred; SIB1 received without IFRI_SIB1. RedCap UE operation: The current cell is considered a barred cell and is excluded from cell selection/cell reselection candidates for 300 seconds. Regardless of the value of the received IFRI_MIB, IFRI_SIB1 may be considered notAllowed and neighboring cells of the corresponding frequency may be excluded from the cell selection/cell reselection candidates.
Case 5. Situation: Successful MIB reception with cellBarred set to notBarred; SIB1 received with IFRI_SIB1; the bandwidth supported by the terminal is less than the bandwidth of the IBWP. RedCap UE operation: The current cell is considered a barred cell and is excluded from cell selection/cell reselection candidates for 300 seconds. According to the received IFRI_SIB1 value, neighboring cells of the corresponding frequency are included in or excluded from the cell selection/cell reselection candidates.
Case 6. Situation: Successful MIB reception with cellBarred set to notBarred; SIB1 received with IFRI_SIB1; the bandwidth supported by the terminal is greater than or equal to the bandwidth of the IBWP; there is no trackingAreaCode matching the trackingAreaCode received in SIB1. RedCap UE operation: The current cell is considered a barred cell and is excluded from cell selection/cell reselection candidates for 300 seconds. Regardless of the received IFRI values, both IFRI_MIB and IFRI_SIB1 are considered notAllowed and neighboring cells of the corresponding frequency are excluded from the cell selection/cell reselection candidates.

The reason the RedCap UE operates as described above is to prevent camping on a cell that does not support the RedCap function and to appropriately control whether to reselect cells of the same frequency. If there is no IFRI to refer to, as in case 1, both IFRIs may be assumed to have a predetermined value and operation may proceed accordingly. Alternatively, if reception of IFRI_SIB1 fails as in case 2, IFRI_MIB may be referred to.
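Restated in executable form, the cell allowance check of step 3A-17 reduces to evaluating conditions 0 through 5 in order. The following is a minimal sketch: every field name is a hypothetical stand-in for the MIB/SIB1 contents, and condition 2 is simplified to a band-overlap test.

```python
from types import SimpleNamespace as NS

def cell_is_allowed(mib, sib1, ue) -> bool:
    """Evaluate the cell allowance conditions 0-5 (simplified)."""
    if mib.cell_barred != "notBarred":                              # condition 0
        return False
    if sib1.ifri_sib1 is None:                                      # condition 1:
        return False                                                # cell ignores RedCap UEs
    if not set(sib1.frequency_band_list) & set(ue.supported_bands):
        return False                                                # condition 2 (band overlap)
    if not sib1.initial_ul_bwp_bw <= ue.max_ul_channel_bw <= sib1.ul_carrier_bw:
        return False                                                # condition 3
    if not sib1.initial_dl_bwp_bw <= ue.max_dl_channel_bw <= sib1.dl_carrier_bw:
        return False                                                # condition 4
    return ue.tac in sib1.tracking_area_codes                       # condition 5

ue = NS(supported_bands={78}, max_ul_channel_bw=20, max_dl_channel_bw=20, tac="x")
sib1 = NS(ifri_sib1="allowed", frequency_band_list=[78],
          initial_ul_bwp_bw=20, ul_carrier_bw=100,
          initial_dl_bwp_bw=20, dl_carrier_bw=100, tracking_area_codes={"x"})
print(cell_is_allowed(NS(cell_barred="notBarred"), sib1, ue))       # -> True
```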
The RedCap UE may be given two IFRI parameters, IFRI_MIB and IFRI_SIB1. The RedCap UE considers the two parameters and determines whether to allow intra-frequency reselection as shown in the table below.

TABLE 7
IFRI_MIB: reception failure; IFRI_SIB1: reception failure. Operation: IFRI_SIB1 is considered as Allowed.
IFRI_MIB: Allowed; IFRI_SIB1: reception failure. Operation: IFRI_SIB1 is considered as Allowed (considered to have the same value as IFRI_MIB).
IFRI_MIB: Allowed; IFRI_SIB1: not present. Operation: IFRI_SIB1 is considered as NotAllowed (it is determined that RedCap is not supported on the corresponding frequency).
IFRI_MIB: Allowed; IFRI_SIB1: Allowed. Operation: IFRI_SIB1 is considered as Allowed (the received IFRI_SIB1 is applied as it is).
IFRI_MIB: Allowed; IFRI_SIB1: NotAllowed. Operation: IFRI_SIB1 is considered as NotAllowed (the received IFRI_SIB1 is applied as it is).
IFRI_MIB: NotAllowed; IFRI_SIB1: reception failure. Operation: IFRI_SIB1 is considered as NotAllowed (considered to have the same value as IFRI_MIB).
IFRI_MIB: NotAllowed; IFRI_SIB1: not present. Operation: IFRI_SIB1 is considered as NotAllowed (it is determined that RedCap is not supported on the corresponding frequency).
IFRI_MIB: NotAllowed; IFRI_SIB1: Allowed. Operation: IFRI_SIB1 is considered as Allowed (the received IFRI_SIB1 is applied as it is).
IFRI_MIB: NotAllowed; IFRI_SIB1: NotAllowed. Operation: IFRI_SIB1 is considered as NotAllowed (the received IFRI_SIB1 is applied as it is).

The RedCap UE applies the received IFRI_SIB1 if both IFRI_MIB and IFRI_SIB1 are received. The RedCap UE considers that IFRI_SIB1 is Allowed if neither IFRI_MIB nor IFRI_SIB1 is received. If the RedCap UE receives IFRI_MIB but does not receive IFRI_SIB1, it determines IFRI_SIB1 by distinguishing whether SIB1 reception failed or IFRI_SIB1 is simply not included in SIB1. If the reception of SIB1 is unsuccessful, the UE considers that IFRI_SIB1 is the same as IFRI_MIB. If SIB1 is received but IFRI_SIB1 is not included, the UE considers that IFRI_SIB1 has a predetermined value (e.g., notAllowed). This is because cells of the same frequency in the same region are highly likely to be configured identically: if IFRI_SIB1 is not provided in the current cell, it is highly likely that IFRI_SIB1 is not provided in other cells either. Alternatively, if the UE is preconfigured to consider IFRI_SIB1 as Allowed when SIB1 is received from the base station without IFRI_SIB1, IFRI_SIB1 is considered as Allowed. If MIB reception fails, IFRI_MIB cannot be received.

If IFRI_SIB1 is Allowed, the RedCap UE may select or reselect other cells of the same frequency as the barred cell if the cell reselection criteria are fulfilled. If IFRI_SIB1 is NotAllowed, the RedCap UE does not select or reselect other cells of the same frequency as the barred cell for 300 seconds and excludes them from the candidates for cell selection/reselection. If IFRI_SIB1 is NotAllowed, the RedCap UE also sets the cell reselection priority of the barred cell's frequency to the lowest priority for 300 seconds. The RedCap UE performs cell reselection for frequencies other than the barred cell's frequency. At this time, the RedCap UE performs cell reselection by applying the cell reselection priority indicated in the system information received from an NR cell other than the first NR cell.
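The determination in Table 7 reduces to a small decision function, sketched below; the string constants and the treat_absent_as default are illustrative assumptions mirroring the table rather than normative behavior.

```python
def resolve_ifri_sib1(ifri_mib, sib1_received, ifri_sib1,
                      treat_absent_as="notAllowed"):
    """Return the effective IFRI_SIB1 ("allowed" or "notAllowed").

    ifri_mib / ifri_sib1 are None on reception failure or absence;
    sib1_received distinguishes the two cases for SIB1.
    """
    if ifri_sib1 is not None:              # received value is applied as it is
        return ifri_sib1
    if not sib1_received:                  # SIB1 reception failed
        return ifri_mib if ifri_mib is not None else "allowed"
    return treat_absent_as                 # SIB1 received without IFRI_SIB1

assert resolve_ifri_sib1(None, False, None) == "allowed"            # both failed
assert resolve_ifri_sib1("allowed", True, None) == "notAllowed"     # field absent
assert resolve_ifri_sib1("notAllowed", True, "allowed") == "allowed"
```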
A UE camped on a non-barred cell prepares to perform random access in order to carry out a necessary procedure. The UE refers to the received ServingCellConfigCommon. In step 3A-21, the RedCap UE transmits a preamble to the base station. If both prach-ConfigurationIndex_RedCap and prach-ConfigurationIndex are included in rach-ConfigCommon (or in ServingCellConfigCommon), the RedCap UE applies prach-ConfigurationIndex_RedCap to determine the radio frame, subframe, slot, symbol, and preamble format in which preamble transmission is possible. If only prach-ConfigurationIndex is included in rach-ConfigCommon (or in ServingCellConfigCommon), the RedCap UE determines the radio frame, subframe, slot, symbol, and preamble format in which preamble transmission is possible by applying prach-ConfigurationIndex. If both msg1-FrequencyStart_RedCap and msg1-FrequencyStart are included in rach-ConfigCommon (or in ServingCellConfigCommon), the RedCap UE applies msg1-FrequencyStart_RedCap to determine the frequency region in which preamble transmission is possible. If only msg1-FrequencyStart is included in rach-ConfigCommon (or in ServingCellConfigCommon), the RedCap UE applies msg1-FrequencyStart to determine the frequency region in which preamble transmission is possible. The RedCap UE selects the SSB by applying rsrp-ThresholdSSB_RedCap if both rsrp-ThresholdSSB_RedCap and rsrp-ThresholdSSB are included in rach-ConfigCommon (or in ServingCellConfigCommon). The RedCap UE selects the SSB by applying rsrp-ThresholdSSB if only rsrp-ThresholdSSB is included in rach-ConfigCommon (or in ServingCellConfigCommon). The terminal selects the SSB having the highest received signal strength among the SSBs whose received signal strength is higher than the threshold value. The UE selects a preamble/PRACH transmission opportunity (occasion) corresponding to the selected SSB and transmits the preamble. After transmitting the preamble, the UE monitors whether a random access response message is received during the random access response window and, if it is not received, retransmits the preamble. As the maximum number of preamble retransmissions, the UE applies preambleTransMax_RedCap when both preambleTransMax_RedCap and preambleTransMax are included in ServingCellConfigCommon, and applies preambleTransMax when only preambleTransMax is included. The UE applies the msg1-SubcarrierSpacing included in rach-ConfigCommon when transmitting the preamble. One ServingCellConfigCommon may thus include two prach-ConfigurationIndex values, two msg1-FrequencyStart values, two rsrp-ThresholdSSB values, two preambleTransMax values, and one msg1-SubcarrierSpacing for Msg1 transmission. One of each pair applies only to RedCap UEs, and msg1-SubcarrierSpacing is applied to both RedCap UEs and non-RedCap UEs. Msg 1 is the preamble.
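Each Msg1 parameter above is chosen by the same rule: apply the _RedCap variant when present, otherwise the common value, with msg1-SubcarrierSpacing always taken from the shared field. A minimal sketch, assuming the parameters are collected into a flat dictionary (pick_param is a hypothetical helper):

```python
def pick_param(rach_config: dict, name: str):
    """Prefer <name>_RedCap over <name>, per the rule described above."""
    value = rach_config.get(name + "_RedCap")
    return value if value is not None else rach_config[name]

cfg = {"prach-ConfigurationIndex": 27, "prach-ConfigurationIndex_RedCap": 100,
       "msg1-SubcarrierSpacing": "kHz15"}           # no RedCap variant: shared field
print(pick_param(cfg, "prach-ConfigurationIndex"))  # -> 100 (RedCap value wins)
print(pick_param(cfg, "msg1-SubcarrierSpacing"))    # -> kHz15 (common to all UEs)
```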
In step 3A-23, a random access response message is received from the base station. The random access response message includes information such as an uplink grant for Msg 3 transmission, a time domain allocation indicator, and a temporary identifier of the terminal. The random access response message is addressed by the RA-RNTI. The terminal receives the random access response message by monitoring a predetermined SS in a predetermined CORESET during the random access window time period. If ServingCellConfigCommon includes controlResourceSetZero, searchSpaceZero, ra-SearchSpace, controlResourceSetZero_RedCap, searchSpaceZero_RedCap, and ra-SearchSpace_RedCap, and ra-SearchSpace_RedCap indicates 0, the RedCap UE applies the third CORESET #0 and the third SS #0 to monitor the RA-RNTI and receive the random access response message. If only controlResourceSetZero, searchSpaceZero, and ra-SearchSpace are included in ServingCellConfigCommon, and ra-SearchSpace indicates 0, the RedCap UE applies the second CORESET #0 and the second SS #0 to monitor the RA-RNTI and receive the random access response message. If controlResourceSetZero, searchSpaceZero, ra-SearchSpace, controlResourceSetZero_RedCap, searchSpaceZero_RedCap, and ra-SearchSpace_RedCap are all included in ServingCellConfigCommon, and ra-SearchSpace_RedCap indicates a value other than 0, the RedCap UE applies the SS having the indicated identifier and the CORESET associated with that SS to monitor the RA-RNTI and receive the random access response message. If only controlResourceSetZero, searchSpaceZero, and ra-SearchSpace are included in ServingCellConfigCommon, and ra-SearchSpace indicates a value other than 0, the RedCap UE applies the SS having the indicated identifier and the CORESET associated with that SS to monitor the RA-RNTI and receive the random access response message. If both ra-ResponseWindow and ra-ResponseWindow_RedCap are included in ServingCellConfigCommon, the RedCap UE determines the length of the random access response window by applying ra-ResponseWindow_RedCap. If only ra-ResponseWindow is included in ServingCellConfigCommon, the RedCap UE determines the length of the random access response window by applying ra-ResponseWindow.

Upon receiving the random access response, the RedCap UE starts the timeAlignmentTimer and generates a MAC PDU to transmit Msg 3 to the base station. The MAC PDU includes an uplink RRC control message such as RRCRequest. In step 3A-25, the RedCap UE transmits Msg 3 to the base station and starts the contention resolution timer. If ServingCellConfigCommon contains both ra-ContentionResolutionTimer and ra-ContentionResolutionTimer_RedCap, the RedCap UE sets the contention resolution timer to ra-ContentionResolutionTimer_RedCap. If ServingCellConfigCommon contains only ra-ContentionResolutionTimer, the RedCap UE sets the contention resolution timer to ra-ContentionResolutionTimer. The Msg 3 transmission time is determined by the time domain allocation indicator of the random access response message. The RedCap UE determines the start time and transmission duration of the PUSCH on which Msg 3 is to be transmitted according to the PUSCH time domain allocation entry, indicated by the time domain allocation indicator, of a specific list from among a pusch-TimeDomainAllocationList, a second pusch-TimeDomainAllocationList, and a default list.

In step 3A-27, the RedCap UE receives Msg 4 from the base station. Msg 4 includes a downlink RRC control message such as RRCSetup. The RedCap UE determines the transmission resource for transmitting the HARQ ACK for Msg 4 by selecting one of the first PUCCH common resource information (the first pucch-ResourceCommon) and the second PUCCH common resource information (the second pucch-ResourceCommon). The RedCap UE determines the nominal power offset to be applied to the HARQ ACK transmission for Msg 4 by selecting one of the nominal power offset (p0-nominal) included in the first PUCCH common configuration information (pucch-ConfigCommon), the nominal power offset (p0-nominal) included in the second PUCCH common configuration information (pucch-ConfigCommon_RedCap), and a nominal power offset fixed to a predetermined value. The RedCap UE and the base station that have transmitted and received the RRCRequest message and the RRCSetup message establish an RRC connection.
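The Msg3 timing lookup described above amounts to indexing the selected allocation list with the RAR's time domain allocation indicator and applying K2. A sketch with a made-up two-entry list (not a real pusch-TimeDomainAllocationList); startSymbolAndLength is kept as the single encoded index the text describes:

```python
def msg3_timing(alloc_list, indicator, rar_slot):
    """Resolve the Msg3 PUSCH slot and its startSymbolAndLength index."""
    entry = alloc_list[indicator]           # entry picked by the RAR indicator
    return rar_slot + entry["k2"], entry["startSymbolAndLength"]

allocations = [{"k2": 2, "startSymbolAndLength": 27},
               {"k2": 3, "startSymbolAndLength": 79}]
print(msg3_timing(allocations, 1, rar_slot=8))      # -> (11, 79)
```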
In step 3A-31, the base station and the AMF may exchange various NAS messages and control messages with the UE for which the RRC connection is configured. The RedCap UE and the base station can exchange configuration information and the like through the RRC connection, configure a bearer, and then transmit/receive data.

In ServingCellConfigCommon of SIB1, PDCCH-ConfigCommon2 is located behind PDCCH-ConfigCommon. In ServingCellConfigCommon of SIB1, PUCCH-ConfigCommon2 is located behind PUCCH-ConfigCommon. In ServingCellConfigCommon of SIB1, RACH-ConfigCommon2 is located behind RACH-ConfigCommon. In ServingCellConfigCommon of SIB1, the second downlink IBWP configuration information is located behind the first downlink IBWP configuration information. In ServingCellConfigCommon of SIB1, the second uplink IBWP configuration information is located behind the first uplink IBWP configuration information. In ServingCellConfigCommon of SIB1, controlResourceSetZero_RedCap is located behind controlResourceSetZero. In ServingCellConfigCommon of SIB1, searchSpaceZero_RedCap is located behind searchSpaceZero. In ServingCellConfigCommon of SIB1, ra-SearchSpace_RedCap is located behind ra-SearchSpace. The order of the various pieces of information is defined as described above in order to maintain backward compatibility with a terminal or a base station of a previous release.

FIG. 4 is a diagram illustrating an operation of a terminal. In step 4A-01, the reduced capability terminal receives, in the first NR cell, a master information block in which first information indicating whether or not the cell is barred is set to notBarred and which includes a first intra-frequency cell reselection information element (first IFRI) for controlling intra-frequency cell reselection and a second information element corresponding to the time/frequency where system information block 1 is scheduled. In step 4A-03, the reduced capability terminal receives system information block 1 using the second information element. In step 4A-05, the reduced capability terminal determines a second IFRI. In step 4A-07, the reduced capability terminal performs cell selection or reselection according to the determination. If the reduced capability terminal does not receive SIB1, it determines whether it can select a second NR cell of the same frequency as the first NR cell according to the first IFRI. If the second IFRI is included in SIB1, the reduced capability terminal determines whether it can select or reselect the second NR cell of the same frequency as the first NR cell according to the second IFRI. If the second IFRI is not included in SIB1, the reduced capability terminal considers that the second IFRI is set to notAllowed and excludes the second NR cell of the same frequency as the first NR cell from the cell reselection candidates. The reduced capability terminal considers the frequency of the first NR cell and the second NR cell as the lowest-priority frequency and performs cell reselection. The reduced capability terminal performs cell reselection for frequencies other than the frequency of the first NR cell and the second NR cell.

FIG. 5A is a block diagram illustrating the internal structure of a UE to which the disclosure is applied. Referring to the diagram, the UE includes a controller 5A-01, a storage unit 5A-02, a transceiver 5A-03, a main processor 5A-04, and an I/O unit 5A-05. The controller 5A-01 controls the overall operations of the UE in terms of mobile communication. For example, the controller 5A-01 receives/transmits signals through the transceiver 5A-03.
In addition, the controller 5A-01 writes and reads data in the storage unit 5A-02. To this end, the controller 5A-01 includes at least one processor. For example, the controller 5A-01 may include a communication processor (CP) that performs control for communication and an application processor (AP) that controls an upper layer, such as an application program. The controller controls the storage unit and the transceiver such that the UE operations illustrated in FIG. 3 and FIG. 4 are performed. The storage unit 5A-02 stores data for the operation of the UE, such as a basic program, application programs, and configuration information. The storage unit 5A-02 provides stored data at the request of the controller 5A-01.

The transceiver 5A-03 consists of an RF processor, a baseband processor, and a plurality of antennas. The RF processor performs functions for transmitting/receiving signals through a wireless channel, such as signal band conversion, amplification, and the like. Specifically, the RF processor up-converts a baseband signal provided from the baseband processor into an RF band signal, transmits it through an antenna, and down-converts an RF band signal received through the antenna into a baseband signal. The RF processor may include a transmission filter, a reception filter, an amplifier, a mixer, an oscillator, a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), and the like. The RF processor may perform MIMO and may receive multiple layers when performing the MIMO operation. The baseband processor performs conversion between a baseband signal and a bit string according to the physical layer specification of the system. For example, during data transmission, the baseband processor encodes and modulates a transmission bit string, thereby generating complex symbols. During data reception, the baseband processor demodulates and decodes a baseband signal provided from the RF processor, thereby restoring the received bit string.

The main processor 5A-04 controls the overall operations other than mobile communication operations. The main processor 5A-04 processes user input received from the I/O unit 5A-05, stores data in the storage unit 5A-02, controls the controller 5A-01 for required mobile communication operations, and forwards user data to the I/O unit 5A-05. The I/O unit 5A-05 consists of equipment for inputting and outputting user data, such as a microphone and a screen, and performs input and output of user data based on the main processor's instructions.

FIG. 5B is a block diagram illustrating the configuration of a base station according to the disclosure. As illustrated in the diagram, the base station includes a controller 5B-01, a storage unit 5B-02, a transceiver 5B-03, and a backhaul interface unit 5B-04. The controller 5B-01 controls the overall operations of the main base station. For example, the controller 5B-01 receives/transmits signals through the transceiver 5B-03 or through the backhaul interface unit 5B-04. In addition, the controller 5B-01 records and reads data in the storage unit 5B-02. To this end, the controller 5B-01 may include at least one processor. The controller controls the transceiver, the storage unit, and the backhaul interface such that the base station operations illustrated in FIG. 3 are performed. The storage unit 5B-02 stores data for the operation of the main base station, such as a basic program, application programs, and configuration information.
In particular, the storage unit 5B-02 may store information regarding a bearer allocated to an accessed UE, a measurement result reported from the accessed UE, and the like. In addition, the storage unit 5B-02 may store information serving as a criterion to determine whether to provide the UE with multi-connectivity or to discontinue it. The storage unit 5B-02 provides stored data at the request of the controller 5B-01. The transceiver 5B-03 consists of an RF processor, a baseband processor, and a plurality of antennas. The RF processor performs functions for transmitting/receiving signals through a wireless channel, such as signal band conversion, amplification, and the like. Specifically, the RF processor up-converts a baseband signal provided from the baseband processor into an RF band signal, transmits it through an antenna, and down-converts an RF band signal received through the antenna into a baseband signal. The RF processor may include a transmission filter, a reception filter, an amplifier, a mixer, an oscillator, a DAC, an ADC, and the like. The RF processor may perform a downlink MIMO operation by transmitting at least one layer. The baseband processor performs conversion between a baseband signal and a bit string according to the physical layer specification of the first radio access technology. For example, during data transmission, the baseband processor encodes and modulates a transmission bit string, thereby generating complex symbols. During data reception, the baseband processor demodulates and decodes a baseband signal provided from the RF processor, thereby restoring the received bit string. The backhaul interface unit 5B-04 provides an interface for communicating with other nodes inside the network. The backhaul interface unit 5B-04 converts a bit string transmitted from the base station to another node, for example, another base station or the core network, into a physical signal, and converts a physical signal received from the other node into a bit string.
62,862
11943672
DETAILED DESCRIPTION

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the description of the present invention, if it is determined that a detailed description of a related known function or configuration may unnecessarily obscure the gist of the present invention, the detailed description thereof will be omitted. The terms to be described later are terms defined in consideration of their functions in the present invention, which may vary according to the intentions or customs of users and operators. Therefore, their definitions should be made based on the content throughout this specification. The terms used in the following description for indicating access nodes, network entities, messages, interfaces between network entities, and diverse identity information are provided for convenience of explanation. Accordingly, the terms used in the following description are not limited to specific meanings but may be replaced by other terms equivalent in technical meaning. In the following descriptions, the terms and definitions given in the 3GPP standards are used for convenience of explanation. However, the present disclosure is not limited by the use of these terms and definitions, and other arbitrary terms and definitions may be employed instead. Table 1 lists the acronyms used throughout the present disclosure.

TABLE 1
5GC: 5G Core Network
ACK: Acknowledgement
AM: Acknowledged Mode
AMF: Access and Mobility Management Function
ARQ: Automatic Repeat Request
AS: Access Stratum
ASN.1: Abstract Syntax Notation One
BSR: Buffer Status Report
BWP: Bandwidth Part
CA: Carrier Aggregation
CAG: Closed Access Group
CG: Cell Group
C-RNTI: Cell RNTI
CSI: Channel State Information
DCI: Downlink Control Information
DRB: (user) Data Radio Bearer
DRX: Discontinuous Reception
HARQ: Hybrid Automatic Repeat Request
IE: Information Element
LCG: Logical Channel Group
MAC: Medium Access Control
MIB: Master Information Block
NAS: Non-Access Stratum
NG-RAN: NG Radio Access Network
NR: NR Radio Access
PBR: Prioritised Bit Rate
PCell: Primary Cell
PCI: Physical Cell Identifier
PDCCH: Physical Downlink Control Channel
PDCP: Packet Data Convergence Protocol
PDSCH: Physical Downlink Shared Channel
PDU: Protocol Data Unit
PHR: Power Headroom Report
PLMN: Public Land Mobile Network
PRACH: Physical Random Access Channel
PRB: Physical Resource Block
PSS: Primary Synchronisation Signal
PUCCH: Physical Uplink Control Channel
PUSCH: Physical Uplink Shared Channel
RACH: Random Access Channel
RAN: Radio Access Network
RAR: Random Access Response
RA-RNTI: Random Access RNTI
RAT: Radio Access Technology
RB: Radio Bearer
RLC: Radio Link Control
RNA: RAN-based Notification Area
RNAU: RAN-based Notification Area Update
RNTI: Radio Network Temporary Identifier
RRC: Radio Resource Control
RRM: Radio Resource Management
RSRP: Reference Signal Received Power
RSRQ: Reference Signal Received Quality
RSSI: Received Signal Strength Indicator
SCell: Secondary Cell
SCS: Subcarrier Spacing
SDAP: Service Data Adaptation Protocol
SDU: Service Data Unit
SFN: System Frame Number
S-GW: Serving Gateway
SI: System Information
SIB: System Information Block
SpCell: Special Cell
SRB: Signalling Radio Bearer
SRS: Sounding Reference Signal
SS: Search Space
SSB: SS/PBCH block
SSS: Secondary Synchronisation Signal
SUL: Supplementary Uplink
TM: Transparent Mode
UCI: Uplink Control Information
UE: User Equipment
UM: Unacknowledged Mode
CRP: Cell Reselection Priority

Table 2 lists the terminologies and their definitions used throughout the present disclosure.
TABLE 2
Carrier frequency: The center frequency of the cell.
Cell: A combination of downlink and optionally uplink resources. The linking between the carrier frequency of the downlink resources and the carrier frequency of the uplink resources is indicated in the system information transmitted on the downlink resources.
Cell Group: In dual connectivity, a group of serving cells associated with either the MeNB or the SeNB.
Cell reselection: A process to find a better suitable cell than the current serving cell based on the system information received in the current serving cell.
Cell selection: A process to find a suitable cell either blindly or based on stored information.
Cell Reselection Priority: The priority of a carrier frequency regarding cell reselection. System Information Block 2 and System Information Block 3 provide the CRP of the serving frequency and the CRPs of inter-frequencies, respectively. The UE considers a higher-priority frequency for cell reselection if the channel condition of that frequency is better than a specific threshold, even if the channel condition of a lower-priority frequency is better than that of the higher-priority frequency.
Dedicated signalling: Signalling sent on the DCCH logical channel between the network and a single UE.
Field: The individual contents of an information element are referred to as fields.
Frequency layer: A set of cells with the same carrier frequency.
Global cell identity: An identity uniquely identifying an NR cell. It consists of the cellIdentity and the plmn-Identity of the first PLMN-Identity in plmn-IdentityList in SIB1.
gNB: A node providing NR user plane and control plane protocol terminations towards the UE, connected via the NG interface to the 5GC.
Handover: A procedure that changes the serving cell of a UE in RRC_CONNECTED.
Information element: A structural element containing single or multiple fields is referred to as an information element.
L: The Length field in a MAC subheader; it indicates the length of the corresponding MAC SDU or of the corresponding MAC CE.
LCID: A 6-bit logical channel identity in a MAC subheader denoting which logical channel's traffic or which MAC CE is included in the MAC subPDU.
Logical channel: A logical path between an RLC entity and a MAC entity. There are multiple logical channel types depending on what type of information is transferred, e.g. CCCH (Common Control Channel), DCCH (Dedicated Control Channel), DTCH (Dedicated Traffic Channel), and PCCH (Paging Control Channel).
NR: NR radio access.
PCell: The SpCell of a master cell group.
registered PLMN: The PLMN to which the UE has registered.
selected PLMN: The PLMN which the UE has selected to perform the registration procedure.
equivalent PLMN: A PLMN which is equivalent to the registered PLMN. The UE is informed of the list of EPLMNs by the AMF during the registration procedure.
PLMN ID Check: The process that checks whether a PLMN ID is the RPLMN identity or an EPLMN identity of the UE.
Primary Cell: The MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure.
Radio Bearer: A logical path between a PDCP entity and an upper layer (i.e. SDAP entity or RRC).
RLC bearer: The RLC and MAC logical channel configuration of a radio bearer in one cell group.
RLC bearer configuration: The lower layer part of the radio bearer configuration comprising the RLC and logical channel configurations.
Serving Cell: For a UE in RRC_CONNECTED not configured with CA/DC, there is only one serving cell, comprising the primary cell.
For a UE in RRC_CONNECTED configured with CA/DC, the term "serving cells" denotes the set of cells comprising the Special Cell(s) and all secondary cells.
SpCell: The primary cell of a master or secondary cell group.
Special Cell: For Dual Connectivity operation, the term Special Cell refers to the PCell of the MCG or the PSCell of the SCG; otherwise, it refers to the PCell.
SRB: Signalling Radio Bearers (SRBs) are defined as Radio Bearers (RBs) that are used only for the transmission of RRC and NAS messages.
SRB0: SRB0 is for RRC messages using the CCCH logical channel.
SRB1: SRB1 is for RRC messages (which may include a piggybacked NAS message) as well as for NAS messages prior to the establishment of SRB2, all using the DCCH logical channel.
SRB2: SRB2 is for NAS messages and for RRC messages which include logged measurement information, all using the DCCH logical channel. SRB2 has a lower priority than SRB1 and may be configured by the network after AS security activation.
SRB3: SRB3 is for specific RRC messages when the UE is in (NG)EN-DC or NR-DC, all using the DCCH logical channel.
SRB4: SRB4 is for RRC messages which include application layer measurement reporting information, all using the DCCH logical channel.
DCCH: DCCH is a logical channel used to transfer RRC messages after RRC connection establishment.
Suitable cell: A cell on which a UE may camp. The following criteria apply: the cell is part of either the selected PLMN, the registered PLMN, or a PLMN of the equivalent PLMN list; the cell is not barred; the cell is part of at least one TA that is not part of the list of "Forbidden Tracking Areas for Roaming" (TS 22.011 [18]) and that belongs to a PLMN fulfilling the first criterion; and the cell selection criterion S is fulfilled (i.e. RSRP and RSRQ are better than specific values).

In the present invention, "trigger" or "triggered" and "initiate" or "initiated" may be used with the same meaning. In the present invention, "a terminal with reduced capability" and "RedCap UE" may be used with the same meaning.

FIG. 1A is a diagram illustrating the architecture of a 5G system and an NG-RAN to which the disclosure may be applied. The 5G system consists of an NG-RAN 1A-01 and a 5GC 1A-02. An NG-RAN node is either a gNB, providing NR user plane and control plane protocol terminations towards the UE, or an ng-eNB, providing E-UTRA user plane and control plane protocol terminations towards the UE. The gNBs 1A-05 and 1A-06 and the ng-eNBs 1A-03 and 1A-04 are interconnected with each other by means of the Xn interface. The gNBs and ng-eNBs are also connected by means of the NG interfaces to the 5GC, more specifically to the AMF (Access and Mobility Management Function) and to the UPF (User Plane Function). The AMF 1A-07 and the UPF 1A-08 may be realized as a single physical node or as separate physical nodes. A gNB 1A-05 or 1A-06 or an ng-eNB 1A-03 or 1A-04 hosts the functions listed below.
Functions for Radio Resource Management, such as Radio Bearer Control, Radio Admission Control, Connection Mobility Control, and dynamic allocation of resources to UEs in uplink, downlink, and sidelink (scheduling); IP and Ethernet header compression, uplink data decompression, and encryption of the user data stream; selection of an AMF at UE attachment when no routing to an MME can be determined from the information provided by the UE; routing of user plane data towards the UPF; scheduling and transmission of paging messages; scheduling and transmission of broadcast information (originated from the AMF or O&M); measurement and measurement reporting configuration for mobility and scheduling; session management; QoS flow management and mapping to data radio bearers; support of UEs in RRC_INACTIVE state; radio access network sharing; tight interworking between NR and E-UTRA; and support of network slicing.

The AMF 1A-07 hosts functions such as NAS signaling, NAS signaling security, AS security control, SMF selection, authentication, mobility management, and positioning management. The UPF 1A-08 hosts functions such as packet routing and forwarding, transport-level packet marking in the uplink, QoS handling in the downlink, mobility anchoring, and the like.

FIG. 1B is a diagram illustrating the wireless protocol architecture in a 5G system to which the disclosure may be applied. The user plane protocol stack consists of SDAP 1B-01 or 1B-02, PDCP 1B-03 or 1B-04, RLC 1B-05 or 1B-06, MAC 1B-07 or 1B-08, and PHY 1B-09 or 1B-10. The control plane protocol stack consists of NAS 1B-11 or 1B-12, RRC 1B-13 or 1B-14, PDCP, RLC, MAC, and PHY. Each protocol sublayer performs functions related to the operations listed in Table 3.

TABLE 3
NAS: Authentication, mobility management, security control, etc.
RRC: System information, paging, establishment, maintenance, and release of an RRC connection; security functions; establishment, configuration, maintenance, and release of Signalling Radio Bearers (SRBs) and Data Radio Bearers (DRBs); mobility; QoS management; detection of and recovery from radio link failure; NAS message transfer; etc.
SDAP: Mapping between a QoS flow and a data radio bearer; marking the QoS flow ID (QFI) in both DL and UL packets.
PDCP: Transfer of data, header compression and decompression, ciphering and deciphering, integrity protection and integrity verification, duplication, reordering and in-order delivery, out-of-order delivery, etc.
RLC: Transfer of upper layer PDUs, error correction through ARQ, segmentation and re-segmentation of RLC SDUs, reassembly of SDUs, RLC re-establishment, etc.
MAC: Mapping between logical channels and transport channels; multiplexing/demultiplexing of MAC SDUs belonging to one or different logical channels into/from transport blocks (TBs) delivered to/from the physical layer on transport channels; scheduling information reporting; priority handling between UEs; priority handling between logical channels of one UE; etc.
PHY: Channel coding, physical-layer hybrid-ARQ processing, rate matching, scrambling, modulation, layer mapping, Downlink Control Information, Uplink Control Information, etc.

A reduced capability UE, or RedCap UE, has lower performance than a general UE and is used in limited scenarios such as IoT.
Compared to a typical terminal having a bandwidth of 100 MHz, a transmission/reception speed of several Gbps, and four or more Rx processing units (Rx branches), RedCap terminals have a bandwidth of 20 MHz, a transmission/reception speed of several tens of Mbps, and two or fewer Rx processing units. The present invention provides a method and apparatus for a RedCap UE to access a cell supporting RedCap, receive system information, and perform necessary operations. In particular, the terminal applies search space 0 (Search Space 0, hereinafter SS #0) and control resource set 0 (Control Resource Set 0, hereinafter CORESET #0) in the initial bandwidth part (IBWP) to obtain system information.

FIG. 2A is a diagram illustrating an example of a bandwidth part. With Bandwidth Adaptation (BA), the receive and transmit bandwidth of a UE need not be as large as the bandwidth of the cell and can be adjusted: the width can be ordered to change (e.g. to shrink during periods of low activity to save power); the location can move in the frequency domain (e.g. to increase scheduling flexibility); and the subcarrier spacing can be ordered to change (e.g. to allow different services). A subset of the total cell bandwidth of a cell is referred to as a Bandwidth Part (BWP), and BA is achieved by configuring the UE with BWP(s) and telling the UE which of the configured BWPs is currently the active one. FIG. 2A describes a scenario where three different BWPs are configured:
BWP1 with a width of 40 MHz and a subcarrier spacing of 15 kHz (2A-11 or 2A-19);
BWP2 with a width of 10 MHz and a subcarrier spacing of 15 kHz (2A-13 or 2A-17);
BWP3 with a width of 20 MHz and a subcarrier spacing of 60 kHz (2A-15).

FIG. 2B is a diagram illustrating an example of a search space and a control resource set. A plurality of SSs may be configured in one BWP. The UE monitors PDCCH candidates according to the SS configuration of the currently active BWP. One SS consists of an SS identifier, a CORESET identifier indicating the associated CORESET, the period and offset of the slots to be monitored, the duration in slots, the symbols to be monitored within a slot, the SS type, and the like. This information may be explicitly and individually configured, or may be configured by a predetermined index referring to predetermined values. One CORESET consists of a CORESET identifier, frequency domain resource information, the duration in symbols, TCI state information, and the like. Basically, the CORESET provides the frequency domain information to be monitored by the UE, and the SS provides the time domain information to be monitored by the UE (see the sketch below). CORESET #0 and SS #0 may be configured in the IBWP. One CORESET and a plurality of SSs may additionally be configured in the IBWP. Upon receiving the MIB 2B-01, the UE recognizes CORESET #0 2B-02 and SS #0 2B-03 for receiving SIB1 using predetermined information included in the MIB. The UE receives SIB1 2B-05 through CORESET #0 2B-02 and SS #0 2B-03. SIB1 may include information constituting CORESET #0 2B-06 and SS #0 2B-07 and information constituting another CORESET, for example CORESET #n 2B-11 and SS #m 2B-13. The terminal receives necessary information from the base station before entering the RRC_CONNECTED state, such as SIB2, paging, and the random access response message, by using the CORESETs and SSs configured in SIB1.
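As a toy rendering of that split, the sketch below keeps the frequency-domain information in the CORESET and the time-domain monitoring pattern in the SS; all field names and the slot-level periodicity check are illustrative simplifications.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Coreset:
    coreset_id: int
    frequency_domain_resources: int    # e.g. a bitmap of 6-PRB groups
    duration_symbols: int              # CORESET duration in OFDM symbols

@dataclass
class SearchSpace:
    ss_id: int
    coreset_id: int                    # identifier of the associated CORESET
    period_slots: int                  # monitoring periodicity in slots
    offset_slots: int                  # slot offset within the period
    first_symbols: List[int] = field(default_factory=lambda: [0])

def monitored_in_slot(ss: SearchSpace, slot: int) -> bool:
    """True if PDCCH candidates of this SS are monitored in the given slot."""
    return slot % ss.period_slots == ss.offset_slots
```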
CORESET #0 2B-02 configured in the MIB and CORESET #0 2B-06 configured in SIB1 may be different from each other; the former is called the first CORESET #0 and the latter the second CORESET #0. SS #0 2B-03 configured in the MIB and SS #0 2B-07 configured in SIB1 may be different from each other; the former is referred to as the first SS #0 and the latter the second SS #0. The SS #0 and CORESET #0 configured for the RedCap terminal are referred to as the third SS #0 and the third CORESET #0. The first SS #0, the second SS #0, and the third SS #0 may be the same as or different from each other. The first CORESET #0, the second CORESET #0, and the third CORESET #0 may be the same as or different from each other. SS #0 and CORESET #0 are each indicated by a 4-bit index. The 4-bit index indicates a configuration predetermined in the standard specification. Except for SS #0 and CORESET #0, the detailed configurations of the remaining SSs and CORESETs are indicated by individual information elements. When the RRC connection is established, additional BWPs may be configured for the UE.

FIG. 3 is a diagram illustrating operations of a terminal and a base station according to an embodiment of the present disclosure. In a network consisting of a RedCap UE 3A-01, a base station 3A-03, and an AMF 3A-05, the RedCap UE receives system information, determines whether a cell is barred, performs cell reselection, monitors paging messages, selects and applies cell common configuration information, and transmits and receives RRC control messages. In step 3A-11, the RedCap UE camps on a cell managed by the base station by performing cell selection or cell reselection. The RedCap UE selects a cell having a good reception signal from among the cells of the highest-priority frequency, in consideration of the cell reselection priority and the like. In step 3A-13, the RedCap UE receives the MIB in the selected cell. The MIB includes controlResourceSetZero, a 4-bit index indicating the configuration of the first CORESET #0, and searchSpaceZero, a 4-bit index indicating the configuration of the first SS #0. The UE receives SIB1 by applying the frequency domain and time pattern indicated by the first CORESET #0 and the first SS #0. The MIB includes cellBarred, 1-bit information indicating whether or not the cell is barred; cellBarred indicates either barred or notBarred. The UE uses cellBarred to determine whether the cell is barred. The MIB includes a first intraFreqReselection, which is 1-bit information for controlling intra-frequency cell reselection. The first intraFreqReselection is defined as Enumerated {allowed, notAllowed}. It is also called IFRI_MIB. In step 3A-15, the RedCap UE receives SIB1 and stores the acquired SIB1. SIB1 includes ServingCellConfigCommon, which is the common configuration information of the serving cell, and a second intraFreqReselection. The second intraFreqReselection is defined as Enumerated with one of allowed and notAllowed. It is also called IFRI_SIB1. In step 3A-16, the RedCap UE selects one of a plurality of pieces of common configuration information included in ServingCellConfigCommon. The servingCellConfigCommon of SIB1 includes the following information.

TABLE 4
DownlinkConfigCommon: The common downlink configuration of the serving cell. It consists of subfields such as frequencyInfoDL, initialDownlinkBWP, bcch-Config, and pcch-Config.
frequencyInfoDL: A basic parameter of the downlink carrier.
It consists of subfields such as a frequency band list and the carrier bandwidth for each SCS.
initialDownlinkBWP: The configuration of the second downlink IBWP. It consists of subfields such as BWP, PDCCH-ConfigCommon, and PDSCH-ConfigCommon. The first IBWP has a frequency domain corresponding to the first CORESET #0 of the MIB and has the subcarrier spacing indicated by the MIB. The first IBWP is the IBWP indicated by the MIB and used for receiving SIB1; the second IBWP is the IBWP indicated by SIB1 and used for receiving SIB2, paging, the random access response message, and the like.
BWP: The IE that configures the general parameters of a BWP. It consists of subfields such as locationAndBandwidth, indicating the bandwidth and location of the BWP, and subcarrierSpacing, indicating the SCS of the BWP.
PDCCH-ConfigCommon: The cell-specific PDCCH parameters of the BWP. It consists of subfields such as controlResourceSetZero, commonControlResourceSet, searchSpaceZero, commonSearchSpaceList, searchSpaceOtherSystemInformation, pagingSearchSpace, and ra-SearchSpace.
controlResourceSetZero: Defined as an integer between 0 and 15; indicates one of the predefined CORESET #0 configurations. The controlResourceSetZero included in the MIB corresponds to the first CORESET #0, and the controlResourceSetZero included in the PDCCH-ConfigCommon of the servingCellConfigCommon of SIB1 corresponds to the second CORESET #0.
searchSpaceZero: Defined as an integer between 0 and 15; indicates one of the predefined SS #0 configurations. The searchSpaceZero included in the MIB corresponds to the first SS #0, and the searchSpaceZero included in the PDCCH-ConfigCommon of the servingCellConfigCommon of SIB1 corresponds to the second SS #0.
commonControlResourceSet: A common CORESET defined by the ControlResourceSet IE. It defines an additional CORESET that can be used for paging reception, random access response reception, system information reception, etc.
commonSearchSpaceList: A list of common SSs. A common SS may be used for paging reception, random access response reception, system information reception, and the like.
searchSpaceOtherSystemInformation: Defined by the SS identifier IE. If it is 0, the second SS #0 is indicated; if it is a value other than 0, one of the SSs defined in commonSearchSpaceList is indicated.
pagingSearchSpace: Defined by the SS identifier IE. If it is 0, the second SS #0 is indicated; if it is a value other than 0, one of the SSs defined in commonSearchSpaceList is indicated.
ra-SearchSpace: Defined by the SS identifier IE. If it is 0, the second SS #0 is indicated; if it is a value other than 0, one of the SSs defined in commonSearchSpaceList is indicated.
PDSCH-ConfigCommon: The cell-specific PDSCH parameters of this BWP. It consists of a pdsch-TimeDomainAllocationList, a list composed of a plurality of pdsch-TimeDomainAllocations.
pdsch-TimeDomainAllocation: A time domain relationship between the PDCCH and the PDSCH. It consists of subfields such as K0 and startSymbolAndLength. K0 is the slot offset between the DCI and the scheduled PDSCH. startSymbolAndLength is an index indicating a valid start symbol and length combination.
pcch-Config: The configuration related to paging. It consists of subfields such as the base station paging period, PF-related parameters, and PO-related parameters.
bcch-Config: The configuration related to system information.
It consists of subfields such as modificationPeriodCoeff, which indicates the length of the modification period.
UplinkConfigCommonSIB: This is a common uplink configuration of the serving cell. It consists of subfields such as frequencyInfoUL, initialUplinkBWP, and timeAlignmentTimerCommon.
frequencyInfoUL: A basic parameter of the uplink carrier. It consists of subfields such as a frequency band list and the carrier bandwidth for each SCS.
initialUplinkBWP: The configuration of the second uplink IBWP. It consists of subfields such as BWP, rach-ConfigCommon, pusch-ConfigCommon, and pucch-ConfigCommon.
rach-ConfigCommon: The cell-specific random access parameter of the BWP. It consists of subfields such as prach-ConfigurationIndex, msg1-FrequencyStart, preambleReceivedTargetPower, ra-ResponseWindow, preambleTransMax, msg1-SubcarrierSpacing, rsrp-ThresholdSSB, and ra-ContentionResolutionTimer.
prach-ConfigurationIndex: The PRACH configuration index. One PRACH configuration corresponds to pattern information on PRACH transmission opportunities in the time domain (information indicating in which symbol of which slot of which radio frame PRACH transmission is possible), a transmission format of the preamble, and the like.
msg1-FrequencyStart: The offset from PRB0 of the lowest PRACH transmission opportunity; information indicating the PRACH transmission resource in the frequency domain. PRB0 is the lowest-frequency PRB among the PRBs of the corresponding carrier.
preambleReceivedTargetPower: The target power level at the network receiving end. It is a parameter related to transmission power control during the random access procedure.
ra-ResponseWindow: The length of the random access response window, expressed as a number of slots.
preambleTransMax: The maximum number of random access preamble transmissions.
msg1-SubcarrierSpacing: The SCS of the PRACH. It is commonly applied to general terminals and RedCap UEs.
rsrp-ThresholdSSB: The SSB selection criterion. The UE performs random access by selecting a preamble corresponding to the selected SSB.
ra-ContentionResolutionTimer: The initial value of the contention resolution timer, indicated as a number of subframes.
pusch-ConfigCommon: The cell-specific PUSCH parameters of this BWP. It consists of subfields such as pusch-TimeDomainAllocationList, a list composed of a plurality of pusch-TimeDomainAllocations.
pusch-TimeDomainAllocation: A time domain relationship between the PDCCH and the PUSCH. It consists of subfields such as K2 and startSymbolAndLength. K2 is the slot offset between the DCI and the scheduled PUSCH. startSymbolAndLength is an index indicating a valid combination of start symbol and length.
pucch-ConfigCommon: The cell-specific PUCCH parameter of the BWP. It consists of subfields such as pucch-ResourceCommon and p0-nominal.
pucch-ResourceCommon: An index corresponding to a set of cell-specific PUCCH resource parameters. One index corresponds to a PUCCH format, a PUCCH time period, a PUCCH frequency period, a PUCCH code, and the like.
p0-nominal: A power offset applied during PUCCH transmission, defined as an integer between -202 and 24 in increments of 2. The unit is dBm.
timeAlignmentTimerCommon: A timer applied when the UE performs random access for the RRC connection establishment procedure and the RRC connection re-establishment procedure. When the UE receives the RAR, it starts the timer, and it stops the timer when contention fails.
tdd-UL-DL-ConfigurationCommon: The cell-specific TDD UL/DL configuration.
It consists of subfields such as referenceSubcarrierSpacing, pattern1, and pattern2.

referenceSubcarrierSpacing: The reference SCS used to determine the time domain boundaries in the UL-DL pattern.

pattern1, pattern2: A TDD uplink-downlink pattern. It consists of subfields such as dl-UL-TransmissionPeriodicity, nrofDownlinkSlots, nrofDownlinkSymbols, nrofUplinkSlots, and nrofUplinkSymbols.

dl-UL-TransmissionPeriodicity: Indicates the period of the DL-UL pattern.

nrofDownlinkSlots: Indicates the number of consecutive full DL slots in each DL-UL pattern.

nrofDownlinkSymbols: Indicates the number of consecutive DL symbols from the beginning of the slot following the last full DL slot.

nrofUplinkSlots: Indicates the number of consecutive full UL slots in each DL-UL pattern.

nrofUplinkSymbols: Indicates the number of consecutive UL symbols at the end of the slot preceding the first full UL slot.

ServingCellConfigCommon may also include the following information for the RedCap UE.

TABLE 5

controlResourceSetZero_RedCap: Defined as an integer between 0 and 15. Indicates one of the predefined CORESET#0 configurations. It corresponds to the third CORESET#0.

searchSpaceZero_RedCap: Defined as an integer between 0 and 15. Indicates one of the predefined SS#0 configurations. It corresponds to the third SS#0.

searchSpaceOtherSystemInformation_RedCap: Defined by the SS identifier IE. If it is 0, the third SS#0 is indicated; if it is a value other than 0, one of the SSs defined in commonSearchSpaceList is indicated.

ra-SearchSpace_RedCap: Defined by the SS identifier IE. If it is 0, the third SS#0 is indicated; if it is a value other than 0, one of the SSs defined in commonSearchSpaceList is indicated.

prach-ConfigurationIndex_RedCap: The PRACH configuration index for RedCap.

msg1-FrequencyStart_RedCap: PRACH transmission resource information in the frequency domain for RedCap.

preambleReceivedTargetPower_RedCap: The target power level at the network receiving end for RedCap.

ra-ResponseWindow_RedCap: The length of the random access response window for RedCap.

preambleTransMax_RedCap: The maximum number of random access preamble transmissions for RedCap.

rsrp-ThresholdSSB_RedCap: The SSB selection criterion for RedCap.

ra-ContentionResolutionTimer_RedCap: The initial value of the contention resolution timer for RedCap.

intraFreqReselection_RedCap: Controls cell selection/reselection within the frequency for the RedCap UE when the highest-priority cell is barred. It is 1-bit information and is defined as Enumerated {Allowed, notAllowed}. Also called IFRI_SIB1.

IFRI_MIB is defined to be mandatorily present, and IFRI_SIB1 is defined to be optionally present. This is to ensure backward compatibility of SIB1.

Instead of defining IEs for RedCap UEs in units of individual IEs, it is also possible to define configuration information related to RedCap UEs in units of IE sets, as follows. ServingCellConfigCommon of SIB1 includes downlink IBWP configuration information and uplink IBWP configuration information. The downlink IBWP configuration information includes PDCCH-ConfigCommon and PDCCH-ConfigCommon2. PDCCH-ConfigCommon is used by general terminals and RedCap UEs, and PDCCH-ConfigCommon2 is used by RedCap UEs. The RedCap UE uses PDCCH-ConfigCommon when only PDCCH-ConfigCommon is included in the downlink IBWP configuration information, and uses PDCCH-ConfigCommon2 when both PDCCH-ConfigCommon and PDCCH-ConfigCommon2 are included.
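This presence-based selection between the common IE set and the RedCap-specific IE set can be sketched as a simple rule. The class and function names below (e.g., DownlinkIBWPConfig) are illustrative assumptions, not taken from the specification; the sketch only encodes the stated rule that PDCCH-ConfigCommon2, when broadcast, takes precedence for RedCap UEs:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PDCCHConfigCommon:
    # Reduced to the subfields discussed in the text.
    controlResourceSetZero: Optional[int] = None  # 0..15, predefined CORESET#0 configuration
    searchSpaceZero: Optional[int] = None         # 0..15, predefined SS#0 configuration
    ra_SearchSpace: Optional[int] = None          # SS identifier; 0 selects SS#0

@dataclass
class DownlinkIBWPConfig:
    pdcch_ConfigCommon: PDCCHConfigCommon                    # used by general and RedCap UEs
    pdcch_ConfigCommon2: Optional[PDCCHConfigCommon] = None  # optional, RedCap UEs only

def pdcch_config_for_redcap(ibwp: DownlinkIBWPConfig) -> PDCCHConfigCommon:
    """A RedCap UE uses PDCCH-ConfigCommon2 when both sets are broadcast,
    and falls back to PDCCH-ConfigCommon when only that set is present."""
    return ibwp.pdcch_ConfigCommon2 or ibwp.pdcch_ConfigCommon
```

The same pattern applies to the PUSCH and RACH IE sets introduced below.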
PDCCH-ConfigCommon includes controlResourceSetZero, commonControlResourceSet, searchSpaceZero, commonSearchSpaceList, searchSpaceOtherSystemInformation, pagingSearchSpace, and ra-SearchSpace. PDCCH-ConfigCommon2 includes controlResourceSetZero_RedCap, commonControlResourceSet_RedCap, searchSpaceZero_RedCap, commonSearchSpaceList_RedCap, and ra-SearchSpace_RedCap.

The RedCap UE uses the controlResourceSetZero and searchSpaceZero of PDCCH-ConfigCommon if controlResourceSetZero_RedCap and searchSpaceZero_RedCap are not included in PDCCH-ConfigCommon2. That is, it is considered that the same value as the second SS#0 is configured for the third SS#0, and the same value as the second CORESET#0 is configured for the third CORESET#0. The RedCap UE uses the values indicated in the MIB when controlResourceSetZero_RedCap and searchSpaceZero_RedCap are not included in PDCCH-ConfigCommon2 and controlResourceSetZero and searchSpaceZero are not included in PDCCH-ConfigCommon. That is, it is considered that the same value as the first SS#0 is configured for the third SS#0, and the same value as the first CORESET#0 is configured for the third CORESET#0. The RedCap UE uses the ra-SearchSpace of PDCCH-ConfigCommon if ra-SearchSpace_RedCap is not included in PDCCH-ConfigCommon2. That is, it is considered that the same value as ra-SearchSpace is set as ra-SearchSpace_RedCap. The RedCap UE performs the random access procedure by applying the third SS#0 and the third CORESET#0.
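The per-field defaults just described amount to a three-level cascade: the RedCap-specific (third) value when present, otherwise the SIB1 (second) value, otherwise the MIB (first) value. A minimal sketch, assuming the indices are carried as optional integers (all names illustrative):

```python
from typing import Optional

def resolve_index(redcap_value: Optional[int],
                  sib1_value: Optional[int],
                  mib_value: int) -> int:
    """Three-level fallback for a CORESET#0 / SS#0 index:
    RedCap-specific (third) -> SIB1 common (second) -> MIB (first)."""
    if redcap_value is not None:
        return redcap_value
    if sib1_value is not None:
        return sib1_value
    return mib_value

# Example: SIB1 carries common values but no RedCap-specific ones, so the
# third CORESET#0 / SS#0 take the same values as the second CORESET#0 / SS#0.
coreset0 = resolve_index(redcap_value=None, sib1_value=4, mib_value=0)  # -> 4
ss0 = resolve_index(redcap_value=None, sib1_value=2, mib_value=0)       # -> 2
```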
The uplink IBWP configuration information includes PUSCH-ConfigCommon and PUSCH-ConfigCommon2. PUSCH-ConfigCommon is used by general UEs and RedCap UEs, and PUSCH-ConfigCommon2 is used by RedCap UEs. The RedCap UE uses PUSCH-ConfigCommon when only PUSCH-ConfigCommon is included in the uplink IBWP configuration information, and uses PUSCH-ConfigCommon2 when both PUSCH-ConfigCommon and PUSCH-ConfigCommon2 are included. PUSCH-ConfigCommon contains the pusch-TimeDomainAllocationList. PUSCH-ConfigCommon2 contains the pusch-TimeDomainAllocationList_RedCap.

The uplink IBWP configuration information also includes RACH-ConfigCommon and RACH-ConfigCommon2. RACH-ConfigCommon is used by general terminals and RedCap UEs, and RACH-ConfigCommon2 is used by RedCap UEs. The RedCap UE uses RACH-ConfigCommon when only RACH-ConfigCommon is included in the uplink IBWP configuration information, and uses RACH-ConfigCommon2 when both RACH-ConfigCommon and RACH-ConfigCommon2 are included. RACH-ConfigCommon includes prach-ConfigurationIndex, msg1-FrequencyStart, preambleReceivedTargetPower, ra-ResponseWindow, preambleTransMax, msg1-SubcarrierSpacing, rsrp-ThresholdSSB, and ra-ContentionResolutionTimer. RACH-ConfigCommon2 includes prach-ConfigurationIndex_RedCap, msg1-FrequencyStart_RedCap, preambleReceivedTargetPower_RedCap, ra-ResponseWindow_RedCap, preambleTransMax_RedCap, rsrp-ThresholdSSB_RedCap, and ra-ContentionResolutionTimer_RedCap. The msg1-SubcarrierSpacing included in RACH-ConfigCommon is applied to both normal UEs and RedCap UEs. In other words, when applying Msg1 frequency-related information, the RedCap UE applies the msg1-FrequencyStart included in RACH-ConfigCommon2 and the msg1-SubcarrierSpacing included in RACH-ConfigCommon.

If RACH-ConfigCommon2 does not contain prach-ConfigurationIndex_RedCap, msg1-FrequencyStart_RedCap, preambleReceivedTargetPower_RedCap, ra-ResponseWindow_RedCap, preambleTransMax_RedCap, msg1-SubcarrierSpacing_RedCap, rsrp-ThresholdSSB_RedCap, and ra-ContentionResolutionTimer_RedCap, the RedCap UE uses the values of prach-ConfigurationIndex, msg1-FrequencyStart, preambleReceivedTargetPower, ra-ResponseWindow, preambleTransMax, msg1-SubcarrierSpacing, rsrp-ThresholdSSB, and ra-ContentionResolutionTimer in RACH-ConfigCommon, respectively.

In another method, the ServingCellConfigCommon of SIB1 includes first downlink IBWP configuration information, first uplink IBWP configuration information, second downlink IBWP configuration information, second uplink IBWP configuration information, and tdd-UL-DL-ConfigurationCommon. The first downlink IBWP configuration information and the first uplink IBWP configuration information are information for a terminal with general capability, and the second downlink IBWP configuration information and the second uplink IBWP configuration information are information for a RedCap UE. tdd-UL-DL-ConfigurationCommon is information that is commonly applied to a UE with general capability and a RedCap UE.

The first uplink IBWP configuration information includes pucch-ConfigCommon and timeAlignmentTimerCommon. The second uplink IBWP configuration information may include pucch-ConfigCommon_RedCap. The pucch-ConfigCommon may include a first pucch-ResourceCommon and a first p0-nominal. The pucch-ConfigCommon_RedCap may include a second pucch-ResourceCommon and a second p0-nominal. pucch-ConfigCommon is information for a normal UE. pucch-ConfigCommon_RedCap is information for a RedCap UE. timeAlignmentTimerCommon is information commonly applied to the normal UE and the RedCap UE. The RedCap UE transmits the preamble and initiates timeAlignmentTimerCommon upon reception of the RAR.

Upon receiving Msg 4, the UE transmits a HARQ ACK by applying a predetermined pucch-ResourceCommon and a predetermined p0-nominal. If both the second pucch-ResourceCommon and the first pucch-ResourceCommon exist, the time/frequency/code resource for transmitting the HARQ ACK is determined by applying the second pucch-ResourceCommon. If only the first pucch-ResourceCommon exists, the time/frequency/code resource for transmitting the HARQ ACK is determined by applying the first pucch-ResourceCommon. When both the second p0-nominal and the first p0-nominal exist, the second p0-nominal is applied to determine the power offset to be applied to the HARQ ACK. If only the first p0-nominal exists, the power offset to be applied to the HARQ ACK is determined by applying the first p0-nominal. If neither the second p0-nominal nor the first p0-nominal exists, the power offset to be applied to the HARQ ACK is determined by applying a predetermined value. The predetermined value may be, for example, 2 dBm.
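The Msg4 HARQ ACK parameters follow the same prefer-the-second-value rule, with a fixed default for the power offset when neither broadcast value exists. A sketch; the 2 dBm default is the example value from the text, and all names are illustrative:

```python
from typing import Optional, Tuple

DEFAULT_P0_NOMINAL_DBM = 2  # example "predetermined value" from the text

def harq_ack_params(first_resource: Optional[int],
                    second_resource: Optional[int],
                    first_p0: Optional[int],
                    second_p0: Optional[int]) -> Tuple[Optional[int], int]:
    """Select the pucch-ResourceCommon index and p0-nominal a RedCap UE applies
    to the Msg4 HARQ ACK: the second (RedCap) value wins when present."""
    resource = second_resource if second_resource is not None else first_resource
    if second_p0 is not None:
        p0 = second_p0
    elif first_p0 is not None:
        p0 = first_p0
    else:
        p0 = DEFAULT_P0_NOMINAL_DBM
    return resource, p0
```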
In step 3A-17, the RedCap UE determines whether the current cell is a barred cell or an allowed cell, in consideration of the MIB and SIB1. Regarding cell barring, the RedCap UE determines that the current cell is not barred if all of the following conditions are satisfied. The conditions below are defined so that the RedCap UE camps on the cell only when it can operate properly in the cell.

<Cell Allowance Conditions>

0: The cellBarred of the received MIB is set to notBarred.

1: IFRI_SIB1 exists (or is included) in the received SIB1. This is because the absence of IFRI_SIB1 means that the corresponding cell does not consider the operation of the RedCap UE, and the presence of IFRI_SIB1 means that the corresponding cell is a cell that considers the operation of the RedCap UE.

2: The UE supports one or more of the frequency bands indicated in the frequencyBandList for downlink in the received SIB1 if the current cell is a TDD cell, or one or more of the frequency bands indicated in the frequencyBandList for uplink in the received SIB1 if the current cell is an FDD cell, and those bands are not downlink-only bands.

3: The UE supports an uplink channel bandwidth with a maximum transmission bandwidth configuration fulfilling the following conditions: it is smaller than or equal to the uplink carrierBandwidth indicated in SIB1, and it is wider than or equal to the bandwidth of the initial uplink BWP.

4: The UE supports a downlink channel bandwidth with a maximum transmission bandwidth configuration fulfilling the following conditions: it is smaller than or equal to the downlink carrierBandwidth indicated in SIB1, and it is wider than or equal to the bandwidth of the initial downlink BWP.

5: A trackingAreaCode is provided in SIB1 for the selected PLMN, the registered PLMN, or a PLMN of the equivalent PLMN list.

For example, if trackingAreaCode x is included in SIB1 and the trackingAreaCode related to the registered PLMN of the terminal is also x, condition 5 is satisfied. The trackingAreaCode related to the PLMN is provided to the terminal by the AMF during the registration procedure with the terminal.
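Taken together, the conditions form one predicate that gates camping. A compact sketch follows; the patent defines no data model, so every attribute name here is an illustrative assumption:

```python
def cell_allowed_for_redcap(mib, sib1, ue) -> bool:
    """True if the RedCap UE may regard the cell as not barred (conditions 0-5)."""
    return (
        mib.cell_barred == "notBarred"                                 # condition 0
        and sib1.ifri_sib1 is not None                                 # condition 1
        and any(band in ue.supported_bands and not band.downlink_only  # condition 2
                for band in sib1.frequency_band_list)
        and sib1.initial_ul_bwp_bw <= ue.max_ul_channel_bw <= sib1.ul_carrier_bw  # condition 3
        and sib1.initial_dl_bwp_bw <= ue.max_dl_channel_bw <= sib1.dl_carrier_bw  # condition 4
        and sib1.tracking_area_code in ue.acceptable_tacs              # condition 5
    )
```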
The RedCap UE, which determines that the current cell is not barred, performs the following operation.

<Operation of Terminal after Receiving SIB1 in a Non-Barred Cell>

1: Apply the configuration included in the servingCellConfigCommon. More specifically, the UE applies the TDD-UL-DL configuration to determine the downlink slots, uplink slots, downlink symbols, and uplink symbols, applies a PDSCH configuration selected from among a plurality of PDSCH-ConfigCommon to receive a PDSCH, and applies a PUSCH configuration selected from among a plurality of PUSCH-ConfigCommon to transmit a PUSCH.

2: Apply the specified PCCH configuration. The specified PCCH configuration is no SDAP, no PDCP, and RLC TM. A paging message is received by applying the PCCH configuration.

3: If a valid SIB is stored, use the stored SIB; if a valid SIB is not stored, acquire the related system information message (SI message).

The UE also receives subsequent system information, for example, SIB2, SIB3, SIB4, etc., in the non-barred cell. SIB2 includes parameters for intra-frequency cell reselection. SIB3 includes other parameters for intra-frequency cell reselection. SIB4 contains parameters for inter-frequency cell reselection.

The RedCap UE regards the current serving cell as a barred cell in the cases listed in the table below and performs an appropriate operation according to the situation.

TABLE 6

Case 1. Situation: MIB reception failure. RedCap UE operation: The current cell is considered as a barred cell and is excluded from cell selection/cell reselection candidates for 300 seconds. It is assumed that both IFRI_MIB and IFRI_SIB1 are Allowed; that is, neighboring cells of the corresponding frequency may be included in the cell selection/cell reselection candidates.

Case 2. Situation: Successful reception of the MIB with cellBarred set to notBarred; SIB1 reception failure. RedCap UE operation: The current cell is considered as a barred cell and is excluded from cell selection/cell reselection candidates for 300 seconds. If the received IFRI_MIB is Allowed, IFRI_SIB1 is considered as Allowed, and neighboring cells of the corresponding frequency may be included in the cell selection/cell reselection candidates. If the received IFRI_MIB is NotAllowed, IFRI_SIB1 is also considered as NotAllowed, and neighboring cells of the corresponding frequency are excluded from cell selection/cell reselection candidates.

Case 3. Situation: Successful reception of the MIB with cellBarred set to Barred. RedCap UE operation: The current cell is considered a barred cell and is excluded from cell selection/cell reselection candidates for 300 seconds. If the received IFRI_MIB is Allowed, IFRI_SIB1 is considered as Allowed, and neighboring cells of the corresponding frequency may be included in the cell selection/cell reselection candidates. If the received IFRI_MIB is NotAllowed, IFRI_SIB1 is also considered as NotAllowed, and neighboring cells of the corresponding frequency are excluded from the cell selection/cell reselection candidates. The general terminal does not receive SIB1. The RedCap UE may receive SIB1 instead of referring to IFRI_MIB, and may exclude or include neighboring cells of the corresponding frequency from the cell selection/cell reselection candidates according to the received value of IFRI_SIB1.

Case 4. Situation: Successful MIB reception with cellBarred set to notBarred; SIB1 received without IFRI_SIB1. RedCap UE operation: The current cell is considered as a barred cell and is excluded from cell selection/cell reselection candidates for 300 seconds. Regardless of the value of the received IFRI_MIB, IFRI_SIB1 may be considered as NotAllowed, and neighboring cells of the corresponding frequency may be excluded from cell selection/cell reselection candidates.

Case 5. Situation: Successfully received MIB with cellBarred set to notBarred; received SIB1 with IFRI_SIB1; the bandwidth supported by the terminal is less than the bandwidth of the IBWP. RedCap UE operation: The current cell is considered a barred cell and is excluded from cell selection/cell reselection candidates for 300 seconds. According to the received IFRI_SIB1 value, neighboring cells of the corresponding frequency are included in or excluded from the cell selection/cell reselection candidates.

Case 6. Situation: Successful reception of the MIB with cellBarred set to notBarred; received SIB1 with IFRI_SIB1; the bandwidth supported by the terminal is greater than or equal to the bandwidth of the IBWP; there is no TrackingAreaCode matching the TrackingAreaCode received from SIB1. RedCap UE operation: The current cell is considered a barred cell and is excluded from cell selection/cell reselection candidates for 300 seconds. Regardless of the received IFRI values, both IFRI_MIB and IFRI_SIB1 are considered as NotAllowed, and neighboring cells of the corresponding frequency are excluded from cell selection/cell reselection candidates.

The reason why the RedCap UE operates as described above is to prevent camping on a cell that does not support the RedCap function and to appropriately control whether or not to reselect cells of the same frequency. If there is no IFRI to be referred to, as in case 1, both IFRIs may be assumed to be a predetermined value and the UE may operate accordingly. Alternatively, if reception of IFRI_SIB1 fails, as in case 2, IFRI_MIB may be referred to.
The RedCap UE may be given two IFRI parameters: IFRI_MIB and IFRI_SIB1. The RedCap UE considers the two parameters and determines whether to allow intra-frequency reselection as shown in the table below.

TABLE 7

IFRI_MIB reception failure; IFRI_SIB1 reception failure: IFRI_SIB1 is considered as Allowed.

IFRI_MIB Allowed; IFRI_SIB1 reception failure: IFRI_SIB1 is considered as Allowed (IFRI_SIB1 is considered as the same value as IFRI_MIB).

IFRI_MIB Allowed; IFRI_SIB1 not present: IFRI_SIB1 is considered as NotAllowed (it is determined that RedCap is not supported on the corresponding frequency).

IFRI_MIB Allowed; IFRI_SIB1 Allowed: IFRI_SIB1 is considered as Allowed (the received IFRI_SIB1 is applied as it is).

IFRI_MIB Allowed; IFRI_SIB1 NotAllowed: IFRI_SIB1 is considered as NotAllowed (the received IFRI_SIB1 is applied as it is).

IFRI_MIB NotAllowed; IFRI_SIB1 reception failure: IFRI_SIB1 is considered as NotAllowed (IFRI_SIB1 is considered as the same value as IFRI_MIB).

IFRI_MIB NotAllowed; IFRI_SIB1 not present: IFRI_SIB1 is considered as NotAllowed (it is determined that RedCap is not supported on the corresponding frequency).

IFRI_MIB NotAllowed; IFRI_SIB1 Allowed: IFRI_SIB1 is considered as Allowed (the received IFRI_SIB1 is applied as it is).

IFRI_MIB NotAllowed; IFRI_SIB1 NotAllowed: IFRI_SIB1 is considered as NotAllowed (the received IFRI_SIB1 is applied as it is).

The RedCap UE applies the received IFRI_SIB1 if both IFRI_MIB and IFRI_SIB1 are received. The RedCap UE considers that IFRI_SIB1 is Allowed if neither IFRI_MIB nor IFRI_SIB1 is received. If the RedCap UE receives IFRI_MIB but does not receive IFRI_SIB1, it determines IFRI_SIB1 by distinguishing whether SIB1 reception has failed or IFRI_SIB1 is not included in SIB1. If the reception of SIB1 is unsuccessful, the UE considers that IFRI_SIB1 is the same as IFRI_MIB. If SIB1 is received but IFRI_SIB1 is not included, the UE considers that IFRI_SIB1 is a predetermined value (e.g., notAllowed). This is because, since cells of the same frequency in the same region are highly likely to be configured identically, if IFRI_SIB1 is not provided in the current cell, it is highly likely that IFRI_SIB1 is not provided in other cells as well. Alternatively, if the UE is preconfigured to consider IFRI_SIB1 as Allowed when SIB1 has been received from the base station but IFRI_SIB1 is not included, IFRI_SIB1 is considered as Allowed. If MIB reception fails, IFRI_MIB cannot be received.

If IFRI_SIB1 is Allowed, the RedCap UE may select or reselect other cells of the same frequency as the barred cell if the cell reselection criteria are fulfilled. If IFRI_SIB1 is NotAllowed, the RedCap UE does not, for 300 seconds, select or reselect other cells of the same frequency as the barred cell, and excludes them from candidates for cell selection/reselection. If IFRI_SIB1 is NotAllowed, the RedCap UE sets the cell reselection priority of the frequency of the barred cell to the lowest priority for 300 seconds. The RedCap UE performs cell reselection for frequencies other than the barred cell frequency. At this time, the RedCap UE performs cell reselection by applying the cell reselection priority indicated in the system information received from an NR cell other than the first NR cell.
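The table collapses into a short decision function. A sketch, assuming IFRI values are modelled as the strings "allowed"/"notAllowed" and a reception failure as None (the modelling is illustrative):

```python
from typing import Optional

def effective_ifri_sib1(ifri_mib: Optional[str],
                        sib1_received: bool,
                        ifri_sib1: Optional[str]) -> str:
    """Resolve the IFRI_SIB1 value a RedCap UE acts on, per the table above.

    ifri_mib is None when MIB reception failed; ifri_sib1 is None either when
    SIB1 reception failed or when SIB1 arrived without the optional IE.
    """
    if ifri_sib1 is not None:
        return ifri_sib1                                  # received value applied as it is
    if not sib1_received:
        # SIB1 reception failure: inherit IFRI_MIB, defaulting to Allowed.
        return ifri_mib if ifri_mib is not None else "allowed"
    # SIB1 received without IFRI_SIB1: the frequency does not support RedCap.
    return "notAllowed"
```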
A UE camped on a non-barred cell prepares to perform random access in order to carry out a necessary procedure. The UE refers to the received ServingCellConfigCommon.

In step 3A-21, the RedCap UE transmits a preamble to the base station. If both prach-ConfigurationIndex_RedCap and prach-ConfigurationIndex are included in rach-ConfigCommon (or in ServingCellConfigCommon), the RedCap UE applies prach-ConfigurationIndex_RedCap to determine the radio frame, subframe, slot, symbol, and preamble format in which preamble transmission is possible. If only prach-ConfigurationIndex is included in rach-ConfigCommon (or in ServingCellConfigCommon), the RedCap UE determines the radio frame, subframe, slot, symbol, and preamble format in which preamble transmission is possible by applying prach-ConfigurationIndex.

If both msg1-FrequencyStart_RedCap and msg1-FrequencyStart are included in rach-ConfigCommon (or ServingCellConfigCommon), the RedCap UE applies msg1-FrequencyStart_RedCap to determine the frequency region in which preamble transmission is possible. If only msg1-FrequencyStart is included in rach-ConfigCommon (or ServingCellConfigCommon), the RedCap UE applies msg1-FrequencyStart to determine the frequency region in which preamble transmission is possible.

The RedCap UE selects an SSB by applying rsrp-ThresholdSSB_RedCap if both rsrp-ThresholdSSB_RedCap and rsrp-ThresholdSSB are included in rach-ConfigCommon (or in ServingCellConfigCommon). The RedCap UE selects an SSB by applying rsrp-ThresholdSSB if only rsrp-ThresholdSSB is included in rach-ConfigCommon (or ServingCellConfigCommon). The terminal selects the SSB having the highest received signal strength among the SSBs having a received signal strength higher than the threshold value. The UE selects a preamble/PRACH transmission opportunity (occasion) corresponding to the selected SSB and transmits the preamble.

After transmitting the preamble, the UE monitors whether a random access response message is received during the random access response window and, if it is not received, retransmits the preamble. As the maximum number of preamble retransmissions, the UE applies preambleTransMax_RedCap when both preambleTransMax_RedCap and preambleTransMax are included in ServingCellConfigCommon, and applies preambleTransMax when only preambleTransMax is included. The UE applies the msg1-SubcarrierSpacing included in rach-ConfigCommon when transmitting the preamble.

One ServingCellConfigCommon may include two prach-ConfigurationIndex, two msg1-FrequencyStart, two rsrp-ThresholdSSB, two preambleTransMax, and one msg1-SubcarrierSpacing for Msg 1 transmission. One of the two prach-ConfigurationIndex, one of the two msg1-FrequencyStart, one of the two rsrp-ThresholdSSB, and one of the two preambleTransMax apply only to RedCap UEs, and msg1-SubcarrierSpacing is applied to both RedCap UEs and non-RedCap UEs. Msg 1 is the preamble.
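Each Msg 1 parameter is again chosen per field, with msg1-SubcarrierSpacing deliberately shared between RedCap and non-RedCap UEs. A sketch; the dictionary-based override model is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass
class RachConfigCommon:
    prach_ConfigurationIndex: int
    msg1_FrequencyStart: int
    rsrp_ThresholdSSB: int
    preambleTransMax: int
    msg1_SubcarrierSpacing: str  # applied to RedCap and non-RedCap UEs alike

def msg1_params_for_redcap(common: RachConfigCommon, redcap_overrides: dict) -> dict:
    """Use a *_RedCap value when broadcast, else the common value;
    msg1-SubcarrierSpacing is never overridden."""
    def pick(name: str):
        return redcap_overrides.get(name, getattr(common, name))

    return {
        "prach_ConfigurationIndex": pick("prach_ConfigurationIndex"),
        "msg1_FrequencyStart": pick("msg1_FrequencyStart"),
        "rsrp_ThresholdSSB": pick("rsrp_ThresholdSSB"),
        "preambleTransMax": pick("preambleTransMax"),
        "msg1_SubcarrierSpacing": common.msg1_SubcarrierSpacing,
    }
```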
In step 3A-23, a random access response message is received from the base station. The random access response message includes information such as an uplink grant for Msg 3 transmission, a time domain allocation indicator, and a temporary identifier of the terminal. The random access response message is addressed by the RA-RNTI. The terminal receives the random access response message by monitoring a predetermined SS in a predetermined CORESET during the random access response window.

If ServingCellConfigCommon includes controlResourceSetZero, searchSpaceZero, ra-SearchSpace, controlResourceSetZero_RedCap, searchSpaceZero_RedCap, and ra-SearchSpace_RedCap, and ra-SearchSpace_RedCap indicates 0, the RedCap UE applies the third CORESET#0 and the third SS#0 to monitor the RA-RNTI and receive the random access response message. If only controlResourceSetZero, searchSpaceZero, and ra-SearchSpace are included in servingCellConfigCommon and ra-SearchSpace indicates 0, the RedCap UE applies the second CORESET#0 and the second SS#0 to monitor the RA-RNTI and receive the random access response message. If controlResourceSetZero, searchSpaceZero, ra-SearchSpace, controlResourceSetZero_RedCap, searchSpaceZero_RedCap, and ra-SearchSpace_RedCap are all included in servingCellConfigCommon and ra-SearchSpace_RedCap indicates a value other than 0, the RedCap UE applies the SS having the indicated identifier, and the CORESET associated with that SS, to monitor the RA-RNTI and receive the random access response message. If only controlResourceSetZero, searchSpaceZero, and ra-SearchSpace are included in servingCellConfigCommon and ra-SearchSpace indicates a value other than 0, the RedCap UE applies the SS having the indicated identifier, and the CORESET associated with that SS, to monitor the RA-RNTI and receive the random access response message.

If both ra-ResponseWindow and ra-ResponseWindow_RedCap are included in ServingCellConfigCommon, the RedCap UE determines the length of the random access response window by applying ra-ResponseWindow_RedCap. If only ra-ResponseWindow is included in ServingCellConfigCommon, the RedCap UE determines the length of the random access response window by applying ra-ResponseWindow.

Upon receiving the random access response, the RedCap UE starts the timeAlignmentTimer and generates a MAC PDU to transmit Msg 3 to the base station. The MAC PDU includes an uplink RRC control message such as RRCRequest. In step 3A-25, the RedCap UE transmits Msg 3 to the base station and starts the contention resolution timer. If servingCellConfigCommon contains both ra-ContentionResolutionTimer and ra-ContentionResolutionTimer_RedCap, the RedCap UE sets the contention resolution timer to ra-ContentionResolutionTimer_RedCap. If servingCellConfigCommon contains only ra-ContentionResolutionTimer, the RedCap UE sets the contention resolution timer to ra-ContentionResolutionTimer.

The Msg 3 transmission time is determined by the time domain allocation indicator of the random access response message. The RedCap UE determines the start time and transmission duration of the PUSCH on which Msg 3 is to be transmitted according to the PUSCH time domain allocation entry, indicated by the time domain allocation indicator, of a specific list from among the pusch-TimeDomainAllocationList, the second pusch-TimeDomainAllocationList, and a default list.
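The patent leaves startSymbolAndLength as "an index indicating a valid start symbol and length combination". In NR this index is conventionally the start-and-length indicator value (SLIV) of TS 38.214, so a decode sketch is given here as background only; it is not part of the disclosure itself:

```python
def decode_sliv(sliv: int) -> tuple[int, int]:
    """Decode an NR start-and-length indicator value for a 14-symbol slot.

    Inverse of:  SLIV = 14*(L-1) + S            if (L-1) <= 7
                 SLIV = 14*(14-L+1) + (14-1-S)  otherwise,
    where S is the start symbol and L the length, with 0 < L <= 14 - S.
    """
    s, length = sliv % 14, sliv // 14 + 1            # try the short-allocation branch
    if not (length - 1 <= 7 and 0 < length <= 14 - s):
        s, length = 13 - sliv % 14, 15 - sliv // 14  # long-allocation branch
    return s, length

print(decode_sliv(44))  # (2, 4): PUSCH starting at symbol 2, lasting 4 symbols
print(decode_sliv(27))  # (0, 14): full-slot allocation
```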
In step 3A-27, the RedCap UE receives Msg 4 from the base station. Msg 4 includes a downlink RRC control message such as RRCSetup. The RedCap UE determines the transmission resource for transmitting the HARQ ACK for Msg 4 by selecting one of the first PUCCH common resource information (pucch-ResourceCommon) and the second PUCCH common resource information. The RedCap UE determines the nominal power offset to be applied to the HARQ ACK transmission for Msg 4 by selecting one of the nominal power offset (p0-nominal) included in the first PUCCH common configuration information (pucch-ConfigCommon), the nominal power offset included in the second PUCCH common configuration information (pucch-ConfigCommon_RedCap), and a nominal power offset fixed to a predetermined value. The RedCap UE and the base station that have transmitted and received the RRCRequest message and the RRCSetup message establish an RRC connection.

The base station and the AMF may transmit/receive various NAS messages and control messages to the UE for which the RRC connection is configured in step 3A-31. The RedCap UE and the base station can exchange configuration information and the like through the RRC connection, configure a bearer, and then transmit/receive data.

In ServingCellConfigCommon of SIB1, PDCCH-ConfigCommon2 is located behind PDCCH-ConfigCommon. In ServingCellConfigCommon of SIB1, PUSCH-ConfigCommon2 is located behind PUSCH-ConfigCommon. In ServingCellConfigCommon of SIB1, RACH-ConfigCommon2 is located behind RACH-ConfigCommon. In ServingCellConfigCommon of SIB1, the second downlink IBWP configuration information is located behind the first downlink IBWP configuration information. In ServingCellConfigCommon of SIB1, the second uplink IBWP configuration information is located behind the first uplink IBWP configuration information. In ServingCellConfigCommon of SIB1, controlResourceSetZero_RedCap is located behind controlResourceSetZero. In ServingCellConfigCommon of SIB1, searchSpaceZero_RedCap is located behind searchSpaceZero. In ServingCellConfigCommon of SIB1, ra-SearchSpace_RedCap is located behind ra-SearchSpace. The order of the various pieces of information is defined as described above in order to maintain backward compatibility with terminals and base stations of a previous release.

FIG. 4 is a diagram illustrating an operation of a terminal. In step 4A-01, the reduced capability terminal receives, in a first NR cell, a master information block in which first information indicating whether or not the cell is barred is set to notBarred, the master information block including a first intra-frequency cell reselection information element for controlling intra-frequency cell reselection and a second information element corresponding to the time/frequency where system information block 1 is scheduled. In step 4A-03, the reduced capability terminal receives system information block 1 using the second information element. In step 4A-05, the reduced capability terminal determines the second IFRI and whether the first NR cell is barred. In step 4A-07, the reduced capability terminal performs cell selection or reselection according to the determination.

If the reduced capability terminal does not receive SIB1 in the first NR cell, the reduced capability terminal determines that the first NR cell is a barred cell, excludes the first NR cell from the candidates for cell selection and cell reselection for 300 seconds, considers the second IFRI to be identical to the first IFRI, and determines whether to include or exclude a second NR cell of the same frequency as the first NR cell as a candidate for cell selection/reselection. If the SIB1 received in the first NR cell does not include the second IFRI, the reduced capability terminal determines that the first NR cell is a barred cell, excludes the first NR cell from the candidates for cell selection and cell reselection for 300 seconds, considers the second IFRI to be set to notAllowed, and determines to exclude the second NR cell of the same frequency as the first NR cell as a candidate for cell selection/reselection.

In order to ensure backward compatibility with a base station or terminal of a previous release, the first IFRI IE is defined to be mandatorily present, indicating one of Allowed and notAllowed, and the second IFRI IE is defined to be optionally present, indicating one of Allowed and notAllowed.

FIG. 5A is a block diagram illustrating the internal structure of a UE to which the disclosure is applied.
Referring to the diagram, the UE includes a controller 5A-01, a storage unit 5A-02, a transceiver 5A-03, a main processor 5A-04, and an I/O unit 5A-05. The controller 5A-01 controls the overall operations of the UE in terms of mobile communication. For example, the controller 5A-01 receives/transmits signals through the transceiver 5A-03. In addition, the controller 5A-01 writes and reads data in the storage unit 5A-02. To this end, the controller 5A-01 includes at least one processor. For example, the controller 5A-01 may include a communication processor (CP) that performs control for communication and an application processor (AP) that controls an upper layer, such as an application program. The controller controls the storage unit and the transceiver such that the UE operations illustrated in FIG. 3 and FIG. 4 are performed.

The storage unit 5A-02 stores data for the operation of the UE, such as a basic program, an application program, and configuration information. The storage unit 5A-02 provides stored data at a request of the controller 5A-01.

The transceiver 5A-03 consists of an RF processor, a baseband processor, and a plurality of antennas. The RF processor performs functions for transmitting/receiving signals through a wireless channel, such as signal band conversion, amplification, and the like. Specifically, the RF processor up-converts a baseband signal provided from the baseband processor into an RF band signal, transmits it through an antenna, and down-converts an RF band signal received through the antenna into a baseband signal. The RF processor may include a transmission filter, a reception filter, an amplifier, a mixer, an oscillator, a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), and the like. The RF processor may perform MIMO and may receive multiple layers when performing the MIMO operation. The baseband processor performs a function of conversion between a baseband signal and a bit string according to the physical layer specification of the system. For example, during data transmission, the baseband processor encodes and modulates a transmission bit string, thereby generating complex symbols. In addition, during data reception, the baseband processor demodulates and decodes a baseband signal provided from the RF processor, thereby restoring a reception bit string.

The main processor 5A-04 controls the overall operations other than mobile communication operations. The main processor 5A-04 processes user input received from the I/O unit 5A-05, stores data in the storage unit 5A-02, controls the controller 5A-01 for required mobile communication operations, and forwards user data to the I/O unit 5A-05. The I/O unit 5A-05 consists of equipment for inputting and outputting user data, such as a microphone and a screen. The I/O unit 5A-05 performs inputting and outputting of user data based on the main processor's instructions.

FIG. 5B is a block diagram illustrating the configuration of a base station according to the disclosure. As illustrated in the diagram, the base station includes a controller 5B-01, a storage unit 5B-02, a transceiver 5B-03, and a backhaul interface unit 5B-04. The controller 5B-01 controls the overall operations of the main base station. For example, the controller 5B-01 receives/transmits signals through the transceiver 5B-03 or through the backhaul interface unit 5B-04. In addition, the controller 5B-01 records and reads data in the storage unit 5B-02. To this end, the controller 5B-01 may include at least one processor.
The controller controls the transceiver, the storage unit, and the backhaul interface such that the base station operations illustrated in FIG. 3 are performed. The storage unit 5B-02 stores data for the operation of the main base station, such as a basic program, an application program, and configuration information. Particularly, the storage unit 5B-02 may store information regarding a bearer allocated to an accessed UE, a measurement result reported from the accessed UE, and the like. In addition, the storage unit 5B-02 may store information serving as a criterion to determine whether to provide the UE with multi-connection or to discontinue the same. In addition, the storage unit 5B-02 provides stored data at a request of the controller 5B-01.

The transceiver 5B-03 consists of an RF processor, a baseband processor, and a plurality of antennas. The RF processor performs functions for transmitting/receiving signals through a wireless channel, such as signal band conversion, amplification, and the like. Specifically, the RF processor up-converts a baseband signal provided from the baseband processor into an RF band signal, transmits it through an antenna, and down-converts an RF band signal received through the antenna into a baseband signal. The RF processor may include a transmission filter, a reception filter, an amplifier, a mixer, an oscillator, a DAC, an ADC, and the like. The RF processor may perform a downlink MIMO operation by transmitting at least one layer. The baseband processor performs a function of conversion between a baseband signal and a bit string according to the physical layer specification of the first radio access technology. For example, during data transmission, the baseband processor encodes and modulates a transmission bit string, thereby generating complex symbols. In addition, during data reception, the baseband processor demodulates and decodes a baseband signal provided from the RF processor, thereby restoring a reception bit string.

The backhaul interface unit 5B-04 provides an interface for communicating with other nodes inside the network. The backhaul interface unit 5B-04 converts a bit string transmitted from the base station to another node, for example, another base station or a core network, into a physical signal, and converts a physical signal received from the other node into a bit string.
DETAILED DESCRIPTION Some embodiments of the present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. Indeed, various embodiments of the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout. As used herein, the terms “data,” “content,” “information,” and similar terms may be used interchangeably to refer to data capable of being transmitted, received and/or stored in accordance with embodiments of the present invention. Thus, use of any such terms should not be taken to limit the spirit and scope of embodiments of the present invention. Additionally, as used herein, the term ‘circuitry’ refers to (a) hardware-only circuit implementations (e.g., implementations in analog circuitry and/or digital circuitry); (b) combinations of circuits and computer program product(s) comprising software and/or firmware instructions stored on one or more computer readable memories that work together to cause an apparatus to perform one or more functions described herein; and (c) circuits, such as, for example, a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation even if the software or firmware is not physically present. This definition of ‘circuitry’ applies to all uses of this term herein, including in any claims. As a further example, as used herein, the term ‘circuitry’ also includes an implementation comprising one or more processors and/or portion(s) thereof and accompanying software and/or firmware. As another example, the term ‘circuitry’ as used herein also includes, for example, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, other network device (such as a core network apparatus), field programmable gate array, and/or other computing device. In both evolved packet systems (EPS) and fifth generation systems (5GS), a mobile communication network may be divided into service areas, known as tracking areas (TAs). The use of individual tracking areas is useful to determine the location of a user equipment (UE) within the communication network. A TA may correspond to a geographic paging area in the communication network. The TA may comprise one or more radio access network (RAN) nodes, such as a next generation NodeB (gNB) or an E-UTRAN Node B (eNB). Each RAN node may be associated with one or more network cells within the TA. Network entities that are part of the EPS and/or 5GS communication network, such as an access and mobility management function (AMF) and/or a mobility management entity (MME), may assign a registration area (RA) comprising one or more TAs. The size of a RA, e.g. the geographic coverage area, may be determined by a variety of factors, such as the paging load within a RA as well as consideration of the frequency of UE signaling due to the UE leaving the RAs. For example, a RA which is too large may result in the paging channel being overloaded while a RA that is too small may result in undesirable, frequent RA update messages. 
In short, the size of the RA may be a tradeoff between the magnitude of data traffic signaling on the paging channel of the one or more TAs comprising the RA and the mobility management signaling, e.g., the frequency at which the AMF and/or MME need to handle UE signaling due to the UE leaving the RA. Additionally, it may be beneficial for the RA to comprise one or more TAs adjacent to one another such that a UE may remain within the RA as it traverses through one or more TAs. However, network entities such as the AMF and MME currently lack the ability to detect the topology of a deployed TA unless provided this information through an operation, administration, maintenance (OAM) entity. Without such topology information, it may be difficult for the AMF and/or MME to allocate an RA comprising a sensible list of TAs able to balance UE signaling and paging traffic. Furthermore, in the 5GS communication network, this issue is further exacerbated by the ability to allocate smaller TAs, owing to the requirement of uniformly supporting a network slice in a TA, such that a smaller TA may be provisioned when the area of service of a network slice is limited to a small number of RAN nodes and/or cells. Therefore, it may be beneficial to provide network entities, such as an AMF and/or MME, with information such that the AMF and/or MME may build topology awareness and determine a registration area comprising a list of sensible TAs.

FIG. 1 depicts a representation of configurations for a plurality of TAs in a communication network 100 within which illustrative embodiments are to be implemented. However, it is to be appreciated that embodiments are not limited to the network configurations illustrated herein or otherwise described below. It is to be understood that the elements shown in communication system 100 are intended to represent an example embodiment of a TA and/or RA configuration, but any TA and/or RA configuration may be contemplated. In the configuration depicted in FIG. 1, three TAs 101, 102 and 103 are depicted, comprising cells 101a-d, 102a-d and 103a-d, respectively. In some embodiments, each TA of the plurality of TAs 101, 102 and 103 may correspond to a tracking area identity (TAI), which may uniquely identify the TA from other TAs. The cells 101a-d, 102a-d, and 103a-d are shown as hexagonal, thereby representing a hexagonal geographic coverage area. However, any shape and size may be contemplated. The individual cells of a TA may be associated with a RAN node, such as an eNB or gNB, and may be associated with a geographic coverage area. In some embodiments, each cell of the plurality of TA cells 101a-d, 102a-d and 103a-d maintains a distinct geographic coverage area such that one or more TA cells do not overlap with one another. In some embodiments, two or more TA cells may share the same geographic boundary such that each side of the geographic boundary corresponds to one of the two or more TA cells. In some embodiments, two or more TA cells may be positioned to define a gap between their respective geographic boundaries such that the geographic area within the gap does not correspond to any TA cell. In some embodiments, each TA cell of the plurality of TA cells 101a-d, 102a-d and 103a-d may correspond to an identifier, such as an evolved universal mobile telecommunication system terrestrial radio access network cell global identifier (ECGI), which may uniquely identify the TA cell from other TA cells.
Each TA of the plurality of TAs 101, 102, and 103 may comprise one or more RAN nodes, such as a gNB and/or eNB. These RAN nodes may be positioned anywhere within a TA and may be associated with one or more cells, such as cells 101a-d, 102a-d and/or 103a-d. The number of RAN nodes may differ between TAs, including TAs within the same RA. The RAN nodes may also comprise different RAN node types. For example, TA 101 may comprise three eNBs, TA 102 may comprise two eNBs, and TA 103 may comprise an eNB and a gNB. In some embodiments, one or more TAs may correspond to a RA. In some embodiments, the RA is assigned by an AMF and/or MME in communication with the one or more RAN nodes within the one or more TAs. In some embodiments, the RA may comprise a list of one or more TAIs uniquely identifying a corresponding TA. For example, an RA may comprise TA 101 and TA 102, each comprising cells 101a-d and 102a-d, respectively. In some embodiments, a TA border 104 may correspond to the geographic boundary between TA 101 and TA 102. However, the RA may not comprise TA 103 and, therefore, may not include cells 103a-d. A TA border 105 may correspond to the geographic boundary between the TA 101 and TA 103, and the TA border 106 may correspond to the geographic boundary between the TA 102 and TA 103. In an example embodiment, if a UE is registered with a gNB in cell 101d within TA 101 and moves to cell 103b in TA 103, the UE may initiate a tracking area update (TAU) as it is no longer within the RA. However, if the UE is registered with a gNB in cell 101b and moves to cell 102c, the UE may not need to initiate a TAU as the UE remains in the RA, moving to a new TA 102.
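The TAU trigger in this example reduces to a set-membership test on the RA's TAI list. A minimal sketch (the identifiers are illustrative):

```python
# A registration area is a list of TAIs; a tracking area update is needed only
# when the UE moves to a cell whose TAI is outside that list.
registration_area = {"TAI-101", "TAI-102"}  # RA assigned by the AMF/MME

def tau_needed(serving_cell_tai: str) -> bool:
    return serving_cell_tai not in registration_area

print(tau_needed("TAI-102"))  # False: cell 102c is still inside the RA
print(tau_needed("TAI-103"))  # True: moving to cell 103b triggers a TAU
```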
FIG. 2 shows a communication system 200 within which certain illustrative embodiments are to be implemented. However, it is to be appreciated that embodiments are not limited to the network configurations illustrated herein or otherwise described below. It is to be understood that the elements shown in communication system 200 are intended to represent the main functions provided within the system. As such, the blocks shown in FIG. 2 reference specific elements in EPC and 5G networks that provide the main functions. However, other network elements may be used to implement some or all of the main functions represented. Also, it is to be understood that not all functions of an EPC or 5G network are depicted in FIG. 2. Rather, functions that facilitate an explanation of illustrative embodiments are represented.

By way of example, the communication system 200 may be deployed within a radio access architecture. However, the system may be deployed in other applications, including within other communication networks such as, for example, long term evolution advanced (LTE Advanced, LTE-A), a universal mobile telecommunications system (UMTS) radio access network (UTRAN or E-UTRAN), wireless local area network (WLAN or WiFi), worldwide interoperability for microwave access (WiMAX), Bluetooth®, personal communications services (PCS), ZigBee®, wideband code division multiple access (WCDMA), systems using ultra-wideband (UWB) technology, sensor networks, mobile ad-hoc networks (MANETs) and Internet Protocol multimedia subsystems (IMS), or any combination thereof. Any access network eligible to access the 5G core network, such as an untrusted non-3GPP access terminated at a non-3GPP interworking function (N3IWF), a trusted non-3GPP access terminated at a trusted non-3GPP gateway function (TNGF), or a wireline access terminated at a wireline access gateway function (W-AGF), may be used instead of the NG RAN/gNB.

In the radio access architecture of FIG. 2, user equipment 201 is configured to be in a wireless connection on one or more communication channels in a cell with an access node, such as an eNB or gNB. The physical link from the user equipment 201 to an eNB or gNB is called the uplink or reverse link, and the physical link from the eNB or gNB to the UE is called the downlink or forward link. It should be appreciated that the eNBs, gNBs, or their functionalities may be implemented by using any node, host, server or access point (AP) entity suitable for such a usage. A communications system typically comprises more than one eNB or gNB, in which case the eNBs or gNBs may also be configured to communicate with one another over links, wired or wireless, designed for the purpose. These links may be used for signaling purposes. The eNB or gNB is a computing device configured to control the radio resources of the communication system to which the eNB or gNB is coupled. The eNB or gNB may also be referred to as a base station, an access point, or any other type of interfacing device, including a relay station capable of operating in a wireless environment. The eNB or gNB includes or is coupled to transceiver(s). From the transceivers of the eNB or gNB, a connection is provided to an antenna unit that establishes bi-directional radio links to UEs. As such, the transceivers of the eNB or gNB and the transceivers of the UEs may include transmitters and receivers configured to communicate via a channel.

Accordingly, as shown, communication system 200 comprises UE 201, which communicates, such as via an air interface, with a RAN node 202. The UE 201 may be a mobile station, and such a mobile station may comprise, by way of example, a mobile telephone, a computer, or any other type of communication device. In an LTE-V2X implementation, one or more UEs may be deployed in a given vehicle. The term "user equipment" as used herein is therefore intended to be construed broadly, so as to encompass a variety of different types of mobile stations, subscriber stations or, more generally, communication devices, including examples such as a combination of a data card inserted in a laptop or other equipment (e.g., a vehicle). The user equipment 201 may also refer to a portable computing device that includes wireless mobile communication devices operating with or without a subscriber identification module (SIM), including, but not limited to, the following types of devices: a mobile station (mobile phone), smartphone, personal digital assistant (PDA), handset, device using a wireless modem (alarm or measurement device, etc.), laptop and/or touch screen computer, tablet, game console, notebook, and multimedia device. It should be appreciated that a UE may also be a nearly exclusive uplink only device, of which an example is a camera or video camera loading images or video clips to a network. A UE may also be a device having the capability to operate in an IoT network, which is a scenario in which objects are provided with the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. The user equipment (or in some embodiments a layer 3 relay node) is configured to perform one or more user equipment functionalities. The user equipment may also be called a subscriber unit, mobile station, remote terminal, access terminal, user terminal or user device, just to mention but a few names or apparatuses.
In one embodiment, the UE 201 comprises a Universal Integrated Circuit Card (UICC) and Mobile Equipment (ME). The UICC is the user-dependent part of the UE and contains at least one Universal Subscriber Identity Module (USIM) and appropriate application software. The USIM securely stores the International Mobile Subscriber Identity (IMSI) number and its related key, which are used to identify and authenticate subscribers to access networks. The ME is the user-independent part of the UE and contains terminal equipment (TE) functions and various mobile termination (MT) functions.

The RAN node 202 is illustratively part of a RAN of the communication system 200. In an EPS network, the RAN node is typically implemented by an eNB, while in a 5GS network, the RAN node is typically implemented by a gNB. Such an access network may comprise, for example, an EPC or 5GS (or mixed) having a plurality of base stations and one or more associated radio network control functions. The base stations and radio network control functions may be logically separate entities, but in a given embodiment may be implemented in the same physical network element, such as, for example, a base station router or femto cellular access point.

In some example embodiments, the RAN node 202 is operatively coupled to a mobility management function 203, such as via an S1 interface or NG interface. In an EPS network, the function is typically implemented by an MME, while in a 5GS network, the function is typically implemented by an AMF. A mobility management function may be an element or function in the core network (CN) part of the communication network 200 that generates, among other network operations, a RA comprising a list of TAIs corresponding to TAs.

One example of an apparatus 300 that may be configured to function as a network entity, such as an AMF or MME, is depicted in FIG. 3. As shown in FIG. 3, the apparatus 300 includes, is associated with, or is in communication with processing circuitry 302, a memory 306 and a communication interface 304. The processing circuitry 302 may be in communication with the memory device via a bus for passing information among components of the apparatus 300. The memory device 306 may be non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory device 306 may be an electronic storage device (e.g., a computer readable storage medium) comprising gates configured to store data (e.g., bits) that may be retrievable by a machine (e.g., a computing device like the processing circuitry). The memory device 306 may be configured to store information, data, content, applications, instructions, or the like for enabling the apparatus to carry out various functions in accordance with an example embodiment of the present disclosure. For example, the memory device 306 could be configured to buffer input data for processing by the processing circuitry 302. Additionally or alternatively, the memory device 306 could be configured to store instructions for execution by the processing circuitry 302.

The apparatus 300 may, in some embodiments, be embodied in various computing devices as described above. However, in some embodiments, the apparatus may be embodied as a chip or chip set. In other words, the apparatus may comprise one or more physical packages (e.g., chips) including materials, components and/or wires on a structural assembly (e.g., a baseboard).
The structural assembly may provide physical strength, conservation of size, and/or limitation of electrical interaction for component circuitry included thereon. The apparatus may therefore, in some cases, be configured to implement an embodiment of the present invention on a single chip or as a single "system on a chip." As such, in some cases, a chip or chipset may constitute means for performing one or more operations for providing the functionalities described herein.

The processing circuitry 302 may be embodied in a number of different ways. For example, the processing circuitry 302 may be embodied as one or more of various hardware processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing element with or without an accompanying DSP, or various other circuitry including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. As such, in some embodiments, the processing circuitry may include one or more processing cores configured to perform independently. A multi-core processing circuitry may enable multiprocessing within a single physical package. Additionally or alternatively, the processing circuitry may include one or more processors configured in tandem via the bus to enable independent execution of instructions, pipelining and/or multithreading.

In an example embodiment, the processing circuitry 302 may be configured to execute instructions stored in the memory device 306 or otherwise accessible to the processing circuitry 302. Alternatively or additionally, the processing circuitry may be configured to execute hard coded functionality. As such, whether configured by hardware or software methods, or by a combination thereof, the processing circuitry may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to an embodiment of the present disclosure while configured accordingly. Thus, for example, when the processing circuitry is embodied as an ASIC, FPGA or the like, the processing circuitry may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, when the processing circuitry 302 is embodied as an executor of instructions, the instructions may specifically configure the processor to perform the algorithms and/or operations described herein when the instructions are executed. However, in some cases, the processing circuitry 302 may be a processor of a specific device (e.g., an image or video processing system) configured to employ an embodiment of the present invention by further configuration of the processing circuitry by instructions for performing the algorithms and/or operations described herein. The processing circuitry 302 may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processing circuitry.

The communication interface 304 may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data, including media content in the form of video or image files, one or more audio tracks or the like.
In this regard, the communication interface304may include, for example, an antenna (or multiple antennas) and supporting hardware and/or software for enabling communications with a wireless communication network. Additionally or alternatively, the communication interface may include the circuitry for interacting with the antenna(s) to cause transmission of signals via the antenna(s) or to handle receipt of signals received via the antenna(s). In some environments, the communication interface may alternatively or also support wired communication. As such, for example, the communication interface may include a communication modem and/or other hardware/software for supporting communication via cable, digital subscriber line (DSL), universal serial bus (USB) or other mechanisms. FIGS.4A-Billustrate interface management messages using a RAN node401and MME/AMF402. Specifically, the interface management message may correspond to a setup request as depicted inFIG.4Aor a RAN configuration update as depicted inFIG.4B. In operation1ofFIG.4A, the RAN node401may establish initial communication with a core network, such as an EPC or 5G core network, by causing the transmission of a setup request message. In some embodiments, the RAN node401may comprise an eNB or a gNB. In some embodiments, the setup request message is transmitted via the S1 or NG interface. In some embodiments, the RAN node401may be associated with one or more TAs, which may each be identified by a corresponding TAI. In some embodiments, the setup request message may comprise information describing the geographic coverage area of the TA with which the RAN node401is associated. In some embodiments, the setup request message may comprise a count of the radio access network nodes within the TA. In some embodiments, the setup request message may further comprise information describing the geographic coverage area and/or a RAN node count of one or more TAs other than the TA corresponding to the RAN node401, such as one or more adjacent TAs. In some embodiments, the MME/AMF402may receive the setup request message. In some embodiments, the MME/AMF402may use the information from the setup request message to determine a RA comprising a list of one or more TAIs. In operation2ofFIG.4A, the MME/AMF402may cause the transmission of a setup response message to the RAN node401. The setup response message may include data indicating the setup request was successful. In operation1ofFIG.4B, the RAN node403may update application level configuration data for the RAN node403and the MME/AMF404, by causing the transmission of a RAN configuration update message. The RAN configuration update message does not affect existing UE-related contexts. In some embodiments, the RAN node403may comprise an eNB or a gNB. In some embodiments, the RAN configuration update message is transmitted via the S1 or NG interface. In some embodiments, the RAN node403may be associated with one or more TAs, which may each be identified by a corresponding TAI. In some embodiments, the RAN configuration update message may comprise information describing the geographic coverage area of the TA with which the RAN node403is associated. In some embodiments, the RAN configuration update message may comprise a count of the radio access network nodes within the TA. 
In some embodiments, the RAN configuration update message may further comprise information describing the geographic coverage area and/or a RAN node count of one or more TAs other than the TA corresponding to the RAN node403, such as one or more adjacent TAs. In some embodiments, the MME/AMF404may receive the RAN configuration update message. In some embodiments, the MME/AMF404may use the information from the configuration update message to determine a RA comprising a list of one or more TAIs. In operation2ofFIG.4B, the MME/AMF404may cause the transmission of a RAN configuration update acknowledgement message to the RAN node403. In some embodiments, the RAN configuration update acknowledgement message may indicate the configuration data was successfully updated. FIG.5illustrates causing the transmission of and receiving uplink messages using a RAN node501and MME/AMF502. In some embodiments, the uplink message may correspond to an uplink NAS transport message. In operation1ofFIG.5, the RAN node501may cause the transmission of the uplink NAS transport message to MME/AMF502. In some embodiments, the RAN node501may comprise an eNB or a gNB. In some embodiments, the uplink NAS transport message is transmitted via the S1 or NG interface. In some embodiments, the RAN node501may be associated with one or more TAs, which may be identified by a corresponding TAI. In some embodiments, the uplink NAS transport message may comprise the geographic coordinates of a predefined portion, such as the center, of the one or more associated TAs and a measure of the size, such as the radius, of the one or more associated TAs. In some embodiments, the uplink NAS transport message may comprise one or more of the geographic coordinates providing the center of the one or more cells comprising the one or more associated TAs and a radius of the one or more cells associated with each of the one or more associated TAs. In some embodiments, the uplink NAS transport message may comprise topology information related to where the UE is located. In some embodiments, the uplink non-access stratum transport message comprises a TAI of the TA and the ECGI of the cell where the UE is located. In some embodiments, the MME/AMF502may receive the uplink NAS transport message. In some embodiments, the MME/AMF502may use the information from the uplink NAS transport message to determine a RA comprising a list of one or more TAIs. FIGS.6A-Cillustrate handover (HO) signal messages using a RAN node601and MME/AMF602. Specifically, the HO signal message may correspond to a handover required message as depicted inFIG.6A, a path switch request as depicted inFIG.6B, or as a message indicating a success of a handover such as a handover notify message (defined e.g. in 3GPP TS 36.413 or TS 38.413) as depicted inFIG.6C. A UE may be associated with a source RAN node601within a TA, such as TA101corresponding to a source cell101d. If the UE moves outside of the TA, such as into cell103bcorresponding to TA103, the source RAN node601may cause the transmission of a handover required message to MME/AMF602as shown in operation1ofFIG.6A. In some embodiments, the handover required message may comprise the appropriate cause value for the handover. In some embodiments, the handover required message may comprise topology information related to the TA of the source cell101dof the handover process. 
For example, the handover required message may comprise the TAI corresponding to the TA101with which the source RAN node601is associated and an ECGI of the source cell101d. In some embodiments, the handover required message may identify one or more TAs adjacent to the tracking area of the source cell, such as by their corresponding TAIs. In some embodiments, the MME/AMF602may receive the handover required message. In some embodiments, the MME/AMF602may use the information from the handover required message to determine a RA comprising a list of one or more TAIs. In some embodiments, the MME/AMF602may determine the most frequent subsequent registrations for a particular UE. In some embodiments, MME/AMF602may begin determining an RA with a single TA and then allocate additional TAs. The MME/AMF602may add TAs based at least in part on the inference that some TAs are adjacent from the collected history of TA HO signals from a UE. In operation2ofFIG.6A, the MME/AMF602may cause the transmission of a handover command message to the source RAN node601. In some embodiments, the handover command message may include data indicating the reservation of resources at a target RAN node is ready. FIG.6Bdepicts a target RAN node603within a TA to which the UE has moved. If the UE moves outside of the TA, such as from TA101binto TA102c, the target RAN node603corresponding to TA102cmay cause the transmission of a path switch request message to MME/AMF604as shown in operation1ofFIG.6B. In some embodiments, the path switch request message may comprise the appropriate cause value for the handover. In some embodiments, the path switch request message may comprise the TAI corresponding to the TA with which the target RAN node603is associated. In some embodiments, the MME/AMF604may receive the path switch request message. A UE may be associated with a target RAN node603within a TA, such as TA103corresponding to a target cell103b. This may have resulted from the UE moving from a source cell101dassociated with TA101into target cell103b. The target RAN node603may cause the transmission of a path switch request message to MME/AMF604as shown in operation1ofFIG.6B. In some embodiments, the path switch request message may comprise the appropriate cause value for the handover. In some embodiments, the path switch request message may comprise topology information related to the TA of the source cell101dof the handover process and/or topology information related to the TA of the target cell103bof the handover process. For example, the path switch request message may comprise the TAI corresponding to the TA103and an ECGI of the target cell103b, to which the UE has been handed over, as well as possibly the TAI corresponding to the TA101and an ECGI of the source cell101d, from which the UE has been handed over. In some embodiments, the path switch request message may identify one or more TAs adjacent to the tracking area of the source cell, such as by their corresponding TAIs. In some embodiments, the path switch request message may identify one or more TAs adjacent to the tracking area of the target cell, such as by their corresponding TAIs. In some embodiments, the MME/AMF604may use the information from the path switch request message to determine a RA comprising a list of one or more TAIs corresponding to one or more TAs. In some embodiments, the MME/AMF604may determine the most frequent subsequent registrations for a particular UE. In some embodiments, MME/AMF604may begin determining an RA with a single TA and then allocate additional TAs. 
The MME/AMF604may add TAs based at least in part on the inference that some TAs are adjacent from the collected history of TA HO signals from a UE. In operation2ofFIG.6B, the MME/AMF604may cause the transmission of a path switch request acknowledge message to the RAN node603. In some embodiments, the path switch request acknowledge message may include data indicating the path switch request was successful. FIG.6Cdepicts a target RAN node605within a TA to which the UE has moved. If the UE moves outside of the TA, such as from TA101binto TA102c, the target RAN node605corresponding to TA102cmay cause the transmission of a handover notify message to MME/AMF606as shown in operation1ofFIG.6C. In some embodiments, the handover notify message may comprise a notification that a UE has been identified in the target cell and the handover has been successfully completed. In some embodiments, the handover notify message may comprise the topology information of the cell and/or of a TA. In some embodiments, the handover notify message may comprise the TAI corresponding to the TA with which the target RAN node605is associated. Referring now toFIG.7, an example flowchart700implemented, for example, by an apparatus300embodied by a network entity, such as AMF and/or MME203, to determine a RA will be discussed herein. As shown in block701, the apparatus300embodied by a network entity, such as MME and/or AMF203, may include means, such as the processor302, the communication interface304or the like, for receiving one or more indications of topology information related to one or more tracking areas. In some embodiments, each tracking area is associated with one or more cells in the RAN served by each of the one or more RAN nodes202. In some embodiments, this indication may be received via a setup request, RAN configuration update, uplink NAS transport, handover request, and/or path switch request as discussed with respect toFIGS.4-6. In some embodiments, the one or more indications of topology information may comprise the geographic coverage area of each of the one or more TAs and/or the count of the cells associated with each of the one or more TAs. In some embodiments, the one or more indications of topology information may comprise one or more geographic coordinates providing the center location of the one or more TAs and a radius of the one or more TAs. In some embodiments, the one or more indications of topology information may comprise one or more geographic coordinates providing the center location of the one or more cells and a radius of the one or more cells. In some embodiments, the one or more indications of topology information may comprise topology information related to the TA where a UE is located. In some embodiments, the one or more indications of topology information may comprise a TAI of a corresponding TA and ECGI of the cell where the UE is located. In some embodiments, the indication identifies one or more tracking areas associated with a source cell of a handover process for a UE. In some embodiments, the one or more indications of topology information may comprise one or more TAs adjacent to each of the one or more TAs. In some embodiments, the one or more indications of topology information may comprise one or more TAs adjacent to a source cell of a handover process for a UE. In some embodiments, the MME and/or AMF203may generate a historical log for a UE201accessing one or more RAN nodes202. 
The historical log may comprise one or more registration procedures performed by the UE201with the one or more RAN nodes202. As shown in block702, the apparatus300embodied by the network entity, such as MME and/or AMF203, may include means, such as the processor302or the like, for determining a RA based at least in part on the indication of one or more TAs associated with the one or more RAN nodes. As described above, in some embodiments, the RA may comprise a list of one or more TAIs corresponding to one or more TAs. The list of one or more TAIs may include TAs proximately located to one another, such as within a predefined distance of one another, such that the RA comprises a list of sensible TAs. In this way, the MME and/or AMF203may determine the list of TAIs comprising a RA in a way that is topographically aware. Referring now toFIG.8, an example flowchart800implemented, for example, by an apparatus300embodied by a network entity, such as a RAN node202, to cause a mobility management function203to determine a RA will be discussed herein. As shown in block801, the apparatus300embodied by a network entity, such as a RAN node202, may include means, such as the processor302, the communication interface304or the like, for causing the transmission of an indication of one or more tracking areas to a mobility management function203. In some embodiments, this indication may be transmitted via a setup request, RAN configuration update, uplink NAS transport, handover request, and/or path switch request as discussed with respect toFIGS.4-6. In some embodiments, the one or more indications of topology information may comprise the geographic coverage area of each of the one or more TAs and/or the count of the cells associated with each of the one or more TAs. In some embodiments, the one or more indications of topology information may comprise one or more geographic coordinates providing a location, such as the center location, of the one or more TAs and a size, such as a radius, of the one or more TAs. In some embodiments, the one or more indications of topology information may comprise one or more geographic coordinates providing the center location of the one or more cells and a radius of the one or more cells. In some embodiments, the one or more indications of topology information may comprise topology information related to the TA where a UE is located. In some embodiments, the one or more indications of topology information may comprise a TAI of a corresponding TA and ECGI of the cell where the UE is located. In some embodiments, the indication identifies one or more tracking areas associated with a source cell of a handover process for a UE. In some embodiments, the one or more indications of topology information may comprise one or more TAs adjacent to each of the one or more TAs. In some embodiments, the one or more indications of topology information may comprise one or more TAs adjacent to a source cell of a handover process for a UE. As shown in block802, the apparatus300embodied by the network entity, such as RAN node202, may include means, such as the processor302or the like, for causing a mobility management function203to determine a RA based at least in part on the indication of one or more TAs. As described above, in some embodiments, the RA may comprise a list of one or more TAIs corresponding to one or more TAs. 
The list of one or more TAIs may include TAs proximately located to one another, such as within a predefined distance of one another, such that the RA comprises a list of sensible TAs. In this way, the RAN node202may provide the mobility management function, such as MME and/or AMF203, with the information required to determine the list of TAIs comprising a RA in a way that is topographically aware; a minimal sketch of such a selection is given below. As described above, a method, apparatus, and computer program product are disclosed for determining a RA. In this regard, the method, apparatus and system are configured to determine a RA comprising one or more TAIs corresponding to one or more TAs in a way that is topographically aware. By providing a network entity, such as MME and/or AMF203, with an indication of one or more TAs, the network entity, such as MME and/or AMF203, may more efficiently determine a RA comprising a sensible list of TAIs. In this way, the network entity may more efficiently balance a RA paging load with the frequency of mobility management signaling, thus leading to an overall more efficient communication network. FIGS.3-8illustrate message flows and flow charts depicting methods according to an example embodiment of the present invention. It will be understood that each block of the message flow may be implemented by various means, such as hardware, firmware, processor, circuitry, and/or other communication devices associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described above may be embodied by computer program instructions. In this regard, the computer program instructions which embody the procedures described above may be stored by a memory device306of an apparatus300employing an embodiment of the present invention and executed by a processor302. As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus (for example, hardware) to produce a machine, such that the resulting computer or other programmable apparatus implements the functions specified in the flowchart blocks. These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture the execution of which implements the function specified in the flowchart blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowchart blocks. Accordingly, blocks of the flowcharts and message flows support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will also be understood that one or more blocks of the flowcharts, and combinations of blocks in the flowcharts, can be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions. 
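As an illustration of the topographically aware RA determination of blocks701and702, the following is a minimal Python sketch. The data layout, function names, and the distance threshold are assumptions introduced here for illustration only; the embodiments described above do not prescribe any particular implementation.

    # Hypothetical sketch of a topographically aware RA determination
    # (names and the distance threshold are illustrative assumptions).
    from dataclasses import dataclass
    from math import hypot

    @dataclass
    class TrackingArea:
        tai: str                      # Tracking Area Identity
        center: tuple[float, float]   # geographic coordinates of the TA center
        radius: float                 # measure of the TA size

    def determine_ra(current: TrackingArea,
                     known_tas: list[TrackingArea],
                     max_distance: float) -> list[str]:
        """Return a RA as a list of TAIs of TAs proximately located to the
        TA where the UE is located, per blocks 701-702."""
        ra = [current.tai]  # begin with a single TA, then allocate more
        for ta in known_tas:
            if ta.tai == current.tai:
                continue
            # distance between TA centers, from the reported topology info
            d = hypot(ta.center[0] - current.center[0],
                      ta.center[1] - current.center[1])
            if d <= max_distance:  # only proximately located TAs are added
                ra.append(ta.tai)
        return ra

In such a sketch, the topology information (TA centers and radii) would be populated from the setup request, RAN configuration update, uplink NAS transport, and handover-related messages described above.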
Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations may be provided in addition to those set forth herein. Moreover, the implementations described above may be directed to various combinations and sub-combinations of the disclosed features and/or combinations and sub-combinations of several further features disclosed above. Other embodiments may be within the scope of the following claims. If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined. Although various aspects of some of the embodiments are set out in the independent claims, other aspects of some of the embodiments comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims. It is also noted herein that while the above describes example embodiments, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications that may be made without departing from the scope of some of the embodiments as defined in the appended claims. Other embodiments may be within the scope of the following claims. The term “based on” includes “based on at least.” The use of the phrase “such as” means “such as for example” unless otherwise indicated. It should therefore again be emphasized that the various embodiments described herein are presented by way of illustrative example only and should not be construed as limiting the scope of the claims. For example, alternative embodiments can utilize different communication system configurations, user equipment configurations, base station configurations, identity request processes, messaging protocols and message formats than those described above in the context of the illustrative embodiments. These and numerous other alternative embodiments within the scope of the appended claims will be readily apparent to those skilled in the art.
44,723
11943674
Like labels are used to refer to the same or similar items in the drawings.
DETAILED DESCRIPTION
For enhancing mobility robustness, 3GPP Release 16 may provide a “Conditional Handover” (CHO) as described in 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA) and NR; Multi-connectivity; Stage 2 (Release 16) as well as other versions of this document, hereinafter 3GPP TS 37.340. In a CHO, one or more handover target cells are prepared in advance based on a measurement report from a user equipment. The user equipment may access a handover target cell based on a trigger (e.g., event or radio measurement of conditions) configured by a radio access network to avoid radio link failures due to a failed handover command. In the case of dual connectivity, CHO refers to a handover from a source master (or primary) node to another, target master node or, in the case of an intra-node (same master node) handover, to a handover between a source cell and a target cell. In the case of dual connectivity, a change of a serving cell within a secondary node or from a source secondary node to a target secondary node may also be performed. This secondary cell change, if done in a conditional manner, is referred to as a Conditional Primary secondary cell (PSCell) Change (CPC). In the case of dual connectivity, if there is a CHO occurring while a CPC is occurring, this may cause issues with loss of data, connectivity, etc., absent some form of coordination. In some example embodiments, there may be provided a way to coordinate CHO and CPC in order to avoid their occurring at the same or a similar time. As noted, if a CHO is initiated while a CPC is in progress, for example (or vice versa), it may cause problems. Moreover, there may be provided improvements for CHO of a master node (MN) in the presence of a secondary node (SN). In some example embodiments, a first base station, such as a master node (which is the master node after the dual connectivity has been established), may be configured to support a conditional handover. When a second base station is being set up as a secondary node for dual connectivity, the second base station may receive information related to the conditional handover configuration of the first base station. When this information indicates that a conditional handover (CHO) may be executed, for example, the secondary base station knows that CPC is not allowed. When the information indicates that a CHO is not planned, however, the secondary base station knows that CPC is allowed. In some example embodiments, a message, such as a secondary node addition request (e.g., an “SgNB ADDITION REQUEST” or “S-NG-RAN NODE ADDITION REQUEST”) used to add a secondary node for dual connectivity, is extended to include an indication of whether the secondary node is allowed to use CPC (or, e.g., whether the secondary node is not to use CPC). Alternatively, or additionally, a message, such as the secondary node addition request (e.g., an “SgNB ADDITION REQUEST” or “S-NG-RAN NODE ADDITION REQUEST”), is extended to include an indication that an existing secondary node will stop an ongoing CPC process (which was initiated prior to receipt of the message). Alternatively, or additionally, a message, such as the secondary node addition request (“S-Node Addition Request”), is extended to include an indication that the “S-Node Addition Request” is triggered by a conditional master node handover. 
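By way of a non-authoritative illustration, the following Python sketch models such an extended addition request; the field names are hypothetical assumptions, and an actual encoding would follow the applicable 3GPP ASN.1 definitions rather than this layout.

    # Hypothetical model of an addition request extended with CHO related
    # information; field names are illustrative, not from 3GPP specifications.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SNodeAdditionRequest:
        ue_id: int
        cpc_prohibited: Optional[bool] = None  # True: secondary node must not use CPC
        stop_ongoing_cpc: bool = False         # stop a CPC initiated before this message
        triggered_by_cho: bool = False         # addition is part of a conditional MN handover

    # A master node planning a CHO prohibits CPC at the secondary node.
    request = SNodeAdditionRequest(ue_id=1, cpc_prohibited=True)

    # If the message carries no CHO related information, the secondary node
    # may decide that CPC is allowed.
    cpc_allowed = request.cpc_prohibited is not True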
In some example embodiments, a master node may decide per dual connectivity connection whether CPC or CHO is to be used. If the master node determines that CHO is more relevant (e.g., if the user equipment is closer to a cell boundary between master nodes), the master node may then instruct the future secondary node to not use CPC (e.g., an indication representative of CPC being prohibited, such as a “CPC Prohibited”). If the master node determines that CPC is more relevant (e.g., if the user equipment is closer to a cell boundary between secondary nodes), the master node may choose to allow CPC in the future secondary node. In some embodiments, the master node (MN) may be configured to not use CHO after allowing CPC in the secondary node. In some embodiments, the extension of the message, such as the “SgNB ADDITION REQUEST” or the “S-NG-RAN NODE ADDITION REQUEST”, may be a flag or indicator that CPC is allowed in the receiving secondary node. For instance, the message may include an indicator such as a “1” to indicate that CPC is prohibited, and an indicator of “0” may indicate that CPC is allowed. In some embodiments, the secondary node receives a message, such as the S-Node Addition Request. This message includes an indication that CPC is prohibited, and thus the secondary node may not use CPC after receiving that message and/or may stop using it if already configured for the connection. In some embodiments, if the message does not include CHO related information (e.g., an indication that CPC is prohibited), the secondary node may decide that CPC is allowed. For example, the CHO related information (e.g., an indication that CPC is prohibited) may be missing from the message when the target master node is not supporting CHO or for other reasons. In some embodiments, the secondary node may reject the request message to add the secondary node (e.g., the S-Node Addition Request), if the secondary node is not willing to accept the CPC prohibition. For example, the secondary node may know (e.g., from past failures) that PSCell changes are very risky and require improvements via a CPC. When this is the case, the secondary node may reject the request message prohibiting CPC. When rejected, the secondary node may add a cause indicative of the rejection, and respond with a response message (e.g., “S-Node Addition Request Reject”) that indicates the reason for rejecting the CPC prohibition. As a consequence, the master node may re-evaluate the CPC prohibition and may send out a new S-Node Addition Request allowing the CPC. In some embodiments, if the master node changes the CPC or CHO decision after sending out the request message (e.g., the S-Node Addition Request), the master node may update the secondary node using a modification message, such as an extended “SgNB MODIFICATION REQUEST” or “S-NG-RAN NODE MODIFICATION REQUEST” which includes the noted CHO related information. If, for example, the user equipment moves from a master node boundary to a secondary node boundary, the master node may decide to remove the CPC prohibition and allow a future CPC by sending to the secondary node the modification message, such as the S-Node Modification Request indicating CPC allowed. 
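A minimal sketch of this per-connection decision, assuming simple distance measures as inputs (which the embodiments above do not mandate), might look as follows in Python.

    # Illustrative decision logic: CHO is favored when the UE is closer to a
    # master node boundary, CPC when it is closer to a secondary node boundary.
    def choose_procedure(dist_to_mn_boundary: float,
                         dist_to_sn_boundary: float) -> dict:
        if dist_to_mn_boundary < dist_to_sn_boundary:
            # CHO is more relevant: instruct the (future) secondary node
            # not to use CPC
            return {"use_cho": True, "cpc_prohibited": True}
        # CPC is more relevant: allow CPC and refrain from using CHO
        return {"use_cho": False, "cpc_prohibited": False}

    decision = choose_procedure(dist_to_mn_boundary=50.0,
                                dist_to_sn_boundary=200.0)
    # -> {'use_cho': True, 'cpc_prohibited': True}

A later change of this decision would then be conveyed to the secondary node with a modification message, as described above.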
In some embodiments, the secondary node (which received a CPC prohibit) may stop an ongoing CPC process (e.g., by releasing the CPC configurations inside the UE) when receiving a request message, such as a secondary node modification or addition request (e.g., an S-Node Modification Request or an S-Node Addition Request). This may happen in at least two example situations. First, the secondary node may stop the ongoing CPC process if the master node sends a modification request (e.g., S-Node Modification Request) prohibiting CPC in the secondary node (which has had CPC initiated previously). Here, it might be desirable that the secondary node stops the CPC. Second, if the master node initiates a conditional handover to another candidate target master node without a secondary node change, the candidate target master node may send, for example, an S-Node Addition Request to the existing secondary node to set up the new dual connectivity session. When this is the case, the CPC may have already been initiated by the secondary node (SN) (e.g., if the source master node originally allowed CPC). In this case, it may even be essential that the secondary node stops CPC, as the master node cannot initiate CHO otherwise. This situation may also be resolved by sending, for example, the S-Node Modification Request (and stopping CPC then). In some embodiments, the secondary node may stop an ongoing CPC implicitly whenever it receives a CPC prohibition, without the need for a separate indication for stopping ongoing CPC. In some embodiments, the secondary node may stop an ongoing CPC whenever the secondary node receives explicit information (e.g., a separate indication, such as a “1” in the message mapped to a CPC stop, in addition to or instead of the indication that prohibits the CPC). In some embodiments, the CHO related information may be a temporary prohibition of CPC. For example, the CHO related information may include an indication, such as a value indicating a time period for which CPC is prohibited, and after the expiry of the time period, CPC is allowed to be used again. Likewise, the CHO related information may trigger a timer, after the expiry of which CPC is allowed again. Alternatively, or additionally, the CPC prohibition may be until an event, such as when an ongoing master node conditional handover process is completed and/or the secondary node is released by the source master node. When a master node (which is configured with a secondary node as part of the dual connectivity) initiates a conditional handover (CHO) which includes a corresponding secondary node change (e.g., by sending a handover request to a target master node), the target master node may send a subsequent secondary node addition message (e.g., the S-Node Addition Request) that includes CHO related information that indicates that this secondary node addition is part of a conditional handover (CHO) of a master node. For example, the target master node may send to the secondary node the secondary node addition request message which includes the noted CHO related information. In response, the secondary node may react to this secondary node addition message (which includes information which indicates that this secondary node addition is part of a conditional handover (CHO) of a master node) by reserving fewer resources (e.g., radio resources, control channels, backhaul capacity, and/or the like) when compared to a legacy conditional master node handover with secondary node change which is not using the CHO related information. 
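The following Python sketch, offered only as an assumption-laden illustration, combines the secondary node behaviors described above: an implicit stop of an ongoing CPC upon receiving a prohibition, a timer-based temporary prohibition, and reduced resource reservation when the addition is part of a conditional master node handover. The class, its fields, and the resource figures are hypothetical.

    import time

    class SecondaryNodeState:
        """Illustrative secondary node handling of CHO related information."""
        def __init__(self) -> None:
            self.cpc_ongoing = False
            self.cpc_prohibited_until = 0.0  # epoch seconds; 0.0 = not prohibited

        def on_addition_or_modification(self, cpc_prohibited: bool,
                                        prohibit_seconds: float = float("inf"),
                                        triggered_by_cho: bool = False) -> int:
            if cpc_prohibited:
                self.cpc_ongoing = False  # implicit stop of any ongoing CPC
                self.cpc_prohibited_until = time.time() + prohibit_seconds
            # reserve fewer resources when the addition is part of a MN CHO
            return 10 if triggered_by_cho else 100  # e.g., control channel units

        def may_use_cpc(self) -> bool:
            # CPC becomes allowed again after the prohibition period expires
            return time.time() >= self.cpc_prohibited_until

An event-based release of the prohibition (e.g., upon completion of the master node CHO or release of the secondary node) could replace the timer in this sketch.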
In some embodiments, the information (which indicates that this secondary node addition is part of a conditional handover (CHO) of a master node) is combined with the information which indicates whether CPC is allowed or prohibited at the secondary node. For example, a bitmap of, for example, 2 bits may be used, so that the first bit indicates whether CPC is prohibited and the second bit indicates whether the handover procedure is part of a conditional handover (CHO) of the master node. Although some of the examples refer to specific types of messages, such as the S-Node Modification Request or an S-Node Addition Request, other types of messages may be used including messages used to dynamically add or modify nodes or cells as part of the dual connectivity process. Moreover, although some of the examples refer to an internode handover between nodes (e.g., master base station to master base station), the handover may be intranode (e.g., same master base station), in which case the same node handles the handover from a first cell serving a UE to a second cell serving the UE. Likewise, although some of the examples refer to an internode cell change (e.g., from a secondary node to another secondary node), the secondary node cell change may also be intranode, in which case the same secondary node handles the cell change. FIG.1Adepicts an example of a signaling diagram between a user equipment102, a source master node110A, a secondary node112, a target master node110B, one or more other potential target master nodes110C, a first core network node115(e.g., a serving gateway (S-GW) and/or user plane function), and a second core network node117(e.g., a mobility management entity (MME) and/or access and mobility management function), in accordance with some example embodiments. In the example ofFIG.1A, the UE is first configured with dual connectivity with source master node110A and secondary node112and then performs at least a portion of the conditional handover process with one or more target master nodes110B-C. The process atFIG.1Amay be similar in some respects to 3GPP TS 37.340, where a legacy handover of the master node occurs without a secondary node change, butFIG.1Afurther includes messages extended as noted herein to enable coordination between CPC and CHO. At 1, the source master node110A may send to at least one secondary node112a message, such as a message requesting the addition of a secondary node (e.g., the “SgNB Addition Request”). In accordance with some example embodiments, this message may include the noted CHO related information. For example, the CHO related information may include an indication of whether CPC is prohibited or allowed at the secondary node. To illustrate further, the indication may represent a flag or other indicator of CPC being prohibited (e.g., a “1” bit value) or CPC not being prohibited (e.g., a “0” bit value). In the example ofFIG.1A, the CHO related information requests that the secondary node112prohibit CPC due to the source master node110A planning to perform CHO (at, for example, 8 below). At 2, the secondary node112may send to the source master node110A an acknowledgement message (e.g., the “SgNB Addition Request Acknowledgement”), in accordance with some example embodiments. The acknowledgement message may indicate to the source master node that the CPC prohibition is accepted. As noted, however, the secondary node may reject the request to prohibit CPC. 
When this is the case, the secondary node112may not send an acknowledgement or may send a response message indicating a rejection (e.g., a cause code indicating rejection or a secondary node addition request reject message that indicates the reason for rejecting the CPC prohibition). At 8, the source master node110A may send a handover request to the target master node110B and, in particular, a conditional master node handover request without a change in the secondary node112(e.g., the secondary node112remains the same before and after the CHO to target master node110B). In some embodiments, if the secondary node112(which has received the CPC prohibition) wants to do a change of the PSCell, the secondary node112may use the prior PSCell change procedure (as described in, e.g., 3GPP TS 37.340) but not the conditional PSCell change (CPC) procedure. FIG.1Bdepicts an example of a signaling diagram, in accordance with some example embodiments.FIG.1Bis similar in some respects toFIG.1A, butFIG.1Bdepicts the case where the message at 1 allows CPC at the secondary node112. At 1, the source master node110A may send to at least one secondary node112a message, such as a request message (e.g., the “SgNB Addition Request”), including CHO related information, which in this example is an indication that CPC is allowed at the secondary node112. At 2, the secondary node112may send to the source master node110A an acknowledgement message (e.g., the “SgNB Addition Request Acknowledgement”), in accordance with some example embodiments. The message at 2 may be the same as or similar to message 2 noted above with respect toFIG.1A. At 8, the source master node110A may send a handover request to the target master node110B and, in particular, a legacy master node handover request without a change in the secondary node112(e.g., the secondary node112remains the same before and after the handover to target master node110B). The master node shall not use a conditional handover since the secondary node is allowed to trigger CPC at any time, which would collide with the CHO at the master node, so the master node can only use a legacy handover procedure. At 9, the master node110B may send to the secondary node112a message, such as a request message (e.g., the “SgNB Addition Request”), including an indication to stop CPC (e.g., not initiate a CPC or stop an ongoing CPC) at the secondary node112. At 10, the secondary node112may send to the master node110B an acknowledgement message (e.g., the “SgNB Addition Request Acknowledgement”), in accordance with some example embodiments. The message at 10 may be the same as or similar to message 2 noted above with respect toFIG.1A. FIG.1Cdepicts an example of a signaling diagram for a modification request, in accordance with some example embodiments. The process atFIG.1Cis similar in some respects to 3GPP TS 37.340, where a secondary node addition procedure, such as an SgNB modification procedure, is described, butFIG.1Cfurther includes messages extended as noted herein to enable coordination between CPC and CHO. At 1, the source master node110A may send to at least one secondary node112a message, such as a request message (e.g., the “SgNB Addition Request”), including CHO related information, which in this example is an indication that CPC is allowed at the secondary node112. Alternatively, or additionally, the CHO related information may contain an indication to stop an ongoing CPC process. 
At 2, the secondary node112may send to the source master node110A an acknowledgement message (e.g., the “SgNB Addition Request Acknowledgement”), in accordance with some example embodiments. The message at 2 may be the same as or similar to message 2 noted above with respect toFIG.1A. At 7, the source master node110A may send to the secondary node112a message, such as a request message (e.g., the “SgNB Modification Request”), including CHO related information, which in this example is an indication that CPC is prohibited at the secondary node112. In this example, the secondary node was initially allowed to perform a CPC with message 1, but this was later modified to prohibit CPC with message 7. At 8, the secondary node112may send to the source master node110A an acknowledgement message (e.g., the “SgNB Modification Request Acknowledgement”), in accordance with some example embodiments. The acknowledgement message may acknowledge the request so that the source node knows the secondary node agrees to the request to prohibit CPC, or the acknowledgement message may indicate a rejection of the request to prohibit CPC (e.g., by failing to respond within a certain time period, indicating a rejection, or indicating a cause code (e.g., a reason for the rejection)). As inFIG.1A, the master node can initiate a conditional handover since the secondary node will not use CPC. FIG.1Ddepicts an example when the addition request is used during an ongoing conditional handover process.FIG.1Dis very similar toFIG.1A. However, at 1, the source master node110A may send to the secondary node112a message (e.g., the “SgNB Addition Request”) indicating that CPC is allowed. At 8, the source master node decides to initiate a CHO although it has allowed the CPC in the secondary node. In order to avoid the extra signaling of sending a modification request as inFIG.1C, it initiates the CHO directly. As part of this CHO process, the target master node110B sends an addition request at 9 to the secondary node112. The target may extend this message with CHO related information which causes the secondary node to stop the ongoing CPC (if it was initiated previously), prohibit CPC for the future, and/or prohibit CPC until a timer expires or another event occurs (e.g., the SgNB Release Request in 18). FIG.2Adepicts an example of a process200, in accordance with some example embodiments. At205, a master node, such as source master node110A or target node110B, may determine for a dual connectivity connection whether to allow a conditional cell change of a secondary cell (e.g., a CPC) or allow a conditional handover of the master node (CHO). For example, if the UE102is closer to a master node boundary (where perhaps another candidate target master node may be better suited to serve the UE), the source master node may allow CHO but disable CPC at a secondary node. Alternatively, for example, if the UE102is closer to a secondary node boundary (where perhaps another secondary node may be better suited to serve the UE), the source master node may disable CHO but allow CPC at a secondary node. This determination may be specific to a dual connectivity connection including a master node and at least one secondary node both coupled in dual connectivity to the UE. At210, a master node, such as source master node110A or target node110B, may send to a secondary node, such as secondary node112, a message including an indication of whether CPC is prohibited. This message may be in the form of a request message or a modification message, such as the noted SgNB Addition Request and SgNB Modification Request. 
Moreover, this message may be sent during the CHO process to enable dynamic coordination of CPC and CHO. For example, the message may indicate to the secondary node that CPC is allowed (in which case a CHO should not be used), that CPC is prohibited (in which case a CHO can be used), and/or that CPC is stopped (e.g., an ongoing CPC is stopped to enable a CHO to proceed). In some embodiments, the message prohibits CPC or allows CPC temporarily. For example, the prohibition (or allowance) of CPC at a secondary node may be for a given time period, after which CPC is allowed to be used again. Likewise, the indication prohibiting CPC may trigger a timer, after the expiry of which CPC is allowed again. As noted, an event (e.g., the reception of a secondary node release message after a master node handover from a source master node) may cause the time period to expire as well. At215, the master node, such as source master node110A or target node110B, may receive from the secondary node, such as secondary node112, a message acknowledging the request message. For example, the secondary node112may send to the master node an acknowledgement message indicating that the secondary node accepts the CPC prohibition or indicating that the CPC prohibition is rejected (in which case the secondary node may use CPC despite the request to prohibit). In some embodiments, the receiving of an acknowledgment is passive in the sense that, when the other node does not send a rejection, the master node is receiving an implicit acknowledgement from the other node. At230, the master node, such as source master node110A or target node110B, may send a message to the secondary node, such as secondary node112, to modify (e.g., update, change, etc.) a CPC state at the secondary node. For example, the message may modify the CPC prohibition (which was previously requested) at the secondary node to allow the CPC. Alternatively, the message may modify the CPC allowed at the secondary node to prohibit CPC. FIG.2Bdepicts an example of a process299, in accordance with some example embodiments. At250, a secondary node, such as secondary node112, may receive from a master node, such as source master node110A or target node110B, a message including an indication of whether CPC is prohibited. This message may be the same as or similar to the message sent at210. At260, a secondary node, such as secondary node112, may send to a master node, such as source master node110A or target node110B, a message acknowledging the message received at250. For example, the secondary node112may send to the master node an acknowledgement message indicating that the secondary node accepts the CPC prohibition or indicating that the CPC prohibition is rejected (in which case the secondary node may use CPC despite the request to prohibit). In some embodiments, the sending of an acknowledgment is passive in the sense that, by not sending a rejection to the master node, the secondary node is implicitly sending the acknowledgement to the master node. At270, the secondary node may operate in accordance with the CPC prohibition, or may allow CPC, based on the request message at250. In the case of CPC prohibition, this may be temporary as noted. Moreover, the CPC prohibition enables the CHO of the master node. At280, a secondary node, such as secondary node112, may receive from a master node, such as source master node110A or target node110B, a message to modify (e.g., update, change, etc.) a CPC state at the secondary node. 
For example, the message may modify the CPC prohibition (which was previously requested) at the secondary node to allow the CPC. Alternatively, the message may modify the CPC allowed at the secondary node to prohibit CPC. The secondary node may accept or reject this modification by acknowledging, for example, the request as noted at260. The secondary node may then proceed to operate in accordance with the modified CPC state. FIG.3depicts an example of a network node300, in accordance with some example embodiments. In some example embodiments, the network node300may be implemented to provide a master node, a secondary node, a core network node, and/or the like. As noted, the network node may be implemented in a base station (e.g., an evolved node B base station, 5G base station (gNB)) or at any other network node in the access network or packet core, such as the evolved packet core or 5G core. In some example embodiments, the network node300implements the process disclosed herein to enable coordination of CPC and CHO (see, e.g.,FIGS.1A-1DandFIGS.2A-B). The network node300may include a network interface302, at least one processor320, and at least one memory304, in accordance with some example embodiments. The network interface302may include wired and/or wireless transceivers to enable access to other nodes, including base stations, a data network such as the Internet, core network nodes, and/or other nodes. The memory304may comprise volatile and/or non-volatile memory including program code, which, when executed by the at least one processor320, provides, among other things, the processes disclosed herein with respect to the network node. FIG.4illustrates a block diagram of an apparatus10, in accordance with some example embodiments. The apparatus10may represent a user equipment, such as a wireless device, although certain aspects of the apparatus10may be used to implement a network node, such as a base station or other type of network node. The apparatus10may include at least one antenna12in communication with a transmitter14and a receiver16. Alternatively, transmit and receive antennas may be separate. The apparatus10may also include a processor20configured to provide signals to and receive signals from the transmitter and receiver, respectively, and to control the functioning of the apparatus. Processor20may be configured to control the functioning of the transmitter and receiver by effecting control signaling via electrical leads to the transmitter and receiver. Likewise, processor20may be configured to control other elements of apparatus10by effecting control signaling via electrical leads connecting processor20to the other elements, such as a display or a memory. The processor20may, for example, be embodied in a variety of ways including circuitry, at least one processing core, one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits (for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), and/or the like), or some combination thereof. Accordingly, although illustrated inFIG.4as a single processor, in some example embodiments the processor20may comprise a plurality of processors or processing cores. 
The apparatus10may be capable of operating with one or more air interface standards, communication protocols, modulation types, access types, and/or the like. Signals sent and received by the processor20may include signaling information in accordance with an air interface standard of an applicable cellular system, and/or any number of different wireline or wireless networking techniques, comprising but not limited to Wi-Fi, wireless local area network (WLAN) techniques, such as Institute of Electrical and Electronics Engineers (IEEE) 802.11, 802.16, 802.3, ADSL, DOCSIS, and/or the like. In addition, these signals may include speech data, user generated data, user requested data, and/or the like. For example, the apparatus10and/or a cellular modem therein may be capable of operating in accordance with various first generation (1G) communication protocols, second generation (2G or 2.5G) communication protocols, third-generation (3G) communication protocols, fourth-generation (4G) communication protocols, fifth-generation (5G) communication protocols, Internet Protocol Multimedia Subsystem (IMS) communication protocols (for example, session initiation protocol (SIP)), and/or the like. For example, the apparatus10may be capable of operating in accordance with 2G wireless communication protocols, such as IS-136 (Time Division Multiple Access (TDMA)), Global System for Mobile communications (GSM), IS-95 (Code Division Multiple Access (CDMA)), and/or the like. In addition, for example, the apparatus10may be capable of operating in accordance with 2.5G wireless communication protocols General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), and/or the like. Further, for example, the apparatus10may be capable of operating in accordance with 3G wireless communication protocols, such as Universal Mobile Telecommunications System (UMTS), Code Division Multiple Access 2000 (CDMA2000), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), and/or the like. The apparatus10may be additionally capable of operating in accordance with 3.9G wireless communication protocols, such as Long Term Evolution (LTE), Evolved Universal Terrestrial Radio Access Network (E-UTRAN), and/or the like. Additionally, for example, the apparatus10may be capable of operating in accordance with 4G wireless communication protocols, such as LTE Advanced, 5G, and/or the like, as well as similar wireless communication protocols that may be subsequently developed. It is understood that the processor20may include circuitry for implementing audio/video and logic functions of apparatus10. For example, the processor20may comprise a digital signal processor device, a microprocessor device, an analog-to-digital converter, a digital-to-analog converter, and/or the like. Control and signal processing functions of the apparatus10may be allocated between these devices according to their respective capabilities. The processor20may additionally comprise an internal voice coder (VC)20a, an internal data modem (DM)20b, and/or the like. Further, the processor20may include functionality to operate one or more software programs, which may be stored in memory. In general, processor20and stored software instructions may be configured to cause apparatus10to perform actions. For example, processor20may be capable of operating a connectivity program, such as a web browser. 
The connectivity program may allow the apparatus10to transmit and receive web content, such as location-based content, according to a protocol, such as wireless application protocol (WAP), hypertext transfer protocol (HTTP), and/or the like. Apparatus10may also comprise a user interface including, for example, an earphone or speaker24, a ringer22, a microphone26, a display28, a user input interface, and/or the like, which may be operationally coupled to the processor20. The display28may, as noted above, include a touch sensitive display, where a user may touch and/or gesture to make selections, enter values, and/or the like. The processor20may also include user interface circuitry configured to control at least some functions of one or more elements of the user interface, such as the speaker24, the ringer22, the microphone26, the display28, and/or the like. The processor20and/or user interface circuitry comprising the processor20may be configured to control one or more functions of one or more elements of the user interface through computer program instructions, for example, software and/or firmware, stored on a memory accessible to the processor20, for example, volatile memory40, non-volatile memory42, and/or the like. The apparatus10may include a battery for powering various circuits related to the mobile terminal, for example, a circuit to provide mechanical vibration as a detectable output. The user input interface may comprise devices allowing the apparatus10to receive data, such as a keypad30(which can be a virtual keyboard presented on display28or an externally coupled keyboard) and/or other input devices. As shown inFIG.4, apparatus10may also include one or more mechanisms for sharing and/or obtaining data. For example, the apparatus10may include a short-range radio frequency (RF) transceiver and/or interrogator64, so data may be shared with and/or obtained from electronic devices in accordance with RF techniques. The apparatus10may include other short-range transceivers, such as an infrared (IR) transceiver66, a Bluetooth™ (BT) transceiver68operating using Bluetooth™ wireless technology, a wireless universal serial bus (USB) transceiver70, a Bluetooth™ Low Energy transceiver, a ZigBee transceiver, an ANT transceiver, a cellular device-to-device transceiver, a wireless local area link transceiver, and/or any other short-range radio technology. Apparatus10and, in particular, the short-range transceiver may be capable of transmitting data to and/or receiving data from electronic devices within the proximity of the apparatus, such as within 10 meters, for example. The apparatus10including the Wi-Fi or wireless local area networking modem may also be capable of transmitting and/or receiving data from electronic devices according to various wireless networking techniques, including 6LoWPAN, Wi-Fi, Wi-Fi low power, WLAN techniques such as IEEE 802.11 techniques, IEEE 802.15 techniques, IEEE 802.16 techniques, and/or the like. The apparatus10may comprise memory, such as a subscriber identity module (SIM)38, a removable user identity module (R-UIM), an eUICC, a UICC, and/or the like, which may store information elements related to a mobile subscriber. In addition to the SIM, the apparatus10may include other removable and/or fixed memory. The apparatus10may include volatile memory40and/or non-volatile memory42. For example, volatile memory40may include Random Access Memory (RAM) including dynamic and/or static RAM, on-chip or off-chip cache memory, and/or the like. 
Non-volatile memory42, which may be embedded and/or removable, may include, for example, read-only memory, flash memory, magnetic storage devices, for example, hard disks, floppy disk drives, magnetic tape, optical disc drives and/or media, non-volatile random access memory (NVRAM), and/or the like. Like volatile memory40, non-volatile memory42may include a cache area for temporary storage of data. At least part of the volatile and/or non-volatile memory may be embedded in processor20. The memories may store one or more software programs, instructions, pieces of information, data, and/or the like which may be used by the apparatus for performing operations disclosed herein. The memories may comprise an identifier, such as an international mobile equipment identification (IMEI) code, capable of uniquely identifying apparatus10. In the example embodiment, the processor20may be configured using computer code stored at memory40and/or42to control and/or provide one or more aspects disclosed herein. For example, the processor20may be configured using computer code stored at memory40and/or42to provide one or more aspects described above including aspects of the processes disclosed herein. Some of the embodiments disclosed herein may be implemented in software, hardware, application logic, or a combination of software, hardware, and application logic. The software, application logic, and/or hardware may reside on memory40, the control apparatus20, or electronic components, for example. In some example embodiments, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a “computer-readable medium” may be any non-transitory media that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer or data processor circuitry, with examples depicted atFIG.4. A computer-readable medium may comprise a non-transitory computer-readable storage medium that may be any media that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer. Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the example embodiments disclosed herein may be enhanced coordination of handovers. The subject matter described herein may be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. For example, the base stations and user equipment (or one or more components therein) and/or the processes described herein can be implemented using one or more of the following: a processor executing program code, an application-specific integrated circuit (ASIC), a digital signal processor (DSP), an embedded processor, a field programmable gate array (FPGA), and/or combinations thereof. 
These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. These computer programs (also known as programs, software, software applications, applications, components, program code, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the term “computer-readable medium” refers to any computer program product, machine-readable medium, computer-readable storage medium, apparatus and/or device (for example, magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions. Similarly, systems are also described herein that may include a processor and a memory coupled to the processor. The memory may include one or more programs that cause the processor to perform one or more of the operations described herein. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations may be provided in addition to those set forth herein. Moreover, the implementations described above may be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. Other embodiments may be within the scope of the following claims. If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined. Although various aspects of some of the embodiments are set out in the independent claims, other aspects of some of the embodiments comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims. It is also noted herein that while the above describes example embodiments, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications that may be made without departing from the scope of some of the embodiments as defined in the appended claims. The term “based on” includes “based on at least.” The use of the phrase “such as” means “such as for example” unless otherwise indicated.
40,769
11943675
DETAILED DESCRIPTION FIG.1illustrates an exemplary wireless communications system according to some embodiments. Wireless communications system100may comprise a User Equipment105(i.e., fixed or mobile wireless communication device) and one or more base stations, including a master radio resource control (RRC) network node120, and a plurality of secondary RRC network nodes110A-B. In some embodiments, the master network node120and the secondary network nodes110A-B are further in communication with a core network130. In some embodiments, the master network node120may comprise a master Evolved Node B as known in LTE networks (referred to herein as MeNB), and the secondary network nodes110A-B may comprise secondary New Radio (NR) RRC entities for the next generation/5G access technologies (referred to herein as SgNB). In other embodiments, the master network node120may comprise a master NR network node (referred to herein as MgNB) and the secondary network nodes110A-B may comprise secondary eNBs (referred to herein as SeNB). In some embodiments, the master network node120may serve the UE105as indicated by link115A. In some embodiments, a secondary network node110A-B may further provide additional resources for the UE105, such as serving cells. For example, a secondary network node110A-B may provide additional resources based on a received measurement report, traffic conditions, or bearer types. Thus, in some embodiments, UE105may be served by both a master network node120and a source secondary network node110A, as illustrated by links115A and115B. However, in some embodiments, it may be desirable to switch from the source secondary network node110A to a target secondary network node110B, in which case the UE may be served by both the master network node120and the target secondary network node110B after a secondary network node transfer, as illustrated by links115A and115C. LTE Dual Connectivity In LTE Dual Connectivity (DC), thanks to the mutual intelligibility between master and secondary network nodes (MeNB120and SeNB110A), the MeNB120is able to maintain the RRM measurement configuration of the UE105for mobility procedures. Furthermore, the MeNB120may decide to ask a SeNB110A to provide additional resources (serving cells) for a UE105e.g., based on the received measurement reports or traffic conditions or bearer types, as it is straightforward for the RRC entity located at the master network node120to interpret those. Therefore, the mobility can mainly be coordinated by the MeNB120in case of LTE DC. FIGS.2-5are prior art signaling diagrams for LTE DC based on 3GPP TS 36.300, which is incorporated by reference herein in its entirety. As illustrated inFIG.2, the SeNB Addition procedure for LTE DC is initiated by the MeNB120and is used to establish a UE context at the SeNB110A in order to provide radio resources from the SeNB110A to the UE105. This procedure is used to add at least the first cell, i.e., PSCell of the Secondary Cell Group (SCG) in case of LTE DC. As shown inFIG.2, the MeNB120may transmit a first message201, which is a SeNB Addition Request (carrying SCG-ConfigInfo) message. The SCG-ConfigInfo may include the MeNB120configuration and the entire UE105capabilities for UE capability coordination to be used as a basis for the reconfiguration by the SeNB. Next, the SeNB110A may transmit a second message203, which is a SeNB Addition Request Acknowledge (carrying SCG-Config) message. 
The SCG-Config may include the new radio resource of SCG, including radio configuration information and data forwarding address information (if applicable). Next, to perform the handover, the MeNB120may transmit a third message205to the UE105, which is a RRCConnectionReconfiguration message. Next, the UE105may transmit a fourth message207back to the MeNB120, the fourth message comprising a RRCConnectionReconfigurationComplete message. Finally, the MeNB120may transmit a fifth message209to the SeNB110A comprising a Reconfiguration Complete message. FIGS.3-4illustrate a SeNB110A release procedure for LTE DC. The SeNB Release procedure may be initiated either by the MeNB120or the SeNB110A and is used to initiate the release of the UE context at the SeNB. The recipient node of this request cannot reject. The SeNB Release procedure does not necessarily need to involve signaling towards the UE, e.g., RRC connection re-establishment due to Radio Link Failure in MeNB120.FIG.3illustrates a release procedure initiated by the MeNB120, andFIG.4illustrates a release procedure initiated by the SeNB110A. As shown inFIG.3, the MeNB120initiates the release procedure of the SeNB110A by transmitting a first message301to the SeNB110A, the first message being a SeNB Release Request. The SeNB Release Request may trigger the source SeNB110A to stop providing user data to the UE105, and if applicable, to start data forwarding. The MeNB120then transmits message303to the UE105comprising a RRCConnectionReconfiguration message, and the UE responds and transmits message305to the MeNB120comprising a RRCConnectionReconfigurationComplete message. As shown inFIG.4, the SeNB110A initiates the release procedure by transmitting a first message401to the MeNB120comprising a SeNB Release Required. The MeNB120then transmits message403to the SeNB110A comprising a SeNB Release Confirm. The MeNB120then transmits message405to the UE105comprising a RRCConnectionReconfiguration message, and the UE responds and transmits message407to the MeNB120comprising a RRCConnectionReconfigurationComplete message. FIG.5illustrates how a SeNB change procedure may be initiated by a MeNB120and used to transfer a UE context from a source SeNB110A to a target SeNB110B, as well as change the SCG configuration in the UE from the source SeNB110A to the target SeNB110B. As shown inFIG.5, the LTE SeNB change procedure may be initiated by a MeNB120transmitting message501, a SeNB Addition Request, towards a target SeNB110B via the source SeNB110A. In response, the target SeNB110B may transmit message503, a SeNB Addition Request Acknowledgement, towards the MeNB120via the source SeNB110A. The MeNB120may transmit message505, a SeNB Release Request, to the source SeNB110A, which the recipient SeNB110A cannot reject. The MeNB120may then transmit message507, a RRCConnectionReconfiguration message, towards the UE105, and in response receive message509, a RRCConnectionReconfigurationComplete message, from the UE105. The MeNB120may further send message511, a SeNB Reconfiguration Complete message, towards the target SeNB110B. Secondary Node Configuration in Case of LTE-NR Interworking In case of secondary node modification, or node change, or release procedures, the master node may not necessarily maintain the radio resource management (RRM) measurement configuration of the UE for the secondary node, but may only generate a final RRC message. 
The RRC message transmitted from the master node may contain the RRC PDU which is of an RRM measurement configuration prepared by the RRC entity in the secondary node. Whether the master node needs to understand the RRM measurement configuration or not may be left to the implementation. In case of secondary node modification, node change, or release procedures, the RRM measurement report related to the mobility within the secondary node(s) may be received by the master node (RRC entity of the master node) in a final RRC message. In a first option, the master node, without needing to parse the information, may transfer the NR part of the RRC message including the RRM measurement report, e.g., over X2* interface, to the secondary node (e.g. to the RRC entity located in the secondary node), e.g. by means of a container. In a second option, if a direct SRB is allowed between the secondary node and UE, the measurement report may be sent directly between the UE and the secondary node. FIGS.6and7show two options, e.g. called option A and option B, for the secondary node change and the reconfiguration of a new secondary node, wherein the RRC protocol of a secondary node is partially in charge of the secondary node change. In both options, different from LTE DC, the secondary node (SgNB) change may be initiated by the secondary node (e.g. S-SgNB) instead of the master node (MeNB). As NR mobility is expected to be different from mobility in LTE, the mobility algorithms may cope with the beam based mobility. In Option A, not all the secondary node (SgNB) change signaling has to go through the master node (MeNB), whereas in Option B, all the signaling relevant to secondary node (SgNB) change goes via the master node (MeNB), allowing it to understand all the signaling steps; how deeply the master node shall understand the signaling may depend on the implementation. In either case, if the procedure is not intercepted by the master node (MeNB), the target secondary node (e.g. T-SgNB) configuration info, e.g., NR-Configuration Information (or briefly NR-Config Info), is sent to the UE via a final RRC message from the MeNB. Thus, target secondary node configuration info (T-SgNB NR-Config Info) may be (completely or partially) transparent to the MeNB that sends such configuration information to the UE in a final LTE RRC message. LTE-NR Secondary Network Node Change RRC diversity may be envisioned for both the downlink and uplink to address aforementioned challenges e.g. related to Ultra-Reliable and Low Latency Communications (URLLC) and mobility robustness. NR RRM is expected to be different from LTE RRM due to the above-discussed beam based mobility. Especially NR RRM measurement configuration, measurement reporting events and triggers may be rather different from those already specified for LTE mobility. It may e.g. be preferable to keep the LTE and NR RRMs self-contained, e.g. to enable a future-proof NR RRM design e.g., when NR stand-alone operation is considered. In the following, an exemplary set of embodiments is described related to the secondary network node change and the reconfiguration of a new secondary network node where the RRC protocol(s) of the source secondary network node and/or target secondary network node are partially in charge of the secondary network node change. 
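Before turning to those embodiments, the first option above, in which the master node passes the NR part of an RRC message to the secondary node as an opaque container over X2*, can be illustrated with a minimal Python sketch. The field and function names are illustrative assumptions for this sketch only, not 3GPP-defined structures.

```python
from dataclasses import dataclass

@dataclass
class UplinkRrcMessage:
    lte_part: dict        # parsed and acted upon by the master node
    nr_container: bytes   # opaque NR RRC PDU (e.g., an RRM measurement report)

def forward_without_parsing(msg: UplinkRrcMessage, x2_star_send):
    # The master node handles only the LTE part of the message...
    print("master node processes LTE part:", msg.lte_part)
    # ...and transfers the NR container over X2* to the secondary node
    # without needing to parse it.
    x2_star_send("secondary node", msg.nr_container)

forward_without_parsing(
    UplinkRrcMessage(lte_part={"type": "measurement report"},
                     nr_container=b"\x0a\x0b\x0c"),  # placeholder PDU bytes
    lambda dst, pdu: print(f"X2* to {dst}: {len(pdu)}-byte container"),
)
```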
Minimization of the specification of NR related mobility measurement configuration in LTE specifications and vice versa may be achieved by distributing mobility management/control between MeNB120and SgNB110A-B (or MgNB120and SeNB110A-B) in case of LTE-NR interworking. The disclosure proposes two major options for the secondary network node change and the reconfiguration of a new secondary network node where the RRC protocol(s) of the source secondary network node and/or target secondary network node are partially in charge of the secondary network node change as shown inFIGS.6-7. These options are different from LTE DC, as described above, because, for example, the SgNB Change is initiated by the S-SgNB110A instead of the MeNB120. Additionally, in both options, the target SgNB configuration may be transparent to the MeNB. It may be desirable for the SgNB change to be initiated by the S-SgNB110A since NR mobility is expected to be different from LTE mobility and the mobility algorithms may be beam based. It may be expected that the entity deciding NR mobility may reside in the NR part of the 5G RAN, i.e., within a gNB, which may include knowledge about the NR radio resource topology in the neighborhood and the current NR radio resource status, as well as control and processing of NR related UE measurements. The procedures described below propose a solution where the LTE and NR related logical nodes of the 5G RAN are distinct, separate logical entities, inter-connected via an interface that is called “X2*.” First, the master network node120, such as the MeNB120inFIG.6, determines one or more suitable candidates to be the SgNB. This may be based on downlink (DL) measurements or uplink (UL) measurements. In the case of a DL measurement based procedure, the SgNB determines the suitable measurement configuration for the UE, including suitable inter-frequencies to measure. In addition, the need for measurement gaps can be determined based on the UE capability. The SgNB constructs the measurement (RRC) configuration. The configuration is sent to the UE either directly or via the MeNB. The first solution is only possible if direct SRBs between the SgNB and the UE are supported. In the latter solution, the MeNB sends the final RRC message to the UE. After the UE has measured potential candidates for a new SgNB, the UE sends a measurement report to the network. This may be sent to the SgNB directly in case an SRB between the UE and the SgNB is supported. If the measurement report is sent to the MeNB, the MeNB forwards the measurement results to the SgNB via X2 or X2*. In the case of a UL measurement based procedure, the decision to change the SgNB may be performed in the original SgNB. The UE may potentially be configured with a UL signal to be used for mobility. The signal may be similar to SRS. Depending on the solution, the UL signal configuration can be sent via RRC to the MeNB or the SgNB directly. The SgNB can directly receive the UL signal from the UE, and based on that determine suitable candidate(s) for the SgNB change. In cases where the MeNB receives the UL signal, the MeNB may forward the measurement result to the SgNB. FIG.12shows an exemplary signaling diagram with respect to the above-described measurement configuration. The master network node120receives measurement configuration information1201constructed by the current (or source) secondary network node, e.g. S-SgNB110A. The master network node120constructs a final RRC message1202comprising the received measurement configuration and sends it to the UE105. 
Based on the measurement configuration, UE105performs measurements of potential candidates for a new secondary network node, e.g. T-SgNB110B. Finally, the UE responds with a measurement report message1203comprising the measurement report indicative of the measurements of potential candidates for the new secondary node. The master network node sends a measurement report1204comprising the measurement results to the current (or source) secondary network node110A that determines, based on the measurement report, the new (or target) secondary network node110B. Once the target SgNB is determined, the signaling to change the SgNB takes place as described below in connection withFIG.6orFIG.7. As shown inFIG.6, the SgNB change is initiated by the S-SgNB110A sending message601, a SgNB Handover Request message, to T-SgNB110B without passing it through the MeNB120. NR-Config information included within the Handover Request601message may be used by the S-SgNB110A to request the T-SgNB110B to perform certain configuration actions, similar to those performed via LTE SCG-ConfigInfo and/or Handover Request in LTE. Next, the T-SgNB110B replies back to the S-SgNB110A with message603, a SgNB Handover Response message including the NR configuration, e.g., NR-Config. NR-Config may include the new radio resource associated with the T-SgNB110B. The S-SgNB110A then sends message605with the NR-Config information to the MeNB120. Message605may be an X2* AP message, called SgNB Change Request inFIG.6, in order to enable the RRC reconfiguration of the UE105with the T-SgNB110B. The same X2* AP message605may include information on the user plane switch so as to be able to successfully execute the SgNB change and activate user plane data flow toward UE105. The NR configuration message (e.g., NR-Config) may be used to transfer the radio configuration generated by the T-SgNB110B. Upon receiving the NR configuration via message605, the MeNB120may (i) intercept the procedure and send message607, a SgNB Change Reject, to the S-SgNB110A, which in turn sends message609, a SgNB Change Reject, to the T-SgNB110B, or (ii) proceed by transmitting message611, a SgNB Release Request, to the S-SgNB110A. In the second case, the MeNB120may perform RRC Connection Reconfiguration steps, including transmitting message613, a RRCConnectionReconfiguration message, to the UE105, the UE105transmitting message615, a RRCConnectionReconfigurationComplete message, to the MeNB120, and the MeNB120transmitting message617, a SgNB Reconfiguration Complete message, to the T-SgNB110B to complete the SgNB transfer procedure. FIG.7illustrates a second signaling diagram according to some embodiments. As shown inFIG.7, the SgNB change procedure is initiated by the S-SgNB110A, but the signaling goes via the MeNB120. The S-SgNB110A initiates the SgNB change procedure by transmitting message701, a SgNB Change Request with NR Config Info message, to the MeNB120. The MeNB120may then reject the SgNB change by transmitting message703, a SgNB Change Reject message, or proceed with the change by transmitting message705, a SgNB Addition Request (including NR-Config Info) message, towards the T-SgNB110B. In the latter case, the T-SgNB110B may respond to message705by transmitting towards the MeNB120message707, a SgNB Addition Request Acknowledgement message, which includes the NR-Config Info for the T-SgNB110B. 
In response to message707, the MeNB120may transmit message711, a SgNB Change Request Acknowledgement (including NR-Config Info), to the S-SgNB110A, as well as transmit message713, a SgNB Release Request message, to the S-SgNB110A. The MeNB120may perform RRC Connection Reconfiguration steps, including transmitting message715, a RRCConnectionReconfiguration message, to the UE105, the UE105transmitting message717, a RRCConnectionReconfigurationComplete message, to the MeNB120, and the MeNB120transmitting message719, a SgNB Reconfiguration Complete message, to the T-SgNB110B to complete the SgNB transfer procedure. Depending on the implementation and which messages the MeNB120can partially or fully understand, e.g., SgNB Change Request or SgNB Addition Request Acknowledge, the MeNB120may intercept the procedure, e.g., proceed with or reject the SgNB change earlier, as shown inFIG.7, as compared to the other option shown inFIG.6. However, in some embodiments, the procedure shown inFIG.6may be more desirable, since forcing each signal to go through the MeNB120may increase signaling overhead and latency for the SgNB change procedure. On the other hand, it may also be advantageous to allow a central entity to oversee the overall mobility behavior and respective RRM strategy due to, for example, the fact that mobility of the RRC connection that is controlled by the MeNB needs to be taken into account. Apart from that, the second option shown inFIG.7would be able to reuse the existing LTE framework. In some embodiments, the NR configuration message, e.g., NR-Config Info in messages603,707, may be an RRC Protocol Data Unit (PDU) transferred between the UE RRC entity and the NR RRC entity. Yet in another embodiment, such information could be comprised by an information element (IE) similar to SCG-Config in LTE DC. In another option/embodiment, the LTE-NR interworking scenario as shown inFIGS.6-7could be the other way around, such that an NR node is the master network node120(i.e., MgNB120), and LTE nodes are the source and target secondary network nodes (i.e., S-SeNB110A and T-SeNB110B and/or S-SgNB and T-SgNB). In some embodiments, the configuration may be transferred directly from the S-SgNB to the UE instead of transferring it via the MeNB. In another embodiment, the involved 5G RAN nodes could be nodes that support both LTE and NR access; hence, each entity could be in the position to comprehend and process RRC messages and perform respective RRM actions. Yet, in another embodiment, the scenario could be the same as shown inFIGS.6-7, and the MeNB120can in parallel add an SgNB or change an SgNB by following the existing LTE DC procedures, as can be found in 3GPP TS 36.300. FIG.8is an exemplary flow diagram according to some embodiments. In preferred embodiments, method800is performed by the source secondary network node110A as described in connection withFIG.10to transfer a UE context from the source secondary network node110A to a target secondary network node110B that is different than the source secondary network node110A. In step801, the source secondary network node110A transmits a first message to the target secondary network node110B, wherein the target network node110B is configured to respond to the first message by transmitting to the source secondary network node110A a second message comprising configuration data of the target secondary network node110B. In step803, the source secondary network node110A receives the second message transmitted by the target secondary network node110B. 
In step805, after receiving the second message, the source secondary network node110A initiates a transfer of the UE context from the source secondary network node110A to the target secondary network node110B, wherein initiating the transfer of the UE context comprises the source secondary network node110A transmitting to a master network node120a third message comprising the configuration data of the target secondary network node. In some embodiments, the first message in step801may comprise a Handover Request message601as shown inFIG.6, the Handover Request message instructing the target secondary network node110B to perform one or more configuration actions. In some embodiments, the second message in steps801and803of method800may comprise a Handover Response message, such as the Handover Request Ack message603as shown inFIG.6. In some embodiments, the configuration data in the second message may comprise NR-Config Info, which may be one of a RRC PDU or an IE. In some embodiments, the source secondary network node110A may receive a fourth message transmitted by the master network node120in response to the master network node120receiving the third message. The fourth message may be a Release Request, such as message611shown inFIG.6. FIG.9is an exemplary flow diagram according to some embodiments. In preferred embodiments, method900is performed by the master network node120as described below in connection withFIG.11. In step901, the master network node120receives a first message transmitted by the source secondary network node110A, the first message comprising a request to initiate a transfer of the UE context from the source secondary network node110A to the target secondary network node110B. In some embodiments, the first message may comprise a Change Request, such as message701as shown inFIG.7. In step903, in response to the request, the master network node120transmits a second message to the target secondary network node110B. In step905, the master network node120receives a third message from the target secondary network node110B, the third message comprising configuration data of the target secondary network node110B. In some embodiments, the configuration data of the target secondary network node110B may comprise NR-Config Info, which may comprise one of a RRC PDU or an IE. In some embodiments, method900may further comprise the master network node120transmitting an acknowledgement of the request to the secondary network node110A, such as message711shown inFIG.7. In some embodiments, method900may further comprise the master network node120transmitting a release request to the source secondary network node110A, such as message713shown inFIG.7. In some embodiments, method900may further comprise the master network node120transmitting a message to the UE105in response to receiving the third message, the message comprising an RRC Connection Reconfiguration (RRCConnectionReconfiguration) message such as message715shown inFIG.7. The method900may further comprise the master network node120receiving a message from the UE105, the message comprising an RRC Connection Reconfiguration Complete (RRCConnectionReconfigurationComplete) message such as message717shown inFIG.7. In some embodiments, the method900may further comprise the master network node120transmitting to the target secondary network node110B a Reconfiguration Complete message, such as message719shown inFIG.7. 
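The message sequences walked through in connection withFIGS.6-9can be condensed into a minimal trace sketch. This is a toy model of the message order only, assuming a simple print-based transport; the helper and parameter names are illustrative, and the accept/reject decision at the master node is reduced to a boolean for the sake of illustration.

```python
def send(src, dst, message):
    print(f"{src} -> {dst}: {message}")

def sgnb_change_option_a(menb_accepts=True):
    # FIG. 6: the secondary nodes negotiate directly, the MeNB only
    # finalizes (or intercepts) the change.
    send("S-SgNB", "T-SgNB", "601 SgNB Handover Request (NR-Config Info)")
    send("T-SgNB", "S-SgNB", "603 SgNB Handover Response (NR-Config)")
    send("S-SgNB", "MeNB", "605 SgNB Change Request (NR-Config)")
    if not menb_accepts:
        send("MeNB", "S-SgNB", "607 SgNB Change Reject")
        send("S-SgNB", "T-SgNB", "609 SgNB Change Reject")
        return
    send("MeNB", "S-SgNB", "611 SgNB Release Request")
    send("MeNB", "UE", "613 RRCConnectionReconfiguration")
    send("UE", "MeNB", "615 RRCConnectionReconfigurationComplete")
    send("MeNB", "T-SgNB", "617 SgNB Reconfiguration Complete")

def sgnb_change_option_b(menb_accepts=True):
    # FIG. 7: all signaling goes via the master node, which can
    # intercept the procedure at an earlier point.
    send("S-SgNB", "MeNB", "701 SgNB Change Request (NR-Config Info)")
    if not menb_accepts:
        send("MeNB", "S-SgNB", "703 SgNB Change Reject")
        return
    send("MeNB", "T-SgNB", "705 SgNB Addition Request (NR-Config Info)")
    send("T-SgNB", "MeNB", "707 SgNB Addition Request Ack (NR-Config Info)")
    send("MeNB", "S-SgNB", "711 SgNB Change Request Ack (NR-Config Info)")
    send("MeNB", "S-SgNB", "713 SgNB Release Request")
    send("MeNB", "UE", "715 RRCConnectionReconfiguration")
    send("UE", "MeNB", "717 RRCConnectionReconfigurationComplete")
    send("MeNB", "T-SgNB", "719 SgNB Reconfiguration Complete")

sgnb_change_option_a()
sgnb_change_option_b(menb_accepts=False)  # earlier intercept point of FIG. 7
```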
In connection withFIGS.8-9, in some embodiments, the source secondary network node110A comprises a first New Radio Node, the target secondary network node110B comprises a second New Radio Node, and the master network node120comprises an eNB. In other embodiments, the source secondary network node110A comprises a first eNB, the target secondary network node110B comprises a second eNB, and the master network node120comprises a New Radio Node. FIG.10is a block diagram of a source secondary network node110A according to some embodiments. As shown inFIG.10, source secondary network node110A may comprise: a data processing system (DPS)1002, which may include one or more processors1055(e.g., a general purpose microprocessor and/or one or more other data processing circuits, such as an application specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), and the like); a network interface1005for use in connecting source secondary network node110A to network130; a radio transceiver1007(i.e., a receiver and a transmitter) coupled to an antenna1022for use in, for example, wirelessly communicating with UEs and other devices; and a local storage unit (a.k.a., “data storage system”)1012, which may include one or more non-volatile storage devices and/or one or more volatile storage devices (e.g., random access memory (RAM)). In embodiments where source secondary network node110A includes a general-purpose microprocessor, a computer program product (CPP)1041may be provided. CPP1041includes a computer readable medium (CRM)1042storing a computer program (CP)1043comprising computer readable instructions (CRI)1044. CRM1042may be a non-transitory computer readable medium, such as, but not limited to, magnetic media (e.g., a hard disk), optical media (e.g., a DVD), memory devices (e.g., random access memory), and the like. In some embodiments, the CRI1044of computer program1043is configured such that when executed by data processing system1002, the CRI causes the source secondary network node110A to perform steps described above (e.g., steps described above with reference to the flow charts). In other embodiments, secondary network node110A may be configured to perform steps described herein without the need for code. That is, for example, data processing system1002may consist merely of one or more ASICs. Hence, the features of the embodiments described herein may be implemented in hardware and/or software. FIG.11is a block diagram of a master network node120according to some embodiments. As shown inFIG.11, master network node120may comprise: a data processing system (DPS)1102, which may include one or more processors1155(e.g., a general purpose microprocessor and/or one or more other data processing circuits, such as an application specific integrated circuit (ASIC), field-programmable gate arrays (FPGAs), and the like); a network interface1105for use in connecting master network node120to network130; a radio transceiver1107coupled to an antenna1122for use in, for example, wirelessly communicating with UEs and other devices; and a local storage unit (a.k.a., “data storage system”)1112, which may include one or more non-volatile storage devices and/or one or more volatile storage devices (e.g., random access memory (RAM)). In embodiments where master network node120includes a general purpose microprocessor, a computer program product (CPP)1141may be provided. CPP1141includes a computer readable medium (CRM)1142storing a computer program (CP)1143comprising computer readable instructions (CRI)1144. 
CRM1142may be a non-transitory computer readable medium, such as, but not limited to, magnetic media (e.g., a hard disk), optical media (e.g., a DVD), memory devices (e.g., random access memory), and the like. In some embodiments, the CRI1144of computer program1143is configured such that when executed by data processing system1102, the CRI causes the master network node120to perform steps described above (e.g., steps described above with reference to the flow charts). In other embodiments, master network node120may be configured to perform steps described herein without the need for code. That is, for example, data processing system1102may consist merely of one or more ASICs. Hence, the features of the embodiments described herein may be implemented in hardware and/or software. In the following, various embodiments will be exemplarily described. Secondary Network Node Embodiments: E1. A method performed by a source secondary network node to transfer a User Equipment context from the source secondary network node to a target secondary network node that is different than the source secondary network node, the method comprising: transmitting, by the source secondary network node, a first message to the target secondary network node, wherein the target network node is configured to respond to the first message by transmitting to the source secondary network node a second message comprising configuration data of the target secondary network node; receiving, at the source secondary network node, the second message transmitted by the target secondary network node; and after receiving the second message, initiating a transfer of the UE context from the source secondary network node to the target secondary network node, wherein initiating the transfer of the UE context comprises transmitting, by the source secondary network node, to a master network node a third message comprising the configuration data of the target secondary network node. E2. The method of embodiment 1, wherein the first message comprises a Handover Request message, the Handover Request message instructing the target secondary network node to perform one or more configuration actions. E3. The method of embodiment 2, wherein the second message comprises a Handover Response message. E4. The method of any one of embodiments 1-2, wherein the configuration data comprises one of: a radio resource control (RRC) protocol data unit (PDU) or an information element (IE). E5. The method of any one of embodiments 1-4, further comprising: receiving, at the source secondary network node, a fourth message transmitted by the master network node, the fourth message comprising a Release Request, wherein the master network node is configured to transmit the fourth message after receiving the third message. E6. The method of any one of embodiments 1-5, wherein the source secondary network node comprises a first New Radio Node, the target secondary network node comprises a second New Radio Node, and the master network node comprises an Evolved Node B. E7. The method of any one of embodiments 1-5, wherein the source secondary network node comprises a first Evolved Node B, the target secondary network node comprises a second Evolved Node B, and the master network node comprises a New Radio Node. E8. A source secondary network node, comprising: a transmitter; a receiver; a memory; and a data processing system comprising one or more processors, wherein the source secondary network node is configured to perform the method of any one of embodiments 1-7. 
Master Network Node Embodiments E1. A method performed by a master network node to transfer a User Equipment context from a source secondary network node to a target secondary network node that is different than the source secondary network node, the method comprising: receiving, at the master network node, a first message transmitted by the source secondary network node, the first message comprising a request to initiate a transfer of the UE context from the source secondary network node to the target secondary network node; in response to the request, transmitting, by the master network node, a second message to the target secondary network node; and receiving, by the master network node, a third message from the target secondary network node, the third message comprising configuration data of the target secondary network node. E2. The method of embodiment 1, further comprising: transmitting, by the master network node, a fourth message to the source secondary network node, the fourth message comprising an acknowledgement of the request. E3. The method of embodiment 2, further comprising: transmitting, by the master network node, a fifth message to the source secondary network node, the fifth message comprising a Release Request. E4. The method of any one of embodiments 1-3, further comprising: in response to receiving the third message, transmitting a fourth message to the User Equipment, the fourth message comprising a RRCConnectionReconfiguration message; and receiving a fifth message from the User Equipment, the fifth message comprising a RRCConnectionReconfigurationComplete message. E5. The method of embodiment 4, further comprising: in response to receiving the fifth message, transmitting, to the target secondary network node, a sixth message, the sixth message comprising a Reconfiguration Complete message. E6. The method of any one of embodiments 1-5, wherein the first message comprises a Change Request. E7. The method of any one of embodiments 1-6, wherein the configuration data of the target secondary network node comprises one of: a radio resource control (RRC) protocol data unit (PDU) or an information element. E8. The method of any one of embodiments 1-7, wherein the source secondary network node comprises a first New Radio Node, the target secondary network node comprises a second New Radio Node, and the master network node comprises an Evolved Node B. E9. The method of any one of embodiments 1-7, wherein the source secondary network node comprises a first Evolved Node B, the target secondary network node comprises a second Evolved Node B, and the master network node comprises a New Radio Node. E10. A master network node, comprising: a transmitter; a receiver; a memory; and a data processing system comprising one or more processors, wherein the master network node is configured to perform the method of any one of embodiments 1-9. While various embodiments of the present disclosure are described herein, it should be understood that they have been presented by way of example only. Additionally, while the processes described above and illustrated in the drawings are shown as a sequence of steps, this was done solely for the sake of illustration. Accordingly, it is contemplated that some steps may be added, some steps may be omitted, the order of some steps may be re-arranged, and some steps may be performed in parallel.
34,511
11943676
DETAILED DESCRIPTION Embodiments provide for a device, serving cell and method of switching between modes of operation for the device in a cellular network. In accordance with one aspect of the present disclosure, a method is provided for switching between modes of operation for a device in a cellular network. A service is received via a connection with a serving cell. While receiving the service via the connection with the serving cell, a condition indicative of a deterioration in the service received via the connection with the serving cell and an absence of a suitable neighbor cell are detected. Responsive to detecting both the condition indicative of a deterioration in the service received via the connection with the serving cell and the absence of a suitable neighbor cell, discovery of relay nodes is initiated. In accordance with another aspect of the present disclosure, a device operating in a cellular network includes a communication subsystem that receives a service via a connection with a serving cell, and a processor. The processor is communicatively coupled with the communication subsystem. While the at least one communication subsystem is receiving the service via the connection with the serving cell, the processor detects a condition indicative of a deterioration in the service received via the connection with the serving cell and an absence of a suitable neighbor cell. Responsive to detecting both the condition indicative of a deterioration in the service received via the connection with the serving cell and the absence of a suitable neighbor cell, the processor initiates discovery of relay nodes. In accordance with another aspect, a computer program product for enabling switching between modes of operation for a device in a cellular network is provided. The computer program product includes a non-transitory computer readable storage medium having computer readable program code embodied therewith. The computer readable program code contains instructions for providing a service to the device and determining, while providing the service to the device, that the device is approaching an edge of coverage. The computer readable program code contains further instructions for, responsive to determining that the device is approaching the edge of coverage, sending a relay discovery command to the device; receiving a measurement report from the device indicating discovered nodes capable of acting as relay nodes; selecting a suitable node to act as a relay node; and instructing the device to initiate a mechanism to switch to receiving the service via the relay node. According to another aspect, a serving cell is provided for enabling switching between modes of operation for a device operating in a cellular network. The serving cell includes a processor and a communication subsystem. The communication subsystem is communicatively coupled with the processor. The communication subsystem provides a service to the device, and responsive to the processor determining that the device is approaching an edge of coverage: sends a relay discovery command to the device and receives a measurement report from the device indicating discovered nodes capable of acting as relay nodes. While the communication subsystem is providing the service to the device, the processor determines that the device is approaching an edge of coverage. 
Responsive to receiving the measurement report from the device, the processor selects a suitable node to act as a relay node and instructs the device to initiate a mechanism to switch to receiving the service via the relay node. It should be noted that although the examples provided herein relate to 3GPP and LTE, the proposed solutions are not limited to those examples and may be applicable to other systems or Radio Access Technologies, such as (but not limited to) 3GPP GSM EDGE Radio Access Network (3GPP GERAN) or 3GPP UMTS Terrestrial Radio Access Network (3GPP UTRAN), IEEE 802.11, CDMA2000, etc. In addition, the names used for code-points, information elements and messages are only examples, and other names may be used. Furthermore, although the description of the solution might refer to a specific application (e.g. MCPTT), the solutions presented here are not limited in applicability to any particular application. Additionally, the terms “UNR,” “relay” and “relay node” are used interchangeably herein. Referring now toFIG.1, User-to-Network Relays (UNRs) may be used for extending network coverage for Mission Critical Push-To-Talk (MCPTT). UE-13, UE-4, and UE-5 are acting as UNRs102a,102b,102c(referenced generally or collectively as UNR102). A UNR102communicates with eNB108of the LTE network100through the LTE-Uu (Uu) radio interface and is able to connect a remote User Equipment (UE)104a-104j(referenced generally or collectively as UE104) that is outside radio network coverage to the LTE network100. The UNR102then relays downlink (network-to-UE) and uplink (UE-to-network) transmissions over the ProSe UE-to-UE Sidelink radio interface (PC5). As illustrated, the network may use multicast (e.g., Multimedia Broadcast Multicast Service (MBMS)) or unicast (Evolved Packet System (EPS) bearers) transmission types. In this example, the Group Communication Service Application Server (GCS AS)106is the MCPTT application server. Multicast service is provided as enhanced MBMS (eMBMS) via Broadcast-Multicast Service Center/MBMS (BM-SC/MBMS) gateway112. eMBMS transmission links between the GCS AS106and the LTE UE104, referred to as the LTE-Uu (Uu), are denoted as thick dashed lines. Unicast transmission links are provided via the Packet Data Network (PDN) gateway110and are denoted as thick solid lines. GCS AS106communicates with BM-SC/MBMS gateway112and PDN gateway110via an Internet Protocol (IP) Network111. The network100can directly provide the MCPTT service to MCPTT UEs that are within radio coverage of an eNB108a,108b,108c(referenced generally or collectively as eNB108) in Network Mode Operation (NMO) mode. InFIG.1, UE-2104c, UE-3104dand UE-6104jare operating in NMO. UE-2104c, UE-3104dand UE-4 (UNR102b) are within broadcast range114of eNB108b. On the other hand, out of coverage UEs104may receive the MCPTT service via UNRs102in a mode referred to as Network Mode Operation via Relay (NMO-R). InFIG.1, UE-14104aand UE-15104bare operating in NMO-R through UE-13 (UNR102a), UE-7104eand UE-8104fare operating in NMO-R through UE-4 (UNR102b), while UE-9104g, UE-10104hand UE-11104iare operating in NMO-R through UE-5 (UNR102c). InFIG.1, UNR downlink relaying over PC5 is denoted as thin solid lines. In addition, UE-14104ais in use by the current talker of the MCPTT group, and UE-13 (UNR102a) is the UNR in charge of transferring the talker's voice to the eNB108aand eventually to the GCS/MCPTT application server106. UNR uplink relaying over PC5 is denoted as a thin dotted line. 
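As a minimal sketch of the topology just described, the serving path of each UE inFIG.1can be modelled as either direct (NMO) or via a UNR over PC5 (NMO-R). The data structure below is a toy restatement of the figure, not a protocol structure, and the serving eNB of UE-6 is left generic since the figure description does not name it.

```python
# Serving paths of FIG. 1, modelled as UE -> (mode, serving element).
serving_path = {
    "UE-2": ("NMO", "eNB-108b"),
    "UE-3": ("NMO", "eNB-108b"),
    "UE-6": ("NMO", "eNB-108"),           # serving eNB not specified above
    "UE-14": ("NMO-R", "UNR-102a (UE-13)"),
    "UE-15": ("NMO-R", "UNR-102a (UE-13)"),
    "UE-7": ("NMO-R", "UNR-102b (UE-4)"),
    "UE-8": ("NMO-R", "UNR-102b (UE-4)"),
    "UE-9": ("NMO-R", "UNR-102c (UE-5)"),
    "UE-10": ("NMO-R", "UNR-102c (UE-5)"),
    "UE-11": ("NMO-R", "UNR-102c (UE-5)"),
}

def relayed_ues(paths):
    # UEs that are out of coverage and reached via a relay over PC5.
    return [ue for ue, (mode, _) in paths.items() if mode == "NMO-R"]

print(relayed_ues(serving_path))
```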
It should be noted that both end-user Public Safety (e.g., MCPTT) service provision and UE-to-Network relaying functions may be activated on a single UE. However, for the sake of clarity, these functions are further considered as independent functionalities. It should also be noted that both the application media stream (e.g., voice frames) and the corresponding signaling (e.g., Session Initiation Protocol (SIP) signaling messages) are relayed to/from out-of-coverage UEs (this infers that a listening-only UE may use uplink transmission in certain phases of a group call). NMO to NMO-R Switch For a UE receiving MCPTT service from the network in NMO, transitioning to NMO-R largely comprises two distinct phases: a) discovering a suitable UNR; and b) executing a procedure to move the NMO bearers over Uu to NMO-R bearers over PC5. Referring now toFIG.2, a flowchart200is provided which illustrates an example process for switching a UE104from operating in NMO mode to operating in NMO-R mode. It should be noted, in the following descriptions, that the term “network” is used to indicate the infrastructure element from which the device (either UE104or UNR102depending on the context) is receiving the service. Typically, this infrastructure element will be an LTE eNB108. Beginning at block S202, the UE104, operating in NMO mode, upon satisfying (at block S204) the triggering conditions for discovering UNRs102, initiates (at block S206) UNR discovery. UNR discovery may be triggered by either the UE104or the network, and methods of triggering discovery by both the UE104and the network are discussed in further detail below. The UE104starts to attempt UNR discovery based on the deterioration of serving cell quality/signal strength and the absence of suitable neighbor (i.e. non-UNR) cells (i.e. edge of radio network coverage). The network may trigger UNR discovery based on, for example, a failed handover attempt (target cell overloaded, etc.). If, whilst discovering a UNR, the quality of the service received via the network improves or if a suitable target neighbor cell is found (thus providing a way for the UE to continue the NMO), the UE may stop the UNR discovery procedures and stay in NMO (i.e. return to block S202). Upon successful discovery and selection (at block S208) of a suitable UNR102, the UE104then initiates (at block S210) mechanisms to switch to NMO-R at an appropriate time. When the UE104is in NMO, it is receiving service via the network. The application (e.g., the MCPTT application) should be oblivious to any changes in the lower layer when the UE104switches to NMO-R. The PDCP, RLC, MAC and PHY layers in the LTE stack, however, need to be reconfigured into NMO-R mode of operation upon moving into NMO-R. The association of various peer protocol layers in NMO and NMO-R is as shown inFIG.3. Triggering Conditions for Initiating Discovery of UNR The detection of trigger conditions for the NMO to NMO-R switch, in turn, initiates the discovery of suitable UNRs102. An example sequence may be measuring the serving cell, measuring neighbor cells (NC), determining that the serving cell quality is low and there is no suitable NC, looking for UNRs102(i.e. performing discovery), and eventually switching to NMO-R upon finding a suitable UNR102. The UE104may indicate to the network its preference (or that certain criteria are met) to switch to NMO-R, with or without identifying a candidate UNR102during this process. 
In RRC_CONNECTED mode, this preference may indicate a request for the network to terminate the RRC connection. Certain devices having more than one transceiver may be capable of performing a “Make-Before-Break” (MBB) handover, which is discussed in greater detail below. For both MBB-capable and non-MBB-capable devices, discovery may be initiated prior to RRC Connection Release in the serving cell. A UE104may discover one or more relays102supporting the MCPTT service the UE104is interested in (i.e. phase a) of the NMO to NMO-R transition, mentioned above) in order to be able to switch to NMO-R operation (i.e. phase b) of the NMO to NMO-R transition, mentioned above). However, searching for relays102in the vicinity of the UE104incurs additional power consumption at the UE104. Performing discovery whilst being in NMO may also result in service interruption or degradation to the services received over the network depending on the UE104capabilities. Hence, a UE104in RRC_IDLE or RRC_CONNECTED with good radio conditions and using the MCPTT service in NMO with a satisfactory quality of service may not trigger UNR discovery. In principle, if the UE104finds a suitable neighbor cell when the serving cell quality degrades, then the UE104follows the normal procedures and reports target cell measurements to the eNB108(i.e. using a measurement report) and depends on the eNB108for potential service continuity (e.g., handover (HO), as is the case currently in RRC_CONNECTED mode). However, if the eNB108, upon receiving the measurement report, makes a decision that handover is not suitable and instead NMO-R might be necessary (e.g., due to a high load in the reported neighbor cell, reported quality being not good enough, etc.), the eNB108may trigger the UE104to initiate discovery of the relays102at the UE104. Additionally, a UE104in NMO may autonomously initiate UNR discovery upon determining certain conditions calling for an imminent need for transition to NMO-R. In such a case, the UE104shall initiate and complete the UNR discovery before the UE104is abruptly disconnected from the network (e.g., by experiencing a Radio Link Failure). The triggering conditions for beginning the search for UNRs102may include one or more of: detection of a condition indicating degrading network service; and detection of an “Edge of coverage” condition. One example of a condition indicating degrading network service is the detection of radio link degradation on the Uu interface. This degradation may include degradation of serving cell quality (e.g., Reference Signal Received Quality (RSRQ) or Channel Quality Indicator (CQI)) below a predetermined threshold. Such a predetermined threshold may be signaled to the UE104via RRC signaling or may be preconfigured in the UE (e.g., specified in the standards, configured in the Universal Integrated Circuit Card (UICC), etc.). Anticipation of an imminent Radio Link Failure (RLF) is another example of a condition indicating degrading network service. Radio link monitoring is used to detect the quality of the radio link between the eNB108and the UE104. The RLF procedure is used to trigger procedures that the UE104shall initiate upon detecting deterioration of the radio link between the eNB108and the UE104. Two phases govern the behavior associated to RLF. The first phase is started upon radio problem detection (i.e. upon detecting a predetermined number of out-of-sync indications from the physical layer) and leads to RLF detection. 
The UE104continues to be in RRC_CONNECTED state, and the phase is based on a timer or other (e.g., counting) criteria (T1). The timer is referred to as T310 in 3GPP TS 36.331. The second phase is started upon RLF detection (i.e. subsequent to the first phase) or handover failure and is also timer based (T2) (i.e. a timer (referred to as T311 in 3GPP TS 36.331) is started upon detecting the RLF). During phase two, the UE104initiates a reestablishment procedure and attempts to reconnect to an eNB108. Upon expiry of the timer (T311), the UE104enters RRC_IDLE. Anticipation of RLF may include one or more of the following: a timer indicative of imminent radio link failure (such as T310 or T312) is running, or a predetermined number of “out-of-sync” indications have been received. The predetermined number of out-of-sync indications may be indicated to the UE104via RRC signaling or may be preconfigured in the UE104(e.g., specified in the standards, configured in the UICC, etc.). Another example of a condition indicating degrading network service is service quality degradation. The application or an underlying protocol such as Packet Data Convergence Protocol (PDCP) or Radio Link Control (RLC) detects that the quality of the received service has degraded below a predetermined threshold. For instance, this detection may include detection of a predetermined number or percentage of missed/un-decoded voice frames, user data frames or IP packets pertaining to a media. This detection may also include a determination that other key parameters, such as the residual bit error rate on the application packets, have exceeded a predetermined threshold, etc. These predetermined numbers and thresholds may either be signaled to the UE104via RRC signaling or they may be preconfigured in the UE104(e.g. specified in the standards, configured in the UICC, etc.). Yet another example of a condition indicating degrading network service is the service becoming unavailable. In other words, the serving cell does not provide the service (e.g., the MCPTT session or the eMBMS session) the UE104is interested in (e.g., due to a temporary lack of resources). An example of detection of an “Edge of coverage” condition may include detection of one or more of the above conditions related to degrading network service in the serving cell whilst determining that there is no suitable neighbor cell providing the service the UE104is interested in. Edge of coverage may be detected based on the neighbor cell measurements and also via the system information of the neighbor cells to identify if the service is supported, for example, by reading the System Information Block (SIB)13to see if the related service (e.g., MCPTT service or the eMBMS session, etc.) is available. When a UE104is approaching the edge of coverage, none of the detected cells, including the serving cell and neighbor cells on the measured frequencies, would look good (i.e. there is no suitable cell as defined in 3GPP TS 36.304). For example, the received power of those cells may be less than a threshold. If that is the case and the UE104has not triggered any events for handover (for example, Event A3 as defined in 3GPP TS 36.331), then NMO-R may be appropriate. According to 3GPP TS 36.331, Event A2 is triggered if the serving frequency signal becomes worse than a threshold. However, if the measurement report does not contain any neighbor cell measurement, it may be indicative of the Edge of coverage condition. Moreover, there is no event for reporting that non-serving cells become worse than a threshold. 
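Before introducing the additional reporting event discussed next, the example conditions above can be combined into a single "initiate UNR discovery" decision. The following minimal sketch assumes illustrative thresholds and field names; none of the values are taken from the standards or from this description.

```python
from dataclasses import dataclass

@dataclass
class RadioState:
    serving_rsrq_db: float            # serving cell quality (e.g., RSRQ)
    rlf_timer_running: bool           # e.g., T310/T312 running
    out_of_sync_count: int            # consecutive out-of-sync indications
    missed_frame_ratio: float         # fraction of un-decoded media frames
    service_available: bool           # serving cell provides the service
    suitable_neighbor_with_service: bool

def should_initiate_unr_discovery(s, rsrq_threshold_db=-18.0,
                                  out_of_sync_limit=10,
                                  missed_frame_limit=0.05):
    degrading = (s.serving_rsrq_db < rsrq_threshold_db
                 or s.rlf_timer_running
                 or s.out_of_sync_count >= out_of_sync_limit
                 or s.missed_frame_ratio > missed_frame_limit
                 or not s.service_available)
    # Edge of coverage: degrading service detected together with no
    # suitable neighbor cell providing the service of interest.
    return degrading and not s.suitable_neighbor_with_service

print(should_initiate_unr_discovery(RadioState(
    serving_rsrq_db=-19.5, rlf_timer_running=False, out_of_sync_count=0,
    missed_frame_ratio=0.0, service_available=True,
    suitable_neighbor_with_service=False)))  # True: start UNR discovery
```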
Since there is no event for reporting that non-serving cells become worse than a threshold, a new event, e.g., A7, may be defined that is triggered when a non-serving frequency becomes worse than a threshold. When the network receives both A2 and A7 triggers, the network may assume that the UE104is approaching the edge of coverage. Note that some or all of the above triggering conditions may be detected by either the UE104or the eNB108or both. Upon satisfying the triggering conditions for initiating discovery of a UNR102, the UE104shall proceed to phase b) of the procedure for transitioning to NMO-R (i.e. the UE104shall initiate the UNR discovery procedure). UE Triggered Mechanisms to Switch to NMO-R Once the UE104initiates the NMO to NMO-R switching mechanism (i.e. phase b) of the transitioning to NMO-R), depending on the RRC state of the UE104in NMO, the UE104may need to execute different mechanisms to eventually complete the NMO to NMO-R switch. Details of the switching mechanisms depending on the UE's RRC state are discussed in further detail below. UE in RRC_CONNECTED State Two approaches for switching from NMO to NMO-R for a UE104in RRC_CONNECTED state are disclosed: Break-Before-Make (BBM) and Make-Before-Break (MBB). In BBM, the MCPTT service is re-established through the UNR102over the PC5 interface after the RRC connection has been released, so access to the MCPTT services through the eNB108over the Uu interface is interrupted. Using MBB, the MCPTT service is handed over from the eNB/Uu path to the relay/PC5 path before the RRC connection is released, so access to MCPTT services is uninterrupted. Depending on the UE104capability (i.e. on whether the UE104supports simultaneous PC5 and Uu bearers) and the criticality of the MCPTT service, a choice between the MBB and BBM procedures is made. This decision can be made by the UE104and signaled to the network, or the decision can be made at the network (e.g., based on the information provided by the UE104). “Break-Before-Make” (BBM): Using this first approach, depicted by event flow diagram400inFIG.4, the UE104is operating in NMO in the RRC_CONNECTED state. MCPTT service (i.e. data and control) is provided directly from the serving eNB108to the UE104over the Uu radio link. At step S401, the UE104detects that a condition to initiate UNR discovery exists. The UE104, at step S402, performs the ProSe Direct Discovery of UNRs102in communication range able to provide connectivity for the service the UE104is interested in and selects an appropriate relay102. ProSe Direct Discovery consists of a set of procedures used by ProSe enabled UEs or ProSe relays supporting Direct Discovery to detect and identify other ProSe-enabled UE(s) or ProSe relay(s) in their proximity, using E-UTRA direct radio signals via PC5. It should be noted that EPC-level Discovery (by which the Enhanced Packet Core determines the proximity of the UEs and informs them of their respective proximity) should be distinguished from ProSe Direct Discovery. 3GPP TS 23.303 specifies two discovery models, Model A and Model B. Model A (“I am here”) defines two roles for the ProSe-enabled UEs/ProSe relays that are participating in ProSe Direct Discovery: the Announcing UE announces certain information that could be used by UEs in proximity that have permission to discover, and the Monitoring UE monitors certain information of interest in proximity of announcing UEs. In this model, the announcing UE broadcasts discovery messages at pre-defined discovery intervals and the monitoring UEs that are interested in these messages read and process them.
Model B (“who is there?”/“are you there?”) defines two different roles for the ProSe-enabled UEs/ProSe relays that are participating in ProSe Direct Discovery: the Discoverer UE transmits a request containing certain information about what it is interested in discovering, and the Discoveree UE receives the request message and can respond with some information related to the discoverer's request. The following information may be used for ProSe UNR discovery and selection:
Message type identifier (e.g., identifying Model A or Model B discovery);
ProSe Relay (UE) ID: link layer identifier that is used for direct communication and is associated with a PDN connection the ProSe UNR has established;
PLMN ID: identifies the Public Land Mobile Network (PLMN) to which the radio frequencies used on the link to the Remote UE belong. If these radio frequencies are shared between multiple PLMNs, or not allocated to any PLMN, then the choice of PLMN ID is configured by the Home PLMN (HPLMN);
ProSe Application Relay Code: parameter identifying the connectivity the ProSe UNR provides (e.g., including Access Point Name (APN) information);
Whether the discovered UE can act as a relay (i.e. whether a UE that has been discovered can act as an UNR); and
Status/maintenance flags (e.g., indicating whether the relay is temporarily without connectivity or its battery is running low, so the Remote UEs can seek/reselect another Relay).
Returning now toFIG.4, in order to enable the exit from NMO, the UE104sends, at step S403, a NMO-R Preferred (i.e. a relay mode preference) indication to the network. This indication may, implicitly or explicitly, express a request for releasing the RRC connection. For example, the release of the RRC connection may be performed for a device not supporting concurrent transmission on Uu and PC5, hence unable to switch to NMO-R in RRC_CONNECTED, while this is not performed for a device capable of simultaneous transmission on Uu and PC5. On receipt of the NMO-R Preferred indication, at step S404, the network may determine that the RRC connection should be released. If the network determines, at step S404, that the RRC connection should be released, the network sends, at step S405, a RRC Connection Release message to the UE104. A new release cause value is set in the RRC Connection Release message in order to indicate to the UE104not to trigger the service request procedure and to keep the existing EPS bearers. Optionally, the network may also include, in the RRC Connection Release message, identities of any other target relays102that the network may deem appropriate. The network will know the approximate location of the UE104and may, for instance, be aware of UNRs102operating in the proximity of the UE104and indicate the UNRs'102identities for the UE104to discover. The UE104may use these UNR identities to perform a subsequent discovery step to find out if a more suitable UNR102may be found. These identities may be included in the RRC Release message or sent separately from the release message. Upon releasing the RRC connection, the eNB108may also initiate S1bearer release for the UE104. Alternatively, the eNB108may keep the corresponding S1bearers and redirect the user plane traffic to the UNR102. The network may choose to release the UE context at this point, although the UE104keeps the context of the PDN connection locally. The steps described below are independent of how the traffic is rerouted to the UNR102and whether or not the network releases the UE context.
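Returning briefly to the ProSe discovery information listed above, the following sketch models a Model B request/response exchange in Python; all field names, types and example values are hypothetical illustrations and do not reflect the encodings standardized by 3GPP.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RelayDiscoveryRequest:
    """Illustrative Model B ('who is there?') request from a Discoverer UE."""
    message_type: str = "MODEL_B_REQUEST"
    prose_app_relay_code: str = ""   # connectivity sought (e.g., APN info)

@dataclass
class RelayDiscoveryResponse:
    """Illustrative Discoveree answer carrying the elements listed above."""
    message_type: str = "MODEL_B_RESPONSE"
    prose_relay_ue_id: int = 0       # link layer ID tied to a PDN connection
    plmn_id: str = ""                # PLMN of the radio frequencies used
    prose_app_relay_code: str = ""   # connectivity the UNR provides
    can_act_as_relay: bool = False
    low_battery: bool = False        # status/maintenance flag
    has_connectivity: bool = True    # status/maintenance flag

def answer_discovery(req: RelayDiscoveryRequest,
                     me: RelayDiscoveryResponse) -> Optional[RelayDiscoveryResponse]:
    """A Discoveree responds only if it can relay the requested connectivity."""
    if me.can_act_as_relay and me.prose_app_relay_code == req.prose_app_relay_code:
        return me
    return None

# Example: a UE looking for MCPTT connectivity queries a nearby relay.
req = RelayDiscoveryRequest(prose_app_relay_code="mcptt-apn")
unr = RelayDiscoveryResponse(prose_relay_ue_id=0x1A2B, plmn_id="00101",
                             prose_app_relay_code="mcptt-apn",
                             can_act_as_relay=True)
print(answer_discovery(req, unr) is not None)  # True: the relay answers
```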
As noted above, the subsequent steps are independent of how the traffic is rerouted and of whether the UE104may be considered as attached or detached as far as the network is concerned. If the network does not release the RRC connection, the UE104remains in NMO and does not initiate the switch to NMO-R, until/unless a RLF is experienced and the UE104loses network Uu connectivity. The UE104performs, at step S406, procedures described inFIGS.5and/or7to switch to NMO-R, depending upon whether or not the UE104is currently in an established MCPTT session. During one-to-one connection establishment, the UE104may request the UNR102to relay the existing PDN connection(s), as shown inFIG.5. The UE104establishes a one-to-one connection with a UNR102capable of relaying the PDN connection(s) for the services to be carried over the PC5 interface and requests, at step S501, the UNR102to relay this (these) PDN connection(s). This request can be complemented by providing relevant UE context information to the UNR102. The relevant UE context information may indicate the PDN connection(s) and the related APNs. The UE context may also include the Quality of Service (QoS) and other parameters related to the EPS bearers used by the UE104while in NMO. The UNR102requests, at step S502, bearer resource modification or allocation from the network based on the information received from the UE in step S501by transmitting a Bearer Resource Allocation Request or Bearer Resource Modification Request to the network. In return the network may modify the already established EPS bearers between the UNR102and the eNB108(e.g., bearers serving the UNR's102own communication needs or bearers for relaying transmissions for other out of coverage UEs104) or allocate new dedicated EPS bearers. This step ensures that the Uu link between the UNR102and the eNB108can efficiently serve the out of coverage UE104. The UNR102assigns, at step S503, logical channel identities of PC5 bearers corresponding to the EPS bearers to be relayed. The UNR102maintains the following information per EPS bearer to be relayed for relaying operation over PC5:
a. L2 source address of the UE104;
b. IP address of the UE104assigned by the UNR102;
c. Identity of the EPS bearer(s) that the UE104requested;
d. Identity of the UNR's EPS bearer which is now associated with the EPS bearer identity the UE104requested (i.e. transporting the corresponding data); and
e. Sidelink logical channel identity assigned to the EPS bearers in c and d.
At step S504, the UNR102responds to the UE104with the sidelink logical channel identities corresponding to the EPS bearers. The UE104establishes the PC5 bearers and associates the logical channel identities with the corresponding EPS bearers. Further variants of the BBM approach presented above can be considered. For example, upon reception of the NMO-R Preferred indication in step S403, the network may elect to send a newly defined indication, Switch to NMO-R Deferred, instead of the RRC Connection Release in step S405, as a result of which the UE104remains in NMO and does not initiate step S406, until/unless a RLF is experienced. As a further alternative, instead of sending a Switch to NMO-R Deferred indication, the network may send the ProSe configuration applicable in the cell to the UE104to enable NMO-R operation. This option is applicable when the ProSe frequency belongs to the serving cell. This option assumes that the UNR102is also using the same ProSe configuration (e.g., since the UNR102is connected to the same eNB108or to an eNB108whose ProSe configuration is known to the serving eNB).
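For steps S501 through S504, the UNR's per-bearer bookkeeping in items a through e above might be sketched as follows; the class and field names, and the logical channel numbering, are illustrative assumptions only.

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class RelayedBearerEntry:
    """One row of the per-EPS-bearer state listed in items a through e."""
    ue_l2_source_address: int      # (a) L2 source address of the remote UE
    ue_ip_address: str             # (b) IP address assigned by the UNR
    requested_eps_bearer_id: int   # (c) EPS bearer the UE requested
    unr_eps_bearer_id: int         # (d) UNR's bearer now carrying that traffic
    sidelink_lcid: int             # (e) sidelink logical channel identity

class RelayBearerTable:
    """Hypothetical UNR-side table built during steps S501 to S504."""
    def __init__(self) -> None:
        self._entries: list[RelayedBearerEntry] = []
        self._next_lcid = count(start=4)  # illustrative LCID numbering

    def add(self, ue_l2: int, ue_ip: str,
            requested_ebi: int, unr_ebi: int) -> RelayedBearerEntry:
        entry = RelayedBearerEntry(ue_l2, ue_ip, requested_ebi,
                                   unr_ebi, next(self._next_lcid))
        self._entries.append(entry)
        return entry

    def lcid_for(self, ue_l2: int, requested_ebi: int) -> int:
        for e in self._entries:
            if (e.ue_l2_source_address, e.requested_eps_bearer_id) == (ue_l2, requested_ebi):
                return e.sidelink_lcid
        raise KeyError("no relayed bearer for this UE/EPS bearer pair")

table = RelayBearerTable()
entry = table.add(ue_l2=0x3C4D, ue_ip="10.0.0.7", requested_ebi=5, unr_ebi=8)
print(entry.sidelink_lcid)  # LCID returned to the UE in step S504
```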
In yet another BBM variant, the UE104starts a timer upon sending the NMO-R Preferred indication to the network at step S403. If the timer elapses before the UE104receives a RRC Connection Release or a Switch to NMO-R Deferred (variant mentioned above), the UE104initiates step S406if capabilities allow. Sending the NMO-R Preferred indication to the network may be left optional. By default, if the UE104switches to NMO-R without transmitting this indication, the eNB108may send the UE104to RRC_IDLE due to inactivity. “Make-Before-Break” (MBB): In the MBB approach, the UE104performs discovery of suitable relay(s), may interact with the network before proceeding to NMO-R establishment, then switches to NMO-R. This approach minimizes the service interruption time incurred during the establishment of NMO-R, as the UE104supports discovery of UNRs102and establishment of NMO-R whilst in RRC_CONNECTED state. Referring toFIG.6, the UE104detects, at step S601, a condition to initiate UNR discovery as described above (i.e. a trigger). The UE104performs, at step S602, the discovery of UNRs102in communication range able to provide connectivity for the service the UE104is interested in and selects an appropriate relay, as described above. The UE104may send, at step S603, a NMO-R Preferred indication to the network to make the network aware of the UE's intention to switch to NMO-R in the short term and to get the authorization from the network to perform the switch. The network may answer, at step S604, the NMO-R Preferred indication with a NMO-R Proceed indication to permit the UE104to proceed with the switch immediately. In some implementations, the NMO-R Proceed indication may include the ProSe configuration of the cell (this option is applicable when the ProSe frequency is owned by the serving cell, thereby enabling the UE104to adopt the signaled ProSe configuration for NMO-R operation). As an example, upon receiving the NMO-R Preferred indication, the network may include ProSe configuration parameters allowing the UE104to autonomously select resources from resource pools to transmit Sidelink Control and data or discovery messages (i.e. UE autonomous resource selection, also referred to as Mode 2 Direct Communication or Type 1 Discovery), thereby enabling the UE104to use the resources when out of coverage. If the network does not send such an indication, the UE104remains in NMO (which would result in a Break-Before-Make scenario). In another option, the network may include the details of target UNRs102(i.e. the ProSe layer 2 IDs of the target relays102) in the NMO-R Proceed indication. The target UNR information may help the UE104in performing a further discovery step to discover a more suitable UNR102if appropriate. It is noted that steps S603and/or S604may be optional. During the one-to-one connection establishment with the UNR102, at step S605, the UE104also performs the steps described inFIG.5. The UE104then operates in NMO-R while still in RRC_CONNECTED state. At step S606, the UE104indicates to the network that it has completed the switch to NMO-R by sending an NMO-R Entered indication. On receipt of the NMO-R Entered indication, the network may release, at step S607, the RRC connection and the UE104enters RRC_IDLE. A new release cause value may be set in the RRC Connection Release message in order to indicate not to trigger the service request procedure and to keep the existing EPS bearers. When the UE104establishes the corresponding PC5 bearers, these PC5 bearers are associated with the EPS bearers.
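To summarize the MBB handshake of steps S601 through S607, together with the guard-timer variant discussed below, the following sketch models the UE-side state transitions; the state names, event strings and transition logic are illustrative assumptions rather than specified behavior.

```python
from enum import Enum, auto

class State(Enum):
    NMO = auto()
    AWAITING_ANSWER = auto()   # NMO-R Preferred sent, guard timer running
    NMO_R_CONNECTED = auto()   # switched, still RRC_CONNECTED
    NMO_R_IDLE = auto()        # switched, RRC connection released

def next_state(state: State, event: str) -> State:
    """UE-side transitions for the MBB switch; events are illustrative."""
    if state is State.NMO and event == "trigger_detected":
        return State.AWAITING_ANSWER         # S601-S603: discover, send Preferred
    if state is State.AWAITING_ANSWER:
        if event in ("nmo_r_proceed", "guard_timer_expired"):
            return State.NMO_R_CONNECTED     # S605-S606: establish PC5, send Entered
        if event == "switch_deferred":
            return State.NMO                 # remain in NMO unless RLF occurs
    if state is State.NMO_R_CONNECTED and event == "rrc_connection_release":
        return State.NMO_R_IDLE              # S607
    return state

# Example: the network stays silent, so the guard timer drives the switch.
s = next_state(State.NMO, "trigger_detected")
s = next_state(s, "guard_timer_expired")
print(s.name)  # NMO_R_CONNECTED
```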
If the UE104does not inform the network of the switch, or if the network does not release the RRC connection, the UE104remains in RRC_CONNECTED until/unless a RLF is experienced and the UE104loses network Uu connectivity. Further variants of the Make-Before-Break approach presented above can be considered. For example, upon reception of the NMO-R Preferred indication at step S603, the network may elect to send a newly defined indication, Switch to NMO-R Deferred, as a result of which the UE104remains in NMO and does not initiate the switch to NMO-R, until/unless a RLF is experienced. Additionally, the UE104may start a timer at the sending of the NMO-R Preferred indication to the network, at step S603. If the timer elapses before the UE104receives a NMO-R Proceed or a Switch to NMO-R Deferred, the UE104initiates the switch to NMO-R. Choice Between Break-Before-Make and Make-Before-Break UE capabilities may be considered in the choice between BBM and MBB. A UE104that can support NMO-R whilst in RRC_CONNECTED state can adopt a Make-Before-Break approach (i.e. according toFIG.6) whereas a UE104that is not capable of supporting NMO-R in RRC_CONNECTED state will employ the Break-Before-Make approach (according toFIG.4). For UEs104that can support both MBB and BBM, the choice between Break-Before-Make and Make-Before-Break approaches may further depend on the criticality of the service in use (e.g., on the priority of the MCPTT group call in which the user is involved), whether the UE104“has the floor” and is engaged in uplink transmission, the ProSe configuration, or other QoS related criteria. Typically, Make-Before-Break should be used in case of high priority or delay-sensitive communications, or if the MCPTT user is the current talker. Although it is assumed that discovery of UNRs102may be performed in parallel to NMO, discovery may incur a power penalty as highlighted above. When a BBM strategy is chosen, the UE104can defer discovery until the UE104effectively loses network coverage (i.e. the NMO leg is broken). This further minimizes the number of discovery attempts and may be appropriate for delay tolerant bearers where BBM is selected. If the ProSe resources to enable NMO-R are available for the UE104only in one of the RRC states (e.g., only in RRC_CONNECTED state, i.e. operating scheduled resource allocation only) then the eNB108may keep the UE104in RRC_CONNECTED state. On the other hand, if the ProSe resources are also available for the UE104in RRC_IDLE state (i.e. UE autonomous resource selection is applicable) then the eNB108may choose to send the UE104to RRC_IDLE state depending on other criteria as mentioned above. Note that availability of pre-allocated ProSe resources in RRC_IDLE state may be helpful for the UE104to be able to receive the service when the UE104is totally out of coverage. The above choice between the two approaches may be made at the UE104or at the network, or may be a cooperative decision between the UE104and the network based on some interaction between them. For instance, the UE104may select a preference for one of the above approaches (i.e. Make-Before-Break or Break-Before-Make) and may indicate this preference to the network using the NMO-R Preferred message. The network may then consider the preference/choice indicated by the UE104along with other criteria for deciding between the approaches as mentioned above. Upon deciding on an approach, the chosen approach is then executed as perFIG.4orFIG.6.
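The selection criteria just described can be restated as a small decision function. The sketch below is one hedged reading of them; the field names, the ordering of the checks and the defaults are assumptions, and an actual implementation could weigh the criteria differently.

```python
from dataclasses import dataclass

@dataclass
class SwitchContext:
    supports_simultaneous_uu_pc5: bool  # MBB capability
    high_priority_call: bool            # e.g., priority of the MCPTT group call
    ue_has_floor: bool                  # UE is the current talker
    delay_tolerant_bearers: bool

def choose_switch_strategy(ctx: SwitchContext) -> str:
    """Illustrative restatement of the BBM/MBB selection criteria above."""
    if not ctx.supports_simultaneous_uu_pc5:
        return "BBM"                    # MBB is simply not available
    if ctx.high_priority_call or ctx.ue_has_floor:
        return "MBB"                    # minimize interruption for the talker
    if ctx.delay_tolerant_bearers:
        return "BBM"                    # interruption acceptable; saves power
    return "MBB"

print(choose_switch_strategy(SwitchContext(True, False, True, False)))  # MBB
```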
In particular, upon receiving an indication of preference for NMO-R, the network responds by sending a RRC Connection Release message to a UE104only supporting BBM and may keep the UE104in RRC_CONNECTED if the UE104supports MBB. If the UE104supports MBB, the network may further decide to still send a RRC Connection Release message to the UE104if deemed appropriate based on the criticality of the active NMO bearers (e.g., in case of delay tolerant NMO bearers, the network may choose to release the RRC connection; this option will be useful, for instance, when the network is congested and releasing the UE104earlier would help relieve the congestion or reduce interference in the network, etc.). UE Preference Indication Any of the following may be used for indicating the UE preference to switch to NMO-R (i.e. the NMO-R Preferred or NMO-R Entered indications in the above figures):
An RRC message defined to convey this information;
By indicating UE preference in ProSe related signalling (i.e. within the ProSeUEInformation RRC indication); as an example, a new cause code could be included in the ProSeUEInformation indication to indicate to the network that NMO-R is preferred;
By sending a detach request message. The UE104may also include an indication indicating to the network that the detach request is due to a preference to switch to NMO-R. In this case, even though the UE104sends a detach request message to the network, the UE104still keeps the corresponding UE context and switches the Uu bearers to the corresponding PC-5 bearers once NMO-R is activated. Hence, from the network perspective, the UE104may be considered as in “detached” state whilst the UE104may store part or all of the UE context information. Alternatively, the network may also keep the UE context, in other words, consider the UE104to be in attached state; the network will adopt this different behavior of retaining the UE context of a UE sending a “detach” message based on the cause code indicated for the detach (i.e. a cause code indicating that the UE104is requesting detach in order to enter NMO-R mode);
By using a new MAC control element (MAC CE);
By using an indication for a power optimised configuration on the network interface (e.g., by using the UE assistance information message); or
By including a new information element or indicator indicating the preference for the switch to NMO-R in any of the messages mentioned above.
UE in RRC_IDLE State A UE104in RRC_IDLE state may be receiving the MCPTT service (e.g., via eMBMS). In this case, the UE104may autonomously switch to NMO-R upon detecting suitable trigger conditions for such a switch, as depicted inFIG.7. The UE104in RRC_IDLE detects, at step S701, that a condition to switch from NMO to NMO-R has been triggered as described above (e.g., radio link degradation, service quality degradation, etc.). The UE104performs, at step S702, the discovery of UNRs102in communication range able to provide connectivity for the service in which the UE104is interested and selects an appropriate relay102. The UE104performs, in step S703, the operations for the establishment of NMO-R according to the processes described inFIG.4. Network Triggered/Assisted Switch to NMO-R In this instance, the UE104is assumed to be in RRC_CONNECTED state in NMO. The network facilitates the UE104in performing a switch to NMO-R. In one scenario, the network knows the UE capabilities and also its coverage situation (e.g., based on the measurement reports sent by the UE104).
As in the UE-triggered switch to NMO-R, both Make-Before-Break and Break-Before-Make approaches are feasible. When the UE104is approaching the edge of coverage (i.e. the triggering conditions are satisfied as described above), the eNB108instructs the UE104to start looking for a UNR102in proximity. This scenario is depicted inFIG.8, which shows the Make-Before-Break case in which the UE104is capable of supporting NMO-R whilst in RRC_CONNECTED. The eNB108detects, at step S801, that triggering conditions for initiating UNR discovery have been met as described above. The eNB108sends, at step S802, a Relay discovery command message to the UE104. Alternatively, this command may be an enhanced measurement configuration message. The UE104performs, at step S803, a UNR discovery procedure. The UE104may optionally obtain, via this procedure, the cell-related identifiers (e.g., C-RNTI) of the UNRs102discovered. The cell-related identifiers of the discovered UNRs102may be used by the eNB108in further steps of the procedure. The UE104reports, at step S804, information about one or more UNRs102discovered to the eNB108(e.g., the received signal power and quality measurements, the L2 source address, the battery level and the available processing power, the UNR's serving cell identifier, etc.). This information may be included in a measurement report; however, a new message may be defined (e.g., a Relay discovery response message). The UE104may also provide additional information, such as its own location, either included in the above message or in addition to the above messages, to help the eNB108find and configure UNRs102in the geographical area where the UE104is located. The eNB108, at step S805, selects one of the discovered relays102and, with a NMO-R mode command, instructs the UE104to establish one-to-one Sidelink communication (i.e. over PC5) with the selected relay102if the UE104supports simultaneous Sidelink and Uu communications. The indication may be conveyed in a RRC connection reconfiguration message. The UE104is aware of the existing logical channels over Uu and their QoS parameters, e.g., logical channel priority and bit rates served by the eNB108. The UE104may establish the same number of Sidelink logical channels over the PC5 interface with similar QoS parameters as for the logical channels used over the Uu interface. Alternatively, the eNB108may instruct the UNR102to establish a one-to-one Sidelink communication with the UE104. The eNB108is aware of the established logical channels over the Uu interface. The information about the logical channels may be conveyed to the UNR102to request the establishment of the same number of logical channels with similar QoS characteristics. The UE104, at step S806, establishes a one-to-one Sidelink communication with the UNR102as described above. The UE104informs, at step S807, the eNB108that the UE104has successfully established the Sidelink communication with a NMO-R Entered indication. This indication may be a RRC connection reconfiguration complete message. The network decides, at step S808, whether the RRC connection needs to be maintained. The network may, at step S809, send a RRC Connection Release to the UE104to instruct the UE104to enter RRC_IDLE state. Upon transition to RRC_IDLE, the UE104switches from the logical channels over Uu to those over PC5.
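For illustration, the eNB-side relay selection of step S805, based on the step S804 report contents, might resemble the following sketch; the report fields and the scoring rule are hypothetical, with the one constraint (taken from the text below) that relays within the network's own coverage may be prioritized.

```python
from dataclasses import dataclass

@dataclass
class UnrReport:
    """Illustrative per-relay content of the step S804 report."""
    l2_source_address: int
    rsrp_dbm: float          # received signal power measured by the UE
    battery_pct: float
    in_enb_coverage: bool    # derived from the UNR's serving cell identifier

def select_unr(reports: list[UnrReport]) -> UnrReport:
    """Pick a relay, preferring in-coverage relays; the weighting of
    signal power against battery level is purely illustrative."""
    def score(r: UnrReport) -> tuple:
        return (r.in_enb_coverage, r.rsrp_dbm + 0.1 * r.battery_pct)
    return max(reports, key=score)

reports = [
    UnrReport(0xA1, rsrp_dbm=-95.0, battery_pct=80.0, in_enb_coverage=True),
    UnrReport(0xB2, rsrp_dbm=-90.0, battery_pct=20.0, in_enb_coverage=False),
]
print(hex(select_unr(reports).l2_source_address))  # 0xa1: in-coverage preferred
```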
If the UE104is unable/not capable of supporting NMO-R in RRC_CONNECTED state, one alternative is to adopt the Break-Before-Make strategy (other reasons for a choice between Make-Before-Break and Break-Before-Make as described previously are also applicable in making this decision). In this case, the UE104receives an RRC connection release message prior to the switch to NMO-R. This procedure is depicted inFIG.9. Steps S901through S904are substantially similar to steps S801through S804inFIG.8as described above. In step S905, the eNB108selects one UNR102and instructs the UE104to establish a one-to-one Sidelink communication with the selected UNR102. The instruction may be conveyed, in step S906, in the RRC Connection Release if the UE104does not support simultaneous Uu and Sidelink communications. The UE104establishes one-to-one Sidelink communication with the UNR102as described above, starting NMO-R operation. In order to facilitate the above scenarios, one or more new indications from the eNB108to the UE104may be defined. Specifically, to trigger discovery of relays102at the UE104, an indication referred to as Relay discovery command (see e.g.,FIG.8, step S802) may be sent by the eNB108to the UE104. The eNB108may configure one or more UEs in relay mode prior to sending this message to the UE104if the eNB108is aware that there are no potential relays close to the UE104. The UE104starts relay discovery upon receiving this indication (see e.g.,FIG.8, step S803). To trigger the UE104to switch to NMO-R, an indication referred to as NMO-R mode command may be sent by the eNB108to the UE104(see e.g.,FIG.8, step S805). The UE104establishes NMO-R upon receiving this indication. This command can include the identity of the relay102with which the UE104should associate. Any UNR identity, such as the C-RNTI of the relay UE102or the ProSe UE ID (i.e. the source Layer-2 ID) of the UNR102, may be used for this purpose. The UE104may confirm the completion of an NMO-R switch to the eNB108by sending an indication referred to as NMO-R Entered (see e.g.,FIG.8, step S807). The eNB108may initiate mechanisms to consolidate and potentially release the RRC connection of the UE104(e.g., when no other service configured to use the Uu interface is active) upon receiving this indication (see e.g.,FIG.8, step S808). Any of the above indications may be included in an existing or a new RRC message, or may be conveyed via a new MAC Control Element. Further, the measurement report message, defined by 3GPP TS 36.331, may be enhanced to also indicate the discovered relays (see e.g.,FIG.8, step S804). The Source Layer-2 ID (ProSe UE ID), defined by 3GPP TS 23.303, or the C-RNTI of the UNR102can be included in the measurement report message for this purpose. The network may select one of the reported UNRs102as the preferred candidate for connecting the UE104and can indicate this in the NMO-R mode command (see e.g.,FIG.8, step S805). Alternatively, the network may indicate a subset of relays, or a ranked list of relays, in the NMO-R mode command. The network may prioritize the relays102within its own coverage over the relays102that are not in its coverage. As a further option, the eNB108may interrogate one or more UNRs102about their capacity to support an additional incoming UE104. This information may be helpful for load balancing purposes between the UNRs102. Communication between the eNB108and the UNRs102may be as shown inFIG.10.
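The three new indications just introduced could be represented, purely for illustration, as follows; the enumeration and the payload layout are assumptions and not a proposed encoding.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class IndicationType(Enum):
    """The new indications discussed above."""
    RELAY_DISCOVERY_COMMAND = auto()  # eNB asks the UE to start discovery
    NMO_R_MODE_COMMAND = auto()       # eNB asks the UE to switch to NMO-R
    NMO_R_ENTERED = auto()            # UE confirms the completed switch

@dataclass
class Indication:
    kind: IndicationType
    # For the NMO-R mode command: the selected relay, or a ranked list.
    relay_ids: list[int] = field(default_factory=list)

def make_mode_command(ranked_relays: list[int]) -> Indication:
    return Indication(IndicationType.NMO_R_MODE_COMMAND, ranked_relays)

cmd = make_mode_command([0xA1, 0xC3])  # preferred candidate first
print(cmd.kind.name, [hex(r) for r in cmd.relay_ids])
```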
Referring toFIG.10, the Incoming UE request message in step S1001may include the UE identifier of the potential incoming UE104. If there is more than one possible UNR102under the eNB control, the eNB108may select one UNR102or prepare more than one UNR102for the incoming UE104by providing the potential UNRs102with the UE identifier. This UE identifier can be the ProSe UE ID of the UE104or any other identity by which the UNR102can identify the incoming UE104on the PC5 link. The UE ID of the incoming UE104may be used by the relay102to establish the Sidelink connection. As a response to this message, the UNR102may send the Incoming UE response message, at step S1002. This message can include parameters that the eNB108can use to select a UNR102among multiple relay candidates. Examples of these parameters are supported applications/application IDs, battery status, mobility status, position/geographical location in the cell, load (e.g., number of out of coverage UEs currently associated with the UNR102, or relative load percentage), number of MCPTT groups which the relay forwards, an explicit indication to reject the additional incoming UE104, etc. Based on these parameters, the eNB108can select an appropriate UNR102and include the identity (ProSe UE ID) of the selected relay102in the NMO-R mode command transmitted to the UE104(see e.g.,FIG.8, step S805). Alternatively, the Incoming UE request may be utilized by the eNB108to instruct the UNR102to establish a one-to-one connection with the UE104. In this case, the request may include the UE's L2 source address and the logical channels to be established over Sidelink. NMO-R to NMO Switch A UE104in NMO-R mode of operation may move into network coverage where NMO mode of operation is potentially available. NMO can be available when the PDN connectivity to the service can be provided by the network. NMO availability can be determined by the UE104based on the system information of the network (i.e. availability of MBMS sessions that the UE104is interested in, etc.). In this case, two approaches are disclosed:
1) The UE switches to NMO upon detecting network coverage supporting NMO.
2) The UE stays in NMO-R until a switch to NMO is deemed necessary based on triggering criteria, such as operation in NMO-R deteriorating.
The UE104may be preconfigured to choose between these approaches (e.g., configured in the UICC or via explicit signaling from the network). Alternatively, one specific behavior may be enforced by the standards. The behavior may also be based on the capability of the UE104(i.e. whether or not the UE104can support NMO-R whilst in RRC_CONNECTED). UE Always Switches to NMO Upon Finding Coverage In this instance, an out of coverage UE104operating in NMO-R always switches to NMO upon moving into network coverage. Thus, while out of coverage, the UE104performs cell search (as per standardized cell selection/reselection algorithms) and selects a suitable cell when one is found (see 3GPP TS 36.304). Upon selecting the cell, the UE104may enter connected mode and update its registration with the MCPTT server106via the network. If the UE104supports MBB, the UE104may initiate establishment of NMO and, upon successful registration with the MCPTT server106and resumption of access to the service, detach from the UNR102and switch to NMO. This process is depicted inFIG.11. The UE104operating in NMO-R through the UNR102enters network coverage and selects a suitable cell, at step S1101.
The UE104establishes, at step S1102, an RRC connection in order to get the MCPTT service in the serving cell. The UE104accesses, at step S1103, the MCPTT service using IMS/SIP procedures after mutual authentication and establishment of a secure association (SA-R) between the UE104and the MCPTT server106. If needed (i.e. if not prevented by the MCPTT server106), the UE104may suppress duplicate information that could be temporarily received from the relay102and from the network. The UE104sends, at step S1104, a Sidelink Disconnect indication to the UNR102to stop the relay transferring MCPTT information for this UE104. The UNR102stops transmissions directed towards the UE104, at step S1105. The UNR102may answer the Sidelink Disconnect indication by a Sidelink Disconnect Ack, at step S1106. If there are no more UEs104employing relaying operation by the UNR102, then the UNR102may cease its relaying activity, at step S1107, and send a UNR Mode Stop indication to the serving eNB108. It should be noted that the serving eNB of the UNR102may or may not be the same one as the serving eNB for the UE104. As an example, the UNR Mode Stop indication may be included in an RRC message, such as the ProSe Interest Indication, indicating that the UNR102is no longer interested in ProSe. The procedure discussed with respect toFIG.11works when the UE104can support NMO-R while in RRC_CONNECTED state. However, if the UE104is not capable of this, the UE104may switch to NMO after disconnecting from the UNR102. This choice of using Break-Before-Make or Make-Before-Break may involve other considerations as mentioned above. The “Break-Before-Make” option is depicted inFIG.12. The UE104operating in NMO-R through the UNR102enters network coverage, at step S1201, and selects a suitable cell. The UE104sends, at step S1202, a Sidelink Disconnect indication to the UNR102to stop the relay transferring MCPTT information for this UE104. The UNR102stops transmissions directed towards the UE104, at step S1203. The UNR102may answer the Sidelink Disconnect indication, at step S1204, by a Sidelink Disconnect Ack. The UE104establishes, at step S1205, an RRC connection in order to get the MCPTT service in the serving cell. This procedure involves establishment, at step S1205a, of the EPS bearers corresponding to the services that the UE104is receiving over the PC-5 link. A Non-Access Stratum (NAS) layer in the UE104triggers, at step S1205b, the service request procedure to establish the needed EPS bearers corresponding to the bearers over which the UE104receives the service when in NMO-R. The service request message may be forwarded by the eNB108to the MME to configure appropriate EPS bearers for the UE104. The eNB108responds by sending an RRC configuration, at step S1205c, to the UE104; this RRC configuration includes the configuration of the EPS bearers and DRBs to serve the UE104in NMO. The UE104, at step S1205d, associates the application data flows corresponding to the PC-5 bearers with the established Uu bearers. The UE104accesses, at step S1206, the MCPTT service using IMS/SIP procedures after mutual authentication and establishment of a secure association (SA-R) between the UE104and the MCPTT server106. Upon successfully establishing the Uu bearers, the application data flows may be switched to the established Uu bearers. If there are no more UEs104employing relaying operation, the UNR102may stop its relaying activity, at step S1207, and send a UNR Mode Stop indication to its serving eNB108.
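Step S1205d amounts to re-pointing each application data flow from its PC-5 bearer to the corresponding Uu bearer. A minimal sketch follows, assuming a mapping derived from the service request procedure; the naming scheme for bearers is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AppFlow:
    name: str
    bearer: str   # e.g., "PC5:lcid=4" or "Uu:ebi=5"

def switch_flows_to_uu(flows: list[AppFlow], pc5_to_uu: dict[str, str]) -> None:
    """Re-associate application data flows from PC-5 bearers to the newly
    established Uu bearers, as in step S1205d; flows without a mapping
    are left untouched."""
    for flow in flows:
        if flow.bearer in pc5_to_uu:
            flow.bearer = pc5_to_uu[flow.bearer]

flows = [AppFlow("mcptt-voice", "PC5:lcid=4"),
         AppFlow("mcptt-signalling", "PC5:lcid=5")]
switch_flows_to_uu(flows, {"PC5:lcid=4": "Uu:ebi=5", "PC5:lcid=5": "Uu:ebi=6"})
print([f.bearer for f in flows])  # ['Uu:ebi=5', 'Uu:ebi=6']
```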
As noted earlier, the serving eNB of the UNR102may not be the same eNB as the serving eNB of the UE104. UE Conditionally Switches to NMO (e.g., NMO-R Service Deteriorating, Explicit Signaling from Network or the UNR, Etc.) Triggering Conditions for Switching to NMO Mode In this instance, the out of coverage UE104does not switch to NMO automatically or unconditionally upon finding network coverage. Instead, the UE104continues operating in NMO-R until a trigger causes the UE104to switch to NMO. Examples of such conditions include radio link degradation on the PC5 interface, which may include degradation of the PC5 link quality or loss of synchronization on the PC5 link, etc. Another condition may be service quality degradation, where an application detects that the quality of the received service has degraded below a predetermined threshold. For instance, this service quality degradation may include the detection of a predetermined number or percentage of missed/un-decoded voice frames or video frames, determination that the residual bit error rate on the application packets has exceeded a predetermined threshold, etc. Another condition may be the service becoming unavailable, such that the UNR102no longer supports the service the UE104is interested in (e.g., due to lack of PC5 resources, etc.). Other UNR related parameters indicating deterioration could also trigger switching, such as a low battery level reported by the UNR102or other explicit messages received from the UNR102necessitating a switch to NMO. Examples of such explicit messages may include commands indicating UNR mode termination of the relay or that the capacity of the relay has been exceeded, etc. Switching to NMO The flowchart1300ofFIG.13illustrates an example procedure for triggering the switch to NMO. The UE104begins, at block S1301, in NMO-R. If a suitable eNB108capable of supporting the UE104during NMO operation is found, at block S1302, and the trigger conditions for the switch to NMO are met, at block S1303, the UE104initiates, at block S1304, mechanisms to switch from NMO-R to NMO. The procedure for the UE104to switch to NMO mode of operation is detailed inFIG.14and is somewhat similar to the procedures depicted inFIGS.11and12. The UE104operating in NMO-R through the UNR102enters network coverage, at step S1401, and selects a suitable cell. The UE104determines, at step S1402, that conditions to switch to NMO are satisfied as described above. The UE104establishes, at step S1403, an RRC connection in order to get the MCPTT service in the serving cell (if the UE104has not already switched to RRC_CONNECTED for other reasons, such as being paged by the network for a mobile terminating session, etc.) and establishes the PDN connection(s) employed by the services carried over the PC5 interface. The service request procedure is initiated by a NAS layer to establish the necessary EPS bearers for NMO. The UE104accesses, at step S1404, the MCPTT service using IMS/SIP procedures after mutual authentication and establishment of a secure association (SA-R) between the UE104and the MCPTT server106. Upon successfully establishing the Uu bearers, the application data flows may be switched to the established Uu bearers. If needed (e.g., if not prevented by the MCPTT server106), the UE104may suppress duplicate information that could be temporarily received from the relay and from the network. The UE104sends, at step S1405, a Sidelink Disconnect indication to the UNR102to stop the relay transferring MCPTT information for this UE104.
At step S1406, the UNR102stops transmissions directed towards the UE104. The UNR102may answer, at step S1407, the Sidelink Disconnect indication by a Sidelink Disconnect Ack. If there are no more UEs104employing relaying operation, the UNR102may stop its relaying activity, at step S1408, and send a UNR Mode Stop indication to the serving eNB108. Again, if the UE104is not capable of supporting NMO-R whilst in RRC_CONNECTED state, then a Break-Before-Make solution would be used, as depicted inFIG.15. The UE104, operating in NMO-R through the UNR102, enters network coverage and selects a suitable cell, at step S1501. The UE104determines, at step S1502, that conditions to switch to NMO are satisfied as described above. The UE104sends, at step S1503, a Sidelink Disconnect indication to the UNR102to stop the relay transferring MCPTT information for this UE104. The UNR102stops transmissions directed towards the UE104, at step S1504. The UNR102may answer, at step S1505, the Sidelink Disconnect indication by a Sidelink Disconnect Ack. If there are no more UEs104employing relaying operation, the UNR102may stop its relaying activity, at step S1506, and send a UNR Mode Stop indication to the serving eNB108. The UE104establishes an RRC connection in step S1507in order to get the MCPTT service in the serving cell (if the UE has not previously switched to RRC_CONNECTED for other reasons) and establishes the PDN connection(s) required by the services provided over PC5. The UE104accesses the MCPTT service, at step S1508, using IMS/SIP procedures after mutual authentication and establishment of a secure association (SA-R) between the UE104and the MCPTT server106. The Equipment A block diagram of an example of a wireless communication device1600(such as the UE104and the UNR102) is shown inFIG.16. The wireless communication device1600includes multiple components, such as a processor1602that controls the overall operation of the wireless communication device. Communication functions, including data and voice communications, are performed through a communication subsystem1604. The communication subsystem1604may include a plurality of receivers and transmitters operating on one or more frequencies to allow simultaneous connection to two or more different entities. For UEs having MBB capabilities, at least two receivers and two transmitters may be employed. Data received by the wireless communication device is decompressed and decrypted by a decoder1606. The communication subsystem1604receives messages from and sends messages to a wireless network1650. The wireless network1650may be any type of wireless network, including, but not limited to, data wireless networks, voice wireless networks, and networks that support both voice and data communications. A power source1642, such as one or more rechargeable batteries or a port to an external power supply, powers the wireless communication device1600. The processor1602interacts with other components, such as Random Access Memory (RAM)1608, memory1610, a display1612(which may be a touch-sensitive display), one or more actuators1620, an auxiliary input/output (I/O) subsystem1624, a data port1626, a speaker1628, a microphone1630, short-range communications1632, and other device subsystems1634. User-interaction with a graphical user interface is performed through the touch-sensitive display1612.
Information, such as text, characters, symbols, images, icons, and other items that may be displayed or rendered on a portable electronic device, is displayed on the touch-sensitive display1612via the processor1602. The processor1602may interact with an accelerometer1636that may be utilized to detect direction of gravitational forces or gravity-induced reaction forces. To identify a subscriber for network access, the wireless communication device1600uses a UICC such as a Subscriber Identity Module or a Removable User Identity Module (SIM/RUIM) card1638for communication with a network, such as the wireless network1650. Alternatively, user identification information may be programmed into memory1610. The wireless communication device1600includes an operating system1646and software programs or components1648, such as the MCPTT application1644, that are executed by the processor1602and are typically stored in a persistent, updatable store such as the memory1610. Additional applications or programs may be loaded onto the wireless communication device102,104through the wireless network1650, the auxiliary I/O subsystem1624, the data port1626, the short-range communications subsystem1632, or any other suitable subsystem1634. A received signal such as a text message, an e-mail message, instant message or web page download is processed by the communication subsystem1604and input to the processor1602. The processor1602processes the received signal for output to the display1612and/or to the auxiliary I/O subsystem1624. A subscriber may generate data items, for example e-mail messages, which may be transmitted over the wireless network1650through the communication subsystem1604. For voice communications, the overall operation of wireless communication device102,104is similar. The speaker1628outputs audible information converted from electrical signals, and the microphone1630converts audible information into electrical signals for processing. The touch-sensitive display1612may be any suitable touch-sensitive display, such as a capacitive, resistive, infrared, surface acoustic wave (SAW) touch-sensitive display, strain gauge, optical imaging, dispersive signal technology, acoustic pulse recognition, and so forth, as known in the art. A capacitive touch-sensitive display includes a capacitive touch-sensitive overlay. The overlay may be an assembly of multiple layers in a stack including, for example, a substrate, a ground shield layer, a barrier layer, one or more capacitive touch sensor layers separated by a substrate or other barrier, and a cover. The capacitive touch sensor layers may be any suitable material, such as patterned indium tin oxide (ITO). One or more actuators1620may be depressed or activated by applying sufficient force to the actuators1620to overcome the actuation force of the actuator. The actuator(s)1620may provide input to the processor1602when actuated. Actuation of the actuator(s)1620may result in provision of tactile feedback. Turning now toFIG.17, a block diagram of an example eNB108is provided. The eNB108includes at least one processor1702that controls the overall operation of the eNB108. Wired communication subsystem1704allows the eNB108to interact with various other devices, such as servers (e.g., an MCPTT application server), routers, gateways, etc., via a wired network such as the Internet. Wireless communication functions, including data and voice communications, are performed through a wireless communication subsystem1706. 
The eNB108includes memory1708storing computer-readable instructions for an operating system1710, data1712and software programs or components1714that are executed by the processor1702. It should be noted that other typical functionality and components of an eNB108are not shown here for simplicity and brevity. Aspects of the present disclosure may be embodied as a device or apparatus, system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware-based embodiment, an entirely software-based embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) may include the following tangible media: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. Non-tangible or transitory media may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. Computer program code or instructions for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language and conventional procedural programming languages. The program code may execute on one or more devices such as a computer and/or server. Aspects of the present disclosure have been described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. In this regard, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments. However, it should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented wholly or partially by special purpose hardware-based systems that perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions. Furthermore, it will also be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented wholly or partially by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. Finally, the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. That is, the description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the form disclosed. Many modifications and variations will be apparent without departing from the scope of the disclosure defined in the appended claims.
67,455
11943677
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one aspect may be beneficially utilized in other aspects without specific recitation. DETAILED DESCRIPTION Aspects of the present disclosure provide apparatus, methods, processing systems, and computer readable mediums for new radio (NR) (new radio access technology or 5G technology). NR may support various wireless communication services, such as enhanced mobile broadband (eMBB) targeting wide bandwidth (e.g. 80 MHz and beyond), millimeter wave (mmW) targeting high carrier frequency (e.g. 60 GHz), massive MTC (mMTC) targeting non-backward compatible MTC techniques, and/or mission critical targeting ultra-reliable low latency communications (URLLC). These services may include latency and reliability requirements. These services may also have different transmission time intervals (TTI) to meet respective quality of service (QoS) requirements. In addition, these services may co-exist in the same subframe. Aspects of the present disclosure relate to beam recovery and radio link failure (RLF) in communications systems using beamforming. According to aspects of the present disclosure, a NodeB (NB) may be communicating with a UE via a transmit beam and a receive beam of an active beam pair. The NB (an example of a base station) may indicate to the UE one or more alternative beams for the UE to use to send one or more beam recovery messages to the NB in the event the transmit beam and the receive beam become misaligned. If the transmit beam and the receive beam become misaligned, the UE may select one or more of the alternative beams to use to send a beam recovery message to the NB. The following description provides examples, and is not limiting of the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For instance, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. The techniques described herein may be used for various wireless communication networks such as LTE, CDMA, TDMA, FDMA, OFDMA, SC-FDMA and other networks. The terms “network” and “system” are often used interchangeably. A CDMA network may implement a radio technology such as Universal Terrestrial Radio Access (UTRA), cdma2000, etc. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA.
cdma2000 covers IS-2000, IS-95 and IS-856 standards. A TDMA network may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA network may implement a radio technology such as NR (e.g. 5G RA), Evolved UTRA (E-UTRA), Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDMA, etc. UTRA and E-UTRA are part of the Universal Mobile Telecommunication System (UMTS). NR is an emerging wireless communications technology under development in conjunction with the 5G Technology Forum (5GTF). 3GPP Long Term Evolution (LTE) and LTE-Advanced (LTE-A) are releases of UMTS that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A and GSM are described in documents from an organization named “3rd Generation Partnership Project” (3GPP). cdma2000 and UMB are described in documents from an organization named “3rd Generation Partnership Project 2” (3GPP2). The techniques described herein may be used for the wireless networks and radio technologies mentioned above as well as other wireless networks and radio technologies. For clarity, while aspects may be described herein using terminology commonly associated with 3G and/or 4G wireless technologies, aspects of the present disclosure can be applied in other generation-based communication systems, such as 5G and later, including NR technologies. Example Wireless Communications System FIG.1illustrates an example wireless network100, such as a new radio (NR) or 5G network, in which aspects of the present disclosure may be performed, for example, for enabling connectivity sessions and internet protocol (IP) establishment, as described in greater detail below. As illustrated inFIG.1, the wireless network100may include a number of BSs110and other network entities. A BS may be a station that communicates with UEs. Each BS110may provide communication coverage for a particular geographic area. In 3GPP, the term “cell” can refer to a coverage area of a Node B and/or a Node B subsystem serving this coverage area, depending on the context in which the term is used. In NR systems, the terms “cell” and eNB, Node B, 5G NB, AP, NR BS, or TRP may be interchangeable. In some examples, a cell may not necessarily be stationary, and the geographic area of the cell may move according to the location of a mobile base station. In some examples, the base stations may be interconnected to one another and/or to one or more other base stations or network nodes (not shown) in the wireless network100through various types of backhaul interfaces, such as a direct physical connection, a virtual network, or the like, using any suitable transport network. In general, any number of wireless networks may be deployed in a given geographic area. Each wireless network may support a particular radio access technology (RAT) and may operate on one or more frequencies. A RAT may also be referred to as a radio technology, an air interface, etc. A frequency may also be referred to as a carrier, a frequency channel, etc. Each frequency may support a single RAT in a given geographic area in order to avoid interference between wireless networks of different RATs. In some cases, NR or 5G RAT networks may be deployed. A BS may provide communication coverage for a macro cell, a pico cell, a femto cell, and/or other types of cell. A macro cell may cover a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscription.
A pico cell may cover a relatively small geographic area and may allow unrestricted access by UEs with service subscription. A femto cell may cover a relatively small geographic area (e.g., a home) and may allow restricted access by UEs having association with the femto cell (e.g., UEs in a Closed Subscriber Group (CSG), UEs for users in the home, etc.). A BS for a macro cell may be referred to as a macro BS. A BS for a pico cell may be referred to as a pico BS. A BS for a femto cell may be referred to as a femto BS or a home BS. In the example shown inFIG.1, the BSs110a,110band110cmay be macro BSs for the macro cells102a,102band102c, respectively. The BS110xmay be a pico BS for a pico cell102x. The BSs110yand110zmay be femto BSs for the femto cells102yand102z, respectively. A BS may support one or multiple (e.g., three) cells. The wireless network100may also include relay stations. A relay station is a station that receives a transmission of data and/or other information from an upstream station (e.g., a BS or a UE) and sends a transmission of the data and/or other information to a downstream station (e.g., a UE or a BS). A relay station may also be a UE that relays transmissions for other UEs. In the example shown inFIG.1, a relay station110rmay communicate with the BS110aand a UE120rin order to facilitate communication between the BS110aand the UE120r. A relay station may also be referred to as a relay BS, a relay, etc. The wireless network100may be a heterogeneous network that includes BSs of different types, e.g., macro BSs, pico BSs, femto BSs, relays, etc. These different types of BSs may have different transmit power levels, different coverage areas, and different impact on interference in the wireless network100. For example, macro BSs may have a high transmit power level (e.g., 20 Watts) whereas pico BSs, femto BSs, and relays may have a lower transmit power level (e.g., 1 Watt). The wireless network100may support synchronous or asynchronous operation. For synchronous operation, the BSs may have similar frame timing, and transmissions from different BSs may be approximately aligned in time. For asynchronous operation, the BSs may have different frame timing, and transmissions from different BSs may not be aligned in time. The techniques described herein may be used for both synchronous and asynchronous operation. A network controller130may be coupled to a set of BSs and provide coordination and control for these BSs. The network controller130may communicate with the BSs110via a backhaul. The BSs110may also communicate with one another, e.g., directly or indirectly via wireless or wireline backhaul. The UEs120(e.g.,120x,120y, etc.) may be dispersed throughout the wireless network100, and each UE may be stationary or mobile. 
A UE may also be referred to as a mobile station, a terminal, an access terminal, a subscriber unit, a station, a Customer Premises Equipment (CPE), a cellular phone, a smart phone, a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a laptop computer, a cordless phone, a wireless local loop (WLL) station, a tablet, a camera, a gaming device, a netbook, a smartbook, an ultrabook, a medical device or medical equipment, a biometric sensor/device, a wearable device such as a smart watch, smart clothing, smart glasses, a smart wrist band, smart jewelry (e.g., a smart ring, a smart bracelet, etc.), an entertainment device (e.g., a music device, a video device, a satellite radio, etc.), a vehicular component or sensor, a smart meter/sensor, industrial manufacturing equipment, a global positioning system device, or any other suitable device that is configured to communicate via a wireless or wired medium. Some UEs may be considered evolved or machine-type communication (MTC) devices or evolved MTC (eMTC) devices. MTC and eMTC UEs include, for example, robots, drones, remote devices, sensors, meters, monitors, location tags, etc., that may communicate with a BS, another device (e.g., remote device), or some other entity. A wireless node may provide, for example, connectivity for or to a network (e.g., a wide area network such as the Internet or a cellular network) via a wired or wireless communication link. Some UEs may be considered Internet-of-Things (IoT) devices. InFIG.1, a solid line with double arrows indicates desired transmissions between a UE and a serving BS, which is a BS designated to serve the UE on the downlink and/or uplink. A dashed line with double arrows indicates interfering transmissions between a UE and a BS. Certain wireless networks (e.g., LTE) utilize orthogonal frequency division multiplexing (OFDM) on the downlink and single-carrier frequency division multiplexing (SC-FDM) on the uplink. OFDM and SC-FDM partition the system bandwidth into multiple (K) orthogonal subcarriers, which are also commonly referred to as tones, bins, etc. Each subcarrier may be modulated with data. In general, modulation symbols are sent in the frequency domain with OFDM and in the time domain with SC-FDM. The spacing between adjacent subcarriers may be fixed, and the total number of subcarriers (K) may be dependent on the system bandwidth. For example, the spacing of the subcarriers may be 15 kHz and the minimum resource allocation (called a ‘resource block’) may be 12 subcarriers (or 180 kHz). Consequently, the nominal FFT size may be equal to 128, 256, 512, 1024 or 2048 for system bandwidth of 1.25, 2.5, 5, 10 or 20 megahertz (MHz), respectively. The system bandwidth may also be partitioned into subbands. For example, a subband may cover 1.08 MHz (i.e., 6 resource blocks), and there may be 1, 2, 4, 8 or 16 subbands for system bandwidth of 1.25, 2.5, 5, 10 or 20 MHz, respectively. While aspects of the examples described herein may be associated with LTE technologies, aspects of the present disclosure may be applicable to other wireless communications systems, such as NR. NR may utilize OFDM with a CP on the uplink and downlink and include support for half-duplex operation using time division duplex (TDD). A single component carrier bandwidth of 100 MHz may be supported. NR resource blocks may span 12 sub-carriers with a sub-carrier bandwidth of 75 kHz over a 0.1 ms duration. Each radio frame may consist of 50 subframes with a length of 10 ms. 
Consequently, each subframe may have a length of 0.2 ms. Each subframe may indicate a link direction (i.e., DL or UL) for data transmission and the link direction for each subframe may be dynamically switched. Each subframe may include DL/UL data as well as DL/UL control data. UL and DL subframes for NR may be as described in more detail below with respect toFIGS.6and7. Beamforming may be supported and beam direction may be dynamically configured. MIMO transmissions with precoding may also be supported. MIMO configurations in the DL may support up to 8 transmit antennas with multi-layer DL transmissions up to 8 streams and up to 2 streams per UE. Multi-layer transmissions with up to 2 streams per UE may be supported. Aggregation of multiple cells may be supported with up to 8 serving cells. Alternatively, NR may support a different air interface, other than an OFDM-based air interface. NR networks may include entities such as CUs and/or DUs. In some examples, access to the air interface may be scheduled, wherein a scheduling entity (e.g., a base station) allocates resources for communication among some or all devices and equipment within its service area or cell. Within the present disclosure, as discussed further below, the scheduling entity may be responsible for scheduling, assigning, reconfiguring, and releasing resources for one or more subordinate entities. That is, for scheduled communication, subordinate entities utilize resources allocated by the scheduling entity. Base stations are not the only entities that may function as a scheduling entity. That is, in some examples, a UE may function as a scheduling entity, scheduling resources for one or more subordinate entities (e.g., one or more other UEs). In this example, the UE is functioning as a scheduling entity, and other UEs utilize resources scheduled by the UE for wireless communication. A UE may function as a scheduling entity in a peer-to-peer (P2P) network, and/or in a mesh network. In a mesh network example, UEs may optionally communicate directly with one another in addition to communicating with the scheduling entity. Thus, in a wireless communication network with a scheduled access to time-frequency resources and having a cellular configuration, a P2P configuration, and a mesh configuration, a scheduling entity and one or more subordinate entities may communicate utilizing the scheduled resources. As noted above, a RAN may include a CU and DUs. A NR BS (e.g., eNB, 5G Node B, Node B, transmission and reception point (TRP), access point (AP)) may correspond to one or multiple BSs. NR cells can be configured as access cells (ACells) or data only cells (DCells). For example, the RAN (e.g., a central unit or distributed unit) can configure the cells. DCells may be cells used for carrier aggregation or dual connectivity, but not used for initial access, cell selection/reselection, or handover. In some cases DCells may not transmit synchronization signals; in some cases DCells may transmit SS. NR BSs may transmit downlink signals to UEs indicating the cell type. Based on the cell type indication, the UE may communicate with the NR BS. For example, the UE may determine NR BSs to consider for cell selection, access, handover (HO), and/or measurement based on the indicated cell type. FIG.2illustrates an example logical architecture of a distributed radio access network (RAN)200, which may be implemented in the wireless communication system illustrated inFIG.1. A 5G access node206may include an access node controller (ANC)202. 
The ANC may be a central unit (CU) of the distributed RAN200. The backhaul interface to the next generation core network (NG-CN)204may terminate at the ANC. The backhaul interface to neighboring next generation access nodes (NG-ANs) may terminate at the ANC. The ANC may include one or more TRPs208(which may also be referred to as BSs, NR BSs, NodeBs, 5G NBs, APs, or some other term). As described above, a TRP may be used interchangeably with “cell.” The TRPs208may be DUs. The TRPs may be connected to one ANC (ANC202) or more than one ANC (not illustrated). For example, for RAN sharing, radio as a service (RaaS), and service-specific ANC deployments, the TRP may be connected to more than one ANC. A TRP may include one or more antenna ports. The TRPs may be configured to individually (e.g., dynamic selection) or jointly (e.g., joint transmission) serve traffic to a UE. The logical architecture200may be used to illustrate fronthaul definition. The architecture may be defined to support fronthauling solutions across different deployment types. For example, the architecture may be based on transport network capabilities (e.g., bandwidth, latency, and/or jitter). The architecture may share features and/or components with LTE. According to aspects, the next generation AN (NG-AN)210may support dual connectivity with NR. The NG-AN may share a common fronthaul for LTE and NR. The architecture may enable cooperation between and among TRPs208. For example, cooperation may be preset within a TRP and/or across TRPs via the ANC202. According to aspects, no inter-TRP interface may be needed and/or present. According to aspects, a dynamic configuration of split logical functions may be present within the architecture200. As will be described in more detail with reference toFIG.5, the Radio Resource Control (RRC) layer, Packet Data Convergence Protocol (PDCP) layer, Radio Link Control (RLC) layer, Medium Access Control (MAC) layer, and Physical (PHY) layer may be adaptably placed at the DU or CU (e.g., TRP or ANC, respectively). According to certain aspects, a BS may include a central unit (CU) (e.g., ANC202) and/or one or more distributed units (e.g., one or more TRPs208). FIG.3illustrates an example physical architecture of a distributed RAN300, according to aspects of the present disclosure. A centralized core network unit (C-CU)302may host core network functions. The C-CU may be centrally deployed. C-CU functionality may be offloaded (e.g., to advanced wireless services (AWS)), in an effort to handle peak capacity. A centralized RAN unit (C-RU)304may host one or more ANC functions. Optionally, the C-RU may host core network functions locally. The C-RU may have distributed deployment. The C-RU may be closer to the network edge. A DU306may host one or more TRPs (edge node (EN), an edge unit (EU), a radio head (RH), a smart radio head (SRH), or the like). The DU may be located at edges of the network with radio frequency (RF) functionality. FIG.4illustrates example components of the BS110and UE120illustrated inFIG.1, which may be used to implement aspects of the present disclosure. As described above, the BS may include a TRP. One or more components of the BS110and UE120may be used to practice aspects of the present disclosure. 
For example, antennas452, Tx/Rx222, processors466,458,464, and/or controller/processor480of the UE120and/or antennas434, processors460,420,438, and/or controller/processor440of the BS110may be used to perform the operations described herein and illustrated with reference toFIGS.9-10. FIG.4shows a block diagram of a design of a BS110and a UE120, which may be one of the BSs and one of the UEs inFIG.1. For a restricted association scenario, the base station110may be the macro BS110cinFIG.1, and the UE120may be the UE120y. The base station110may also be a base station of some other type. The base station110may be equipped with antennas434athrough434t, and the UE120may be equipped with antennas452athrough452r. At the base station110, a transmit processor420may receive data from a data source412and control information from a controller/processor440. The control information may be for the Physical Broadcast Channel (PBCH), Physical Control Format Indicator Channel (PCFICH), Physical Hybrid ARQ Indicator Channel (PHICH), Physical Downlink Control Channel (PDCCH), etc. The data may be for the Physical Downlink Shared Channel (PDSCH), etc. The processor420may process (e.g., encode and symbol map) the data and control information to obtain data symbols and control symbols, respectively. The processor420may also generate reference symbols, e.g., for the PSS, SSS, and cell-specific reference signal. A transmit (TX) multiple-input multiple-output (MIMO) processor430may perform spatial processing (e.g., precoding) on the data symbols, the control symbols, and/or the reference symbols, if applicable, and may provide output symbol streams to the modulators (MODs)432athrough432t. For example, the TX MIMO processor430may perform certain aspects described herein for RS multiplexing. Each modulator432may process a respective output symbol stream (e.g., for OFDM, etc.) to obtain an output sample stream. Each modulator432may further process (e.g., convert to analog, amplify, filter, and upconvert) the output sample stream to obtain a downlink signal. Downlink signals from modulators432athrough432tmay be transmitted via the antennas434athrough434t, respectively. At the UE120, the antennas452athrough452rmay receive the downlink signals from the base station110and may provide received signals to the demodulators (DEMODs)454athrough454r, respectively. Each demodulator454may condition (e.g., filter, amplify, downconvert, and digitize) a respective received signal to obtain input samples. Each demodulator454may further process the input samples (e.g., for OFDM, etc.) to obtain received symbols. A MIMO detector456may obtain received symbols from all the demodulators454athrough454r, perform MIMO detection on the received symbols if applicable, and provide detected symbols. For example, MIMO detector456may provide detected RS transmitted using techniques described herein. A receive processor458may process (e.g., demodulate, deinterleave, and decode) the detected symbols, provide decoded data for the UE120to a data sink460, and provide decoded control information to a controller/processor480. On the uplink, at the UE120, a transmit processor464may receive and process data (e.g., for the Physical Uplink Shared Channel (PUSCH)) from a data source462and control information (e.g., for the Physical Uplink Control Channel (PUCCH)) from the controller/processor480. The transmit processor464may also generate reference symbols for a reference signal. 
The symbols from the transmit processor464may be precoded by a TX MIMO processor466if applicable, further processed by the demodulators454athrough454r(e.g., for SC-FDM, etc.), and transmitted to the base station110. At the BS110, the uplink signals from the UE120may be received by the antennas434, processed by the modulators432, detected by a MIMO detector436if applicable, and further processed by a receive processor438to obtain decoded data and control information sent by the UE120. The receive processor438may provide the decoded data to a data sink439and the decoded control information to the controller/processor440. The controllers/processors440and480may direct the operation at the base station110and the UE120, respectively. The processor440and/or other processors and modules at the base station110may perform or direct, e.g., the execution of the functional blocks illustrated inFIG.13, and/or other processes for the techniques described herein. The processor480and/or other processors and modules at the UE120may also perform or direct processes for the techniques described herein. The memories442and482may store data and program codes for the BS110and the UE120, respectively. A scheduler444may schedule UEs for data transmission on the downlink and/or uplink. FIG.5illustrates a diagram500showing examples for implementing a communications protocol stack, according to aspects of the present disclosure. The illustrated communications protocol stacks may be implemented by devices operating in a 5G system (e.g., a system that supports uplink-based mobility). Diagram500illustrates a communications protocol stack including a Radio Resource Control (RRC) layer510, a Packet Data Convergence Protocol (PDCP) layer515, a Radio Link Control (RLC) layer520, a Medium Access Control (MAC) layer525, and a Physical (PHY) layer530. In various examples the layers of a protocol stack may be implemented as separate modules of software, portions of a processor or ASIC, portions of non-collocated devices connected by a communications link, or various combinations thereof. Collocated and non-collocated implementations may be used, for example, in a protocol stack for a network access device (e.g., ANs, CUs, and/or DUs) or a UE. A first option505-ashows a split implementation of a protocol stack, in which implementation of the protocol stack is split between a centralized network access device (e.g., an ANC202inFIG.2) and a distributed network access device (e.g., DU208inFIG.2). In the first option505-a, an RRC layer510and a PDCP layer515may be implemented by the central unit, and an RLC layer520, a MAC layer525, and a PHY layer530may be implemented by the DU. In various examples the CU and the DU may be collocated or non-collocated. The first option505-amay be useful in a macro cell, micro cell, or pico cell deployment. A second option505-bshows a unified implementation of a protocol stack, in which the protocol stack is implemented in a single network access device (e.g., access node (AN), new radio base station (NR BS), a new radio Node-B (NR NB), a network node (NN), or the like). In the second option, the RRC layer510, the PDCP layer515, the RLC layer520, the MAC layer525, and the PHY layer530may each be implemented by the AN. The second option505-bmay be useful in a femto cell deployment. 
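As an illustration only, the two placement options of diagram500can be summarized in a short sketch. The option and layer names follow the figure; the data layout and helper function are assumptions made for this example and are not part of the described architecture.

```python
# Illustrative sketch (not part of the described architecture): layer
# placement under the two options of diagram 500.

PROTOCOL_STACK = ["RRC", "PDCP", "RLC", "MAC", "PHY"]  # top to bottom

# Option 505-a: split implementation between a centralized network access
# device (e.g., an ANC/CU) and a distributed network access device (e.g., a DU).
OPTION_505A = {"CU": ["RRC", "PDCP"], "DU": ["RLC", "MAC", "PHY"]}

# Option 505-b: unified implementation in a single network access device (AN).
OPTION_505B = {"AN": list(PROTOCOL_STACK)}

def host_of(option: dict, layer: str) -> str:
    """Return the node hosting a given layer under the chosen option."""
    for node, layers in option.items():
        if layer in layers:
            return node
    raise ValueError(f"unknown layer: {layer}")

assert host_of(OPTION_505A, "PDCP") == "CU"  # upper layers at the central unit
assert host_of(OPTION_505A, "PHY") == "DU"   # lower layers at the distributed unit
assert host_of(OPTION_505B, "RRC") == "AN"   # entire stack at one access node
```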
Regardless of whether a network access device implements part or all of a protocol stack, a UE may implement an entire protocol stack (e.g., the RRC layer510, the PDCP layer515, the RLC layer520, the MAC layer525, and the PHY layer530). FIG.6is a diagram600showing an example of a DL-centric subframe. The DL-centric subframe may include a control portion602. The control portion602may exist in the initial or beginning portion of the DL-centric subframe. The control portion602may include various scheduling information and/or control information corresponding to various portions of the DL-centric subframe. In some configurations, the control portion602may be a physical DL control channel (PDCCH), as indicated inFIG.6. The DL-centric subframe may also include a DL data portion604. The DL data portion604may sometimes be referred to as the payload of the DL-centric subframe. The DL data portion604may include the communication resources utilized to communicate DL data from the scheduling entity (e.g., UE or BS) to the subordinate entity (e.g., UE). In some configurations, the DL data portion604may be a physical DL shared channel (PDSCH). The DL-centric subframe may also include a common UL portion606. The common UL portion606may sometimes be referred to as an UL burst, a common UL burst, and/or various other suitable terms. The common UL portion606may include feedback information corresponding to various other portions of the DL-centric subframe. For example, the common UL portion606may include feedback information corresponding to the control portion602. Non-limiting examples of feedback information may include an ACK signal, a NACK signal, a HARQ indicator, and/or various other suitable types of information. The common UL portion606may include additional or alternative information, such as information pertaining to random access channel (RACH) procedures, scheduling requests (SRs), and various other suitable types of information. As illustrated inFIG.6, the end of the DL data portion604may be separated in time from the beginning of the common UL portion606. This time separation may sometimes be referred to as a gap, a guard period, a guard interval, and/or various other suitable terms. This separation provides time for the switch-over from DL communication (e.g., reception operation by the subordinate entity (e.g., UE)) to UL communication (e.g., transmission by the subordinate entity (e.g., UE)). One of ordinary skill in the art will understand that the foregoing is merely one example of a DL-centric subframe and alternative structures having similar features may exist without necessarily deviating from the aspects described herein. FIG.7is a diagram700showing an example of an UL-centric subframe. The UL-centric subframe may include a control portion702. The control portion702may exist in the initial or beginning portion of the UL-centric subframe. The control portion702inFIG.7may be similar to the control portion described above with reference toFIG.6. The UL-centric subframe may also include an UL data portion704. The UL data portion704may sometimes be referred to as the payload of the UL-centric subframe. The UL portion may refer to the communication resources utilized to communicate UL data from the subordinate entity (e.g., UE) to the scheduling entity (e.g., UE or BS). In some configurations, the control portion702may be a physical DL control channel (PDCCH). As illustrated inFIG.7, the end of the control portion702may be separated in time from the beginning of the UL data portion704. 
This time separation may sometimes be referred to as a gap, guard period, guard interval, and/or various other suitable terms. This separation provides time for the switch-over from DL communication (e.g., reception operation by the scheduling entity) to UL communication (e.g., transmission by the scheduling entity). The UL-centric subframe may also include a common UL portion706. The common UL portion706inFIG.7may be similar to the common UL portion606described above with reference toFIG.6. The common UL portion706may additionally or alternatively include information pertaining to channel quality indicator (CQI), sounding reference signals (SRSs), and various other suitable types of information. One of ordinary skill in the art will understand that the foregoing is merely one example of an UL-centric subframe and alternative structures having similar features may exist without necessarily deviating from the aspects described herein. In some circumstances, two or more subordinate entities (e.g., UEs) may communicate with each other using sidelink signals. Real-world applications of such sidelink communications may include public safety, proximity services, UE-to-network relaying, vehicle-to-vehicle (V2V) communications, Internet of Everything (IoE) communications, IoT communications, mission-critical mesh, and/or various other suitable applications. Generally, a sidelink signal may refer to a signal communicated from one subordinate entity (e.g., UE1) to another subordinate entity (e.g., UE2) without relaying that communication through the scheduling entity (e.g., UE or BS), even though the scheduling entity may be utilized for scheduling and/or control purposes. In some examples, the sidelink signals may be communicated using a licensed spectrum (unlike wireless local area networks, which typically use an unlicensed spectrum). A UE may operate in various radio resource configurations, including a configuration associated with transmitting pilots using a dedicated set of resources (e.g., a radio resource control (RRC) dedicated state, etc.) or a configuration associated with transmitting pilots using a common set of resources (e.g., an RRC common state, etc.). When operating in the RRC dedicated state, the UE may select a dedicated set of resources for transmitting a pilot signal to a network. When operating in the RRC common state, the UE may select a common set of resources for transmitting a pilot signal to the network. In either case, a pilot signal transmitted by the UE may be received by one or more network access devices, such as an AN, or a DU, or portions thereof. Each receiving network access device may be configured to receive and measure pilot signals transmitted on the common set of resources, and also receive and measure pilot signals transmitted on dedicated sets of resources allocated to the UEs for which the network access device is a member of a monitoring set of network access devices for the UE. One or more of the receiving network access devices, or a CU to which receiving network access device(s) transmit the measurements of the pilot signals, may use the measurements to identify serving cells for the UEs, or to initiate a change of serving cell for one or more of the UEs. In NR, a UE may be served by one or more BSs or TRPs using single or multiple beams, as depicted inFIG.8.FIG.8shows an exemplary wireless communications system800in which a UE802is being served by a TRP810using a transmit beam820. 
A receive beam830of the UE is generally aligned with the transmit beam820. The TRP (or, for example, a BS) may be capable of communicating via one or more other transmit beams822a-822f. Similarly, the UE may be capable of communicating via one or more other receive beams832a-832d. Each transmit beam820,822of the BS may be collocated with a receive beam of the BS. Similarly, each receive beam830,832of the UE may be collocated with a transmit beam of the UE. Example Beam Selection and Radio Link Failure During Beam Recovery Some wireless systems (e.g., 5G systems, eMBB systems) encounter difficulties with high path loss in communication links. New techniques, such as hybrid beamforming (e.g., analog and digital beamforming), which are not present in 3G and 4G systems, may be used in newer wireless systems to overcome the difficulties caused by high path loss. Hybrid beamforming may create a beam pattern toward users (e.g., UEs) that can enhance link budgets and/or improve signal-to-noise ratio (SNR) for communications to users (e.g., UEs). According to aspects of the present disclosure, in multi-beam operation, active beam pairs (e.g., pairs of transmit and receive beams) used for communication by a nodeB (NB) and a UE may become misaligned due to beam switch failure (e.g., beams being switched to other beams that experience so much interference or deep fade that communications are blocked) or signal blockage (e.g., caused by a UE moving into a shadow of a building). When a beam that an NB and a UE are using for communications becomes misaligned, then the NB and the UE may not be able to communicate control information or data over active beams. In aspects of the present disclosure, when an active beam pair used by an NB (e.g., a serving cell) and a UE becomes misaligned, alternative beams from a serving cell may be available for the UE and NB to use for beam recovery (i.e., recovery of the communications link). Selecting the same beam and/or beam direction as the (misaligned) active beam pair may cause a beam recovery process (e.g., messages used to recover the communications link) to be blocked (e.g., by the same cause that caused the active beam to become misaligned in the first place). According to previously known techniques, alternative beams from a serving cell may not be known to be available when an active beam becomes misaligned. In addition, having a UE perform a radio link failure (RLF) procedure when a beam becomes misaligned requires communications between the UE and a serving cell to wait for the RLF procedure to complete in order to recover a communications link, possibly resulting in a long delay. According to aspects of the present disclosure, a UE may select an alternative beam (when multiple beams are available) for transmitting beam recovery message(s) via a scheduling request (SR) procedure and/or a random access channel (RACH) procedure. In some aspects of the present disclosure, a UE may declare RLF (e.g., perform an RLF procedure) or perform a forward handover procedure when alternative beams are not available from a serving cell of the UE. According to aspects of the present disclosure, an NB may configure a UE with one or more alternative beams to use for beam recovery if an active beam pair becomes misaligned. In some aspects of the present disclosure, a UE may configure a set of alternative beams based on measurements (e.g., reference signal received power (RSRP), reference signal received quality (RSRQ), or signal-to-noise ratio (SNR)) of beams made by the UE. 
That is, a UE may determine a set of alternative beams to use to send one or more beam recovery messages based on measuring parameters of the radio frequency environment instead of or in addition to obtaining an indication of alternative beams from a serving NB. According to aspects of the present disclosure, a UE may trigger an RLF (e.g., begin an RLF procedure) based on a number of scheduling request (SR) procedures and/or random access channel (RACH) procedures that have failed. That is, a UE may be configured with a threshold number of SR procedure failures and/or RACH procedure failures. If the UE experiences misalignment of an active beam pair, transmits beam recovery messages via an SR procedure and/or a RACH procedure, and detects that the threshold number of SR procedures and/or RACH procedures have failed (e.g., the UE does not receive a response(s)), then the UE may begin an RLF procedure. In aspects of the present disclosure, a UE may detrigger an RLF procedure based on successfully decoding a PDCCH. That is, if a UE has triggered an RLF based on beam misalignment and failure of SR and/or RACH procedures transmitting beam recovery messages, the UE may terminate the RLF procedure without completing it if the UE successfully decodes a PDCCH. Alternatively, decoding a control channel (e.g., a PDCCH) may be used to change the counters and/or metrics used for RLF. That is, commands in a control channel may directly change configured counters and/or metrics used in the RLF procedure. A UE that has counters and/or metrics of an RLF procedure changed in a control channel may quickly recover from such a procedure. FIG.9illustrates example operations900for wireless communications, in accordance with aspects of the present disclosure. Operations900may be performed by a BS (e.g., an NB), for example, BS110, shown inFIG.1, or TRP810, shown inFIG.8. Operations900begin, at block902, with the BS communicating using beamforming with a user equipment (UE) via a transmit beam and a receive beam of an active beam pair. For example, BS810(shown inFIG.8) may communicate using beamforming with UE802via a transmit beam820and a receive beam830of an active beam pair. In the example, the BS may transmit to the UE via the transmit beam820and receive signals from the UE via a receive beam collocated with the transmit beam; similarly, the UE may receive signals from the BS via the receive beam830and transmit signals to the BS via a transmit beam collocated with the receive beam. At block904, operations900continue with the BS sending an indication to the UE of one or more alternative beams for the UE to use to send a beam recovery message to the BS if the transmit beam and the receive beam of the active beam pair become misaligned. Continuing the example, the BS810may send an indication of a transmit beam822aand a receive beam collocated with the transmit beam for the UE802to use to send a beam recovery message to the BS if the transmit beam820and receive beam830of the active beam pair become misaligned. FIG.10illustrates example operations1000for wireless communications, in accordance with aspects of the present disclosure. Operations1000may be performed by a UE, for example, UE120, shown inFIG.1. Operations1000may be complementary to operations900, described above with reference toFIG.9. Operations1000begin, at block1002, with the UE communicating using beamforming with a base station (BS) via a transmit beam and a receive beam of an active beam pair. 
For example, UE802(shown inFIG.8) may communicate using beamforming with BS810via a transmit beam820and a receive beam830of an active beam pair. In the example, the UE may receive signals from the BS via the receive beam830and transmit signals to the BS via a transmit beam collocated with the receive beam; similarly, the BS may transmit to the UE via the transmit beam820and receive signals from the UE via a receive beam collocated with the transmit beam. At block1004, operations1000continue with the UE obtaining an indication of one or more alternative beams for the UE to use to send a beam recovery message to the BS if the transmit beam and the receive beam of the active beam pair become misaligned. Continuing the example, the BS810may send an indication of a transmit beam822aand a receive beam collocated with the transmit beam for the UE802to use to send a beam recovery message to the BS if the transmit beam820and receive beam830of the active beam pair become misaligned. According to aspects of the present disclosure, an NB may send a configuration, indicating alternative beams for a UE to use for beam recovery (e.g., as in block904shown inFIG.9), using layer 1 (L1) signaling (e.g., a PHY signal), layer 2 (L2) signaling (e.g., a MAC control element) and/or radio resource control (RRC) messaging. In aspects of the present disclosure, an NB may configure a threshold on a UE for the UE to use in selecting an alternative beam from a set of alternative beams when the UE is attempting beam recovery (e.g., after a transmit and receive beam of an active beam pair becomes misaligned). A UE receiving or otherwise obtaining the threshold may determine which alternative beam to select based on the threshold. According to aspects of the present disclosure, an NB may configure a signal quality threshold on a UE for the UE to use to select (e.g., to use for sending beam recovery messages) one or more of the other alternative beams that are within the threshold of the best alternative beam (e.g., treating the threshold as a relative threshold) and/or greater than or equal to the threshold (e.g., treating the threshold as an absolute threshold). The UE may use a signal-to-noise ratio (SNR), a reference signal received quality (RSRQ), and/or a reference signal received power (RSRP) of the alternative beam(s) when comparing the one or more alternative beams to the threshold. In aspects of the present disclosure, an NB may configure a directional threshold for assisting the UE to select another beam in another direction for sending a beam recovery message using an SR message and/or a RACH message. For example, BS810(shown inFIG.8) may configure UE802with a directional threshold of +/−30 degrees to assist the UE to select another beam in another direction for sending a beam recovery message using an SR message to the BS. In the example, if the active transmit beam820and active receive beam830become misaligned, the UE may determine to send SR messages to the BS via transmit beams collocated with receive beams832band832cbecause those beams are within the directional threshold of +/−30 degrees of the active receive beam830. Still in the example, the UE may determine not to send SR messages via transmit beams collocated with receive beams832aand832dbecause those beams are not within the directional threshold of +/−30 degrees of the active receive beam830. 
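The directional check in the foregoing example can be expressed as a short sketch. The +/−30 degree threshold and the beam labels follow the example above; the beam directions in degrees and the helper function are assumptions made purely for illustration.

```python
# Illustrative sketch of the directional-threshold check in the example above.
# The beam directions (in degrees) are assumed values, not taken from FIG. 8.

def within_threshold(active_deg: float, candidate_deg: float,
                     threshold_deg: float) -> bool:
    """True if the candidate beam direction is within +/-threshold_deg of the
    failed active beam direction, with wrap-around at 360 degrees."""
    diff = abs(active_deg - candidate_deg) % 360.0
    return min(diff, 360.0 - diff) <= threshold_deg

active_830 = 0.0  # assumed direction of the failed active receive beam 830
receive_beams = {"832a": -60.0, "832b": -20.0, "832c": 25.0, "832d": 70.0}

usable = [name for name, deg in receive_beams.items()
          if within_threshold(active_830, deg, 30.0)]
print(usable)  # ['832b', '832c'] -- only these meet the +/-30 degree criterion
```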
According to aspects of the present disclosure, when a UE is configured with a directional threshold, the UE may determine an alternative beam to use for sending a beam recovery message based on a difference in direction (e.g., measured in degrees of arc) between the failed active beam and an alternative beam. In aspects of the present disclosure, if a UE determines that no alternative beam meeting the direction criteria (e.g., is within the directional threshold) is available, then the UE sends a beam recovery message using the best alternative beam. A UE may determine a best alternative beam based on an SNR, RSRQ, and/or RSRP of the alternative beam(s). For example, BS810(shown inFIG.8) may configure UE802with a directional threshold of +/−5 degrees to assist the UE to select another beam in another direction for sending a beam recovery message using a RACH message to the BS. In the example, if the active transmit beam820and active receive beam830become misaligned, the UE may determine that no alternative beam is within the directional threshold of +/−5 degrees of the active receive beam830. Still in the example, the UE may determine that RSRQ of the BS, as measured on receive beam832c, is higher than RSRQ of the BS on other receive beams and the UE may then send a beam recovery RACH message via a transmit beam collocated with receive beam832c. According to aspects of the present disclosure, when beam misalignment occurs on an active beam pair, a UE may send a beam recovery request over one of the alternative beams (e.g., obtained via an indication, as in block1004inFIG.10) that meets a signal quality threshold and/or direction threshold, if there is one, configured on the UE. In aspects of the present disclosure, if a UE determines that no alternative beam meets the direction criteria, then the UE may send one or more beam recovery requests in the same direction as the failed active beam. According to aspects of the present disclosure, when beam misalignment occurs on an active beam pair and no alternative beams with signal quality greater than or equal to a threshold (e.g., a signal quality threshold) are available for the serving cell (e.g., included in the alternative beams indicated to a UE), then a UE may declare an RLF immediately (e.g., without waiting for a timer to expire as is done with other RLFs) and/or perform a forward handover to another cell. In aspects of the present disclosure, when beam misalignment occurs on an active beam pair, the signal quality of alternative beams from a serving cell is less than a threshold (e.g., a signal quality threshold), and the signal quality of one or more beams from a cell neighboring the serving cell is greater than or equal to a threshold, then a UE may perform a forward handover to the neighboring cell (e.g., after declaring an RLF). According to aspects of the present disclosure, an NB may configure a UE to transmit one or more beam recovery messages (e.g., SR messages, RACH messages) via a particular transmit beam shape when the UE determines that a transmit beam and a receive beam of an active beam pair are misaligned. The particular shape may comprise a broad beam or a pseudo-omnidirectional beam. Transmitting a beam recovery message via a particular beam shape may improve the possibility of the NB receiving the beam recovery message from one or more transmission and reception points and/or reduce the time needed for the UE to refine its receive and transmit beams. 
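Taken together, the selection rules above might be sketched as follows. The measurement fields, the treatment of the quality threshold as an absolute RSRQ floor, and the sentinel return value are all assumptions made for illustration; this is not the claimed procedure.

```python
# Illustrative sketch of the fallback logic described above: prefer an
# alternative beam meeting the directional criterion; otherwise use the best
# alternative beam by measured quality; if no alternative beam meets the
# quality threshold, the UE would declare RLF and/or perform a forward handover.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AltBeam:
    name: str
    direction_deg: float  # assumed pointing direction of the beam
    rsrq_db: float        # assumed measured quality (RSRP or SNR also possible)

def angular_diff(a: float, b: float) -> float:
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def pick_recovery_beam(active_deg: float, beams: List[AltBeam],
                       dir_threshold_deg: float,
                       quality_floor_db: float) -> Optional[AltBeam]:
    # Beams below the quality floor are not usable for beam recovery messages.
    usable = [b for b in beams if b.rsrq_db >= quality_floor_db]
    if not usable:
        return None  # caller declares RLF immediately and/or forward handover

    in_direction = [b for b in usable
                    if angular_diff(active_deg, b.direction_deg) <= dir_threshold_deg]
    # If no beam meets the direction criterion, fall back to the best beam.
    pool = in_direction if in_direction else usable
    return max(pool, key=lambda b: b.rsrq_db)
```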
In aspects of the present disclosure, an NB may configure one or more beams as reference beams and/or reference signals that are quasi co-located with an active beam. The active beam may be used for conveying data and/or control information to a UE. According to aspects of the present disclosure, beams configured as reference beams and/or reference signals may convey one or a combination of new radio synchronization signals (NR-SS), channel state information reference signals (CSI-RS), and other types of reference signals. In aspects of the present disclosure, an NB may send an indication of one or more beams configured as one or more reference beams or reference signals via RRC messages, L1 signaling (e.g., PHY signaling), or L2 messages (e.g., MAC control elements). According to aspects of the present disclosure, an NB may send (e.g., via RRC messages) a signal quality threshold to a UE. If a signal quality of a reference beam falls below the threshold, then the UE may assume that a quality of an active beam corresponding to the reference beam has degraded. For example, an NB may send a signal quality threshold to a UE and a configuration indicating an active beam pair corresponds to a reference beam. In the example, the UE may detect that signal quality of the reference beam is lower than the signal quality threshold before detecting that the signal quality of the active beam pair has degraded (e.g., the active beam pair is semi-persistently scheduled), and the UE may assume that signal quality of the active beam pair is degraded and start a beam recovery procedure, for example. In aspects of the present disclosure, an NB may signal to a UE a configuration indicating a number of failed SR, RACH, and/or other beam recovery attempts that the UE may experience to cause the UE to trigger RLF, speed up the RLF procedure (e.g., by shortening periods between steps of the RLF procedure), or terminate one or more timers associated with the RLF procedure early. According to aspects of the present disclosure, an NB may signal to a UE a configuration indicating a number of PDCCH decoding successes and/or PDSCH decoding successes that the UE may experience to cause the UE to detrigger (e.g., stop completion of) an RLF procedure, speed up detriggering of the RLF procedure (e.g., by shortening periods between steps required to terminate without completing the RLF procedure), or terminate one or more timers associated with the RLF procedure early. After detriggering an RLF procedure and/or terminating the timers associated with the RLF procedure early, the UE remains on the serving cell, instead of handing over to another cell. In aspects of the present disclosure, an NB may signal to a UE a configuration indicating a number of PDCCH decoding successes and/or PDSCH decoding successes that the UE may experience to cause the UE to detrigger (e.g., stop completion of) a beam recovery procedure, speed up detriggering of the beam recovery procedure (e.g., by shortening periods between steps required to terminate without completing the beam recovery procedure), or terminate timers associated with the beam recovery procedure early. According to aspects of the present disclosure, when a UE detects active beam failure, i.e., when a signal quality of one or more reference beams from an NB falls below a threshold, the UE may try beam recovery using UL resources such as beam recovery resources, scheduling requests, and/or RACH resources. 
If the beam recovery attempts fail a threshold number of times, where the threshold may be indicated in a configuration received from the NB, then the UE may trigger RLF, speed up the RLF procedure, or terminate one or more timers associated with the RLF procedure early. After completing the RLF procedure, the UE may select a suitable cell and perform a forward handover to the suitable cell or enter an idle state. In aspects of the present disclosure, after a UE has triggered an RLF procedure (e.g., due to beam recovery procedures failing a threshold number of times, as described above), if PDCCH decoding and/or PDSCH decoding is successful over one or more transmissions, then the UE may terminate the RLF procedure (e.g., detrigger the RLF procedure), speed up detriggering of the RLF procedure, or terminate timers associated with the RLF procedure early. After detriggering an RLF procedure and/or terminating the timers associated with the RLF procedure early, the UE remains on the serving cell, instead of handing over to another cell. According to aspects of the present disclosure, after a UE has triggered a beam recovery procedure, if PDCCH decoding and/or PDSCH decoding is successful over one or more transmissions, then the UE may stop the beam recovery procedure or terminate one or more timers associated with the beam recovery procedure early. After stopping a beam recovery procedure and/or terminating the timers associated with the beam recovery procedure early, the UE remains on the serving cell, instead of handing over to another cell. FIG.11illustrates example operations1100for wireless communications, in accordance with aspects of the present disclosure. Operations1100may be performed by a BS (e.g., an NB), for example, BS110, shown inFIG.1or TRP810, shown inFIG.8. Operations1100begin, at block1102, with the BS communicating using beamforming with a user equipment (UE) via a transmit beam and a receive beam of an active beam pair. For example, BS810(shown inFIG.8) may communicate using beamforming with UE802via an active transmit beam820and receive beam830. In the example, the BS may transmit to the UE via the transmit beam820and receive signals from the UE via a receive beam collocated with the transmit beam; similarly, the UE may receive signals from the BS via the receive beam830and transmit signals to the BS via a transmit beam collocated with the receive beam. At block1104, operations1100continue with the BS sending a configuration to the UE indicating a number of failed beam recovery attempts, of the active beam pair, to cause the UE to take action regarding a radio link failure (RLF) procedure. Continuing the example, the BS810may send (e.g., via RRC signaling) a configuration to the UE802indicating that the UE should make two failed beam recovery attempts of the beams820and830before the UE starts an RLF procedure. FIG.12illustrates example operations1200for wireless communications, in accordance with aspects of the present disclosure. Operations1200may be performed by a UE, for example, UE120, shown inFIG.1or UE802, shown inFIG.8. Operations1200may be complementary to operations1100, described above with reference toFIG.11. Operations1200begin, at block1202, with the UE communicating using beamforming with a base station (BS) via a transmit beam and a receive beam of an active beam pair. For example, UE802(shown inFIG.8) may communicate using beamforming with BS810via a transmit beam820and receive beam830of an active beam pair. 
In the example, the UE may receive signals from the BS via the receive beam830and transmit signals to the BS via a transmit beam collocated with the receive beam; similarly, the BS may transmit to the UE via the transmit beam820and receive signals from the UE via a receive beam collocated with the transmit beam. At block1204, operations1200continue with the UE obtaining a configuration from the BS indicating a number of failed beam recovery attempts, of the active beam pair, to cause the UE to take action regarding a radio link failure (RLF) procedure. Continuing the example, the UE802may receive (e.g., via RRC signaling) a configuration from the BS810indicating that the UE should make two beam recovery attempts of the beams820and830before the UE starts an RLF procedure. At block1206, operations1200continue with the UE taking action regarding the RLF procedure, based on making the indicated number of failed beam recovery attempts of the active beam pair. Continuing the example from above, the UE802sends two beam recovery messages (e.g., SR messages) to attempt to recover the beams820and830. Still in the example, the UE is not successful in recovering the beams (e.g., the UE receives no responses, or receives a response but is not able to reestablish communications with the BS) and starts the RLF procedure. This attempt-counting behavior is illustrated in the short sketch below. The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing and the like. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. 
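As flagged above, the attempt-counting behavior of operations1100and1200can be illustrated with a short sketch. The threshold of two attempts and the detrigger-on-decode rule follow the examples above; the class layout, counter names, and method names are assumptions made for illustration only.

```python
# Illustrative sketch of the failure counting described in operations 1100/1200:
# the UE triggers RLF after a configured number of failed beam recovery
# attempts, and may detrigger a pending RLF procedure after a configured
# number of successful PDCCH/PDSCH decodes. Names and layout are assumptions.

class RlfAttemptTracker:
    def __init__(self, max_failed_attempts: int, decodes_to_detrigger: int):
        self.max_failed_attempts = max_failed_attempts  # e.g., 2 in the example
        self.decodes_to_detrigger = decodes_to_detrigger
        self.failed_attempts = 0
        self.successful_decodes = 0
        self.rlf_triggered = False

    def on_recovery_attempt_failed(self) -> None:
        self.failed_attempts += 1
        if self.failed_attempts >= self.max_failed_attempts:
            self.rlf_triggered = True  # UE starts the RLF procedure

    def on_pdcch_decoded(self) -> None:
        self.successful_decodes += 1
        if (self.rlf_triggered
                and self.successful_decodes >= self.decodes_to_detrigger):
            # Detrigger: terminate the RLF procedure without completing it;
            # the UE remains on the serving cell instead of handing over.
            self.rlf_triggered = False
            self.failed_attempts = 0

tracker = RlfAttemptTracker(max_failed_attempts=2, decodes_to_detrigger=1)
tracker.on_recovery_attempt_failed()  # first SR/RACH recovery message fails
tracker.on_recovery_attempt_failed()  # second fails -> RLF triggered
assert tracker.rlf_triggered
tracker.on_pdcch_decoded()            # PDCCH decode succeeds -> detriggered
assert not tracker.rlf_triggered
```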
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. For example, means for transmitting and/or means for receiving may comprise one or more of a transmit processor420, a TX MIMO processor430, a receive processor438, or antenna(s)434of the base station110and/or the transmit processor464, a TX MIMO processor466, a receive processor458, or antenna(s)452of the user equipment120. Additionally, means for generating, means for multiplexing, and/or means for applying may comprise one or more processors, such as the controller/processor440of the base station110and/or the controller/processor480of the user equipment120. The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. If implemented in hardware, an example hardware configuration may comprise a processing system in a wireless node. The processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement the signal processing functions of the PHY layer. In the case of a user terminal120(seeFIG.1), a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. 
The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer readable medium. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. The processor may be responsible for managing the bus and general processing, including the execution of software modules stored on the machine-readable storage media. A computer-readable storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer readable storage medium with instructions stored thereon separate from the wireless node, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, as may be the case with cache and/or general register files. Examples of machine-readable storage media may include, by way of example, RAM (Random Access Memory), flash memory, ROM (Read Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. The computer-readable media may comprise a number of software modules. The software modules include instructions that, when executed by an apparatus such as a processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. 
When referring to the functionality of a software module below, it will be understood that such functionality is implemented by the processor when executing instructions from that software module. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media. Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For example, the instructions may include instructions for performing the operations described herein and illustrated inFIGS.9-10. Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized. It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.
68,199
11943678
DETAILED DESCRIPTION
The embodiments set forth below represent information to enable those skilled in the art to practice the embodiments and illustrate the best mode of practicing the embodiments. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the disclosure and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure.
Radio Node: As used herein, a “radio node” is either a radio access node or a wireless device.
Radio Access Node: As used herein, a “radio access node” or “radio network node” is any node in a Radio Access Network (RAN) of a cellular communications network that operates to wirelessly transmit and/or receive signals. Some examples of a radio access node include, but are not limited to, a base station (e.g., a New Radio (NR) base station (gNB) in a Third Generation Partnership Project (3GPP) Fifth Generation (5G) NR network or an enhanced or evolved Node B (eNB) in a 3GPP Long Term Evolution (LTE) network), a high-power or macro base station, a low-power base station (e.g., a micro base station, a pico base station, a home eNB, or the like), and a relay node.
Core Network Node: As used herein, a “core network node” is any type of node in a core network or any node that implements a core network function. Some examples of a core network node include, e.g., a Mobility Management Entity (MME), a Packet Data Network (PDN) Gateway (P-GW), a Service Capability Exposure Function (SCEF), a Home Subscriber Server (HSS), or the like. Some other examples of a core network node include a node implementing an Access and Mobility Function (AMF), a User Plane Function (UPF), a Session Management Function (SMF), an Authentication Server Function (AUSF), a Network Slice Selection Function (NSSF), a Network Exposure Function (NEF), a Network Repository Function (NRF), a Policy Control Function (PCF), a Unified Data Management (UDM), or the like.
Wireless Device: As used herein, a “wireless device” is any type of device that has access to (i.e., is served by) a cellular communications network by wirelessly transmitting and/or receiving signals to a radio access node(s). Some examples of a wireless device include, but are not limited to, a User Equipment device (UE) in a 3GPP network and a Machine Type Communication (MTC) device.
Network Node: As used herein, a “network node” is any node that is either part of the RAN or the core network of a cellular communications network/system.
Note that the description given herein focuses on a 3GPP cellular communications system and, as such, 3GPP terminology or terminology similar to 3GPP terminology is oftentimes used. However, the concepts disclosed herein are not limited to a 3GPP system. Note that, in the description herein, reference may be made to the term “cell;” however, particularly with respect to 5G NR concepts, beams may be used instead of cells and, as such, it is important to note that the concepts described herein are equally applicable to both cells and beams. Systems and methods are disclosed herein for performing a controlled handover of a wireless device from an incoming access network to a cellular access network of a visited cellular network.
In other words, systems and methods are disclosed herein for control of handover of a wireless device from an incoming access network to a cellular access network of a visited cellular network such that the visited network may gracefully reject handover from a non-allowed incoming access network (e.g., a non-allowed incoming access network type). The present disclosure provides systems and methods that enable a Visited Public Mobile Network (VPMN) to distinguish a graceful handover rejection from rejections resulting from bad network conditions and, as such, Key Performance Indicators (KPIs) are properly accounted for and unwanted roaming call cases are prevented. In addition, in some embodiments, the VPMN is enabled to charge the session as a roaming session. In this regard,FIG.1illustrates one example of a wireless communication system100in which a UE102has the capability to utilize both a cellular access network, which is shown as a 3GPP (Radio) Access Network ((R)AN)104, and a Wireless Local Area Network (WLAN) access network, which is shown as a Non-3GPP (N3GPP) AN106. The 3GPP (R)AN104may be a Fourth Generation (4G) RAN (e.g., an LTE or LTE-Advanced RAN, including a number of base stations which are referred to as eNBs) or a 5G RAN (e.g., a NR RAN, including a number of base stations which are referred to as gNBs). The 3GPP (R)AN104is connected to a core network, which is shown as a 3GPP core network108(e.g., an Evolved Packet Core (EPC) or 5G Core (5GC)). The N3GPP AN106is connected to the 3GPP core network108via a gateway or interworking function, which is shown as an evolved Packet Data Gateway (ePDG)/N3GPP Inter-Working Function (N3IWF)110. Notably, the term “ePDG” is used for 4G, and the term “N3IWF” is used for 5G. The 3GPP core network108is connected to an Internet Protocol (IP) Multimedia Subsystem (IMS)112, as will be appreciated by one of skill in the art. Note that while many of the specific examples described herein relate to handover from a WLAN access network (i.e., the incoming access network) to a 3GPP access network, the present disclosure is not limited thereto. The incoming access network is not limited to being a WLAN access network. Other types of incoming access networks may be used. Embodiments of the present disclosure apply to any handover that is performed through a registration or attach procedure. For NR and LTE, if there is no N26 interface and the NG RAN and LTE access network also do not support Xn or X2, embodiments of the present disclosure could apply as well if the UE has to “handover” when moving from 5G RAN to LTE or vice versa. FIGS.2and3illustrate two specific examples of the wireless communication system100ofFIG.1in which the 3GPP core network108is a 5GC200inFIG.2and an EPC300inFIG.3. Looking first atFIG.2, as will be appreciated by one of skill in the art, the 5GC200includes a number of Network Functions (NFs) connected by service-based interfaces in the control plane. An NF may be implemented either as a network element on a dedicated hardware, as a software instance running on a dedicated hardware, or as a virtualized function instantiated on an appropriate platform, e.g., a cloud infrastructure. As illustrated, the 5GC200includes a UPF202, an SMF204, an AMF206, an AUSF208, a NSSF210, a NEF212, a NRF214, a PCF216, a UDM218, and an Application Function (AF)220. Note that whileFIG.2illustrates the 5GC200as a service-based architecture, a reference point representation may alternatively be used.
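For orientation only, the mapping noted above between the core network variant and the entity that anchors the non-3GPP access can be captured in a minimal sketch; the function name and its string values are illustrative assumptions, not terminology defined by the disclosure.

```python
# Illustrative only: which interworking entity anchors the non-3GPP access,
# per the note above that "ePDG" is the 4G term and "N3IWF" is the 5G term.
def non3gpp_gateway(core_network: str) -> str:
    gateways = {"EPC": "ePDG", "5GC": "N3IWF"}
    try:
        return gateways[core_network]
    except KeyError:
        raise ValueError(f"unknown core network: {core_network!r}") from None

assert non3gpp_gateway("EPC") == "ePDG"
assert non3gpp_gateway("5GC") == "N3IWF"
```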
Reference point representations of the 5G network architecture are used to develop detailed call flows in the normative standardization. The 5GC200aims at separating a user plane and a control plane. The user plane carries user traffic while the control plane carries signaling in the network. InFIG.2, the UPF202is in the user plane and all other NFs, i.e., the SMF204, AMF206, AUSF208, NSSF210, NEF212, NRF214, PCF216, UDM218, and AF220, are in the control plane. Separating the user and control planes allows the resources of each plane to be scaled independently. It also allows UPFs202to be deployed separately from control plane functions in a distributed fashion. In this architecture, the UPFs202may be deployed very close to UEs102to shorten the Round Trip Time (RTT) between the UEs102and the data network for some applications requiring low latency. The core 5G network architecture is composed of modularized functions. For example, the AMF206and SMF204are independent functions in the control plane. Separating the AMF206and SMF204allows for independent evolution and scaling. Other control plane functions like the PCF216and AUSF208can be separated as shown inFIG.2. Modularized function design enables the 5G core network to support various services flexibly. Each NF interacts with another NF directly. It is possible to use intermediate functions to route messages from one NF to another NF. In the control plane, a set of interactions between two NFs is defined as a service so that its reuse is possible. This service enables support for modularity. The user plane supports interactions such as forwarding operations between different UPFs202. The service(s) that an NF provides to other authorized NFs can be exposed to the authorized NFs through the service-based interface. InFIG.2, the service-based interfaces are indicated by the letter “N” followed by the name of the NF (e.g., Namf for the service based interface of the AMF206and Nsmf for the service based interface of the SMF204, etc.). Some properties of the NFs shown inFIG.2may be described in the following manner; however, the interested reader can find additional details in 3GPP Technical Specification (TS)23.501. The AMF206provides UE-based authentication, authorization, mobility management, etc. A UE102, even when using multiple access technologies, is basically connected to a single AMF206because the AMF206is independent of the access technologies. The SMF204is responsible for session management and allocates IP addresses to UEs102. It also selects and controls the UPF202for data transfer. If a UE102has multiple sessions, different SMFs204may be allocated to each session to manage them individually and possibly provide different functionalities per session. The AF220provides information on the packet flow to the PCF216responsible for policy control in order to support Quality of Service (QoS). Based on the information, the PCF216determines policies about mobility and session management to make the AMF206and SMF204operate properly. The AUSF208supports an authentication function for the UEs102and thus stores data for authentication of the UEs102, while the UDM218stores subscription data of the UE102. In addition to the NFs described above, the 5GC200includes an N3IWF222that provides an interface between the N3GPP AN106and the 5GC200. While not necessary for understanding the present disclosure, for additional details regarding the N3IWF222, the interested reader is directed to 3GPP TS 23.501 and 23.502.
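As a side note, the service-based interface naming rule mentioned above (the letter “N” followed by the NF name) is mechanical enough to express in one line; the helper below is purely illustrative and not part of the disclosure or of 3GPP TS 23.501.

```python
# Purely illustrative: derive a 5GC service-based interface name from an NF
# name, per the "N" + NF-name convention described above.
def service_based_interface(nf: str) -> str:
    return "N" + nf.lower()

assert service_based_interface("AMF") == "Namf"
assert service_based_interface("SMF") == "Nsmf"
```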
As will be understood by those of skill in the art, the IMS112includes various IMS entities such as, for example, a Proxy Call Session Control Function (P-CSCF)224, an Interrogating Call Session Control Function (I-CSCF)226, a Serving Call Session Control Function (S-CSCF)228, an Access Transfer Control Function (ATCF)230, and an Access Gateway (AGW)232. The operational details of the P-CSCF224, the I-CSCF226, the S-CSCF228, the ATCF230, and the AGW232are well known to those of skill in the art and are therefore not described here. Now turning toFIG.3, as will be appreciated by one of skill in the art, the EPC300includes a number of core network entities such as, e.g., a Serving Gateway (S-GW)302, a P-GW304, an MME306, an HSS308, and a Policy and Charging Rules Function (PCRF)310. The operational details of the S-GW302, the P-GW304, the MME306, the HSS308, and the PCRF310are well known to those of skill in the art and therefore are not repeated here. In addition, the EPC300includes an ePDG312that provides an interface between the EPC300and the N3GPP AN106. While not necessary for understanding the present disclosure, for additional details regarding the ePDG312, the interested reader is directed to 3GPP TS 23.402. Now, turning to embodiments of the present disclosure. In a first embodiment, a UE (e.g., a UE102) includes an indication of the incoming access network (e.g., an indication of the incoming access network type as, e.g., a WLAN access network) during attachment (with handover indication) or Protocol Data Unit (PDU) session establishment (e.g., in addition to a handover indication indicating whether the handover is from an N3GPP or 3GPP access network) with the core network, depending on the case. This allows the network to decide whether to accept or reject the handover based on the incoming access network. In some embodiments, the network allows the attach (with handover indication) only if the UE provides the indication of the incoming access network, such that an attach (with handover indication) from a non-supporting UE will always be rejected. A supporting UE may provide the indication of the incoming access network only in the case that the procedure is for the purpose of handover. FIG.4illustrates one example implementation of at least some aspects of the first embodiment of the present disclosure. Note that while this example uses EPC as the core network, the same principle applies to a 5GC, regardless of the handover target being a 5GC or EPC. The steps illustrated inFIG.4are as follows. In step400, UEa while roaming attaches to a WLAN, establishes a Virtual Private Network (VPN) with the Home Public Mobile Network (HPMN), then establishes a Voice over LTE (VoLTE) session with UEb. In step402, UEa decides to handover the session to 3GPP access, which can be EPC or 5GC. In this example, it is assumed EPC. In step404, UEa starts an initial attach for handover purposes by sending an initial attach request. In this example, the initial attach request includes a handover (HO) indication indicating that the initial attach request is for a handover. This is currently described in 3GPP TS 23.401 in the initial attach procedure and 3GPP TS 23.502, where the HO indication is mandatory for handover from N3GPP access to 3GPP access. In 3GPP TS 23.502, the HO indication is also used in case of interworking with N26 for mobility between 3GPP accesses.
In this embodiment, in addition to the HO indication, the initial attach request includes an indication (referred to herein as an incoming access indication) of the incoming access network from which the handover is performed. In some embodiments, the incoming access indication is an indication of a network type of the incoming access network. In this example, the incoming access network is a WLAN and, as such, the incoming access indication is an indication that the incoming access network is a WLAN. Note that while the present disclosure describes the HO indication and, in some embodiments, the incoming access indication as being included within the initial attach request (or some other message such as a PDN connection establishment request), it should be understood that the HO indication and, when applicable, the incoming access indication can be more generally understood as being included within a Non-Access Stratum (NAS) message (e.g., an initial attach request in which the HO indication and the incoming access indication are included in a PDN connection request embedded within the initial attach request). Note that, in order to maintain backward compatibility, if the initial attach request in step404includes both the HO indication and the incoming access indication, then the HO indication in the initial attach request is interpreted as a handover of UEa from an incoming access network of the type indicated by the incoming access indication. Otherwise, if the initial attach request includes the HO indication but not an incoming access indication, then the HO indication is interpreted as an indication that the initial attach request is for a handover of UEa from WLAN to 3GPP access (i.e., the current meaning of the HO indication). In step406, upon receiving the initial attach request from UEa, the MME validates whether UEa is from a domain with which a roaming agreement exists. If a roaming agreement exists, the MME further validates whether the handover from the particular (type of) access network indicated by the incoming access indication (which in this example is WLAN) is allowed, e.g., based on operator policy, i.e., if the handover from the incoming domain/access network is allowed. If handover is allowed, then, in this example, the MME makes a decision to allow the handover. Otherwise, the MME makes a decision to reject the handover. Note that while in this example, the MME decides whether to allow or reject the handover based on the incoming access indication, the present disclosure is not limited thereto. The decision as to whether to allow or reject the handover may be made by any suitable entity in the VPMN. Further, whether the decision is made by the MME or some other entity in the VPMN, the decision is made based on the incoming access indication and, optionally, one or more additional criteria (e.g., Access Point Name (APN)) associated with the handover (e.g., the APN provided by UEa during PDN connection establishment during the handover, or whether the VPMN supports Voice over IMS (VoIMS)). In step408, if the VPMN decides to reject the handover, the VPMN (e.g., the MME) rejects the handover gracefully, e.g., rejects the attachment with a special code that indicates that the rejection is due to a “bad” incoming access network (type), e.g., such that this failed attachment/handover is not included in the KPIs. If the VPMN decides to allow the handover, then the handover is completed, e.g., in the conventional manner. 
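To make the decision flow of steps404through408concrete, the following is a minimal, non-normative Python sketch. The type names, the cause-code enum, and the policy sets are illustrative assumptions and not part of any 3GPP message definition; only the branching logic (backward-compatible interpretation of the HO indication, roaming-agreement check, graceful rejection) comes from the description above.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class AccessType(Enum):
    WLAN = auto()
    OTHER_NON_3GPP = auto()   # placeholder for other incoming access types

class Decision(Enum):
    ALLOW = auto()
    REJECT = auto()            # ordinary rejection (e.g., no roaming agreement)
    REJECT_GRACEFUL = auto()   # special cause: non-allowed incoming access network

@dataclass
class AttachRequest:
    ho_indication: bool
    incoming_access: Optional[AccessType]   # None for a legacy (non-supporting) UE
    home_domain: str

def incoming_access_of(req: AttachRequest) -> Optional[AccessType]:
    """Backward-compatible reading of the HO indication (step404)."""
    if not req.ho_indication:
        return None
    # With an incoming access indication, the HO indication means handover from
    # that access; without one, it keeps its legacy meaning: handover from WLAN.
    return req.incoming_access if req.incoming_access is not None else AccessType.WLAN

def decide(req: AttachRequest, roaming_partners: set, allowed_incoming: set) -> Decision:
    """MME-side validation sketch for steps406and408."""
    if req.home_domain not in roaming_partners:
        return Decision.REJECT
    incoming = incoming_access_of(req)
    if incoming is not None and incoming not in allowed_incoming:
        # Graceful rejection, so the failed handover is kept out of the KPIs.
        return Decision.REJECT_GRACEFUL
    return Decision.ALLOW
```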
However, in some embodiments, if handover is allowed, the VPMN (e.g., the MME) includes the incoming access indication in generated Charging Data Records (CDRs) so the session can be charged as a roaming session, given that it started as a non-roaming session over WLAN. In a second embodiment of the present disclosure, the UE stores access information (i.e., information regarding the access network such as, e.g., an indication of the access network type) that the UE is currently using. Then, during a handover, the core network retrieves the incoming access indication from the UE. Note that whileFIG.4illustrates one example of including the incoming access indication in the initial attach request for the handover, the present disclosure is not limited thereto. In some other embodiments, the incoming access indication (e.g., and the HO indication) is included in a PDU session establishment request from UEa during PDU session establishment for the handover. FIG.5illustrates one example implementation of at least some aspects of the second embodiment of the present disclosure. Note that while this example uses EPC as the core network, the same principle applies to a 5GC, regardless of the handover target being a 5GC or EPC. The steps illustrated inFIG.5are as follows. In step500, UEa while roaming attaches to a WLAN, establishes a VPN with the HPMN, then establishes a VoLTE session with UEb. In step502, UEa stores the current access information. This information includes information that indicates the network type of the current access network, which is WLAN in this example. In step504, UEa decides to handover the session to 3GPP access, which can be EPC or 5GC. In this example, it is assumed EPC. In step506, UEa starts an initial attach for handover purposes by sending an initial attach request. In this embodiment, the initial attach request includes the HO indication but not the incoming access indication. In step508, the MME fetches from UEa (e.g., sends a request to UEa for) the incoming access information. In this example, the incoming access information is the incoming access indication. In one example embodiment, the existing identity request is extended with this additional capability, but other options are also possible. In step510, UEa sends the incoming access information (e.g., the incoming access indication) to the MME. Note that, in order to maintain backward compatibility, if the UE returns incoming access information in step510, then the HO indication in the initial attach request is interpreted as a handover of UEa from the incoming access indicated by the incoming access information; otherwise, if UEa does not return incoming access information in response to the fetch, then the HO indication is interpreted as an indication that the initial attach request is for a handover of UEa from WLAN to 3GPP access (i.e., the current meaning of the HO indication). In step512, the MME validates whether UEa is from a domain with which a roaming agreement exists. If a roaming agreement exists, the MME further validates whether handover from the incoming access network (type) is allowed, e.g., based on operator policy, i.e., if the handover from the incoming domain/access network is allowed. If handover is allowed, then, in this example, the MME makes a decision to allow the handover. Otherwise, the MME makes a decision to reject the handover.
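The UE side of steps502and508through510can likewise be sketched. The class and method names below are hypothetical, and the extended identity request is modeled as a simple flag; a legacy UE simply returns nothing, triggering the fallback interpretation described above.

```python
from typing import Optional

class SupportingUE:
    """Non-normative sketch of the UE side of the second embodiment."""

    def __init__(self) -> None:
        self.current_access: Optional[str] = None

    def attach(self, access_type: str) -> None:
        # Step502: store information about the access currently in use.
        self.current_access = access_type

    def on_identity_request(self, wants_incoming_access: bool) -> Optional[str]:
        # Steps508-510: if the (extended) identity request asks for the
        # incoming access information, return it. A legacy UE would return
        # nothing, and the MME then falls back to the legacy interpretation
        # of the HO indication (handover from WLAN).
        return self.current_access if wants_incoming_access else None

ue = SupportingUE()
ue.attach("WLAN")
assert ue.on_identity_request(wants_incoming_access=True) == "WLAN"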
Note that while in this example, the MME decides whether to allow or reject the handover based on the incoming access indication, the present disclosure is not limited thereto. The decision as to whether to allow or reject the handover may be made by any suitable entity in the VPMN. Further, whether the decision is made by the MME or some other entity in the VPMN, the decision is made based on the incoming access indication and, optionally, one or more additional criteria (e.g., APN) associated with the handover (e.g., the APN provided by UEa during PDN connection establishment during the handover, or whether the VPMN supports VoIMS). In step514, if the VPMN decides to reject the handover, the VPMN (e.g., the MME) rejects the handover gracefully, e.g., rejects the attachment with a special code that indicates that the rejection is due to a “bad” incoming access network (type), e.g., such that this failed attachment/handover is not included in the KPIs. If the VPMN decides to allow the handover, then the handover is completed, e.g., in the conventional manner. However, in some embodiments, if handover is allowed, the VPMN (e.g., the MME) includes the incoming access indication in generated CDRs so the session can be charged as a roaming session, given that it started as a non-roaming session over WLAN. Note that whileFIG.5illustrates one example of fetching the incoming access indication during attachment for the handover, the present disclosure is not limited thereto. In some other embodiments, the incoming access indication is fetched from UEa during a PDU session establishment for the handover. In a third embodiment of the present disclosure, the HSS (or HSS/UDM) in the UE's HPMN stores information about the access network of the UE, e.g., together with the P-GW identity, and the HSS provides this information to the MME (or AMF) when the MME receives information on the PDNs that the UE is connected to over the non-3GPP access in the subscriber data obtained from the HSS. FIG.6illustrates one example implementation of at least some aspects of the third embodiment of the present disclosure. Note that while this example uses EPC as the core network, the same principle applies to a 5GC, regardless of the handover target being a 5GC or EPC. In essence, the process ofFIG.6is a normal attach with handover, the difference being that the incoming access network information (e.g., the incoming access indication) is returned with the subscriber profile. This option requires that there is a roaming agreement with the home domain. In step600, UEa while roaming attaches to a WLAN, establishes a VPN with the HPMN, then establishes a VoLTE session with UEb. In step602, UEa decides to handover the session to 3GPP access, which can be EPC or 5GC. In this example, it is assumed EPC. In step604, UEa starts an initial attach for handover purposes by sending an initial attach request. In this embodiment, the initial attach request includes a HO indication but not an incoming access indication. In step606, upon receiving the initial attach request from UEa, the MME validates whether UEa is from a domain with which a roaming agreement exists. In step608, if a roaming agreement exists, the MME fetches a subscriber profile of UEa from the HSS. In step610, the HSS returns the subscriber profile of UEa to the MME. In this embodiment, the subscriber profile includes the incoming access indication that indicates the incoming access network (type) for the handover. 
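A minimal sketch of the third embodiment's HSS/MME interaction (steps608through614) follows, assuming a plain dictionary as the subscriber profile store; all field names and return strings are illustrative only.

```python
def hss_store_access(profiles: dict, imsi: str, pgw_id: str, access_type: str) -> None:
    # The HSS records the access the UE is using together with the P-GW identity.
    profiles.setdefault(imsi, {}).update(pgw_id=pgw_id, incoming_access=access_type)

def mme_handle_attach(profiles: dict, imsi: str, allowed_incoming: set) -> str:
    profile = profiles[imsi]                  # steps608-610: fetch subscriber profile
    if profile.get("incoming_access") not in allowed_incoming:
        return "reject gracefully"            # step614: special rejection cause
    return "allow"                            # step612: handover permitted by policy

profiles: dict = {}
hss_store_access(profiles, imsi="001010123456789", pgw_id="pgw-1", access_type="WLAN")
assert mme_handle_attach(profiles, "001010123456789", allowed_incoming={"WLAN"}) == "allow"
```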
Note that while steps608and610involve the HSS in the example ofFIG.6, the present disclosure is not limited thereto. This functionality may additionally or alternatively be performed at the PCRF or PCF. In step612, the MME further validates whether the handover from the particular (type of) access network indicated by the incoming access indication (which in this example is WLAN) is allowed, e.g., based on operator policy, i.e., if the handover from the incoming domain/access network is allowed. If handover is allowed, then, in this example, the MME makes a decision to allow the handover. Otherwise, the MME makes a decision to reject the handover. Note that while in this example, the MME decides whether to allow or reject the handover based on the incoming access indication, the present disclosure is not limited thereto. The decision as to whether to allow or reject the handover may be made by any suitable entity in the VPMN. Further, whether the decision is made by the MME or some other entity in the VPMN, the decision is made based on the incoming access indication and, optionally, one or more additional criteria (e.g., APN) associated with the handover (e.g., the APN provided by UEa during PDN connection establishment during the handover, or whether the VPMN supports VoIMS). In step614, if the VPMN decides to reject the handover, the VPMN (e.g., the MME) rejects the handover gracefully, e.g., rejects the attachment with a special code that indicates that the rejection is due to a “bad” incoming access network (type), e.g., such that this failed attachment/handover is not included in the KPIs. If the VPMN decides to allow the handover, then the handover is completed, e.g., in the conventional manner. However, in some embodiments, if handover is allowed, the VPMN (e.g., the MME) includes the incoming access indication in generated CDRs so the session can be charged as a roaming session, given that it started as a non-roaming session over WLAN. Note that whileFIG.6illustrates one example of obtaining the incoming access indication from the HPMN during initial attach, the present disclosure is not limited thereto. In some other embodiments, the incoming access indication (e.g., and the HO indication) is obtained from the HPMN during PDU session establishment for the handover. Lawful intercept is a desired function in wireless communication systems. In this regard, the discussion now turns to additional functionality added to any of the embodiments described above to deal with intercept for a UE engaged in a session that starts over WiFi with an ePDG at home (i.e., in the UE's home PLMN) and that can potentially be handed over to 3GPP access in the visited domain (i.e., in the VPMN). More specifically, in some embodiments, if the UE that established a WiFi session with an ePDG in the UE's HPMN is subject to lawful intercept, then the following can apply:
1. If the UE is intercepted in the UE's HPMN in either the P-GW or the IMS AGW, then nothing is required.
2. If the UE is intercepted in the ePDG, then the HPMN updates the UE configuration to configure the UE to disallow handover to 3GPP (since it is not possible to intercept the UE in the VPMN once the session is handed over to the VPMN). However, HPMN operator policies do apply here as well to control such a configuration. If the HPMN does not have any agreement with the VPMN operator to intercept inbound roamers to the VPMN, then the HPMN may allow handover and do nothing.
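For illustration only, the two scenarios above can be condensed into a small rule. The intercept-point strings and the policy flag standing in for HPMN operator policy are assumptions; the sketch simply mirrors the text, not any defined interface.

```python
# Non-normative condensation of scenarios 1 and 2 above.
def hpmn_intercept_action(intercept_point: str, policy_permits_handover: bool) -> str:
    # Scenario 1: interception in the HPMN P-GW or IMS AGW survives the
    # handover, so nothing is required.
    if intercept_point in ("P-GW", "IMS AGW"):
        return "no action"
    # Scenario 2: interception in the ePDG is lost once the session is handed
    # over to the VPMN, so the HPMN configures the UE to disallow the handover
    # unless operator policy (e.g., no agreement with the VPMN to intercept
    # inbound roamers) says to allow the handover and do nothing.
    if intercept_point == "ePDG" and not policy_permits_handover:
        return "configure UE to disallow handover to 3GPP"
    return "no action"
```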
Importantly, it should be noted that the embodiments described here related to lawful intercept are described using a 4G system, i.e., EPC comprising an ePDG; however, the same embodiments apply to a 5G system comprising a 5GC network that includes the N3IWF, which is equivalent to the ePDG in 4G. Looking back at the embodiments ofFIGS.4,5, and6, the additional functionality to deal with lawful intercept can be included in steps400,500, and600, respectively.FIG.7illustrates the details of steps400,500, and600to provide this additional functionality in scenario #1 described above (i.e., the scenario in which the UE is intercepted in the UE's HPMN in either the P-GW or the IMS AGW). As illustrated, UEa establishes IP Security (IPSec) with the ePDG per existing procedures (e.g., existing procedures in 3GPP TS 24.302) (step700). UEa then sends a Session Initiation Protocol (SIP) INVITE to the IMS (e.g., to an IMS Call Session Control Function (CSCF) node) (step701). A series of messages (i.e., 180 Ringing, SIP 200 OK, and Acknowledgement (ACK)) are then exchanged between UEa and the IMS CSCF nodes in the conventional manner (steps702through704). At this point, an IMS session has been established. The IMS CSCF node(s) detects that the IMS session is subject to lawful intercept and instructs the P-GW or IMS AGW to perform interception and provide needed information for that purpose, e.g., in the conventional manner (step705). In this case, since the UE is intercepted in the UE's HPMN in either the P-GW or the IMS AGW, then nothing additional is required. FIG.8illustrates the details of steps400,500, and600to provide this additional functionality in scenario #2 described above (i.e., the scenario in which the UE is intercepted in the UE's HPMN in the ePDG). As illustrated, UEa establishes IPSec with the ePDG per existing procedures (e.g., existing procedures in 3GPP TS 24.302) (step800). UEa then sends a SIP INVITE to the IMS (e.g., to an IMS CSCF node) (step801). A series of messages (i.e., 180 Ringing, SIP 200 OK, and ACK) are then exchanged between UEa and the IMS CSCF nodes in the conventional manner (steps802through804). At this point, an IMS session has been established. The IMS CSCF node(s) detects that the IMS session is subject to lawful intercept in the ePDG (step805). The IMS CSCF node(s) instructs the ePDG to start lawful interception for the session (step806). The ePDG then sends a request to a UE management node to configure UEa to disallow handover to 3GPP from WLAN (step807) and performs lawful intercept (step808). The UE management node then configures UEa to disallow handover (step809). Once handover is disallowed, UEa will not allow the handover to 3GPP in the VPMN as described above with respect to steps402-408ofFIG.4, steps504-514ofFIG.5, and steps602-614ofFIG.6. FIG.9is a schematic block diagram of a network node900according to some embodiments of the present disclosure. The network node900may be, for example, a core network node (e.g., an HSS, an MME, etc.) or a network node implementing a core network function (e.g., an AMF, SMF, SCEF, etc.). As illustrated, the network node900includes one or more processors904(e.g., Central Processing Units (CPUs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and/or the like), memory906, and a network interface908. The one or more processors904are also referred to herein as processing circuitry.
The one or more processors904operate to provide one or more functions of a network node900as described herein. In some embodiments, the function(s) are implemented in software that is stored, e.g., in the memory906and executed by the one or more processors904. FIG.10is a schematic block diagram that illustrates a virtualized embodiment of the network node900according to some embodiments of the present disclosure. This discussion is equally applicable to other types of network nodes. Further, other types of network nodes may have similar virtualized architectures. As used herein, a “virtualized” network node is an implementation of the network node900in which at least a portion of the functionality of the network node900is implemented as a virtual component(s) (e.g., via a virtual machine(s) executing on a physical processing node(s) in a network(s)). As illustrated, in this example, the network node900includes one or more processing nodes1000coupled to or included as part of a network(s)1002. Each processing node1000includes one or more processors1004(e.g., CPUs, ASICs, FPGAs, and/or the like), memory1006, and a network interface1008. In this example, functions1010of the network node900described herein are implemented at the one or more processing nodes1000or distributed across two or more processing nodes1000in any desired manner. In some particular embodiments, some or all of the functions1010of the network node900described herein are implemented as virtual components executed by one or more virtual machines implemented in a virtual environment(s) hosted by the processing node(s)1000. In some embodiments, a computer program including instructions which, when executed by at least one processor, causes the at least one processor to carry out the functionality of the network node900or a node according to any of the embodiments described herein is provided. In some embodiments, a carrier comprising the aforementioned computer program product is provided. The carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage medium (e.g., a non-transitory computer readable medium such as memory). FIG.11is a schematic block diagram of the network node900according to some other embodiments of the present disclosure. The network node900includes one or more modules1100, each of which is implemented in software. The module(s)1100provide the functionality of the network node900described herein. This discussion is equally applicable to the processing node1000ofFIG.10where the modules1100may be implemented at one of the processing nodes1000or distributed across multiple processing nodes1000. FIG.12is a schematic block diagram of a UE1200according to some embodiments of the present disclosure. As illustrated, the UE1200includes one or more processors1202(e.g., CPUs, ASICs, FPGAs, and/or the like), memory1204, and one or more transceivers1206each including one or more transmitters1208and one or more receivers1210coupled to one or more antennas1212. The transceiver(s)1206includes radio front-end circuitry connected to the antenna(s)1212that is configured to condition signals communicated between the antenna(s)1212and the processor(s)1202, as will be appreciated by one of ordinary skill in the art. The processors1202are also referred to herein as processing circuitry. The transceivers1206are also referred to herein as radio circuitry.
In some embodiments, the functionality of the UE1200described above may be fully or partially implemented in software that is, e.g., stored in the memory1204and executed by the processor(s)1202. Note that the UE1200may include additional components not illustrated inFIG.12such as, e.g., one or more user interface components (e.g., an input/output interface including a display, buttons, a touch screen, a microphone, a speaker(s), and/or the like and/or any other components for allowing input of information into the UE1200and/or allowing output of information from the UE1200), a power supply (e.g., a battery and associated power circuitry), etc. In some embodiments, a computer program including instructions which, when executed by at least one processor, causes the at least one processor to carry out the functionality of the UE1200according to any of the embodiments described herein is provided. In some embodiments, a carrier comprising the aforementioned computer program product is provided. The carrier is one of an electronic signal, an optical signal, a radio signal, or a computer readable storage medium (e.g., a non-transitory computer readable medium such as memory). FIG.13is a schematic block diagram of the UE1200according to some other embodiments of the present disclosure. The UE1200includes one or more modules1300, each of which is implemented in software. The module(s)1300provide the functionality of the UE1200described herein. Any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include Digital Signal Processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as Read Only Memory (ROM), Random Access Memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein. In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure. While processes in the figures may show a particular order of operations performed by certain embodiments of the present disclosure, it should be understood that such order is exemplary (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.).
Some example (and non-limiting) embodiments of the present disclosure are as follows:Embodiment 1: A method performed by a wireless device to perform handover from an incoming access network to a cellular access network of a visited cellular network while in roaming, the method comprising sending a message to a core network of a visited cellular network, wherein the message is related to a handover of the wireless device from an incoming access network to a cellular access network of the visited cellular network and the message comprises an incoming access indication wherein the incoming access indication is an indication of the incoming access network from which the handover of the wireless device is desired.Embodiment 2: The method of embodiment 1 wherein the message is a NAS message (e.g., an initial attach request, e.g., in which a PDN connection request is embedded within the initial attach request), and the NAS message comprises the incoming access indication (e.g., the incoming access indication is included in the PDN connection request embedded within the initial attach request).Embodiment 3: The method of embodiment 1 wherein the message is a NAS message (e.g., an initial attach request, e.g., in which a PDN connection request is embedded within the initial attach request), and the NAS message comprises the incoming access indication and a handover indication (e.g., the handover indication and the incoming access indication are included in the PDN connection request embedded within the initial attach request), the handover indication being an indication that the initial attach request is for a handover from a non-cellular access network (e.g., a non-3GPP radio access network).Embodiment 4: The method of embodiment 1 wherein the message is a PDU session establishment request.Embodiment 5: The method of any one of embodiments 1 to 4 wherein the incoming access indication is an indication of a network type of the incoming access network.Embodiment 6: The method of any one of embodiments 1 to 5 wherein the incoming access network is a WLAN access network.Embodiment 7: The method of any one of embodiments 1 to 6 further comprising receiving a second message indicating handover failure based on the access network type of the incoming access network.Embodiment 8: A method performed by a wireless device to perform handover from an incoming access network to a cellular access network of a visited cellular network while in roaming, the method comprising: sending a request to a core network of a visited cellular network, the request being related to a handover of the wireless device from an incoming access network to a cellular access network of the visited cellular network; receiving, from the core network of the visited cellular network, a request for information regarding the incoming access network; and, upon receiving the request, sending, to the core network of the visited cellular network, a message comprising an incoming access indication wherein the incoming access indication is an indication of the incoming access network from which the handover of the wireless device is desired.Embodiment 9: The method of embodiment 8 wherein the request sent to the core network of the visited cellular network is a NAS message (e.g., an initial attach request).Embodiment 10: The method of embodiment 9 wherein the NAS message comprises a handover indication (e.g., the handover indication being in a PDN connection request embedded within the initial attach request), the handover indication being an 
indication that the initial attach request is for a handover from a non-cellular access network.Embodiment 11: The method of embodiment 8 wherein the message is a PDU session establishment request.Embodiment 12: The method of any one of embodiments 8 to 11 wherein the incoming access indication is an indication of a network type of the incoming access network.Embodiment 13: The method of any one of embodiments 8 to 12 wherein the incoming access network is a WLAN access network.Embodiment 14: The method of any one of embodiments 8 to 13 further comprising receiving a second message indicating handover failure based on the access network type of the incoming access network.Embodiment 15: A wireless device for performing a handover from an incoming access network to a cellular access network of a visited cellular network while in roaming, the wireless device adapted to perform the method of any one of embodiments 1 to 14.Embodiment 16: A wireless device for performing a handover from an incoming access network to a cellular access network of a visited cellular network while in roaming, the wireless device comprising a radio interface and processing circuitry operable to cause the wireless device to perform the method of any one of embodiments 1 to 14.Embodiment 17: A method performed by a core network node in a visited cellular network of a wireless device to accept or reject handover of the wireless device from an incoming access network to a cellular access network of the visited cellular network while the wireless device is in roaming, the method comprising:receiving a message, wherein:the message is related to a handover of a wireless device from an incoming access network to a cellular access network of the visited cellular network; andthe message comprises an incoming access indication wherein the incoming access indication is an indication of the incoming access network from which the handover of the wireless device is desired; andmaking a decision as to whether to accept or reject the handover based on the incoming access indication.Embodiment 18: The method of embodiment 17 wherein the message is a NAS message (e.g., an initial attach request, e.g., in which a PDN connection request is embedded within the initial attach request), and the NAS message comprises the incoming access indication (e.g., the incoming access indication is included in the PDN connection request embedded within the initial attach request).Embodiment 19: The method of embodiment 17 wherein the message is a NAS message (e.g., an initial attach request, e.g., in which a PDN connection request is embedded within the initial attach request), and the NAS message comprises the incoming access indication and a handover indication (e.g., the handover indication and the incoming access indication are included in the PDN connection request embedded within the initial attach request), the handover indication being an indication that the initial attach request is for a handover from a non-cellular access network (e.g., a non-3GPP radio access network).Embodiment 20: The method of embodiment 17 wherein the message is a PDU session establishment request from the wireless device.Embodiment 21: The method of any one of embodiments 17 to 20 wherein the incoming access indication is an indication of a network type of the incoming access network.Embodiment 22: The method of any one of embodiments 17 to 21 wherein the incoming access network is a WLAN access network.Embodiment 23: The method of any one of embodiments 17 to 22 
wherein making the decision as to whether to accept or reject the handover based on the incoming access indication comprises deciding to accept the handover based on the incoming access information and, upon successful handover, storing the incoming access indication in one or more associated charging data records.Embodiment 24: The method of any one of embodiments 17 to 22 wherein making the decision as to whether to accept or reject the handover based on the incoming access indication comprises deciding to reject the handover based on the incoming access information and, upon deciding to reject the handover, rejecting the handover using a code that indicates that the handover failed due to a non-allowed incoming access network.Embodiment 25: The method of any one of embodiments 17 to 22 wherein making the decision as to whether to accept or reject the handover based on the incoming access indication comprises making the decision as to whether to accept or reject the handover based on the incoming access indication and one or more additional criteria.Embodiment 26: The method of embodiment 25 wherein the one or more additional criteria comprise an indication of one of a PDN or a service the wireless device is requesting access.Embodiment 27: The method of embodiment 26 wherein the indication comprises an APN associated with the handover.Embodiment 28: A method performed by a core network node in a visited cellular network of a wireless device to accept or reject handover of the wireless device from an incoming access network to a cellular access network of the visited cellular network while the wireless device is in roaming, the method comprising: receiving a request from a wireless device, the request being related to a handover of the wireless device from an incoming access network to a cellular access network of the visited cellular network; sending, to the wireless device, a request for information regarding the incoming access network; receiving, from the wireless device, a message comprising an incoming access indication wherein the incoming access indication is an indication of the incoming access network from which the handover of the wireless device is desired; and making a decision as to whether to accept or reject the handover based on the incoming access indication.Embodiment 29: The method of embodiment 28 wherein the request received from the wireless device is a NAS message (e.g., an initial attach request).Embodiment 30: The method of embodiment 29 wherein the NAS message comprises a handover indication (e.g., the handover indication being in a PDN connection request embedded within the initial attach request), the handover indication being an indication that the initial attach request is for a handover from a non-cellular access network.Embodiment 31: The method of embodiment 28 wherein the message is a PDU session establishment request from the wireless device.Embodiment 32: The method of any one of embodiments 28 to 31 wherein the incoming access indication is an indication of a network type of the incoming access network.Embodiment 33: The method of any one of embodiments 28 to 32 wherein the incoming access network is a WLAN access network.Embodiment 34: The method of any one of embodiments 28 to 33 wherein: making the decision as to whether to accept or reject the handover based on the incoming access indication comprises deciding to accept the handover based on the incoming access information; and, upon successful handover, storing the incoming access indication in 
one or more associated charging data records.Embodiment 35: The method of any one of embodiments 28 to 33 wherein: making the decision as to whether to accept or reject the handover based on the incoming access indication comprises deciding to reject the handover based on the incoming access information; and, upon deciding to reject the handover, rejecting the handover using a code that indicates that the handover failed due to a non-allowed incoming access network.Embodiment 36: The method of any one of embodiments 28 to 33 wherein making the decision as to whether to accept or reject the handover based on the incoming access indication comprises making the decision as to whether to accept or reject the handover based on the incoming access indication and one or more additional criteria.Embodiment 37: The method of embodiment 36 wherein the one or more additional criteria comprise an indication of one of a PDN or a service the wireless device is requesting access.Embodiment 38: The method of embodiment 37 wherein the indication comprises an APN associated with the handover.Embodiment 39: A method performed by a core network node in a visited cellular network of a wireless device to accept or reject handover of the wireless device from an incoming access network to a cellular access network of the visited cellular network while the wireless device is in roaming, the method comprising: receiving a request from a wireless device, the request being related to a handover of the wireless device from an incoming access network to a cellular access network of the visited cellular network; sending, to a home cellular network of the wireless device, a request for information regarding the wireless device; receiving, from the home cellular network of the wireless device, a message comprising an incoming access indication wherein the incoming access indication is an indication of the incoming access network from which the handover of the wireless device is desired; and making a decision as to whether to accept or reject the handover based on the incoming access indication.Embodiment 40: The method of embodiment 39 wherein the request received from the wireless device is a NAS message (e.g., an initial attach request).Embodiment 41: The method of embodiment 40 wherein the NAS message comprises a handover indication (e.g., the handover indication being in a PDN connection request embedded within the initial attach request), the handover indication being an indication that the initial attach request is for a handover from a non-cellular access network.Embodiment 42: The method of embodiment 39 wherein the message is a PDU session establishment request from the wireless device.Embodiment 43: The method of any one of embodiments 39 to 42 wherein the incoming access indication is an indication of a network type of the incoming access network.Embodiment 44: The method of any one of embodiments 39 to 43 wherein the incoming access network is a WLAN access network.Embodiment 45: The method of any one of embodiments 39 to 44 wherein: making the decision as to whether to accept or reject the handover based on the incoming access indication comprises deciding to accept the handover based on the incoming access information; and, upon successful handover, storing the incoming access indication in one or more associated charging data records.Embodiment 46: The method of any one of embodiments 39 to 44 wherein: making the decision as to whether to accept or reject the handover based on the incoming access indication 
comprises deciding to reject the handover based on the incoming access information; and, upon deciding to reject the handover, rejecting the handover using a code that indicates that the handover failed due to a non-allowed incoming access network.Embodiment 47: The method of any one of embodiments 39 to 44 wherein making the decision as to whether to accept or reject the handover based on the incoming access indication comprises making the decision as to whether to accept or reject the handover based on the incoming access indication and one or more additional criteria.Embodiment 48: The method of embodiment 43 wherein the one or more additional criteria comprise an indication of one of a PDN or a service the wireless device is requesting access.Embodiment 49: The method of embodiment 48 wherein the indication comprises an APN associated with the handover.Embodiment 50: The method of any one of embodiments 39 to 49 wherein the request for information is a request for a subscriber profile of the wireless device.Embodiment 51: A core network node for a visited cellular network of a wireless device for accepting or rejecting handover of the wireless device from an incoming access network to a cellular access network of the visited cellular network while the wireless device is in roaming, the core network node adapted to perform the method of any one of embodiments 17 to 50.Embodiment 52: A core network node for a visited cellular network of a wireless device for accepting or rejecting handover of the wireless device from an incoming access network to a cellular access network of the visited cellular network while the wireless device is in roaming, the core network node comprising processing circuitry operable to case the core network node to perform the method of any one of embodiments 17 to 50.Embodiment 53: A method performed by a network node in a home cellular network of a wireless device to provide information during handover of the wireless device from an incoming access network to a cellular access network of a visited cellular network of the wireless device while the wireless device is in roaming, the method comprising: receiving, from the visited cellular network of the wireless device, a request for information regarding the wireless device; and sending, to the visited cellular network of the wireless device, a message comprising an incoming access indication wherein the incoming access indication is an indication of the incoming access network from which the handover of the wireless device is desired.Embodiment 54: A network node for a home cellular network of a wireless device for providing information during handover of the wireless device from an incoming access network to a cellular access network of a visited cellular network of the wireless device while the wireless device is in roaming, the network node adapted to: receive, from the visited cellular network of the wireless device, a request for information regarding the wireless device; and send, to the visited cellular network of the wireless device, a message comprising an incoming access indication wherein the incoming access indication is an indication of the incoming access network from which the handover of the wireless device is desired.Embodiment 55: A network node for a home cellular network of a wireless device for providing information during handover of the wireless device from an incoming access network to a cellular access network of a visited cellular network of the wireless device while the wireless device 
Embodiment 55: A network node for a home cellular network of a wireless device for providing information during handover of the wireless device from an incoming access network to a cellular access network of a visited cellular network of the wireless device while the wireless device is in roaming, the network node comprising processing circuitry operable to cause the network node to: receive, from the visited cellular network of the wireless device, a request for information regarding the wireless device; and send, to the visited cellular network of the wireless device, a message comprising an incoming access indication, wherein the incoming access indication is an indication of the incoming access network from which the handover of the wireless device is desired.

Embodiment 56: A method performed by an ePDG/N3IWF to disallow handover of a wireless device from an incoming access network to a cellular access network of a visited cellular network while in roaming when the wireless device is subject to lawful intercept, the method comprising: receiving (806) an instruction to perform lawful intercept for the wireless device; and sending (808), to a management node, a request to configure the wireless device to disallow handover from the incoming access network to the cellular access network of the visited cellular network.

Embodiment 57: The method of embodiment 56 wherein the incoming access network is a WLAN access network.

Embodiment 58: A method performed by a management node to disallow handover of a wireless device from an incoming access network to a cellular access network of a visited cellular network while in roaming when the wireless device is subject to lawful intercept, the method comprising: receiving (808), from another node, a request to configure the wireless device to disallow handover from the incoming access network to the cellular access network of the visited cellular network; and configuring (808) the wireless device to disallow handover from the incoming access network to the cellular access network of the visited cellular network.

Embodiment 59: The method of embodiment 58 wherein the other node is an ePDG/N3IWF.

Embodiment 60: The method of embodiment 58 or 59 wherein the incoming access network is a WLAN access network.

Embodiment 61: An ePDG/N3IWF for disallowing handover of a wireless device from an incoming access network to a cellular access network of a visited cellular network while in roaming when the wireless device is subject to lawful intercept, the ePDG/N3IWF adapted to: receive an instruction to perform lawful intercept for the wireless device; and send, to a management node, a request to configure the wireless device to disallow handover from the incoming access network to the cellular access network of the visited cellular network.

Embodiment 62: The ePDG/N3IWF of embodiment 61 wherein the incoming access network is a WLAN access network.

Embodiment 63: A management node for disallowing handover of a wireless device from an incoming access network to a cellular access network of a visited cellular network while in roaming when the wireless device is subject to lawful intercept, the management node adapted to: receive, from another node, a request to configure the wireless device to disallow handover from the incoming access network to the cellular access network of the visited cellular network; and configure the wireless device to disallow handover from the incoming access network to the cellular access network of the visited cellular network.

Embodiment 64: The management node of embodiment 63 wherein the other node is an ePDG/N3IWF.

Embodiment 65: The management node of embodiment 63 or 64 wherein the incoming access network is a WLAN access network.
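By way of illustration only, the lawful-intercept handling of Embodiments 56 to 60 can be sketched as follows. This non-normative Python sketch assumes the class names, the configuration key, and the message format; the comments reference the step numerals 806 and 808 recited above.

```python
# Non-normative sketch of Embodiments 56-60: on a lawful-intercept
# instruction, the ePDG/N3IWF asks a management node to configure the
# device so the incoming-access-to-cellular handover is disallowed.

class ManagementNode:
    """Stands in for the management node of Embodiments 58, 63, and 64."""
    def __init__(self):
        self.device_config = {}

    def configure(self, device_id, settings):
        # Step 808 (Embodiment 58): apply configuration so the device will
        # not attempt handover from the incoming (e.g., WLAN) access
        # network to the visited network's cellular access network.
        self.device_config.setdefault(device_id, {}).update(settings)

class EpdgN3iwf:
    """Stands in for the ePDG/N3IWF of Embodiments 56, 61, and 62."""
    def __init__(self, management_node):
        self.management_node = management_node
        self.intercepted = set()

    def on_lawful_intercept(self, device_id):
        # Step 806: an instruction to perform lawful intercept is received.
        self.intercepted.add(device_id)
        # Step 808: request the management node to configure the device to
        # disallow the handover (configuration key is an assumption).
        self.management_node.configure(
            device_id, {"disallow_incoming_to_cellular_handover": True})
```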
At least some of the following abbreviations may be used in this disclosure. If there is an inconsistency between abbreviations, preference should be given to how it is used above. If listed multiple times below, the first listing should be preferred over any subsequent listing(s).

3GPP Third Generation Partnership Project
4G Fourth Generation
5G Fifth Generation
5GC Fifth Generation Core
ACK Acknowledgement
AF Application Function
AGW Access Gateway
AMF Access and Mobility Function
AN Access Network
APN Access Point Name
ASIC Application Specific Integrated Circuit
ATCF Access Transfer Control Function
AUSF Authentication Server Function
CDR Charging Data Record
CPU Central Processing Unit
CSCF Call Session Control Function
DSP Digital Signal Processor
eNB Enhanced or Evolved Node B
EPC Evolved Packet Core
ePDG Evolved Packet Data Gateway
FPGA Field Programmable Gate Array
gNB New Radio Base Station
HO Handover
HPMN Home Public Mobile Network
HSS Home Subscriber Server
I-CSCF Interrogating Call Session Control Function
IMS Internet Protocol Multimedia Subsystem
IP Internet Protocol
IPSec Internet Protocol Security
KPI Key Performance Indicator
LTE Long Term Evolution
MME Mobility Management Entity
MTC Machine Type Communication
N3GPP Non-Third Generation Partnership Project
N3IWF Non-Third Generation Partnership Project Inter-Working Function
NAS Non-Access Stratum
NEF Network Exposure Function
NF Network Function
NR New Radio
NRF Network Repository Function
NSSF Network Slice Selection Function
PCF Policy Control Function
PCRF Policy and Charging Rules Function
P-CSCF Proxy Call Session Control Function
PDN Packet Data Network
PDU Protocol Data Unit
P-GW Packet Data Network Gateway
QoS Quality of Service
RAM Random Access Memory
RAN Radio Access Network
ROM Read Only Memory
RTT Round Trip Time
SCEF Service Capability Exposure Function
S-CSCF Serving Call Session Control Function
S-GW Serving Gateway
SIP Session Initiation Protocol
SMF Session Management Function
TS Technical Specification
UDM Unified Data Management
UE User Equipment
UPF User Plane Function
VoIMS Voice over Internet Protocol Multimedia Subsystem
VoLTE Voice over Long Term Evolution
VoWiFi Voice over WiFi
VPMN Visited Public Mobile Network
VPN Virtual Private Network
WLAN Wireless Local Area Network

Those skilled in the art will recognize improvements and modifications to the embodiments of the present disclosure. All such improvements and modifications are considered within the scope of the concepts disclosed herein.