3GPP TR 22.856

5.7.5	Existing features partly or fully covering the use case functionality

The performance requirements for high data rate AR services have been captured in TS 22.261 clause 7.6. The performance requirements for UE to network relaying in 5G systems have been captured in TS 22.261 clause 7.7. The functional and performance requirements for tactile and multi-modal communication services have been captured in TS 22.261 clauses 6.43 and 7.11, respectively.
However, existing requirements still need to consider the power consumption of the 5G UE onboard AR terminals.
5.7.6	Potential New Requirements needed to support the use case

[PR 5.7.6-1] Subject to operator policy, the 5G system shall support a means to provide high data rate transmission to a UE during an extended period of time, including when in high-speed mobility.
NOTE 1: Metaverse service experience over an extended period of time (e.g. 2h) requires significant power consumption by the UE. In some cases, a device with no external power supply cannot sustain downloading and rendering of media over a long interval, e.g. for the duration of an entire feature film or athletic event.
[PR 5.7.6-2] Subject to operator policy, the 5G system shall support a mechanism that enables flexible adjustment of communication services based on the type of devices (e.g., wearables), such that the services can be operated with reduced energy utilization.
[PR 5.7.6-3] Subject to operator policy, the 5G system shall support a means to enable interactive immersive multiparty communications in the metaverse service.
NOTE 2: The multiparty immersive communication (e.g. amongst multiple friends) could be location related or location agnostic.
Table 5.7.6-1 – Potential key performance requirements for Immersive AR interactive experience: tethered link

(Characteristic parameters (KPI): max allowed end-to-end latency, service bit rate, reliability. Influence quantities: # of UEs, UE speed, service area.)

| Use case | Max allowed end-to-end latency | Service bit rate: user-experienced data rate | Reliability | # of UEs | UE Speed | Service Area |
|---|---|---|---|---|---|---|
| Viewports streaming from rendering device to AR glasses through direct device connection (tethered/relaying case) (note 1) | 10 ms (i.e., UL+DL between AR glasses display and the rendering UE) (note 2) | [200-2000] Mbit/s | 99,9 % (note 2) | 1-2 | Stationary or pedestrian | |
| Pose information from AR glasses to rendering device through direct device connection (tethered/relaying case) (note 1) | 5 ms (note 2) | [100-400] kbit/s (note 2) | 99,99 % (note 2) | 1-2 | Stationary or pedestrian | |

Note 1: These KPIs are only valid for cases where the viewport rendering is done in the tethered device and streamed down to the AR glasses. In the case of rendering-capable AR glasses, these KPIs are not valid.
Note 2: These values are aligned with the tactile and multi-modal communication KPI table in TS 22.261 [5], clause 7.11.
Table 5.7.6-2 – Potential key performance requirements for Immersive AR interactive experience: NG-RAN multimodal communication link

(Characteristic parameters (KPI): max allowed end-to-end latency, service bit rate, reliability. Influence quantities: # of UEs, UE speed, service area.)

| Use case | Max allowed end-to-end latency | Service bit rate: user-experienced data rate | Reliability | # of UEs | UE Speed | Service Area |
|---|---|---|---|---|---|---|
| Movie streaming from metaverse server to the rendering device (note 2) | [1-5] s (only relevant for live streaming) | [0,1-50] Mbit/s (i.e., covering a complete OTT ladder from low resolution to 3D-8K) (note 1) | 99,9 % | 1 to [10] | [up to 500 km/h] | - |
| Avatar information streaming between remote UEs (end to end) (note 3) | 20 ms (i.e., UL between UE and the interface to metaverse server + DL back to the other UE) | [0,1-30] Mbit/s | 99,9 % | 1 to [10] | [up to 500 km/h] | - |
| Interactive data exchange: voice and text between remote UEs (end to end) (note 4) | 20 ms (i.e., UL between UE and the interface to metaverse server + DL back to the other UE) | [0,1-0,5] Mbit/s | 99,9 % | 1 to [10] | [up to 500 km/h] | - |

Note 1: These values are aligned with the "high-speed train" DL KPI from TS 22.261 [5], clause 7.1.
Note 2: To leverage existing streaming assets and the delivery ecosystem, it is assumed that the legacy streaming data are delivered to the rendering device, which overlays this content on the virtual screen prior to rendering. For a live streaming event, the user-experienced end-to-end latency is expected to be competitive with traditional live TV services, typically [1-5] seconds.
Note 3: For example, the glTF format [60] can be used to deliver avatar representation and animation metadata in a standardized manner. Based on this format, the required bit rate for transmitting such data is highly dependent on the avatar's complexity (e.g., basic model versus photorealistic).
Note 4: These values are aligned with the "immersive multi-modal VR" KPIs in TS 22.261 [5], clause 7.11. End-to-end latency in this table is calculated as twice the value of the DL "immersive multi-modal VR" latency in TS 22.261 [5], clause 7.11.
5.8	Use Case on multi-service coordination in one mobile metaverse service
5.8.1	Description

There is a major difference between a mobile metaverse service and a traditional multimedia service. A mobile metaverse service provides a platform which supports different applications to complete a task, such as gaming, online working, online education, etc. Users will have no limitations on the terminals they use. In existing XR applications, a specific brand of VR glasses or gloves may be required for a game, and VR glasses and gloves from different brands are very hard to map and coordinate within the same game. In mobile metaverse services, however, the standardized nature of the service supports coordination between equipment belonging to different applications or brands.
Figure 5.8.1-1: Multi-service coordination in one mobile metaverse service
5.8.2	Pre-conditions

John has a pair of VR glasses and a pair of tactile gloves. Usually, he uses the VR glasses for VR games and the tactile gloves for virtual painting, where he can feel the brushstrokes. These two activities run on two different network slices. As the VR glasses were bought to play VR games, the VR game application uses network slice A, which better supports the game service. The tactile gloves belong to Brand B, which uses another network slice, B.
In the mobile metaverse, there are many different types of services, such as games, concerts, education, etc. The mobile metaverse service has subscribed to different network slices for these different types of service, and to different QoS for different flows accordingly, for a better user experience.
5.8.3	Service Flows

1. John opens a mobile metaverse service in which both the VR glasses and the tactile gloves are needed: he would like to draw a picture with the tactile gloves and watch a live music show at the same time.
2. In the subscription between the mobile metaverse service (which can be hosted by operators or other companies) and the network, the video flow and audio flow of the live music service are subscribed to QoS 1 and QoS 2 in slice A, and the video flow and tactile flow of the painting service are subscribed to QoS 3 and QoS 4 in slice B.
3. In John's VR glasses he can see the singer and the other listeners. At the same time, he can see his painting on a virtual easel and use a virtual brush to paint, while feeling the brushstrokes through tactile feedback.
4. In this case, the mobile metaverse service has a policy to use the same QoS level for the video flows of the live music service and the painting service, and it informs the network of this decision.
5. As John is painting and enjoying the live show at the same time, the video flow and audio flow of the live music mobile metaverse service and the video flow and tactile flow of the painting mobile metaverse service need to be coordinated. This coordination information needs to be shared with the network for policy modification.
6. The network performs this dynamic policy modification for John.
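The coordination step in the service flow above can be sketched in a few lines. This is an illustrative model only, not a 3GPP-defined API: the names (`Flow`, `align_video_qos`) and the integer QoS levels are assumptions made for the sketch, standing in for the per-flow QoS subscriptions described in step 2.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    media: str       # "video", "audio" or "tactile"
    slice_id: str    # network slice carrying the flow
    qos_level: int   # lower number = better treatment (illustrative scale)

def align_video_qos(flows):
    """Pick the best (lowest) QoS level among the video flows and apply it
    to every video flow, so the two services render consistently."""
    video = [f for f in flows if f.media == "video"]
    best = min(f.qos_level for f in video)
    for f in video:
        f.qos_level = best
    return best

flows = [
    Flow("live-music-video", "video", "slice-A", qos_level=1),
    Flow("live-music-audio", "audio", "slice-A", qos_level=2),
    Flow("painting-video", "video", "slice-B", qos_level=3),
    Flow("painting-tactile", "tactile", "slice-B", qos_level=4),
]
chosen = align_video_qos(flows)  # both video flows now share QoS level 1
```

The decision (`chosen`) is what the metaverse service would report to the network so that the dynamic policy modification of step 6 can be applied.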
5.8.4	Post-conditions

John used both the VR glasses and the tactile gloves in distinct mobile metaverse services with a very good combined user experience.
5.8.5	Existing features partly or fully covering the use case functionality

The 5G system can support different communication performance policies for services and provides some support for resolving conflicts between the policies of different services.
There is, however, no way for the 5G system to coordinate the communication performance delivered so that divergence in communication performance is reduced across distinct services (i.e. from different service providers).
3GPP TS 23.503 [63] clause 4.3.1 includes the following general requirement "The PCC framework shall allow the resolution of conflicts which would otherwise cause a subscriber's Subscribed Guaranteed Bandwidth QoS to be exceeded.".
3GPP TS 23.503 [63] clause 6.1.3.7 explains that "Service pre-emption priority enables the PCF to resolve conflicts where the activation of all requested active PCC rules for services would result in a cumulative authorized QoS which exceeds the Subscribed Guaranteed bandwidth QoS.".
A note in 3GPP TS 23.503 [63] clause 6.1.3.7 includes the following sentence: "Normative PCF requirements for conflict handling are not defined."
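The pre-emption behaviour described in TS 23.503 clause 6.1.3.7 can be sketched as a simple admission loop: rules are considered in service pre-emption priority order, and a rule is dropped once the cumulative authorized GBR would exceed the Subscribed Guaranteed Bandwidth QoS. The names (`PccRule`, `admit_rules`) and the example numbers are illustrative assumptions, not 3GPP-defined structures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PccRule:
    name: str
    preemption_priority: int  # lower value = higher priority
    gbr_kbps: int             # guaranteed bit rate requested by the rule

def admit_rules(rules, subscribed_gbr_kbps):
    """Admit PCC rules in pre-emption priority order, skipping any rule
    that would push the cumulative GBR past the subscribed limit."""
    admitted, used = [], 0
    for rule in sorted(rules, key=lambda r: r.preemption_priority):
        if used + rule.gbr_kbps <= subscribed_gbr_kbps:
            admitted.append(rule.name)
            used += rule.gbr_kbps
    return admitted

rules = [
    PccRule("voice", 1, 100),
    PccRule("video", 2, 800),
    PccRule("gaming", 3, 500),
]
admitted = admit_rules(rules, subscribed_gbr_kbps=1000)
```

With a 1000 kbit/s subscribed GBR, only "voice" and "video" fit; "gaming" is pre-empted, which is exactly the conflict-resolution gap [PR 5.8.6-1] seeks to address across services.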
5.8.6	Potential New Requirements needed to support the use case

[PR 5.8.6-1] The 5G system shall provide the capability of reducing the differences in communication performance between different mobile metaverse services for a given UE, to prevent inconsistency of experience due to XR media with divergent or conflicting characteristics, e.g., resolution, latency or packet loss.
NOTE: The UE can provide communication services for more than one terminal equipment.
5.9	Use Case on Synchronized predictive avatars

5.9.1	Description
In this use case, three users are using the 5GS to join an immersive mobile metaverse activity (which may be an IMS multimedia telephony call using AR/MR/VR). The users Bob, Lukas, and Yong are located in the USA, Germany and China, respectively. Each of the users can be served by a local mobile metaverse service edge computing server (MECS) hosted in the 5GS; each of these servers is located close to the user it serves. In the case of IMS, such a MECS could be an AR Media Function that provides network-assisted AR media processing. When a user joins a mobile metaverse activity, such as a joint game or teleconference, the avatar of the user is loaded in the MECSs of the other users. For instance, the MECS close to Bob hosts the avatars of Yong and Lukas.
The distance between the users, e.g., around 11640 km between the USA and China, determines the minimum communication latency, e.g., 11640 km / c ≈ 38 ms. This latency might be higher in practice due to causes such as hardware processing, and it might also be variable for multiple reasons, e.g., congestion or delays introduced by the (variable processing time of) hardware components such as sensors or rendering devices. Since this value may be too high and too variable for a truly immersive, location-agnostic metaverse service experience, each of the deployed avatars includes one or more predictive models of the person it represents, which allow the local edge server to render a synchronized predicted (current) digital representation (i.e. avatar) of the remote users. Similar techniques have been proposed, for example, in [28].
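The latency floor quoted above is a pure speed-of-light bound and can be checked directly (the helper name is illustrative):

```python
# Back-of-the-envelope check of the propagation latency quoted above:
# 11640 km between the USA and China at the speed of light in vacuum.
def min_latency_ms(distance_km, c_km_s=299_792.458):
    """One-way propagation delay in milliseconds, ignoring all processing."""
    return distance_km / c_km_s * 1000.0

latency = min_latency_ms(11640)  # ≈ 38.8 ms, consistent with the ~38 ms above
```

Real paths add fibre refraction, routing detours and processing on top of this floor, which is why the text treats the figure as a minimum.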
Figure 5.9.1-1 shows an exemplary scenario in which a MECS at location 3 (USA) runs the predictive models of the remote users (Yong and Lukas), takes as input the received sensed data from all users (Yong, Lukas, and Bob) as well as the current end-to-end communication parameters (e.g., latency), and generates a synchronized predicted (current) digital representation (i.e. avatar) of the users to be rendered on Bob's local rendering devices. A particular example of such a scenario might be gaming: Yong, Lukas, and Bob are playing baseball in an immersive mobile metaverse activity, and it is Yong's turn to hit the ball that is going to be thrown by Lukas. If Yong hits the ball, then Bob can continue running, since Yong and Bob are playing on the same team. In this example, the digital representation (e.g. avatar) predictive models of Lukas and Yong (deployed at the MECS close to Bob) allow creating a combined synchronized prediction at Location 3 of Lukas throwing the ball and Yong reacting to and hitting the ball, so that Bob can start running without delays and can enjoy a great immersive mobile metaverse experience.
This example aims at illustrating how predictive models can improve the location-agnostic service experience in a similar way as in [28]. Synchronized predictive digital representations (e.g. avatars) are, however, not limited to the gaming industry and can play a relevant role in other metaverse services, e.g., immersive healthcare or teleconferencing use cases. This scenario involving synchronized predictive digital representations (e.g. avatars) is assumed to require synchronization of user experiences to a single clock.
Figure 5.9.1-1: Example of a joint metaverse experience with synchronized predicted avatar representation.
5.9.2	Pre-conditions

The following pre-conditions and assumptions apply to this use case:
1. Up to three different MNOs operate the 5GS providing access to mobile metaverse services.
2. The users, Bob, Lukas, and Yong have subscribed to the mobile metaverse services.
3. Each of the users, e.g., Bob, decides to join the immersive mobile metaverse service activity.
5.9.3	Service Flows

The following service flows need to be provided for each of the users:
1. Each of the users, e.g., Bob, decides to join the immersive mobile metaverse service activity and gives consent to the deployment of their avatar.
2. Sensors at each user sample the current representation of each of the users where sampling is done as required by the sensing modalities. The sampled representation of each of the users is distributed to the metaverse edge computing servers of the other users (which may be an AR Media Function in case of IMS) in the metaverse activity.
3. Each of the edge computing servers applies the incoming data stream representing each of the far located users to the corresponding digital representation (e.g. avatar) predictive models – taking into account the current communication parameters/performance, e.g., latency – to create a combined, synchronized, and current digital representation of the remote users that is provided as input to rendering devices in the local environment. The predictive model also ensures that it correctly synchronizes with the actual state of the remote users based on which it can make the necessary corrections to the digital representation in case of differences between a predicted state and the actual state.
The service flows for the other users (i.e., Yong in China and Lukas in Germany) are the mirrored equivalent. For instance, even if not shown in Figure 5.9.1-1, the local edge computing server associated with Lukas will run the digital representation (e.g. avatar) predictive models of Yong and Bob and consume the data streams coming from those users.
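The prediction-and-correction loop of step 3 can be sketched with simple dead reckoning: the local edge server extrapolates a remote user's state across the measured one-way latency, then re-synchronizes whenever an actual (delayed) sample arrives. This is an illustrative stand-in for the predictive models referenced in [28], not the technique itself; the 1-D pose and the class name are assumptions of the sketch.

```python
class PredictiveAvatar:
    """Dead-reckoning model of one remote user, held at the local MECS."""

    def __init__(self):
        self.pos = 0.0     # 1-D pose for simplicity
        self.vel = 0.0     # estimated velocity from the last two samples
        self.t_last = 0.0  # capture timestamp of the last received sample

    def on_sample(self, t, pos):
        """Ingest a (delayed) sensed sample; correct the model toward the
        actual remote state, as the text requires for mispredictions."""
        dt = t - self.t_last
        if dt > 0:
            self.vel = (pos - self.pos) / dt
        self.pos, self.t_last = pos, t

    def render_at(self, t_now):
        """Extrapolate to the present so local rendering hides the latency."""
        return self.pos + self.vel * (t_now - self.t_last)

avatar = PredictiveAvatar()
avatar.on_sample(0.00, 0.0)
avatar.on_sample(0.10, 1.0)         # remote user moving at 10 units/s
predicted = avatar.render_at(0.14)  # predict 40 ms past the last sample
```

A real deployment would predict full body pose and use learned models rather than linear extrapolation, but the structure (predict forward by the latency, correct on arrival of ground truth) is the same.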
5.9.4	Post-conditions

The main post-condition is that each of the users enjoys an immersive metaverse service activity.
5.9.5	Existing features partly or fully covering the use case functionality

TS 22.261 includes in clause 6.40.2 the following requirement related to AI/ML model transfer in 5GS:
“Based on operator policy, 5G system shall be able to provide means to predict and expose predicted network condition changes (i.e. bitrate, latency, reliability) per UE, to an authorized third party.”
This requirement is related to requirement [PR 5.9.6-2], but it is not exactly the same, since the usage of predictive digital representation (e.g. avatar) models requires knowledge of the end-to-end network conditions, in particular latency.
5.9.6	Potential New Requirements needed to support the use case

[PR 5.9.6-1] The 5G system (including IMS) shall provide a means to synchronize the incoming data streams of multiple (sensor and rendering) devices associated with different users at different locations.
[PR 5.9.6-2] The 5G system (including IMS) shall provide a means to expose predicted network conditions, in particular latency, between remote users.
[PR 5.9.6-3] The 5G system (including IMS) shall provide a means to support the distribution, configuration, and execution in a local Service Hosting Environment of a predictive digital representation model associated to a remote user involved in multimedia conversational communication.
[PR 5.9.6-4] The 5G system (including IMS) shall provide a means to predict the rendering of a digital representation of a user (e.g. an avatar) and/or of an object based on the latency of a multimedia conversational communication, and to render the predicted digital representation.
NOTE: The predicted rendering is expected to be updated/synchronized with real world information received about the user and/or object.
5.10	Use Case on mobile metaverse for Critical HealthCare Services
5.10.1	Description

Immersive interactive mobile services encompass multiple services such as gaming, education, healthcare, shopping, recreation, etc. Healthcare is a lifesaving critical service, which will benefit greatly from mobile metaverse services. Remote surgery and surgeon training are already emerging examples for geographically spread specialized surgeons, students and patients. Healthcare making use of mobile metaverse services can save lives by providing healthcare services at the earliest opportunity, training students better, and freeing surgeons and doctors from being physically present at the patient's location. This area will encompass services such as surgery, medical student training and surgeon training, and it will enable remote physician and patient examination. Mobile metaverse services can be characterized as a typical class of application that involves a server and a client device. This class of application typically exchanges haptic signals (forces, torques, position, velocity, vibration, etc.), video and audio signals. Mobile metaverse services largely depend on low-latency, highly reliable, secure wireless communication networks. [14, 32, 33, 36, 37]
Mobile metaverse surgeries. Normally, surgeons have to be physically present at a hospital to perform surgeries. Surgeons and patients may have to travel great distances for surgeries, which is resource-intensive and burdensome. The outbreak of coronavirus has further proved the case for remote surgeries. At present, increasingly more surgical rooms are equipped with advanced surgical robots, and doctors can remotely operate on patients by controlling these surgical robots, as shown in Figure 5.10.1-1b [35]. Mobile metaverse services can bring doctors and patients closer virtually, which further improves accuracy and surgical flexibility. They can also facilitate consultation, providing suggestions and domain knowledge to reduce risks in the actual operation. Recently, a real-time remote breast cancer surgery was performed using a private 5G network and a head-mounted display. Dr. G, physically present in the operating room, wore a head-mounted display, could see the crucial information displayed by the goggles, and performed the breast cancer surgery. Dr. G received constant advice from Dr. A, who was seated on the stage at the congress of the Spanish Association of Breast Surgeons, 900 km away from the surgery room [34, 35].
Mobile metaverse physician consultation. With the outbreak of COVID, virtual consultation through audio and video conference calls has gained considerable momentum. Mobile metaverse services enabled by tactile sensors, along with video and audio media, can provide a rich and successful experience to both physician and patient. The ability to perform mobile metaverse service consultations without the need for the doctor and patient to be physically co-located is an extremely efficient prospect. Doctors can potentially use mobile metaverse services to examine and interact with representations of aspects and views of the patient, and benefit from a plethora of medical advice databases, as shown in Figure 5.10.1-1c. 5G communication is one of the key factors enabling mobile metaverse service based physician consultation. Doctors and patients both need high-throughput, ultra-reliable and low-latency 5G communication for these services to succeed. [30, 32]
Mobile metaverse body scan and vitals. Mobile metaverse services can significantly improve and change the way current body scan diagnostics and vital statistics are gathered. Mobile metaverse services can be utilized to see the real-time diagnostic data of the patient, such as body temperature, heart rate, blood pressure and breathing rate, along with MRI, CT and 3D scans, as shown in Figure 5.10.1-1d. Medical challenges such as vein detection for IVs, shots, etc., can more easily be resolved using mobile metaverse services. These will help medical professionals detect, diagnose and treat patients [30, 32].
Figure 5.10.1-1: Mobile metaverse service examples
The examples in Figure 5.10.1-1 feature the use of mobile metaverse services for a) Training Surgeons, b) remote surgery, c) physician consultation d) remote medical diagnostics. (Image source: healthcareoutlook.net, courthousenews.com, gmw3.com, ourplnt.com)
Mobile metaverse training of medical students. Surgeries performed by surgeons all over the world can be used to train students using mobile metaverse services. Students can observe a live surgery with almost all the important vitals and views on display, as they would be displayed to a surgeon. Further, the students can view the live surgery from different viewing perspectives, hear the surgeon's instructions, and see displayed suggestions and domain knowledge, as shown in Figure 5.10.1-1a. In May 2021, a live lung surgery was performed using an extended reality (XR) technology platform. More than 200 thoracic surgeons from Asian countries attended the outreach program and received training. The participants wore a head-mounted display (HMD) at their respective locations and participated in the program virtually, represented by a digital representation (e.g. an avatar). The participants viewed the live lung surgery with a lecture and 360° high-resolution surgical scenes, as shown in Figure 5.10.1-2 [31].
Figure 5.10.1-2: Live lung surgery training through metaverse
The source of figure 5.10.1-2 is [31] (Image source: Journal of Educational Evaluation for Health Professions (Jeehp))
5.10.2	Pre-conditions

Hospital Y has an enterprise subscription with MNO X, with which the MNO provides the hospital and its staff with fault-tolerant and ultra-highly reliable 5G communication as well as mobile metaverse services. Dr. Alex and Dr. Bob are renowned surgeons of Hospital Y who are also trained to perform surgeries using immersive interactive mobile healthcare services.
Dr. Alex and Dr. Bob both have 5G-based head-mounted displays (HMDs) and tactile gloves, with which they can communicate with those in the hospital surgery room virtually, using digital representations of themselves (avatars). The doctors are able to make use of actuators in the operating room, though remote. The service requires extremely high reliability, as a patient's life is at risk. The 5G system allocates sufficient communication resources, e.g. through the use of a GBR QoS policy, to both surgeons' communications for the entire duration of the surgery.
5.10.3	Service Flows

1) Dr. Alex and Dr. Bob are senior surgeons of a renowned hospital. Dr. Alex is on vacation with the family and Dr. Bob is attending a conference 300 miles away from Madrid, Spain.
2) A patient, David, has had an accident and has been rushed to the emergency room of a local hospital. David has a bad head injury and needs an urgent brain surgery.
3) The hospital requests that the surgeons Dr. Alex and Dr. Bob perform the surgery.
4) Dr. Alex and Dr. Bob wear HMDs and tactile gloves and connect to the hospital surgery room virtually making use of the 5G system.
5) David is in an operating room of the hospital equipped with advanced surgical robots and attended by the local doctors and nurse.
6) The vital diagnostic information, such as heart rate, BP reading, ECG, etc., is virtually displayed to both surgeons. The surgeons are able to view the surgery room and to see each other using digital representations (e.g. avatars).
7) Dr. Alex and Dr. Bob perform the surgery remotely with the assistance of the robots and doctors and nurses at the hospital.
8) The surgery ends successfully after three hours of surgery and remote treatment, without an interruption in the 5G connections.
9) After the surgery, the patient is shifted from the surgery bed to an Intensive Care Unit (ICU) by the doctors and nurses in the hospital. Dr. Alex and Dr. Bob return to their vacation and conference, respectively.
5.10.4	Post-conditions

After the surgery, the doctors and surgeons stay connected virtually to read the patient's vitals. Once the patient's condition has stabilized, Dr. Alex and Dr. Bob disconnect their devices. The 5G system then releases the dedicated communication resources (GBR QoS policy) from the devices used for the surgery.
5.10.5	Existing feature partly or fully covering use case functionality

1) The URLLC system design in clause 5.33 of TS 23.501 [39] proposes a dual redundant system to achieve ultra-high reliability. Though in this system design there are dual RAN connections, PDU sessions, SMFs and UPFs, the DN and AMF are common nodes, which constitute single points of failure in the system architecture. The URLLC system design is ultra-reliable, but not end-to-end ultra-reliable and fault-tolerant. It also relies on upper-layer protocols such as IEEE 802.1 TSN (Time-Sensitive Networking) FRER (Frame Replication and Elimination for Reliability) to manage the dual redundant system, e.g. replication and elimination of redundant packets or frames. A typical fault-tolerant system should have no single points of failure and should manage the system flow itself. Moreover, the use of FRER implies that the content is exchanged over both paths continuously, thus doubling the resources used over the radio.
2) Redundant user plane paths based on multiple UEs per device are proposed in Annex F of TS 23.501 [39]. In this system design, the device is expected to have two UEs which independently connect to their RANs and have their own PDU sessions with a common DN. This system is not end-to-end fault-tolerant, since it has a common DN (a single point of failure), and it requires dual UEs to achieve ultra-high reliability. This architecture too assumes that some upper-layer protocol (e.g. FRER) is used for replication and frame elimination, thus doubling the resources used over the radio.
3) Multimedia Priority Service (MPS), described in clause 5.16.5 of TS 23.501 [39], allows service users priority access to system resources under congestion, creating the ability to deliver or complete sessions of a high-priority nature. Service users are priority users such as government officials and other authorized users. Currently there are no requirements to identify mission-critical healthcare users and their priorities, such as emergency (ongoing surgeries), surgeon training, scans and physician consultation.
4) RRC controls the scheduling of user data in the uplink by associating each logical channel with a logical channel priority, a prioritised bit rate (PBR), and a buffer size duration (BSD), as described in clause 10.5 of TS 38.300 [40]. This could be extended to allocate logical channels for mission-critical services such as metaverse-based Critical HealthCare services.
5) The existing CMED work in TR 22.826 describes various forms of possible healthcare support using ultra-high-definition video, tactile sensors and audio. Though this TR specifies a reliability of 99.999 %, it does not specify fault-tolerant end-to-end reliability, nor does it address the metaverse, which involves interactive VR 360° streaming [41].
5.10.6	Potential New Requirements needed to support the use case

[PR 5.10.6-1] The 5G system shall provide a means to synchronize multiple service data flows (e.g., heart rate, video, audio) of multiple UEs associated with Critical HealthCare services.
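The kind of multi-flow synchronization asked for in [PR 5.10.6-1] can be sketched at the application level: samples from several flows (heart rate, video, audio) carry a common capture timestamp, and the receiver releases a set only once every flow has delivered its sample for that instant. The class and flow names are illustrative assumptions, not anything defined by 3GPP.

```python
from collections import defaultdict

class FlowSynchronizer:
    """Buffer per-timestamp samples and release them only as aligned sets."""

    def __init__(self, flow_names):
        self.flows = set(flow_names)
        self.buffer = defaultdict(dict)  # timestamp -> {flow: sample}

    def push(self, flow, timestamp, sample):
        """Buffer one sample; return the complete aligned set, or None
        while samples from some flows are still missing."""
        self.buffer[timestamp][flow] = sample
        if set(self.buffer[timestamp]) == self.flows:
            return self.buffer.pop(timestamp)
        return None

sync = FlowSynchronizer(["heart_rate", "video", "audio"])
sync.push("video", 100, "frame-1")        # -> None, still waiting
sync.push("audio", 100, "chunk-1")        # -> None, still waiting
aligned = sync.push("heart_rate", 100, 72)  # all three flows now present
```

A production design would also bound the buffer and handle late or lost samples; the sketch only shows the alignment idea.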
5.11	Use case of IMS-based 3D Avatar Communication
5.11.1	Description

This use case identifies two fundamental scenarios and one sub-scenario for 3D Avatar Communication by means of the IMS. The intention of the proposal is to fully specify this system in 3GPP, to provide a standard for a new form of media to be used in telecommunication by mobile users. In the terminology of this use case, the avatar is a digital representation of a user, and this digital representation is exchanged (with other media, notably audio) with one or more users as mobile metaverse media.
An Avatar Call is similar to a Video Call in that both are visual and interactive and provide live feedback to participants regarding their emotions, attentiveness and other social information. They differ in that an Avatar Call can be more private, revealing neither the environment where the caller is nor their actual appearance. An avatar may be preferable to displaying one's own face in a call for a number of reasons: a user may not feel presentable, may want to make a specific impression, or may have to communicate when only limited data communication is possible. The key difference between an Avatar Call and a Video Call is that the Avatar Call requires only a very constrained data rate, e.g. 5 kbit/s, to support.
This use case is timely because the key enabling technologies have reached sufficient maturity. The key avatar technologies are the means to (1) capture facial data and calculate values according to a model, (2) efficiently send both media and model components through a communication channel, both initially and over time, and (3) produce media for presentation to a user for the duration of the communication. We anticipate that such services will be increasingly available in the coming months and years. The current approaches under development are effectively proprietary, and they are not integrated with the IMS.
The scenarios considered in this use case are:
1(a). IMS users initiate an avatar call.
1(b). IMS users initiate a video call, but one (or both) users decide to provide Avatar Call representation instead of video representation.
For both 1(a) and 1(b) the goal is to capture sensing data of the communicating users (especially facial data) to create an animated user digital representation (avatar). This media is provided to communicating users as a new teleservice user experience enabled by the IMS.
2. A user interacts with a computer-generated system. Avatar communication is used to generate an appearance for a simulated entity with whom the user communicates.
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.11.2 Pre-conditions | Users Adonis and Aphrodite are 3GPP subscribers. Both have terminal equipment sufficient to capture their facial expression and movements adequately for computing avatar modeling information. The terminal equipment also includes a display, e.g. a screen, to display visual media. The terminal equipment is capable of initiating and terminating the IMS multimedia application 'avatar call.' The terminal equipment is also capable of capturing the facial appearance and movements sufficiently to produce data required by a Facial Action Coding System (FACS).
A network accessible service is capable of initiating and terminating an IMS session and the IMS multimedia application 'avatar call.' |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.11.3 Service Flows | 1(a). Avatar Call
i. Adonis is on a business trip and filthy after a day servicing industrial equipment. He calls Aphrodite, who is several time zones away and reading in bed after an exhausting day.
ii. Adonis doesn't want to initiate a video call since he hasn't had a chance to clean up and is still at work, surrounded by ugly machines. He initiates an 'avatar call' explicitly with his terminal equipment interface.
iii. Aphrodite, several time zones away, reading in bed after an exhausting day, is alerted of an incoming call. She sees it is from Adonis and that it is an avatar call. She accepts the call, pleased that she will be presented on the call as an avatar.
Figure 5.11.3-1: Avatar media prepared for an avatar call
In more detail, the media that is provided uplink is generated on each terminal. This is analogous to the way in which speech and video codecs operate today.
Figure 5.11.3-2: Avatar generation on each UE
Once the avatar call is established, the communicating parties provide information uplink. The terminal (a) captures facial information of the call participants and (b) locally determines an encoding that captures the facial information (e.g. consisting of data points, colouring and other metadata). This information is transmitted as a form of media uplink and provided by the IMS to the other participant(s) in the avatar call. When (c) the media is received by a participant, it is rendered as a two- (or three-) dimensional digital representation, shown above as the 'comic figure' on the right.
In this use case, the UE processes the data it acquires to generate the avatar codec. It would be possible to send the acquired data, e.g. video data from more than one camera, uplink so that the avatar codec could be rendered by the 5G network. It is however advantageous from a service perspective to support this capability on the UE. First, the uplink data requirement is greatly reduced. Second, confidentiality concerns could make the user unwilling to expose the captured data to the network. Third, the avatar may not be based on sensor data at all if it is a 'software generated' avatar (as produced by a game or other application), in which case there is no sensor data to send uplink for rendering.
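The uplink encoding in steps (a)-(b) can be illustrated with a minimal sketch. The packing scheme and function names below are hypothetical, not part of any 3GPP or FACS specification; the sketch only shows why sending quantised FACS action-unit code points instead of video keeps the uplink rate in the low-kbps range cited above.

```python
import struct

# Hypothetical low-bitrate avatar uplink: each active FACS action unit
# (AU) is sent as a (unit_id, intensity) pair, one byte each.
def encode_frame(action_units):
    """action_units: dict {au_id: intensity in [0.0, 1.0]} -> bytes."""
    payload = bytearray()
    for au_id, intensity in sorted(action_units.items()):
        payload += struct.pack("BB", au_id, round(intensity * 255))
    return bytes(payload)

def uplink_bitrate_bps(frame_len, frames_per_second=30):
    """Sustained uplink rate for frames of frame_len bytes."""
    return frame_len * 8 * frames_per_second

# Inner brow raiser (AU1), cheek raiser (AU6), lip corner puller (AU12).
frame = encode_frame({1: 0.2, 6: 0.8, 12: 0.9})
rate = uplink_bitrate_bps(len(frame))   # 6 bytes * 8 * 30 = 1440 bit/s
```

Even with many more active action units per frame, such an encoding stays well under the 5 kbps figure given for an avatar call.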
1(b). Video call falls back to an Avatar call
i. Adonis is as striking as can be, standing at an awe-inspiring vista on Mount Olympus. He initiates a video call with Aphrodite.
ii. Unfortunately, Adonis has forgotten to consider the time zone difference. For Aphrodite, it is the middle of the night. What's more, Aphrodite has been up for several hours in the middle of the night to clean up a mess made by her sick cat. While she wants to take the call from Adonis, she prefers to be presented by an avatar, and not to take the call as a video call from her side. She explicitly requests an 'avatar presentation' instead of a 'video presentation' and picks up Adonis' call.
iii. The call between Adonis and Aphrodite is established. Adonis sees Aphrodite's avatar representation. Aphrodite sees Adonis in the video media received as part of the call.
iv. Adonis walks further along the mountain trail while still speaking to Aphrodite. The coverage gets worse and worse until it is no longer possible to transmit video uplink adequately. Rather than switching to a voice-only call, Adonis activates 'avatar call' representation. This requires very little data throughput.
v. Adonis and Aphrodite enjoy the rest of their avatar call.
2. Aphrodite calls automated customer service.
i. Aphrodite calls the customer service of company "Inhabitabilis" to initiate a video call.
ii. Inhabitabilis customer service employs a 'receptionist' named Nemo, who is actually not a person at all. He is a software construct. There is an artificial intelligence algorithm that generates his utterances. At the same time, an appearance is generated as a set of code points using a FACS system, corresponding to the dialog and interaction between Aphrodite and Nemo.
iii. Aphrodite is able to get answers to her questions and thanks Nemo.
3. Aphrodite uses a terminal device without cameras, or whose cameras are insufficient and/or Adonis uses a terminal device without avatar codec support
In this scenario, the UE used by either calling party is not able to support an IMS 3D avatar call. Through the use of transcoding, this lack of support can be overcome. In the service flow shown below, as an example, Aphrodite's UE cannot capture her visually so as to generate an avatar encoding, so she expresses herself in text.
i. Aphrodite calls Adonis and wants to share an avatar call. She cannot however be captured via FACS due to a lack of sufficient camera support on her UE. Instead she uses text-based avatar media.
ii. The text-based avatar media is transported to the point at which this media is rendered as a 3D avatar media codec.
Figure 5.11.3-3: Example of text-based avatar media enabling an avatar call without camera etc. support on a UE
The transcoding rendering of the avatar media to 3D avatar media could be at any point in the system - Aphrodite's UE, the network, or Adonis' UE.
iii. Adonis' UE is able to display an avatar version of Aphrodite and hear it speak (text to voice). To the extent that the avatar configuration and voice generation configuration are well associated with Aphrodite, Adonis can hear and see her speaking, though Aphrodite only provides text as input to the conversation.
Other examples (not further described here) could take the media provided by Aphrodite (e.g. text, binary, avatar encoding, etc.) and transcode it to video for presentation to Adonis. This would be useful if Adonis' UE did not support avatar encoding.
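The transcoding path in scenario 3 can be sketched as follows. The viseme table and function name are illustrative stand-ins for a real text-to-speech and viseme/FACS animation pipeline; only the shape of the transformation (text in, speech plus facial animation codes out) is the point, and it could run at any of the points named above.

```python
# Stub transcoder for text-based avatar media: a real system would run
# text-to-speech plus a viseme/FACS animation model; both are faked here.
VISEMES = {"a": "AA", "e": "EH", "i": "IY", "o": "OW", "u": "UW"}

def transcode_text_to_avatar_media(text):
    """Turn a typed utterance into renderable avatar media: the speech
    to synthesise plus a crude mouth-shape (viseme) sequence."""
    visemes = [VISEMES[ch] for ch in text.lower() if ch in VISEMES]
    return {"speech": text, "visemes": visemes}

# Run wherever transcoding is placed: Aphrodite's UE, the network,
# or Adonis' UE.
media = transcode_text_to_avatar_media("Hello Adonis")
```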
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.11.4 Post-conditions | In each of the scenarios above, avatar media provides an acceptable interactive choice for a video call experience. The advantages are privacy, efficiency and ease of integration with computer software to animate a simulated conversational partner. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.11.5 Existing feature partly or fully covering use case functionality | TS 22.228 defines the service requirements for IMS. IMS supports different IMS multimedia applications and a wide range of services, notably voice and video calls. There is extensive support for services tightly integrated with the 3GPP system, including roaming, integration with both PSTN and ISDN telephony, emergency services and more. The requirements for a 3D avatar application are largely covered by existing requirements in the 5G standard for IMS.
TS 22.173 defines the media handling capabilities of the IMS Multimedia Telephony service.
The specific gaps that are addressed in 5.11.6 include: extended feature negotiation, enabling the user to decide whether to present video or avatar communication; the ability to support avatar communication and content efficiently; and the ability to support standardized avatar media in the 5G system.
The following KPIs are easily supported by the 5G system. They are included in order to contrast the requirements of an avatar call with a video call.
Use Case    | Characteristic parameter (KPI)
            | End-to-end latency                                                 | Service bit rate: user-experienced data rate
Avatar call | [NOTE1]                                                            | < 5 kbps [45]
Video call  | < 150 msec preferred, < 400 msec limit, lip-synch: < 100 msec [46] | 32-384 kb/s [46]
NOTE1: The latency requirement for real-time immersive service experience would be the same as for the video call. For some user experiences (smaller devices or an embedded icon-sized representation in other applications, etc.) the latency tolerance could be greater.
NOTE2: The video call KPIs are from TS 22.105 and have not changed since Rel-99. Actual transactional video call parameters may be higher now. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.11.6 Potential New Requirements needed to support the use case | [PR 5.11.6-1] The IMS shall allow multimedia conversational communications between two or more users providing real time conversational transfer of animated user digital representation and speech data.
[PR 5.11.6-2] The 5G system shall support a means for UEs to produce 3D avatar media to be sent uplink, and to receive this media downlink.
NOTE 1: In some scenarios, avatar media transmission entails a significantly lower data transfer rate than video.
[PR 5.11.6-3] The 5G system shall support a means for the production of 3D avatar media to be accomplished on a UE to support confidentiality of the data used to produce the 3D avatar (e.g. from the UE cameras, etc.)
[PR 5.11.6-4] Subject to user consent, the 5G system shall support a means to provide bidirectional transitioning between video and avatar media for parties of an IMS call.
NOTE 2: An example where an IMS call could transition to an IMS based 3D avatar call is where the communication performance of one or more parties declines to the extent that video is no longer of sufficient quality or even possibility. In this case, an avatar call between the same parties can replace the video call.
[PR 5.11.6-5] The 5G system shall support a means to enable locally generated media (e.g. text or video) of a party to be transcoded before it is rendered for the receiving party.
NOTE 3: The locally generated media could allow a party to control the appearance of its avatar, e.g. to express behavior, movement, affect, emotions, etc.
NOTE 4: The transcoding of media enables 3D avatar communication to be supported in scenarios in which UE participating in the IMS call does not support e.g. FACS, encoding avatar media, presenting avatar media, etc.
[PR 5.11.6-6] The 5G system shall support collection of charging information associated with initiating and terminating an IMS-based 3D avatar call. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.12 Use Case on Virtual humans in metaverse | |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.12.1 Description | Virtual humans (or digital representations of humans, also referred to as 'avatars' in this use case) are simulations of human beings on computers [47]. There is a wide range of applications, such as games, film and TV productions, financial industry (smart adviser), telecommunications (avatars), etc.
In the coming era, the technology of virtual humans is one of the foundations of mobile metaverse services. A virtual human can be a digital representation of a natural person in a mobile metaverse service, driven by that natural person. Alternatively, a virtual human can be a digital representation of a digital assistant driven by an AI model.
Mobile metaverse services offer an important opportunity for socialization and entertainment, where user experience of the virtual world and the real world combine. This use case focuses on the scenario of a natural person's digital embodiment in a metaverse as a location agnostic service experience. A virtual human is customized according to a user's personal characteristics and shape preferences. Users wear motion capture devices, vibrating backpacks, haptic gloves and VR glasses to drive the virtual human in a metaverse space for semi-open exploration. The devices mentioned above are 5G UEs, which need to collaborate with each other to carry out the user's actions and provide real-time feedback.
Figure 5.12.1-1: Virtual humans in metaverse (Source: https://vr.baidu.com/product/xirang, https://en.wikipedia.org/wiki/Virtual_humans)
For a smooth experience, the motion-to-photon latency should be less than 20 ms [48], i.e. the latency between the moment a player makes a movement and the corresponding new video shown in the VR glasses, together with the tactile feedback from vibrating backpacks or haptic gloves, should be less than 20 ms. As the asynchrony between different modalities increases, the user experience degrades because users are able to detect asynchronies. Therefore, synchronisation among the audio, visual and tactile modalities is also very important. The synchronisation thresholds regarding audio, visual and tactile modalities measured by Hirsh and Sherrick are described as follows [49]. The obtained results vary, depending on the kind of stimuli, biasing effects of stimulus range, the psychometric methods employed, etc.
- audio-tactile stimuli: 12 ms when the audio comes first and 25 ms when the tactile comes first to be perceived as being synchronous.
- visual-tactile stimuli: 30 ms when the video comes first and 20 ms when the tactile comes first to be perceived as being synchronous.
- audio-visual stimuli: 20 ms when the audio comes first and 20 ms when the video comes first to be perceived as being synchronous.
NOTE 1: Taking audio-tactile stimuli as an example, when the audio comes first, users are not able to detect asynchronies if the tactile comes within 12ms. Accordingly, when the tactile comes first, users are not able to detect asynchronies if the audio comes within 25ms. |
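The thresholds above can be folded into a simple admissibility check, as one might use when scheduling multi-modal flows. The function and table names are illustrative assumptions; the numeric values are the Hirsh and Sherrick figures quoted above [49].

```python
# Hirsh/Sherrick synchronisation thresholds [49], in ms, keyed by which
# modality arrives first. (Audio-visual is symmetric at 20 ms.)
THRESHOLDS_MS = {
    ("audio", "tactile"): 12, ("tactile", "audio"): 25,
    ("visual", "tactile"): 30, ("tactile", "visual"): 20,
    ("audio", "visual"): 20, ("visual", "audio"): 20,
}

def perceived_synchronous(first, second, lag_ms):
    """True if `second` lagging `first` by lag_ms stays within the
    threshold at which users start to detect the asynchrony."""
    return lag_ms <= THRESHOLDS_MS[(first, second)]

# Video rendered 25 ms before the matching haptic impulse: still fine.
ok = perceived_synchronous("visual", "tactile", 25)
```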
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.12.2 Pre-conditions | Alice’s virtual human exists as a digital representation in a mobile metaverse service. Alice’s virtual human wants to explore a newly opened area, including both natural environment and the humanities environment. The equipment Alice wears are all connected to 5G network. The mobile metaverse service interacts with 5G network to provide QoS requirements. The network provides the pre-agreed policy between the mobile metaverse service provider and operator on QoS requirements appropriate to each mobile metaverse media data flow. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.12.3 Service Flows | 1. Alice’s virtual human digital representation exists as part of the mobile metaverse service.
2. Alice’s virtual human digital representation can interact with other virtual humans. These could correspond to virtual humans representing other players or to machine generated virtual humans. Interactions could include a handshake, shopping, visiting an exhibition together, etc.
3. When someone or something touches Alice's virtual human (e.g. its hand or back touches some virtual object or human in the mobile metaverse service), Alice can see the object and feel the temperature and weight of the object at the same time. For example, when a virtual leaf falls on the hand of Alice's virtual human, Alice should see the leaf fall on her hand and sense the presence of the leaf at the same time. This means that the tactile impression from the haptic gloves should come within 30 ms after the video in the VR glasses, if the video media precedes the haptic media. Conversely, the video in the VR glasses should come within 20 ms of the tactile impression from the haptic gloves, if the tactile media precedes the video media. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.12.4 Post-conditions | Alice can physically experience what is represented in mobile metaverse services. The experience is very realistic and consistent. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.12.5 Existing features partly or fully covering the use case functionality | 3GPP TS 22.261 [6] specifies KPIs for high data rate and low latency interactive services including Cloud/Edge/Split Rendering, Gaming or Interactive Data Exchanging, Consumption of VR content via tethered VR headset, and audio-video synchronization thresholds.
Support of audio-video synchronization thresholds has been captured in TS 22.261:
Due to the separate handling of the audio and video component, the 5G system will have to cater for the VR audio-video synchronisation in order to avoid having a negative impact on the user experience (i.e. viewers detecting lack of synchronization). To support VR environments, the 5G system shall support audio-video synchronisation thresholds:
- in the range of [125 ms to 5 ms] for audio delayed and
- in the range of [45 ms to 5 ms] for audio advanced.
Existing synchronization requirements in the current SA1 specifications apply only to data transmission of one UE. Existing specifications do not contain requirements for coordinating synchronized transmission of data packets across multiple UEs.
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.12.6 Potential New Requirements needed to support the use case | [PR 5.12.6-1] The 5G system shall provide a mechanism to support coordination and synchronization of multiple data flows transmitted via one UE or different UEs, e.g., subject to synchronization thresholds provided by 3rd party.
[PR 5.12.6-2] The 5G system shall provide means to achieve low end-to-end round-trip latency (e.g., [20ms]). |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.13 Use Case on digital asset container information access and certification | |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.13.1 Description | Network operators offer digital asset management services for users, with which some information (e.g. IDs) can be certified by the operator. The digital asset management services can also include:
- The management of the digital asset container is performed according to the applicable regulations.
- The digital asset container has security properties (cannot be spoofed, access control with a policy determined by the user, etc.).
In the case of immersive XR media services, the user can choose, in the digital asset container, his/her digital representation and the related information, for example, the digital representation of the user (e.g. avatar), electronic money and associated financial services, identity, purchased items (the format of this information is at application layer and is not studied in 3GPP). This information can be used when accessing immersive XR media services or for real life services as the presentation of the identity. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.13.2 Pre-conditions | Alice has a service subscription with the operator. As part of the service, the network operator provides
- The digital asset container management according to the future applicable regulations;
- Security protection options of the digital asset container (e.g. cannot be spoofed, access control with a policy determined by the user, etc.). |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.13.3 Service Flows | 1. Alice accesses the digital asset container data services. The digital asset container is initialised with Alice's information (digital representation (e.g. avatar) profile, IDs, ...). The digital assets are completed and modified over time. The service (allowing information to be stored and updated) can be provided by the network operator or by a third party using an operator's trusted API.
2. Alice wants to dispose of old paint and solvent at a local dump. She has to identify herself as a local resident authorized to use the dump, and has to provide payment information to pay the fee for disposing of toxic waste. She interacts with the dump services, and the ID and payment information is shared with the service. The authorities that run the facility then allow Alice to drop off the paint and solvent.
Alice accesses her digital asset container to select the list of information (local resident status, payment information, ID) that she has already configured and saved (e.g. her digital representation (e.g. avatar) and other information like her electronic money and associated financial services, ID, purchased items, ...). The choice of information can be automated (without action on the part of the user).
3. She then connects to the digital service, e.g. mobile metaverse service, with the information she authorises to share for the successful provision of the service. |
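The selective sharing in steps 2-3 can be sketched as a per-service policy filter over the container. All field names, values and the policy format below are purely illustrative assumptions; no 3GPP-defined API or data model is implied.

```python
# Hypothetical digital asset container and Alice's per-service
# sharing policy; contents are illustrative only.
CONTAINER = {
    "resident_id": "certified:township-042",   # certified by the operator
    "payment": "token:abcd",
    "avatar": "profile:casual",
    "purchased_items": ["trip-voucher"],
}

AUTHORISATION = {
    # service name -> fields Alice has authorised for that service
    "waste-disposal": {"resident_id", "payment"},
}

def share_for_service(service):
    """Return only the container fields authorised for `service`."""
    allowed = AUTHORISATION.get(service, set())
    return {k: v for k, v in CONTAINER.items() if k in allowed}

shared = share_for_service("waste-disposal")   # avatar etc. stay private
```

The same filter serves step 3: a mobile metaverse service receives only the subset the user has authorised, and unknown services receive nothing.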
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.13.4 Post-conditions | Alice is authorized to access and use the dump. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.13.5 Existing features partly or fully covering the use case functionality | This feature is currently not documented in the 3GPP specifications.
Concerning the user identity related aspects, the features described in the document TR 22.904 [X] can be applied. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.13.6 Potential New Requirements needed to support the use case | [PR 5.13.6-1] The 5G system shall allow a user to securely manage a digital asset container (e.g. store and update the information associated with this user).
[PR 5.13.6-2] The 5G system shall support mechanisms to retrieve the information of a digital asset container associated with a user by an authorized third party.
[PR 5.13.6-3] According to the service invoked when a user accesses an application platform, the 5G system shall support mechanisms to provide the information associated with the user to a third party.
[PR 5.13.6-4] The 5G system shall support mechanisms to certify the authenticity of the information of a digital asset container associated with a user.
[PR 5.13.6-5] The 5G system shall protect against spoofing attacks of the customer’s digital asset container. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.14 Use Case on interconnection of mobile metaverse services | |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.14.1 Description | The concepts of "mobile metaverse" and "metaverse" became popular during the coronavirus pandemic, as lockdown measures and work-from-home policies pushed more people online for both business and pleasure, increasing demand for ways to make online interaction more lifelike. The term covers a wide variety of location agnostic service experiences, from workplace tools to games and community platforms. It generally refers to shared and immersive digital service experiences (i.e. mobile metaverse services) that people can experience by means of using XR devices. By 2026, 25% of people are estimated to spend at least one hour a day using services that provide immersive XR media for work, shopping, education, social media and/or entertainment, according to the latest study by Gartner, Inc. (a U.S.-based technology research and consulting company).
Mobile metaverse service technologies are still in the early stages of adoption. Currently there are already many digital environments (i.e. mobile metaverse services that offer location agnostic and location related service experiences), which typically run in silos and are not interconnected. From the end users' viewpoint, there are several basic requirements to be addressed:
- Depending on the immersive XR media service the user wants to connect to, he/she can choose his/her digital representation and the related information when needed: avatar (one or more), e-money (e.g. financial services as payment, his/her means of payment), ID, purchased items…
- A user is able to transition between immersive mobile metaverse services seamlessly, using a similar digital representation and taking into account the constraints of the mobile metaverse services accessed. The transfer of information via the operator's network ensures semantic compatibility (possibly through abstraction) between the origin and the destination. This also ensures, if necessary, the confidentiality of the origin and the destination.
In the following use case, Alice uses an immersive mobile metaverse service of a travel company, and a trip interests her. She would like to verify if she has enough money to buy the trip. She needs to link the immersive mobile metaverse service of her bank with the current immersive mobile metaverse service of the travel company so that her profile is automatically shared between these two services (she has previously given authorisation).
In this use case 'digital representations' are expected to interwork with specific services. The use case does not propose to define a new 'standard' for digital representations (avatars, electronic money and financial services, IDs, purchased items). Rather, personal information is stored and retrieved to improve service delivery and to assure privacy and security. If some formats, etc. are shared between different service providers, this presents an opportunity for consistency and continuity for the user of those two services, enabled by this use case.
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.14.2 Pre-conditions | Alice has a service subscription with the network operator M4Mobile for communication services. When visiting immersive mobile metaverse services, Alice uses a digital representation which contains her avatars and other information like her electronic money and associated financial services, IDs, purchased items …
Alice has chosen a profile (a subset of her digital representation) for her session to access universes. The choice of information can be automated (without action on the part of the user) depending on the universe visited or already visited, the user configurations, privacy options, etc. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.14.3 Service Flows | 1. Alice would like to buy a trip using a mobile metaverse service.
2. Alice connects to the immersive mobile metaverse service A of a travel company with the information she authorises to share for the successful provision of the service (the purchase of the trip).
3. During her session Alice is interested in buying a trip, for which she needs to interact with the mobile metaverse service B of her bank.
4. When moving between these immersive mobile metaverse services A and B, the network operator provides the same user information (for instance regarding the digital representation used to connect to the original mobile metaverse service…) in accordance with the configurations and the rights granted by the user.
5. Information may be coded differently in the immersive mobile metaverse service A and in the immersive mobile metaverse service B (e.g. the level of graphical accuracy of an avatar). In this case, a negotiation by the network operator may be necessary to adapt the information received from A to B. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.14.4 Post-conditions | Alice appears in the universe with the digital information chosen in her wallet (with some certified via the network operator). She keeps that information as she travels from universe to universe. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.14.5 Existing features partly or fully covering the use case functionality | This feature is currently not documented in the 3GPP specifications.
Concerning the user identity related aspects, the features described in the document TR 22.904 [X] can be applied. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.14.6 Potential New Requirements needed to support the use case | [PR 5.14.6-1] The 5G system shall support suitable APIs to securely provide information of a user to an immersive mobile metaverse service when the user accesses the service.
[PR 5.14.6-2] The 5G system shall support mechanisms to adapt the user assets and information stored by one immersive mobile metaverse service with the information needed, or requested, by another immersive XR media service. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.15 Use Case on Access to avatars | |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.15.1 Description | Mobile metaverse services often involve the use of digital representations (e.g. avatars, which are discussed throughout this use case). Given different use cases, the data associated with the avatars of a user is generated and stored in different mobile metaverse servers. For example, a user uses life-like avatars for e-commerce and cartoonish avatars for gaming. Network operators enabling users to obtain diverse mobile metaverse services should support avatar management. For example, network operators can leverage their existing connections to extensive mobile metaverse servers and, acting as a proxy, provide access to avatars across these servers. Compared to the model where two mobile metaverse servers define direct access APIs, the interconnect model described in this use case can utilise the 5G system capability of authenticating and authorizing third-party entities.
The advantage of having a central storage of the information related to avatars is that the same avatar could potentially be used in different mobile metaverse services. The information exposed by the central point, i.e. the 5G system, to different mobile metaverse services helps them share and use the same avatar for a user. Users would therefore benefit from using their UE/ mobile access for metaverse services because there are enablers (like this one) that provide consistency between different metaverse services.
It is noted that the storage location of avatars is subject to service agreement between the operator and the third-party entities, and hence it is out of scope of 3GPP. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.15.2 Pre-conditions | ClothingA and ClothingB are two small clothing companies that both have virtual stores and provide avatar-based shopping services. Online shoppers can use immersive real-time technology to virtually try-on apparel, accessories, or full looks on the digital representation of themselves, i.e. avatars. Their avatars are stored in mobile metaverse servers, and interoperable data formats between these servers are used for avatars.
T is a mobile network operator. Based on its service level agreements with ClothingA and ClothingB, it provides multimedia communication services to enable virtual shopping. Moreover, T behaves like a proxy and supports the exchange of avatars stored in the databases of ClothingA, ClothingB, and any other companies that have agreements with T.
Shaun, an online shopper, has used the virtual try-on service provided by ClothingA several times. His avatar, a 3D visual representation of his actual appearance, is stored in the ClothingA database.
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.15.3 Service Flows | 1. ClothingA and ClothingB register with the operator T. Shaun registers with T by a UE that has a subscription with T.
2. Shaun visits the ClothingA virtual store using his avatar stored in the ClothingA database. He is authenticated by T and ClothingA, and a multimedia communication session is established between Shaun, a shop assistant, and associated devices (e.g. AR glasses). Shaun tries on some products and sees 3D digital clothing automatically appear on himself.
3. Shaun terminates the session with ClothingA. His user profile on T’s system is updated with the information that an avatar is stored in the ClothingA database. Parameters linked to this avatar in the user profile may include
- last access time. This could potentially help a user select which avatar to use and help the 5G system determine if the avatar is still available;
- authorised mobile metaverse services;
- address (e.g. IP address). This could potentially help a trusted third party retrieve the avatar.
4. Having been authenticated by T and ClothingB, Shaun visits the ClothingB virtual store for the first time. Since it is his first visit, Shaun has no avatar available in ClothingB. ClothingB requests Shaun's avatar-related information from the 5G system.
5. The 5G system accepts the request and exposes the selected avatar-related information in Shaun’s user profile to ClothingB. The decision of what information to be exposed is subject to user consent and service agreement between third parties and T. As pre-agreed by ClothingA, the information related to the avatar stored in its database is exposed to ClothingB. The information is then provided to Shaun, based on which Shaun decides to reuse the avatar stored in the ClothingA database.
6. ClothingB sends a request for Shaun’s avatar to the 5G system. The 5G system authorises the request and provides ClothingB with the IP address of the avatar.
7. ClothingB retrieves the avatar from the ClothingA database using the given IP address.
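The avatar parameters in step 3 and the exposure steps 5-7 can be sketched as a minimal data model. This is an illustrative sketch only: the class names, fields, and consent bookkeeping below are assumptions, not 3GPP-defined structures.

```python
from dataclasses import dataclass

@dataclass
class AvatarRecord:
    """Avatar-related information kept in the user profile (step 3)."""
    avatar_id: str
    last_access_time: str       # helps select an avatar / check availability
    authorised_services: list   # mobile metaverse services allowed to use it
    address: str                # e.g. IP address for retrieval by a third party

class UserProfileExposure:
    """Sketch of steps 5-6: the 5G system exposing selected avatar
    information to a trusted third party, subject to user consent."""
    def __init__(self):
        self.records = {}       # user -> list of AvatarRecord
        self.consent = {}       # (user, third_party) -> bool

    def store(self, user, record):
        self.records.setdefault(user, []).append(record)

    def grant_consent(self, user, third_party):
        self.consent[(user, third_party)] = True

    def expose(self, user, third_party):
        # Exposure is subject to user consent and service agreement.
        if not self.consent.get((user, third_party)):
            raise PermissionError("no user consent for exposure")
        # Only selected fields are exposed; the address lets the
        # third party retrieve the avatar itself (step 7).
        return [{"avatar_id": r.avatar_id,
                 "last_access_time": r.last_access_time,
                 "address": r.address}
                for r in self.records.get(user, [])]
```

Note that the avatar content itself never transits the profile: only metadata is exposed, and the third party fetches the asset directly from the hosting database.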
5.15.4	Post-conditions
Shaun tries on products in the ClothingB virtual store with his avatar.
Shaun’s user profile on T’s system is updated to record the use of his avatar in the ClothingB virtual store.
T charges ClothingA, ClothingB, and Shaun for supporting virtual shopping sessions.
5.15.5	Existing features partly or fully covering the use case functionality
The functional requirements for user identity are captured in TS 22.101 clause 26a [4].
5.15.6	Potential New Requirements needed to support the use case
[PR 5.15.6-1] Subject to user consent, operator policy, and regulatory requirements, the 5G system shall be able to store and update the information related to digital representations for a user (e.g. last access time and address).
[PR 5.15.6-2] Subject to user consent, operator policy, and regulatory requirements, the 5G system shall support mechanisms to expose the information related to the digital representations of a user to a trusted third party.
[PR 5.15.6-3] Subject to user consent and operator policy, the 5G system shall be able to authorise a trusted third party to use the digital representations of a user.
5.16	Use Case on virtual store in a mobile metaverse marketplace
5.16.1	Description
5G technologies, especially XR communication technologies, make it possible to run a business online (with or without physical offices/stores) and offer various services. Examples include online fashion stores, drop-shipping businesses, virtual real estate agencies, virtual assistants, teaching an online course, and online fitness training. Running a business online is particularly attractive for start-ups and small and medium-sized businesses. For end consumers, visiting a market is a real feast for the senses. It is a multisensory experience that combines tradition with human contact and involves a series of decisions about which products to choose for the shopping basket.
Mobile metaverse services are expected to help transfer a world so rich in sensations to the virtual sphere. They offer rich XR-enabled multimedia communication services together with security mechanisms for data protection, user identity/profile management, and digital asset management and protection. IMS-based avatar calls are among these new features/services, where an AI avatar can be used to help facilitate social interactions. An AI avatar [61] [62] is a digital character powered by artificial intelligence, which lives in a virtual setting, like a game, social network, or online world. More frequently, avatars are designed as human-like bots that can be controlled by a real human using AI technologies and can easily engage with real humans and maintain relationships, to varying degrees, with them.
5.16.2	Pre-conditions
Magnificent Muggles, a niche fashion company, has set up virtual stores in a metaverse marketplace (a mobile metaverse service) provided by GreenMobile. The corresponding 5G communication subscriptions provided by GreenMobile include the XR-enabled multimedia communication services (including IMS-based avatar calls), which enable efficient, near-real-life interaction between the virtual shop assistant and the online shoppers, an immersive location agnostic service experience. As a fashion retailer, Magnificent Muggles expects to provide a similar, if not better, experience to online shoppers in its virtual stores. To enable this, it has its shop assistants equipped with XR devices to interact with online buyers. In the virtual stores, the end consumers can “see” the products as if they were buying them face-to-face, and they can interact live with the people selling them.
As part of the service level agreement, GreenMobile provides storage and communication services to Magnificent Muggles:
- to store the digital representations (e.g. avatars) for the virtual shop assistants;
- to assist the authentication of their employees to use the digital representations (e.g. avatars) in the XR communication when assisting online buyers;
- to render the digital representations (e.g. avatars) based on the voice, facial expression or body motion of a human user.
The service flows below illustrate how the virtual shop assistant and an online shopper interact with each other using services provided by the 3GPP system.
5.16.3	Service Flows
0.1 Magnificent Muggles registers with GreenMobile (the provider of the 5G communication services and the mobile metaverse service) the digital representations (e.g. avatars) to be used by the virtual shop assistants. Subject to regulatory requirements, the digital representations are then certified to be legally used in a certain region. The digital representations are stored at GreenMobile’s edge sites.
0.2 Mrs. Dursley, an end consumer, registers and stores her digital representations with GreenMobile to be used in the metaverse marketplace. Subject to regulatory requirements, the digital representations are then certified to be legally used in a certain region.
1. Humphrey is the shop assistant in the Magnificent Muggles virtual store. Due to the ongoing pandemic he works from home. As part of the security requirements, he needs to be verified as an employee of Magnificent Muggles before having an XR communication with online buyers.
2. Mrs. Dursley decides to pay a visit to the virtual store of Magnificent Muggles to check out the new clothing. Humphrey, the shop assistant, greets her and offers to set up an XR communication to show her around the new lines. Mrs. Dursley thinks this a good idea and agrees. Having completed the authentication of the participants, the multimedia communication session is set up between Mrs. Dursley and Humphrey as well as the associated XR devices.
3.1 For this session Humphrey uses one of the digital representations registered by Magnificent Muggles. During the session, the terminal sends the audio and video data to the network. The network renders the digital image based on the voice, facial expression, and gestures, then sends it to Mrs. Dursley’s terminal. An example of the functional flow from Humphrey to Mrs. Dursley is illustrated in figure 5.16.3-1.
1) The body motion or facial expressions of Humphrey are captured at UE1 and transmitted to the network.
2) With the received information about the user’s motion or facial expressions, the network renders the avatar (the dynamic 3D object).
3) The media data (converted from the 3D object) is then transmitted to the recipient, UE2 of Mrs. Dursley.
4) The video image (with the rendered avatar) is displayed at the screen of Mrs. Dursley’s terminal.
Figure 5.16.3-1: An example of avatar call functional flow (image rendering at the network)
NOTE: It is also possible for UE1 to send a video stream to the network, with which the network can render the avatar (the dynamic 3D object). This is particularly useful for UEs with limited capability.
3.2 Mrs. Dursley downloads, from the network, one of her registered digital representations to be used for the XR communication. During the session, the rendering is done at the terminal side. An example of the functional flow from Mrs. Dursley to Humphrey is illustrated in figure 5.16.3-2. Note that in this option the required 3D avatar model needs to be made available at all the recipients (UE1 in this example).
1) The body motion or facial expressions of Mrs. Dursley are captured at UE2 and transmitted to the recipient via the network.
2) With the received information about Mrs. Dursley’s motion or facial expressions, UE1 renders the avatar (the dynamic 3D object). The video image (with the rendered avatar) is displayed at the screen of Humphrey’s terminal.
Figure 5.16.3-2: An example of avatar call functional flow (image rendering at the receiving side)
NOTE: This use case shows the example of rendering video-based avatar media to a 3D avatar. It is also possible to render other media to a 3D avatar.
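The choice between the two avatar-call options above (rendering in the network, figure 5.16.3-1, versus rendering at the receiving side, figure 5.16.3-2) could be made by a simple policy. The sketch below is purely illustrative: the function name, inputs, and bitrate threshold are assumptions, not part of this TR.

```python
def select_rendering_point(recipients_have_model, sender_sends_motion_data,
                           uplink_budget_kbps):
    """Hypothetical policy for choosing where the 3D avatar is rendered.

    - Rendering at the receiver (fig. 5.16.3-2) needs the 3D avatar model
      at every recipient; only motion/expression data then transits.
    - Otherwise the network renders (fig. 5.16.3-1), either from uploaded
      motion data or, per the NOTE, from a raw video stream sent by a
      limited-capability UE (which costs more uplink)."""
    if recipients_have_model:
        return "render_at_receiver"
    if sender_sends_motion_data:
        return "render_in_network_from_motion"
    # Limited-capability UE uploads video; check the uplink can carry it.
    if uplink_budget_kbps >= 2000:
        return "render_in_network_from_video"
    raise RuntimeError("insufficient uplink for video-based avatar call")
```

The design point is simply that the 3D model location and the sender's capture capability, not the recipient's display, determine where rendering happens.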
5.16.4	Post-conditions
The 3GPP system with a combination of various technologies offers the users an immersive shopping experience equivalent to a face-to-face purchase in a crowded market.
5.16.5	Existing features partly or fully covering the use case functionality
The service requirements on IMS Multimedia Telephony Service and supplementary services have been documented in TS 22.173 since Rel-7, many of which have been implemented in stage-2 and stage-3 WGs. The requirements on 3GPP IMS Multimedia Telephony Service are captured in TS 22.261 [5] clause 6.39 as a result of Rel-18 work.
On the user identity related aspects, there are several features defined including:
- Support of Multi-device and Multi-Identity in IMS MMTEL service is captured in TS 22.173 clause 4.6 [3]:
The support of multiple devices is inherent in IMS. In addition, a service provider may allow a user to use any public user identities for its outgoing and incoming calls. The added identities can but do not have to belong to the served user. Identities may be part of different subscriptions and different operators.
- TS 22.101 [4] has specified in clause 26a a set of service requirements on User Identity:
Identifying distinguished user identities of the user (provided by some external party or by the operator) in the operator network enables an operator to provide an enhanced user experience and optimized performance as well as to offer services to devices that are not part of a 3GPP network. The user to be identified could be an individual human user, using a UE with a certain subscription, or an application running on or connecting via a UE, or a device (“thing”) behind a gateway UE.
Network settings can be adapted and services offered to users according to their needs, independent of the subscription that is used to establish the connection. By acting as an identity provider, the operator can take additional information from the network into account to provide a higher level of security for the authentication of a user.
The 3GPP System shall support to authenticate a User Identity to a service with a User Identifier.
- Clause 8 of TS 22.261 [5] specifies the security related requirements covering aspects such as authentication and authorization, identity management, and data security and privacy.
The functional requirement and performance KPIs in support of XR applications are mainly captured in TS 22.261 [5]:
- clause 7.6.1 AR/VR;
- clause 6.43 Tactile and multi-modal communication service;
- clause 7.11 KPIs for tactile and multi-modal communication service
In support of metaverse services, additional considerations need to be given on the following aspects:
- securely register and store the digital representations (e.g. avatars) for the users. The user could be an individual human user using a UE with a certain subscription, or an application running on or connecting via a UE, or a device behind a gateway UE. The user could also be a third party, which is typically an enterprise customer having service level agreement with the operator and interacting with the 3GPP network via an application server.
- assist the authorization of the use of a third party’s digital assets (e.g. the digital representations/avatars) in the XR communication. The third party is also involved in the procedure to certify the user identity (e.g. an employee of the company).
- when required, render the digital representations (e.g. avatars) based on the voice, facial expression or gestures in the live communication video.
5.16.6	Potential New Requirements needed to support the use case
[PR 5.16.6-1] Subject to user consent, the 5G system shall support mechanisms to securely register, store and update the digital assets for a user.
NOTE 1: The user could be a human user using a UE with a certain subscription, or an application running on or connecting via a UE, or a device behind a gateway UE. The user could also be a third party, which is typically an enterprise customer having service level agreement with the operator and interacting with the 3GPP network via an application server.
[PR 5.16.6-2] Subject to regulatory requirements and operator’s policy, the 5G system shall provide suitable and secure means to allow a trusted third party to authorize the use of the digital assets (that belong to the third-party enterprise customer) by a user.
NOTE 2: In a typical example the user is an employee of the third-party enterprise customer.
[PR 5.16.6-3] The 5G system shall be able to collect charging information per UE for managing (e.g. register, store and update) the digital assets for an end user (e.g. typically a human user with a certain subscription).
[PR 5.16.6-4] The 5G system shall be able to collect charging information per application for managing (e.g. register, store and update) the digital assets for the third party (e.g. typically an enterprise customer having service level agreement with the operator).
[PR 5.16.6-5] Subject to regulatory requirements and user consent, the 5G system shall support real-time transmission, between a UE and the network, of the body movement information (e.g. body motion or facial expressions) of a human user in order to ensure immersive voice/audio and visual experience.
NOTE 3: The body movement information (e.g. body motion or facial expressions) of a human user is used for rendering of the avatar of this user.
[PR 5.16.6-6] Subject to regulatory requirements, user consent and operator’s policy, the IMS shall support the capabilities of rendering the avatar based on the body movement information (e.g. body motion or facial expression) of a human user.
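The charging requirements above (PR 5.16.6-3 per UE, PR 5.16.6-4 per application) amount to keying charging events for digital-asset management by two kinds of subject. The sketch below is a high-level illustration only; the class and field names are assumptions and do not correspond to any actual 3GPP charging interface.

```python
from collections import defaultdict

class AssetChargingCollector:
    """Illustrative collector for digital-asset management charging:
    per UE for end users (PR 5.16.6-3), per application for third
    parties with a service level agreement (PR 5.16.6-4)."""
    def __init__(self):
        self.per_ue = defaultdict(list)
        self.per_app = defaultdict(list)

    def record(self, operation, subject_type, subject_id, units=1):
        # operation is e.g. "register", "store" or "update"
        event = {"op": operation, "units": units}
        if subject_type == "UE":
            self.per_ue[subject_id].append(event)
        elif subject_type == "application":
            self.per_app[subject_id].append(event)
        else:
            raise ValueError("unknown charging subject")

    def total(self, subject_type, subject_id):
        src = self.per_ue if subject_type == "UE" else self.per_app
        return sum(e["units"] for e in src[subject_id])
```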
5.17	Use Case on Work delegation to autonomous virtual alter ego
5.17.1	Description
Artificial Intelligence (AI) is becoming more and more popular in many areas, especially where humans cannot handle complicated tasks well (e.g. factories, vehicles, robots, mobile devices). This trend is likely to continue, and AI will be applied to even more areas. In addition to the rapid expansion of AI, the AI technology itself is also improving. AI that can express emotions like humans and AI that can communicate naturally are now emerging. Given these trends, AI could one day be used not only for industrial use cases, but also as our personal partner and personal assistant, performing many of the tasks around us.
This use case proposes communication with an autonomous virtual alter ego, which is an AI-based digital representation acting autonomously on behalf of the user in the mobile metaverse services. For example, the user’s autonomous virtual alter ego autonomously sends mail to clients on the user’s behalf. The alter ego can also autonomously communicate with the user, other physical users, and other alter egos by using the network capabilities based on the user’s 3GPP subscription. Therefore, the use of the network by the alter ego has to be captured correctly by the network from a charging point of view.
NOTE: The term "autonomous virtual alter ego" means an AI-based digital representation behaving autonomously on behalf of a user herself/himself in the mobile metaverse services.
All the experience and knowledge gained both in the physical world and in the metaverse will be shared between the alter ego and its user, thus creating more than double the opportunities to play multiple roles simultaneously. This autonomous virtual alter ego concept aims to improve emotional well-being, health, and life satisfaction by enabling users to perceive many opportunities in life, such as balancing work and family and participating in many communities simultaneously.
5.17.2	Pre-conditions
John has a UE which has connectivity to the 5GS based on a subscription with an MNO and a contract with an autonomous virtual alter ego service provider. The service settings and parameters for this alter ego service are stored in the user’s subscription data.
John’s virtual alter ego has been trained by the autonomous virtual alter ego service provider, so he can enjoy the alter ego application via his UE.
There are two kinds of application servers. One is for the alter ego application. The other is for the other applications to which the alter ego application connects for executing tasks.
The autonomous virtual alter ego service provider is trusted by the MNO, and the alter ego can use the network capabilities autonomously based on the user’s subscription with the MNO.
NOTE: The autonomous virtual alter ego application server does not always have connectivity to the internet (e.g. when the alter ego service is operated on edge servers). In such cases, the alter ego application server connects to other application servers via the 5GS rather than via the internet.
The MNO offers a service enabler that allows John to request that the network limit the amount of resources his alter ego is able to consume on his behalf. The service enabler also provides John with storage space that he can use to store application-specific data and information about himself in the network. John is able to configure the enabler to give the virtual alter ego limited access to John’s information.
5.17.3	Service Flows
1. John has a face-to-face (F2F) appointment with an important client at the client's office, so he cannot attend an internal web meeting scheduled at the same time. He connects to the alter ego application via his UE and asks his alter ego to show up and participate in the internal web meeting on his behalf.
2. His virtual alter ego checks the network resources on the 5GS and the computing resources on the alter ego server. It then judges whether the requested task can be processed or not. The virtual alter ego checks with the enabler server to see what information about John it is able to access.
3. If the alter ego can complete the tasks, the information from the enabler server is used to train the alter ego, and the alter ego starts the tasks. Otherwise, the alter ego proposes examples of tasks it can do with the current resources, so that John can reconsider the request. For example, the alter ego can only listen during the meeting and take some notes but cannot say anything. Once John and the alter ego agree on what task(s) the alter ego will perform, John silences or turns off his UE so that he can focus on the F2F meeting.
4. Before the alter ego attends the meeting, it accesses the web meeting server as John’s alter ego via the internet or the 5GS, after receiving permission from John and the web meeting server. When company-internal information or some other information is needed for the meeting, the alter ego asks the enabler to permit access to the information or asks John to permit access to it.
5. At the meeting, the alter ego explains things and asks questions or makes comments to the other attendees (including physical humans and other virtual alter egos).
6. After the meeting, the alter ego autonomously reports to John via the 5GS by message or on a call. He reads or listens to the report and returns feedback and new requests by message or on a call, if any.
7. When the alter ego communicates via the 5GS, the charging information is collected and linked to the user’s charging information, so that the user subscribing to the alter ego service is charged.
Figure 5.17.3-1: Alter Ego Service Flow
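The feasibility check in steps 2-3, where the alter ego weighs network resources, server compute, and enabler-permitted information before accepting or downgrading a task, can be sketched as follows. All task names, costs, and thresholds are hypothetical, chosen only to illustrate the decision.

```python
def evaluate_delegation(requested_tasks, network_headroom, compute_headroom,
                        accessible_info):
    """Illustrative sketch of steps 2-3 of the alter ego service flow.
    Returns (accepted tasks, cheaper alternatives proposed to the user)."""
    # (network units, compute units) per task -- invented for illustration.
    cost = {"attend_and_speak": (5, 8), "listen_and_take_notes": (2, 3)}
    accepted, proposals = [], []
    for task in requested_tasks:
        net, cpu = cost[task]
        # Speaking on John's behalf needs enabler-permitted context.
        needs_info = task == "attend_and_speak"
        if (net <= network_headroom and cpu <= compute_headroom
                and (not needs_info or "meeting_context" in accessible_info)):
            accepted.append(task)
            network_headroom -= net
            compute_headroom -= cpu
        else:
            # Propose a reduced task John can reconsider (step 3's example:
            # only listen and take notes, without speaking).
            proposals.append("listen_and_take_notes")
    return accepted, proposals
```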
5.17.4	Post-conditions
After receiving the feedback from John, the alter ego is retrained. The autonomous virtual alter ego then becomes more accurate and finishes tasks much more quickly. As a result, the time available to John in life is more than doubled.
5.17.5	Existing features partly or fully covering the use case functionality
AIML model transfer frameworks documented in TR 23.700-80 [51] can be applied to this use case.
IMS MMTEL services documented in TS 22.173 [3] can be applied to this use case.
5.17.6	Potential New Requirements needed to support the use case
[PR 5.17.6-1] The 5G system shall be able to provide a means for a subscriber to authorize a third party to use a subscriber’s digital representation (e.g., avatar) and to access multimedia communication services on behalf of the subscriber.
[PR 5.17.6-2] The 5G system shall be able to collect charging information associated with communication involving a digital representation associated with the subscriber.
5.18	Use Case on virtual meeting room in financial services
5.18.1	Description
Meeting rooms in banks provide a private place for customers and financial managers to communicate, where a financial manager can provide customized information on the financial products suitable for a customer. Customers may find a dedicated room for consulting and signing contracts safer, and the user experience is better. However, meeting rooms are a limited resource in a bank, and customers need to go to the bank themselves for consulting, which takes more time and resources. A virtual bank meeting room offered by a mobile metaverse service can overcome this limitation.
The virtual banking space can be designed by the consumers based on their user preferences. Consumers can be represented by their digital representations (e.g. avatars) as they use these mobile metaverse services. Consumers can have eye contact or observe each other’s body movements in a virtual environment, generating a friendly face-to-face service experience. With this service option, bank branches are freed from the physical limitations of space and location.
5.18.2	Pre-conditions
Each user has a unique digital representation (e.g. avatar) in the mobile metaverse service. Bank R offers consumers a virtual bank as a mobile metaverse service, a location agnostic service experience. This service requires a high level of security in mobile communication as the content is sensitive. Users have their own digital representations (e.g. avatars) that they use to represent themselves when they use the mobile metaverse service, and these avatars are mapped to their real identities.
5.18.3	Service Flows
1. Frank is a very active user of mobile metaverse service X; he does his daily work and entertainment making use of this mobile metaverse service X using his avatar.
2. Bank R has a virtual branch offered as mobile metaverse service X, in which the bank provides financial services and can offer different financial products to consumers based on their individual preferences. Frank is a VIP customer of Bank R. Frank is considering some financial products, and he needs to consult a professional financial manager in the virtual bank.
3. Frank enters the virtual bank branch using his avatar. Bank R identifies the user Frank, represented by his digital representation (e.g. avatar), and authorizes it by means of the 5GS to make sure the real identity behind this avatar is indeed Frank.
4. The 5GS informs Bank R that this avatar is authenticated and authorized to represent Frank, and that this digital representation (e.g. avatar) is authorized to represent Frank in performing financial actions. Bank R receives this information and gives this digital representation (e.g. avatar) representing Frank access to a customized VIP consulting room. In this room, Bank R can provide consulting and financial services to Frank.
5. After the authorization, the 5GS automatically updates the security mechanisms (such as encryption algorithms) associated with the PDU session to guarantee the security of the communication services used to deliver this financial service.
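Step 5, selecting stronger communication security for a sensitive service as in PR 5.18.6-2, can be pictured as a simple policy lookup. The profile contents below (algorithm names, re-authentication intervals) are examples only, not a 3GPP security profile.

```python
def select_session_security(service_type):
    """Illustrative mapping from service sensitivity to the security
    mechanisms applied to the session carrying it. A financial service
    gets a stricter profile than the default metaverse session."""
    profiles = {
        "financial": {"encryption": "AES-256", "integrity_protection": True,
                      "reauth_interval_s": 300},
        "default":   {"encryption": "AES-128", "integrity_protection": True,
                      "reauth_interval_s": 3600},
    }
    # Unknown service types fall back to the default profile.
    return profiles.get(service_type, profiles["default"])
```

The point of the sketch is only that the security level is chosen per service, after authorization, rather than being fixed for all traffic.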
5.18.4	Post-conditions
Frank had a safe and realistic experience using his digital representation (e.g. avatar) in the virtual meeting room.
5.18.5	Existing features partly or fully covering the use case functionality
None.
5.18.6	Potential New Requirements needed to support the use case
[PR 5.18.6-1] Subject to operator policy and national or regional regulation, the 5G system shall support identification of digital representations (e.g. avatars) associated with users, for mobile metaverse services.
[PR 5.18.6-2] Subject to operator policy and national or regional regulation, the 5G system shall support different communication security mechanisms according to the security requirements of different services.
5.19	Use Case on Privacy-Aware Dynamic Network Exposure in Immersive Interactive Experiences
5.19.1	Description
With the proliferation of APIs in existing mobile applications already creating an extensive market for application exposure, API integration in emerging Metaverse applications and features is likely to become a major means of enhancing extended reality functions, building on already-existing API development. Given the importance of consistent, reliable network access and of the low-latency connections necessary to generate and maintain immersive Metaverse experiences, one can reasonably expect the development of APIs supporting network exposure for configuring and optimizing network features for a diverse array of emerging functions in extended reality interactions. As 5G begins to support VR, AR, and MR interactions through the cellular network, questions surrounding the efficiency and trustworthiness of network exposure to application developers abound.
In particular, the exposure of network characteristics and the development of network-focused applications raise important questions around the privacy of user data, since sensitive data about users’ internet usage could reveal personally identifiable information about their location, environment, behaviour, or specific activities. This concern extends beyond industry best practices and into emerging requirements from regulations such as the GDPR [52], the CCPA [53], and other emerging national and international privacy regulation frameworks, which specify the right of individuals to privacy across the lifecycle of data that could reveal personally identifiable information in a broad range of contexts. It is thus incumbent on this body to proactively standardize the privacy features of the emerging 5GS in the context of APIs, to ensure that such network exposure in application contexts does not expose providers or users to undue risks or liability.
5.19.2	Pre-conditions
The following pre-conditions and assumptions apply to this use case:
1. Jenna is developing an application that uses potentially personally identifiable information.
2. Jenna is aware of the existence and relevance of tuneable network characteristics to improve or augment an immersive experience, e.g., sufficient tools exist to modify characteristics like streaming bitrate in immersive contexts.
3. Jenna has access to exposed APIs allowing her to deploy these features in relevant experiences for immersive interaction.
5.19.3	Service Flows
1. Jenna develops an application that uses sensitive data, e.g., an application that uses the real-time location and/or environmental features of users’ appearance and surroundings to generate a personal digital representation (e.g. avatar) in a mobile metaverse service activity.
2. Jenna uses an API exposing tuneable network characteristics to carry out some function, e.g., dynamically adjust the streaming resolution of generated mobile metaverse media (e.g. avatar/hologram) or the streaming bitrate of the mobile metaverse media (e.g. avatar) in motion, based on higher-level network characteristics accessible in real time through the API.
3. Jenna develops an application that sends user information through the application to the network provider. Jenna does so in a way that is compliant with existing privacy transmission, storage, and processing standards. This means that Jenna’s application considers relevant privacy-preserving features such as informed consent to process, transmit, store, and appropriately delete any personally identifiable information collected and ingested during the flow.
4. The application uses this information to optimize a network-level feature such as streaming bitrate corresponding to a tuneable knob through the API. The network provider also considers relevant privacy-preserving features ingested as part of the data exchanged during this process.
5. When ingesting potentially personally identifiable information at the network and/or application level, the application provider, user, and network provider receive transparent, verifiable guarantees that data has been processed, stored, and transited in compliance with existing regulations within the user’s jurisdiction.
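The flow in steps 2-5 can be sketched as an exposure API that enforces informed consent before any tuning request is honoured, and that records an auditable guarantee about how personally identifiable information was handled. This is an illustrative sketch: the class, methods, and audit fields are assumptions, not actual NEF/CAPIF APIs.

```python
class PrivacyAwareExposureAPI:
    """Hypothetical privacy-aware network exposure API: applications tune
    a network characteristic (here, streaming bitrate) only for users who
    gave informed consent, and every request leaves a verifiable record."""
    def __init__(self):
        self.consent = set()   # user ids with informed consent on file
        self.audit_log = []    # transparent processing guarantees (step 5)

    def give_consent(self, user_id):
        self.consent.add(user_id)

    def set_bitrate(self, user_id, kbps, location=None):
        if user_id not in self.consent:
            raise PermissionError("no informed consent for this user")
        # Personally identifiable inputs (e.g. location) are used only to
        # choose the setting and are not retained past the request.
        self.audit_log.append({"user": user_id, "action": "set_bitrate",
                               "pii_retained": False})
        return {"user": user_id, "bitrate_kbps": kbps}
```

A usage pattern would be: the application calls `give_consent` once the user has agreed in-app, then adjusts the bitrate per session; the audit log is what providers could surface as the "transparent, verifiable guarantee" of step 5.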
5.19.4	Post-conditions
1. Jenna’s digital representations (e.g. avatars) and other personally identifiable information generated through her application can be safely exchanged through network exposure APIs without compromising the privacy of users or the network.
2. Network providers remain compliant with existing privacy regulations and best practices.
5.19.5	Existing feature partly or fully covering use case functionality
Not applicable.
5.19.6	Potential New Requirements needed to support the use case
[PR 5.19.6-1] Subject to national/regional regulations, and user consent, the 5G System shall be able to process and expose information from UEs related to user’s location, user’s body, and user’s environment, e.g., user’s home, user’s immediate vicinity.
NOTE: This requirement does not affect the ability of regulatory services, e.g., legal intercept service, to access such information without consent of the user.
5.20	Use Case on Immersive Tele-Operated Driving in Hazardous Environment
5.20.1	Description
Operating vehicles, lifting devices, or machines in an industrial environment is hazardous when achieved manually and locally by a human. Depending on the environment, operators are exposed to dangerous material, toxic fumes, extreme temperatures, landslide risks, radioactivity, etc.
Automated guided vehicles (AGVs) already exist, and it is expected that human operators can take remote control of such moving vehicles.
In this use case, it is proposed to leverage 5G to provide an end-to-end system in which a remote user controls a moving device (vehicle, lifting device, robot, etc.) with an immersive cockpit displayed on a virtual reality head-mounted display and haptic gloves for control. Furthermore, the cockpit is complemented with information from the digital twin of the place where the user operates (e.g., sensors in a factory, type of material around, other moving vehicles or persons).
The use case improves user safety and makes the operations even more accurate by merging additional information from a digital twin. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.20.2 Pre-conditions | Bob works in a seaport; he operates a lifting device. The place in which he is operating is surrounded by cranes, machines, containers, pipes, and barrels containing hazardous substances.
A new mobile metaverse service is available: instead of locally controlling the device, Bob is installed in a safe remote location from which he is working. The surrounding information is available through a digital twin of the seaport and can come from various sources (IoT sensors, CCTV cameras, connected machines, and other vehicles).
In order to maximize Bob’s efficiency, the metaverse service experience delivered by the system is real-time, with imperceptible latency. This use case includes both location-related and location-agnostic service experience examples.
The mobile metaverse service Bob uses for teleoperation is running on a mobile metaverse server. In addition, Bob is equipped with a head-mounted display and haptic gloves to remotely control the vehicle. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.20.3 Service Flows | 1. This morning, Bob stayed home because his boss informed him about a potential hazard at the seaport that was identified through a sensor on a pipe. Unfortunately, the exact nature and location of the hazard on the pipe are not known. So, Bob decides to remotely inspect the seaport before his boss and local public authorities arrive to check.
2. He puts on his head-mounted display on which a cockpit environment is displayed from the mobile metaverse server: a virtual control panel appears in front of him. He can see his hands and the control panel in the cockpit. Bob’s application is connected to the mobile metaverse server which enables him to use the service.
3. Bob can tell the mobile metaverse server which surrounding information from the digital twin he wants to monitor. He decides to focus on the 3D representation of the pipe and to get real-time sensor information from it, as well as live data from the ambient temperature and gas sensors. The mobile metaverse media also displays predicted data indicating that the temperature is rising, the gas concentration is increasing, and that there is a high risk of explosion in less than 10 minutes if this continues. This surrounding information is integrated with other display elements in the cockpit, and he can anchor it in his field of view (FOV).
4. While driving along the seaport by remotely controlling the lifting device via its digital twin in the metaverse server, Bob can also see the (hidden) content of other pipes. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.20.4 Post-conditions | Thanks to the 5G mobile metaverse “Tele-operated Driving” service, Bob has been able to drive the vehicle remotely in a reactive way avoiding dangers and finding the leak with the help of the information provided via the digital twins. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.20.5 Existing feature partly or fully covering use case functionality | The use case related to traffic flow simulation in clause 5.2 already provides requirements and KPIs related to the operation of a moving UE, similar to an AGV. However, that use case does not envision the use of remote control, e.g., using haptic devices and HMD, which trigger new requirements.
The use case related to critical healthcare services in clause 5.10 captures the usage of HMDs and haptic devices with related requirements and KPIs, which can be generalized to industrial operations. However, that use case does not consider time-critical decisions based on surrounding moving objects in an open area. Nor does it rely on real-time digital twin updates to track the characteristics of the environment (e.g. information about pipe content). |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.20.6 Potential New Requirements needed to support the use case | [PR 5.20.6-1] The 5G system shall be able to provide a means to associate data flows related to one or multiple UEs with a single digital twin maintained by the mobile metaverse service.
[PR 5.20.6-2] The 5G system shall be able to provide a means to support data flows from one or multiple UEs to update a digital twin maintained by the mobile metaverse service.
[PR 5.20.6-3] Subject to regulatory requirements and operator’s policy, the 5G system shall be able to support data flows directed towards one or multiple UEs as a result of a change in a digital twin maintained by the mobile metaverse service, so that physical objects could be affected via actuators.
NOTE 1: How an application actually operates on physical objects upon receiving a command via the mobile metaverse service, e.g. using actuators or changing environmental control configurations, is out of scope of the 5G system. In addition, regulations and/or other standards could apply to remote operations (e.g. based on a specific industry).
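The data-flow relationships stated in PR 5.20.6-1 to PR 5.20.6-3 can be sketched as a minimal data model: several UE flows are associated with one digital twin, uplink flows update the twin state, and a twin change fans out towards the associated UEs. All class and field names below are assumptions for illustration, not a 3GPP-defined interface.

```python
# Minimal sketch (assumed data model) of PRs 5.20.6-1/-2/-3:
# - associate(): bind a UE's data flows to one digital twin (PR-1)
# - ingest():    uplink flow from an associated UE updates the twin (PR-2)
# - command_targets(): a twin change is pushed back to associated UEs (PR-3)

class DigitalTwin:
    def __init__(self, twin_id: str):
        self.twin_id = twin_id
        self.state: dict = {}
        self.ues: set = set()            # UEs whose flows feed this twin

    def associate(self, ue_id: str) -> None:
        self.ues.add(ue_id)

    def ingest(self, ue_id: str, update: dict) -> None:
        """Apply an uplink update, but only from an associated UE."""
        if ue_id in self.ues:
            self.state.update(update)

    def command_targets(self) -> set:
        """Downlink: UEs to notify when the twin changes (e.g. actuators)."""
        return set(self.ues)

twin = DigitalTwin("lifting-device-17")
twin.associate("ue-camera-1")
twin.associate("ue-controller")
twin.ingest("ue-camera-1", {"gas_ppm": 412})
print(twin.state)                         # {'gas_ppm': 412}
print(sorted(twin.command_targets()))     # ['ue-camera-1', 'ue-controller']
```

How the 5G system transports these flows (e.g. QoS handling per flow) is what the PRs address; the sketch only shows the association logic on the application side.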
[PR 5.20.6-4] The 5G system shall be able to support the following KPIs for remotely controlling physical objects via the mobile metaverse service.
Use case: Metaverse-based Tele-Operated Driving

| Flow | Max allowed end-to-end latency | Service bit rate: user-experienced data rate | Reliability | Area traffic capacity | Message data volume (bits) | Transfer interval | Position accuracy | UE speed | Service area |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| UL (NOTE 5) | [100] ms [25] (NOTE 1) | [10~50 Mbit/s] [25] | 99% [25] | [~360 Mbit/s/km2] (NOTE 4) | ~8 Mbit/s video stream; four cameras per vehicle (one for each side): 4×8 = 32 Mbit/s. Sensor data (interpreted objects), assuming 1 kB/object/100 ms and 50 objects: 4 Mbit/s [25] | 20~100 ms [25] (NOTE 2) | [10] cm [25] | [10-50] km/h (vehicle); stationary/pedestrian (user) | Up to 10 km radius [25] (NOTE 3) |
| DL (NOTE 5) | [20] ms [25] | [0.1~0.4 Mbit/s] [25] | 99.999% [25] | [~4 Mbit/s/km2] (NOTE 4) | Up to 8 Kb per message [25] | 20 ms [25] (NOTE 2) | [10] cm [25] | [10-50] km/h (vehicle); stationary/pedestrian (user) | Up to 10 km radius [25] (NOTE 3) |
| Haptic feedback | 1-20 ms (NOTE 6) | 16 kbit/s-2 Mbit/s (without haptic compression encoding); 0.8-200 kbit/s (with haptic compression encoding) (NOTE 6) | 99.999% (NOTE 6) | [~20 Mbit/s/km2] (NOTE 4) | 2-8 (1 DoF) (NOTE 6) | | | Stationary/pedestrian (user) | Up to 10 km radius [25] (NOTE 3) |

NOTE 1: The end-to-end latency refers to the transmission delay between a UE and the mobile metaverse server or vice versa, not including sensor acquisition or actuator control on the vehicle side, nor processing and rendering on the user side (estimated additional 100 ms in total). The target end-to-end user-experienced maximum delay depends on the reaction time of the remote driver (e.g. at 50 km/h, 20 ms means about 27 cm of remote vehicle movement).
NOTE 2: UL data transfer interval around 20 ms (video) to 100 ms (sensor); DL data transfer interval (commands) around 20 ms.
NOTE 3: The service area for teleoperation depends on the actual deployment; for example, it can be deployed for a warehouse, a factory, a transportation hub (seaport, airport, etc.), or even a city district or city. In some cases, a local approach (e.g. application servers hosted at the network edge) is preferred to satisfy low-latency and high-reliability requirements.
NOTE 4: The area traffic capacity is calculated for one 5G network, considering 4 cameras + sensors on each vehicle. Density is estimated at 10 vehicles/km2, each vehicle with one user controlling it. [25]
NOTE 5: Based on [25]. UL is real-time vehicle data (video streaming and/or sensor data); DL is control traffic (commands from the remote driver).
NOTE 6: KPI comes from [5], clause 7.11, "remote control robot" use case.
Table 5.20.6-1: Key Performance Indicators (KPI) for mobile metaverse Tele-Operated Driving |
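Two figures in the table's notes follow from simple arithmetic and can be checked back-of-the-envelope: the vehicle displacement per unit of latency in NOTE 1, and the UL area traffic capacity in NOTE 4. The snippet below is a non-normative verification of those numbers.

```python
# Back-of-the-envelope check (not normative) of two figures in Table 5.20.6-1.

# NOTE 1: at 50 km/h, a 20 ms delay corresponds to ~27 cm of vehicle movement.
speed_m_s = 50 / 3.6                     # 50 km/h expressed in m/s
movement_cm = speed_m_s * 0.020 * 100    # distance covered during 20 ms, in cm
print(round(movement_cm, 1))             # 27.8 (the note rounds this to ~27 cm)

# NOTE 4: UL area traffic capacity with 10 vehicles/km2, each vehicle sending
# 4 cameras x 8 Mbit/s of video plus ~4 Mbit/s of sensor data.
per_vehicle_mbps = 4 * 8 + 4             # 36 Mbit/s per vehicle
print(per_vehicle_mbps * 10)             # 360 Mbit/s/km2, matching the table
```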
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.21 Use Case on Virtual Emergency Drill over 5G Metaverse | |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.21.1 Description | An Emergency Drill is a crucial activity for governments, local municipalities, and citizens to prepare for potential disasters such as earthquakes, fires, and floods. To make the drills more effective, it is important for a wide range of people, organizations, and government entities to participate and create simulations that are as close to real-life disaster scenarios as possible. The use of a metaverse environment is expected to significantly enhance the value of these drills. With the ability to provide a more realistic experience, the Emergency Drill in the metaverse is expected to not only improve response to direct damage from emergencies, but also provide valuable data on human thoughts, decisions, and actions in actual crisis situations.
It is also important for mobile operators to anticipate traffic patterns related to confirming people's safety or evacuation actions during an emergency, and take measures to address potential data traffic congestion, overload, or failure of base stations or network equipment. The mobile network operator should be prepared not only for disasters but also for large-scale network failures. It has to be able to quickly and accurately assess the extent of the damage and its impact and take timely action to recover its networks. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.21.2 Pre-conditions | City A, known for its beautiful beaches, attracts many visitors each year. However, it is located near the sea and is at risk of suffering significant tsunami damage in the event of a major earthquake. With the challenges of providing rapid evacuation guidance for residents, saving lives, and restoring infrastructure, City A holds an annual comprehensive emergency drill. Although the drill is typically held on a holiday, the number of participants has been decreasing in recent years due to work, leisure, or COVID-19. This year, City A has decided to conduct the emergency drill in the metaverse environment to address this issue. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.21.3 Service Flows | 1. City A is planning a virtual emergency drill that participants can access from any location, such as their office, home, or even the beach.
2. Mobile operator B will provide the 5G system and anticipated operational and maintenance data for the emergency drill in the metaverse environment.
3. In the metaverse environment, a virtual disaster, such as an eruption of Mt. Fuji, is simulated, and participants, including citizens, organizations, and governments, will immediately respond by assessing the damage, conducting evacuation and rescue activities, and taking other necessary actions in the virtual space.
4. City A and designated organizations will collect various types of data during the emergency drill.
5. Additionally, Mobile operator B will collect data in the virtual network environment, taking into account actual operational and maintenance data from the real environment, such as UE mobility, overload, and out of coverage, to evaluate the impact of a disaster on the network and implement necessary countermeasures in the virtual environment. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.21.4 Post-conditions | By participating in emergency drills, citizens and organizations learn how to respond to disaster scenarios, such as evacuations and rescues, and this information can be incorporated into local government disaster preparedness plans. Additionally, mobile operators can take effective measures to counteract potential network failures and other adverse impacts. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.21.5 Existing features partly or fully covering the use case functionality | No existing features are identified. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.21.6 Potential New Requirements needed to support the use case | No potential new requirements have been identified. |
8fc4e7e237d7663b7a5c6a2b6436bde3 | 22.856 | 5.22 Use case of Mobile Metaverse Live Concert |