5.2.2.5.1 Description
This work item implements the conclusions of the Rel-18 study on the 5GS architectural and functional extensions to enable 5GS to assist the Application AI/ML operations. The normative text is defined based on the agreed conclusions on six key issues, including the proposals on aspects left to be finalized during the normative work, and ensuring consistency with other 5GS features. The agreed conclusions focus on the following aspects:
- Monitoring of network resource utilization to support the Application AI/ML operations.
- Exposure of 5GC information to an authorized 3rd party for Application AI/ML operations.
- Enhancement of external parameter provisioning in 5GC to assist the Application AI/ML operations.
- Enhancement in 5GC to enable Application AI/ML traffic transport.
- Enhancement of QoS and Policy control to support Application AI/ML data transport over 5GS.
- 5GS assistance to federated learning operation.
5.2.2.5.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/(de)activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.

This work item specifies a list of principles that apply when the 5GS assists the AI/ML operation at the application layer as specified in clause 5.46 of TS 23.501 [22], namely:
- An AF requesting 5GS assistance to AI/ML operations in the application layer shall be authorized by the 5GC using the existing mechanisms.
- Application AI/ML decisions and their internal operation logic reside at the AF and the UE application client and are out of scope of 3GPP.
- Based on application logic, it is the application's decision whether to request assistance from the 5GC, e.g. for the purpose of selecting the member UEs that participate in a certain AI/ML operation.

The activities of this work item are limited to providing assistance to AI/ML-based applications when the participating UEs are not roaming and the AI/ML operations in the application layer are conducted within a single slice. Policy and charging control as defined in TS 23.503 [24] are assumed to be used for traffic related to application AI/ML operations.

The overall objective of this item is to provide assistance by the 5GC to AI/ML operations in the application layer, which are described in clause 5.2.2.1 and specified in clause 6.40 of TS 22.261 [6]. A brief description of the capabilities specified in this work item can be found below, with further details provided in clause 5.46 of TS 23.501 [22] and clause 11.1 of TR 21.918 [2] and the references therein:
• Planned Data Transfer with QoS: this capability enables the AF to negotiate a variable time window for a planned AI/ML operation, e.g. an application data transfer with specific QoS requirements and operational conditions, via the support of the NEF.
• Enhanced external parameter provisioning: this capability enables an AF hosting an AI/ML-based application to provision enhanced Expected UE Behaviour parameters and/or Application-Specific Expected UE Behaviour parameter(s) to the 5GC by including corresponding confidence and/or accuracy levels with the expected parameters, which the UDM can check against a threshold.
• Member UE selection assistance functionality: this capability, provided by the NEF, is used to assist the AF in selecting member UE(s) for AI/ML application operations (e.g. Federated Learning) according to the AF's request, which includes a list of target member UEs and a set of filtering criteria (an illustrative sketch is provided at the end of this clause).
• Multi-member AF session with required QoS: this capability enables the NEF to map a request for a Multi-member AF session with required QoS to individual requests for an AF session with required QoS per UE address, and to interact with each UE's serving PCF on a per AF session basis.
• End-to-end data volume transfer time analytics: these analytics provide the consumer (e.g. AF, NEF) with analytics (i.e. statistics, predictions or both) on the time needed to complete the transmission of a specific data volume from UE to AF, or from AF to UE. The data volume may be the expected or observed data volume from UE to AF or from AF to UE.
• Enhanced NEF monitoring events: new NEF monitoring events are specified that are relevant to AI/ML-based operations, namely session inactivity time, traffic volume exchanged between the UE and the AF, and UL/DL consolidated data rate, i.e. the aggregated data rate across all traffic flows corresponding to the list of UE addresses of the Multi-member AF session with required QoS.

5.2.2.5.2.1 AI/ML related LCM activities

No AI/ML related LCM activities or functional entities were specified as part of this work. Instead, the activities summarized in clause 5.2.2.5.2 specify 5GS support for AI/ML related LCM activities (e.g. AI/ML model training and inference) assumed to be conducted at the application layer.
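As a minimal illustration of the member UE selection assistance capability summarized above, the following Python sketch filters an AF-provided candidate list against a set of filtering criteria. All names and attributes (MemberUE, select_member_ues, the data rate and battery fields) are hypothetical and chosen for illustration only; the actual parameters and procedures are specified in clause 5.46 of TS 23.501 [22].

from dataclasses import dataclass

@dataclass
class MemberUE:
    """Hypothetical view of per-UE information available for selection."""
    ue_id: str
    dl_data_rate_mbps: float   # currently achievable DL data rate
    battery_level_pct: float   # UE-reported battery level
    in_target_area: bool       # whether the UE is in the AF's area of interest

def select_member_ues(candidates, min_rate_mbps, min_battery_pct, max_ues):
    """Filter the AF-provided target member UE list against a set of
    filtering criteria and return at most max_ues selected UEs."""
    selected = [ue for ue in candidates
                if ue.dl_data_rate_mbps >= min_rate_mbps
                and ue.battery_level_pct >= min_battery_pct
                and ue.in_target_area]
    # Prefer UEs with the highest data rate, e.g. for an FL training round.
    selected.sort(key=lambda ue: ue.dl_data_rate_mbps, reverse=True)
    return selected[:max_ues]

In this simplified picture, the AF would receive the selected list via the NEF and could use it, e.g., to form a Federated Learning group.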
5.2.2.6 Rel-19 SA WG2 SID - Core Network Enhanced Support for Artificial Intelligence (AI)/Machine Learning (ML) (FS_AIML_CN)
5.2.2.6.1 Description
The aim of this study is to investigate and identify potential architectural and system-level enhancements to support AI/ML. Specifically, the objectives include:
- AI/ML Cross-Domain Coordination Aspects: Investigate enhancements to support AI-enabled RAN based on the conclusions of the RAN study in TR 38.843 [3]. This task will discuss whether and how to support cross-domain (i.e. UE, RAN, 5GC, OAM, and AF) collaborative AI/ML mechanisms for the aspects described below:
  - Enhancements to LCS for AI/ML-Based Positioning: Examine whether and how to consider enhancements to LCS to support AI/ML-based positioning.
  - Collaborative AI/ML Operations for Vertical Federated Learning (VFL): Determine potential enhancements needed to enable the 5G system to assist in collaborative AI/ML operations involving 5GC/NWDAF and/or AF for Vertical Federated Learning (VFL). This work will be based solely on and limited to the scope of justified use cases.
- Enhancements to Support NWDAF-Assisted Policy Control and Address Network Abnormal Behaviour:
  - Investigate additional support needed to enhance 5GC NF operations (i.e. policy control and QoS) assisted by NWDAF. This task will first identify specific use cases to define the appropriate scope. It will analyse the impacts on NWDAF (e.g. the need to understand specific NF functionality) and the compatibility of new solutions with existing analytics to determine the necessity and benefits of new solutions.
  - Study the prediction, detection, prevention, and mitigation of network abnormal behaviours, such as signalling storms, with the assistance of NWDAF.
5.2.2.6.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/(de)activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.

5.2.2.6.2.1 AI/ML related LCM activities

Refer to the activities in clause 5.2.2.7.2.
5.2.2.7 Rel-19 SA WG2 WID - Core Network Enhanced Support for Artificial Intelligence (AI)/Machine Learning (ML) (AIML_CN)
5.2.2.7.1 Description
The objective of this work item is to specify the following enhancements to the 5GS, as per the conclusions reached within the Rel-19 study:
- Enhancements to LCS to Support Direct AI/ML-Based Positioning:
  - LMF Enhancements: The LMF will be enhanced to perform location calculations based on an ML model. The triggers for data collection and model training within the LMF will be implementation-specific.
  - MTLF and LMF Enhancements: Both the MTLF and the LMF will be enhanced to facilitate ML model training for AI/ML-based positioning.
  - Procedure Development: Related procedures for data collection will be developed in coordination with RAN WGs.
- 5GC Support for Vertical Federated Learning:
  - 5GC Enhancements: The 5GC will be enhanced to support vertical federated learning (VFL), a technique that does not involve exchanging or sharing local datasets or ML models, in the following scenarios (an illustrative sketch follows this list):
    - VFL among NWDAFs within a single PLMN.
    - VFL between NWDAF(s) within a single PLMN and AF(s).
- NWDAF-Assisted Policy Control and QoS Enhancement:
  - Assistance Information: Based on PCF requests, the NWDAF may provide assistance information to the PCF to aid in the determination and modification of QoS parameters.
- NWDAF Enhancements to Support Network Abnormal Behaviours Mitigation and Prevention:
  - Signalling Storm Mitigation: The NWDAF will support signalling storm mitigation and prevention by providing analytics related to the detection and prediction of signalling storms caused by massive signalling from UEs and/or NFs.
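To make the VFL scenarios above concrete, the following sketch shows the core idea of vertical federated learning in plain Python: two participants (e.g. an NWDAF and an AF) hold different feature columns for the same samples and jointly train a split linear model by exchanging only per-sample intermediate scores and residuals, never their local datasets or model parameters. This is a minimal illustration of the principle under simplified assumptions (a linear model, a trusted coordinator, no privacy-preserving protocol), not the 3GPP procedure; all names are invented.

class VflParticipant:
    """One VFL party (e.g. NWDAF or AF) holding its own feature columns."""
    def __init__(self, features, lr=0.01):
        self.x = features                  # rows: samples, cols: local features
        self.w = [0.0] * len(features[0])  # local parameters, never shared
        self.lr = lr

    def partial_scores(self):
        # Only these intermediate results cross the participant boundary.
        return [sum(w * v for w, v in zip(self.w, row)) for row in self.x]

    def update(self, residuals):
        # Local gradient step using the shared per-sample residuals.
        n = len(self.x)
        for j in range(len(self.w)):
            grad = sum(r * row[j] for r, row in zip(residuals, self.x)) / n
            self.w[j] -= self.lr * grad

def vfl_round(parties, labels):
    """One coordinator-driven VFL training round for a split linear model."""
    scores = [sum(s) for s in zip(*(p.partial_scores() for p in parties))]
    residuals = [s - y for s, y in zip(scores, labels)]
    for p in parties:
        p.update(residuals)
    return sum(r * r for r in residuals) / len(labels)  # training MSE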
5.2.2.7.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/(de)activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.

5.2.2.7.2.1 AI/ML related LCM activities

As part of the AIML_CN work, the following enhancements are supported:

Data collection/exposure
- Data collection for Direct AI/ML positioning. Data is collected to train an ML model for LMF-based AI/ML positioning and to support inference.

AI/ML model training
- ML model training for LMF-side Direct AI/ML positioning. The LMF or the MTLF supports model training for LMF-side Direct AI/ML positioning.
- Collaborative ML model training using Vertical Federated Learning. Vertical Federated Learning is supported between NWDAFs, or cross-domain between NWDAF and Application Functions.

AI/ML model inference
- The LMF supports inference using a trained ML model for Direct AI/ML positioning, provisioned by an MTLF or locally trained at the LMF.

Performance evaluation and accuracy monitoring
- The LMF or MTLF evaluates the performance of a trained ML model by comparing the inference output against ground truth information (see the illustrative sketch below).
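The performance evaluation activity listed above can be pictured as a small monitoring helper: the inference output of a positioning model is compared against ground truth, and the model is flagged for re-training when the error exceeds a threshold. This is an illustrative sketch only; the function names, the metric and the threshold are assumptions, not specified behaviour.

import math

def horizontal_error_m(estimate, truth):
    """2D positioning error in metres between an inferred and a
    ground-truth (x, y) position."""
    return math.hypot(estimate[0] - truth[0], estimate[1] - truth[1])

def evaluate_model(samples, error_threshold_m=5.0):
    """Evaluate a trained positioning model against ground truth.
    samples: non-empty iterable of (inferred_position, ground_truth) pairs.
    Returns (mean_error_m, retrain_needed)."""
    errors = [horizontal_error_m(est, gt) for est, gt in samples]
    mean_error = sum(errors) / len(errors)
    return mean_error, mean_error > error_threshold_m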
5.2.2.8 Rel-18 SA WG3 WID - Security aspects of enablers for Network Automation for 5G - phase 3 (eNA_SEC_PH3)
5.2.2.8.1 Description
The main objective of this work is to produce normative specifications based on the conclusions of the Rel-18 study. More specifically, the following aspects are expected to be specified:
- Protection of data and analytics exchange in the roaming case.
- Security for AI/ML model storage and sharing.
- Authorization of selection of participant NWDAF instances in the Federated Learning group.
5.2.2.8.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/ (de)-activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.
5.2.2.9 Rel-19 SA WG3 SID - Security aspects of Core Network Enhanced Support for AIML (FS_AIML_CN_SEC)
5.2.2.9.1 Description
The objectives of this study are the following:
- Security Aspects on Enhancements to LCS: Study security aspects on enhancements to LCS to support AI/ML-based positioning, considering the conclusions in TR 38.843 [3] and TR 23.700-84 [4].
- Security Aspects of Cross-Domain Vertical Federated Learning (VFL):
  - Authorization of VFL Group Members: Examine the authorization of members of the VFL group.
  - Security Aspects of Enhancements on SA WG2 Architecture: Investigate security aspects of enhancements on the SA WG2 architecture to support VFL.
5.2.2.9.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/ (de)-activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.
5.2.2.10 Rel-19 SA WG4 SID - Artificial Intelligence (AI) and Machine Learning (ML) for Media (FS_AI4Media)
5.2.2.10.1 Description
The primary objective of this study item is to identify relevant interoperability requirements and implementation constraints of AI/ML in 5G media services. The specific objectives include:
- Use Cases for Media-Based AI/ML Scenarios: List and describe the use cases for media-based AI/ML scenarios, based on those defined in TR 22.874 [5].
- Media Service Architecture and Service Flows: Describe the media service architecture and relevant service flows for the scenarios. Identify the impacts on the architecture for each use case, including any potential gaps with existing 5G media service architectures. Also, describe the model operation configurations for each use case, including split AI/ML operations, and identify where certain AI/ML operations occur.
- Data Formats and Protocols: Identify and document the available data formats and suitable protocols for the exchange of different data components of various AI/ML models, such as model data, metadata, media data, and intermediate data necessary for such model operation configurations. Investigate the data traffic characteristics of these data components for delivery over the 5G system, including any needs and potentials for data rate reduction.
- Key Performance Indicators (KPIs): Identify and study key performance indicators for such scenarios, based on the initial considerations in TS 22.261 [6]. Emphasize the use cases, model operation configurations, and data components identified in earlier objectives, focusing on objective performance metrics considering the identified KPIs.
- Normative Work and Collaboration: Identify potential areas for normative work as the next phase. Communicate and align with SA WG2 and other potential 3GPP working groups on relevant aspects related to the study.
5.2.2.10.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/ (de)-activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.
5.2.2.11 Rel-18 SA WG5 WID - AI/ML management (AIML_MGT)
5.2.2.11.1 Description
The objective of this work is to specify the AI/ML management capabilities, including use cases, requirements and solutions for each phase of the AI/ML operational workflow for managing the AI/ML capabilities in 5GS (i.e. management and orchestration, 5GC and NG-RAN), including:
- Management capabilities for the ML training phase, which include control of producer-initiated ML training, data management for ML training, performance evaluation for ML training, ML entity validation, ML context management, ML entity capability discovery and ML entity testing.
- Management capabilities for the ML deployment phase, including management of ML entity loading.
- Management capabilities for the AI/ML inference phase.
A further objective is to describe the deployment scenarios of the AI/ML management capabilities, with consideration of alignment with other relevant 3GPP WGs (e.g. RAN WG3, SA WG2) and ETSI ISG ZSM.
5.2.2.11.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/(de)activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.

5.2.2.11.2.1 ML model life cycle management (LCM)

The Rel-18 specification TS 28.105 [9] addressed the AI/ML LCM management capabilities, including a wide range of use cases, corresponding requirements (stage 1) and solutions (stage 2 NRMs and stage 3 OpenAPIs) for the ML model, covering the ML model training (which also includes validation), ML model testing, AI/ML inference emulation, ML model deployment and AI/ML inference steps of the lifecycle. The specification defined an operational workflow highlighting the main steps of an ML model lifecycle, as shown in Figure 5.2.2.11.2.1-1 (an illustrative summary of the workflow is sketched at the end of this clause).

Figure 5.2.2.11.2.1-1: ML model lifecycle

5.2.2.11.2.2 ML model lifecycle management capabilities

Each step in the ML model lifecycle defined in TS 28.105 [9] (see clause 6.1), i.e. ML model training, ML model testing, AI/ML inference emulation, ML model deployment and AI/ML inference, corresponds to a number of dedicated management capabilities. The specified capabilities are developed based on corresponding use cases and requirements.

5.2.2.11.2.3 AI/ML functionalities management scenarios (relation with managed AI/ML features)

The Rel-18 specification TS 28.105 [9] (see clause 4a.2) also documented AI/ML functionalities management scenarios in relation to managed AI/ML features, which describe the possible locations of the ML training function and the AI/ML inference function across the various 3GPP system domains.
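As a reading aid for the lifecycle described in clause 5.2.2.11.2.1, the following sketch encodes the sequence of steps from TS 28.105 [9] (training, testing, optional inference emulation, deployment, inference) in plain Python. It only mirrors the ordering of the workflow figure; it is not an implementation of the specified NRMs or OpenAPIs, and the names are invented.

from enum import Enum, auto

class LifecycleStep(Enum):
    TRAINING = auto()     # includes validation; re-train on poor validation
    TESTING = auto()      # evaluate with testing data; re-train on failure
    EMULATION = auto()    # optional inference emulation before deployment
    DEPLOYMENT = auto()   # load the trained model to the inference function
    INFERENCE = auto()    # may trigger re-training via performance monitoring

def workflow(include_emulation=True):
    """Return the ordered ML model lifecycle steps per TS 28.105."""
    steps = list(LifecycleStep)
    if not include_emulation:
        steps.remove(LifecycleStep.EMULATION)
    return steps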
5.2.2.12 Rel-19 SA WG5 SID - AI/ML management - phase 2 (FS_AIML_MGT_Ph2)
5.2.2.12.1 Description
The objectives of this study item include:
- Continuation of AI/ML Studies: Continue the study on AI/ML emulation, AI/ML inference coordination, and ML knowledge transfer that are leftover from Rel-18.
- Management Aspects of AI/ML Functionalities Defined by Other 3GPP WGs:
  - AI/ML Model Transfer in 5GS: Study the management aspects (LCM, CM, and PM) of AI/ML model transfer in 5GS, as defined in SA WG1.
  - 5GS Support for AI/ML-Based Services: Investigate the management aspects of 5GS support for AI/ML-based services, as defined in SA WG2.
  - Support for AI/ML Services at Application Enablement Layer: Examine the management aspects of support for AI/ML services at the application enablement layer, as defined in SA WG6.
- Management Aspects of AI/ML Functionalities Defined by SA WG5:
  - Management Data Analytics (MDA) Phase 3: Study the management aspects (LCM, CM, and PM) of AI/ML functionalities defined by SA WG5, including MDA phase 3.
  - AI/ML Management and Operation Capabilities: Investigate the AI/ML management and operation capabilities to support different types of AI/ML technologies needed for AI/ML in 5GS, such as Federated Learning, Reinforcement Learning, Online and Offline Training, Distributed Learning, and Generative AI.
- Sustainability Aspects of AI/ML:
  - Energy Consumption/Efficiency Impacts: Evaluate the energy consumption and efficiency impacts associated with AI/ML solutions for all operational phases (training, emulation, deployment, inference).
- Trustworthiness Aspects Related to AI/ML Functionalities in 5GS:
  - Concept of Trustworthiness: Further study the concept of trustworthiness for AI/ML in the context of OAM.
  - Data for Trustworthiness Indicators: Identify and analyze data (e.g. measurements, events) to support the calculation of trustworthiness indicators.
5.2.2.12.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/ (de)-activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.
5.2.2.13 Rel-19 SA WG6 SID - Application layer support for AI/ML services (FS_AIMLAPP)
5.2.2.13.1 Description
The objective of this study is to enable support for AI/ML services at the application enablement layer. This includes the following:
- Analysis of Rel-18 and Rel-19 Requirements: Analyse the requirements in TS 22.261 [6] related to AI/ML model distribution, transfer, and training. Identify key issues and develop corresponding architectural requirements at the application enablement layer, along with potential enhancements to the application layer architecture.
- Architectural and Functional Implications: Study the architectural and functional implications on existing SA WG6 application enablers (e.g. ADAES, other SEAL services, EDGEAPP) for supporting AI/ML lifecycle operations. This includes operations such as data collection, data preparation, training, inference, and federated learning for ML models used in ADAE layer analytics.
- Potential Solutions and APIs: Identify potential solutions, including information flows and developer-friendly application enablement APIs, to satisfy the architectural requirements and enhancements identified in the previous points.
- Impact on Deployments and Business Models: Investigate the possible impacts of application layer support for AI/ML services on different deployments and business models.
5.2.2.13.2 Activities summary
In this study, 3GPP TR 23.700-82 [7] describes the AI/ML enablement capabilities for supporting vertical use cases. The agreed AIMLE activities that progressed to the normative phase are described in clause 5.2.2.14.2.
5.2.2.14 Rel-19 SA WG6 WID - Application enablement for AI/ML services (AIML_App)
5.2.2.14.1 Description
The objectives of this work include the following:

Develop a Stage 2 normative technical specification for the AIML enablement service as a new SEAL service, based on the key issues, architecture, solutions, and conclusions captured in TR 23.700-82 [7]. The Stage 2 normative technical specification will include the following aspects:
- Architecture requirements, deployment models and application architecture for AIML service enablement over 3GPP networks.
- Procedures, information flows and APIs supporting concluded solutions related to AIML enablement capabilities for AI/ML, FL (e.g. Vertical FL among VAL UEs, Horizontal FL) and Transfer Learning. Such capabilities include:
  - Support for AIML client management (e.g. registration, discovery) and selection.
  - Support for AIML service lifecycle management aspects (e.g. training, inference, data management).
  - Support for AIML operation split and ML model distribution operations.
  - Support for AIML operations in edge / distributed deployments.
- Procedures, information flows and APIs supporting concluded solutions for application layer support capabilities related to new ADAE analytics services. Such new analytics services include:
  - DN Energy Analytics.
  - Analytics for supporting FL.

Identify potential enhancements to other enablement frameworks (e.g. SEAL, EDGEAPP and CAPIF) based on the specified solutions for the above objectives.
5.2.2.14.2 Activities summary
5.2.2.14.2.1 AI/ML Functional Entities

In AIML_App, the following logical entities have been introduced within the SEAL framework:
• AIMLE server: provides a common set of services for the exposure of AIML functionality, including federated and distributed learning (e.g. FL client registration management, FL client discovery and selection), and reference points. The AIMLE services are offered to the vertical application layer (VAL) and include:
  ◦ Support for application-layer ML model related aspects, including model retrieval, model training, model monitoring, model selection, model distribution, model update and model storage / discovery.
  ◦ Assistance in AI/ML task transfer and split AI/ML operations.
  ◦ Support for HFL/VFL operations, including FL member registration, FL grouping and FL-related events notification, VFL feature alignment, HFL training, and ML model training capability evaluation for FL (HFL/VFL).
  ◦ Support for AIMLE client registration, discovery, participation and selection.
• AIMLE client: a functional entity that acts as the application client supporting AIMLE services.
• ML repository: a logical entity that serves both as a registry for AI/ML members or FL members and as a repository for application layer ML model related information. It can be accessed by the AIMLE server.

5.2.2.14.2.2 AI/ML related LCM activities

Model lifecycle enablement for AI/ML

Some AIMLE capabilities are applicable to ML model lifecycle enablement, which provides assistance for use cases where an ASP/VAL layer wants to find and use other application entities to perform some ML operations (e.g. ML model inference), with the AIMLE server acting as a mediator. An example including some capabilities related to lifecycle enablement is depicted in Annex C.4 of [34]. The support capabilities are based on the AIMLE capabilities identified in that specification. In particular, AIMLE provides:
• ML model related support capabilities such as model retrieval, discovery and storage (as covered by the procedures in clauses 8.2 and 8.11 of [34]).
• ML operation related support capabilities such as VFL/HFL and TL enablement, split AI/ML operation support, data management assistance, AI/ML task transfer, and FL assistance in member grouping, registration and event notification (as covered by the procedures in clauses 8.4, 8.6, 8.12, 8.14, 8.15-8.18 of [34]).
• AIMLE client related support capabilities, including AIMLE client registration, discovery, participation, monitoring and selection (as covered by the procedures in clauses 8.7-8.10, 8.13 of [34]).

Data collection/storage/exposure activities

Data collection in TS 23.482 [34] refers to application data collection from the UE. The EVEX mechanism can be reused for data collection as described in 3GPP TS 26.531. ML model performance degradation can be detected in the AI/ML enablement layer by leveraging ADAES, e.g. based on information collected from the analytics consumer.

In AIML_App, one possible use of AI/ML enablement is to support ML-enabled ADAES analytics services (as specified in TS 23.436). For data collection and storage related to ADAE analytics:
• The application layer Data Collection and Coordination Function (A-DCCF) coordinates the collection and distribution of data requested by the consumer (ADAE server). Data collection coordination is supported by an A-DCCF. The ADAE server can send requests for data to the A-DCCF rather than directly to the data sources. The A-DCCF may also perform data processing/abstraction and data preparation based on the VAL server requirements.
• The application layer Analytics and Data Repository Function (A-ADRF) stores historical data and/or analytics, i.e. data and/or analytics related to a past time period that have been obtained by the consumer (e.g. ADAE server). After the consumer obtains data and/or analytics, it may store historical data and/or analytics in an A-ADRF. Whether the consumer directly contacts the A-ADRF or goes via the A-DCCF is based on configuration.

Editor's note: The relation of data collection for inference is FFS.

AI/ML-related information storage and discovery for AI/ML

In AIML_App, the ML repository has been defined as 1) a registry for AI/ML members or FL members (application layer entities participating in an AI/ML operation) and 2) a repository for application layer ML model related information. The AIMLE server stores the ML model in the ML repository along with the ML model information (e.g. ML model ID). The AIMLE server can also discover ML models under certain filtering criteria (e.g. applicability to an ADAES analytics ID). The AIMLE server also registers and stores information on VAL servers, AIMLE servers or AIMLE clients which are expected to serve as AI/ML members or FL members in a model lifecycle operation (e.g. ML training, FL, TL). AIMLE clients or other VAL servers can discover the availability and capabilities of registered AI/ML members or FL members for a given ML model ID. Such discovery allows, e.g., the VAL server to identify the candidate FL members to be considered for an FL process (a registry/discovery sketch is given at the end of this clause).

Model training/delivery/(de)activation/inference emulation activities

In AIML_App, the AIMLE server or the AIMLE client (at the VAL UE side) can also be used for training an application layer ML model, e.g. for a given analytics service. Such ML model training can be used to support ADAES analytics services (as provided in TS 23.436). Based on the VAL request to provide ML-enabled analytics, ADAES may consume AIMLE services (e.g. ML model training for a given analytics ID) to derive application layer data analytics. The trained ML model can be delivered to the VAL server or ADAES via the ML model training notification API. 3GPP SA6 has not defined any procedures for model (de)activation and inference emulation.

AI/ML model inference and delivery support for AI/ML

3GPP SA6 has not defined dedicated procedures for supporting ML model inference; however, it provides assistance for registering and discovering AIMLE clients serving as ML model inference entities for a given analytics ID, model ID or split operation pipeline.

Performance evaluation and accuracy monitoring activities

Based on a VAL request, the AIMLE server provides a capability for monitoring and detecting a degradation related to an ML operation / analytics operation, translating it to an (expected or predicted) ML model performance degradation, and performing a trigger action to alleviate the issue. Such a trigger action may be either an adaptation of the AIMLE service, such as training of a new ML model for the AIMLE by the same or a different AIMLE client, or re-training of the ML model.

AIML_App has provided the basic capability for the performance monitoring activity, which is expected to be further developed in future releases.
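The two roles of the ML repository described above (registry for AI/ML / FL members and repository for ML model information) can be sketched as a simple in-memory registry. Class and method names are hypothetical; the normative procedures are those of TS 23.482 [34].

class MlRepository:
    """Toy registry/repository mirroring the two roles of the ML repository:
    (1) registry of AI/ML / FL members, (2) store of ML model information."""
    def __init__(self):
        self.models = {}    # model_id -> model info (e.g. analytics ID, URI)
        self.members = {}   # model_id -> list of registered member entities

    def store_model(self, model_id, info):
        self.models[model_id] = info

    def discover_models(self, **criteria):
        # e.g. discover_models(analytics_id="dn-energy")
        return [mid for mid, info in self.models.items()
                if all(info.get(k) == v for k, v in criteria.items())]

    def register_member(self, model_id, member):
        self.members.setdefault(model_id, []).append(member)

    def discover_members(self, model_id):
        # e.g. candidate FL members for an FL process on this model
        return self.members.get(model_id, [])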
5.2.2.15 Rel-19 CT WG4 SID - Protocol for AI Data Collection from UPF (FS_PAIDC-UPF)
5.2.2.15.1 Description
In Rel-18, the UPF offers services to the NEF, AF, SMF, NWDAF, DCCF and MFAF via the Nupf service-based interface for data collection in AI/ML related activities. In Rel-19, CT WG4 is studying "Protocol for AI Data Collection from UPF", which aims at studying UPF data collection for AI/ML and whether alternative protocols, or enhancements to the existing SBI protocol, are needed to optimize AI/ML data collection while ensuring secure, scalable and reliable data transfers across the core network.
5.2.2.15.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/ (de)-activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.
5.3 AI/ML related activities in TSG RAN Working Groups
5.3.1 AI/ML related terminology
5.3.1.1 TSG RAN WG1
The following definitions are provided in clause 3 of TR 38.843 [3]:
- AI/ML-enabled Feature: refers to a Feature where AI/ML may be used.
- AI/ML Model: A data driven algorithm that applies AI/ML techniques to generate a set of outputs based on a set of inputs.
- AI/ML model delivery: A generic term referring to delivery of an AI/ML model from one entity to another entity in any manner. Note: An entity could mean a network node/function (e.g. gNB, LMF, etc.), UE, proprietary server, etc.
- AI/ML model ID: A logical AI/ML model is identified by a Model ID. The Model ID, if needed, can be used in a Functionality (defined in functionality-based LCM) for LCM operations.
- AI/ML model Inference: A process of using a trained AI/ML model to produce a set of outputs based on a set of inputs.
- AI/ML model testing: A subprocess of training, to evaluate the performance of a final AI/ML model using a dataset different from one used for model training and validation. Differently from AI/ML model validation, testing does not assume subsequent tuning of the model.
- AI/ML model training: A process to train an AI/ML Model [by learning the input/output relationship] in a data driven manner and obtain the trained AI/ML Model for inference.
- AI/ML model transfer: Delivery of an AI/ML model over the air interface in a manner that is not transparent to 3GPP signalling, either parameters of a model structure known at the receiving end or a new model with parameters. Delivery may contain a full model or a partial model.
- AI/ML model validation: A subprocess of training, to evaluate the quality of an AI/ML model using a dataset different from one used for model training, that helps selecting model parameters that generalize beyond the dataset used for model training.
- Data collection: A process of collecting data by the network nodes, management entity, or UE for the purpose of AI/ML model training, data analytics and inference.
- Federated learning / federated training: A machine learning technique that trains an AI/ML model across multiple decentralized edge nodes (e.g. UEs, gNBs) each performing local model training using local data samples. The technique requires multiple interactions of the model, but no exchange of local data samples.
- Functionality identification: A process/method of identifying an AI/ML functionality for the common understanding between the NW and the UE. Note: Information regarding the AI/ML functionality may be shared during functionality identification. Where AI/ML functionality resides depends on the specific use cases and sub use cases.
- Management instruction: Information needed to ensure proper inference operation. This information may include selection/(de)activation/switching of AI/ML models or AI/ML functionalities, fallback to non-AI/ML operation, etc.
- Model activation: enable an AI/ML model for a specific AI/ML-enabled feature.
- Model deactivation: disable an AI/ML model for a specific AI/ML-enabled feature.
- Model download: Model transfer from the network to UE.
- Model identification: A process/method of identifying an AI/ML model for the common understanding between the NW and the UE. Note: The process/method of model identification may or may not be applicable. Note: Information regarding the AI/ML model may be shared during model identification.
- Model monitoring: A procedure that monitors the inference performance of the AI/ML model.
- Model parameter update: Process of updating the model parameters of a model.
- Model selection: The process of selecting an AI/ML model for activation among multiple models for the same AI/ML-enabled feature. Note: Model selection may or may not be carried out simultaneously with model activation.
- Model switching: Deactivating a currently active AI/ML model and activating a different AI/ML model for a specific AI/ML-enabled feature.
- Model update: Process of updating the model parameters and/or model structure of a model.
- Model upload: Model transfer from UE to the network.
- Network-side (AI/ML) model: An AI/ML Model whose inference is performed entirely at the network.
- Offline field data: The data collected from field and used for offline training of the AI/ML model.
- Offline training: An AI/ML training process where the model is trained based on collected dataset, and where the trained model is later used or delivered for inference. Note: This definition only serves as a guidance. There may be cases that may not exactly conform to this definition but could still be categorized as offline training by commonly accepted conventions.
- Online field data: The data collected from field and used for online training of the AI/ML model.
- Online training: An AI/ML training process where the model (being used for inference) is (typically continuously) trained in (near) real-time with the arrival of new training samples.
- Reinforcement Learning (RL): A process of training an AI/ML model from input (a.k.a. state) and a feedback signal (a.k.a. reward) resulting from the model's output (a.k.a. action) in an environment the model is interacting with.
- Semi-supervised learning: A process of training a model with a mix of labelled data and unlabelled data.
- Supervised learning: A process of training a model from input and its corresponding labels.
- Test encoder/decoder for TE: AI/ML model for UE encoder/gNB decoder implemented by TE.
- Two-sided (AI/ML) model: A paired AI/ML Model(s) over which joint inference is performed, where joint inference comprises AI/ML Inference whose inference is performed jointly across the UE and the network, i.e. the first part of inference is firstly performed by UE and then the remaining part is performed by gNB, or vice versa.
- UE-side (AI/ML) model: An AI/ML Model whose inference is performed entirely at the UE.
- Unsupervised learning: A process of training a model without labelled data.
- Proprietary-format models: ML models of vendor-/device-specific proprietary format, from 3GPP perspective. They are not mutually recognizable across vendors and hide model design information from other vendors when shared. Note: An example is a device-specific binary executable format.
- Open-format models: ML models of specified format that are mutually recognizable across vendors and allow interoperability, from 3GPP perspective. They are mutually recognizable between vendors and do not hide model design information from other vendors when shared.
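Several of the definitions above (model selection, activation, deactivation, switching, and fallback to non-AI/ML operation) describe LCM operations on models associated with an AI/ML-enabled feature. The following sketch captures how those terms relate, purely as an illustration of the terminology; it is not a specified procedure, and all names are invented.

class FeatureLcm:
    """Minimal holder of per-feature LCM state for one AI/ML-enabled feature."""
    def __init__(self, feature, candidate_models):
        self.feature = feature
        self.models = set(candidate_models)  # models applicable to the feature
        self.active = None                   # None => non-AI/ML fallback

    def activate(self, model_id):
        # Model selection + activation for this AI/ML-enabled feature.
        assert model_id in self.models, "model not applicable to this feature"
        self.active = model_id

    def switch(self, model_id):
        # Model switching: deactivate the current model, activate another.
        self.deactivate()
        self.activate(model_id)

    def deactivate(self):
        # Model deactivation; operation falls back to non-AI/ML behaviour.
        self.active = None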
5.3.1.2 TSG RAN WG3
The following definitions are provided in clause 16.20 of TS 38.300 [11]:
- AI/ML Model Training follows the definition of "ML model training" as specified in clause 3.1 of TS 28.105 [9].
- AI/ML Model Inference follows the definition of "AI/ML inference" as defined in clause 3.1 of TS 28.105 [9].
5.3.2 AI/ML related activities
Editor's note: Description clause will be further clarified whether to be based on WI summaries or WID objectives.
5.3.2.1 Rel-19 RAN WG1/RAN WG4 WID - Artificial Intelligence (AI)/Machine Learning (ML) for NR Air Interface (NR_AIML_air)
5.3.2.1.1 Description
The objective of this work is to provide specification support for the following aspects:
- AI/ML general framework for one-sided AI/ML models within the realm of what has been studied in the FS_NR_AIML_Air project [RAN2]:
  - Signalling and protocol aspects of Life Cycle Management (LCM) enabling functionality and model (if justified) selection, activation, deactivation, switching, fallback:
    - Identification related signalling is part of the above objective.
  - Necessary signalling/mechanism(s) for LCM to facilitate model training, inference, performance monitoring, data collection (except for the purpose of CN/OAM/OTT collection of UE-sided model training data) for both UE-sided and NW-sided models.
  - Signalling mechanism of applicable functionalities/models.
- Beam management - DL Tx beam prediction for both UE-sided model and NW-sided model, encompassing [RAN1/RAN2] (an illustrative sketch follows this list):
  - Spatial-domain DL Tx beam prediction for Set A of beams based on measurement results of Set B of beams ("BM-Case1").
  - Temporal DL Tx beam prediction for Set A of beams based on the historic measurement results of Set B of beams ("BM-Case2").
  - Specify necessary signalling/mechanism(s) to facilitate LCM operations specific to the Beam Management use cases, if any.
  - Enabling method(s) to ensure consistency between training and inference regarding NW-side additional conditions (if identified) for inference at UE.
- Positioning accuracy enhancements, encompassing [RAN1/RAN2/RAN3]:
  - Direct AI/ML positioning:
    - (1st priority) Case 1: UE-based positioning with UE-side model, direct AI/ML positioning.
    - (2nd priority) Case 2b: UE-assisted/LMF-based positioning with LMF-side model, direct AI/ML positioning.
    - (1st priority) Case 3b: NG-RAN node assisted positioning with LMF-side model, direct AI/ML positioning.
  - AI/ML assisted positioning:
    - (2nd priority) Case 2a: UE-assisted/LMF-based positioning with UE-side model, AI/ML assisted positioning.
    - (1st priority) Case 3a: NG-RAN node assisted positioning with gNB-side model, AI/ML assisted positioning.
  - Specify necessary measurements and signalling/mechanism(s) to facilitate LCM operations specific to the Positioning accuracy enhancements use cases, if any.
  - Investigate and specify the necessary signalling of necessary measurement enhancements (if any).
  - Enabling method(s) to ensure consistency between training and inference regarding NW-side additional conditions (if identified) for inference at UE for relevant positioning sub use cases.
- Core requirements for the above two use cases for AI/ML LCM procedures and UE features [RAN4]:
  - Specify necessary RAN WG4 core requirements for the above two use cases.
  - Specify necessary RAN WG4 core requirements for LCM procedures including performance monitoring.

Study objectives with corresponding checkpoints in RAN#105 (Sept '24):
- CSI feedback enhancement [RAN1]:
  - For CSI compression (two-sided model), further study ways to:
    - Improve the trade-off between performance and complexity/overhead, e.g. considering extending the spatial/frequency compression to spatial/temporal/frequency compression, cell/site specific models, CSI compression plus prediction (compared to the Rel-18 non-AI/ML based approach), etc.
    - Alleviate/resolve issues related to inter-vendor training collaboration.
  - For CSI prediction (UE-sided model), further study the performance gain over the Rel-18 non-AI/ML based approach and the associated complexity, while addressing other aspects requiring further study/conclusion as captured in the conclusions clause of TR 38.843 [3] (e.g. a cell/site specific model could be considered to improve the performance gain).
- Necessity and details of the model identification concept and procedure in the context of LCM [RAN2/RAN1].
- CN/OAM/OTT collection of UE-sided model training data [RAN2/RAN1]:
  - For the FS_NR_AIML_Air study use cases, identify the corresponding contents of UE data collection.
  - Analyse the UE data collection mechanisms identified during the FS_NR_AIML_Air study (clause 7.2.1.3.2 of TR 38.843 [3]) along with the implications and limitations of each of the methods.
- Model transfer/delivery [RAN2/RAN1]:
  - Determine whether there is a need to consider standardised solutions for transferring/delivering AI/ML model(s), considering at least the solutions identified during the FS_NR_AIML_Air study.
- Testability and interoperability [RAN4]:
  - Finalize the testing framework and procedure for one-sided models and further analyse the various testing options for two-sided models, in collaboration with RAN WG1, including at least:
    - Relation to legacy requirements.
    - Performance monitoring and LCM aspects considering use-case specifics.
    - Generalization aspects.
    - Static/non-static scenarios/conditions and propagation conditions for testing (e.g. CDL, field data, etc.).
    - UE processing capability and limitations.
    - Post-deployment validation due to model change/drift.
  - RAN WG5 aspects related to testability and interoperability to be addressed on a request basis.
  - For the Beam Management and Positioning Accuracy enhancement use cases, specify performance requirements and test cases for AI/ML LCM procedures (including performance monitoring) and UE features enabled by UE-sided models:
    - Specify necessary performance requirements and tests (including metrics) for the above-mentioned use cases.
    - Specify necessary test cases and performance requirements for the LCM procedure, including performance monitoring.
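For the spatial-domain beam prediction use case (BM-Case1) referenced above, a minimal data-driven predictor can be sketched as follows: given RSRP measurements on the sparse Set B of beams, predict the strongest beam in the full Set A from previously collected examples. The 1-nearest-neighbour lookup is a deliberately trivial stand-in for whatever AI/ML model is actually used; it only illustrates the Set B to Set A mapping, and all names are invented.

def predict_best_beam(setb_rsrp, training_data):
    """Predict the strongest Set A beam from Set B RSRP measurements.
    training_data: list of (setb_rsrp_vector, best_seta_beam_index) pairs
    collected earlier. Uses a 1-nearest-neighbour lookup as a stand-in
    for a trained AI/ML model."""
    def dist(a, b):
        # Squared Euclidean distance between two RSRP vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, best_beam = min(training_data, key=lambda t: dist(t[0], setb_rsrp))
    return best_beam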
5.3.2.1.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/ (de)-activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.
5.3.2.2 Rel-19 RAN WG2 SID - AIML for mobility in NR (FS_NR_AIML_Mob)
5.3.2.2.1 Description
The study will focus on mobility enhancement in RRC_CONNECTED mode over the air interface, following the existing mobility framework, i.e. the handover decision is always made on the network side. Mobility use cases focus on standalone NR PCell change. Both UE-side and network-side AI/ML models can be considered. The investigation is to evaluate potential benefits and gains of AI/ML aided mobility for network-triggered L3-based handover, considering the following aspects:
- AI/ML based RRM measurement and event prediction:
  - Cell-level measurement prediction, including intra- and inter-frequency (UE-sided and NW-sided model) [RAN2].
  - Inter-cell beam-level measurement prediction for L3 mobility (UE-sided and NW-sided model) [RAN2].
  - HO failure/RLF prediction (UE-sided model) [RAN2].
  - Measurement events prediction (UE-sided model) [RAN2].
  - Study the need/benefits of any other UE assistance information for the network-side model [RAN2].
- The evaluation of the AI/ML aided mobility benefits should consider HO performance KPIs (e.g. ping-pong HO, HOF/RLF, time of stay, handover interruption, prediction accuracy, measurement reduction, etc.) and complexity trade-offs [RAN2] (a sketch of two such KPIs follows this list).
- Potential AI mobility specific enhancement should be based on the Rel-19 AI/ML air interface WID general framework (e.g. LCM, performance monitoring, etc.) [RAN2].
- Potential specification impacts of AI/ML aided mobility [RAN2].
- Evaluate testability, interoperability, and impacts on RRM requirements and performance [RAN4].
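Two of the HO performance KPIs named above can be computed from a per-UE handover event log, as in the sketch below: a ping-pong handover count (a handover back to the previous cell within a short time-of-stay window) and a handover failure ratio. The window value and tuple layout are illustrative assumptions, not specified metric definitions.

def mobility_kpis(events, pingpong_window_s=2.0):
    """events: time-ordered list of (timestamp_s, source_cell, target_cell,
    success) tuples for one UE. Returns (hof_ratio, pingpong_count)."""
    failures = sum(1 for _, _, _, ok in events if not ok)
    hof_ratio = failures / len(events) if events else 0.0

    pingpongs = 0
    for prev, cur in zip(events, events[1:]):
        # Ping-pong: HO back to the cell we just left, shortly afterwards.
        if (cur[2] == prev[1] and cur[1] == prev[2]
                and cur[0] - prev[0] <= pingpong_window_s):
            pingpongs += 1
    return hof_ratio, pingpongs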
5.3.2.2.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/ (de)-activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.
5.3.2.3 Rel-18 RAN WG3 WID - Artificial Intelligence (AI)/Machine Learning (ML) for NG-RAN (NR_AIML_NGRAN-Core)
5.3.2.3.1 Description
The objective of this work is to specify data collection enhancements and signalling support within existing NG-RAN interfaces and architecture (including non-split architecture and split architecture) for AI/ML-based Network Energy Saving, Load Balancing and Mobility Optimization.

Support of AI/ML for NG-RAN, as a RAN internal function, is used to facilitate Artificial Intelligence (AI) and Machine Learning (ML) techniques in NG-RAN. The objective of AI/ML for NG-RAN is to improve network performance and user experience, through analysing the data collected and autonomously processed by the NG-RAN, which can yield further insights, e.g. for Network Energy Saving, Load Balancing and Mobility Optimization.

Support of AI/ML in NG-RAN requires inputs from neighbour NG-RAN nodes (e.g. predicted information, feedback information, measurements) and/or UEs (e.g. measurement results). Signalling procedures used for the exchange of information to support AI/ML in NG-RAN are use case and data type agnostic, which means that the intended usage of the data exchanged via these procedures (e.g. input, output, feedback) is not indicated. The collection and reporting of information are configured through the Data Collection Reporting Initiation procedure, while the actual reporting is performed through the Data Collection Reporting procedure (a schematic sketch of this exchange follows at the end of this clause). Support of AI/ML in NG-RAN does not apply to ng-eNB.

For the deployment of AI/ML in NG-RAN, the following scenarios may be supported:
- AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the NG-RAN node.
- AI/ML Model Training and AI/ML Model Inference are both located in the NG-RAN node.
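The two procedures mentioned above can be pictured as a simple subscribe/report exchange between NG-RAN nodes: the Data Collection Reporting Initiation procedure configures what is collected, and the Data Collection Reporting procedure carries the data. The sketch below is schematic only (class and method names are invented); the actual signalling is defined on the NG-RAN interfaces by RAN WG3.

class RanNode:
    """Schematic NG-RAN node supporting use-case-agnostic data collection."""
    def __init__(self, name):
        self.name = name
        self.subscriptions = {}  # request_id -> (requester, data_types)

    def initiate_reporting(self, requester, request_id, data_types):
        # Data Collection Reporting Initiation: configure what to report.
        self.subscriptions[request_id] = (requester, data_types)

    def report(self, request_id, data):
        # Data Collection Reporting: deliver the configured data. The
        # intended usage (input/output/feedback) is not indicated.
        requester, data_types = self.subscriptions[request_id]
        requester.receive(self.name, {k: data[k] for k in data_types})

class Consumer:
    """Schematic consumer, e.g. a neighbour NG-RAN node hosting inference."""
    def receive(self, source, data):
        print(f"from {source}: {data}")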
5.3.2.3.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/ (de)-activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.
5.3.2.4 Rel-19 RAN WG3 SID - Enhancements for Artificial Intelligence (AI)/Machine Learning (ML) for NG-RAN (FS_NR_AIML_NGRAN_enh)
5.3.2.4.1 Description
The objective of this study is to further investigate new AI/ML based use cases and identify enhancements to support AI/ML functionality, as well as to further discuss the Rel-18 leftovers. The detailed objectives of the study are listed as follows:
- Study two new AI/ML based use cases, i.e. Network Slicing and CCO, with existing NG-RAN interfaces and architecture (including non-split architecture and split architecture).
- Rel-18 leftovers as candidates for normative work, based on the Rel-18 principles, as follows:
  - Mobility optimization for NR-DC.
  - Split architecture support for Rel-18 use cases, based on the conclusions from the Rel-18 WI.
  - Energy Saving enhancements, e.g. Energy Cost Prediction.
  - Continuous MDT collection targeting the same UE across RRC states.
  - Multi-hop UE trajectory across gNBs.
5.3.2.4.2 Activities summary
Editor's note: This clause describes high-level AI/ML activities e.g. LCM for AI/ML, data collection/storage/exposure, model training/delivery/ (de)-activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring. Clause(s) may be added to capture details.
6 Analysis on AI/ML across 3GPP
6.1 General
This clause identifies any potential misalignments and inconsistencies for AI/ML across 3GPP, based on clause 5.
NOTE: Any RAN related aspects are subject to early coordination and feedback from TSG RAN.
6.2 AI/ML related terminology
6.2.1 Analysis on AI/ML model related terminology consistency
This clause identifies any potential misalignments and inconsistencies for AI/ML terminology across 3GPP, based on clause 5.
6.2.1.1 Analysis on ML model
The term 'ML model' has been defined differently by SA WG5, SA WG6 and RAN WG1, as illustrated in Table 6.2.1.1-1.

Editor's note: Further analysis may be needed, e.g. to determine whether a unified definition can be derived.

Table 6.2.1.1-1: Definition of ML model as defined across 3GPP WGs
- SA WG5, TS 28.105 [9]: A manageable representation of an ML model algorithm. NOTE 1: An ML model algorithm is a mathematical algorithm through which running a set of input data can generate a set of inference output. NOTE 2: An ML model algorithm is proprietary and not in scope for standardization and therefore not treated in this specification. NOTE 3: An ML model may include metadata. Metadata may include e.g. information related to the trained model, and applicable runtime context.
- SA WG6, TS 23.482 [34]: According to TS 28.105 [9], mathematical algorithm that can be "trained" by data and human expert input as examples to replicate a decision an expert would make when provided that same information.
- RAN WG1, TR 38.843 [3]: A data driven algorithm that applies AI/ML techniques to generate a set of outputs based on a set of inputs.

The following unified definition for 'ML model' is proposed:

ML model: A mathematical algorithm that applies ML techniques to generate a set of outputs based on a set of inputs. It may include metadata which consists of, e.g., information related to the model, and applicable runtime context.
NOTE: An ML model can be managed, stored, and transferred as artifacts, which may be containers, images, or proprietary file formats.
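Read literally, the proposed unified definition treats an ML model as an algorithm artifact plus optional metadata. The following minimal data structure captures that reading; the field names are invented for illustration and carry no normative meaning.

from dataclasses import dataclass, field

@dataclass
class MlModel:
    """Illustrative representation of the proposed unified 'ML model'."""
    model_id: str
    artifact: bytes                        # e.g. container image or file payload
    metadata: dict = field(default_factory=dict)
    # metadata may include, e.g., information related to the model and the
    # applicable runtime context, per the proposed unified definition.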
6.2.1.2 Analysis on ML model training
The term 'ML model training' has been defined differently by SA WG5, SA WG6 and RAN WG1, as illustrated in Table 6.2.1.2-1. RAN WG3 follows the definition of SA WG5.

Editor's note: Further analysis may be needed, e.g. to determine whether a unified definition can be derived.

Table 6.2.1.2-1: Definition of ML model training as defined across 3GPP WGs
- SA WG5, TS 28.105 [9]: A process performed by an ML training function to take training data, run it through an ML model algorithm, derive the associated loss and adjust the parameterization of that ML model iteratively based on the computed loss and generate the trained ML model.
- SA WG6, TS 23.482 [34]: According to TS 28.105 [9], ML model training includes capabilities of an ML training function or service to take data, run it through an ML model, derive the associated loss and adjust the parameterization of that ML model based on the computed loss.
- RAN WG1, TR 38.843 [3]: A process to train an AI/ML Model [by learning the input/output relationship] in a data driven manner and obtain the trained AI/ML Model for inference.
- RAN WG3, TS 38.300 [11]: AI/ML Model Training follows the definition of "ML model training" as specified in clause 3.1 of TS 28.105 [9].
- RAN WG3, TS 38.401: AI/ML Model Training follows the definition of "ML model training" as specified in clause 3.1 of TS 28.105 [9].

The following unified definition for 'ML model training' is proposed:

ML model training: A process to train an ML model by learning the input/output relationship in a data driven manner and obtain the trained ML model for, e.g., inference.
6.2.1.3 Analysis on ML model re-training
The term 'ML model re-training' has been defined differently by SA WG5, SA WG6 and RAN WG1, as illustrated in Table 6.2.1.3-1. RAN WG1 introduces two related terms, i.e. ML model parameter update and ML model update, which effectively describe ML model re-training.

Editor's note: Further analysis may be needed, e.g. to determine whether a unified definition can be derived.

Table 6.2.1.3-1: Definition of ML model re-training / ML model parameter update as defined across 3GPP WGs
- SA WG5, TS 28.105 [9]: ML model re-training: A process of training a previous version of an ML model and generate a new version.
- RAN WG1, TR 38.843 [3]: ML model parameter update: A process of updating the model parameters of a model. Model update: A process of updating the model parameters and/or model structure of a model.
- SA WG6, TS 23.482 [34]: ML model update: A process of training a new version of a ML model and updating its parameters.

The term 'ML model re-training' is proposed as the unified term, rather than using different terms such as 'ML model parameter update' or 'ML model update' with the same meaning. The following unified definition for 'ML model re-training' is proposed:

ML model re-training: A process of training a previous version of an ML model and generating a new version.
6.2.1.4 Analysis on ML model testing
The term 'ML model testing' has been defined differently by SA WG5 and RAN WG1, as illustrated in Table 6.2.1.4-1.

Editor's note: Further analysis may be needed, e.g. to determine whether a unified definition can be derived.
Editor's note: Further analysis of ML model monitoring, as per SA WG2 specifications, may be needed.

Table 6.2.1.4-1: Definition of ML model testing as defined across 3GPP WGs
- SA WG5, TS 28.105 [9]: A process of evaluating the performance of an ML model using testing data different from data used for model training and validation.
- RAN WG1, TR 38.843 [3]: A subprocess of training, to evaluate the performance of a final AI/ML model using a dataset different from one used for model training and validation. Differently from AI/ML model validation, testing does not assume subsequent tuning of the model.

The definition of 'ML model testing' in TS 28.105 [9] is proposed as the unified definition:

ML model testing: A process of evaluating the performance of an ML model using testing data different from data used for model training and validation.
6.2.1.5 Analysis on ML model inference
The term 'ML model inference' has been defined differently by SA WG5, SA WG6 and RAN WG1, as illustrated in Table 6.2.1.5-1. RAN WG3 follows the definition of SA WG5.

Editor's note: Further analysis may be needed, e.g. to determine whether a unified definition can be derived.

Table 6.2.1.5-1: Definition of ML model inference as defined across 3GPP WGs

- SA WG5 TS 28.105 [9]: A process of running a set of input data through a trained ML model to produce a set of output data, such as predictions.
- SA WG6 TS 23.482 [34]: According to TS 28.105 [9], ML model inference includes capabilities of an ML model inference function that employs an ML model and/or AI decision entity to conduct inference.
- RAN WG1 TR 38.843 [3]: A process of using a trained AI/ML model to produce a set of outputs based on a set of inputs.
- RAN WG3 TS 38.300 [11]: AI/ML Model Inference follows the definition of "AI/ML inference" as defined in clause 3.1 of TS 28.105 [9].
- RAN WG3 TS 38.401: AI/ML Model Inference follows the definition of "AI/ML inference" as defined in clause 3.1 of TS 28.105 [9].

The following unified definition for 'ML model inference' is proposed:

ML model inference: A process of running a set of inputs through a trained ML model to produce a set of outputs.
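A minimal sketch of the proposed definition, assuming the hypothetical linear model of the earlier examples, is given below: a set of inputs is run through a trained model to produce a set of outputs.

def infer(trained_model, inputs):
    """Run a set of inputs through a trained ML model to produce a set of outputs."""
    w, b = trained_model
    return [w * v + b for v in inputs]           # outputs, e.g. predictions

print(infer((2.0, 1.0), [0.5, 1.5, 2.5]))        # [2.0, 4.0, 6.0]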
6.2.1.6 Analysis on ML model activation & ML model de-activation
The terms 'ML model activation' and 'ML model deactivation' have been defined by RAN WG1, as illustrated in Table 6.2.1.6-1. SA WG5 uses the terms ML activation and ML deactivation several times in TS 28.105 [9] without defining them, but defines the related terms 'AI/ML activation' and 'AI/ML deactivation' for the inference capability of an AI/ML inference function.

Editor's note: Further analysis may be needed, e.g. to determine whether a unified definition can be derived.

Table 6.2.1.6-1: Definition of ML model activation & ML model de-activation as defined across 3GPP WGs

- SA WG5 TS 28.105 [9]: AI/ML activation: a process of enabling the inference capability of an AI/ML inference function. AI/ML deactivation: a process of disabling the inference capability of an AI/ML inference function.
- RAN WG1 TR 38.843 [3]: ML Model activation: enable an AI/ML model for a specific AI/ML-enabled feature. ML Model deactivation: disable an AI/ML model for a specific AI/ML-enabled feature.

The following unified definitions for 'ML model activation' and 'ML model deactivation' are proposed:

ML model activation: A process to enable an ML model for inference.
ML model deactivation: A process to disable an ML model for inference.

Editor's note: Further analysis is required to determine whether ML model activation and deactivation should be associated specifically with the inference capability (as per TS 28.105 [9]) or with the ML model more broadly (as per TR 38.843 [3]).
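The open question in the editor's note can be made concrete with the hypothetical sketch below: the activation flag technically sits on the inference function (the TS 28.105 [9] view), while from a consumer's perspective it is the model that is enabled or disabled for inference (the TR 38.843 [3] view). All names are invented for illustration.

class InferenceFunction:
    """Hypothetical AI/ML inference function hosting one ML model."""
    def __init__(self, trained_model):
        self.model = trained_model
        self.active = False                      # inference capability initially disabled

    def activate(self):                          # ML model activation
        self.active = True

    def deactivate(self):                        # ML model deactivation
        self.active = False

    def infer(self, x):
        if not self.active:
            raise RuntimeError("inference capability is deactivated")
        w, b = self.model
        return w * x + b

f = InferenceFunction((2.0, 1.0))
f.activate()
print(f.infer(3.0))                              # 7.0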
6.2.1.7 Analysis on ML model lifecycle
The term 'ML model lifecycle' has been defined by SA WG6, as illustrated in Table 6.2.1.7-1. However, SA WG2 TS 23.288 [8] and TR 23.700-84 [4], SA WG4 TR 26.927 [12], SA WG5 TS 28.105 [9], SA WG6 TR 23.700-82 [7], RAN WG1 TR 38.843 [3] and RAN WG3 also mention one or more phases of the ML model lifecycle without providing a clear definition of the ML model lifecycle.

Editor's note: Further analysis may be needed, e.g. to determine whether a unified definition can be derived.

Table 6.2.1.7-1: Definition of ML model lifecycle as defined across 3GPP WGs

- SA WG6 TS 23.482 [34]: The lifecycle of an ML model (also known as the ML model operational workflow) consists of a sequence of ML operations for a given ML task/job (such a job can be an analytics task or a VAL automation task). This definition is aligned with the 3GPP definition of the ML model lifecycle according to TS 28.105 [9].
- SA WG5 TS 28.105 [9]:
  - ML model training: includes initial training and re-training, as well as validation of the ML model using training and validation data. If the validation results do not meet expectations (e.g. unacceptable variance), re-training is required.
  - ML model testing: evaluates the performance of a trained ML model using testing data. If the results do not meet expectations, re-training is required before proceeding.
  - AI/ML inference emulation (optional): allows testing the inference performance of an ML model in an emulation environment before deploying it to the target network or system. If the emulation performance does not meet the target requirements, the model may require further re-training.
  - ML model deployment: involves the process of loading a trained ML model to make it available for use at the target AI/ML inference function. Deployment may not be needed if the training and inference functions are co-located.
  - AI/ML inference: performing inference using a trained ML model at the AI/ML inference function. The inference process may trigger model re-training or updates based on performance monitoring and evaluation.

The following unified definition for 'ML model lifecycle' is proposed:

ML model lifecycle: The end-to-end process typically consisting of data processing, model training, model testing, model deployment, model inference, model monitoring and model maintenance.

NOTE 1: Data processing includes collecting and preparing the data for model training and model inference.
NOTE 2: Model training includes training and validating the model before model deployment.
NOTE 3: Model testing includes testing the model before model deployment.
NOTE 4: Model deployment includes making a trained ML model available for use in the target environment.
NOTE 5: Model monitoring includes observing the performance of the model during the model maintenance process.
NOTE 6: Model maintenance includes updating the model, retraining the model and (de-)activating the model.
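The phases and typical transitions in the proposed unified definition can be captured, purely as an illustrative sketch, by the following state machine; the transition table is an assumption derived from the notes above, not normative text.

from enum import Enum, auto

class Phase(Enum):
    DATA_PROCESSING = auto()
    TRAINING = auto()            # includes validation (NOTE 2)
    TESTING = auto()
    DEPLOYMENT = auto()
    INFERENCE = auto()
    MONITORING = auto()
    MAINTENANCE = auto()

ALLOWED = {
    Phase.DATA_PROCESSING: {Phase.TRAINING},
    Phase.TRAINING: {Phase.TESTING, Phase.TRAINING},      # re-training on failed validation
    Phase.TESTING: {Phase.DEPLOYMENT, Phase.TRAINING},    # re-training on failed testing
    Phase.DEPLOYMENT: {Phase.INFERENCE},
    Phase.INFERENCE: {Phase.MONITORING},
    Phase.MONITORING: {Phase.INFERENCE, Phase.MAINTENANCE},
    Phase.MAINTENANCE: {Phase.TRAINING, Phase.INFERENCE}, # update/re-train or (de-)activate
}

def advance(current, nxt):
    if nxt not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
    return nxt

phase = Phase.DATA_PROCESSING
for nxt in (Phase.TRAINING, Phase.TESTING, Phase.DEPLOYMENT, Phase.INFERENCE):
    phase = advance(phase, nxt)
print(phase.name)                # INFERENCE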
6.2.1.8 Analysis on ML model lifecycle management
SA WG5 describes the ML model lifecycle in clause 4a.0 of TS 28.105 [9], and ML model lifecycle management capabilities for ML model training, ML model testing, ML inference emulation, ML model deployment and ML inference in clause 6.1 of TS 28.105 [9]. The terms 'ML model-based lifecycle management', 'ML-enabled functionality' and 'Functionality-based lifecycle management' have been defined by RAN WG1, as illustrated in Table 6.2.1.8-x.

Editor's note: Further analysis may be needed, e.g. to determine whether a unified definition can be derived.

Table 6.2.1.8-x: Definitions of ML model-based lifecycle management, ML-enabled functionality and Functionality-based lifecycle management as defined across 3GPP WGs

- RAN WG1 TR 38.843 [3]:
  - ML model-based lifecycle management: Operates based on identified logical models, where a model may be associated with specific configurations/conditions associated with UE capability of an AI/ML-enabled Feature / Feature Group and additional conditions (e.g. scenarios, sites, and datasets) as determined/identified between UE-side and NW-side. The models are identified at the Network, and Network/UE may activate/deactivate/select/switch individual AI/ML models via model ID.
  - (ML-enabled) Functionality: An AI/ML-enabled Feature/Feature Group enabled by configuration(s), where configuration(s) is(are) supported based on conditions indicated by UE capability.
  - Functionality-based lifecycle management: Signalling procedure where the network indicates activation/deactivation/fallback/switching of AI/ML functionality via 3GPP signalling (e.g. RRC, MAC-CE, DCI); operates based on, at least, one configuration of an AI/ML-enabled Feature / Feature Group or specific configurations of an AI/ML-enabled Feature / Feature Group.
- SA WG5 TS 28.105 [9]:
  - ML model training management: enables requesting, consuming and controlling ML model training and re-training processes. It includes training performance management and policy setting for producer-initiated training.
  - ML model testing management: allows requesting and receiving ML model testing results, selecting performance metrics and triggering model re-training based on test performance.
  - ML model loading management: supports triggering, controlling and monitoring the ML model loading process as part of model deployment.
  - AI/ML inference management: allows managing inference functions and/or ML model(s), including activation/deactivation, output parameter configuration, performance monitoring and triggering model updates if necessary.

The following unified definition for 'ML model lifecycle management' is proposed:

ML model lifecycle management: The management capabilities allowing a consumer to manage the different phases of the ML model lifecycle as defined in clause 6.2.1.7.

The following definition for 'Functionality-based lifecycle management' is proposed for adoption by all 3GPP RAN Working Groups:

Functionality-based lifecycle management: Signalling procedure where the network indicates activation/deactivation/fallback/switching of AI/ML functionality via 3GPP signalling (e.g. RRC, MAC-CE, DCI); operates based on, at least, one configuration of an AI/ML-enabled Feature / Feature Group or specific configurations of an AI/ML-enabled Feature / Feature Group.

NOTE 1: In the context of RAN WG1, RAN WG2 and RAN WG4, functionality-based lifecycle management does not consider the training, testing and maintenance phases and considers them implementation-specific.
NOTE 2: The applicability of the Functionality-based lifecycle management definition in TSG SA WGs is optional.

Editor's note: The following analysis of the key differences between the ML model lifecycle (LC) and lifecycle management (LCM) is to be revised and possibly relocated to a different clause in this TR.

Key differences between the ML model lifecycle (LC) and ML model lifecycle management (LCM)

TS 28.105 [9] defines both the ML model lifecycle (LC) and ML model lifecycle management (LCM) within the scope of AI/ML management in 3GPP networks. The key differences between the two are:

The ML model lifecycle (LC) describes the essential steps (phases) an ML model undergoes, from training to inference. It consists of:
- ML model training (e.g. initial training and re-training)
- ML model testing
- ML emulation
- ML model deployment (including ML model loading)
- AI/ML inference

ML model lifecycle management (LCM) focuses on the management capabilities that control and optimize each phase of the ML model lifecycle. LCM enables functionalities such as:
- Training management (e.g. triggering re-training, setting policies)
- Testing management (e.g. evaluating performance, determining re-training needs)
- Deployment management (e.g. controlling ML model loading)
- Inference management (e.g. monitoring inference results, managing AI/ML inference functions)

LCM, as specified in TS 28.105 [9], encompasses the full lifecycle management of both ML models and AI/ML inference functions. This means that LCM does not only manage the ML model while inference remains a separate process; rather, it ensures a unified management approach that includes both:
- ML model lifecycle management, which covers the entire lifecycle of the ML model itself, including its creation, validation, deployment and use; and
- AI/ML inference function lifecycle management, which ensures that inference operations are properly activated, configured, monitored and optimized.

For example, LCM in TS 28.105 [9] enables not just the deployment of an ML model but also the continuous management of its inference functions, such as their activation, configuration and real-time monitoring. This differs from a narrower view of the lifecycle (LC), which only considers inference as a step where the ML model is applied, without addressing its ongoing management.

6.2.1.9 Analysis on usage of ML Model identifier in each Working Group

6.2.1.9.1 RAN WG1

As part of the RAN WG1-led study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface conducted across the RAN WGs and documented in TR 38.843 [3], RAN is studying two flavours of LCM: Functionality-based and ML model-based (see clause 4.2.1 of TR 38.843 [3] and clause 6.2.1.8).

The study on the usage of the ML Model identifier is still ongoing; some interim agreements within TR 38.843 [3] are:
- For Functionality-based LCM: a Model ID, if needed, can be used in a Functionality (defined in functionality-based LCM) for LCM operations.

NOTE 1: Functionality-based LCM is most suitable for UE-side ML models.

- For Model-ID-based LCM of UE-side models and/or the UE-part of two-sided models, model-ID-based LCM operates based on identified models, where a model may be associated with specific configurations/conditions associated with UE capability of an AI/ML-enabled Feature/FG and additional conditions (e.g. scenarios, sites, and datasets) as determined/identified between UE-side and NW-side.
- For two-sided ML models, in order to select a UE-side ML model (CSI generation model) that is compatible with the NW-side ML model (CSI reconstruction model), pairing information (model pairing) between the UE and gNB can be established based on ML Model identifier(s).

Analysis on usage of ML Model identifier:
- The study is ongoing and there are no concrete conclusions so far.
- For two-sided models, model pairing between the UE-side ML model and the NW-side ML model is based on ML Model identifiers.
- How an ML Model identifier is assigned to a trained ML model has not been discussed.
- How an ML Model identifier is related to different functions has not been discussed.

6.2.1.9.2 RAN WG3

As part of the RAN WG3 work in TS 38.300 [11], the following scenarios are supported:
- AI/ML Model Training is located in the OAM and AI/ML Model Inference is located in the NG-RAN node;
- AI/ML Model Training and AI/ML Model Inference are both located in the NG-RAN node.

Analysis on usage of ML Model identifier:
- For the case where the AI/ML model is trained at the OAM, the ML model ID is used as defined in TS 28.105 [9].

6.2.1.9.3 SA WG2

As part of the work defined in TS 23.288 [8]:
- An ML model is trained by the NWDAF MTLF.

Figure 6.2.1.9.3-1: ML model training/identification in AI/ML related work in SA WG2

- The training may be triggered by request(s) from one or more ML model consumer(s) (i.e. NWDAF AnLF). The NWDAF AnLF indicates the purpose of the trained ML model by including an Analytics identifier (and other parameters) as described in TS 23.288 [8].
- The NWDAF MTLF trains an ML model and assigns an ML Model identifier. The trained ML model and the assigned ML Model identifier are provisioned to the NWDAF AnLF.
- The AnLF associates the trained ML model and its corresponding ML Model identifier with a specific analytics request (identified by an Analytics ID).
- A trained ML model may be stored at a repository (i.e. ADRF) for use by other analytics consumers. The trained ML model is identified at the ADRF based on the ML Model identifier. No additional metadata is stored at the ADRF to identify the capabilities (e.g. supported Analytics) of the trained ML model.

Analysis on usage of ML Model identifier:
- The ML Model identifier identifies the provisioned ML model.
- Only the ML model consumer (AnLF) is aware of the capabilities of the trained ML model, by associating the trained ML model and its corresponding ML Model identifier with a specific analytics request, identified by an Analytics ID, during an ML model training request.
- When a trained ML model is stored in a repository, the ML Model identifier on its own cannot be used to identify the capabilities of the ML model.

6.2.1.9.4 SA WG5

As part of the work defined in TS 28.104 [71] and TS 28.105 [9]:
- An ML model is trained by the ML training MnS producer.

Figure 6.2.1.9.4-1: ML model training/identification in AI/ML related work in SA WG5

- The training may be triggered by request(s) from one or more ML training MnS consumer(s). The MnS consumer specifies in the ML training request the inference type, which indicates the function or purpose of the ML model, e.g. CoverageProblemAnalysis (see TS 28.104 [71]).
- The ML training MnS producer assigns an ML Model identifier to the trained ML model that is provisioned to the MnS consumer. The ML Model identifier identifies the provisioned ML model.
- A trained ML model may be stored at a repository for use by other MnS consumers. The trained ML model is identified at the repository based on the ML Model identifier.
No additional metadata is stored in the repository to identify the capabilities of the trained ML model.

Analysis on usage of ML Model identifier:
- The ML Model identifier identifies the provisioned ML model.
- Only the ML model consumer (MnS consumer) is aware of the capabilities of the trained ML model, by associating the trained ML model and its corresponding ML Model identifier with a specific inference type during the ML training request.
- When a trained ML model is stored in a repository, the ML Model identifier on its own cannot be used to identify the capabilities of the ML model.

Editor's note: SA WG5 has not defined a dedicated parameter in the ML training request procedure for providing the ML Model identifier.

6.2.1.9.5 SA WG6

As part of the work defined in TS 23.482 [34]:
- The ML model ID uniquely identifies the application-layer ML model.

Figure 6.2.1.9.5-1: ML model training/identification in AI/ML related work in SA WG6

- A VAL server may offload training of an ML model to an AIMLE server.
- An AIMLE server may train an ML model based on a request from consumer(s) (VAL server). Two options are supported:
  - A VAL server may request to offload training of an application-layer ML model to an AIMLE server, where the model training request includes the ML Model identifier.
  - A VAL server may request to train a model for an analytics supported by ADAES, where the model training request includes the Analytics identifier.
- The ML model information, including the ML Model identifier and model capabilities, can be stored in a model repository.

Analysis on usage of ML Model identifier:
- In scenarios where the AIMLE server trains an application-layer ML model, an ML Model identifier can implicitly identify the capabilities of the ML model.
- In scenarios where the AIMLE server trains an ML model for ADAES services, the ML Model identifier identifies the provisioned ML model. If the trained ML model is stored in an ML repository, the information stored in the repository may include the capabilities of the ML model identified by an ML Model identifier (e.g. supported Analytics ID).
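The common observation above, i.e. that a bare ML Model identifier cannot convey the capabilities of a stored model unless capability metadata is stored alongside it (as in the SA WG6 repository), can be illustrated with the hypothetical repository sketch below; all class and field names are invented for illustration.

class ModelRepository:
    """Hypothetical repository storing trained ML models keyed by ML Model identifier."""
    def __init__(self):
        self._models = {}        # model_id -> serialized model
        self._capabilities = {}  # model_id -> capability metadata (e.g. supported Analytics ID)

    def store(self, model_id, model, capabilities=None):
        self._models[model_id] = model
        if capabilities is not None:     # SA WG6-style: capabilities stored with the model
            self._capabilities[model_id] = set(capabilities)

    def find_by_analytics_id(self, analytics_id):
        # Only answerable when capability metadata exists; with a bare model ID
        # (the ADRF behaviour noted for SA WG2/SA WG5), this lookup cannot be served.
        return [m for m, caps in self._capabilities.items() if analytics_id in caps]

repo = ModelRepository()
repo.store("m-001", b"...", capabilities={"UE_MOBILITY"})
repo.store("m-002", b"...")              # bare identifier, capabilities unknown
print(repo.find_by_analytics_id("UE_MOBILITY"))   # ['m-001']; m-002 is not discoverable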
6.2.2 Analysis on Federated Learning
The terms 'Horizontal Federated Learning' and 'Vertical Federated Learning' have been defined by SA WG2, while RAN WG1 and SA WG5 define 'Federated Learning', as illustrated in Table 6.2.2-1.

Editor's note: Further analysis may be needed, e.g. to determine whether a unified definition can be derived.

Table 6.2.2-1: Definition of Federated Learning as defined across 3GPP WGs

- SA WG2 TR 23.700-84 [4]: Horizontal Federated Learning: A federated learning technique without exchanging/sharing local data set, wherein the local data set in different FL clients for local model training have the same feature space for different samples (e.g. UE IDs).
- SA WG2 TR 23.700-84 [4]: Vertical Federated Learning: A federated learning technique without exchanging/sharing local data set, wherein the local data set in different VFL Participant for local model training have different feature spaces for the same samples (e.g. UE IDs).
- RAN WG1 TR 38.843 [3]: Federated Learning: A machine learning technique that trains an AI/ML model across multiple decentralized edge nodes (e.g. UEs, gNBs) each performing local model training using local data samples. The technique requires multiple interactions of the model, but no exchange of local data samples.
- SA WG5 TR 28.858 [19]: Federated Learning: a distributed machine learning approach where the ML model is trained collaboratively by multiple ML training functions, including one acting as an FL server and multiple acting as FL clients, iteratively without exchanging data samples.

The definition of Federated Learning provided by RAN WG1 appears to apply only to Horizontal Federated Learning, as the phrase "each performing local model training using local data samples" implies that the data samples at individual nodes are distinct. The key difference between Horizontal Federated Learning and Vertical Federated Learning lies in the characteristics of the local datasets:
- Horizontal Federated Learning: local datasets have the same features but different samples.
- Vertical Federated Learning: local datasets have different features but share the same samples.

The definition of Federated Learning provided by SA WG5 highlights the collaborative training process among multiple FL participants, including an FL server and FL clients, without specifying the characteristics of the client datasets. This broader definition facilitates a more comprehensive understanding of both Horizontal Federated Learning (HFL) and Vertical Federated Learning (VFL), which are already defined in the specifications, and also offers greater flexibility across more general scenarios.

The terms "distributed learning" and "federated learning" are often used together as "distributed/federated learning" in SA WG1 TS 22.261 [6]. "Distributed learning" typically refers to a broader set of learning techniques that includes "federated learning". Although the two terms are related, they are not identical and should be used appropriately based on the context.

The following unified definition for 'Federated Learning' is proposed:

Federated Learning: A distributed machine learning approach where the ML model(s) are collaboratively trained by multiple participants, including one acting as an FL server and multiple acting as FL clients, iteratively without exchanging data samples.
The following unified definition for 'Horizontal Federated Learning' is proposed:

Horizontal Federated Learning: A federated learning technique without exchanging/sharing the local data sets, wherein the local data sets in different HFL clients for local model training have the same feature space for different samples.

The following unified definition for 'Vertical Federated Learning' is proposed:

Vertical Federated Learning: A federated learning technique without exchanging/sharing the local data sets and local ML models, wherein the local data sets in different VFL clients for local model training have different feature spaces for the same samples.
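For illustration, the following sketch shows a simple Horizontal Federated Learning scheme (federated averaging) consistent with the proposed definitions: each FL client trains locally on samples that share the same feature space, and only model parameters, never data samples, are exchanged with the FL server. The aggregation rule and data are hypothetical.

import numpy as np

def local_training(global_w, x, y, lr=0.1, steps=20):
    w = global_w.copy()                          # FL client starts from the global model
    for _ in range(steps):
        w -= lr * 2.0 * x.T @ (x @ w - y) / len(y)
    return w                                     # only parameters leave the client

def federated_round(global_w, clients):
    local_models = [local_training(global_w, x, y) for x, y in clients]
    return np.mean(local_models, axis=0)         # FL server aggregates (federated averaging)

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                               # HFL: same feature space, different samples
    x = rng.normal(size=(50, 2))
    clients.append((x, x @ true_w))

w = np.zeros(2)
for _ in range(10):                              # iterative training without exchanging data
    w = federated_round(w, clients)
print(w)                                         # approaches [2.0, -1.0]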
6.2.4 Analysis on Decision vs Prediction vs Output
RAN WG1 and RAN WG3 use only "prediction" in all corresponding ML-related TRs/TSs. SA WG2 uses "output" in all corresponding TRs/TSs, where an output may include both statistics and predictions. SA WG5 uses "decision" in all corresponding TRs/TSs, with a few occurrences of "prediction".

Editor's note: Further analysis may be needed, e.g. to determine whether a unified definition can be derived.

The term "output" is proposed as the unified term, since an output may include a decision, a prediction, a statistic or a recommendation.
6.2.5 Analysis on ML vs AI vs AI/ML
RAN WG1, RAN WG2, RAN WG3 and SA WG1 use only "AI/ML" in all corresponding ML-related TRs/TSs. SA WG2 uses a mix of "ML" and "AI/ML" in all corresponding ML-related TRs/TSs. SA WG3, SA WG4 and SA WG6 use a mix of "AI/ML", "AI" and "ML" in all corresponding ML-related TRs/TSs. SA WG5 uses "ML" for training/testing/emulation and "AI/ML" for inference in all corresponding ML-related TRs/TSs.

The term "AI/ML" is proposed as the unified term encompassing "AI/ML", "AI" and "ML" in all corresponding ML-related TRs/TSs.
6.2.6 Analysis on Transfer Learning
The term 'Transfer Learning' has been defined by SA WG5, as illustrated in Table 6.2.6-1. SA WG6 mentions the term 'Transfer Learning' in TS 23.482 [34] but does not define it. SA WG1 also uses the term 'Transfer Learning' in TS 22.261 [6] and TR 22.876 [21] without providing a definition.

Editor's note: Further analysis may be needed, e.g. to determine whether a unified definition can be derived.

Table 6.2.6-1: Definition of Transfer Learning as defined across 3GPP WGs

- SA WG5 TR 28.858 [19]: ML Knowledge-based Transfer Learning: a technique where the knowledge gained from training of one or more ML models is applied or adapted to improve or develop another ML model.

The following unified definition for 'Transfer Learning' is proposed (based on the SA WG5 definition):

Transfer Learning: A machine learning technique where the knowledge acquired from training one or more ML models is leveraged to enhance the performance or accelerate the training of another ML model.
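A minimal, hypothetical sketch of the proposed definition is given below: the parameters learned on a source task initialize the model for a related target task, which then needs fewer target-domain samples and training steps.

import numpy as np

def train_linear(x, y, w0, lr=0.1, steps=200):
    w = w0.copy()
    for _ in range(steps):
        w -= lr * 2.0 * x.T @ (x @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.0, 2.0, 3.0])

x_src = rng.normal(size=(200, 3))                # plentiful source-domain data
w_src = train_linear(x_src, x_src @ w_true, np.zeros(3))

x_tgt = rng.normal(size=(20, 3))                 # scarce target-domain data, related task
y_tgt = x_tgt @ (w_true + 0.1)
w_tgt = train_linear(x_tgt, y_tgt, w_src, steps=20)   # knowledge transferred via w_src
print(w_tgt)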
6.3 AI/ML related features
6.3.1 Analysis on ML model training services
The analysis focuses on the specifications from SA WG2, SA WG5 and SA WG6, considering these are the working groups defining services and operations related to ML model training in 3GPP Release 18. SA WG1, SA WG3, SA WG4, RAN WG1, RAN WG2 and RAN WG3 have not defined any services or operations related to ML model training. Table 6.3.1-1 provides a detailed overview of the specific services defined by each working group. The key findings from the analysis are as follows:
• SA WG2: Emphasizes a structured approach to ML model training services by defining a clear consumer-producer relationship. This enables specific entities to consume and produce these services, ensuring a well-defined and controlled environment for service utilization.
• SA WG5: Offers a more flexible approach by defining generic ML model training services. This allows for greater adaptability in implementation and usage, without the constraints of a specific consumer-producer relationship.
• SA WG6: Mirrors the approach of SA WG2, prioritizing a clear consumer-producer relationship for its defined services. This aligns with the structured approach advocated by SA WG2.

Editor's note: This analysis is based on Release 18 and does not consider Release 19 for SA WG2 and SA WG5. Further analysis needs to be conducted as Release 19 matures and normative work progresses for these working groups.

While SA WG2 and SA WG6 restrict the potential producers and consumers, SA WG5 emphasizes flexibility and adaptability. The choice of approach will depend on the specific needs and requirements of the individual service provider and consumer.

Editor's note: Further investigation is needed to understand the implications of these different approaches and their impact on the overall 3GPP ecosystem.

Table 6.3.1-1: ML model training related services and operations as specified across 3GPP WGs

SA WG2 TS 23.288 [8]:
- ML Model Provisioning Services:
  - Nnwdaf_MLModelProvision_Subscribe: The consumer subscribes to NWDAF ML model provision with specific parameters, to receive a notification when an ML model matching the subscription parameters becomes available. (Consumer: NWDAF AnLF, LMF; Producer: NWDAF MTLF)
  - Nnwdaf_MLModelProvision_Unsubscribe: The consumer unsubscribes from NWDAF ML model provision. (Consumer: NWDAF AnLF, LMF; Producer: NWDAF MTLF)
  - Nnwdaf_MLModelProvision_Notify: The NWDAF notifies the ML model information to the consumer which has subscribed to the NWDAF ML model provision service. (Consumer: NWDAF AnLF, LMF; Producer: NWDAF MTLF)
- ML Model Information Services:
  - Nnwdaf_MLModelInfo_Request: The consumer requests and gets NWDAF ML model information. (Consumer: NWDAF AnLF, LMF; Producer: NWDAF MTLF)
- ML Model Training Services:
  - Nnwdaf_MLModelTraining_Subscribe: The consumer subscribes to NWDAF ML model training with specific parameters. (Consumer: NWDAF MTLF; Producer: NWDAF MTLF)
  - Nnwdaf_MLModelTraining_Unsubscribe: The consumer terminates NWDAF ML model training. (Consumer: NWDAF MTLF; Producer: NWDAF MTLF)
  - Nnwdaf_MLModelTraining_Notify: The NWDAF notifies about the trained ML model to the consumer which has subscribed to the NWDAF ML model training service. (Consumer: NWDAF MTLF; Producer: NWDAF MTLF)
- ML Model Training Information Services:
  - Nnwdaf_MLModelTrainingInfo_Request: The consumer requests the information about NWDAF ML model training with specific parameters. (Consumer: NWDAF MTLF; Producer: NWDAF MTLF)

SA WG5 TS 28.105 [9]:
- ML Training Management Services:
  - MLTrainingRequest: Represents the ML model training request to train an ML model, triggered by the ML training MnS consumer towards the ML training MnS producer. (Consumer: any authorized network function, any authorized management function, operator; Producer: any function that is capable of training an ML model)
  - MLTrainingReport: Represents the ML model training report provided by the ML training MnS producer to the ML training MnS consumer who has requested the ML model training. (Consumer: any authorized network function, any authorized management function, operator; Producer: any function that is capable of training an ML model)
  - MLTrainingProcess: Represents the ML model training process. When an ML model training process starts, an instance of the MLTrainingProcess is created by the MnS producer and a notification is sent to the MnS consumer who has subscribed to it. (Consumer: any authorized network function, any authorized management function, operator; Producer: any function that is capable of training an ML model)

SA WG6 TS 23.482 [34]:
- ML Model Training APIs:
  - Aimles_MLModelTraining Request: The consumer sends an ML model training request to the producer, requesting assistance in its ML model training. The request consists of ML model information or ML model requirement information, etc. (Consumer: VAL server; Producer: AIMLE server)
  - Aimles_MLModelTraining Response: If the consumer is authorized, the producer identifies and selects the appropriate ML model for training based on the ML model requirement information. The producer returns a success response indicating the selected ML model for training; otherwise, a failure response indicating the reason for failure. (Consumer: VAL server; Producer: AIMLE server)
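The consumer-producer subscribe/notify pattern common to the SA WG2 and SA WG6 services in Table 6.3.1-1 can be sketched schematically as below. This is an illustrative in-process analogue only; the real service operations are defined as OpenAPI-based service-based interfaces, and the class, method and parameter names here are invented.

class TrainingServiceProducer:
    """Plays the role of, e.g., an NWDAF containing MTLF."""
    def __init__(self):
        self._subscriptions = {}
        self._next_id = 0

    def subscribe(self, notify_cb, **filters):       # cf. Nnwdaf_MLModelTraining_Subscribe
        self._next_id += 1
        self._subscriptions[self._next_id] = (notify_cb, filters)
        return self._next_id

    def unsubscribe(self, sub_id):                   # cf. Nnwdaf_MLModelTraining_Unsubscribe
        self._subscriptions.pop(sub_id, None)

    def on_model_trained(self, model_info):          # cf. Nnwdaf_MLModelTraining_Notify
        for notify_cb, filters in self._subscriptions.values():
            if all(model_info.get(k) == v for k, v in filters.items()):
                notify_cb(model_info)

producer = TrainingServiceProducer()
sub_id = producer.subscribe(lambda info: print("notified:", info), analytics_id="UE_MOBILITY")
producer.on_model_trained({"analytics_id": "UE_MOBILITY", "ml_model_id": "m-001"})
producer.unsubscribe(sub_id)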
6.3.2 Analysis on analytics related services
This clause focuses on the specifications from SA WG2, SA WG5 and SA WG6, considering these are the working groups defining services and operations related to ML model inference in 3GPP Release 18. SA WG1, SA WG3, SA WG4, RAN WG1, RAN WG2 and RAN WG3 have not defined any services or operations related to ML model inference. Table 6.3.2-1 provides a detailed overview of the specific services defined by each working group. The key findings from the analysis are as follows:
• SA WG2: Defines analytics services through a clear consumer-producer relationship. It defines several analytics types in TS 23.288 [8], each one supported by the NWDAF/RE-NWDAF AnLF and requested/subscribed by the NWDAF/RE-NWDAF AnLF consumer using the defined analytics services.
• SA WG5: Defines generic analytics services without a specific consumer-producer relationship in TS 28.105 [9] and TS 28.104 [71]. It defines several analytics types in TS 28.104 [71], each one supported by an MnS producer and requested by the MnS consumer using the defined analytics services.
• SA WG6: Defines individual analytics services for each analytics type. It defines several analytics types in TS 23.436 [33].

Editor's note: This analysis is based on Release 18 and does not consider Release 19 for SA WG2 and SA WG5. Further analysis needs to be conducted as Release 19 matures and normative work progresses for these working groups.

While SA WG2 and SA WG6 restrict the potential producers and consumers, SA WG5 emphasizes flexibility and adaptability. Moreover, SA WG2 and SA WG5 define several analytics types that can be supported by one entity and requested by another entity using the defined ML model inference services. In SA WG6, however, individual services are defined for each analytics type, lacking a generic analytics service definition as seen in SA WG2 and SA WG5.

Editor's note: Further investigation is required to determine if similar analytics (e.g. radio resource related analytics) are defined across SA WG2, SA WG5 and SA WG6.

Editor's note: It is FFS whether analysis on ML model inference is needed.

Table 6.3.2-1: Analytics related services and operations as specified across 3GPP WGs

SA WG2 TS 23.288 [8]:
- Network Data Analytics Subscription Services:
  - Nnwdaf_AnalyticsSubscription_Subscribe: The consumer subscribes for network data analytics and optionally its corresponding analytics accuracy information, with specific parameters. (Consumer: PCF, NSSF, AMF, SMF, NEF, AF, OAM, CEF, NWDAF, DCCF, LMF; Producer: NWDAF AnLF)
  - Nnwdaf_AnalyticsSubscription_Unsubscribe: The consumer unsubscribes from network data analytics. (Consumer: PCF, NSSF, AMF, SMF, NEF, AF, OAM, CEF, NWDAF, DCCF, LMF; Producer: NWDAF AnLF)
  - Nnwdaf_AnalyticsSubscription_Notify: The NWDAF notifies the analytics and optionally the Analytics Accuracy Information to the consumer which has subscribed to the NWDAF analytics subscription service. (Consumer: PCF, NSSF, AMF, SMF, NEF, AF, OAM, CEF, NWDAF, DCCF, LMF; Producer: NWDAF AnLF)
  - Nnwdaf_AnalyticsSubscription_Transfer: The consumer NWDAF requests an NWDAF to transfer analytics subscriptions from the consumer NWDAF. (Consumer: NWDAF AnLF; Producer: NWDAF AnLF)
- Network Data Analytics Information Services:
  - Nnwdaf_AnalyticsInfo_Request: The consumer requests NWDAF operator-specific analytics and optionally Analytics Accuracy Information, with specific parameters. (Consumer: PCF, NSSF, AMF, SMF, NEF, AF, OAM, CEF, NWDAF, DCCF, LMF; Producer: NWDAF AnLF)
  - Nnwdaf_AnalyticsInfo_ContextTransfer: The consumer NWDAF requests an NWDAF to transfer context information related to analytics subscriptions. (Consumer: NWDAF AnLF; Producer: NWDAF AnLF)
- Network Data Roaming Analytics Services:
  - Nnwdaf_RoamingAnalytics_Subscribe: The consumer subscribes for network data analytics related to roaming UEs. (Consumer: H-RE-NWDAF, V-RE-NWDAF; Producer: H-RE-NWDAF, V-RE-NWDAF)
  - Nnwdaf_RoamingAnalytics_Unsubscribe: The consumer unsubscribes from network data analytics related to roaming UEs. (Consumer: H-RE-NWDAF, V-RE-NWDAF; Producer: H-RE-NWDAF, V-RE-NWDAF)
  - Nnwdaf_RoamingAnalytics_Notify: The NWDAF notifies the analytics related to roaming UE(s) to the consumer which has subscribed to the NWDAF roaming analytics subscription service. (Consumer: H-RE-NWDAF, V-RE-NWDAF; Producer: H-RE-NWDAF, V-RE-NWDAF)
  - Nnwdaf_RoamingAnalytics_Request: The consumer requests NWDAF operator-specific analytics related to roaming UEs. (Consumer: H-RE-NWDAF, V-RE-NWDAF; Producer: H-RE-NWDAF, V-RE-NWDAF)

SA WG5 TS 28.105 [9] and TS 28.104 [71]:
- Management Data Analytics Services:
  - MDARequest: Represents the management data analytics output request, created by an MDA MnS consumer towards the MDA MnS producer. (Consumer: any authorized network function, any authorized management function, operator; Producer: any function that is capable of producing management data analytics)
  - MDAReport: Represents the management data analytics report containing the outputs for one or more MDA types, delivered to the MDA consumer who has requested the management data analytics. (Consumer: any authorized network function, any authorized management function, operator; Producer: any function that is capable of producing management data analytics)

SA WG6 TS 23.436 [33]:
- SS_ADAE_VAL_performance_analytics:
  - VAL_performance_analytics_subscribe: The consumer subscribes for VAL performance analytics. (Consumer: VAL server; Producer: ADAE server)
  - VAL_performance_analytics_notify: The consumer is notified by the ADAES on the VAL performance analytics. (Consumer: VAL server; Producer: ADAE server)
- SS_ADAE_slice_performance_analytics:
  - slice_performance_analytics_subscribe: The consumer subscribes for slice-specific performance analytics. (Consumer: VAL server; Producer: ADAE server)
  - slice_performance_analytics_notify: The consumer is notified by the ADAES on the slice-specific performance analytics. (Consumer: VAL server; Producer: ADAE server)
- SS_ADAE_UE-to-UE_performance_analytics:
  - UE-to-UE_performance_analytics_subscribe: The consumer subscribes for UE-to-UE performance analytics. (Consumer: VAL server; Producer: ADAE server)
  - UE-to-UE_performance_analytics_notify: The consumer is notified by the ADAES on the UE-to-UE performance analytics. (Consumer: VAL server; Producer: ADAE server)
- SS_ADAE_server-to-server_performance_analytics:
  - server-to-server_performance_analytics_subscribe: The consumer subscribes to the ADAE server for server-to-server performance analytics. (Consumer: VAL server, EES; Producer: ADAE server)
  - server-to-server_performance_analytics_notify: The consumer is notified by the ADAE server on the server-to-server performance analytics. (Consumer: VAL server, EES; Producer: ADAE server)
- SS_ADAE_location_accuracy_analytics:
  - Location_accuracy_analytics_subscribe: The consumer subscribes for location accuracy analytics. (Consumer: VAL server; Producer: ADAE server)
  - Location_accuracy_analytics_notify: The consumer is notified by the ADAES on the location accuracy analytics. (Consumer: VAL server; Producer: ADAE server)
- SS_ADAE_service_API_analytics:
  - Service_API_analytics_subscribe: The consumer subscribes for service API analytics. (Consumer: VAL server, Subscriber, API invoker; Producer: ADAE server)
  - Service_API_analytics_notify: The consumer is notified by the ADAES on the service API analytics. (Consumer: VAL server, Subscriber, API invoker; Producer: ADAE server)
- SS_ADAE_slice_usage_pattern_analytics:
  - slice_usage_pattern_analytics_subscribe: The consumer subscribes for slice usage pattern analytics. (Consumer: VAL server, SEAL server; Producer: ADAE server)
  - slice_usage_pattern_analytics_notify: The consumer is notified by the ADAES on the slice usage pattern analytics. (Consumer: VAL server, SEAL server; Producer: ADAE server)
- SS_ADAE_edge_analytics:
  - edge_analytics_subscribe: The consumer subscribes for edge load analytics. (Consumer: VAL server, ECS, EES; Producer: ADAE server)
  - edge_analytics_notify: The consumer is notified by the ADAES on the edge load analytics. (Consumer: VAL server, ECS, EES; Producer: ADAE server)
  - edge_analytics_get: The consumer requests edge analytics data. (Consumer: VAL server, ECS, EES; Producer: ADAE server)
- SS_ADAES_slice_usage_stats:
  - slice_usage_stats_get: The consumer requests and receives slice usage statistics from the ADAE server. (Consumer: VAL server; Producer: ADAE server)
- SS_ADAES_edge_preparation_analytics:
  - edge_preparation_analytics_subscribe: The consumer subscribes for edge computing preparation analytics. (Consumer: VAL server, ECS, EES; Producer: ADAE server)
  - edge_preparation_analytics_notify: The consumer is notified by the ADAE server on the edge computing preparation analytics. (Consumer: VAL server, ECS, EES; Producer: ADAE server)
  - edge_preparation_analytics_get: The consumer requests edge computing preparation analytics. (Consumer: VAL server, ECS, EES; Producer: ADAE server)
- SS_ADAE_collision_detection_analytics:
  - collision_detection_analytics_subscribe: The consumer subscribes for collision detection analytics. (Consumer: VAL server, LM server, UAE server, UAS application specific server; Producer: ADAE server)
  - collision_detection_analytics_notify: The consumer is notified by the ADAE server on collision detection analytics. (Consumer: VAL server, LM server, UAE server, UAS application specific server; Producer: ADAE server)
  - collision_detection_analytics_get: The consumer requests collision detection analytics. (Consumer: VAL server, LM server, UAE server, UAS application specific server; Producer: ADAE server)
- SS_ADAE_location-related_UE_group_analytics:
  - location-related_UE_group_analytics_subscribe: The consumer subscribes for location-related UE group analytics. (Consumer: LM server; Producer: ADAE server)
  - location-related_UE_group_analytics_notify: The consumer is notified by the ADAE server on location-related UE group analytics. (Consumer: LM server; Producer: ADAE server)
  - location-related_UE_group_analytics_get: The consumer requests location-related UE group analytics. (Consumer: LM server; Producer: ADAE server)
- SS_ADAE_AIML_member_capability_analytics:
  - AIML_member_capability_analytics_subscribe: The consumer subscribes for application layer AIML member capability analytics. (Consumer: VAL server, AIMLE server; Producer: ADAE server)
  - AIML_member_capability_analytics_notify: The consumer is notified by the ADAE server on application layer AIML member capability analytics. (Consumer: VAL server, AIMLE server; Producer: ADAE server)
  - AIML_member_capability_analytics_get: The consumer requests application layer AIML member capability analytics. (Consumer: VAL server, AIMLE server; Producer: ADAE server)
6.3.3 Analysis on ML performance evaluation and monitoring
The analysis focuses on the specifications from SA WG2, SA WG5 and SA WG6, considering these are the working groups defining services and operations related to ML performance evaluation in 3GPP Release 18 / Release 19. SA WG1, SA WG3, SA WG4, RAN WG1, RAN WG2 and RAN WG3 have not defined any services or operations related to ML performance evaluation. Table 6.3.3-1 provides a detailed overview of the specific services defined by each working group. The key findings from the analysis are as follows:
• SA WG2: Dedicated ML model monitoring services are defined with a specific consumer-producer relationship. Additionally, the analytics subscription and ML model training subscription services may indicate the performance requirements which the producer has to satisfy when providing the analytics or training the ML model. The focus in SA WG2 has been on the accuracy aspects of ML model performance.
• SA WG5: Defines the framework and mechanisms for performance assurance, including the performance metrics (ModelPerformance, clause 7.4.1 of TS 28.105 [9]) against which the performance of an ML model can be ascertained. The ML training, ML testing and ML inference services indicate the performance requirements which the producer has to satisfy for the consumer when training the ML model, testing the ML model or providing the inferences. The achieved performance of the ML model is communicated to the consumer via MLTrainingReport, MLTestingReport and AIMLInferenceReport.
• SA WG6: Dedicated ML model monitoring services are defined with a specific consumer-producer relationship. Additionally, the ML model training APIs indicate the performance requirements that the producer has to satisfy when providing the ML model. No specific ML model performance metrics are standardized in SA WG6.

Editor's note: Further investigation is needed to understand the implications of these different approaches and their impact on the overall 3GPP ecosystem.

Table 6.3.3-1: ML model performance monitoring services and operations as specified across 3GPP WGs

SA WG2 TS 23.288 [8]:
- ML Model Monitoring Services:
  - Nnwdaf_MLModelMonitor_Subscribe: The consumer subscribes to the NWDAF for the monitored ML model accuracy information and Analytics Feedback Information for the analytics generated by the NWDAF, with specific parameters. (Consumer: NWDAF; Producer: NWDAF)
  - Nnwdaf_MLModelMonitor_Unsubscribe: The consumer unsubscribes from the NWDAF for the monitored ML model accuracy information and Analytics Feedback Information for the analytics generated by the NWDAF. (Consumer: NWDAF; Producer: NWDAF)
  - Nnwdaf_MLModelMonitor_Notify: The NWDAF notifies the monitored ML model accuracy information and Analytics Feedback Information for the analytics generated by the NWDAF to the consumer who has subscribed to the specific NWDAF service. (Consumer: NWDAF; Producer: NWDAF)
  - Nnwdaf_MLModelMonitor_Register: The consumer registers the use and monitoring capability for an ML model at an NWDAF containing MTLF. (Consumer: NWDAF; Producer: NWDAF)
  - Nnwdaf_MLModelMonitor_Deregister: The consumer deregisters, from an NWDAF containing MTLF, a previous MLModelMonitor registration, e.g. when the consumer is no longer using or monitoring the accuracy of the analytics generated using the ML model. (Consumer: NWDAF; Producer: NWDAF)

SA WG6 TS 23.482 [34]:
- ML Model Performance Monitoring APIs:
  - MLModelPerfMonitor_Subscribe: The consumer subscribes for ML model performance monitoring. (Consumer: VAL server; Producer: AIMLE server)
  - MLModelPerfMonitor_Notify: The consumer is notified by the ML repository on the ML model performance monitoring. (Consumer: VAL server; Producer: AIMLE server)

Editor's note: SA WG5 services need to be added to the table.
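The accuracy-focused monitoring described for SA WG2 can be illustrated with the hypothetical check below, which compares earlier predictions with later-observed ground truth and flags when re-training may be needed; the tolerance and threshold values are invented for illustration.

def monitor_accuracy(predictions, observations, rel_tol=0.1, threshold=0.9):
    """Return the achieved accuracy and whether it falls below the required level."""
    within = sum(abs(p - o) <= rel_tol * max(abs(o), 1e-9)
                 for p, o in zip(predictions, observations))
    accuracy = within / len(predictions)
    retrain_needed = accuracy < threshold   # could trigger, e.g., an ML model re-training request
    return accuracy, retrain_needed

print(monitor_accuracy([10.2, 9.5, 12.0], [10.0, 10.0, 10.0]))   # (0.666..., True)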
6.3.4 Analysis on data collection and management for AI/ML
The analysis focuses on the specifications from SA WG2, SA WG6 and RAN WG3, considering these are the working groups defining services and operations related to data collection for AI/ML in 3GPP Release 18. SA WG1, SA WG3, SA WG4, SA WG5, RAN WG1 and RAN WG2 have not defined any services or operations related to data collection for AI/ML in Release 18. SA WG5 specifies data collection and performance measurement services that can be leveraged for AI/ML purposes (see 3GPP TS 28.622 [72]). Table 6.3.4-1 provides a detailed overview of the specific services defined by each working group. The key findings from the analysis are as follows: • SAWG2: Defines multiple network functions capable of producing data collection services and defines a function for data storage related services. Leverages event exposure framework and defines event exposure services for network functions that can be consumed by NWDAF (see 3GPP TS 23.288 [8] clause 6.2.2.1). Defines data collection for AI/ML services through a clear consumer-producer relationship. Editor’s note: Description for DCCF and data collection coordination is FFS. • SA WG6: Defines network functions similar to those in SA WG2. Data collection services defined for A-DCCF are generic while data collection and data storage services defined for A-ADRF are a mix of generic and use case specific. Editors’ note: More analysis is needed for Rel-19 A-DCCF part. • RAN WG3: Defines data collection messages exchanged between two gNBs over the Xn interface, in a P2P manner. It is to be noted that procedures used for AI/ML support in the NG-RAN shall be “data type agnostic”, which means that the intended use of the data (e.g., input, output, feedback) shall not be indicated. Editors’ note: This analysis is based on Release 18 and does not consider Release 19. Further analysis needs to be conducted as Release 19 matures and normative work progresses. While SA WG2 and SA WG6 both define data collection services, their approaches to data storage and retrieval are different. SA WG2 defines generic data storage and retrieval services that can be supported by an entity (ADRF) and requested by another entity but SA WG6 defines both generic and individual services (related to each analytics type) for storing and retrieving data. RAN WG3 operates independently and is unrelated to services defined in SA WGs and therefore can coexist. Table 6.3.4-1: Data Collection for AI/ML related services and operations as specified across 3GPP WGs Data Collection for AI/ML TSG (TS/TR) Service/API/Message Type Service/API/IOC/Message Name Description [Consumer, Producer] SA WG2 TS 23.288 [8] Event Exposure services Namf_EventExposure_Subscribe The NWDAF uses this service operation to subscribe to or modify event reporting for one UE, a group of UE(s) or any UE. Producer: AMF Namf_EventExposure_Unsubscribe The NWDAF uses this service operation to unsubscribe for a specific event for one UE, group of UE(s), any UE. Producer: AMF Namf_EventExposure_Notify Provides the previously subscribed event information to the NWDAF which has subscribed to that event before. Producer: AMF Nsmf_EventExposure_Subscribe This service operation is used by an NWDAF to subscribe or modify a subscription for event notifications on a specified PDU Session or for all PDU Sessions of one UE, group of UE(s) or any UE. Producer: SMF Nsmf_EventExposure_UnSubscribe This service operation is used by an NWDAF to unsubscribe event notifications. 
Producer: SMF Nsmf_EventExposure_Notify Report UE PDU Session related event(s) to the NWDAF which has subscribed to the event report service. Producer: SMF Npcf_EventExposure_Subscribe The NWDAF uses this service operation to subscribe to or modify event reporting for a group of UE(s) or any UE accessing a combination of (DNN, S-NSSAI). Producer: PCF Npcf_EventExposure_Unsubscribe The NWDAF uses this service operation to unsubscribe for a specific event for a group of UE(s) or any UE accessing a combination of (DNN, S-NSSAI). Producer: PCF Npcf_EventExposure_Notify This service operation reports the event to the NWDAF that has previously subscribed either using Npcf_EventExposure_Subscribe service operation or provided as part of the Data Set Application Data and Data Subset Service Parameters stored in UDR. Producer: PCF Nudm_EventExposure_Subscribe The NWDAF subscribes to receive an event. Producer: UDM Nudm_EventExposure_Unsubscribe The NWDAF deletes the subscription of an event if already defined in UDM. Producer: UDM Nudm_EventExposure_Notify UDM reports the event to the NWDAF that has previously subscribed. Producer: UDM Nudm_EventExposure_ModifySubscription The NWDAF requests to modify an existing subscription to event notifications. Producer: UDM Nnef_EventExposure_Subscribe The NWDAF subscribes to receive an event, or if the event is already defined in NEF, then the subscription is updated. Producer: NEF Nnef_EventExposure_Unsubscribe The NWDAF deletes an event if already defined in NEF. Producer: NEF Nnef_EventExposure_Notify NEF reports the event to the NWDAF that has previously subscribed. Producer: NEF Naf_EventExposure_Subscribe The NWDAF subscribes the event to collect AF data for UE(s), group of UEs, or any UE, or updates the subscription which is already defined in AF. Producer: AF Naf_EventExposure_Unsubscribe The NWDAF unsubscribes for a specific event. Producer: AF Naf_EventExposure_Notify The AF provides the previously subscribed event information to the NWDAF which has subscribed to that event before. Producer: AF Nnsacf_SliceEventExposure_Subscribe This service operation is used by the NWDAF to subscribe or modify a subscription with the NSACF for event based notifications of the current number of UEs registered for a network slice or the current number of PDU Sessions established on a network slice. Producer: NSACF Nnsacf_SliceEventExposure_Unsubscribe This service operation is used by the NWDAF to unsubscribe from the event notification. Producer: NSACF Nnsacf_SliceEventExposure_Notify This service operation is used by the NSACF to report the current number of UEs registered with a network slice or the current number of PDU Sessions established on a network slice in numbers or in percentage from the maximum allowed numbers, based on threshold or at expiry of periodic timer. Producer: NSACF Nupf_EventExposure_Subscribe This service operation reports the event and information to the NWDAF that has subscribed implicitly. Producer: UPF Nupf_EventExposure_Unsubscribe This service operation is used by an NWDAF to subscribe or modify a subscription to UPF event exposure notifications e.g. for the purpose of UPF data collection on a specified PDU Session or for all PDU Sessions of one UE or any UE. Producer: UPF Nupf_EventExposure_Notify The NF consumer uses this service operation to unsubscribe for a specific event. Consumer: Any NF Producer: UPF Nscp_EventExposure_Notify The NWDAF uses this service operation to unsubscribe for a specific event. 
Producer: SCP Nscp_EventExposure_Subscribe This service operation is used by an NWDAF to subscribe or modify a subscription to SCP event exposure notifications. Producer: SCP Nscp_EventExposure_Unsubscribe The NWDAF uses this service operation to unsubscribe from an existing subscription. Producer: SCP NWDAF Data Management services Nnwdaf_DataManagement_Subscribe The consumer subscribes to data exposed by an NWDAF. It can be historical data or runtime data. The subscription includes service operation specific parameters that identify the data to be provided and may include formatting and processing instructions that specify how the data is to be delivered to the consumer. Consumer: NWDAF, DCCF Producer: NWDAF Nnwdaf_DataManagement_Unsubscribe The consumer unsubscribes to the data exposed by an NWDAF. Consumer: NWDAF, DCCF Producer: NWDAF Nnwdaf_DataManagement_Notify The NWDAF notifies the consumer of the requested data or notifies of the availability of previously subscribed data when delivery is via an NWDAF. The NWDAF may also notify the consumer when Data or Analytics is to be deleted. Consumer: NWDAF, DCCF, MFAF, ADRF Producer: NWDAF Nnwdaf_DataManagement_Fetch The consumer retrieves from the NWDAF subscribed data, as indicated by Fetch Instructions from Nnwdaf_DataManagement_Notify. Consumer: NWDAF, DCCF, MFAF, ADRF Producer: NWDAF NWDAF Roaming Data services Nnwdaf_RoamingData_Subscribe The consumer subscribes for input data related to roaming UE(s) for NWDAF analytics. The subscription includes service operation specific parameters that identify the data to be provided and may include formatting and processing instructions that specify how the data is to be delivered to the consumer. Consumer: H-RE-NWDAF, V-RE-NWDAF Producer: H-RE-NWDAF, V-RE-NWDAF Nnwdaf_RoamingData_Unsubscribe The consumer unsubscribes to input data related to roaming UE(s). Consumer: H-RE-NWDAF, V-RE-NWDAF Producer: H-RE-NWDAF, V-RE-NWDAF Nnwdaf_RoamingData_Notify NWDAF notifies the consumer about input data related to roaming UE(s) that the consumer has subscribed to. Consumer: H-RE-NWDAF, V-RE-NWDAF Producer: H-RE-NWDAF, V-RE-NWDAF DCCF Data Management Services Ndccf_DataManagement_Subscribe The consumer subscribes to receive data or analytics from the DCCF. The subscription includes service operation specific parameters that identify the data or analytics to be provided and may include formatting and processing instructions that specify how the data is to be delivered to the consumer. The consumer may also request that data be stored in an ADRF or an NWDAF hosting ADRF functionality. Consumer: NWDAF, PCF, NSSF, AMF, SMF, NEF, AF, ADRF Producer: DCCF Ndccf_DataManagement_Unsubscribe The consumer unsubscribes to DCCF for data or analytics. Consumer: NWDAF, PCF, NSSF, AMF, SMF, NEF, AF, ADRF Producer: DCCF Ndccf_DataManagement_Notify DCCF notifies the consumer instance of the requested data or analytics according to the request or notifies of the availability of previously subscribed Data or Analytics when data delivery is via the DCCF. The DCCF may also notify the consumer instance when Data or Analytics is to be deleted. Consumer: NWDAF, PCF, NSSF, AMF, SMF, NEF, AF, ADRF Producer: DCCF Ndccf_DataManagement_Fetch The consumer retrieves from the DCCF, data or analytics as indicated by Ndccf_DataManagement_Notify Fetch Instructions. Consumer: NWDAF, PCF, NSSF, AMF, SMF, NEF, AF, ADRF Producer: DCCF Ndccf_DataManagement_Transfer The Source DCCF transfers UE data subscription context to the target DCCF. 
Consumer: DCCF Producer: DCCF MFAF Data Management Services Nmfaf_3daDataManagement_Configure The consumer configures or reconfigures the MFAF to map data or analytics received by the MFAF to out-bound notification endpoints and to format and process the out-bound data or analytics. Consumer: DCCF, NWDAF Producer: MFAF Nmfaf_3daDataManagement_Deconfigure The consumer configures the MFAF to stop mapping data or analytics received by the MFAF to one or more out-bound notification endpoints. Consumer: DCCF, NWDAF Producer: MFAF Nmfaf_3caDataManagement_Notify MFAF provides data or analytics or notification of availability of data or analytics to notification endpoints. Consumer: NWDAF, PCF, NSSF, AMF, SMF, NEF, AF, ADRF Producer: MFAF Nmfaf_3caDataManagement_Fetch The consumer retrieves from the MFAF, data or analytics as indicated by Nmfaf_3caDataManagement_Notify Fetch Instructions. Consumer: NWDAF, PCF, NSSF, AMF, SMF, NEF, AF, ADRF Producer: MFAF ADRF Data Management Services Nadrf_DataManagement_StorageRequest The consumer NF uses this service operation to request the ADRF to store data or analytics. Data or analytics are provided to the ADRF in the request message. Consumer: DCCF, NWDAF, MFAF Producer: ADRF Nadrf_DataManagement_StorageSubscriptionRequest The consumer (NWDAF or DCCF) uses this service operation to request the ADRF to initiate a subscription for data or analytics. Data or analytics provided in notifications as a result of the subsequent subscription by the ADRF are stored in the ADRF. Consumer: NWDAF, DCCF Producer: ADRF Nadrf_DataManagement_StorageSubscriptionRemoval The consumer NF uses this service operation to request that the ADRF no longer subscribes to data or analytics it is collecting and storing. Consumer: NWDAF, DCCF Producer: ADRF Nadrf_DataManagement_RetrievalRequest The consumer NF uses this service operation to retrieve stored data or analytics from the ADRF. The Nadrf_DataManagement_RetrievalRequest response either contains the data or analytics or provides instructions for fetching the data or analytics. Consumer: NWDAF, DCCF Producer: ADRF Nadrf_DataManagement_RetrievalSubscribe The consumer NF uses this service operation to retrieve stored data or analytics from the ADRF and to receive future notifications containing the corresponding data or analytics received by ADRF. Consumer: NWDAF, DCCF Producer: ADRF Nadrf_DataManagement_RetrievalUnsubscribe The consumer NF uses this service operation to request that the ADRF no longer sends data or analytics to a notification endpoint. Consumer: NWDAF, DCCF Producer: ADRF Nadrf_DataManagement_RetrievalNotify This service operation provides consumers with either data or analytics from an ADRF, or instructions to fetch the data or analytics from an ADRF. The notifications are provided to consumers that have subscribed using the Nadrf_DataManagement_RetrievalSubscribe service operation. Consumer: NWDAF, DCCF Producer: ADRF Nadrf_DataManagement_Delete This service operation instructs the ADRF to delete stored data. Consumer: NWDAF, DCCF Producer: ADRF SA WG6 TS 23.436 [36] A-ADRF Data Collection APIs SS_AADRF_Data_Collection Subscribe The consumer subscribes for offline data from A-ADRF. Consumer: ADAES Producer: A-ADRF SS_AADRF_Data_Collection Notify The consumer is receiving the offline data from A-ADRF as notification, based on subscription. Consumer: ADAES Producer: A-ADRF SS_ AADRF_Historical_ServiceAPI_Logs Get The consumer requests API logs from A-ADRF. 
Consumer: ADAES
Producer: A-ADRF

SS_AADRF_NetworkSlice_Data Get
The consumer requests network slice data from the A-ADRF.
Consumer: ADAES
Producer: A-ADRF

SS_AADRF_Location_Accuracy_Data Get
The consumer receives offline location analytics/data from the A-ADRF.
Consumer: ADAES
Producer: A-ADRF

SS_AADRF_EdgeData_Collection Subscribe
The consumer subscribes for offline edge data from the A-ADRF.
Consumer: ADAES
Producer: A-ADRF

SS_AADRF_EdgeData_Collection Notify
The consumer receives the offline edge data from the A-ADRF as a notification, based on the subscription.
Consumer: ADAES
Producer: A-ADRF

SS_AADRF_Edge_Preparation_Data Get
The consumer receives offline edge computing preparation data from the A-ADRF.
Consumer: ADAES
Producer: A-ADRF

SS_AADRF_Data_Storage Request Subscription
The consumer requests the A-ADRF to subscribe for data or analytics from the ADAE server or A-DCCF for storage. This service operation provides the parameters needed by the A-ADRF to initiate the subscription (to an ADAE server or A-DCCF).
Consumer: ADAE server, A-DCCF
Producer: A-ADRF

SS_AADRF_Data_Storage Store Data
The consumer requests the A-ADRF to store data or analytics from the ADAE server or A-DCCF. Data or analytics are provided to the A-ADRF in the request message.
Consumer: ADAE server
Producer: A-ADRF

SS_ADRF_ServerToServer_Analytics Get
The consumer receives offline server-to-server analytics/data from the A-ADRF.
Consumer: ADAES
Producer: A-ADRF

A-DCCF Data Collection APIs

SS_ADCCF_Data_Collection Subscribe
The consumer subscribes to receive data or analytics from the A-DCCF. The subscription includes service operation specific parameters that identify the data or analytics to be provided.
Consumer: ADAE server
Producer: A-DCCF

SS_ADCCF_Data_Collection Notify
The A-DCCF notifies the consumer of the requested data or analytics according to the request, or notifies of the availability of previously subscribed data or analytics when data delivery is via the A-DCCF. The A-DCCF may also notify the consumer when data or analytics is to be deleted.
Consumer: ADAE server
Producer: A-DCCF

SS_ADCCF_Data_Collection Get
The consumer retrieves data or analytics from the A-DCCF.
Consumer: ADAE server
Producer: A-DCCF
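Many of the data management services listed above (e.g. Nnwdaf_DataManagement, Ndccf_DataManagement, Nadrf_DataManagement, SS_ADCCF_Data_Collection) share a common subscribe/notify/fetch pattern: the consumer subscribes, the producer either delivers data in a notification or signals availability together with Fetch Instructions, and the consumer then fetches the data. The following is a minimal, non-normative sketch of that pattern; all endpoint paths and payload field names are invented placeholders, not the OpenAPI definitions of these services.

```python
# Illustrative sketch of the subscribe/notify/fetch pattern shared by the
# data management services above. All endpoint paths and payload fields
# below are hypothetical placeholders, not the normative 3GPP OpenAPI.
import requests  # assumed available; any HTTP client would do

PRODUCER = "http://producer.example/datamanagement/v1"  # hypothetical URI

def subscribe(notification_uri: str) -> str:
    """Create a subscription; returns the subscription identifier."""
    sub = {
        "dataSpecification": {"event": "NF_LOAD"},    # identifies the data
        "formattingInstructions": {"reportingMode": "PERIODIC"},
        "notificationUri": notification_uri,
    }
    resp = requests.post(f"{PRODUCER}/subscriptions", json=sub)
    resp.raise_for_status()
    return resp.json()["subscriptionId"]

def handle_notification(notification: dict) -> list:
    """Process a notification: data may be carried inline, or indicated
    by Fetch Instructions telling the consumer what to fetch and where."""
    if "data" in notification:                        # delivery via notify
        return notification["data"]
    fetched = []
    for instr in notification.get("fetchInstructions", []):
        resp = requests.post(instr["fetchUri"],       # delivery via fetch
                             json={"fetchIds": instr["fetchIds"]})
        resp.raise_for_status()
        fetched.extend(resp.json()["data"])
    return fetched
```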
RAN WG3 TS 38.423 [15]

Data Collection procedures

DATA COLLECTION REQUEST
NG-RAN node 1 initiates the procedure by sending the DATA COLLECTION REQUEST message to NG-RAN node 2 to start or to stop information reporting. Upon receipt, NG-RAN node 2:
- shall initiate the requested information reporting according to the parameters given in the request, in case the Registration Request for Data Collection IE is set to "start"; or
- shall stop all measurements and predictions and terminate the reporting, in case the Registration Request for Data Collection IE is set to "stop".
The Report Characteristics for Data Collection IE in the DATA COLLECTION REQUEST message indicates the type of objects on which NG-RAN node 2 performs measurements or predictions.

DATA COLLECTION RESPONSE
If NG-RAN node 2 is capable of providing all of the requested information, it shall initiate the information reporting as requested by NG-RAN node 1 and respond with the DATA COLLECTION RESPONSE message. If NG-RAN node 2 is capable of providing some but not all of the requested information, it shall initiate the information reporting for the admitted requested information and include the Node Measurement Initiation Result List IE or the Cell Measurement Initiation Result List IE, or both, in the DATA COLLECTION RESPONSE message.

DATA COLLECTION FAILURE
If none of the requested information can be initiated, NG-RAN node 2 shall send the DATA COLLECTION FAILURE message with an appropriate cause value.

DATA COLLECTION UPDATE
NG-RAN node 2 shall include in the DATA COLLECTION UPDATE message one or more of the following IEs, based on the request: SSB Area Radio Resource Status List, Predicted Radio Resource Status, Predicted Number of Active UEs, Predicted RRC Connections, Average UE Throughput DL, Average UE Throughput UL, Average Packet Delay, Average Packet Loss, Energy Cost and Measured UE Trajectory. These IEs are specified in Rel-18 to support three AI/ML for NG-RAN use cases, i.e. Energy Saving, Load Balancing and Mobility Optimization.

Editor's note: Some of the SA6 defined APIs in the table above are defined in Rel-19 but are not complete yet.
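To make the outcome logic of the Data Collection procedure concrete, the sketch below models how NG-RAN node 2 might dispatch on the Registration Request for Data Collection IE and on which requested information it can admit. The IE names follow the text above, but the data structures are simplified stand-ins, not the ASN.1 of TS 38.423 [15], and the set of supported reports is invented for the example.

```python
# Simplified model of the XnAP Data Collection procedure outcomes at
# NG-RAN node 2. The dataclass is an illustrative stand-in for the
# ASN.1-encoded messages of TS 38.423 [15], not a real implementation.
from dataclasses import dataclass, field

@dataclass
class DataCollectionRequest:
    registration_request: str                 # "start" or "stop"
    report_characteristics: set = field(default_factory=set)  # requested IEs

# Reports this node can provide (invented subset for illustration).
SUPPORTED = {"Predicted Radio Resource Status", "Energy Cost",
             "Average UE Throughput DL"}

def handle_request(req: DataCollectionRequest) -> dict:
    if req.registration_request == "stop":
        # Stop all measurements/predictions and terminate the reporting.
        return {"msg": "DATA COLLECTION RESPONSE"}
    admitted = req.report_characteristics & SUPPORTED
    rejected = req.report_characteristics - SUPPORTED
    if not admitted:
        # None of the requested information can be initiated.
        return {"msg": "DATA COLLECTION FAILURE",
                "cause": "measurement-not-supported"}
    # Partial admission is reported back in the initiation result list.
    response = {"msg": "DATA COLLECTION RESPONSE"}
    if rejected:
        response["measurementInitiationResult"] = {
            "admitted": sorted(admitted), "failed": sorted(rejected)}
    return response

print(handle_request(DataCollectionRequest(
    "start", {"Energy Cost", "Measured UE Trajectory"})))
```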
6.3.X Analysis on feature alignment #X <alignment title>

Editor's note: This clause describes AI/ML related terminology and feature misalignments, including cross-domain (UE, RAN, core network, media, OAM, and application enablement) aspects. Examples of areas to be investigated are LCM for AI/ML, data collection/storage/exposure, model training/delivery/(de-)activation/inference emulation, inference/storage/exposure, performance evaluation and accuracy monitoring.
7 Overall Evaluation
Editor's note: This clause will provide a general evaluation of potential terminology inconsistency #X and potential feature misalignment #X.
8 Conclusions
Editor's note: This clause will provide information on any potential outcome from clause 5, clause 6 and clause 7 to the respective WGs (according to their Terms of Reference (ToR)) to resolve any issues, with appropriate SA-level co-ordination as necessary.

The term "AI/ML" is to be used as a unified definition encompassing "AI/ML", "AI" and "ML" in all corresponding ML related TRs/TSs.

Annex A: ML Model

Editor's note: It is FFS which of these parts from SA WG5 (SP-241234) will be moved/incorporated into clause 6.

A.1 ML model life cycle management (LCM)

The Rel-18 specification addressed the AI/ML LCM management capabilities, including a wide range of use cases, corresponding requirements (stage 1) and solutions (stage 2 NRMs and stage 3 OpenAPIs) for the ML model. These cover the ML model training (which also includes validation), testing, AI/ML inference emulation, deployment and AI/ML inference steps of the lifecycle, as shown below, for managing the entire lifecycle of the ML model.

Start of quoted text (from TS 28.105 [9])

4a.0 ML model lifecycle

AI/ML techniques are widely used in 5GS (including 5GC, NG-RAN, and management system). The generic AI/ML operational workflow in the lifecycle of an ML model is depicted in Figure 4a.0-1.

Figure 4a.0-1: ML model lifecycle

The ML model lifecycle includes training, testing, emulation, deployment, and inference. These steps are briefly described below:
- ML model training: training, including initial training and re-training, of an ML model or a group of ML models. It also includes validation of the ML model to evaluate the performance when the ML model performs on the training data and validation data. If the validation result does not meet the expectation (e.g. the variance is not acceptable), the ML model needs to be re-trained.
- ML model testing: testing of a validated ML model to evaluate the performance of the trained ML model when it performs on testing data. If the testing result meets the expectations, the ML model may proceed to the next step. If the testing result does not meet the expectations, the ML model needs to be re-trained.
- AI/ML inference emulation: running an ML model for inference in an emulation environment. The purpose is to evaluate the inference performance of the ML model in the emulation environment prior to applying it to the target network or system.
NOTE: The AI/ML inference emulation is considered optional and can be skipped in the AI/ML operational workflow.
- ML model deployment: ML model deployment includes the ML model loading process (a.k.a. a sequence of atomic actions) to make a trained ML model available for use at the target AI/ML inference function. ML model deployment may not be needed in some cases, for example when the training function and inference function are co-located.
- AI/ML inference: performing inference using a trained ML model by the AI/ML inference function. The AI/ML inference may also trigger model re-training or update based on e.g. performance monitoring and evaluation.

End of quoted text
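As a minimal illustration of the workflow quoted above, the sketch below models the lifecycle steps and their re-training loops: a validation or testing failure sends the model back to training, the emulation step is optional, and inference-time performance monitoring may trigger an update. This is an informal reading of the quoted text, not an SA WG5-defined state machine; the scores and thresholds are invented for the example.

```python
# Minimal sketch of the ML model lifecycle quoted from TS 28.105 [9]:
# training (incl. validation), testing, optional inference emulation,
# deployment, inference. Scores and thresholds are invented placeholders.
import random

THRESHOLD = 0.9  # illustrative acceptance threshold for validation/testing

def train_and_validate() -> float:
    """One training round; returns a synthetic validation score."""
    return random.uniform(0.7, 1.0)

def test_model() -> float:
    """Testing on held-out data; returns a synthetic testing score."""
    return random.uniform(0.7, 1.0)

def run_lifecycle(use_emulation: bool = True) -> str:
    # Training (incl. validation) and testing: re-train until both pass.
    while train_and_validate() < THRESHOLD or test_model() < THRESHOLD:
        pass  # validation or testing failed -> re-train
    # Optional inference emulation prior to the target network/system.
    if use_emulation and random.uniform(0.8, 1.0) < THRESHOLD:
        return run_lifecycle(use_emulation)  # back to training
    # Deployment (loading) makes the model available at the inference
    # function; in a real system, inference performance monitoring may
    # then trigger re-training or an ML model update.
    return "deployed; inference running"

print(run_lifecycle())
```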
A.1.1 Observations and analyses: AI/ML LCM

- The AI/ML workflow defined in SA WG5 TS 28.105 [9] represents a general framework encapsulating the various life cycle management (LCM) operations for an ML model (i.e. model training, testing, emulation, deployment, and inference).
- The AI/ML LCM capabilities defined by SA WG5 for each of the operational steps are generic for the management of the 3GPP system, including the Management and Orchestration, CN and RAN domains.
- It is important to recognise that "domain-specific" ML model life cycle related tasks can be developed for the specific domains by the relevant 3GPP WGs; e.g. the RAN WGs can specify data collection within the RAN domain needed to train the UE-side, network-side, or two-sided UE/network ML models, and specific LCM operations for the UE-side model over the air interface.
- While the ML model and AI/ML inference function life cycle can be specified by the relevant 3GPP WG for the specific domain (i.e. RAN, CN or Management & Orchestration), the "management aspects" of the life cycle (i.e. life cycle management) remain primarily a "management task" that falls within the responsibility of SA WG5.
- While the ML models and the associated "Life Cycle" can be use-case and/or domain specific, the management of the Life Cycle (i.e. LCM) is a higher-layer task, typically a role of the OAM, encompassing e.g. the governance, automation, and operational practices applied to the entire AI/ML lifecycle. It is therefore imperative to distinguish between the Life Cycle and Life Cycle Management.
- Where feasible, the ML model LCM workflow and associated management capabilities specified by SA WG5 in TS 28.105 [9] could be considered by 3GPP for the currently ongoing and future relevant specification development. The 3GPP WG(s) should provide AI/ML LCM-related requirements, if any, to SA WG5 to avoid duplication and contention of effort.

NOTE: The SA WG5 Rel-18 specification in TS 28.105 [9] on ML model LCM and the associated management capabilities does not address UE-side and UE/Network-side model LCM.

A.2 ML model lifecycle management capabilities

Each step in the ML model lifecycle, i.e. ML model training, ML model testing, AI/ML inference emulation, ML model deployment and AI/ML inference, corresponds to a number of dedicated management capabilities. The specified capabilities are developed based on corresponding use cases and requirements. The management capabilities specified in SA WG5 TS 28.105 [9] are highlighted below:

Start of quoted text (from TS 28.105 [9])

6.1 ML model lifecycle management capabilities

Each operational step in the ML model lifecycle (see clause 4a.0.1) is supported by one or more AI/ML management capabilities as listed below.

Management capabilities for ML model training:
- ML model training management: allowing the MnS consumer to request the ML model training, consume and control the producer-initiated training, and manage the ML model training/re-training process. The training management capability may include training performance management and setting a policy for the producer-initiated ML model training.
- ML model training capability also includes validation to evaluate the performance of the ML model when performing on the validation data, and to identify the variance of the performance on the training and validation data. If the variance is not acceptable, the ML model would need to be re-trained before being made available for the next step in the operational workflow (e.g. ML model testing).

Management capabilities for ML testing:
- ML model testing management: allowing the MnS consumer to request the ML model testing, and to receive the testing results for a trained ML model. It may also include capabilities for selecting the specific performance metrics to be used or reported by the ML testing function. The MnS consumer may also be allowed to trigger ML model re-training based on the ML model testing performance results.
Management capabilities for AI/ML inference emulation:
- AI/ML inference emulation: a capability allowing an MnS consumer to request an ML inference emulation for a specific ML model or models (after the training, validation, and testing) to evaluate the inference performance in an emulation environment prior to applying it to the target network or system.

Management capabilities for ML model entity deployment:
- ML entity loading management: allowing the MnS consumer to trigger, control and/or monitor the ML model loading process.

Management capabilities for AI/ML inference:
- AI/ML inference management: allowing an MnS consumer to control the inference, i.e. activate/deactivate the inference function and/or ML model/models, and configure the allowed ranges of the inference output parameters. The capabilities also allow the MnS consumer to monitor and evaluate the inference performance and, when needed, trigger an update of an ML model or an AI/ML inference function.

The use cases and corresponding requirements for AI/ML management capabilities are specified in the following clauses.

End of quoted text

A.2.1 Observations and analyses: ML model lifecycle management capabilities

- ML model lifecycle management (LCM) capabilities are crucial for the effective deployment, operation, and optimization of AI/ML-enabled features and capabilities in both the NG-RAN and 5GC. These capabilities ensure that AI/ML models are not only developed and trained correctly but also tested, deployed, evaluated, and operated efficiently in the network environment.
- The management capabilities outlined in TS 28.105 [9] offer a structured approach to managing the various steps of the ML model lifecycle. This structured approach is applicable to AI/ML-enabled features and capabilities in the NG-RAN, 5GC, and management system, ensuring consistency and reliability in the deployment and operation of AI/ML technologies across different domains.
- The AI/ML LCM management capabilities are foundational for integrating advanced AI/ML features into 5G networks. By ensuring that ML models are effectively managed from the training step through to inference, these capabilities provide robust and reliable AI/ML-driven network enhancements.
- The AI/ML LCM workflow and associated management capabilities specified by SA WG5 in TS 28.105 [9] should be considered as the baseline for the AI/ML E2E framework for 3GPP. These capabilities provide a comprehensive foundation for ensuring that AI/ML models and related processes are consistently managed across all steps of their lifecycle, promoting seamless integration and operation for all domains within the 5G system.

A.3 AI/ML functionalities management scenarios

The Rel-18 specification TS 28.105 [9] also documents AI/ML functionalities management scenarios in relation to managed AI/ML features, which describe the possible locations of the ML training function and AI/ML inference function involving the various 3GPP system domains.

Start of quoted text (from TS 28.105 [9])

4a.2 AI/ML functionalities management scenarios (relation with managed AI/ML features)

The ML training function and/or AI/ML inference function can be located in the RAN domain MnS consumer (e.g. cross-domain management system), the domain-specific management system (i.e. a management function for RAN or CN), or a Network Function. For MDA, the ML training function can be located inside or outside the MDAF. The AI/ML inference function is in the MDAF.
For NWDAF, the ML training function can be located in the MTLF of the NWDAF or in the management system; the AI/ML inference function is in the AnLF. For RAN, the ML training function and AI/ML inference function can both be located in the gNB, or the ML training function can be located in the management system and the AI/ML inference function in the gNB. Therefore, several location scenarios may exist for the ML training function and AI/ML inference function.

Scenario 1: The ML training function and AI/ML inference function are both located in the 3GPP management system (e.g. RAN domain management function). For instance, for RAN domain-specific MDA, the ML training function and AI/ML inference function for MDA can be located in the RAN domain-specific MDAF, as depicted in Figure 4a.2-1.

Figure 4a.2-1: Management for RAN domain-specific MDAF

Similarly, for CN domain-specific MDA, the ML training function and AI/ML inference function can be located in the CN domain-specific MDAF.

Scenario 2: For RAN AI/ML capabilities, the ML training function is located in the 3GPP RAN domain-specific management function while the AI/ML inference function is located in the gNB. See Figure 4a.2-2.

Figure 4a.2-2: Management where the ML model training is located in the RAN domain management function and AI/ML inference is located in the gNB

Scenario 3: The ML training function and AI/ML inference function are both located in the gNB. See Figure 4a.2-3.

Figure 4a.2-3: Management where the ML model training and AI/ML inference are both located in the gNB

Scenario 4: For NWDAF, the ML training function and AI/ML inference function are both located in the NWDAF. See Figure 4a.2-4.

Figure 4a.2-4: Management where the ML model training and AI/ML inference are both located in the CN

End of quoted text

A.3.1 Observations and analyses: AI/ML functionalities management scenarios

- The functional arrangement scenarios defined by the SA WG5 specifications demonstrate that different parts of the ML model life cycle can be managed differently depending on the use case.
- The functional arrangements represent management deployment scenarios where, for example, ML model training related tasks can either be domain-specific or a cooperative multi-domain task involving, for example, the RAN and Management & Orchestration (OAM) domains, or the CN and OAM domains.
- The LCM workflow defined by SA WG5 serves as a management framework to accommodate and enable all possible functional arrangement scenarios within or across domains in the 3GPP system.
- The functional arrangement scenarios, coupled with the ML model LCM as defined by SA WG5 in TS 28.105 [9], can be considered in the ongoing and any future relevant 3GPP specification development.

Annex B: Change history

Change history
Date | Meeting | TDoc | CR | Rev | Cat | Subject/Comment | New version
2024-09 | TSG SA#105 | SP-241367 | - | - | - | Proposed skeleton agreed for FS_AIML_CAL at TSG SA#105 | 0.0.0
2024-09 | TSG SA#105 | - | - | - | - | Implementing the following approved papers: SP-241407, SP-241395, SP-241408, SP-241397, SP-241409, SP-241410. | 0.1.0
2024-12 | TSG SA#106 | - | - | - | - | Implementing the following approved papers: SP-241834, SP-241982, SP-241839, SP-241983, SP-241965, SP-241984, SP-241985, SP-241986, SP-241987, SP-241988. | 0.2.0
2025-03 | TSG SA#107 | - | - | - | - | Implementing the following approved papers: SP-250404, SP-250405, SP-250406, SP-250299, SP-250346, SP-250407, SP-250349, SP-250408, SP-250409, SP-250410, SP-250411, SP-250412. | 0.3.0
1 Scope
The present document investigates use cases and potential new requirements related to 3GPP system enhanced support of specific 5G network sharing deployment scenarios, in particular where there is no direct interconnection between the shared NG-RAN and the participating operators' core networks. It includes the following aspects:
- Mobility and service continuity, e.g. when moving from a non-shared 4G/5G network to a shared 5G network and vice versa, with focus on CN aspects.
- Potential security requirements.
- Charging requirements (e.g. based on traffic differentiation in specific network sharing geographical areas).
- User/service experience (e.g. maintaining the communication latency for voice and SMS) when accessing the shared network, including scenarios of home-routed traffic or local breakout.
- Other aspects, e.g. regulatory requirements, emergency services, PWS support.
2 References
The following documents contain provisions which, through reference in this text, constitute provisions of the present document.
- References are either specific (identified by date of publication, edition number, version number, etc.) or non-specific.
- For a specific reference, subsequent revisions do not apply.
- For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document.

[1] 3GPP TR 21.905: "Vocabulary for 3GPP Specifications".
[2] 3GPP TS 22.101: "Service principles".
[3] 3GPP TS 22.261: "Service requirements for the 5G system".
[4] 3GPP TS 29.513: "5G System; Policy and Charging Control signalling flows and QoS parameter mapping; Stage 3".
[5] 3GPP TS 23.122: "Non-Access-Stratum (NAS) functions related to Mobile Station (MS) in idle mode".
[6] 3GPP TS 22.011: "Service accessibility".
[7] 3GPP TS 23.502: "Procedures for the 5G System".
[8] 3GPP TS 22.071: "Location Services (LCS); Service description; Stage 1".
[9] "Highway makes desert travel easy", http://en.people.cn/n3/2022/0706/c90000-10119553.html
3 Definitions of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the terms given in 3GPP TR 21.905 [1] and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in 3GPP TR 21.905 [1].

Indirect Network Sharing: describes the communication between the shared access NG-RAN and the Participating NG-RAN Operator's core network being routed through the Hosting NG-RAN Operator's core network.

Hosted Service: a service containing the operator's own application(s) and/or trusted third-party application(s) in the Service Hosting Environment, which can be accessed by the user.

Service Hosting Environment: the environment, located inside the 5G network and fully controlled by the operator, where Hosted Services are offered.

Hosting NG-RAN Operator: the operator that has operational control of a shared NG-RAN.
NOTE 1: A Hosting NG-RAN Operator can also be a Hosting RAN Operator. See 3GPP TS 22.101 [2].

Participating NG-RAN Operator: an authorized operator that is sharing NG-RAN resources provided by a Hosting NG-RAN Operator.
NOTE 2: A Participating NG-RAN Operator can also be a participating operator. See 3GPP TS 22.101 [2].

Shared NG-RAN: an NG-RAN that is shared among a number of operators.
3.2 Symbols
For the purposes of the present document, the following symbols apply: <symbol> <Explanation>
3.3 Abbreviations
For the purposes of the present document, the abbreviations given in 3GPP TR 21.905 [1] and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in 3GPP TR 21.905 [1].

HTA High Traffic Areas
LTA Low Traffic Areas
MOCN Multi-Operator Core Network
NG-RAN Next Generation Radio Access Network
SST Slice/Service Type
4 Overview
The present document introduces a newly supported network sharing scenario, where an NG-RAN is shared among multiple operators without necessarily assuming a direct connection between the shared radio access network and the participating operators' core networks. Use cases covering service continuity and QoS, access control and mobility, international roamers in a shared network, hosted services, and long-distance road transport are analysed. This study provides alternatives for existing operators who intend to deploy an NG radio access network to complement the existing market, taking into account operators' business considerations, such as network planning, operation and other factors.
5 Use Cases
5.1 Use Case on Network Sharing without Direct Connections between the Shared Access and the Core Networks of the Participating Operators
5.1.1 Description
As stated in TS 22.261 [3], the increased density of access nodes needed to meet future performance objectives poses considerable challenges in deployment and in acquiring spectrum and antenna locations. RAN sharing is seen as a technical solution to these issues. Sharing access networks and network infrastructure has become an increasingly important part of 3GPP systems.

When two or more operators have respectively deployed or plan to deploy 5G access networks and core networks, a MOCN configuration can be considered for network sharing between these operators, i.e. a Multi-Operator Core Network (MOCN) in which multiple CN nodes, operated by different operators, are connected to the same radio access network.

One of the challenges for the partner network operators is the maintenance burden generated by the interconnection (e.g. the number of network interfaces) between the shared RAN and two or more core networks, especially for a large number of shared base stations. For these reasons, it is suggested to investigate other types of network sharing scenarios, where a 5G RAN is shared among multiple operators without necessarily assuming a direct connection between the shared access network and the core networks of the participating operators.
5.1.2 Pre-conditions
Two (or more) operators provide coverage with their respective radio access networks in different parts of a country but together cover the entire country. There is an agreement between all the operators to work together and to build a shared network, utilizing the different operators' allocated spectrum appropriately in different parts of the coverage area (for example, Low Traffic Areas (LTA) and High Traffic Areas (HTA)).

The hosting operator 1, as illustrated below, can share its NG-RAN with the participating operators with or without direct connections between the shared access network and the core networks of the participating operators. The following preconditions apply:
1. OP1 owns the NG-RAN to be shared with three other operators: OP2, OP3, and OP4.
2. The NG-RAN is shared under certain conditions, e.g. within a specific 5G frequency band or within a specific area.
3. The NG-RAN does not have direct connections between the shared access network and the core networks of the participating operators OP2 and OP3.
4. OP4 has a MOCN arrangement with OP1.
5. In this example, UE 1 is subscribed to OP1, UE 2 is subscribed to OP2, UE 3 is subscribed to OP3, and UE 4 is subscribed to OP4.

Both options of direct and indirect connections between the shared access network and the core networks of the participating operators are illustrated in Figure 5.1.2-1 below. Figure 5.1.2-2 shows the option of Indirect Network Sharing involving the core network of the hosting operator, as an indirect connection between the shared access network and the core networks of the participating operators.

Figure 5.1.2-1: Different options of direct and indirect connections between the shared access network and the core networks of the participating operators

Figure 5.1.2-2: Indirect Network Sharing scenario involving the core network of the hosting operator between the shared access network and the core networks of the participating operators
5.1.3 Service Flows
1) UE 1 can successfully attach to the NG-RAN, and the network operator name displayed is the name of OP1.
2) UE 2 can successfully attach to the NG-RAN, and the network operator name displayed is the name of OP2.
3) UE 3 can successfully attach to the NG-RAN, and the network operator name displayed is the name of OP3.
4) UE 4 can successfully attach to the NG-RAN, and the network operator name displayed is the name of OP4.
5) The service provider of UE 1 is OP1.
6) The service provider of UE 2 is OP2.
7) The service provider of UE 3 is OP3.
8) The service provider of UE 4 is OP4.

For UEs accessing the Shared NG-RAN, the network of the hosting operator needs to know which participating operator a UE is registered to and what type of network sharing (e.g. MOCN or otherwise) is in place for that participating operator. The inter-connection between the participating operators' core networks and the Shared NG-RAN of the Hosting Operator can be supported via an element of the Hosting Operator.
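As an informal illustration of the per-operator knowledge described above, the sketch below keeps a lookup from a participating operator's PLMN identity to the sharing arrangement in place. The PLMN codes and table contents are invented for the example; 3GPP does not define such a structure.

```python
# Illustrative lookup used by a hosting operator's network to determine,
# for a UE's registered PLMN, which sharing arrangement applies.
# PLMN identifiers below are invented examples.
from enum import Enum

class SharingType(Enum):
    MOCN = "direct connection (Multi-Operator Core Network)"
    INDIRECT = "indirect connection via hosting operator's core network"
    NONE = "not a sharing partner"

# Hypothetical agreement table of the hosting operator OP1.
SHARING_AGREEMENTS = {
    "001-04": SharingType.MOCN,      # OP4: MOCN arrangement with OP1
    "001-02": SharingType.INDIRECT,  # OP2: no direct CN connection
    "001-03": SharingType.INDIRECT,  # OP3: no direct CN connection
}

def sharing_for(plmn_id: str) -> SharingType:
    """Return the sharing arrangement for the UE's registered PLMN."""
    return SHARING_AGREEMENTS.get(plmn_id, SharingType.NONE)

print(sharing_for("001-02"))  # SharingType.INDIRECT
```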
5.1.4 Post-conditions
The hosting network will be able to provide access to all participating operators' users.
5.1.5 Existing Features partly or fully covering Use Case Functionality
Network sharing has been studied in previous releases, and related normative stage 1 requirements are introduced in 3GPP TS 22.101 [2] and TS 22.261 [3].

3GPP TS 22.101 [2] introduces general requirements for network sharing, stated as follows:

Network sharing shall be transparent to the user.
The specifications shall support both the sharing of: (i) radio access network only; (ii) radio access network and core network entities connected to the radio access network.
NOTE: In a normal deployment scenario only one or the other option will be implemented.

The provisioning of services and service capabilities is described in 3GPP TS 22.101 [2]:

The provision of services and service capabilities that is possible to offer in a network shall not be restricted by the existence of network sharing.
It shall be possible for a core network operator to differentiate its service offering from other core network operators within the shared network.
It shall be possible to control the access to service capabilities offered by a shared network according to the core network operator the user is subscribed to.

The selection of a 3GPP access network is described in 3GPP TS 22.261 [3], clause 6.19:

The UE uses the list of PLMN/RAT combinations for PLMN selection, if available, typically during roaming situations. In non-roaming situations, the UE and subscription combination typically matches the HPLMN/EHPLMN capabilities and policies, from a SST (slice/service type) perspective. That is, a 5G UE accessing its HPLMN/EHPLMN should be able to access SSTs according to UE capabilities and the related subscription.
[…]
The 5G system shall support selection among any available PLMN/RAT combinations, identified through their respective PLMN identifier and Radio Access Technology identifier, in a prioritised order. The priority order may, subject to operator policies, be provisioned in Operator Controlled PLMN Selector lists with associated RAT identifiers, stored in the 5G UE.
The 5G system shall support, subject to operator policies, a User Controlled PLMN Selector list stored in the 5G UE, allowing the UE user to specify preferred PLMNs with associated RAT identifiers in priority order.
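To make the prioritised PLMN/RAT selection concrete, here is a minimal sketch of choosing the highest-priority available combination from a selector list. The list contents and the availability set are invented for illustration; the normative selection procedures are specified in TS 23.122 [5].

```python
# Minimal sketch of prioritised PLMN/RAT selection from a selector list.
# Entries are (PLMN identifier, RAT identifier) in priority order; the
# values below are invented examples, not a provisioned operator list.
from typing import Optional, Tuple

OPERATOR_CONTROLLED_SELECTOR = [
    ("001-02", "NR"),      # highest priority
    ("001-02", "E-UTRA"),
    ("001-01", "NR"),      # e.g. a sharing partner's PLMN
]

def select_plmn(available: set) -> Optional[Tuple[str, str]]:
    """Return the highest-priority PLMN/RAT combination that is available."""
    for combination in OPERATOR_CONTROLLED_SELECTOR:
        if combination in available:
            return combination
    return None  # no listed combination available; other procedures apply

# Example: only the partner's shared NR coverage is detected.
print(select_plmn({("001-01", "NR")}))  # ('001-01', 'NR')
```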
5.1.6 Potential New Requirements needed to support the Use Case
[PR 5.1.6-001] The 5G system shall be able to support network sharing with indirect connection between the Shared NG-RAN and one or more Participating NG-RAN Operators' core networks.

[PR 5.1.6-002] The 5G system shall be able to support means for Participating Operators to provide their operator's name to a registered UE, for display to the user.

5.2 Use Case on Service Continuity and QoS
5.2.1 Description
In the Indirect Network Sharing scenario, the requirements on the services provided by the participating operator and the hosting operator, for a UE moving between their service areas, need to be clearly defined. These service considerations are based on the user subscriptions and the charging requirements of the participating operator. The service principle is not expected to be significantly different from MOCN access sharing. The business aspects here include not only the operator's name displayed in the UE UI, but also the service logic provided by both the participating operator and the hosting operator for services such as voice, SMS and data communications for the UE.
5.2.2 Pre-conditions
Assumptions:
1. OP 1 is a Hosting NG-RAN Operator.
2. The core network of OP 2 does not have a direct connection with OP 1's Shared NG-RAN.
3. There is a connection between OP 1's CN and OP 2's CN.
4. UE 1 belongs to OP 1. UE 2 and UE N belong to OP 2.
5.2.3 Service Flows
Figure 5.2.3-1: Basic service scenario without direct connections between the Shared NG-RAN and the core networks of the participating operators

1. UE 2 successfully registers to OP 2's PLMN via the Shared NG-RAN.
2. UE 1 successfully registers to OP 1's PLMN.
3. UE N successfully registers to OP 2's PLMN via OP 2's wireless access network.
4. UE 2 initiates a service, for example a 5G voice call to user N, and the call succeeds under the shared network.
- OP 2 and OP 1 do not need to expose their IMS networks to each other; e.g. UE 2 may access services provided by its home environment in the same way even if UE 2 moves to the coverage of the Shared NG-RAN of OP 1, and it is not necessary to assume that the IMS network of OP 1 is involved in this scenario.
- When UE 2 moves from the shared network to OP 2's 4G area, the call continues.
- When UE 2 moves from the shared network to OP 2's 5G area, the call does not drop.
5. UE 2 may also use other services under the shared network, just as it usually does in OP 2's network, even though OP 1 and OP 2 have different network services and service capabilities.
6. The network sharing partners may define a specific cost/revenue allocation model for each network sharing method. This requires knowing how many users access the network via a certain sharing method and for how long. It should therefore be possible to collect charging information associated with the sharing method that the UE uses to access the network; a minimal sketch of such a record is given below.
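As an informal illustration of the charging-information collection in step 6, the sketch below tags each charging record with the sharing method through which the UE accessed the network, so that usage can later be aggregated per partner and per sharing method. The record fields are invented placeholders, not the converged charging data defined by 3GPP.

```python
# Illustrative charging record tagged with the network sharing method used
# by the UE, enabling per-partner and per-sharing-method aggregation.
# Field names are invented placeholders, not 3GPP converged charging data.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ChargingRecord:
    supi: str            # subscriber identity (placeholder format)
    serving_plmn: str    # participating operator's PLMN
    sharing_method: str  # e.g. "MOCN", "INDIRECT", "NON-SHARED"
    duration_s: int      # time spent on this access
    volume_bytes: int    # data volume carried

def usage_per_sharing_method(records: list) -> dict:
    """Aggregate duration and volume per (PLMN, sharing method)."""
    totals: dict = defaultdict(lambda: {"duration_s": 0, "volume_bytes": 0})
    for r in records:
        key = (r.serving_plmn, r.sharing_method)
        totals[key]["duration_s"] += r.duration_s
        totals[key]["volume_bytes"] += r.volume_bytes
    return dict(totals)

records = [
    ChargingRecord("imsi-001", "001-02", "INDIRECT", 600, 5_000_000),
    ChargingRecord("imsi-002", "001-02", "INDIRECT", 300, 1_000_000),
    ChargingRecord("imsi-003", "001-04", "MOCN", 120, 700_000),
]
print(usage_per_sharing_method(records))
```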
5.2.4 Post-conditions
The service of UE 2 continues successfully when UE 2 moves between the shared access network and the participating operator's own network.
5.2.5 Existing Features partly or fully covering Use Case Functionality
SA1 has performed various studies on service aspects in previous releases. For the definition of service continuity and the description of user experience, see 3GPP TS 22.261 [3].

Requirements on service capabilities are described in 3GPP TS 22.101 [2] as follows:

The provision of services and service capabilities that is possible to offer in a network shall not be restricted by the existence of network sharing.
It shall be possible for a core network operator to differentiate its service offering from other core network operators within the shared network.
It shall be possible to control the access to service capabilities offered by a shared network according to the core network operator the user is subscribed to.
The 3GPP System shall support service continuity for UEs that are moving between different Shared RANs or between a Shared RAN and a non-shared RAN. Subscribers shall have a consistent user experience regardless of the domain used, subject to the constraints of the UE and access network.

Some 5G specific requirements are described in TS 22.261 [3] as follows:

The 5G system shall support mobility procedures between a 5G core network and an EPC with minimum impact to the user experience (e.g. QoS, QoE).
The 5G system shall support inter- and/or intra-access technology mobility procedures within 5GS with minimum impact to the user experience (e.g. QoS, QoE).

In addition to the charging requirements of the 5G system introduced in clause 9 of TS 22.261 [3], the following requirement is defined in TS 22.101 [2]:

Charging and accounting solutions shall support the shared network architecture so that end users can be appropriately charged for their usage of the shared network, and network sharing partners can be allocated their share of the costs of the shared network resources.

Further charging requirements are specified in TS 29.513 [4].
5.2.6 Potential New Requirements needed to support the Use Case
[PR 5.2.6-001] The 5G core network shall be able to support collection of charging information associated with a UE accessing a Shared NG-RAN using Indirect Network Sharing.

[PR 5.2.6-002] In case of Indirect Network Sharing and subject to Hosting and Participating Operators' policies, the 3GPP system shall support service continuity for UEs that are moving between different Shared NG-RANs or between a Shared NG-RAN and a non-shared RAN (managed by Hosting and Participating Operators).

[PR 5.2.6-003] In case of Indirect Network Sharing and subject to Hosting and Participating Operators' policies, the 3GPP system shall be able to minimize the impact to the user experience (e.g. QoS, QoE) of UEs in an active communication moving between different Shared NG-RANs or between a Shared NG-RAN and a non-shared RAN (managed by Hosting and Participating Operators).

[PR 5.2.6-004] In case of Indirect Network Sharing, the 5G system shall support a mechanism for a UE to access the subscribed PLMN services when entering a Shared NG-RAN.
5.3 Use Case on Network Access Control and Mobility between Sharing Parties
5.3.1 Description
It is worth mentioning that 5G networks have been designed to support shared facilities from the beginning. This means that, in the case where a 4G network has both non-shared and shared E-UTRAN at the same time, there could be a number of different types of coverage in the same region:
- non-shared E-UTRA coverage,
- shared E-UTRA coverage,
- non-shared 5G NR coverage,
- shared 5G NR coverage.

This study introduces a new sharing method in which there is no direct connection between the Shared NG-RAN and the core networks of the participating operators. Therefore, network access control and mobility management need to be considered when introducing the potential requirements for this new sharing method alongside existing networks.
5.3.2 Pre-conditions
1. It is assumed that OP1, OP2 and OP3 have deployed 4G and 5G networks.
NOTE: The home 4G and 5G networks of OP1, OP2 and OP3 may be deployed with non-shared and shared wireless access technology.
- Both OP1 and OP3 are Hosting NG-RAN Operators, which share their NG-RAN with the participating operator OP2.
- UEs subscribe to OP2's home PLMN.
- UEs may register successfully to OP1's and OP3's shared 5G networks.
2. Both operators (i.e. OP1 and OP3) have agreed to share their networks via an indirect connection between the shared radio access network and OP2's core network.
3. Potential scenario 1: The coverage of OP1's and OP3's shared 5G networks may overlap with OP2's 4G network; it may also overlap with OP2's 5G network (i.e. at OP1 and OP2's border, and at OP3 and OP2's border).
4. Potential scenario 2: The coverage of OP1's shared 5G network may overlap with OP3's shared 5G network (i.e. at OP1's and OP3's border). Some parts of the shared areas do not overlap with OP2's network. There are non-shared areas in OP1's and OP3's networks as well.
5.3.3 Service Flows
Mobility and access control scenarios in the shared network are illustrated in the following:
- Scenario 1 (Figure 5.3.3-1a): a UE with a subscription from OP2 moves between OP2's own 4G access networks and either OP1's or OP3's shared 5G networks.
- Scenario 2 (Figure 5.3.3-1b): a UE with a subscription from OP2 moves between OP2's own 5G access networks and either OP1's or OP3's shared 5G networks.
- Scenario 3 (Figure 5.3.3-2): a UE with a subscription from OP2 moves between the coverage of OP1's and OP3's shared 5G access networks.

Figure 5.3.3-1a: Scenario 1: a UE with a subscription from OP2 moves between OP2's own 4G access networks and either OP1's or OP3's shared 5G networks

Figure 5.3.3-1b: Scenario 2: a UE with a subscription from OP2 moves between OP2's own 5G access networks and either OP1's or OP3's shared 5G networks

NOTE 1: OP1_5G/OP3_5G are OP1's/OP3's shared 5G networks via indirect connection between the shared radio access network and OP2's core network in Figure 5.3.3-1a and Figure 5.3.3-1b.
NOTE 2: OP2_5G/OP2_4G are OP2's networks, which may be MOCN networks or non-shared networks.

1. The UE connects to the participating OP2_4G network and then accesses the hosting OP1_5G or OP3_5G network when the UE crosses the border between the shared network managed by the hosting operators and OP2's own 4G access networks in scenario 1 (shown as ① in Figure 5.3.3-1a).
2. The UE connects to the participating OP2_5G network and then accesses the hosting OP1_5G or OP3_5G network when the UE crosses the border between the shared network managed by the hosting operator and OP2's own 5G access networks in scenario 2 (shown as ② in Figure 5.3.3-1b).

Figure 5.3.3-2: Scenario 3: a UE with a subscription from OP2 moves between the coverage of OP1's and OP3's shared 5G access networks

3. The UE connects to the hosting OP1_5G network and then accesses the hosting OP3_5G network when the UE crosses the border between the two shared networks managed by different hosting operators in scenario 3 (shown as ③ in Figure 5.3.3-2).
4. The UE accesses the shared network via the indirect network sharing method in the specific geographical area, shown as OP1_5G (shared) and/or OP3_5G (shared) in Figure 5.3.3-2, based on the agreements between the hosting and the participating operators.
5. The UE connects to an appropriate access network when the user moves to an area where more than one operator's access networks provide connectivity, e.g. the existing OP3 5G network and the OP2 4G network, or the existing OP1 5G network and the OP3 5G network, based on the agreement between the hosting and participating operators; a sketch of such a selection is given below.
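The following sketch illustrates one way the access selection in step 5 could be reasoned about: given the set of networks providing coverage at the UE's location and the operator agreements, pick the highest-preference candidate. The preference ordering, network names and policy are invented for illustration; the actual behaviour is governed by the hosting/participating operators' agreements and the selection procedures of TS 23.122 [5].

```python
# Illustrative access selection among overlapping coverages for a UE
# subscribed to OP2. The preference ordering below is an invented example
# of an operator policy, not a 3GPP-defined procedure.

# Candidates in decreasing order of preference (hypothetical policy:
# prefer the home operator's own networks, then authorized shared 5G).
PREFERENCE = [
    ("OP2", "5G", "own"),
    ("OP2", "4G", "own"),
    ("OP1", "5G", "shared-indirect"),   # per OP1-OP2 agreement
    ("OP3", "5G", "shared-indirect"),   # per OP3-OP2 agreement
]

def select_access(coverage: set):
    """Return the most preferred access available at the UE's location."""
    for candidate in PREFERENCE:
        if candidate in coverage:
            return candidate
    return None  # no authorized access available

# Example: border area with OP3's shared 5G and OP2's own 4G coverage.
print(select_access({("OP3", "5G", "shared-indirect"),
                     ("OP2", "4G", "own")}))
# -> ('OP2', '4G', 'own') under this invented policy
```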
5.3.4 Post-conditions
All forms of mobility (i.e. between the participating operator's RAN and the shared RAN, for UEs in both CONNECTED mode and IDLE mode; see clause 3.1 of TS 23.122 [5]) are successfully processed in a sharing scenario without direct connections between the shared access network and the core networks of the participating operator.
5.3.5 Existing Features partly or fully covering Use Case Functionality
SA1 has performed various studies on mobility and network sharing in previous releases, and related normative stage 1 requirements are introduced in 3GPP TS 22.101 [2] and TS 22.261 [3].

3GPP TS 22.261 [3] introduces requirements on diverse mobility management, stated as follows:
The 5G system shall support inter- and/or intra-access technology mobility procedures within 5GS with minimum impact to the user experience (e.g. QoS, QoE).

3GPP TS 22.261 [3] describes various access related requirements, stated as follows:
Based on operator policy, the 5G system shall support steering a UE to select certain 3GPP access network(s).

3GPP TS 22.101 [2] introduces requirements on mobility for network sharing, stated as follows:
It shall be possible to support different mobility management rules, service capabilities and access rights as a function of the home PLMN of the subscribers.

The above requirements are based on MOCN.
5.3.6 Potential New Requirements needed to support the Use Case
[PR 5.3.6-001] In case of Indirect Network Sharing, the 3GPP system shall support mechanisms to minimize service interruptions for UEs that are moving between different Shared RANs or between a Shared RAN and a non-shared RAN (managed by Hosting and Participating Operators).

[PR 5.3.6-002] In case of Indirect Network Sharing and subject to Hosting and Participating Operators' policies, the 3GPP system shall support access control for a UE accessing a Shared NG-RAN.

[PR 5.3.6-003] In case of Indirect Network Sharing and subject to Hosting and Participating Operators' policies, the 3GPP system shall be able to select an appropriate radio access network (i.e. 4G, 5G) for a UE.

[PR 5.3.6-004] In case of Indirect Network Sharing and subject to Hosting and Participating Operators' policies, the 3GPP system shall support a mechanism to enable a UE with a subscription to a Participating Operator to access an authorized Shared NG-RAN.