BGE_base_3gpp-qa-v2_Matryoshka
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
- json
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation (https://sbert.net)
- Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
- Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
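The same embeddings can be reproduced with plain transformers by applying the CLS-token pooling and L2 normalization shown above. This is a minimal sketch, assuming the repository also exposes a standard BERT checkpoint and tokenizer, as Sentence Transformers models on the Hub normally do:
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("iris49/3gpp-embedding-model-v0")
model = AutoModel.from_pretrained("iris49/3gpp-embedding-model-v0")

sentences = ["What is the purpose of the Nudm_SubscriberDataManagement Service API?"]
inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# pooling_mode_cls_token=True: use the [CLS] token embedding
embeddings = outputs.last_hidden_state[:, 0]
# Normalize(): L2-normalize so dot products equal cosine similarities
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])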
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("iris49/3gpp-embedding-model-v0")
# Run inference
sentences = [
'What types of data structures are supported by the GET request body on the resource described in table 5.2.11.3.4-2, and how do they influence the request?',
"The data structures supported by the GET request body on the resource are detailed in table 5.2.11.3.4-2. These structures define the format and content of the data that can be sent in the request body. They might include fields such as 'filterCriteria', 'sortOrder', or 'pagination', which influence how the server processes the request and returns the appropriate data.",
"The specific triggers on the Ro interface that can lead to the termination of the IMS service include: 1) Reception of an unsuccessful Operation Result different from DIAMETER_CREDIT_CONTROL_NOT_APPLICABLE in the Debit/Reserve Units Response message. 2) Reception of an unsuccessful Result Code different from DIAMETER_CREDIT_CONTROL_NOT_APPLICABLE within the multiple units operation in the Debit/Reserve Units Response message when only one instance of the multiple units operation field is used. 3) Execution of the termination action procedure as defined in TS 32.299 when only one instance of the Multiple Unit Operation field is used. 4) Execution of the failure handling procedures when the Failure Action is set to 'Terminate' or 'Retry & Terminate'. 5) Reception in the IMS-GWF of an Abort-Session-Request message from OCS.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
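Because the model was trained with MatryoshkaLoss (see Training Details), embeddings can also be truncated to 512, 256, 128, or 64 dimensions with only a small drop in retrieval quality (see Evaluation). A minimal sketch, assuming the truncate_dim argument available in recent Sentence Transformers releases:
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings
model_256 = SentenceTransformer("iris49/3gpp-embedding-model-v0", truncate_dim=256)

queries = ["What triggers on the Ro interface can lead to termination of the IMS service?"]
embeddings = model_256.encode(queries)
print(embeddings.shape)
# (1, 256)

# Cosine similarity still applies to the truncated vectors
similarities = model_256.similarity(embeddings, embeddings)
print(similarities.shape)
# (1, 1)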
Evaluation
Metrics
Information Retrieval
- Datasets: dim_768, dim_512, dim_256, dim_128 and dim_64
- Evaluated with InformationRetrievalEvaluator
Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
---|---|---|---|---|---|
cosine_accuracy@1 | 0.8347 | 0.8341 | 0.8326 | 0.8294 | 0.8211 |
cosine_accuracy@3 | 0.9628 | 0.963 | 0.9624 | 0.9611 | 0.9575 |
cosine_accuracy@5 | 0.9806 | 0.9808 | 0.9802 | 0.9796 | 0.9772 |
cosine_accuracy@10 | 0.9927 | 0.9926 | 0.9923 | 0.9917 | 0.9906 |
cosine_precision@1 | 0.8347 | 0.8341 | 0.8326 | 0.8294 | 0.8211 |
cosine_precision@3 | 0.3209 | 0.321 | 0.3208 | 0.3204 | 0.3192 |
cosine_precision@5 | 0.1961 | 0.1962 | 0.196 | 0.1959 | 0.1954 |
cosine_precision@10 | 0.0993 | 0.0993 | 0.0992 | 0.0992 | 0.0991 |
cosine_recall@1 | 0.8347 | 0.8341 | 0.8326 | 0.8294 | 0.8211 |
cosine_recall@3 | 0.9628 | 0.963 | 0.9624 | 0.9611 | 0.9575 |
cosine_recall@5 | 0.9806 | 0.9808 | 0.9802 | 0.9796 | 0.9772 |
cosine_recall@10 | 0.9927 | 0.9926 | 0.9923 | 0.9917 | 0.9906 |
cosine_ndcg@10 | 0.9235 | 0.9233 | 0.9224 | 0.9205 | 0.9159 |
cosine_mrr@10 | 0.9003 | 0.9 | 0.8989 | 0.8965 | 0.8908 |
cosine_map@100 | 0.9007 | 0.9004 | 0.8993 | 0.897 | 0.8913 |
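The scores above were produced with InformationRetrievalEvaluator at each embedding dimension. The sketch below shows how such an evaluation could be reproduced; the queries, corpus, and relevance judgments are hypothetical placeholders, since the evaluation split is not distributed with this card.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("iris49/3gpp-embedding-model-v0")

# Placeholder evaluation data: query id -> text, doc id -> text, query id -> set of relevant doc ids
queries = {"q1": "What does the 'dataStatProps' attribute represent in the 'AnalyticsMetadataInfo' type?"}
corpus = {"d1": "The 'dataStatProps' attribute represents a list of dataset statistical properties ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_768",
)
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100 keyed by evaluator name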
Training Details
Training Dataset
json
- Dataset: json
- Size: 56,041 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:
 | anchor | positive |
---|---|---|
type | string | string |
details | min: 15 tokens, mean: 30.56 tokens, max: 66 tokens | min: 42 tokens, mean: 109.65 tokens, max: 298 tokens |
- Samples:
anchor | positive |
---|---|
What does the 'dataStatProps' attribute represent in the 'AnalyticsMetadataInfo' type, and what is its data type? | The 'dataStatProps' attribute in the 'AnalyticsMetadataInfo' type represents a list of dataset statistical properties of the data used to generate the analytics. It is defined as an optional attribute with a data type of 'array(DatasetStatisticalProperty)' and a cardinality of 1..N, meaning it can contain one or more elements. |
Why is it important to have standardized methods for resource management in the Nudm_SubscriberDataManagement Service API? | Standardized methods for resource management in the Nudm_SubscriberDataManagement Service API are important because they ensure uniformity, predictability, and compatibility across different implementations and systems. This standardization facilitates seamless integration, reduces errors, and enhances the efficiency of managing subscriber data, which is critical for maintaining reliable communication services. |
What is the purpose of the Nsmf_PDUSession_SMContextStatusNotify service operation in the context of I-SMF context transfer? | The Nsmf_PDUSession_SMContextStatusNotify service operation is used by the SMF (Session Management Function) to notify its consumers about the status of an SM (Session Management) context related to a PDU (Packet Data Unit) Session. In the context of I-SMF (Intermediate SMF) context transfer, this service operation is used to indicate the transfer of the SM context to a new I-SMF or SMF set. It also allows the SMF to update the SMF-derived CN (Core Network) assisted RAN (Radio Access Network) parameters tuning in the AMF (Access and Mobility Management Function). Additionally, it can report DDN (Downlink Data Notification) failures and provide target DNAI (Data Network Access Identifier) information for the current or next PDU session. |
- Loss: MatryoshkaLoss with these parameters:
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
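For reference, the sketch below shows how a JSON dataset with anchor/positive columns and this loss configuration could be assembled with the standard sentence-transformers losses; the training file name is a hypothetical placeholder.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Hypothetical file name; the actual training data is not distributed with this card.
train_dataset = load_dataset("json", data_files="3gpp_qa_pairs.jsonl", split="train")
# Expected columns: 'anchor' (question) and 'positive' (answer passage)

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
inner_loss = MultipleNegativesRankingLoss(model)  # in-batch negatives ranking loss
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)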
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 4
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- fp16: True
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
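Expressed with the Sentence Transformers v3 trainer API, these non-default values would roughly correspond to the arguments below; the output directory is a placeholder, and save_strategy="epoch" is an assumption needed so that load_best_model_at_end can compare checkpoints.
from sentence_transformers.training_args import BatchSamplers, SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-3gpp-qa-v2-matryoshka",  # hypothetical output directory
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: required for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)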
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 4
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
---|---|---|---|---|---|---|---|
0.0913 | 10 | 1.4273 | - | - | - | - | - |
0.1826 | 20 | 0.5399 | - | - | - | - | - |
0.2740 | 30 | 0.1252 | - | - | - | - | - |
0.3653 | 40 | 0.0625 | - | - | - | - | - |
0.4566 | 50 | 0.0507 | - | - | - | - | - |
0.5479 | 60 | 0.0366 | - | - | - | - | - |
0.6393 | 70 | 0.029 | - | - | - | - | - |
0.7306 | 80 | 0.0239 | - | - | - | - | - |
0.8219 | 90 | 0.0252 | - | - | - | - | - |
0.9132 | 100 | 0.0237 | - | - | - | - | - |
0.9954 | 109 | - | 0.9199 | 0.9195 | 0.9180 | 0.9150 | 0.9081 |
1.0046 | 110 | 0.026 | - | - | - | - | - |
1.0959 | 120 | 0.017 | - | - | - | - | - |
1.1872 | 130 | 0.02 | - | - | - | - | - |
1.2785 | 140 | 0.0125 | - | - | - | - | - |
1.3699 | 150 | 0.0134 | - | - | - | - | - |
1.4612 | 160 | 0.0128 | - | - | - | - | - |
1.5525 | 170 | 0.0123 | - | - | - | - | - |
1.6438 | 180 | 0.0097 | - | - | - | - | - |
1.7352 | 190 | 0.0101 | - | - | - | - | - |
1.8265 | 200 | 0.0124 | - | - | - | - | - |
1.9178 | 210 | 0.0116 | - | - | - | - | - |
2.0 | 219 | - | 0.9220 | 0.9216 | 0.9206 | 0.9184 | 0.9130 |
2.0091 | 220 | 0.012 | - | - | - | - | - |
2.1005 | 230 | 0.0111 | - | - | - | - | - |
2.1918 | 240 | 0.0101 | - | - | - | - | - |
2.2831 | 250 | 0.0101 | - | - | - | - | - |
2.3744 | 260 | 0.009 | - | - | - | - | - |
2.4658 | 270 | 0.0103 | - | - | - | - | - |
2.5571 | 280 | 0.009 | - | - | - | - | - |
2.6484 | 290 | 0.0083 | - | - | - | - | - |
2.7397 | 300 | 0.0076 | - | - | - | - | - |
2.8311 | 310 | 0.0093 | - | - | - | - | - |
2.9224 | 320 | 0.0104 | - | - | - | - | - |
2.9954 | 328 | - | 0.9234 | 0.9230 | 0.9221 | 0.9201 | 0.9156 |
3.0137 | 330 | 0.0104 | - | - | - | - | - |
3.1050 | 340 | 0.0089 | - | - | - | - | - |
3.1963 | 350 | 0.0084 | - | - | - | - | - |
3.2877 | 360 | 0.0082 | - | - | - | - | - |
3.3790 | 370 | 0.0089 | - | - | - | - | - |
3.4703 | 380 | 0.0083 | - | - | - | - | - |
3.5616 | 390 | 0.0061 | - | - | - | - | - |
3.6530 | 400 | 0.0065 | - | - | - | - | - |
3.7443 | 410 | 0.0063 | - | - | - | - | - |
3.8356 | 420 | 0.0084 | - | - | - | - | - |
3.9269 | 430 | 0.0083 | - | - | - | - | - |
**3.9817** | **436** | **-** | **0.9235** | **0.9233** | **0.9224** | **0.9205** | **0.9159** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.3.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 1.2.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}