BGE_base_3gpp-qa-v2_Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
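
The three modules apply BERT encoding, CLS-token pooling (pooling_mode_cls_token: True), and L2 normalization. As a minimal sketch, the same pipeline can be reproduced with transformers directly; the example sentence below is illustrative:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("iris49/3gpp-embedding-model-v0")
model = AutoModel.from_pretrained("iris49/3gpp-embedding-model-v0")

inputs = tokenizer(
    ["What is the purpose of the N4 interface?"],  # illustrative sentence, not from the card
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    out = model(**inputs)
# (1) Pooling with pooling_mode_cls_token=True: take the [CLS] token's hidden state
embeddings = out.last_hidden_state[:, 0]
# (2) Normalize(): L2-normalize, so dot products equal cosine similarities
embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1)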

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("iris49/3gpp-embedding-model-v0")
# Run inference
sentences = [
    'What types of data structures are supported by the GET request body on the resource described in table 5.2.11.3.4-2, and how do they influence the request?',
    "The data structures supported by the GET request body on the resource are detailed in table 5.2.11.3.4-2. These structures define the format and content of the data that can be sent in the request body. They might include fields such as 'filterCriteria', 'sortOrder', or 'pagination', which influence how the server processes the request and returns the appropriate data.",
    "The specific triggers on the Ro interface that can lead to the termination of the IMS service include: 1) Reception of an unsuccessful Operation Result different from DIAMETER_CREDIT_CONTROL_NOT_APPLICABLE in the Debit/Reserve Units Response message. 2) Reception of an unsuccessful Result Code different from DIAMETER_CREDIT_CONTROL_NOT_APPLICABLE within the multiple units operation in the Debit/Reserve Units Response message when only one instance of the multiple units operation field is used. 3) Execution of the termination action procedure as defined in TS 32.299 when only one instance of the Multiple Unit Operation field is used. 4) Execution of the failure handling procedures when the Failure Action is set to 'Terminate' or 'Retry & Terminate'. 5) Reception in the IMS-GWF of an Abort-Session-Request message from OCS.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
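
Because the model was trained with MatryoshkaLoss (see Training Details), embeddings can also be truncated to 512, 256, 128, or 64 dimensions with only a small quality loss (see Evaluation). A sketch using the truncate_dim argument of SentenceTransformer; the query is illustrative:

from sentence_transformers import SentenceTransformer

# Truncate all embeddings to the 256-dimensional Matryoshka slice
model = SentenceTransformer("iris49/3gpp-embedding-model-v0", truncate_dim=256)
embeddings = model.encode(["illustrative query about 3GPP procedures"])
print(embeddings.shape)
# (1, 256)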

Evaluation

Metrics

Information Retrieval

| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:---------------------|--------:|--------:|--------:|--------:|-------:|
| cosine_accuracy@1 | 0.8347 | 0.8341 | 0.8326 | 0.8294 | 0.8211 |
| cosine_accuracy@3 | 0.9628 | 0.963 | 0.9624 | 0.9611 | 0.9575 |
| cosine_accuracy@5 | 0.9806 | 0.9808 | 0.9802 | 0.9796 | 0.9772 |
| cosine_accuracy@10 | 0.9927 | 0.9926 | 0.9923 | 0.9917 | 0.9906 |
| cosine_precision@1 | 0.8347 | 0.8341 | 0.8326 | 0.8294 | 0.8211 |
| cosine_precision@3 | 0.3209 | 0.321 | 0.3208 | 0.3204 | 0.3192 |
| cosine_precision@5 | 0.1961 | 0.1962 | 0.196 | 0.1959 | 0.1954 |
| cosine_precision@10 | 0.0993 | 0.0993 | 0.0992 | 0.0992 | 0.0991 |
| cosine_recall@1 | 0.8347 | 0.8341 | 0.8326 | 0.8294 | 0.8211 |
| cosine_recall@3 | 0.9628 | 0.963 | 0.9624 | 0.9611 | 0.9575 |
| cosine_recall@5 | 0.9806 | 0.9808 | 0.9802 | 0.9796 | 0.9772 |
| cosine_recall@10 | 0.9927 | 0.9926 | 0.9923 | 0.9917 | 0.9906 |
| cosine_ndcg@10 | 0.9235 | 0.9233 | 0.9224 | 0.9205 | 0.9159 |
| cosine_mrr@10 | 0.9003 | 0.9 | 0.8989 | 0.8965 | 0.8908 |
| cosine_map@100 | 0.9007 | 0.9004 | 0.8993 | 0.897 | 0.8913 |
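
Each column corresponds to one Matryoshka truncation dimension. As a hedged sketch, metrics like these can be computed with InformationRetrievalEvaluator; the queries, corpus, and relevant_docs below are illustrative placeholders, not the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("iris49/3gpp-embedding-model-v0")
queries = {"q1": "What does the 'dataStatProps' attribute represent?"}  # query_id -> text (placeholder)
corpus = {"d1": "The 'dataStatProps' attribute represents a list of ..."}  # doc_id -> text (placeholder)
relevant_docs = {"q1": {"d1"}}  # query_id -> set of relevant doc_ids (placeholder)

evaluator = InformationRetrievalEvaluator(
    queries, corpus, relevant_docs,
    truncate_dim=256,  # evaluate the 256-dimensional Matryoshka slice
    name="dim_256",
)
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100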

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 56,041 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:

    | | anchor | positive |
    |:------------|:-------------|:--------------|
    | type | string | string |
    | min tokens | 15 | 42 |
    | mean tokens | 30.56 | 109.65 |
    | max tokens | 66 | 298 |
  • Samples:

    | anchor | positive |
    |:-------|:---------|
    | What does the 'dataStatProps' attribute represent in the 'AnalyticsMetadataInfo' type, and what is its data type? | The 'dataStatProps' attribute in the 'AnalyticsMetadataInfo' type represents a list of dataset statistical properties of the data used to generate the analytics. It is defined as an optional attribute with a data type of 'array(DatasetStatisticalProperty)' and a cardinality of 1..N, meaning it can contain one or more elements. |
    | Why is it important to have standardized methods for resource management in the Nudm_SubscriberDataManagement Service API? | Standardized methods for resource management in the Nudm_SubscriberDataManagement Service API are important because they ensure uniformity, predictability, and compatibility across different implementations and systems. This standardization facilitates seamless integration, reduces errors, and enhances the efficiency of managing subscriber data, which is critical for maintaining reliable communication services. |
    | What is the purpose of the Nsmf_PDUSession_SMContextStatusNotify service operation in the context of I-SMF context transfer? | The Nsmf_PDUSession_SMContextStatusNotify service operation is used by the SMF (Session Management Function) to notify its consumers about the status of an SM (Session Management) context related to a PDU (Packet Data Unit) Session. In the context of I-SMF (Intermediate SMF) context transfer, this service operation is used to indicate the transfer of the SM context to a new I-SMF or SMF set. It also allows the SMF to update the SMF-derived CN (Core Network) assisted RAN (Radio Access Network) parameters tuning in the AMF (Access and Mobility Management Function). Additionally, it can report DDN (Downlink Data Notification) failures and provide target DNAI (Data Network Access Identifier) information for the current or next PDU session. |
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
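
As a sketch, this configuration maps directly onto the sentence-transformers loss API; the wrapped loss, dimensions, and weights come from the parameters above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],  # all dimensions weighted equally
    n_dims_per_step=-1,  # train on every dimension at every step
)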
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • fp16: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
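
As a sketch, these settings map onto SentenceTransformerTrainingArguments as follows; output_dir is an assumption, since the card does not state it:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-3gpp-qa-v2-matryoshka",  # assumption: output path not given in the card
    num_train_epochs=4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="epoch",
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)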

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:------|-----:|--------------:|-----------------------:|-----------------------:|-----------------------:|-----------------------:|----------------------:|
| 0.0913 | 10 | 1.4273 | - | - | - | - | - |
| 0.1826 | 20 | 0.5399 | - | - | - | - | - |
| 0.2740 | 30 | 0.1252 | - | - | - | - | - |
| 0.3653 | 40 | 0.0625 | - | - | - | - | - |
| 0.4566 | 50 | 0.0507 | - | - | - | - | - |
| 0.5479 | 60 | 0.0366 | - | - | - | - | - |
| 0.6393 | 70 | 0.029 | - | - | - | - | - |
| 0.7306 | 80 | 0.0239 | - | - | - | - | - |
| 0.8219 | 90 | 0.0252 | - | - | - | - | - |
| 0.9132 | 100 | 0.0237 | - | - | - | - | - |
| 0.9954 | 109 | - | 0.9199 | 0.9195 | 0.9180 | 0.9150 | 0.9081 |
| 1.0046 | 110 | 0.026 | - | - | - | - | - |
| 1.0959 | 120 | 0.017 | - | - | - | - | - |
| 1.1872 | 130 | 0.02 | - | - | - | - | - |
| 1.2785 | 140 | 0.0125 | - | - | - | - | - |
| 1.3699 | 150 | 0.0134 | - | - | - | - | - |
| 1.4612 | 160 | 0.0128 | - | - | - | - | - |
| 1.5525 | 170 | 0.0123 | - | - | - | - | - |
| 1.6438 | 180 | 0.0097 | - | - | - | - | - |
| 1.7352 | 190 | 0.0101 | - | - | - | - | - |
| 1.8265 | 200 | 0.0124 | - | - | - | - | - |
| 1.9178 | 210 | 0.0116 | - | - | - | - | - |
| 2.0 | 219 | - | 0.9220 | 0.9216 | 0.9206 | 0.9184 | 0.9130 |
| 2.0091 | 220 | 0.012 | - | - | - | - | - |
| 2.1005 | 230 | 0.0111 | - | - | - | - | - |
| 2.1918 | 240 | 0.0101 | - | - | - | - | - |
| 2.2831 | 250 | 0.0101 | - | - | - | - | - |
| 2.3744 | 260 | 0.009 | - | - | - | - | - |
| 2.4658 | 270 | 0.0103 | - | - | - | - | - |
| 2.5571 | 280 | 0.009 | - | - | - | - | - |
| 2.6484 | 290 | 0.0083 | - | - | - | - | - |
| 2.7397 | 300 | 0.0076 | - | - | - | - | - |
| 2.8311 | 310 | 0.0093 | - | - | - | - | - |
| 2.9224 | 320 | 0.0104 | - | - | - | - | - |
| 2.9954 | 328 | - | 0.9234 | 0.9230 | 0.9221 | 0.9201 | 0.9156 |
| 3.0137 | 330 | 0.0104 | - | - | - | - | - |
| 3.1050 | 340 | 0.0089 | - | - | - | - | - |
| 3.1963 | 350 | 0.0084 | - | - | - | - | - |
| 3.2877 | 360 | 0.0082 | - | - | - | - | - |
| 3.3790 | 370 | 0.0089 | - | - | - | - | - |
| 3.4703 | 380 | 0.0083 | - | - | - | - | - |
| 3.5616 | 390 | 0.0061 | - | - | - | - | - |
| 3.6530 | 400 | 0.0065 | - | - | - | - | - |
| 3.7443 | 410 | 0.0063 | - | - | - | - | - |
| 3.8356 | 420 | 0.0084 | - | - | - | - | - |
| 3.9269 | 430 | 0.0083 | - | - | - | - | - |
| **3.9817** | **436** | **-** | **0.9235** | **0.9233** | **0.9224** | **0.9205** | **0.9159** |

  • The bold row denotes the saved checkpoint; its scores match the figures reported in the Evaluation section.

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.3.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 1.2.1
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1
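
To recreate this environment, the listed versions can be pinned, for example:

pip install sentence-transformers==3.3.1 transformers==4.41.2 accelerate==1.2.1 datasets==2.19.1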

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}