
SentenceTransformer based on BAAI/bge-m3

This is a sentence-transformers model finetuned from BAAI/bge-m3 on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
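
The Pooling module uses the CLS token, and the final Normalize() step L2-normalizes each embedding, so a dot product between embeddings equals their cosine similarity. A minimal sketch checking this (output value is approximate):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("adriansanz/ST-tramits-sitges-005-5ep")
emb = model.encode(["Quin és el propòsit dels ajuts econòmics?"])

# The Normalize() module makes every embedding unit-length
print(np.linalg.norm(emb[0]))  # ~1.0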

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("adriansanz/ST-tramits-sitges-005-5ep")
# Run inference
sentences = [
    'Les queixes, observacions i suggeriments són una eina important per a millorar la qualitat dels serveis municipals.',
    'Què és el que es busca amb les queixes, observacions i suggeriments?',
    'Quin és el propòsit dels ajuts econòmics?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
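
The same embeddings support semantic search over a document collection. A minimal sketch using util.semantic_search from the library (the corpus sentences are illustrative, not taken from the training data):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("adriansanz/ST-tramits-sitges-005-5ep")

# Illustrative corpus of municipal-service snippets
corpus = [
    "Les queixes, observacions i suggeriments són una eina important per a millorar la qualitat dels serveis municipals.",
    "L’Espai Jove de Sitges és l'equipament municipal on els joves poden dur a terme iniciatives pròpies.",
]
query = "Què és el que es busca amb les queixes, observacions i suggeriments?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode(query)

# Retrieve the best-matching corpus entry by cosine similarity
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(corpus[hits[0][0]["corpus_id"]], hits[0][0]["score"])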

Evaluation

Metrics

The model was evaluated at each of the six Matryoshka dimensionalities used in training (1024, 768, 512, 256, 128 and 64); each table below corresponds to one embedding size, matching the dim_*_cosine_map@100 columns in the Training Logs.

Information Retrieval (dim_1024)

Metric Value
cosine_accuracy@1 0.1437
cosine_accuracy@3 0.2819
cosine_accuracy@5 0.393
cosine_accuracy@10 0.5665
cosine_precision@1 0.1437
cosine_precision@3 0.094
cosine_precision@5 0.0786
cosine_precision@10 0.0566
cosine_recall@1 0.1437
cosine_recall@3 0.2819
cosine_recall@5 0.393
cosine_recall@10 0.5665
cosine_ndcg@10 0.3243
cosine_mrr@10 0.2507
cosine_map@100 0.2695

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.147
cosine_accuracy@3 0.2871
cosine_accuracy@5 0.3901
cosine_accuracy@10 0.5631
cosine_precision@1 0.147
cosine_precision@3 0.0957
cosine_precision@5 0.078
cosine_precision@10 0.0563
cosine_recall@1 0.147
cosine_recall@3 0.2871
cosine_recall@5 0.3901
cosine_recall@10 0.5631
cosine_ndcg@10 0.3255
cosine_mrr@10 0.2533
cosine_map@100 0.2723

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.1418
cosine_accuracy@3 0.2838
cosine_accuracy@5 0.389
cosine_accuracy@10 0.562
cosine_precision@1 0.1418
cosine_precision@3 0.0946
cosine_precision@5 0.0778
cosine_precision@10 0.0562
cosine_recall@1 0.1418
cosine_recall@3 0.2838
cosine_recall@5 0.389
cosine_recall@10 0.562
cosine_ndcg@10 0.3226
cosine_mrr@10 0.2497
cosine_map@100 0.2689

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.1435
cosine_accuracy@3 0.2831
cosine_accuracy@5 0.385
cosine_accuracy@10 0.5551
cosine_precision@1 0.1435
cosine_precision@3 0.0944
cosine_precision@5 0.077
cosine_precision@10 0.0555
cosine_recall@1 0.1435
cosine_recall@3 0.2831
cosine_recall@5 0.385
cosine_recall@10 0.5551
cosine_ndcg@10 0.3205
cosine_mrr@10 0.2492
cosine_map@100 0.2685

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.1392
cosine_accuracy@3 0.2795
cosine_accuracy@5 0.3838
cosine_accuracy@10 0.5534
cosine_precision@1 0.1392
cosine_precision@3 0.0932
cosine_precision@5 0.0768
cosine_precision@10 0.0553
cosine_recall@1 0.1392
cosine_recall@3 0.2795
cosine_recall@5 0.3838
cosine_recall@10 0.5534
cosine_ndcg@10 0.3176
cosine_mrr@10 0.2458
cosine_map@100 0.2649

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.1403
cosine_accuracy@3 0.2753
cosine_accuracy@5 0.3698
cosine_accuracy@10 0.5361
cosine_precision@1 0.1403
cosine_precision@3 0.0918
cosine_precision@5 0.074
cosine_precision@10 0.0536
cosine_recall@1 0.1403
cosine_recall@3 0.2753
cosine_recall@5 0.3698
cosine_recall@10 0.5361
cosine_ndcg@10 0.3099
cosine_mrr@10 0.2412
cosine_map@100 0.2602
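
Because the model was trained with a Matryoshka objective (see Training Details), its embeddings can be truncated to any of the evaluated sizes with only a gradual drop in retrieval quality, as the tables above show. A minimal sketch, assuming a sentence-transformers version that supports the truncate_dim argument (2.7 or later):

from sentence_transformers import SentenceTransformer

# Load the model so that all embeddings are truncated to 256 dimensions
model = SentenceTransformer("adriansanz/ST-tramits-sitges-005-5ep", truncate_dim=256)

embeddings = model.encode(["Quin és el propòsit dels ajuts econòmics?"])
print(embeddings.shape)  # (1, 256)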

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 5,214 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string, min: 3 tokens, mean: 49.66 tokens, max: 149 tokens
    • anchor: string, min: 10 tokens, mean: 20.85 tokens, max: 48 tokens
  • Samples:
    • positive: Ajuts per la reactivació de petites empreses i persones autònomes donades d’alta al règim especial de treballadors autònoms (RETA) amb una antiguitat superior als cinc anys (COVID19)
      anchor: Quin és el requisit per a les petites empreses per rebre ajuts?
    • positive: En cas de no poder desenvolupar el projecte o activitat per la qual s'ha sol·licitat la subvenció, l'entitat beneficiària pot renunciar a la subvenció.
      anchor: Puc renunciar a una subvenció si ja l'he rebut?
    • positive: L’Espai Jove de Sitges és l'equipament municipal on els joves poden dur a terme iniciatives pròpies i on també es desenvolupen d’altres impulsades per la regidoria de Joventut.
      anchor: Quin és el paper de la regidoria de Joventut a l'Espai Jove de Sitges?
  • Loss: MatryoshkaLoss (a construction sketch follows these parameters):
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
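
A minimal sketch of how this loss configuration can be constructed with the library (the training model here is loaded from the base checkpoint, as in this run):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-m3")

# The inner ranking loss is applied at every truncated dimensionality,
# with equal weight per dimension, as in the parameters above.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[1024, 768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1, 1],
)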
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 5
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.2
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
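
A sketch of how these non-default values map onto SentenceTransformerTrainingArguments (output_dir is a placeholder; all other values are taken from the list above):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder, not from this run
    eval_strategy="epoch",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.2,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)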

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_1024_cosine_map@100 dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.4908 10 3.3699 - - - - - -
0.9816 20 1.8761 0.2565 0.2430 0.2509 0.2499 0.2301 0.2567
1.4724 30 1.3111 - - - - - -
1.9632 40 0.8122 0.2636 0.2578 0.2629 0.2639 0.2486 0.2654
2.4540 50 0.5903 - - - - - -
2.9448 60 0.4306 - - - - - -
2.9939 61 - 0.2661 0.2636 0.2648 0.2659 0.2544 0.2694
3.4356 70 0.3553 - - - - - -
3.9264 80 0.2925 - - - - - -
3.9755 81 - 0.2701 0.2621 0.2663 0.2706 0.2602 0.2709
4.4172 90 0.2797 - - - - - -
4.9080 100 0.267 0.2695 0.2649 0.2685 0.2689 0.2602 0.2723
  • The saved checkpoint corresponds to the final row (epoch 4.9080, step 100); its metrics match the evaluation tables above.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.35.0.dev0
  • Datasets: 3.0.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}