BGE base Financial Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
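
In words: the BERT encoder produces one embedding per token, the pooling module keeps only the [CLS] token (pooling_mode_cls_token: True), and the final module L2-normalizes the result so that dot products equal cosine similarities. A minimal sketch of the equivalent computation using the plain transformers library (this assumes the checkpoint loads directly with AutoModel, which works because sentence-transformers repositories store the underlying BertModel at the top level):

import torch
from transformers import AutoModel, AutoTokenizer

model_id = "ValentinaKim/bge-base-financial-matryoshka"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

inputs = tokenizer(
    "Operating cash flow funds operating needs and capital expenditures.",
    return_tensors="pt", truncation=True, max_length=512,
)
with torch.no_grad():
    token_embeddings = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)

# Pooling: pooling_mode_cls_token=True -> keep only the [CLS] token.
cls_embedding = token_embeddings[:, 0]                      # (1, 768)

# Normalize: L2-normalize so that dot product == cosine similarity.
sentence_embedding = torch.nn.functional.normalize(cls_embedding, p=2, dim=1)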

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ValentinaKim/bge-base-financial-matryoshka")
# Run inference
sentences = [
    'Operating cash flow provides the primary source of cash to fund operating needs and capital expenditures.',
    'What is the primary source of cash used by the company to fund operating needs and capital expenditures?',
    'What kinds of products and services does the Company provide under the AARP Program?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
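
Because the model was trained with MatryoshkaLoss (see Training Details), its embeddings can also be truncated to 512, 256, 128, or 64 dimensions with only a modest drop in retrieval quality (see Evaluation). A sketch using the truncate_dim argument, available in Sentence Transformers 2.7 and later:

from sentence_transformers import SentenceTransformer

# truncate_dim keeps only the first n dimensions of each embedding.
model = SentenceTransformer(
    "ValentinaKim/bge-base-financial-matryoshka", truncate_dim=256
)
sentences = [
    "Operating cash flow provides the primary source of cash to fund operating needs and capital expenditures.",
    "What is the primary source of cash used by the company to fund operating needs and capital expenditures?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (2, 256)

# Truncated vectors are no longer unit-length, but cosine similarity
# re-normalizes internally, so similarity() still behaves as expected.
similarities = model.similarity(embeddings, embeddings)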

Evaluation

Metrics

The model was evaluated at each of the five Matryoshka dimensions (768, 512, 256, 128, and 64); one retrieval table per dimension follows, in that order.

Information Retrieval (dim 768)

Metric Value
cosine_accuracy@1 0.7129
cosine_accuracy@3 0.8386
cosine_accuracy@5 0.8657
cosine_accuracy@10 0.9129
cosine_precision@1 0.7129
cosine_precision@3 0.2795
cosine_precision@5 0.1731
cosine_precision@10 0.0913
cosine_recall@1 0.7129
cosine_recall@3 0.8386
cosine_recall@5 0.8657
cosine_recall@10 0.9129
cosine_ndcg@10 0.8161
cosine_mrr@10 0.7851
cosine_map@100 0.7884

Information Retrieval (dim 512)

Metric Value
cosine_accuracy@1 0.7086
cosine_accuracy@3 0.8314
cosine_accuracy@5 0.8571
cosine_accuracy@10 0.91
cosine_precision@1 0.7086
cosine_precision@3 0.2771
cosine_precision@5 0.1714
cosine_precision@10 0.091
cosine_recall@1 0.7086
cosine_recall@3 0.8314
cosine_recall@5 0.8571
cosine_recall@10 0.91
cosine_ndcg@10 0.81
cosine_mrr@10 0.7782
cosine_map@100 0.7817

Information Retrieval (dim 256)

Metric Value
cosine_accuracy@1 0.7057
cosine_accuracy@3 0.8214
cosine_accuracy@5 0.8614
cosine_accuracy@10 0.8957
cosine_precision@1 0.7057
cosine_precision@3 0.2738
cosine_precision@5 0.1723
cosine_precision@10 0.0896
cosine_recall@1 0.7057
cosine_recall@3 0.8214
cosine_recall@5 0.8614
cosine_recall@10 0.8957
cosine_ndcg@10 0.8032
cosine_mrr@10 0.7735
cosine_map@100 0.7778

Information Retrieval (dim 128)

Metric Value
cosine_accuracy@1 0.6871
cosine_accuracy@3 0.8086
cosine_accuracy@5 0.8429
cosine_accuracy@10 0.8943
cosine_precision@1 0.6871
cosine_precision@3 0.2695
cosine_precision@5 0.1686
cosine_precision@10 0.0894
cosine_recall@1 0.6871
cosine_recall@3 0.8086
cosine_recall@5 0.8429
cosine_recall@10 0.8943
cosine_ndcg@10 0.7914
cosine_mrr@10 0.7586
cosine_map@100 0.7626

Information Retrieval (dim 64)

Metric Value
cosine_accuracy@1 0.66
cosine_accuracy@3 0.7714
cosine_accuracy@5 0.8086
cosine_accuracy@10 0.8714
cosine_precision@1 0.66
cosine_precision@3 0.2571
cosine_precision@5 0.1617
cosine_precision@10 0.0871
cosine_recall@1 0.66
cosine_recall@3 0.7714
cosine_recall@5 0.8086
cosine_recall@10 0.8714
cosine_ndcg@10 0.7614
cosine_mrr@10 0.7269
cosine_map@100 0.732
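
These tables follow the output format of sentence-transformers' InformationRetrievalEvaluator. A sketch of how such numbers are produced; the queries, corpus, and relevance judgments below are illustrative placeholders, not the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# To evaluate at a truncated dimension, load the model with truncate_dim.
model = SentenceTransformer(
    "ValentinaKim/bge-base-financial-matryoshka", truncate_dim=768
)

# Placeholder data: map query ids and document ids to text, plus the
# set of relevant document ids for each query.
queries = {"q1": "What was Alphabet's net cash from operating activities in 2023?"}
corpus = {"d1": "For the year ended December 31, 2023, Alphabet Inc. reported "
                "net cash provided by operating activities of $101,746 million."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries, corpus, relevant_docs, name="dim_768"
)
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP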

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 6,300 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string, min 6 / mean 45.81 / max 439 tokens
    • anchor: string, min 7 / mean 20.26 / max 43 tokens
  • Samples:
    • positive: For the year ended December 31, 2023, Alphabet Inc. reported a net cash provided by operating activities of $101,746 million.
      anchor: What was the net cash provided by operating activities for Alphabet Inc. in 2023?
    • positive: Our History In 2000, ICE was founded with the idea of transforming energy markets by creating a network that removed barriers and provided greater transparency, efficiency and access.
      anchor: When was Intercontinental Exchange, Inc. founded, and what was its initial focus?
    • positive: Item 8. Financial Statements and Supplementary Data The index to Financial Statements and Supplementary Data is presented
      anchor: What is presented in Item 8 according to Financial Statements and Supplementary Data?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
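
A sketch of how this loss configuration is built with the sentence-transformers API (the base checkpoint is taken from the Model Description above):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# In-batch negatives ranking loss, applied at every truncated embedding
# size in matryoshka_dims (all weighted equally, per the weights above).
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64]
)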
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • gradient_accumulation_steps: 32
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
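
Note the effective batch size: 16 per device with 32 gradient-accumulation steps gives 512 examples per optimizer step. A sketch of these settings as SentenceTransformerTrainingArguments (output_dir is a hypothetical placeholder, and save_strategy="epoch" is an assumption: transformers requires the save and eval strategies to match when load_best_model_at_end=True):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # hypothetical path
    num_train_epochs=4,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=32,  # effective batch size: 16 * 32 = 512
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    tf32=False,
    eval_strategy="epoch",
    save_strategy="epoch",           # assumed; must match eval_strategy
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)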

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 32
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
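
Tying it together, a sketch of the training loop with SentenceTransformerTrainer, reusing the args and loss objects sketched above (the data file name is hypothetical; the card only identifies the data as a local json dataset of 6,300 positive/anchor pairs):

from datasets import load_dataset
from sentence_transformers import SentenceTransformerTrainer

# Hypothetical file name; columns must be "positive" and "anchor".
train_dataset = load_dataset("json", data_files="train.json", split="train")

trainer = SentenceTransformerTrainer(
    model=model,   # SentenceTransformer("BAAI/bge-base-en-v1.5") from above
    args=args,     # SentenceTransformerTrainingArguments from above
    train_dataset=train_dataset,
    loss=loss,     # MatryoshkaLoss from the Training Dataset section
)
trainer.train()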

Training Logs

Epoch   Step  Training Loss  dim_128  dim_256  dim_512  dim_64  dim_768
0.9746  6     -              0.7258   0.7501   0.7513   0.6860  0.7589
1.6244  10    1.4436         -        -        -        -       -
1.9492  12    -              0.7494   0.7733   0.7800   0.7187  0.7827
2.9239  18    -              0.7601   0.7796   0.7813   0.7312  0.7897
3.2487  20    0.6159         -        -        -        -       -
3.8985  24    -              0.7626   0.7778   0.7817   0.7320  0.7884

  • The dim_* columns report cosine_map@100 at each embedding dimension.
  • The last row (epoch 3.8985, step 24) corresponds to the saved checkpoint; its values match the evaluation metrics reported above.

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.1.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.34.2
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}