
Alchemy Embedding - Anudit Nagar

This is a sentence-transformers model finetuned from Alibaba-NLP/gte-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Alibaba-NLP/gte-base-en-v1.5
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("anudit/finetuned-gte-base", trust_remote_code=True)  # trust_remote_code is likely needed for the custom gte architecture
# Run inference
sentences = [
    'Society tends to admire those who despair when others hope, viewing them as sages or wise figures.',
    'What is often the societal perception of those who express pessimism about the future?',
    'How did the realization about user engagement influence the app development strategy?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
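
Because the model is trained with MatryoshkaLoss (see Training Details below), its embeddings remain useful when truncated to 512, 256, 128, or 64 dimensions. A minimal sketch, assuming the Hub id anudit/finetuned-gte-base, the truncate_dim loading option of recent sentence-transformers releases, and trust_remote_code for the custom gte architecture:

from sentence_transformers import SentenceTransformer

# Load the model at a reduced embedding size. truncate_dim keeps only the first
# 256 entries of each vector; trust_remote_code=True is assumed because the base
# gte architecture ships custom modeling code.
model = SentenceTransformer("anudit/finetuned-gte-base", truncate_dim=256, trust_remote_code=True)

sentences = [
    "Society tends to admire those who despair when others hope, viewing them as sages or wise figures.",
    "What is often the societal perception of those who express pessimism about the future?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (2, 256)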

Evaluation

Metrics

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.782
cosine_accuracy@3 0.8889
cosine_accuracy@5 0.9249
cosine_accuracy@10 0.952
cosine_precision@1 0.782
cosine_precision@3 0.2963
cosine_precision@5 0.185
cosine_precision@10 0.0952
cosine_recall@1 0.782
cosine_recall@3 0.8889
cosine_recall@5 0.9249
cosine_recall@10 0.952
cosine_ndcg@10 0.8676
cosine_mrr@10 0.8403
cosine_map@100 0.8422

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.7804
cosine_accuracy@3 0.8848
cosine_accuracy@5 0.9221
cosine_accuracy@10 0.9515
cosine_precision@1 0.7804
cosine_precision@3 0.2949
cosine_precision@5 0.1844
cosine_precision@10 0.0951
cosine_recall@1 0.7804
cosine_recall@3 0.8848
cosine_recall@5 0.9221
cosine_recall@10 0.9515
cosine_ndcg@10 0.8662
cosine_mrr@10 0.8387
cosine_map@100 0.8405

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.7754
cosine_accuracy@3 0.8804
cosine_accuracy@5 0.9169
cosine_accuracy@10 0.9468
cosine_precision@1 0.7754
cosine_precision@3 0.2935
cosine_precision@5 0.1834
cosine_precision@10 0.0947
cosine_recall@1 0.7754
cosine_recall@3 0.8804
cosine_recall@5 0.9169
cosine_recall@10 0.9468
cosine_ndcg@10 0.8614
cosine_mrr@10 0.8338
cosine_map@100 0.8361

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.7617
cosine_accuracy@3 0.8717
cosine_accuracy@5 0.9117
cosine_accuracy@10 0.9419
cosine_precision@1 0.7617
cosine_precision@3 0.2906
cosine_precision@5 0.1823
cosine_precision@10 0.0942
cosine_recall@1 0.7617
cosine_recall@3 0.8717
cosine_recall@5 0.9117
cosine_recall@10 0.9419
cosine_ndcg@10 0.8516
cosine_mrr@10 0.8226
cosine_map@100 0.8248

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.7409
cosine_accuracy@3 0.8539
cosine_accuracy@5 0.8936
cosine_accuracy@10 0.9293
cosine_precision@1 0.7409
cosine_precision@3 0.2846
cosine_precision@5 0.1787
cosine_precision@10 0.0929
cosine_recall@1 0.7409
cosine_recall@3 0.8539
cosine_recall@5 0.8936
cosine_recall@10 0.9293
cosine_ndcg@10 0.8339
cosine_mrr@10 0.8033
cosine_map@100 0.8058
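
The five tables above report the same retrieval metrics at the five Matryoshka dimensions (768, 512, 256, 128, 64). A hedged sketch of how such numbers could be computed with the library's InformationRetrievalEvaluator, using hypothetical queries, corpus, and relevant_docs dictionaries built from held-out anchor/positive pairs; the model id, trust_remote_code flag, and the evaluator's truncate_dim argument are assumptions:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical held-out data: query id -> text, doc id -> text, query id -> relevant doc ids.
queries = {"q1": "What is often the societal perception of those who express pessimism about the future?"}
corpus = {"d1": "Society tends to admire those who despair when others hope, viewing them as sages or wise figures."}
relevant_docs = {"q1": {"d1"}}

model = SentenceTransformer("anudit/finetuned-gte-base", trust_remote_code=True)

# One evaluator per Matryoshka dimension; truncate_dim limits the vectors to the first `dim` entries.
for dim in [768, 512, 256, 128, 64]:
    evaluator = InformationRetrievalEvaluator(
        queries=queries,
        corpus=corpus,
        relevant_docs=relevant_docs,
        name=f"dim_{dim}",
        truncate_dim=dim,
    )
    results = evaluator(model)
    print(dim, results)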

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 32,833 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min: 3 tokens, mean: 34.54 tokens, max: 102 tokens
    • anchor: string; min: 9 tokens, mean: 16.78 tokens, max: 77 tokens
  • Samples:
    • positive: The author saw taking risks as a necessary part of the creative process, and was willing to take risks in order to explore new ideas and themes.
      anchor: What was the author's perspective on the importance of taking risks in creative work?
    • positive: Recognizing that older users are less likely to invite new users led to a strategic focus on younger demographics, prompting a shift in development efforts toward creating products that resonate with teens.
      anchor: How did the realization about user engagement influence the app development strategy?
    • positive: The phrase emphasizes the fragility of Earth and our collective responsibility to protect it and ensure sustainable resource management for future generations.
      anchor: What is the significance of the phrase 'pale blue dot' in relation to environmental responsibility?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
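
A minimal sketch of how this loss configuration could be built with the library's MatryoshkaLoss wrapping MultipleNegativesRankingLoss; the base-model id comes from the card, and trust_remote_code is an assumption for its custom architecture:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Alibaba-NLP/gte-base-en-v1.5", trust_remote_code=True)

# Apply the ranking loss at every Matryoshka dimension listed above, with equal weights.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)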
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 24
  • per_device_eval_batch_size: 24
  • gradient_accumulation_steps: 8
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates
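
A hedged sketch of how these non-default values map onto SentenceTransformerTrainingArguments; output_dir is a hypothetical placeholder, and save_strategy="epoch" is an assumption added so load_best_model_at_end can restore the best epoch:

from sentence_transformers.training_args import SentenceTransformerTrainingArguments, BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-gte-base",  # hypothetical output path
    eval_strategy="epoch",
    save_strategy="epoch",            # assumed: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)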

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 24
  • per_device_eval_batch_size: 24
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 8
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
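
Putting the pieces together, a hedged end-to-end sketch with SentenceTransformerTrainer, reusing the model, loss, and args objects from the sketches above and a hypothetical anchor/positive dataset in the format described under Training Dataset:

from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer

# Hypothetical two-column dataset matching the card's anchor / positive schema.
train_dataset = Dataset.from_dict({
    "anchor": ["What was the author's perspective on the importance of taking risks in creative work?"],
    "positive": ["The author saw taking risks as a necessary part of the creative process, and was willing to take risks in order to explore new ideas and themes."],
})

trainer = SentenceTransformerTrainer(
    model=model,                 # base model from the loss sketch above
    args=args,                   # training arguments sketched above
    train_dataset=train_dataset,
    loss=loss,                   # MatryoshkaLoss sketched above
)
trainer.train()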

Training Logs

Epoch Step Training Loss dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.0584 10 0.8567 - - - - -
0.1169 20 0.6549 - - - - -
0.1753 30 0.5407 - - - - -
0.2337 40 0.4586 - - - - -
0.2922 50 0.3914 - - - - -
0.3506 60 0.4104 - - - - -
0.4091 70 0.299 - - - - -
0.4675 80 0.2444 - - - - -
0.5259 90 0.2367 - - - - -
0.5844 100 0.2302 - - - - -
0.6428 110 0.2356 - - - - -
0.7012 120 0.1537 - - - - -
0.7597 130 0.2043 - - - - -
0.8181 140 0.1606 - - - - -
0.8766 150 0.1896 - - - - -
0.9350 160 0.1766 - - - - -
0.9934 170 0.1259 - - - - -
0.9993 171 - 0.8115 0.8233 0.8321 0.7829 0.8340
1.0519 180 0.1661 - - - - -
1.1103 190 0.1632 - - - - -
1.1687 200 0.1032 - - - - -
1.2272 210 0.1037 - - - - -
1.2856 220 0.0708 - - - - -
1.3440 230 0.0827 - - - - -
1.4025 240 0.0505 - - - - -
1.4609 250 0.0468 - - - - -
1.5194 260 0.0371 - - - - -
1.5778 270 0.049 - - - - -
1.6362 280 0.0527 - - - - -
1.6947 290 0.0316 - - - - -
1.7531 300 0.052 - - - - -
1.8115 310 0.0298 - - - - -
1.8700 320 0.0334 - - - - -
1.9284 330 0.0431 - - - - -
1.9869 340 0.0316 - - - - -
1.9985 342 - 0.8216 0.8342 0.8397 0.8006 0.8408
2.0453 350 0.0275 - - - - -
2.1037 360 0.0461 - - - - -
2.1622 370 0.0341 - - - - -
2.2206 380 0.0323 - - - - -
2.2790 390 0.0205 - - - - -
2.3375 400 0.0223 - - - - -
2.3959 410 0.0189 - - - - -
2.4543 420 0.0181 - - - - -
2.5128 430 0.0144 - - - - -
2.5712 440 0.0179 - - - - -
2.6297 450 0.0217 - - - - -
2.6881 460 0.016 - - - - -
2.7465 470 0.0143 - - - - -
2.8050 480 0.0193 - - - - -
2.8634 490 0.0183 - - - - -
2.9218 500 0.0171 - - - - -
2.9803 510 0.0195 - - - - -
2.9978 513 - 0.8242 0.8350 0.8409 0.8051 0.8413
3.0387 520 0.0127 - - - - -
3.0972 530 0.0261 - - - - -
3.1556 540 0.017 - - - - -
3.2140 550 0.0198 - - - - -
3.2725 560 0.0131 - - - - -
3.3309 570 0.0156 - - - - -
3.3893 580 0.0107 - - - - -
3.4478 590 0.0123 - - - - -
3.5062 600 0.0111 - - - - -
3.5646 610 0.0112 - - - - -
3.6231 620 0.0143 - - - - -
3.6815 630 0.013 - - - - -
3.7400 640 0.0105 - - - - -
3.7984 650 0.0126 - - - - -
3.8568 660 0.0118 - - - - -
3.9153 670 0.0163 - - - - -
3.9737 680 0.0187 - - - - -
3.9971 684 - 0.8248 0.8361 0.8405 0.8058 0.8422
  • The saved checkpoint corresponds to the final evaluation row (epoch 3.9971, step 684); its metrics match the Evaluation section above.

Framework Versions

  • Python: 3.12.5
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1
  • Accelerate: 0.33.0
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1
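
To approximate this environment, the listed versions can be pinned at install time (a convenience sketch, not an official requirements file):

pip install "sentence-transformers==3.1.1" "transformers==4.44.2" "torch==2.4.1" "accelerate==0.33.0" "datasets==2.21.0" "tokenizers==0.19.1"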

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}