SentenceTransformer based on Alibaba-NLP/gte-multilingual-base

This is a sentence-transformers model finetuned from Alibaba-NLP/gte-multilingual-base on the all-nli-tr dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Alibaba-NLP/gte-multilingual-base
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: all-nli-tr
  • Language: Turkish (tr)
  • Model Size: ~305M parameters (F32 safetensors)

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: sentence-transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
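
The Pooling module uses the CLS token (pooling_mode_cls_token: True) rather than mean pooling, and the final Normalize module L2-normalizes every embedding, so dot product and cosine similarity coincide. A quick way to verify this (a minimal sketch; trust_remote_code=True is needed because the GTE "NewModel" backbone ships custom modeling code):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("x1saint/gte-multi-triplet-v2", trust_remote_code=True)
embeddings = model.encode(["Merhaba dünya", "Hello world"])
# Every row should have unit L2 norm thanks to the Normalize module
print(np.linalg.norm(embeddings, axis=1))  # ~[1. 1.]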

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
# trust_remote_code is needed because the GTE "NewModel" backbone ships custom modeling code
model = SentenceTransformer("x1saint/gte-multi-triplet-v2", trust_remote_code=True)
# Run inference
sentences = [
    # "And really, the father was right; his son had already experienced everything,
    # tried everything, and was less and less interested."
    'Ve gerçekten, baba haklıydı, oğlu zaten her şeyi tecrübe etmişti, her şeyi denedi ve daha az ilgileniyordu.',
    # "His son was losing interest in everything."
    'Oğlu her şeye olan ilgisini kaybediyordu.',
    # "The father knew there was still a lot for his son to experience."
    'Baba oğlunun tecrübe için hala çok şey olduğunu biliyordu.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
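
Because the embeddings are unit-normalized, they can be fed directly into semantic search. A minimal sketch using util.semantic_search (the Turkish corpus and query strings below are illustrative, not from the dataset):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("x1saint/gte-multi-triplet-v2", trust_remote_code=True)

corpus = [
    'Oğlu her şeye olan ilgisini kaybediyordu.',  # "His son was losing interest in everything."
    'Dün gece hava çok soğuktu.',  # "It was very cold last night."
]
query = 'Çocuk artık hiçbir şeyle ilgilenmiyor.'  # "The child is no longer interested in anything."

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus entries by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(hits[0])  # e.g. [{'corpus_id': 0, 'score': ...}]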

Evaluation

Metrics

Triplet

Metric Value
cosine_accuracy 0.9438
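
cosine_accuracy is the fraction of dev triplets for which the anchor embedding is more similar to the positive than to the negative. It can be reproduced with TripletEvaluator along these lines (a sketch; the dataset repo id and config name are assumptions, so adjust them to wherever all-nli-tr is hosted):

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("x1saint/gte-multi-triplet-v2", trust_remote_code=True)

# Assumed location and config for all-nli-tr
dev = load_dataset("emrecan/all-nli-tr", "triplet", split="dev")

evaluator = TripletEvaluator(
    anchors=dev["anchor"],
    positives=dev["positive"],
    negatives=dev["negative"],
    name="all-nli-dev",
)
print(evaluator(model))  # {'all-nli-dev_cosine_accuracy': ...}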

Training Details

Training Dataset

all-nli-tr

  • Dataset: all-nli-tr at daeabfb
  • Size: 482,091 training samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string, min 5 / mean 28.16 / max 151 tokens
    • positive: string, min 5 / mean 15.14 / max 49 tokens
    • negative: string, min 4 / mean 14.33 / max 55 tokens
  • Samples:
    • anchor: Mevsim boyunca ve sanırım senin seviyendeyken onları bir sonraki seviyeye düşürürsün. Eğer ebeveyn takımını çağırmaya karar verirlerse Braves üçlü A'dan birini çağırmaya karar verirlerse çifte bir adam onun yerine geçmeye gider ve bekar bir adam gelir.
      positive: Eğer insanlar hatırlarsa, bir sonraki seviyeye düşersin.
      negative: Hiçbir şeyi hatırlamazlar.
    • anchor: Numaramızdan biri talimatlarınızı birazdan yerine getirecektir.
      positive: Ekibimin bir üyesi emirlerinizi büyük bir hassasiyetle yerine getirecektir.
      negative: Şu anda boş kimsek yok, bu yüzden sen de harekete geçmelisin.
    • anchor: Bunu nereden biliyorsun? Bütün bunlar yine onların bilgileri.
      positive: Bu bilgi onlara ait.
      negative: Hiçbir bilgileri yok.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
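
These parameters map directly onto the MultipleNegativesRankingLoss constructor; in code (a sketch, and these values are also the library defaults):

from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)
# scale=20.0 and cosine similarity match the parameters listed above
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)

With this loss, each anchor is trained to score its own positive above its provided negative and above all other candidates in the batch, which act as in-batch negatives.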
    

Evaluation Dataset

all-nli-tr

  • Dataset: all-nli-tr at daeabfb
  • Size: 6,567 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string, min 3 / mean 26.66 / max 121 tokens
    • positive: string, min 5 / mean 14.98 / max 49 tokens
    • negative: string, min 4 / mean 14.4 / max 37 tokens
  • Samples:
    • anchor: Bilemiyorum. Onunla ilgili karışık duygularım var. Bazen ondan hoşlanıyorum ama aynı zamanda birisinin onu dövmesini görmeyi seviyorum.
      positive: Çoğunlukla ondan hoşlanıyorum, ama yine de birinin onu dövdüğünü görmekten zevk alıyorum.
      negative: O benim favorim ve kimsenin onu yendiğini görmek istemiyorum.
    • anchor: Sen ve arkadaşların burada hoş karşılanmaz, Severn söyledi.
      positive: Severn orada insanların hoş karşılanmadığını söyledi.
      negative: Severn orada insanların her zaman hoş karşılanacağını söyledi.
    • anchor: Gecenin en aşağısı ne olduğundan emin değilim.
      positive: Dün gece ne kadar soğuk oldu bilmiyorum.
      negative: Dün gece hava 37 dereceydi.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • gradient_accumulation_steps: 4
  • num_train_epochs: 10
  • warmup_ratio: 0.1
  • bf16: True
  • dataloader_num_workers: 4

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 4
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 4
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss all-nli-dev_cosine_accuracy
0.2655 500 3.0729 0.4237 0.9229
0.5310 1000 2.2154 0.3830 0.9257
0.7965 1500 1.9267 0.3517 0.9319
1.0616 2000 1.7078 0.3424 0.9354
1.3271 2500 1.4602 0.3362 0.9368
1.5926 3000 1.3925 0.3290 0.9379
1.8581 3500 1.3124 0.3116 0.9417
2.1232 4000 1.1537 0.3154 0.9382
2.3887 4500 1.0209 0.3205 0.9412
2.6542 5000 0.9897 0.3065 0.9441
2.9197 5500 0.9611 0.3025 0.9420
3.1848 6000 0.8276 0.3162 0.9446
3.4503 6500 0.7779 0.3101 0.9408
3.7158 7000 0.7738 0.3110 0.9426
3.9813 7500 0.7641 0.3056 0.9434
4.2464 8000 0.6338 0.3152 0.9429
4.5119 8500 0.6397 0.3133 0.9421
4.7774 9000 0.6207 0.3160 0.9420
5.0425 9500 0.6044 0.3156 0.9408
5.3080 10000 0.5305 0.3205 0.9449
5.5735 10500 0.5377 0.3124 0.9450
5.8390 11000 0.5311 0.3168 0.9443
6.1041 11500 0.5017 0.3250 0.9435
6.3696 12000 0.46 0.3213 0.9429
6.6351 12500 0.4679 0.3212 0.9443
6.9006 13000 0.4692 0.3221 0.9434
7.1657 13500 0.4285 0.3231 0.9446
7.4312 14000 0.4161 0.3265 0.9456
7.6967 14500 0.409 0.3240 0.9456
7.9622 15000 0.4127 0.3250 0.9444
8.2273 15500 0.3843 0.3290 0.9447
8.4928 16000 0.3755 0.3259 0.9438
8.7583 16500 0.3786 0.3328 0.9438
9.0234 17000 0.3702 0.3284 0.9453
9.2889 17500 0.3525 0.3326 0.9444
9.5544 18000 0.3589 0.3320 0.9443
9.8199 18500 0.3483 0.3314 0.9438

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.3.0
  • Tokenizers: 0.21.0
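
To reproduce this environment, the reported versions can be pinned at install time:

pip install sentence-transformers==3.4.1 transformers==4.48.3 torch==2.5.1 accelerate==1.3.0 datasets==3.3.0 tokenizers==0.21.0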

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}