SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2

This is a sentence-transformers model finetuned from sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 on the query-hard-pos-neg-doc-pairs-statictable dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: query-hard-pos-neg-doc-pairs-statictable

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
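
In other words, the model is a BertModel whose token embeddings are mean-pooled over non-padding tokens into a single 384-dimensional vector. The following is only an illustrative sketch of that two-module pipeline using the transformers library; for normal use, SentenceTransformer handles all of this internally.

import torch
from transformers import AutoModel, AutoTokenizer

model_id = "yahyaabd/allstats-search-miniLM-v1-6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)

encoded = tokenizer(["Arus dana Q3 2006"], padding=True, truncation=True,
                    max_length=128, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state  # (batch, seq_len, 384)

# Mean pooling over non-padding tokens (pooling_mode_mean_tokens=True above)
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embedding.shape)  # torch.Size([1, 384])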

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("yahyaabd/allstats-search-miniLM-v1-6")
# Run inference
sentences = [
    'Arus dana Q3 2006',
    'Ringkasan Neraca Arus Dana, Triwulan III, 2006, (Miliar Rupiah)',
    'Rata-Rata Pengeluaran per Kapita Sebulan di Daerah Perkotaan Menurut Kelompok Barang dan Golongan Pengeluaran per Kapita Sebulan, 2000-2012',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
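
Beyond pairwise similarity, the embeddings support semantic search over a collection of table titles. A minimal sketch that ranks candidate titles for a query (the candidate titles are taken from examples elsewhere in this card):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("yahyaabd/allstats-search-miniLM-v1-6")

query = "Arus dana Q3 2006"
docs = [
    "Ringkasan Neraca Arus Dana, Triwulan III, 2006, (Miliar Rupiah)",
    "Rata-Rata Pengeluaran per Kapita Sebulan di Daerah Perkotaan Menurut Kelompok Barang dan Golongan Pengeluaran per Kapita Sebulan, 2000-2012",
    "Jumlah Penghuni Lapas per Kanwil",
]

# Encode the query and documents, then rank documents by cosine similarity
query_emb = model.encode([query])
doc_embs = model.encode(docs)
scores = model.similarity(query_emb, doc_embs)[0]

for score, doc in sorted(zip(scores.tolist(), docs), reverse=True):
    print(f"{score:.3f}  {doc}")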

Evaluation

Metrics

Binary Classification

Metric allstats-semantic-mini-v1_test allstats-semantic-mini-v1_dev
cosine_accuracy 0.9809 0.9754
cosine_accuracy_threshold 0.7711 0.7737
cosine_f1 0.9706 0.9623
cosine_f1_threshold 0.7711 0.7737
cosine_precision 0.9725 0.9545
cosine_recall 0.9687 0.9701
cosine_ap 0.9957 0.9927
cosine_mcc 0.9565 0.9441
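
The reported thresholds can be used to turn a cosine similarity score into a binary relevance decision. A small sketch using the test-split F1 threshold from the table above:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("yahyaabd/allstats-search-miniLM-v1-6")
THRESHOLD = 0.7711  # cosine_f1_threshold on the test split (see table above)

query = "Arus dana Q3 2006"
doc = "Ringkasan Neraca Arus Dana, Triwulan III, 2006, (Miliar Rupiah)"

score = model.similarity(model.encode([query]), model.encode([doc])).item()
print(f"similarity={score:.4f}, relevant={score >= THRESHOLD}")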

Training Details

Training Dataset

query-hard-pos-neg-doc-pairs-statictable

  • Dataset: query-hard-pos-neg-doc-pairs-statictable at 7b28b96
  • Size: 25,580 training samples
  • Columns: query, doc, and label
  • Approximate statistics based on the first 1000 samples:
    • query: string; min 7 tokens, mean 20.14 tokens, max 55 tokens
    • doc: string; min 5 tokens, mean 24.9 tokens, max 47 tokens
    • label: int; 0: ~70.80%, 1: ~29.20%
  • Samples:
    • query: "Status pekerjaan utama penduduk usia 15+ yang bekerja, 2020" | doc: "Jumlah Penghuni Lapas per Kanwil" | label: 0
    • query: "status pekerjaan utama penduduk usia 15+ yang bekerja, 2020" | doc: "Jumlah Penghuni Lapas per Kanwil" | label: 0
    • query: "STATUS PEKERJAAN UTAMA PENDUDUK USIA 15+ YANG BEKERJA, 2020" | doc: "Jumlah Penghuni Lapas per Kanwil" | label: 0
  • Loss: OnlineContrastiveLoss
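
A minimal sketch of how (query, doc, label) pairs in this layout plug into OnlineContrastiveLoss; the data below is a toy stand-in for illustration, not the actual training script:

from sentence_transformers import SentenceTransformer, losses
from datasets import Dataset

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Toy stand-in for the real dataset; columns must be query, doc, label (1 = relevant pair).
train_dataset = Dataset.from_dict({
    "query": [
        "Arus dana Q3 2006",
        "Status pekerjaan utama penduduk usia 15+ yang bekerja, 2020",
    ],
    "doc": [
        "Ringkasan Neraca Arus Dana, Triwulan III, 2006, (Miliar Rupiah)",
        "Jumlah Penghuni Lapas per Kanwil",
    ],
    "label": [1, 0],
})

# OnlineContrastiveLoss computes the contrastive loss only on the hard positive and
# hard negative pairs within each batch.
loss = losses.OnlineContrastiveLoss(model)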

Evaluation Dataset

query-hard-pos-neg-doc-pairs-statictable

  • Dataset: query-hard-pos-neg-doc-pairs-statictable at 7b28b96
  • Size: 5,479 evaluation samples
  • Columns: query, doc, and label
  • Approximate statistics based on the first 1000 samples:
    • query: string; min 7 tokens, mean 20.78 tokens, max 52 tokens
    • doc: string; min 4 tokens, mean 26.28 tokens, max 43 tokens
    • label: int; 0: ~71.50%, 1: ~28.50%
  • Samples:
    • query: "Bagaimana perbandingan PNS pria dan wanita di berbagai golongan tahun 2014?" | doc: "Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017" | label: 0
    • query: "bagaimana perbandingan pns pria dan wanita di berbagai golongan tahun 2014?" | doc: "Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017" | label: 0
    • query: "BAGAIMANA PERBANDINGAN PNS PRIA DAN WANITA DI BERBAGAI GOLONGAN TAHUN 2014?" | doc: "Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017" | label: 0
  • Loss: OnlineContrastiveLoss
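
The dev/test metrics in the Metrics section are the kind produced by a BinaryClassificationEvaluator over labeled pairs like these. A small sketch with toy pairs, using the same column layout as above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("yahyaabd/allstats-search-miniLM-v1-6")

# Toy pairs for illustration: one relevant (1) and one irrelevant (0) query-doc pair.
dev_evaluator = BinaryClassificationEvaluator(
    sentences1=[
        "Arus dana Q3 2006",
        "Bagaimana perbandingan PNS pria dan wanita di berbagai golongan tahun 2014?",
    ],
    sentences2=[
        "Ringkasan Neraca Arus Dana, Triwulan III, 2006, (Miliar Rupiah)",
        "Rata-rata Pendapatan Bersih Berusaha Sendiri Menurut Provinsi dan Lapangan Pekerjaan Utama (ribu rupiah), 2017",
    ],
    labels=[1, 0],
    name="allstats-semantic-mini-v1_dev",
)
print(dev_evaluator(model))  # dict of metrics such as cosine_accuracy and cosine_ap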

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • warmup_ratio: 0.2
  • fp16: True
  • load_best_model_at_end: True
  • eval_on_start: True
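
For reference, a sketch (not the exact training script) of how these non-default values, plus the epoch count from the full list below, map onto SentenceTransformerTrainingArguments:

from sentence_transformers import SentenceTransformerTrainingArguments

# Values mirror the hyperparameters listed in this card; output_dir is hypothetical.
args = SentenceTransformerTrainingArguments(
    output_dir="allstats-search-miniLM-v1-6",
    num_train_epochs=3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    warmup_ratio=0.2,
    fp16=True,
    eval_strategy="steps",
    load_best_model_at_end=True,
    eval_on_start=True,
)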

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: True
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss allstats-semantic-mini-v1_test_cosine_ap allstats-semantic-mini-v1_dev_cosine_ap
-1 -1 - - 0.8910 -
0 0 - 1.0484 - 0.8789
0.025 20 1.0003 0.9175 - 0.8856
0.05 40 0.6667 0.6433 - 0.9010
0.075 60 0.5982 0.5203 - 0.9145
0.1 80 0.4476 0.4175 - 0.9344
0.125 100 0.3489 0.3152 - 0.9540
0.15 120 0.1643 0.2726 - 0.9602
0.175 140 0.2126 0.2525 - 0.9631
0.2 160 0.1797 0.2151 - 0.9715
0.225 180 0.1304 0.1895 - 0.9756
0.25 200 0.1714 0.2142 - 0.9767
0.275 220 0.1758 0.1840 - 0.9791
0.3 240 0.0562 0.1723 - 0.9801
0.325 260 0.0863 0.1656 - 0.9773
0.35 280 0.12 0.1806 - 0.9788
0.375 300 0.0982 0.1792 - 0.9769
0.4 320 0.0421 0.1724 - 0.9783
0.425 340 0.1078 0.2158 - 0.9733
0.45 360 0.0882 0.1501 - 0.9822
0.475 380 0.0251 0.1334 - 0.9843
0.5 400 0.0267 0.1238 - 0.9855
0.525 420 0.0899 0.1404 - 0.9859
0.55 440 0.0782 0.1253 - 0.9852
0.575 460 0.1209 0.1772 - 0.9768
0.6 480 0.0643 0.1817 - 0.9763
0.625 500 0.1051 0.2030 - 0.9748
0.65 520 0.0494 0.1405 - 0.9814
0.675 540 0.0548 0.1175 - 0.9831
0.7 560 0.121 0.1597 - 0.9819
0.725 580 0.0642 0.1675 - 0.9811
0.75 600 0.0618 0.1539 - 0.9827
0.775 620 0.0745 0.1149 - 0.9845
0.8 640 0.0452 0.1562 - 0.9797
0.825 660 0.0816 0.1580 - 0.9816
0.85 680 0.0957 0.1192 - 0.9830
0.875 700 0.06 0.1100 - 0.9863
0.9 720 0.018 0.1300 - 0.9822
0.925 740 0.0213 0.1267 - 0.9843
0.95 760 0.0263 0.1687 - 0.9796
0.975 780 0.032 0.1250 - 0.9849
1.0 800 0.065 0.1363 - 0.9828
1.025 820 0.0174 0.1394 - 0.9835
1.05 840 0.0568 0.1124 - 0.9849
1.075 860 0.0464 0.1174 - 0.9826
1.1 880 0.013 0.1178 - 0.9814
1.125 900 0.0331 0.1239 - 0.9812
1.15 920 0.0416 0.1240 - 0.9817
1.175 940 0.0111 0.1303 - 0.9840
1.2 960 0.0441 0.1156 - 0.9854
1.225 980 0.0243 0.0972 - 0.9879
1.25 1000 0.0 0.0917 - 0.9877
1.275 1020 0.0477 0.0863 - 0.9885
1.3 1040 0.0108 0.1029 - 0.9877
1.325 1060 0.0 0.1103 - 0.9869
1.35 1080 0.0134 0.1113 - 0.9871
1.375 1100 0.0 0.1146 - 0.9870
1.4 1120 0.0132 0.1218 - 0.9862
1.425 1140 0.0223 0.0948 - 0.9883
1.45 1160 0.0183 0.0883 - 0.9883
1.475 1180 0.0378 0.0961 - 0.9881
1.5 1200 0.0114 0.0961 - 0.9882
1.525 1220 0.0143 0.1020 - 0.9861
1.55 1240 0.0183 0.0867 - 0.9888
1.575 1260 0.0 0.0858 - 0.9892
1.6 1280 0.0 0.0858 - 0.9892
1.625 1300 0.0 0.0858 - 0.9892
1.65 1320 0.0172 0.0846 - 0.9896
1.675 1340 0.0153 0.0754 - 0.9917
1.7 1360 0.0163 0.0770 - 0.9913
1.725 1380 0.0167 0.0943 - 0.9901
1.75 1400 0.0148 0.0964 - 0.9899
1.775 1420 0.0065 0.0930 - 0.9902
1.8 1440 0.0 0.0945 - 0.9904
1.825 1460 0.0067 0.0991 - 0.9895
1.85 1480 0.0194 0.0996 - 0.9894
1.875 1500 0.0 0.0953 - 0.9903
1.9 1520 0.0236 0.0883 - 0.9906
1.925 1540 0.0111 0.0858 - 0.9904
1.95 1560 0.0 0.0878 - 0.9903
1.975 1580 0.0147 0.0849 - 0.9906
2.0 1600 0.0154 0.0852 - 0.9902
2.025 1620 0.0067 0.0861 - 0.9903
2.05 1640 0.019 0.0859 - 0.9907
2.075 1660 0.0083 0.0875 - 0.9908
2.1 1680 0.0067 0.0771 - 0.9917
2.125 1700 0.0 0.0773 - 0.9917
2.15 1720 0.0071 0.0771 - 0.9919
2.175 1740 0.0064 0.0756 - 0.9916
2.2 1760 0.0 0.0772 - 0.9916
2.225 1780 0.0 0.0772 - 0.9915
2.25 1800 0.0158 0.0734 - 0.9920
2.275 1820 0.0 0.0730 - 0.9920
2.3 1840 0.0 0.0733 - 0.9920
2.325 1860 0.0161 0.0681 - 0.9922
2.35 1880 0.0 0.0713 - 0.9920
2.375 1900 0.0 0.0721 - 0.9920
2.4 1920 0.0 0.0722 - 0.9920
2.425 1940 0.0064 0.0648 - 0.9928
2.45 1960 0.0068 0.0641 - 0.9930
2.475 1980 0.0069 0.0635 - 0.9929
2.5 2000 0.0066 0.0657 - 0.9929
2.525 2020 0.0 0.0657 - 0.9930
2.55 2040 0.0139 0.0657 - 0.9931
2.575 2060 0.0 0.0667 - 0.9931
2.6 2080 0.0 0.0666 - 0.9931
2.625 2100 0.0 0.0666 - 0.9931
2.65 2120 0.0 0.0666 - 0.9931
2.675 2140 0.0 0.0667 - 0.9931
2.7 2160 0.0 0.0666 - 0.9931
2.725 2180 0.0 0.0666 - 0.9931
2.75 2200 0.0071 0.0665 - 0.9931
2.775 2220 0.0 0.0671 - 0.9931
2.8 2240 0.0071 0.0692 - 0.9928
2.825 2260 0.0 0.0700 - 0.9927
2.85 2280 0.0068 0.0688 - 0.9927
2.875 2300 0.0 0.0688 - 0.9927
2.9 2320 0.0 0.0688 - 0.9927
2.925 2340 0.0 0.0688 - 0.9927
2.95 2360 0.0 0.0688 - 0.9927
2.975 2380 0.0 0.0688 - 0.9927
3.0 2400 0.0 0.0688 - 0.9927
-1 -1 - - 0.9957 -
  • The saved checkpoint corresponds to the best-performing evaluation step (load_best_model_at_end: True); it was marked in bold in the original table.

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.4.0
  • Transformers: 4.48.1
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}