SentenceTransformer based on BAAI/bge-m3

This is a sentence-transformers model finetuned from BAAI/bge-m3. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 1024 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: https://sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
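
The modules above correspond to an XLM-RoBERTa encoder, CLS-token pooling, and L2 normalization. The sketch below illustrates those three steps with the plain Hugging Face transformers API; it assumes the transformer weights at the repository root load with AutoModel, and the Usage section below remains the recommended way to run the model.

import torch
from transformers import AutoTokenizer, AutoModel

model_id = "seongil-dn/bge-m3-kor-retrieval-bs1024-checkpoint-236"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)  # XLMRobertaModel

inputs = tokenizer(["An example sentence."], padding=True, truncation=True,
                   max_length=1024, return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**inputs)

# (1) Pooling: keep only the [CLS] token (pooling_mode_cls_token=True)
cls_embedding = outputs.last_hidden_state[:, 0]  # shape: [batch, 1024]
# (2) Normalize: L2-normalize so dot products equal cosine similarities
embeddings = torch.nn.functional.normalize(cls_embedding, p=2, dim=1)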

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("seongil-dn/bge-m3-kor-retrieval-bs1024-checkpoint-236")
# Run inference
sentences = [
    '전남지역의 석유와 화학제품은 왜 수출이 늘어나는 경향을 보였어',
    '(2) 전남지역\n2013년중 전남지역 수출은 전년대비 1.2% 감소로 전환하였다. 품목별로는 석유(+9.3% → +3.8%) 및 화학제품(+1.2% → +7.1%)이 중국 등 해외수요확대로 증가세를 지속하였으나 철강금속(+1.8% → -8.6%)은 글로벌 공급과잉 및 중국의 저가 철강수출 확대로, 선박(+7.6% → -49.2%)은 수주물량이 급격히 줄어들면서 감소로 전환하였다. 전남지역 수입은 원유, 화학제품, 철강금속 등의 수입이 줄면서 전년대비 7.4% 감소로 전환하였다.',
    '수출 증가세 지속\n1/4분기 중 수출은 전년동기대비 증가흐름을 지속하였다. 품목별로 보면 석유제품, 석유화학, 철강, 선박, 반도체, 자동차 등 대다수 품목에서 증가하였다. 석유제품은 글로벌 경기회복에 따른 에너지 수요 증가와 국제유가 급등으로 수출단가가 높은 상승세를 지속하면서 증가하였다. 석유화학도 중국, 아세안을 중심으로 합성수지, 고무 등의 수출이 큰 폭 증가한 데다 고유가로 인한 수출가격도 동반 상승하면서 증가세를 이어갔다. 철강은 건설, 조선 등 글로벌 전방산업의 수요 증대, 원자재가격 상승 및 중국 감산 등에 따른 수출단가 상승 등에 힘입어 증가세를 이어갔다. 선박은 1/4분기 중 인도물량이 확대됨에 따라 증가하였다. 반도체는 자동차 등 전방산업의 견조한 수요가 이어지는 가운데 전년동기대비로 높은 단가가 지속되면서 증가하였다. 자동차는 차량용 반도체 수급차질이 지속되었음에도 불구하고 글로벌 경기회복 흐름에 따라 수요가 늘어나면서 전년동기대비 소폭 증가하였다. 모니터링 결과 향후 수출은 증가세가 지속될 것으로 전망되었다. 석유화학 및 석유정제는 수출단가 상승과 전방산업의 수요확대 기조가 이어지면서 증가할 전망이다. 철강은 주요국 경기회복과 중국, 인도 등의 인프라 투자 확대 등으로 양호한 흐름을 이어갈 전망이다. 반도체는 글로벌 스마트폰 수요 회복, 디지털 전환 기조 등으로 견조한 증가세를 지속할 것으로 보인다. 자동차는 차량용 반도체 공급차질이 점차 완화되고 미국, 신흥시장을 중심으로 수요회복이 본격화됨에 따라 소폭 증가할 전망이다. 선박은 친환경 선박수요 지속, 글로별 교역 신장 등에도 불구하고 2021년 2/4분기 집중되었던 인도물량의 기저효과로 인해 감소할 것으로 보인다.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
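
Because this checkpoint targets Korean retrieval, a typical follow-up is ranking candidate passages for a query. The sketch below uses util.semantic_search from the same library; the corpus strings are placeholders to replace with your own documents.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("seongil-dn/bge-m3-kor-retrieval-bs1024-checkpoint-236")

query = "전남지역의 석유와 화학제품은 왜 수출이 늘어나는 경향을 보였어"
corpus = [
    "candidate passage 1 (replace with your own documents)",
    "candidate passage 2 (replace with your own documents)",
]

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(corpus, convert_to_tensor=True)

# Rank passages by cosine similarity (embeddings are already L2-normalized)
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], hit["score"])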

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 512
  • learning_rate: 3e-05
  • num_train_epochs: 5
  • warmup_ratio: 0.05
  • fp16: True
  • batch_sampler: no_duplicates
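
As a rough guide, the non-default values above map onto SentenceTransformerTrainingArguments (Sentence Transformers v3.x) as in the sketch below; the output directory is a placeholder, and the full training script is not part of this card.

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/bge-m3-kor-retrieval",   # placeholder path
    per_device_train_batch_size=512,
    learning_rate=3e-5,
    num_train_epochs=5,
    warmup_ratio=0.05,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts within a batch
)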

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 512
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 3e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.05
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: True
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss
0.0085 1 2.0476
0.0171 2 2.0595
0.0256 3 2.0267
0.0342 4 2.0971
0.0427 5 2.2171
0.0513 6 2.287
0.0598 7 2.0867
0.0684 8 1.9498
0.0769 9 1.569
0.0855 10 1.3313
0.0940 11 1.4122
0.1026 12 1.3425
0.1111 13 1.1936
0.1197 14 0.8012
0.1282 15 0.8862
0.1368 16 1.193
0.1453 17 0.9771
0.1538 18 0.3887
0.1624 19 0.363
0.1709 20 0.3092
0.1795 21 0.2692
0.1880 22 0.2386
0.1966 23 0.2266
0.2051 24 0.233
0.2137 25 0.2214
0.2222 26 0.2038
0.2308 27 0.2015
0.2393 28 0.1772
0.2479 29 0.1697
0.2564 30 0.1718
0.2650 31 0.2456
0.2735 32 0.5238
0.2821 33 0.5308
0.2906 34 0.5111
0.2991 35 0.3931
0.3077 36 0.3414
0.3162 37 0.2704
0.3248 38 0.2949
0.3333 39 0.3082
0.3419 40 0.3755
0.3504 41 0.3127
0.3590 42 0.3756
0.3675 43 0.3564
0.3761 44 0.3905
0.3846 45 0.377
0.3932 46 0.3043
0.4017 47 0.3237
0.4103 48 0.4035
0.4188 49 0.4522
0.4274 50 0.4392
0.4359 51 0.4482
0.4444 52 0.3586
0.4530 53 0.3154
0.4615 54 0.4053
0.4701 55 0.5846
0.4786 56 0.4372
0.4872 57 0.6201
0.4957 58 0.5278
0.5043 59 0.4844
0.5128 60 0.5817
0.5214 61 0.3765
0.5299 62 0.4785
0.5385 63 0.5724
0.5470 64 0.5375
0.5556 65 0.5362
0.5641 66 0.4731
0.5726 67 0.4514
0.5812 68 0.4563
0.5897 69 0.4198
0.5983 70 0.4086
0.6068 71 0.3612
0.6154 72 0.3463
0.6239 73 0.6261
0.6325 74 0.6283
0.6410 75 0.4635
0.6496 76 0.463
0.6581 77 0.4075
0.6667 78 0.3797
0.6752 79 0.2769
0.6838 80 0.3353
0.6923 81 0.2295
0.7009 82 0.4316
0.7094 83 0.9861
0.7179 84 0.9684
0.7265 85 0.9883
0.7350 86 0.8865
0.7436 87 0.8248
0.7521 88 0.7973
0.7607 89 0.8465
0.7692 90 0.7664
0.7778 91 0.7643
0.7863 92 0.7665
0.7949 93 0.7348
0.8034 94 0.7493
0.8120 95 0.6115
0.8205 96 0.6233
0.8291 97 0.6435
0.8376 98 0.5581
0.8462 99 0.542
0.8547 100 0.5571
0.8632 101 0.502
0.8718 102 0.5375
0.8803 103 0.4952
0.8889 104 0.4873
0.8974 105 0.4599
0.9060 106 0.4536
0.9145 107 0.4479
0.9231 108 0.384
0.9316 109 0.3523
0.9402 110 0.369
0.9487 111 0.3422
0.9573 112 0.3698
0.9658 113 0.3625
0.9744 114 0.3736
0.9829 115 0.4313
0.9915 116 0.4605
1.0 117 0.2948
1.0085 118 0.7391
1.0171 119 0.6622
1.0256 120 0.6917
1.0342 121 0.7963
1.0427 122 0.7815
1.0513 123 0.6719
1.0598 124 0.6098
1.0684 125 0.549
1.0769 126 0.7212
1.0855 127 0.6381
1.0940 128 0.7424
1.1026 129 0.6822
1.1111 130 0.6921
1.1197 131 0.5022
1.1282 132 0.578
1.1368 133 0.8139
1.1453 134 0.6167
1.1538 135 0.1836
1.1624 136 0.1853
1.1709 137 0.1628
1.1795 138 0.1464
1.1880 139 0.1308
1.1966 140 0.1273
1.2051 141 0.1414
1.2137 142 0.138
1.2222 143 0.1268
1.2308 144 0.1348
1.2393 145 0.111
1.2479 146 0.1069
1.2564 147 0.1122
1.2650 148 0.1703
1.2735 149 0.405
1.2821 150 0.3876
1.2906 151 0.378
1.2991 152 0.2633
1.3077 153 0.2263
1.3162 154 0.1748
1.3248 155 0.2016
1.3333 156 0.2166
1.3419 157 0.2798
1.3504 158 0.2295
1.3590 159 0.2805
1.3675 160 0.2619
1.3761 161 0.3006
1.3846 162 0.2843
1.3932 163 0.2244
1.4017 164 0.2361
1.4103 165 0.3025
1.4188 166 0.3443
1.4274 167 0.3329
1.4359 168 0.3467
1.4444 169 0.2748
1.4530 170 0.2304
1.4615 171 0.3125
1.4701 172 0.478
1.4786 173 0.3085
1.4872 174 0.4337
1.4957 175 0.3936
1.5043 176 0.3455
1.5128 177 0.4205
1.5214 178 0.2752
1.5299 179 0.36
1.5385 180 0.4347
1.5470 181 0.3949
1.5556 182 0.4072
1.5641 183 0.3633
1.5726 184 0.3532
1.5812 185 0.3451
1.5897 186 0.3242
1.5983 187 0.3122
1.6068 188 0.2845
1.6154 189 0.2815
1.6239 190 6.9159
1.6325 191 7.9604
1.6410 192 6.5821
1.6496 193 3.9177
1.6581 194 1.6951
1.6667 195 0.5367
1.6752 196 0.2935
1.6838 197 0.3295
1.6923 198 0.2212
1.7009 199 0.335
1.7094 200 0.7829
1.7179 201 0.7884
1.7265 202 0.7921
1.7350 203 0.7342
1.7436 204 0.6092
1.7521 205 0.6014
1.7607 206 0.6414
1.7692 207 0.5842
1.7778 208 0.5916
1.7863 209 0.5993
1.7949 210 0.5658
1.8034 211 0.6013
1.8120 212 0.4769
1.8205 213 0.4801
1.8291 214 0.5087
1.8376 215 0.436
1.8462 216 0.4398
1.8547 217 0.4391
1.8632 218 0.419
1.8718 219 0.4338
1.8803 220 0.395
1.8889 221 0.4063
1.8974 222 0.375
1.9060 223 0.3655
1.9145 224 0.3637
1.9231 225 0.3098
1.9316 226 0.2782
1.9402 227 0.2941
1.9487 228 0.275
1.9573 229 0.3018
1.9658 230 0.2971
1.9744 231 0.3108
1.9829 232 0.3808
1.9915 233 0.4067
2.0 234 0.2424
2.0085 235 0.6453
2.0171 236 0.5577

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.2.1
  • Transformers: 4.44.2
  • PyTorch: 2.3.1+cu121
  • Accelerate: 1.1.1
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CachedMultipleNegativesRankingLoss

@misc{gao2021scaling,
    title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
    author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
    year={2021},
    eprint={2101.06983},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
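
The citation above covers the loss used here, CachedMultipleNegativesRankingLoss, which applies the GradCache idea: the large 512-example in-batch-negatives batch is processed in small mini-batches with cached embedding gradients so it fits in GPU memory. A minimal construction sketch follows; the mini_batch_size value is an assumption, as the card does not state it.

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-m3")
# mini_batch_size is assumed for illustration; it controls memory use only,
# not the effective contrastive batch size (which stays at 512 per device).
loss = CachedMultipleNegativesRankingLoss(model, mini_batch_size=32)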