SentenceTransformer based on intfloat/multilingual-e5-small
This is a sentence-transformers model finetuned from intfloat/multilingual-e5-small. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: intfloat/multilingual-e5-small
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
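For illustration, the same three-module stack could be assembled by hand from the sentence-transformers building blocks. This is a minimal sketch of the architecture above, not how the released checkpoint is loaded (use the Hub identifier for that, as shown in the Usage section):

```python
from sentence_transformers import SentenceTransformer, models

# Transformer backbone: multilingual-e5-small, truncating inputs at 512 tokens
word_embedding_model = models.Transformer("intfloat/multilingual-e5-small", max_seq_length=512)

# Mean pooling over token embeddings -> one 384-dimensional sentence vector
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode="mean",
)

# L2-normalization, so dot product and cosine similarity coincide
model = SentenceTransformer(modules=[word_embedding_model, pooling_model, models.Normalize()])
```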
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/e-small-triplet")
# Run inference
sentences = [
'How many countries are in the European Union?',
'Number of countries in the European Union',
'How many continents are there?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Evaluation
Metrics
Triplet
- Dataset: triplet-validation
- Evaluated with TripletEvaluator
Metric | Value |
---|---|
cosine_accuracy | 1.0 |
dot_accuracy | 0.0 |
manhattan_accuracy | 1.0 |
euclidean_accuracy | 1.0 |
max_accuracy | 1.0 |
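For reference, an evaluation of this kind can be reproduced with the library's TripletEvaluator. The triplets below are placeholders borrowed from the usage example, not the actual validation split:

```python
from sentence_transformers.evaluation import TripletEvaluator

evaluator = TripletEvaluator(
    anchors=["How many countries are in the European Union?"],
    positives=["Number of countries in the European Union"],
    negatives=["How many continents are there?"],
    name="triplet-validation",
)
# Returns a dict of accuracy values (cosine, dot, Manhattan, Euclidean, max)
print(evaluator(model))
```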
Training Details
Training Dataset
Unnamed Dataset
- Size: 548 training samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 1000 samples:

 | anchor | positive | negative |
---|---|---|---|
type | string | string | string |
details | min: 6 tokens, mean: 10.84 tokens, max: 22 tokens | min: 4 tokens, mean: 9.57 tokens, max: 20 tokens | min: 6 tokens, mean: 10.79 tokens, max: 22 tokens |
- Samples:

anchor | positive | negative |
---|---|---|
What is the difference between a laptop and a tablet? | Comparison between a laptop and a tablet | What is the difference between a laptop and a smartphone? |
How do I get to the nearest train station? | Directions to the nearest train station | How do I get to the airport? |
Who is the author of '1984'? | Writer of the novel '1984' | Who is the author of 'Pride and Prejudice'? |
- Loss: TripletLoss with these parameters (see the instantiation sketch below):
  { "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 0.5 }
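A minimal sketch of how a loss with these parameters could be instantiated, assuming a loaded model as in the Usage section:

```python
from sentence_transformers import losses

train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=0.5,  # margin enforced between positive and negative distances
)
```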
Evaluation Dataset
Unnamed Dataset
- Size: 61 evaluation samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 1000 samples:

 | anchor | positive | negative |
---|---|---|---|
type | string | string | string |
details | min: 7 tokens, mean: 10.36 tokens, max: 14 tokens | min: 6 tokens, mean: 9.28 tokens, max: 13 tokens | min: 6 tokens, mean: 10.46 tokens, max: 14 tokens |
- Samples:

anchor | positive | negative |
---|---|---|
How many states are there in the USA? | Total number of states in the United States | How many provinces are there in Canada? |
What is the chemical formula for ethanol? | Molecular structure of ethanol | What is the chemical formula for methanol? |
How to clean a laptop screen? | Steps to safely clean a laptop display | How to clean a laptop keyboard? |
- Loss: TripletLoss with these parameters (see the instantiation sketch above):
  { "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 0.5 }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- gradient_accumulation_steps: 2
- learning_rate: 3e-05
- weight_decay: 0.01
- num_train_epochs: 10
- lr_scheduler_type: reduce_lr_on_plateau
- warmup_ratio: 0.1
- load_best_model_at_end: True
- optim: adamw_torch_fused
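As a sketch, these non-default values map onto SentenceTransformerTrainingArguments (the Sentence Transformers v3 training API) roughly as follows; output_dir is a placeholder:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="e-small-triplet",  # placeholder path
    eval_strategy="epoch",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    learning_rate=3e-5,
    weight_decay=0.01,
    num_train_epochs=10,
    lr_scheduler_type="reduce_lr_on_plateau",
    warmup_ratio=0.1,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
)
```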
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 2
- eval_accumulation_steps: None
- learning_rate: 3e-05
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: reduce_lr_on_plateau
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | Validation Loss | triplet-validation_max_accuracy |
---|---|---|---|---|
1.0 | 9 | - | 0.1078 | - |
1.1111 | 10 | 0.3352 | - | - |
2.0 | 18 | - | 0.0670 | - |
2.2222 | 20 | 0.1677 | - | - |
3.0 | 27 | - | 0.0434 | - |
3.3333 | 30 | 0.0832 | - | - |
4.0 | 36 | - | 0.0323 | - |
4.4444 | 40 | 0.063 | - | - |
5.0 | 45 | - | 0.0299 | - |
5.5556 | 50 | 0.0449 | - | - |
6.0 | 54 | - | 0.0273 | - |
6.6667 | 60 | 0.0357 | - | - |
7.0 | 63 | - | 0.0241 | - |
7.7778 | 70 | 0.0254 | - | - |
8.0 | 72 | - | 0.0224 | - |
8.8889 | 80 | 0.02 | - | - |
9.0 | 81 | - | 0.0211 | - |
**10.0** | **90** | **0.0173** | **0.0216** | **1.0** |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
TripletLoss
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}