SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a SetFit model for multi-label text classification. It uses sentence-transformers/paraphrase-mpnet-base-v2 as the Sentence Transformer embedding model and a OneVsRestClassifier instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
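As an illustration of how these two steps map onto the SetFit API, the snippet below is a minimal sketch (not the exact script used for this checkpoint) that instantiates the same architecture from the base embedding model; setfit.Trainer then performs both steps when train() is called.

from setfit import SetFitModel

# Build the architecture used by this model: a Sentence Transformer body plus a
# logistic head wrapped in a scikit-learn OneVsRestClassifier for multi-label output.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    multi_target_strategy="one-vs-rest",
)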

Model Details

Model Description

Model Sources

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("faodl/setfit-paraphrase-mpnet-base-v2-5ClassesDesc-multilabel-augmented")
# Run inference
preds = model("Provision 1 - Access to safe nutritious food for all The package will be aimed at ending hunger and all forms of malnutrition and reduce the incidence of non-communicable diseases, enabling all people to be nourished and healthy. This suggests that all people at all times have access to sufficient quantities of affordable and safe foo")
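Because this is a multi-label model, the output for each input text is a multi-hot vector over the classes rather than a single label. As a continuation of the example above (the input texts here are placeholders, not taken from the training data):

# Batch inference: predict() returns one multi-hot row per text, and
# predict_proba() returns per-class probabilities from the one-vs-rest head.
texts = [
    "Ending hunger and all forms of malnutrition.",
    "Investment in rural infrastructure and market access.",
]
preds = model.predict(texts)
probs = model.predict_proba(texts)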

Training Details

Training Set Metrics

Training set   Min   Median    Max
Word count     6     93.5916   1014

Training Hyperparameters

  • batch_size: (8, 8)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
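These hyperparameters correspond roughly to the SetFit TrainingArguments in the sketch below. This is an approximation rather than the exact training script for this checkpoint, and train_dataset is a hypothetical 🤗 Dataset with "text" and "label" columns.

from setfit import SetFitModel, Trainer, TrainingArguments

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    multi_target_strategy="one-vs-rest",  # OneVsRestClassifier head
)

args = TrainingArguments(
    batch_size=(8, 8),                    # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    num_iterations=20,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    warmup_proportion=0.1,
    l2_weight=0.01,
    use_amp=False,
    end_to_end=False,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,          # hypothetical labeled few-shot dataset
)
trainer.train()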

Training Results

Epoch    Step   Training Loss   Validation Loss
0.0010   1      0.3063          -
0.0524   50     0.2204          -
0.1047   100    0.1689          -
0.1571   150    0.1464          -
0.2094   200    0.1236          -
0.2618   250    0.1088          -
0.3141   300    0.0649          -
0.3665   350    0.0697          -
0.4188   400    0.0395          -
0.4712   450    0.052           -
0.5236   500    0.0263          -
0.5759   550    0.0376          -
0.6283   600    0.0307          -
0.6806   650    0.022           -
0.7330   700    0.0162          -
0.7853   750    0.012           -
0.8377   800    0.0135          -
0.8901   850    0.0173          -
0.9424   900    0.0171          -
0.9948   950    0.0117          -

Framework Versions

  • Python: 3.11.11
  • SetFit: 1.1.1
  • Sentence Transformers: 3.4.1
  • Transformers: 4.50.2
  • PyTorch: 2.6.0+cu124
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1
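To approximate this environment, the listed versions can be pinned at install time (the appropriate PyTorch CUDA build depends on your platform, so torch is left unpinned here):

pip install setfit==1.1.1 sentence-transformers==3.4.1 transformers==4.50.2 datasets==3.5.0 tokenizers==0.21.1 torch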

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}