SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model that can be used for Text Classification. This SetFit model uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
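
To make the second step concrete, here is a minimal sketch of the idea: texts are encoded with the Sentence Transformer body and a LogisticRegression head is fitted on the resulting embeddings. It uses the base BAAI/bge-base-en-v1.5 checkpoint and two toy texts rather than this model's fine-tuned body or real training data, so treat it as an illustration of the mechanism, not the actual training procedure.

from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Embedding body (here the un-finetuned base checkpoint, for illustration only)
body = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Toy texts standing in for the real few-shot training set
texts = [
    "Reasoning:\nThe answer is directly supported by the document.\n\nFinal Evaluation:",
    "Reasoning:\nThe answer discusses an unrelated topic.\n\nFinal Evaluation:",
]
labels = [1, 0]

# Step 2: fit a LogisticRegression head on the sentence embeddings
embeddings = body.encode(texts)
head = LogisticRegression().fit(embeddings, labels)

# Classify a new text by embedding it and applying the head
new_embedding = body.encode(["Reasoning:\nThe answer is grounded in the document.\n\nFinal Evaluation:"])
print(head.predict(new_embedding))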

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: BAAI/bge-base-en-v1.5
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2 classes

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label Examples
Label 0:
  • "Reasoning:\nThe answer provides key insights into the reasons behind the Denver Nuggets' offensive outburst in January, specifically mentioning the comfort and effectiveness of the team, the coaching strategy of taking the first available shot in the rhythm of the offense, and emphasizing pushing the ball after both makes and misses. These points are directly supported by the document. However, the inclusion of information about a new training technique involving virtual reality is not supported by the provided document and thus detracts from the answer's accuracy and relevance.\n\nFinal Evaluation:"
  • 'Reasoning:\nWhile the provided answer attempts to address the differences between film and digital photography, it contains several inaccuracies and inconsistencies with the document. The answer incorrectly states that film under-exposes better and compresses the range into the bottom end, whereas the document clearly states that film over-exposes better and compresses the range into the top end. Additionally, it mentions that the digital sensors capture all three colors at each point, which is inaccurate as per the document; it states digital sensors capture only one color at each point and then interpolate the data to create an RGB image. Moreover, the answer states the comparison is to 5MP digital sensors whereas the document talks about 10MP sensors.\n\nThese inaccuracies undermine the grounding and reliability of the answer.\n\nFinal Evaluation:'
  • "Reasoning:\nThe provided answer addresses a topic entirely unrelated to the given question. The question is about the main conflict in the third book of the Arcana Chronicles by Kresley Cole, but the answer discusses the result of an MMA event featuring Antonio Rogerio Nogueira. There is no connection between the document's content and the question, leading to a clear lack of context grounding, relevance, and conciseness.\n\nFinal Evaluation: \nEvaluation:"
Label 1:
  • 'Reasoning:\nThe answer provided effectively draws upon the document to list several best practices that a web designer can incorporate into their client discovery and web design process to avoid unnecessary revisions and conflicts. Each practice mentioned—getting to know the client and their needs, signing a detailed contract, and communicating honestly about extra charges—is directly supported by points raised in the document. The answer is relevant, concise, and closely aligned with the specific question asked.\n\nFinal Evaluation:'
  • "Reasoning:\nThe answer provided correctly identifies the author's belief on what creates a connection between the reader and the characters in a story. It states that drawing from the author's own painful and emotional experiences makes the story genuine and relatable, thus engaging the reader. This is consistent with the document, which emphasizes the importance of authenticity and emotional depth derived from the author's personal experiences to make characters and their struggles real to the reader. The answer is relevant to the question and concisely captures the essence of the author's argument without including unnecessary information.\n\nFinal Evaluation:"
  • 'Reasoning:\nThe answer correctly identifies Mauro Rubin as the CEO of JoinPad during the event at Talent Garden Calabiana, Milan. This is well-supported by the context provided in the document, which specifically mentions Mauro Rubin, the JoinPad CEO, taking the stage at the event. The answer is relevant to the question and concise.\n\nFinal Evaluation: \nEvaluation:'

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_gpt-4o_improved-cot-instructions_chat_few_shot_generated_remove_final_eva")
# Run inference
preds = model("""Reasoning:
The provided answer incorrectly states who signed Kieron Freeman when he went on loan to Notts County instead of answering the question about Aaron Pryor's boxing manager. This answer does not relate to the provided question, making it contextually irrelevant and failing to address the inquiry about Aaron Pryor’s manager during his boxing career.

Final Evaluation:""")
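
preds holds the predicted label (0 or 1, as described under Model Labels). You can also pass a list of texts to classify several reasoning traces at once; the snippet below is a small usage sketch with made-up inputs.

# Batch inference over several texts (hypothetical example inputs)
texts = [
    "Reasoning:\nThe answer is directly supported by the document.\n\nFinal Evaluation:",
    "Reasoning:\nThe answer discusses a topic unrelated to the question.\n\nFinal Evaluation:",
]
preds = model(texts)
print(preds)  # e.g. tensor([1, 0])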

Training Details

Training Set Metrics

Training set   Min   Median    Max
Word count     33    77.1275   176

Label   Training Sample Count
0       200
1       208

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
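
Most of these values correspond directly to fields of setfit's TrainingArguments. The sketch below shows how a comparable run could be configured; the train_dataset is a small placeholder (the actual few-shot dataset behind this model is not included in the card), so it is a configuration sketch rather than an exact reproduction.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot dataset with "text" and "label" columns
train_dataset = Dataset.from_dict({
    "text": [
        "Reasoning:\nThe answer is grounded in the document.\n\nFinal Evaluation:",
        "Reasoning:\nThe answer is not supported by the document.\n\nFinal Evaluation:",
    ],
    "label": [1, 0],
})

# Start from the same Sentence Transformer body; the default head is a LogisticRegression
model = SetFitModel.from_pretrained("BAAI/bge-base-en-v1.5")

args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()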

Training Results

Epoch Step Training Loss Validation Loss
0.0010 1 0.2056 -
0.0490 50 0.2533 -
0.0980 100 0.1804 -
0.1471 150 0.0762 -
0.1961 200 0.0731 -
0.2451 250 0.0501 -
0.2941 300 0.0336 -
0.3431 350 0.0197 -
0.3922 400 0.0105 -
0.4412 450 0.0049 -
0.4902 500 0.0031 -
0.5392 550 0.0024 -
0.5882 600 0.0021 -
0.6373 650 0.0018 -
0.6863 700 0.0017 -
0.7353 750 0.0018 -
0.7843 800 0.0016 -
0.8333 850 0.0015 -
0.8824 900 0.0017 -
0.9314 950 0.0015 -
0.9804 1000 0.0013 -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.1.0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.0
  • PyTorch: 2.4.0+cu121
  • Datasets: 3.0.0
  • Tokenizers: 0.19.1
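
To approximate this environment, one option is to pin the versions above when installing (a suggested command, assuming a Python 3.10 environment; the CUDA-enabled torch 2.4.0+cu121 build comes from PyTorch's cu121 wheel index):

pip install setfit==1.1.0 sentence-transformers==3.1.1 transformers==4.44.0 datasets==3.0.0 tokenizers==0.19.1 torch==2.4.0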

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}