SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model that can be used for Text Classification. This SetFit model uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
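Step 2 above is conceptually just fitting a scikit-learn LogisticRegression head on frozen embeddings. A minimal, self-contained sketch of that step, where random toy vectors stand in for the fine-tuned BAAI/bge-base-en-v1.5 embeddings (this is an illustration, not the actual training code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for fine-tuned Sentence Transformer embeddings
# (768-dim, matching BAAI/bge-base-en-v1.5); labels follow the card's 0/1 scheme.
rng = np.random.default_rng(42)
n_per_class, dim = 20, 768
class0 = rng.normal(loc=-1.0, scale=0.5, size=(n_per_class, dim))
class1 = rng.normal(loc=+1.0, scale=0.5, size=(n_per_class, dim))
X = np.vstack([class0, class1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Step 2 of the recipe: fit the LogisticRegression head on the frozen features.
head = LogisticRegression(max_iter=1000).fit(X, y)
print(head.predict(X[:2]), head.predict(X[-2:]))  # -> [0 0] [1 1]
```

Because the embedding model is fine-tuned with contrastive learning first, the classes are already well separated in embedding space, which is why a simple linear head suffices.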

Model Details



Model Labels

Label Examples
Label 1
  • 'Reasoning:\n1. Context Grounding: The answer is well-supported by the provided document. The document indicates that a dime is "one-tenth of a dollar" and has a monetary value of "10¢."\n2. Relevance: The answer directly addresses the specific question of the monetary value of a dime.\n3. Conciseness: The answer is clear, to the point, and does not include unnecessary information.\n\nFinal Result:'
  • 'Reasoning:\n1. Context Grounding: The document lists "Set the investigation status" as a topic, indicating that it is possible to set the investigation status.\n2. Relevance: The answer "Yes" directly addresses the question of whether one can set the investigation status.\n3. Conciseness: The answer is brief and to the point.\n4. Specificity: The document explicitly mentions setting the investigation status, so the answer is specific to the asked question.\n5. Key/Value/Event Name: The relevant key/event is "Set the investigation status" which is correctly identified as being possible.\n\nFinal result:'
  • '**Reasoning:\n\n1. Context Grounding: The answer is well-supported by the document, aligning perfectly with the specific benefits mentioned by the author in the document. It includes benefits like unapologetic "me" time, health, self-growth, patience, taking time to be still, accepting changing moods, responsibility for happiness, appreciation for the body, yoga's presence off the mat, and the importance of being open. These points are directly extracted from the provided content.\n \n2. Relevance: The answer focuses squarely on addressing the specific question asked — what benefits the author has experienced from their regular yoga practice. It avoids unrelated topics and stays on point.\n\n3. Conciseness: The answer is clear, concise, and directly lists the benefits without unnecessary elaboration. Each mentioned benefit corresponds to a specific point from the document, making it easy to verify and understand.\n\nFinal Result: **'
Label 0
  • 'Reasoning:\n1. Context Grounding: The answer is consistent with the suggestions found in the provided document. It mentions reducing salt intake, cutting processed foods and alcohol, and drinking more water, all of which are present in the document.\n2. Relevance: The answer stays focused on the main query about losing the last 10 pounds and provides actionable advice directly related to the document.\n3. Conciseness: The answer is a bit lengthy but stays mostly on point without much deviation.\n\nThe document provides a variety of methods for tackling the last 10 pounds, and the answer effectively consolidates some of these points. Although the response could be shorter and more concise, it appropriately addresses the question based on the document.\n\nFinal Result:'
  • 'The given answer ..\\/..\\/_images\\/bal_https://elliott.biz/ does not match any of the Image URLs provided in the document. For step 5, the correct Image URL is ..\\/..\\/_images\\/bal_http://osborn-mendoza.info/.\n\nReasoning:\n1. Context Grounding: The answer is not supported by the provided document. The correct Image URL should be found in the list under the relevant step.\n2. Relevance: The answer must directly address the specific question by identifying the correct Image URL corresponding to step 5.\n3. Conciseness: The answer should be concise but accurate. The provided answer adds irrelevant information.\n4. Specifics: The provided document includes specific Image URLs for each step that must be matched correctly to the steps provided.\n5. Key, Value, and Event Name: The correct identification of the image URL for step 5 is critical here.\n\nFinal result:'
  • 'Reasoning:\n1. Context Grounding: The provided answer is rooted in the document, which mentions that Amy Bloom finds starting a project hard and having to clear mental space, recalibrate, and become less involved in her everyday life.\n2. Relevance: The response accurately focuses on the challenges Bloom faces when starting a significant writing project, without deviating into irrelevant areas.\n3. Conciseness: The answer effectively summarizes the relevant information from the document, staying clear and to the point while avoiding unnecessary detail.\n\nFinal Result:'

Evaluation

Metrics

Label Accuracy
all 0.7183
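The accuracy above is the plain fraction of evaluation examples whose predicted label matches the reference label. A minimal sketch of how such a figure is computed (toy predictions for illustration, not the card's actual evaluation data):

```python
def label_accuracy(y_true, y_pred):
    """Fraction of predictions that match the reference labels."""
    assert len(y_true) == len(y_pred)
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Toy example: 71 of 100 predictions correct -> 0.71,
# in the same ballpark as the reported 0.7183.
y_true = [1] * 50 + [0] * 50
y_pred = [1] * 40 + [0] * 10 + [0] * 31 + [1] * 19
print(label_accuracy(y_true, y_pred))  # -> 0.71
```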

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_cybereason_gpt-4o_cot-instructions_remove_final_evaluation_e2_larger_trai")
# Run inference
preds = model("""The percentage in the response status column indicates the total amount of successful completion of response actions.

Reasoning:
1. **Context Grounding**: The answer is well-supported by the document which states, "percentage indicates the total amount of successful completion of response actions."
2. **Relevance**: The answer directly addresses the specific question asked about what the percentage in the response status column indicates.
3. **Conciseness**: The answer is succinct and to the point without unnecessary information.
4. **Specificity**: The answer is specific to what is being asked, detailing exactly what the percentage represents.
5. **Accuracy**: The answer provides the correct key/value as per the document.

Final result:""")

Training Details

Training Set Metrics

Training set   Min   Median    Max
Word count     33    94.4664   198

Label   Training Sample Count
0       129
1       139

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (2, 2)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
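For illustration, the contrastive fine-tuning step pairs training sentences: same-label pairs are pushed together and cross-label pairs pushed apart under CosineSimilarityLoss. A hedged sketch of such pair generation (`make_contrastive_pairs` is a hypothetical helper for exposition, not SetFit's internal sampler, which additionally implements the oversampling strategy listed above):

```python
from itertools import combinations, product

def make_contrastive_pairs(texts, labels):
    """Illustrative pair generation for contrastive fine-tuning:
    same-label pairs get target 1.0 and cross-label pairs 0.0,
    the targets CosineSimilarityLoss regresses embeddings toward.
    NOTE: a simplified sketch, not SetFit's actual sampler."""
    by_label = {}
    for text, label in zip(texts, labels):
        by_label.setdefault(label, []).append(text)
    pairs = []
    # Positive pairs: every same-label combination.
    for members in by_label.values():
        pairs.extend((a, b, 1.0) for a, b in combinations(members, 2))
    # Negative pairs: every cross-label combination (label 0 vs. label 1).
    zeros, ones = by_label.get(0, []), by_label.get(1, [])
    pairs.extend((a, b, 0.0) for a, b in product(zeros, ones))
    return pairs

texts = ["good answer", "grounded", "ungrounded", "off topic"]
labels = [1, 1, 0, 0]
pairs = make_contrastive_pairs(texts, labels)
print(len(pairs))  # 2 positive + 4 negative = 6 pairs
```

This quadratic pairing is what makes SetFit data-efficient: a few dozen labeled sentences per class yield thousands of contrastive training pairs.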

Training Results

Epoch Step Training Loss Validation Loss
0.0015 1 0.1648 -
0.0746 50 0.2605 -
0.1493 100 0.2538 -
0.2239 150 0.2244 -
0.2985 200 0.1409 -
0.3731 250 0.0715 -
0.4478 300 0.0238 -
0.5224 350 0.0059 -
0.5970 400 0.0032 -
0.6716 450 0.0025 -
0.7463 500 0.0024 -
0.8209 550 0.0019 -
0.8955 600 0.0017 -
0.9701 650 0.0016 -
1.0448 700 0.0015 -
1.1194 750 0.0015 -
1.1940 800 0.0013 -
1.2687 850 0.0013 -
1.3433 900 0.0013 -
1.4179 950 0.0012 -
1.4925 1000 0.0013 -
1.5672 1050 0.0012 -
1.6418 1100 0.0011 -
1.7164 1150 0.0011 -
1.7910 1200 0.0011 -
1.8657 1250 0.0012 -
1.9403 1300 0.0011 -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.1.0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.0
  • PyTorch: 2.4.0+cu121
  • Datasets: 3.0.0
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
Safetensors · Model size: 109M params · Tensor type: F32