SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model that can be used for Text Classification. It uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves two steps (a minimal training sketch follows the list):

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
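
Both steps map directly onto the SetFit training API. The following is a minimal sketch, assuming SetFit 1.1.0; the toy dataset contents are hypothetical placeholders, not this model's actual training data.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical toy dataset; substitute the real few-shot training split.
train_dataset = Dataset.from_dict({
    "text": [
        "Reasoning:\nThe answer is well grounded in the document. ...\n\nEvaluation:",
        "Reasoning:\nThe answer contradicts the document. ...\n\nEvaluation:",
    ],
    "label": [1, 0],
})

# Start from the base embedding model; SetFit attaches a
# LogisticRegression head by default.
model = SetFitModel.from_pretrained("BAAI/bge-base-en-v1.5")

args = TrainingArguments(batch_size=16, num_epochs=2)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# Step 1 (contrastive fine-tuning of the embedding body) and
# step 2 (fitting the classification head) both run inside train().
trainer.train()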

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: BAAI/bge-base-en-v1.5
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2 (labels 0 and 1)

Model Sources

  • Repository: SetFit on GitHub (https://github.com/huggingface/setfit)
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)

Model Labels

Examples for label 1:
  • 'Reasoning:\nThe given answer is directly aligned with the context provided by the document. It accurately identifies Ennita Manyumwa as a 26-year-old African woman who is HIV-free. It further elaborates on her significance in the fight against AIDS by highlighting her resistance to older men who offer gifts for sex, a behavior that helps prevent the spread of AIDS. This information is consistent with the document and directly answers the question without any unnecessary details.\n\nFinal Evaluation:'
  • 'Reasoning:\nThe answer directly addresses the question by listing the benefits the author has experienced from their regular yoga practice. These benefits include unapologetic "me" time, improved health, self-growth, increased patience, the ability to be still, acceptance of daily changes, the realization that happiness is their responsibility, a deeper appreciation for their body, the understanding that yoga exists off the mat, and the importance of being open. Each of these points is explicitly mentioned in the provided document, making the answer well-supported and contextually accurate. The answer is concise and relevant, sticking closely to the specifics asked for in the question.\n\nFinal Evaluation:'
  • 'Reasoning:\nThe answer accurately identifies that the work on germ-free-life conducted at Notre Dame University resulted in the establishment of the Lobund Institute. This directly aligns with the information provided in the document, which details the evolution of the research from its beginnings in 1928 to the establishment of the Lobund Institute. The response is relevant, well-grounded in the context of the document, and concise.\n\nEvaluation:'
Examples for label 0:
  • 'Reasoning:\nThe answer provided accurately addresses the question by explaining how to enable approval for appointment bookings, which subsequently changes the booking process for clients from immediate booking to a "request to book" process. This may be a solution to the issue if clients are currently experiencing difficulties due to the lack of this feature. The steps given are clear, concise, and directly supported by the provided document, aligning well with the instructions mentioned for enabling approval.\n\nHowever, it is important to note that the answer does not directly state why clients might be unable to book appointments online, nor does it explore other potential reasons beyond the approval setting. Directly stating that clients cannot book appointments online due to lack of enabling approval, and covering any other potential issues mentioned in the document, would make it even more thorough.\n\nEvaluation:'
  • 'Reasoning:\nThe answer does cover the fundamental steps to write a nonfiction book, such as selecting a topic, conducting research, creating an outline, and starting the writing process. However, it includes an incorrect aside, stating that "The Old Man and the Sea" by Ernest Hemingway is nonfiction and based on true events, which detracts from the otherwise accurate guidance. Additionally, the answer could be more detailed and aligned with the extensive steps provided in the document, such as discussing the importance of understanding the genre, reading and analyzing examples, brainstorming, setting up interviews, organizing research, creating a writing schedule, and focusing on writing techniques.\n\nFinal Evaluation: \nEvaluation:'
  • 'Reasoning:\nThe provided answer directly contradicts the guidelines given in the document on studying English literature. The answer suggests not taking notes, ignoring significant passages, and avoiding making character profiles, which are all contrary to the recommendations in the document. The document emphasizes the importance of thorough reading, taking detailed notes, creating character profiles, and paying attention to important passages and concepts, which are crucial for comprehensive understanding and analysis of English literature.\n\nFinal Evaluation: \nEvaluation:'

Evaluation

Metrics

Label Accuracy
all 0.6761
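
The reported accuracy can be checked against any labeled held-out set with the model's predict method. A minimal sketch, assuming hypothetical evaluation texts and labels:

from setfit import SetFitModel

model = SetFitModel.from_pretrained("Netta1994/setfit_baai_cybereason_gpt-4o_improved-cot-instructions_chat_few_shot_generated_remov")

# Hypothetical held-out examples; substitute the real evaluation split.
eval_texts = ["Reasoning:\nThe answer is grounded in the document. ...\n\nEvaluation:"]
eval_labels = [1]

preds = model.predict(eval_texts)
accuracy = sum(int(p) == y for p, y in zip(preds, eval_labels)) / len(eval_labels)
print(f"accuracy: {accuracy:.4f}")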

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_cybereason_gpt-4o_improved-cot-instructions_chat_few_shot_generated_remov")
# Run inference (the multi-line input is passed as a single string)
preds = model("""Reasoning:
The provided document clearly outlines the purpose of the <ORGANIZATION> XDR On-Site Collector Agent: it is installed to collect logs from platforms and securely forward them to <ORGANIZATION> XDR. The answer given aligns accurately with the document's description, addressing the specific question without deviating into unrelated topics. The response is also concise and to the point.

Evaluation:""")

Training Details

Training Set Metrics

Training set   Min   Median    Max
Word count     33    96.1280   289

Label   Training Sample Count
0       312
1       321
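
Word-count statistics like those above can be recomputed from the raw training texts. A minimal sketch with hypothetical placeholder texts:

import statistics

# Hypothetical training texts; substitute the real training split.
train_texts = [
    "Reasoning:\nThe answer is grounded in the document. ...\n\nEvaluation:",
    "Reasoning:\nThe answer contradicts the document. ...\n\nEvaluation:",
]
word_counts = [len(text.split()) for text in train_texts]
print(min(word_counts), statistics.median(word_counts), max(word_counts))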

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (2, 2)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
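
These values correspond to fields on SetFit's TrainingArguments. A sketch of how an equivalent configuration might be constructed, assuming SetFit 1.1.0, with the loss and distance metric resolved to the sentence-transformers classes the names above refer to:

from sentence_transformers.losses import (
    BatchHardTripletLossDistanceFunction,
    CosineSimilarityLoss,
)
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),          # (embedding phase, classifier phase)
    num_epochs=(2, 2),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    distance_metric=BatchHardTripletLossDistanceFunction.cosine_distance,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)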

Training Results

Epoch Step Training Loss Validation Loss
0.0006 1 0.2154 -
0.0316 50 0.2582 -
0.0632 100 0.2517 -
0.0948 150 0.2562 -
0.1263 200 0.2532 -
0.1579 250 0.2412 -
0.1895 300 0.184 -
0.2211 350 0.1608 -
0.2527 400 0.1487 -
0.2843 450 0.117 -
0.3159 500 0.0685 -
0.3474 550 0.0327 -
0.3790 600 0.0257 -
0.4106 650 0.0139 -
0.4422 700 0.012 -
0.4738 750 0.0047 -
0.5054 800 0.0046 -
0.5370 850 0.0042 -
0.5685 900 0.0058 -
0.6001 950 0.0029 -
0.6317 1000 0.0055 -
0.6633 1050 0.0033 -
0.6949 1100 0.0026 -
0.7265 1150 0.0026 -
0.7581 1200 0.0033 -
0.7896 1250 0.0049 -
0.8212 1300 0.0043 -
0.8528 1350 0.0019 -
0.8844 1400 0.0015 -
0.9160 1450 0.0014 -
0.9476 1500 0.0017 -
0.9792 1550 0.0013 -
1.0107 1600 0.0019 -
1.0423 1650 0.0012 -
1.0739 1700 0.0011 -
1.1055 1750 0.0013 -
1.1371 1800 0.0012 -
1.1687 1850 0.0013 -
1.2003 1900 0.0013 -
1.2318 1950 0.0012 -
1.2634 2000 0.0011 -
1.2950 2050 0.0012 -
1.3266 2100 0.0011 -
1.3582 2150 0.0011 -
1.3898 2200 0.0012 -
1.4214 2250 0.0014 -
1.4529 2300 0.0011 -
1.4845 2350 0.001 -
1.5161 2400 0.0011 -
1.5477 2450 0.001 -
1.5793 2500 0.001 -
1.6109 2550 0.0012 -
1.6425 2600 0.0011 -
1.6740 2650 0.0011 -
1.7056 2700 0.001 -
1.7372 2750 0.001 -
1.7688 2800 0.001 -
1.8004 2850 0.001 -
1.8320 2900 0.001 -
1.8636 2950 0.001 -
1.8951 3000 0.001 -
1.9267 3050 0.0009 -
1.9583 3100 0.0011 -
1.9899 3150 0.001 -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.1.0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.0
  • PyTorch: 2.4.0+cu121
  • Datasets: 3.0.0
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}