SetFit with mini1013/master_domain

This is a SetFit model for text classification. It uses mini1013/master_domain as the Sentence Transformer embedding model, with a LogisticRegression instance as the classification head.
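
Both components are exposed on the loaded model, which makes the split easy to see. The sketch below is illustrative only; the model id is this card's, but the input string is a hypothetical placeholder.

from setfit import SetFitModel

model = SetFitModel.from_pretrained("mini1013/master_cate_fd17")

# The Sentence Transformer body turns text into fixed-size embeddings...
embeddings = model.model_body.encode(["product title A"])  # hypothetical input
# ...and the LogisticRegression head classifies those embeddings.
# (A sketch; the packaged predict pipeline also handles batching.)
pred = model.model_head.predict(embeddings)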

The model has been trained using an efficient few-shot learning technique that involves two steps, sketched in code below:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
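
Concretely, both steps are driven by a single Trainer.train() call in the setfit library. The following is a minimal sketch; the dataset contents and the num_epochs value are hypothetical placeholders, not this model's actual training setup.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot dataset; "text" and "label" are the column
# names the SetFit Trainer expects by default.
train_ds = Dataset.from_dict({
    "text": ["product title A", "product title B"],
    "label": [0.0, 1.0],
})

model = SetFitModel.from_pretrained("mini1013/master_domain")
args = TrainingArguments(num_epochs=1)  # assumption: library defaults elsewhere

# train() first fine-tunes the Sentence Transformer body with
# contrastive pairs, then fits the LogisticRegression head on the
# resulting embeddings.
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()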

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: mini1013/master_domain (based on klue/roberta-base)
  • Classification head: a LogisticRegression instance
  • Number of Classes: 4
  • Model size: 111M params (F32 tensors)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label 3.0:
  • '찹쌀호떡믹스 400g 5개 오브젝티브'
  • '신진 찹쌀호떡가루 2.5Kg 호떡믹스 퍼스트'
  • '찹쌀호떡믹스 400g 10개 묶음배송가능 옵션9. 오븐용깨찰빵믹스 500g EY 인터내셔널'
Label 0.0:
  • '브레드가든 바닐라에센스 59ml 주식회사 몬즈컴퍼니'
  • '선인 냉동레몬제스트 500g 레몬껍질 선인 냉동레몬제스트 500g 레몬껍질 아이은하'
  • '샤프 인스턴트 이스트 골드 500g 샤프 이스트 골드 500g 주식회사 맘쿠킹'
Label 2.0:
  • '곰표 와플믹스 1kg x 4팩 코스트코나'
  • '동원비셰프 스위트사워믹스1kg 엠디에스마케팅 주식회사'
  • 'CJ 백설 붕어빵믹스 10kg [맛있는] [좋아하는]간편 로이스'
Label 1.0:
  • '오뚜기 베이킹소다 400g 지윤 주식회사'
  • '밥스레드밀 파우더 397g 베이킹 글로벌피스'
  • 'Anthony s 유기농 요리 등급 코코아 파우더 1 lb 프로마스터'

Evaluation

Metrics

Label   Metric
all     0.8175
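
The card reports a single metric across all labels. It can be recomputed against a held-out split in a few lines; a sketch, assuming accuracy as the metric and a hypothetical evaluation set:

from sklearn.metrics import accuracy_score
from setfit import SetFitModel

model = SetFitModel.from_pretrained("mini1013/master_cate_fd17")

# Hypothetical held-out examples; substitute the real evaluation split.
texts = ["product title A", "product title B"]
labels = [0.0, 2.0]

preds = model.predict(texts)
print(accuracy_score(labels, preds))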

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_fd17")
# Run inference
preds = model("행복한 쌀잉어빵 반죽 5kg 팥앙금 3kg 행복유통")
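
Predictions come back as the float labels listed above. Batched inference and class probabilities follow the same pattern; a short sketch with hypothetical inputs:

# A list of texts yields one prediction per text.
preds = model(["product title A", "product title B"])

# Per-class probabilities from the LogisticRegression head.
probs = model.predict_proba(["product title A"])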

Training Details

Training Set Metrics

Training set   Min   Median   Max
Word count     3     9.2      22
Label   Training Sample Count
0.0     50
1.0     50
2.0     50
3.0     50

Training Hyperparameters

  • batch_size: (512, 512)
  • num_epochs: (20, 20)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 40
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
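
For reference, these values map one-to-one onto setfit.TrainingArguments; the tuple-valued entries pair the embedding fine-tuning phase with the classifier-training phase. A sketch (omitting distance_metric, whose cosine_distance value is the library default):

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(512, 512),            # (embedding phase, classifier phase)
    num_epochs=(20, 20),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=40,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)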

Training Results

Epoch     Step   Training Loss   Validation Loss
0.0312    1      0.4064          -
1.5625    50     0.1639          -
3.125     100    0.003           -
4.6875    150    0.0003          -
6.25      200    0.0001          -
7.8125    250    0.0001          -
9.375     300    0.0001          -
10.9375   350    0.0             -
12.5      400    0.0             -
14.0625   450    0.0             -
15.625    500    0.0             -
17.1875   550    0.0             -
18.75     600    0.0             -

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.1.0.dev0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.46.1
  • PyTorch: 2.4.0+cu121
  • Datasets: 2.20.0
  • Tokenizers: 0.20.0

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}