SetFit with mini1013/master_domain

This is a SetFit model that can be used for text classification. It uses mini1013/master_domain as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
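
These two stages map directly onto the setfit training API. The snippet below is a minimal sketch of how a model like this could be trained, assuming a tiny in-memory dataset with "text" and "label" columns; it is illustrative only and not the author's original training script (the hyperparameters actually used are listed under Training Hyperparameters below).

from datasets import Dataset
from setfit import SetFitModel, Trainer

# Illustrative few-shot data; the real training set is Korean product titles
# labeled 0.0 or 1.0 (see Model Labels below).
train_dataset = Dataset.from_dict({
    "text": [
        "한양 온수찜질기 밍크 파우치 블랙_밍크 발찜질기 한양의료기",
        "파쉬 독일 보온 물주머니 노커버 기본형 3.노커버 기본형 레드 주식회사 하이유로",
    ],
    "label": [0.0, 1.0],
})

# Start from the embedding model; SetFitModel attaches a LogisticRegression head.
model = SetFitModel.from_pretrained("mini1013/master_domain")

# Stage 1: fine-tune the Sentence Transformer body on contrastive pairs.
# Stage 2: fit the classification head on embeddings from the tuned body.
trainer = Trainer(model=model, train_dataset=train_dataset)
trainer.train()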

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: mini1013/master_domain
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)

Model Labels

Label 1.0
  • '파쉬 독일 보온 물주머니 노커버 기본형 커버 체크 핑크네이비 주식회사 하이유로'
  • '파쉬 독일 보온 물주머니 노커버 기본형 3.노커버 기본형 레드 주식회사 하이유로'
  • '꼼띠아 국산 프리미엄 온열 황토 순면 냉 온 어깨 찜질기 찜질팩 목 등 찜질 쿨매트 허리찜질기(그레이) BH스토어'
Label 0.0
  • '한양 온수찜질기 밍크 파우치 회색_SET 밍크파우치+복대 한양의료기'
  • '한양 온수찜질기 밍크 파우치 블랙_밍크 발찜질기 한양의료기'
  • '게르마늄 전기찜질기 뜸질기 찜질기 찜질팩 전기메트 허리 배 무릎 찜질 MinSellAmount 스마일배송'

Evaluation

Metrics

Label Metric
all 0.9710
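
The split used to produce this number is not included in the card. As a rough illustration only, the same kind of score can be computed on any labeled held-out set; the texts and gold labels below are placeholders, and the 0.0/1.0 label ids are assumed to match the Model Labels table above.

from setfit import SetFitModel

model = SetFitModel.from_pretrained("mini1013/master_cate_lh5")

# Placeholder evaluation examples; substitute a real labeled held-out split.
eval_texts = [
    "충전식 온수 찜질기 온열 전기 찜질팩 IVB-D1000 핑크",
    "한양 온수찜질기 밍크 파우치 회색_SET 밍크파우치+복대 한양의료기",
]
eval_labels = [1.0, 0.0]

preds = model.predict(eval_texts)
accuracy = sum(float(p) == y for p, y in zip(preds, eval_labels)) / len(eval_labels)
print(accuracy)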

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_lh5")
# Run inference
preds = model("충전식 온수 찜질기 온열 전기 찜질팩 IVB-D1000 핑크 ")
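
model.predict also accepts a list of texts and returns one prediction per item; the example below simply reuses two of the product titles from the Model Labels table above.

# Batch inference: one prediction per input text
preds = model.predict([
    "파쉬 독일 보온 물주머니 노커버 기본형 커버 체크 핑크네이비 주식회사 하이유로",
    "한양 온수찜질기 밍크 파우치 회색_SET 밍크파우치+복대 한양의료기",
])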

Training Details

Training Set Metrics

Training set   Min   Median   Max
Word count     4     10.73    20

Label   Training Sample Count
0.0     50
1.0     50

Training Hyperparameters

  • batch_size: (512, 512)
  • num_epochs: (20, 20)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 40
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
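
These values correspond one-to-one to the fields of setfit.TrainingArguments. The sketch below is an inferred reconstruction of the configuration implied by the list above, not the author's original script; it could be passed as args= to the Trainer shown earlier.

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

# Reconstruction of the hyperparameters listed above; tuples give the value
# for the embedding phase and the classifier phase respectively.
args = TrainingArguments(
    batch_size=(512, 512),
    num_epochs=(20, 20),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=40,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,                 # distance_metric defaults to cosine distance
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)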

Training Results

Epoch Step Training Loss Validation Loss
0.0625 1 0.3748 -
3.125 50 0.0002 -
6.25 100 0.0 -
9.375 150 0.0 -
12.5 200 0.0 -
15.625 250 0.0 -
18.75 300 0.0 -

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.1.0.dev0
  • Sentence Transformers: 3.1.1
  • Transformers: 4.46.1
  • PyTorch: 2.4.0+cu121
  • Datasets: 2.20.0
  • Tokenizers: 0.20.0

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
