---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: okay so just inform them to re submit the order okay because that is the information that i see right now for the cellphone phone number aah ending again one four seven one there is the port protection so provide them again the aah aah aah port out number to port ahead to call this one the transfer pin and they will have to aah started over and reset make the order and i usually that should really work
- text: hi welcome to cricket nation thanks for your patience this call may i have your name
- text: okay okay okay let me just go ahead and wait for the car all right i have now pulled up of the account and as i can see here the account has uh one line under the sixty dollar plan and the due date is every the fourth of them i let me just review the account as well as the payments on the accounts okay can i just uh put the call on hold for just a minute or two
- text: cleveland but it could be done in the morning or tending that you've name but it is within today okay so that's a the status for the other number which is the one ending in one four seven one
- text: ' that yes you did receive two text messages from cricket letting you know that your payment is due by the nine which i''m i''m not sure how exactly that was sent to you however prize'
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/all-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/all-mpnet-base-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.8571428571428571
      name: Accuracy
---

# SetFit with sentence-transformers/all-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification.
This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model, with a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance as the classification head. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description

- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 7 classes

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels

| Label | Examples |
|:------|:---------|
| 0     |          |
| 4     |          |
| 6     |          |
| 5     |          |
| 1     |          |
| 2     |          |
| 3     |          |

## Evaluation

### Metrics

| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.8571   |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference:

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Jalajkx/all_mpnetcric-setfit-model")
# Run inference
preds = model("hi welcome to cricket nation thanks for your patience this call may i have your name")
```

## Training Details

### Training Set Metrics

| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 5   | 64.6774 | 312 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 7                     |
| 1     | 12                    |
| 2     | 3                     |
| 3     | 4                     |
| 4     | 19                    |
| 5     | 13                    |
| 6     | 4                     |

### Training Hyperparameters

- batch_size: (4, 4)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 25
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

### Training Results

| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0013 | 1    | 0.2177        | -               |
| 0.0645 | 50   | 0.092         | -               |
| 0.1290 | 100  | 0.0167        | -               |
| 0.1935 | 150  | 0.1883        | -               |
| 0.2581 | 200  | 0.3168        | -               |
| 0.3226 | 250  | 0.0372        | -               |
| 0.3871 | 300  | 0.0253        | -               |
| 0.4516 | 350  | 0.2565        | -               |
| 0.5161 | 400  | 0.0096        | -               |
| 0.5806 | 450  | 0.0957        | -               |
| 0.6452 | 500  | 0.001         | -               |
| 0.7097 | 550  | 0.0021        | -               |
| 0.7742 | 600  | 0.2043        | -               |
| 0.8387 | 650  | 0.0042        | -               |
| 0.9032 | 700  | 0.001         | -               |
| 0.9677 | 750  | 0.0788        | -               |

### Framework Versions

- Python: 3.10.13
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.36.1
- PyTorch: 2.0.1
- Datasets: 2.15.0
- Tokenizers: 0.15.0

## Citation

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
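## Classification Head Sketch

The second stage of the training procedure described above amounts to fitting the LogisticRegression head on embeddings produced by the fine-tuned Sentence Transformer body. The sketch below is illustrative only: it uses random stand-in vectors in place of real all-mpnet-base-v2 embeddings (which are 768-dimensional) together with this card's 7 labels and per-label sample counts; it is not this model's actual training code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Stand-ins for sentence embeddings from the fine-tuned body;
# all-mpnet-base-v2 outputs 768-dimensional vectors.
X_train = rng.normal(size=(62, 768))  # 62 = total training samples on this card
# Labels 0..6 repeated per the Training Sample Count table above.
y_train = np.repeat(np.arange(7), [7, 12, 3, 4, 19, 13, 4])

# Stage 2: fit the classification head on the (stand-in) embeddings.
head = LogisticRegression(max_iter=1000)
head.fit(X_train, y_train)

# At inference time, new texts are embedded with the same body and classified.
preds = head.predict(rng.normal(size=(3, 768)))
print(preds.shape)  # (3,)
```

In the actual model, the feature vectors come from the fine-tuned Sentence Transformer body rather than a random generator; `SetFitModel` wraps both stages behind the single call shown in the inference example.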