---
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Edelweiss:Downgrade Wipro to 'Hold', says Edelweiss
- text: Overweight:Morgan Stanley upgrades Axis Bank to Overweight; ups target price
- text: 'downside:Expect more downside in the IT, pharma stocks: Sandeep Wagle'
- text: 'Barclays:Infusion of additional $1 trillion to India''s GDP to create new midcap leaders: Barclays'
- text: focus:Jaypee, Reliance Group stocks in focus ahead of UP results
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: false
base_model: sentence-transformers/all-MiniLM-L6-v2
---

# SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

This model was trained within the context of a larger system for ABSA, which looks like so:

1. Use a spaCy model to select possible aspect span candidates.
2. **Use this SetFit model to filter these possible aspect span candidates.**
3. Use a SetFit model to classify the filtered aspect span candidates.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_sm
- **SetFitABSA Aspect Model:** [Askinkaty/setfit-finance-aspect](https://huggingface.co/Askinkaty/setfit-finance-aspect)
- **SetFitABSA Polarity Model:** [Askinkaty/setfit-finance-polarity](https://huggingface.co/Askinkaty/setfit-finance-polarity)
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 2 classes

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label     | Examples |
|:----------|:---------|
| aspect    |          |
| no aspect |          |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.
```python
from setfit import AbsaModel

# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
    "Askinkaty/setfit-finance-aspect",
    "Askinkaty/setfit-finance-polarity",
)
# Run inference
preds = model("Banking stocks to see lot of traction: Mitesh Thacker.")
```

## Training Details

### Training Hyperparameters
- batch_size: 64
- num_epochs: 2
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: 2e-05
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True

### Framework Versions
- Python: 3.11.11
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- spaCy: 3.7.5
- Transformers: 4.42.1
- PyTorch: 2.5.1+cu124
- Datasets: 3.2.0
- Tokenizers: 0.19.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```
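
## Illustrative Training Sketch

The values listed under "Training Hyperparameters" map onto setfit's `TrainingArguments`, and an `AbsaTrainer` trains the aspect filter and the polarity classifier together. The snippet below is a minimal sketch, not the exact script used for this model: the training rows are placeholders adapted from the widget sentences, the polarity labels and output paths are assumptions, and a real run needs a much larger labelled dataset (and `python -m spacy download en_core_web_sm`).

```python
from datasets import Dataset
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

# Start from the same base Sentence Transformer and spaCy model as this card.
model = AbsaModel.from_pretrained(
    "sentence-transformers/all-MiniLM-L6-v2",
    spacy_model="en_core_web_sm",
)

# SetFit ABSA expects "text", "span", "label", and "ordinal" columns.
# These rows are illustrative placeholders, not the actual training data.
train_dataset = Dataset.from_dict({
    "text": [
        "Overweight:Morgan Stanley upgrades Axis Bank to Overweight; ups target price",
        "Banking stocks to see lot of traction: Mitesh Thacker.",
        "Edelweiss:Downgrade Wipro to 'Hold', says Edelweiss",
        "downside:Expect more downside in the IT, pharma stocks: Sandeep Wagle",
    ],
    "span": ["Axis Bank", "Banking stocks", "Wipro", "pharma stocks"],
    "label": ["positive", "positive", "negative", "negative"],
    "ordinal": [0, 0, 0, 0],
})

# Mirror the main hyperparameters listed above.
# use_amp assumes a CUDA GPU, as in the original run.
args = TrainingArguments(
    batch_size=64,
    num_epochs=2,
    body_learning_rate=2e-05,
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    use_amp=True,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = AbsaTrainer(
    model,
    args=args,
    train_dataset=train_dataset,
)
trainer.train()

# Save the aspect and polarity halves to separate directories (placeholder paths).
model.save_pretrained(
    "models/setfit-finance-aspect",
    "models/setfit-finance-polarity",
)
```

`AbsaTrainer` trains both sub-models in one call, and `save_pretrained` writes them to two directories, which is why the inference example above loads two model ids.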