---
language: en
license: other
tags:
- sentiment-analysis
- fine-tuned
- sentiment-classification
- transformers
model_name: Fine-Tuned Sentiment Model
model_type: Roberta
datasets:
- custom-dataset
- rohittamidapati11/training_data
- rohittamidapati11/validation_data
metrics:
- micro precision and recall
- macro precision and recall
---
# Fine-Tuned Sentiment Model
This model is fine-tuned for the sentiment analysis task. It classifies a customer ticket into one of five sentiment categories:
- "Strong Negative"
- "Mild Negative"
- "Neutral"
- "Mild Positive"
- "Strong Positive"
*Point To Note*: The customer tickets come from these specific industries only:
- Food
- Cars
- Pet Food
- Furniture
- Beauty
## Model Details
- **Model Architecture**: This model is fine-tuned from the pre-trained model "IDEA-CCNL/Erlangshen-Roberta-110M-Sentiment"
- **Training Dataset**: The training dataset was generated using the model "meta-llama/Llama-3.2-1B-Instruct"
## Example Usage
To use this model for Sentiment Analysis:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the tokenizer and the fine-tuned classification model
tokenizer = AutoTokenizer.from_pretrained("your_username/fine_tuned_sentiment_model_rt")
model = AutoModelForSequenceClassification.from_pretrained("your_username/fine_tuned_sentiment_model_rt")

# Example input: a customer ticket to classify
inputs = tokenizer("The food was a bit bland, but the portion sizes were generous. I was looking forward to trying it, but it didn't quite live up to my expectations.", return_tensors="pt")

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

predicted_class = torch.argmax(outputs.logits, dim=1).item()
print("Predicted Sentiment:", predicted_class)
```