# Fine-tuned RoBERTa for Sentiment Analysis on Reviews
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-sentiment-latest on the Amazon Reviews dataset for sentiment analysis.
## Model Details

- Model Name: AnkitAI/reviews-roberta-base-sentiment-analysis
- Base Model: cardiffnlp/twitter-roberta-base-sentiment-latest
- Dataset: Amazon Reviews
- Fine-tuning: This model was fine-tuned for sentiment analysis with a classification head for binary sentiment classification (positive and negative).
## Training
The model was trained using the following parameters:
- Learning Rate: 2e-5
- Batch Size: 16
- Weight Decay: 0.01
- Evaluation Strategy: Epoch
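The parameters above could be wired up with the `transformers` `Trainer` API roughly as follows. This is a hedged sketch, not the card author's actual training script: `train_dataset` and `eval_dataset` are placeholders for tokenized Amazon Reviews splits, and `output_dir` is an arbitrary name.

```python
from transformers import (
    RobertaForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# The base model ships a 3-class sentiment head, so a fresh 2-class
# head is attached for binary classification (assumption: this is why
# ignore_mismatched_sizes is needed here).
model = RobertaForSequenceClassification.from_pretrained(
    "cardiffnlp/twitter-roberta-base-sentiment-latest",
    num_labels=2,
    ignore_mismatched_sizes=True,
)

# Hyperparameters taken from the list above.
args = TrainingArguments(
    output_dir="reviews-roberta-out",      # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    weight_decay=0.01,
    eval_strategy="epoch",                 # "evaluation_strategy" in older releases
)

# train_dataset / eval_dataset: placeholders for tokenized splits
# (the card does not specify how the dataset was prepared).
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```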
## Training and Evaluation Results
- Evaluation Loss: 0.1049
- Evaluation Runtime: 3177.538 seconds
- Evaluation Samples/Second: 226.591
- Evaluation Steps/Second: 7.081
- Training Runtime: 110070.6349 seconds
- Training Samples/Second: 78.495
- Training Steps/Second: 2.453
- Training Loss: 0.0858
- Evaluation Accuracy: 97.19%
- Evaluation Precision: 97.9%
- Evaluation Recall: 97.18%
- Evaluation F1 Score: 97.19%
## Usage

You can use this model directly with the Hugging Face `transformers` library:
```python
from transformers import RobertaForSequenceClassification, RobertaTokenizer

model_name = "AnkitAI/reviews-roberta-base-sentiment-analysis"
model = RobertaForSequenceClassification.from_pretrained(model_name)
tokenizer = RobertaTokenizer.from_pretrained(model_name)

# Example usage: the model returns raw logits, so take the argmax
# to get the predicted class.
inputs = tokenizer("This product is great!", return_tensors="pt")
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()  # 1 for positive, 0 for negative
```
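To turn the raw logits into a confidence score, apply a softmax over the two classes. The sketch below uses only the standard library and dummy logit values standing in for `outputs.logits` (the values are illustrative, not taken from the model).

```python
import math

def softmax(logits):
    """Convert a list of raw logits into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Dummy logits standing in for outputs.logits[0]:
# index 0 = negative, index 1 = positive.
logits = [-1.2, 2.3]
probs = softmax(logits)
label = max(range(len(probs)), key=probs.__getitem__)
confidence = probs[label]
```

In practice you would call `torch.softmax(outputs.logits, dim=-1)` on the tensor directly; the plain-Python version above just makes the computation explicit.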
## License
This model is licensed under the MIT License.