# Sentiment Analysis Model

## Model Details
- Base Model: google/electra-base-discriminator
- Task: Binary Sentiment Analysis (Positive/Negative)
- Datasets: IMDB and Amazon Reviews
- Language: English
## Training Hyperparameters
- Batch Size: 8
- Learning Rate: 2e-5
- Number of Epochs: 2
- Max Sequence Length: 128 tokens
- Model Architecture: ELECTRA (Discriminator)
## Training

The model was fine-tuned on a combination of the IMDB and Amazon Reviews datasets. ELECTRA's discriminator architecture was chosen because it remains sample-efficient when training data is limited, and the hyperparameters above were selected so the model can be trained on consumer-grade hardware. A sketch of a comparable fine-tuning setup is shown below.
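The exact training script is not included here; the following is a minimal sketch of how a comparable run could be reproduced with the Hugging Face `Trainer`, using the hyperparameters listed above. The dataset (the Hub's `imdb` split standing in for the combined IMDB + Amazon corpus), the `text`/`label` column names, and the output directory are assumptions.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumption: the IMDB dataset from the Hub stands in for the combined
# IMDB + Amazon Reviews corpus used for the released model.
dataset = load_dataset("imdb")

model_name = "google/electra-base-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    # Max sequence length of 128 tokens, as in the model card
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="electra-sentiment",  # hypothetical output directory
    per_device_train_batch_size=8,   # Batch size: 8
    learning_rate=2e-5,              # Learning rate: 2e-5
    num_train_epochs=2,              # Number of epochs: 2
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```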
## Usage
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load model and tokenizer
model_name = "auskola/sentimientos"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()  # disable dropout for inference

def analyze_sentiment(text):
    # Tokenize and predict
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        outputs = model(**inputs)
    probabilities = torch.nn.functional.softmax(outputs.logits, dim=1)

    # Get prediction and confidence (label 1 = Positive, label 0 = Negative)
    prediction = torch.argmax(probabilities, dim=1)
    confidence = torch.max(probabilities).item()

    return {
        "sentiment": "Positive" if prediction.item() == 1 else "Negative",
        "confidence": confidence,
    }

# Example usage
texts = [
    "This product exceeded my expectations!",
    "Terrible service, would not recommend",
    "The movie was pretty good",
]

for text in texts:
    result = analyze_sentiment(text)
    print(f"\nText: {text}")
    print(f"Sentiment: {result['sentiment']}")
    print(f"Confidence: {result['confidence']:.2f}")
```