---
library_name: transformers
tags:
- Persian
- Sentiment Analysis
- BERT
---
# Model Card for Behpouyan-Sentiment
The Behpouyan Sentiment Analysis Model predicts the sentiment (positive, negative, or neutral) of Persian text. It is fine-tuned on a Persian-language dataset, making it particularly well suited for sentiment analysis tasks in Persian language processing.
## Model Details

### Model Description
This model is a fine-tuned BERT-based transformer trained for sentiment analysis in Persian. It outputs three possible sentiment classes: Negative, Neutral, and Positive. The model is intended for analyzing customer feedback, product reviews, and other text-based sentiment analysis tasks in Persian.
- Developed by: Behpouyan Co
- Model type: BERT-based Transformer for Sentiment Analysis
- Language(s) (NLP): Persian (Farsi)
## Uses

### Direct Use
This model can be used directly to classify the sentiment of Persian text. It is well suited for applications involving customer feedback, social media analysis, or any other context where understanding sentiment in Persian text is necessary.
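
For quick experiments, the `transformers` `pipeline` API can wrap the tokenizer and model in a single call. The sketch below is a minimal example; the label strings it returns depend on the `id2label` mapping stored in the checkpoint's config, which is an assumption here:

```python
from transformers import pipeline

# Build a text-classification pipeline around the published checkpoint.
classifier = pipeline(
    "text-classification",
    model="Behpouyan/Behpouyan-Sentiment",
)

# Classify a single Persian sentence
# (English: "The support team responded very quickly.")
result = classifier("تیم پشتیبانی خیلی سریع پاسخ داد.")
print(result)  # e.g. [{'label': '...', 'score': 0.97}]; label names come from config.id2label
```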
### Downstream Use
The model can be integrated into larger applications such as chatbots, customer service systems, and marketing tools to assess sentiment in real-time feedback. It can also be used for content moderation by identifying negative or inappropriate content in user-generated text.
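
As an illustration of downstream integration, the helper below flags feedback that the model scores as strongly negative, as one might in a content-moderation or ticket-triage flow. The function name `flag_negative_feedback`, the threshold value, and the assumption that class index 0 corresponds to "Negative" are hypothetical and should be verified against `model.config.id2label`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Behpouyan/Behpouyan-Sentiment")
model = AutoModelForSequenceClassification.from_pretrained("Behpouyan/Behpouyan-Sentiment")

NEGATIVE_INDEX = 0  # hypothetical: check against model.config.id2label


def flag_negative_feedback(text: str, threshold: float = 0.7) -> bool:
    """Return True when the model is confident the text is negative."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probabilities = torch.softmax(logits, dim=-1)[0]
    return probabilities[NEGATIVE_INDEX].item() >= threshold


# Example: route strongly negative comments to a human reviewer.
if flag_negative_feedback("این پروژه هیچ پیشرفتی نداشته و کاملاً ناامیدکننده است."):
    print("Escalate to a human reviewer.")
```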
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Behpouyan/Behpouyan-Sentiment")
model = AutoModelForSequenceClassification.from_pretrained("Behpouyan/Behpouyan-Sentiment")

# Sample general sentences for testing
sentences = [
    "همیشه از برخورد دوستانه و حرفه‌ای شما لذت می‌برم.",  # Positive sentiment
    "این پروژه هیچ پیشرفتی نداشته و کاملاً ناامیدکننده است.",  # Negative sentiment
    "جلسه امروز بیشتر به بحث‌های معمولی اختصاص داشت.",  # Neutral sentiment
    "از نتیجه کار راضی بودم، اما زمان‌بندی پروژه بسیار ضعیف بود.",  # Mixed sentiment
    "پاسخگویی سریع شما همیشه قابل تحسین است."  # Positive sentiment
]

# Define class labels (index order must match the model's id2label mapping)
class_labels = ["Negative", "Positive", "Neutral"]

# Analyze each sentence
for sentence in sentences:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    logits = outputs.logits

    # Apply softmax to get probabilities
    probabilities = torch.softmax(logits, dim=1)
    predicted_class = torch.argmax(probabilities, dim=1).item()

    # Print results
    print(f"Sentence: {sentence}")
    print(f"Probabilities: {probabilities}")
    print(f"Predicted Class: {predicted_class} ({class_labels[predicted_class]})")
    print("-" * 50)
```
## Results

- Accuracy: 92%

| Class    | Precision | Recall | F1 Score |
|----------|-----------|--------|----------|
| Positive | 0.91      | 0.92   | 0.91     |
| Negative | 0.89      | 0.88   | 0.88     |
| Neutral  | 0.93      | 0.91   | 0.92     |