---
license: mit
datasets:
- dair-ai/emotion
language:
- en
library_name: transformers
widget:
- text: I am so happy with the results!
- text: I am so pissed with the results!
tags:
- deberta
- deberta-v3-small
- emotions-classifier
---

# Fast Emotion-X: Fine-Tuned DeBERTa V3 Small for Emotion Detection

This is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) for emotion detection on the [dair-ai/emotion](https://huggingface.co/dair-ai/emotion) dataset.

## Overview

Fast Emotion-X is an emotion detection model fine-tuned from Microsoft's DeBERTa V3 Small. It classifies text into one of six emotion categories, pairing DeBERTa's disentangled-attention architecture with task-specific fine-tuning on a labeled emotion dataset.

## Model Details

- **Model Name:** `AnkitAI/deberta-v3-small-base-emotions-classifier`
- **Base Model:** `microsoft/deberta-v3-small`
- **Dataset:** [dair-ai/emotion](https://huggingface.co/dair-ai/emotion)
- **Fine-tuning:** The model adds a classification head over the six emotion categories in the dataset (sadness, joy, love, anger, fear, surprise).

## Training

The model was trained with the following hyperparameters:

- **Learning Rate:** 2e-5
- **Batch Size:** 4
- **Weight Decay:** 0.01
- **Evaluation Strategy:** epoch

### Training Details

- **Eval Loss:** 0.0858
- **Eval Runtime:** 110070.6349 seconds
- **Eval Samples/Second:** 78.495
- **Eval Steps/Second:** 2.453
- **Train Loss:** 0.1049
- **Eval Accuracy:** 94.6%
- **Eval Precision:** 94.8%
- **Eval Recall:** 94.5%
- **Eval F1 Score:** 94.7%

## Usage

You can use this model directly with the Hugging Face `transformers` library:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "AnkitAI/deberta-v3-small-base-emotions-classifier"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Predict the most likely emotion label for a piece of text
def predict_emotion(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)
    with torch.no_grad():
        outputs = model(**inputs)
    predicted_id = outputs.logits.argmax(dim=-1).item()
    return model.config.id2label[predicted_id]

text = "I'm so happy with the results!"
emotion = predict_emotion(text)
print("Detected Emotion:", emotion)
```
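
Alternatively, the checkpoint can be loaded through the `pipeline` API, which handles tokenization and label mapping for you (a minimal sketch; the returned label names come from the model's `id2label` config):

```python
from transformers import pipeline

# Text-classification pipeline wrapping the fine-tuned checkpoint
classifier = pipeline(
    "text-classification",
    model="AnkitAI/deberta-v3-small-base-emotions-classifier",
)

result = classifier("I am so happy with the results!")[0]
print(result["label"], round(result["score"], 4))
```

The pipeline returns a list with one `{"label": ..., "score": ...}` dict per input string, where `score` is the softmax probability of the predicted class.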

## Emotion Labels

The model predicts one of the six labels defined by the dair-ai/emotion dataset:

- Sadness
- Joy
- Love
- Anger
- Fear
- Surprise

## Model Card Data

| Parameter                   | Value                      |
|-----------------------------|----------------------------|
| Base Model                  | microsoft/deberta-v3-small |
| Training Dataset            | dair-ai/emotion            |
| Number of Training Epochs   | 20                         |
| Learning Rate               | 2e-5                       |
| Per Device Train Batch Size | 4                          |
| Evaluation Strategy         | epoch                      |
| Best Model Accuracy         | 94.6%                      |

## License

This model is licensed under the [MIT License](LICENSE).