---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: bhadresh-savani/electra-base-emotion
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: emotion
      type: emotion
      config: default
      split: test
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9265
      verified: true
    - name: Precision Macro
      type: precision
      value: 0.911532655431019
      verified: true
    - name: Precision Micro
      type: precision
      value: 0.9265
      verified: true
    - name: Precision Weighted
      type: precision
      value: 0.9305456360257519
      verified: true
    - name: Recall Macro
      type: recall
      value: 0.8536923122511134
      verified: true
    - name: Recall Micro
      type: recall
      value: 0.9265
      verified: true
    - name: Recall Weighted
      type: recall
      value: 0.9265
      verified: true
    - name: F1 Macro
      type: f1
      value: 0.8657529340483895
      verified: true
    - name: F1 Micro
      type: f1
      value: 0.9265
      verified: true
    - name: F1 Weighted
      type: f1
      value: 0.924844632421077
      verified: true
    - name: loss
      type: loss
      value: 0.3268870413303375
      verified: true
---
# Electra-base-emotion
## Model description:
ELECTRA base fine-tuned on the emotion dataset for six-class emotion classification (sadness, joy, love, anger, fear, surprise).
## Model Performance Comparison on the Emotion Dataset from Twitter:
| Model | Accuracy | F1 Score | Test Samples per Second |
| --- | --- | --- | --- |
| [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
| [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
| [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97| 195.639 |
| [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
| [Electra-base-emotion](https://huggingface.co/bhadresh-savani/electra-base-emotion) | 91.95 | 91.90 | 472.72 |
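
The throughput column above depends heavily on hardware and batch size. A rough way to reproduce that kind of number is to time the pipeline over the 2,000-example emotion test split, as in the minimal sketch below; the batch size and device choice are illustrative assumptions, not the settings used for the table.

```python
import time

from datasets import load_dataset
from transformers import pipeline

# Time the classifier over the 2,000-example emotion test split.
test = load_dataset("emotion", split="test")
classifier = pipeline("text-classification",
                      model="bhadresh-savani/electra-base-emotion")  # pass device=0 to use a GPU

start = time.perf_counter()
classifier(test["text"], batch_size=64)  # batch size is an illustrative choice
elapsed = time.perf_counter() - start
print(f"{len(test) / elapsed:.1f} samples/second")
```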
## How to Use the model:
```python
from transformers import pipeline
# return_all_scores=True returns the score for every emotion label, not just the top one
classifier = pipeline("text-classification", model='bhadresh-savani/electra-base-emotion', return_all_scores=True)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use")
print(prediction)
"""
Output:
[[
{'label': 'sadness', 'score': 0.0006792712374590337},
{'label': 'joy', 'score': 0.9959300756454468},
{'label': 'love', 'score': 0.0009452480007894337},
{'label': 'anger', 'score': 0.0018055217806249857},
{'label': 'fear', 'score': 0.00041110432357527316},
{'label': 'surprise', 'score': 0.0002288572577526793}
]]
"""
```
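
Because the pipeline is created with `return_all_scores=True`, `prediction` is a list holding one inner list of per-label scores. If only the top emotion is needed, it can be reduced with a plain `max`, as in this short sketch continuing from the snippet above:

```python
# Continuing from the snippet above: pick the highest-scoring label.
top = max(prediction[0], key=lambda item: item["score"])
print(f"{top['label']}: {top['score']:.4f}")  # joy: 0.9959
```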
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
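
For a quick look at the data, the minimal sketch below loads the dataset and prints its splits and label names.

```python
from datasets import load_dataset

# The emotion dataset: English tweets with six labels,
# split into 16,000 train / 2,000 validation / 2,000 test examples.
dataset = load_dataset("emotion")
print(dataset)
print(dataset["train"].features["label"].names)
# ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
```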
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
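
The notebook above is the authoritative record of how this checkpoint was trained. As rough orientation only, a typical fine-tuning loop for this setup looks like the sketch below; the base checkpoint name, batch size, and learning rate are assumptions, and only the 8 epochs match the eval results reported in the next section.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Load the data and a base ELECTRA checkpoint (assumed; see the notebook for the real settings).
dataset = load_dataset("emotion")
tokenizer = AutoTokenizer.from_pretrained("google/electra-base-discriminator")
model = AutoModelForSequenceClassification.from_pretrained(
    "google/electra-base-discriminator", num_labels=6)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

encoded = dataset.map(tokenize, batched=True)

# Illustrative hyperparameters, not the ones used for this model.
args = TrainingArguments(
    output_dir="electra-base-emotion",
    num_train_epochs=8,
    per_device_train_batch_size=64,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
print(trainer.evaluate())
```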
## Eval results
```json
{
  "epoch": 8.0,
  "eval_accuracy": 0.9195,
  "eval_f1": 0.918975455617076,
  "eval_loss": 0.3486028015613556,
  "eval_runtime": 4.2308,
  "eval_samples_per_second": 472.726,
  "eval_steps_per_second": 7.564
}
```
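
Numbers like `eval_accuracy` and `eval_f1` above can be recomputed by running the published pipeline over the emotion test split. The snippet below is a minimal sketch using scikit-learn; the weighted F1 average and the label-to-id mapping (taken from the label order shown in the pipeline output earlier) are assumptions, not the exact evaluation script.

```python
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import pipeline

# Score the published checkpoint on the 2,000-example test split.
test = load_dataset("emotion", split="test")
classifier = pipeline("text-classification",
                      model="bhadresh-savani/electra-base-emotion")

# Label order as in the pipeline output shown above.
label2id = {"sadness": 0, "joy": 1, "love": 2, "anger": 3, "fear": 4, "surprise": 5}
preds = [label2id[p["label"]] for p in classifier(test["text"], batch_size=64)]

print("accuracy:", accuracy_score(test["label"], preds))
print("weighted F1:", f1_score(test["label"], preds, average="weighted"))
```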
## Reference:
* [Natural Language Processing with Transformers by Lewis Tunstall, Leandro von Werra, and Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/)