---
language:
- ar
datasets:
- AJGT
tags:
- AJGT
widget:
- text: "يهدي الله من يشاء"
- text: "الاسلوب قذر وقمامه"
---
# BERT-AJGT

Arabic BERT model fine-tuned on the AJGT (Arabic Jordanian General Tweets) dataset for binary sentiment classification.
## Data

The model was fine-tuned on ~1,800 Jordanian-dialect Arabic sentences collected from Twitter.
## Results

| class    | precision | recall | f1-score | support |
|----------|-----------|--------|----------|---------|
| 0        | 0.9462    | 0.9778 | 0.9617   | 90      |
| 1        | 0.9399    | 0.9689 | 0.9542   | 90      |
| Accuracy |           |        | 0.9611   | 180     |
## How to use

You can use this model by installing `torch` or `tensorflow` together with the Hugging Face `transformers` library, then loading it directly like this:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "mofawzy/bert-ajgt"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
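For example, you can then run a quick prediction on a single sentence. This is a minimal sketch using PyTorch; the mapping of class indices 0/1 to negative/positive sentiment is an assumption and should be checked against `model.config.id2label`:

```python
import torch

# Tokenize one of the widget examples above
inputs = tokenizer("يهدي الله من يشاء", return_tensors="pt")

# Run the model and take the highest-scoring class
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()

# NOTE: the 0/1 -> negative/positive mapping is an assumption; verify via model.config.id2label
print(predicted_class, model.config.id2label.get(predicted_class, predicted_class))
```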