---
license: mit
datasets:
- nanelimon/insult-dataset
language:
- tr
pipeline_tag: text-classification
---

# About the model
This model performs text classification for detecting offensive content in Turkish. It assigns input text to one of five categories: INSULT, OTHER, PROFANITY, RACIST, and SEXIST.

## Model Metrics

|           | INSULT | OTHER | PROFANITY | RACIST | SEXIST |
| --------- | ------ | ----- | --------- | ------ | ------ |
| Precision | 0.901  | 0.924 | 0.978     | 1.000  | 0.980  |
| Recall    | 0.920  | 0.980 | 0.900     | 0.980  | 1.000  |
| F1 Score  | 0.910  | 0.951 | 0.937     | 0.989  | 0.990  |
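
As a quick sanity check, each per-class F1 score is the harmonic mean of the precision and recall above; a minimal sketch to recompute them (small discrepancies in the last digit come from the table values themselves being rounded):

```python
# Per-class (precision, recall) pairs taken from the table above.
metrics = {
    "INSULT":    (0.901, 0.920),
    "OTHER":     (0.924, 0.980),
    "PROFANITY": (0.978, 0.900),
    "RACIST":    (1.000, 0.980),
    "SEXIST":    (0.980, 1.000),
}

for label, (p, r) in metrics.items():
    f1 = 2 * p * r / (p + r)  # harmonic mean of precision and recall
    print(f"{label}: F1 = {f1:.3f}")
```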

- F-Score: 0.956
- Recall: 0.956
- Precision: 0.957
- Accuracy: 0.956

## Training Information
- Device: macOS 14.5 (23F79), arm64 | GPU: Apple M2 Max | Memory: 5840 MiB / 32768 MiB
- Training time: 0:22:54 (h:mm:ss)
- Optimizer: AdamW
- Learning rate: 2e-5
- Epsilon (eps): 1e-8
- Epochs: 10
- Batch size: 64
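
The optimizer settings above can be sketched as follows; note that `model` here is a stand-in placeholder module, not the actual fine-tuned BERT:

```python
import torch

# Placeholder module standing in for the fine-tuned model (an assumption
# for illustration only; the real model is a BERT classifier with 5 labels).
model = torch.nn.Linear(768, 5)

# AdamW with the hyperparameters listed above.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, eps=1e-8)
print(optimizer.defaults["lr"], optimizer.defaults["eps"])
```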

## Dependencies
```sh
pip install transformers tensorflow tf-keras
pip install torch torchvision torchaudio
```
## Example
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, TextClassificationPipeline

# Load the tokenizer and model
model_name = "nanelimon/bert-base-turkish-offensive"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

# Create the pipeline (top_k=2 returns the two highest-scoring labels;
# the deprecated return_all_scores argument is not needed)
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, top_k=2)

# Test the pipeline
print(pipe('Bu bir denemedir hadi sende dene!'))  # "This is a test, come on, you try it too!"

```
Result:
```python
[[{'label': 'OTHER', 'score': 1.000}, {'label': 'INSULT', 'score': 0.000}]]
```
- `label`: the class the model assigns to the given Turkish text.
- `score`: the model's confidence (probability) for that label.
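
Since the pipeline returns one list of `{label, score}` dicts per input, picking the winning label is a one-liner; a small sketch using the example output above:

```python
# Example pipeline output shown above: one list of {label, score} dicts per input.
result = [[{'label': 'OTHER', 'score': 1.000}, {'label': 'INSULT', 'score': 0.000}]]

# The highest-scoring entry is the predicted class.
top = max(result[0], key=lambda d: d['score'])
print(top['label'])  # → OTHER
```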

## Authors
- Seyma SARIGIL: [email protected]

## License

GPL-3.0

**Free Software, Hell Yeah!**