language:
- tr
pipeline_tag: text-classification
---

## About the model

A Turkish BERT-based model built to detect the types of bullying that people direct at one another on social media.

Included classes (a rough mapping to the model's English output labels is sketched after this list):

- Nötr (Neutral)
- Kızdırma/Hakaret (Teasing/Insult)
- Cinsiyetçilik (Sexism)
- Irkçılık (Racism)
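
For reference, here is an assumed correspondence between the Turkish class names above and the English labels the model reports (see the evaluation table below). This mapping is illustrative and is not read from the model's configuration; the PROFANITY label from the table has no Turkish name listed in this README.

```python
# Assumed, illustrative mapping from the Turkish class names above to the
# English labels used in the evaluation table and the pipeline output.
CLASS_NAME_TO_LABEL = {
    "Nötr": "OTHER",
    "Kızdırma/Hakaret": "INSULT",
    "Cinsiyetçilik": "SEXIST",
    "Irkçılık": "RACIST",
    # "PROFANITY" also appears in the table below, but no Turkish name is listed for it here.
}
```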

3,388 tweets were used to train the model. The per-class scores achieved during training are as follows:

|           | INSULT | OTHER | PROFANITY | RACIST | SEXIST |
| --------- | ------ | ----- | --------- | ------ | ------ |
| Precision | 0.901  | 0.924 | 0.978     | 1.000  | 0.980  |
| Recall    | 0.920  | 0.980 | 0.900     | 0.980  | 1.000  |
| F1 Score  | 0.910  | 0.951 | 0.937     | 0.989  | 0.990  |

Overall scores across all classes:

- F-Score: 0.956
- Recall: 0.956
- Precision: 0.957
- Accuracy: 0.956
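
The README does not state how the overall scores were aggregated. A minimal sketch of computing such per-class and overall numbers with scikit-learn (which is not in the dependency list below), assuming macro averaging and placeholder `y_true`/`y_pred` lists, might look like this:

```python
# Minimal sketch: per-class and macro-averaged scores with scikit-learn.
# y_true / y_pred are dummy placeholders; the real evaluation data is not part of this README.
from sklearn.metrics import accuracy_score, classification_report, precision_recall_fscore_support

labels = ["INSULT", "OTHER", "PROFANITY", "RACIST", "SEXIST"]
y_true = ["OTHER", "INSULT", "SEXIST", "RACIST", "PROFANITY", "OTHER"]    # ground-truth labels (dummy)
y_pred = ["OTHER", "INSULT", "SEXIST", "RACIST", "PROFANITY", "INSULT"]   # model predictions (dummy)

# Per-class precision / recall / F1, as in the table above
print(classification_report(y_true, y_pred, labels=labels, digits=3))

# Overall scores, assuming macro averaging
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, labels=labels, average="macro")
accuracy = accuracy_score(y_true, y_pred)
print(f"Precision: {precision:.3f}  Recall: {recall:.3f}  F-Score: {f1:.3f}  Accuracy: {accuracy:.3f}")
```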

## Dependencies

- `pip install torch torchvision torchaudio`
- `pip install tf-keras`
- `pip install transformers`
- `pip install tensorflow`
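
A quick sanity check that the packages above are importable (a minimal sketch; this README does not pin specific versions):

```python
# Confirm the libraries installed above can be imported, and print their versions.
import tensorflow as tf
import torch
import transformers

print("torch:", torch.__version__)
print("tensorflow:", tf.__version__)
print("transformers:", transformers.__version__)
```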

## Example

```python
from transformers import AutoTokenizer, TextClassificationPipeline, TFBertForSequenceClassification

# Load the tokenizer and the TensorFlow model (converted from the hosted PyTorch weights)
tokenizer = AutoTokenizer.from_pretrained("nanelimon/bert-base-turkish-offensive")
model = TFBertForSequenceClassification.from_pretrained("nanelimon/bert-base-turkish-offensive", from_pt=True)

# Text-classification pipeline that returns the two highest-scoring labels
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, top_k=2)

print(pipe('Bu bir denemedir hadi sende dene!'))
```

Result:

```sh
[[{'label': 'OTHER', 'score': 1.000}, {'label': 'INSULT', 'score': 0.000}]]
```

- `label`: the class the model assigns to the submitted Turkish text.
- `score`: the model's confidence that the text belongs to that label.
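
Because the example above converts the checkpoint with `from_pt=True`, the repository hosts PyTorch weights. A minimal PyTorch-only sketch (no TensorFlow or tf-keras required), assuming the standard Auto classes load this checkpoint, could look like this:

```python
# PyTorch-only variant (a sketch; assumes the hosted PyTorch weights load via the Auto classes).
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model_id = "nanelimon/bert-base-turkish-offensive"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Return the two highest-scoring labels, matching the TensorFlow example above
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer, top_k=2)
print(pipe("Bu bir denemedir hadi sende dene!"))
```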

## Authors

- Seyma SARIGIL: [email protected]

## License

gpl-3.0

**Free Software, Hell Yeah!**