---
license: mit
datasets:
  - nanelimon/insult-dataset
language:
  - tr
pipeline_tag: text-classification
---

## About the model

This is a Turkish BERT-based model built to classify the types of bullying that people direct at each other on social media. The included classes are:

- Nötr (Neutral)
- Kızdırma/Hakaret (Teasing/Insult)
- Cinsiyetçilik (Sexism)
- Irkçılık (Racism)

The model was trained on 3,388 tweets. The resulting evaluation scores per class are as follows:

|           | INSULT | OTHER  | PROFANITY | RACIST | SEXIST |
|-----------|--------|--------|-----------|--------|--------|
| Precision | 0.901  | 0.924  | 0.978     | 1.000  | 0.980  |
| Recall    | 0.920  | 0.980  | 0.900     | 0.980  | 1.000  |
| F1 Score  | 0.910  | 0.9514 | 0.937     | 0.989  | 0.990  |
- F-Score: 0.9559690799177005
- Recall: 0.9559999999999998
- Precision: 0.9570284225256961
- Accuracy: 0.956
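
These are standard classification metrics. As a minimal sketch (not the authors' evaluation script), the snippet below shows how such per-class and overall figures could be computed with scikit-learn; `y_true` and `y_pred` are hypothetical placeholders for the gold and predicted labels on an evaluation split.

```python
# Minimal sketch: computing per-class precision/recall/F1 and overall accuracy
# with scikit-learn. `y_true` and `y_pred` are hypothetical placeholders, not
# the actual evaluation data used for the numbers above.
from sklearn.metrics import classification_report, accuracy_score

y_true = ["OTHER", "INSULT", "SEXIST", "RACIST", "PROFANITY", "OTHER"]
y_pred = ["OTHER", "INSULT", "SEXIST", "RACIST", "OTHER", "OTHER"]

# Per-class precision, recall and F1, as in the table above
print(classification_report(y_true, y_pred, digits=3))

# Single overall accuracy figure
print("Accuracy:", accuracy_score(y_true, y_pred))
```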

## Dependencies

```bash
pip install torch torchvision torchaudio
pip install tf-keras
pip install transformers
pip install tensorflow
```

## Example

```python
from transformers import AutoTokenizer, TextClassificationPipeline, TFBertForSequenceClassification

# Load the tokenizer and the model (TF weights converted from the PyTorch checkpoint)
tokenizer = AutoTokenizer.from_pretrained("nanelimon/bert-base-turkish-offensive")
model = TFBertForSequenceClassification.from_pretrained("nanelimon/bert-base-turkish-offensive", from_pt=True)

# Build a text-classification pipeline that returns the two highest-scoring labels
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True, top_k=2)

print(pipe('Bu bir denemedir hadi sende dene!'))
```

Result:

```python
[[{'label': 'OTHER', 'score': 1.000}, {'label': 'INSULT', 'score': 0.000}]]
```
- `label`: the class the model assigns to the submitted Turkish text (see the sketch after this list for post-processing).
- `score`: the model's confidence that the text belongs to that label.
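
As a usage note, the sketch below shows one possible way to pick out the top prediction from the nested output format printed above; the variable names and the post-processing are illustrative, not part of the original card.

```python
# Minimal sketch: extracting the highest-scoring label from the pipeline output above.
result = pipe('Bu bir denemedir hadi sende dene!')

# `result` holds one list of {'label', 'score'} dicts per input text.
scores_for_text = result[0]
best = max(scores_for_text, key=lambda item: item["score"])

print(best["label"], best["score"])  # e.g. OTHER 1.0
```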

## Authors

## License

gpl-3.0

Free Software, Hell Yeah!