---
library_name: transformers
tags:
- toxic-comment
---
# Model Card for Model ID

This model is fine-tuned on top of BERT base uncased for the task of classifying a comment as toxic or not. Its purpose is to predict whether a given text contains toxic (hateful) content. The class labels are 1 for a toxic comment and 0 for a non-toxic one.
## Model Details

### Model Description

**Important:** This model performs binary classification and does not handle multilabel classification. It only detects whether a comment is toxic or not.
- **Developed by:** Ayush Dhoundiyal
- **Language(s) (NLP):** English
- **Finetuned from model:** bert-base-uncased
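
A minimal inference sketch for the binary setup described above. The Hub repository name used here is a placeholder assumption (substitute the actual model ID), and the raw text should first be cleaned as described under Training Data:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical Hub ID -- replace with the actual repository name of this model.
model_id = "ayushdh96/toxic-comment-bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Text is assumed to already be preprocessed (lowercased, stop words removed, etc.).
text = "nobody want you here idiot"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# Class label 1 = toxic comment, 0 = not toxic (as stated in this card).
prediction = logits.argmax(dim=-1).item()
print("toxic" if prediction == 1 else "not toxic")
```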
## Model Sources

- **Paper:** https://github.com/ayushdh96/Natural-Language-Processing/blob/main/Ayush_Dhoundiyal_Project_Report.pdf
## Training Details

### Training Data
Pre-processing involved basic steps such as lemmatizing and stemming words, removing stop words, and lowercasing the text to be classified. Applying the same steps to input text is recommended for good results.
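
A minimal preprocessing sketch for these steps. The card does not name the library or tokenization scheme, so NLTK and a simple regex word split are assumptions:

```python
import re

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

# One-time downloads of the NLTK resources used below.
nltk.download("stopwords")
nltk.download("wordnet")
nltk.download("omw-1.4")

stop_words = set(stopwords.words("english"))
stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

def preprocess(text: str) -> str:
    # Lowercase the text before classification.
    text = text.lower()
    # Split into word tokens (simple regex split is an assumption).
    tokens = re.findall(r"[a-z']+", text)
    # Remove stop words, then lemmatize and stem the remaining tokens.
    tokens = [t for t in tokens if t not in stop_words]
    tokens = [stemmer.stem(lemmatizer.lemmatize(t)) for t in tokens]
    return " ".join(tokens)

print(preprocess("Nobody wants you here, you idiot!"))
```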
## Evaluation

The model achieves an accuracy of 0.95, a precision of 0.84, a recall of 0.62, and an F1 score of 0.71.
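
For reference, these binary metrics can be reproduced with scikit-learn; the evaluation split itself is not included in this card, so the labels below are placeholders:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Placeholder gold labels and model predictions (1 = toxic, 0 = not toxic).
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 0, 0, 1, 1, 0]

print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
```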