shainaraza committed 59365ad (parent: 9a5b29f): Update README.md

README.md (changed):
# Toxicity Classifier with Debiaser
## Model description

This model is a text classification model trained on a large dataset of comments to predict whether a given comment contains biased language.
It is based on the DistilBERT architecture and fine-tuned on a labeled dataset of toxic and non-toxic comments.
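Since the card states the classifier is a DistilBERT encoder with a binary (biased / not biased) head, one quick sanity check is to inspect the checkpoint's config. This is only a minimal sketch; the repository id below is a placeholder, not this model's actual id:

```python
# Sketch only: the checkpoint id is a placeholder, not this model's real repository id.
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "your-namespace/toxicity-classifier-with-debiaser"  # placeholder

config = AutoConfig.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

print(config.model_type)  # expected: "distilbert"
print(config.num_labels)  # expected: 2 (biased / not biased)
```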
## Intended Use

This model is intended to be used to automatically detect biased language in user-generated comments on various online platforms.
It can also be used as a component in a larger pipeline for text classification, sentiment analysis, or bias detection tasks.
`````
import torch
...
print(f"Prediction: {'biased' if prediction == 1 else 'not biased'}")
`````
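The diff leaves most of the example above collapsed. A self-contained sketch of what such a classification call typically looks like is below; the checkpoint id is a placeholder, and the label mapping (1 = biased, 0 = not biased) is assumed from the `print` statement rather than documented in this diff:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "your-namespace/toxicity-classifier-with-debiaser"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

comment = "Example comment to screen for biased language."
inputs = tokenizer(comment, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Assumed convention: index 1 = biased, index 0 = not biased.
prediction = logits.argmax(dim=-1).item()
print(f"Prediction: {'biased' if prediction == 1 else 'not biased'}")
```

When the model is used as one stage of a larger moderation or analysis pipeline, the same checkpoint can also be wrapped with `transformers.pipeline("text-classification", model=checkpoint)` instead of calling the tokenizer and model by hand.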
## Training data

The model was trained on a labeled dataset of comments from various online platforms, which were annotated as toxic or non-toxic by human annotators.
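The training script itself is not part of this card. The following is only a rough sketch of the kind of binary fine-tuning described above; the base checkpoint, the two example comments, and the hyperparameters are illustrative assumptions, not the values actually used:

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative stand-in for the real annotated comment corpus.
texts = ["thanks for the helpful answer", "people like you should not be allowed to post"]
labels = [0, 1]  # assumed convention: 0 = non-toxic, 1 = toxic

base = "distilbert-base-uncased"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
dataset = list(zip(enc["input_ids"], enc["attention_mask"], torch.tensor(labels)))
loader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # illustrative number of epochs
    for input_ids, attention_mask, batch_labels in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=batch_labels)
        out.loss.backward()  # cross-entropy loss is computed internally when labels are given
        optimizer.step()
```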
## Evaluation results

The model was evaluated on a separate test set of comments and achieved the following results:
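The metric values themselves fall in a collapsed part of this diff. For reference, held-out metrics of this kind can be computed along these lines; the test comments, label convention, and checkpoint id below are placeholders:

```python
import torch
from sklearn.metrics import accuracy_score, f1_score
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "your-namespace/toxicity-classifier-with-debiaser"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

# Placeholder test split; the real evaluation used a held-out set of annotated comments.
test_texts = ["thanks for the thoughtful reply", "go back to where you came from"]
test_labels = [0, 1]

inputs = tokenizer(test_texts, truncation=True, padding=True, return_tensors="pt")
with torch.no_grad():
    preds = model(**inputs).logits.argmax(dim=-1).tolist()

print("accuracy:", accuracy_score(test_labels, preds))
print("f1:", f1_score(test_labels, preds))
```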
## Limitations and bias

This model has been trained and tested on comments from various online platforms, but its performance may be limited when applied to comments from different domains or languages.
## Conclusion

The Toxicity Classifier is a powerful tool for automatically detecting and flagging potentially biased language in user-generated comments. While there are some limitations to its performance and potential biases in the training data, the model's high accuracy and robustness make it a valuable asset for any online platform looking to improve the quality and inclusivity of its user-generated content.