---
license: openrail
datasets:
- jigsaw_unintended_bias
language:
- en
---

# Toxicity Classifier with Debiaser

## Model description

This model is a text classification model that predicts whether a given comment contains toxic or otherwise biased language. It is based on the DistilBERT architecture and fine-tuned on a labeled dataset of toxic and non-toxic comments. To reduce the biases the model might pick up from the data, the training set was debiased with several techniques, including gender swapping, antonym replacement, and counterfactual data augmentation.
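As a rough illustration, a DistilBERT-based classifier like this is typically loaded through the `transformers` pipeline API. This is a minimal sketch only; the checkpoint id below is a placeholder assumption, not necessarily this repository's actual model id.

```python
# Minimal sketch: loading a DistilBERT-based toxicity classifier with transformers.
# "shainaraza/toxicity-classifier-debiaser" is a placeholder model id (assumption);
# substitute the actual repository id.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="shainaraza/toxicity-classifier-debiaser",  # placeholder id
)

print(classifier("Thanks for the thoughtful reply!"))
print(classifier("Nobody wants to hear your stupid opinion."))
```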

## Intended use

This model is intended to automatically detect and flag potentially toxic or biased language in user-generated comments on online platforms. It can also be used as a component in a larger pipeline for text classification, sentiment analysis, or bias detection.
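A hedged sketch of how flagging might look in practice, reusing the `classifier` pipeline from the sketch above. The positive label name and the threshold are assumptions; the actual label mapping depends on the model's configuration.

```python
# Sketch: flagging comments whose predicted toxicity score exceeds a threshold.
# Assumes the `classifier` pipeline defined above and a positive label named "TOXIC";
# both the label name and the 0.8 threshold are illustrative assumptions.
comments = [
    "Thanks for the thoughtful reply!",
    "Nobody wants to hear your stupid opinion.",
]

flagged = []
for comment in comments:
    result = classifier(comment)[0]  # e.g. {"label": "TOXIC", "score": 0.97}
    if result["label"] == "TOXIC" and result["score"] >= 0.8:
        flagged.append((comment, result["score"]))

print(flagged)
```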

## Training data

The model was trained on a labeled dataset of comments from various online platforms (the `jigsaw_unintended_bias` dataset listed in the metadata above), annotated as toxic or non-toxic by human annotators. The data was cleaned and preprocessed before training, and several data augmentation techniques were applied to increase the amount of training data and improve the model's robustness to different types of bias.
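To make the debiasing step more concrete, here is a toy sketch of word-level gender swapping for counterfactual augmentation. The word list and logic are illustrative assumptions, not the exact procedure used to build this model's training data.

```python
# Toy sketch of gender-swapping counterfactual augmentation (illustrative only;
# not the exact procedure used for this model's training data).
GENDER_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "man": "woman", "woman": "man",
    "men": "women", "women": "men",
}

def gender_swap(comment: str) -> str:
    """Return a counterfactual copy of the comment with gendered words swapped."""
    swapped = []
    for token in comment.split():
        core = token.lower().rstrip(".,!?")
        suffix = token[len(core):]          # keep any trailing punctuation
        replacement = GENDER_SWAPS.get(core)
        swapped.append(replacement + suffix if replacement else token)
    return " ".join(swapped)

print(gender_swap("she said he was rude to her."))
# -> "he said she was rude to him."
```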

## Evaluation results

The model was evaluated on a separate test set of comments and achieved the following performance metrics:

- Accuracy: 0.95
- F1-score: 0.94
- ROC-AUC: 0.97
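These are standard classification metrics; below is a sketch of how they are commonly computed with scikit-learn. The labels and scores are dummy values for illustration, not the card's actual evaluation data.

```python
# Sketch: computing the reported metric types with scikit-learn.
# The labels and scores below are dummy values for illustration only.
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = [0, 1, 1, 0, 1, 0]                # gold labels: 1 = toxic, 0 = non-toxic
y_score = [0.1, 0.9, 0.7, 0.3, 0.8, 0.2]   # predicted probability of "toxic"
y_pred = [int(s >= 0.5) for s in y_score]  # hard predictions at a 0.5 threshold

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1-score:", f1_score(y_true, y_pred))
print("ROC-AUC:", roc_auc_score(y_true, y_score))
```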

## Limitations and bias

This model has been trained and tested on comments from various online platforms, so its performance may be limited on comments from other domains or languages. While the training data was debiased using several techniques, it may still contain residual biases or annotation inaccuracies that affect the model's behavior in certain contexts. Finally, this model is only one part of a larger pipeline for detecting and addressing bias on online platforms, and it should not be used in isolation to make decisions about user-generated content.

## Conclusion

The Toxicity Classifier with Debiaser automatically detects and flags potentially toxic or biased language in user-generated comments. Despite the limitations noted above and the possibility of residual bias in the training data, its strong evaluation results make it a useful component for online platforms looking to improve the quality and inclusivity of their user-generated content.