CASIA-LM committed · Commit bc5e221 · verified · 1 parent: d622103

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -120,7 +120,7 @@ In order to explore the domain distribution across different quality intervals,
 
 During the training of LLMs, toxic data introduces harmful knowledge and information, which may lead the model to generate toxic outputs. In this section, we analyze the toxicity distribution within our dataset, which is depicted in Figure 6. In this figure, a higher toxicity score indicates greater toxicity. The majority of the data in our dataset has a toxicity score of 0.0, signifying non-toxic, high-quality data; these non-toxic texts comprise 97.41\% of the dataset.
 
-Additionally, through manual analysis of the toxicity scores, we find that data with scores above 0.99 can be classified as toxic. By applying this empirical threshold, we filter our dataset and obtain a 3.16GB toxic text subset comprising 1,632,620 samples. In Figure 7, we compare this subset with other publicly available toxicity datasets. In this comparison, OffensEval 2019\cite{offenseval}, AbusEval\cite{abuseval}, HatEval\cite{hateval}, RAL-E\cite{hatebert} and ToxiGen\cite{hartvigsen2022toxigen} are English toxicity datasets, while COLD\cite{deng2022cold}, ToxiCN\cite{toxicn}, SWSR\cite{jiang2022swsr} and CDial-Bias\cite{zhou2022towards} are Chinese toxicity datasets. The OffensEval 2019, AbusEval, and HatEval datasets are derived from Twitter and focus on offensive language, abusive language, and hate speech, respectively. The RAL-E dataset, sourced from a banned Reddit community, is a large-scale, unannotated English dataset. In contrast, ToxiGen is a toxicity dataset generated using GPT-3, targeting multiple groups. The COLD, SWSR, CDial-Bias, and ToxiCN datasets are collected from Chinese social media platforms including Zhihu, Weibo, and Tieba, with each dataset focusing on different groups. Compared to these datasets, ours is the largest collection of toxicity data, and each text is annotated with a toxicity score, providing researchers with a valuable resource to better optimize and evaluate the safety of LLMs.
+Additionally, through manual analysis of the toxicity scores, we find that data with scores above 0.99 can be classified as toxic. By applying this empirical threshold, we filter our dataset and obtain a 3.16GB toxic text subset comprising 1,632,620 samples. In Figure 7, we compare this subset with other publicly available toxicity datasets. In this comparison, OffensEval 2019, AbusEval, HatEval, RAL-E and ToxiGen are English toxicity datasets, while COLD, ToxiCN, SWSR and CDial-Bias are Chinese toxicity datasets. The OffensEval 2019, AbusEval, and HatEval datasets are derived from Twitter and focus on offensive language, abusive language, and hate speech, respectively. The RAL-E dataset, sourced from a banned Reddit community, is a large-scale, unannotated English dataset. In contrast, ToxiGen is a toxicity dataset generated using GPT-3, targeting multiple groups. The COLD, SWSR, CDial-Bias, and ToxiCN datasets are collected from Chinese social media platforms including Zhihu, Weibo, and Tieba, with each dataset focusing on different groups. Compared to these datasets, ours is the largest collection of toxicity data, and each text is annotated with a toxicity score, providing researchers with a valuable resource to better optimize and evaluate the safety of LLMs.
 
 <div align="center">
 <img src="./Pictures/toxicity-datasets-comparison.png" width="100%" />