We introduce a new toolchain, MDFG-tool (see Figure 1). We begin with the coarse-grained filtering module, which applies rule-based methods to clean the data, focusing on criteria such as text length and sensitive words to ensure data quality. After cleaning, we evaluate text quality using a BERT-based model. This process generates a quality score for each text, and by selecting an appropriate threshold, we can extract high-quality text data that meets our needs. Next, we use FastText for both single-label and multi-label domain classification of the cleaned data. In parallel, we conduct a toxicity assessment: a FastText model filters out toxic content and assigns a toxicity score to each text. This scoring system allows researchers to set thresholds for identifying and selecting harmful texts for further training.

<div align="center">
<img src="./Pictures/structure.png" width="67%" />
<br>
<em>Figure 1: The pipeline of MDFG-tool.</em>
</div>
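
As a usage illustration, the sketch below shows how the resulting annotations could be consumed downstream: selecting texts from a given domain whose quality score passes a chosen threshold while excluding toxic samples. The field names (`text`, `quality_score`, `domain_labels`, `toxicity_score`), the file name, and the threshold values are illustrative assumptions rather than the dataset's documented schema.

```python
import json

# Assumed field names and thresholds, for illustration only; check the released
# data files for the actual schema and recommended cut-offs.
QUALITY_THRESHOLD = 0.90
TOXICITY_THRESHOLD = 0.99

def select_samples(path, target_domain=None):
    """Yield texts that pass the quality threshold, are non-toxic, and match a domain."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["quality_score"] < QUALITY_THRESHOLD:
                continue
            if record["toxicity_score"] >= TOXICITY_THRESHOLD:
                continue
            if target_domain and target_domain not in record["domain_labels"]:
                continue
            yield record["text"]

if __name__ == "__main__":
    # "data.jsonl" and "news" are placeholders.
    for text in select_samples("data.jsonl", target_domain="news"):
        print(text[:80])
```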

After collecting raw data from various sources, we initially obtain an original Chinese dataset totaling 6.6 TB. However, because some sources contain a significant amount of irrelevant and noisy content, a manual sampling analysis is performed in the preparation stage: if irrelevant text accounts for more than 50% of a source, the data from that source is discarded entirely. As a result, a substantial portion of the data is removed during the preparation stage, retaining only 67.68% of the original dataset. In the preprocessing stage, four rule-based steps are applied to filter the remaining data. First, the Data Length step removes overly short texts to ensure that each text contains sufficient informational content. Next, the Character Proportion step eliminates texts with a high proportion of noisy characters, such as English or Traditional Chinese characters and other irrelevant symbols. Finally, the Sensitive Words step and the Deduplication step are employed to remove toxic content and duplicate texts from the dataset. After the preprocessing stage, we obtain a high-quality Chinese text dataset totaling 3.8 TB. In the next stage, each text in this high-quality dataset is enriched with fine-grained annotations, including a quality score, domain labels, a toxicity score, and a toxicity label.

<div align="center">
<img src="./Pictures/data_statistics.png" width="100%" />
<br>
<em>Figure 2: The proportion of data removed from the originally collected data in each processing step. The gray bars represent the proportion of data removed in each step relative to the data remaining before that step, while the other colored bars represent the retained data and its proportion relative to the originally collected data.</em>
</div>
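
For intuition, here is a minimal sketch of what the four rule-based preprocessing steps can look like in code. The concrete length limit, character-ratio cut-off, and sensitive-word list are placeholders, not the parameters actually used by MDFG-tool, and the deduplication shown here only catches exact duplicates.

```python
import hashlib
import re

# Placeholder parameters; the thresholds and word list used by MDFG-tool may differ.
MIN_LENGTH = 200                    # Data Length: drop overly short texts
MAX_NOISY_RATIO = 0.3               # Character Proportion: limit non-Chinese characters
SENSITIVE_WORDS = {"example_word"}  # Sensitive Words: placeholder list
_seen_hashes = set()                # Deduplication: exact-match via MD5

CHINESE_CHAR = re.compile(r"[\u4e00-\u9fff]")

def keep(text: str) -> bool:
    """Return True if a text survives all four rule-based filters."""
    # 1. Data Length
    if len(text) < MIN_LENGTH:
        return False
    # 2. Character Proportion (share of non-Chinese characters)
    chinese = len(CHINESE_CHAR.findall(text))
    if 1 - chinese / max(len(text), 1) > MAX_NOISY_RATIO:
        return False
    # 3. Sensitive Words
    if any(word in text for word in SENSITIVE_WORDS):
        return False
    # 4. Deduplication (exact duplicates only in this sketch)
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()
    if digest in _seen_hashes:
        return False
    _seen_hashes.add(digest)
    return True
```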

#### Data Quality Distribution

<div align="center">
<img src="./Pictures/quality-evaluation.png" width="100%" />
<br>
<em>Figure 3: The Data Analysis on Quality Evaluation.</em>
</div>

<div align="center">
<img src="./Pictures/domain-distribution.png" width="100%" />
<br>
<em>Figure 4: Data Distribution Across Different Domains.</em>
</div>

<div align="center">
<img src="./Pictures/domain-distribution-per-quality.png" width="100%" />
<br>
<em>Figure 5: Table of Domain Distribution Across Quality Levels.</em>
</div>

#### Data Toxicity Analysis

<div align="center">
<img src="./Pictures/toxicity_distribution.png" width="100%" />
<br>
<em>Figure 6: The Distribution of Toxicity. A threshold of 0.99 was established, and samples with scores exceeding 0.99 were categorized as toxic.</em>
</div>

Additionally, through manual analysis of the toxicity scores, we find that data with scores above 0.99 can be reliably classified as toxic. By applying this empirical threshold, we filter our dataset and obtain a 3.16 GB toxic text subset comprising 1,632,620 samples. In Figure 7, we compare this subset with other publicly available toxicity datasets. In this table, OffensEval 2019\cite{offenseval}, AbusEval\cite{abuseval}, HatEval\cite{hateval}, RAL-E\cite{hatebert} and ToxiGen\cite{hartvigsen2022toxigen} are English toxicity datasets, while COLD\cite{deng2022cold}, ToxiCN\cite{toxicn}, SWSR\cite{jiang2022swsr} and CDial-Bias\cite{zhou2022towards} are Chinese toxicity datasets. OffensEval 2019, AbusEval, and HatEval are derived from Twitter and focus on offensive language, abusive language, and hate speech, respectively. The RAL-E dataset, sourced from a banned Reddit community, is a large-scale, unannotated English dataset. In contrast, ToxiGen is a toxicity dataset generated with GPT-3, targeting multiple groups. The COLD, SWSR, CDial-Bias, and ToxiCN datasets are collected from Chinese social media platforms including Zhihu, Weibo, and Tieba, with each dataset focusing on different groups. Compared to these datasets, ours features the largest collection of toxic data, and each text carries a toxicity score, providing researchers with a valuable resource for better optimizing and evaluating the safety of LLMs.

<div align="center">
<img src="./Pictures/toxicity-datasets-comparison.png" width="100%" />
<br>
<em>Figure 7: Table of Comparison of Different Toxicity Datasets.</em>
</div>
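
To make the threshold concrete, the following sketch extracts a toxic subset from an annotated JSONL file using the 0.99 cut-off described above. The `toxicity_score` field name and the file paths are assumptions for illustration, not the documented schema.

```python
import json

TOXICITY_THRESHOLD = 0.99  # empirical threshold described above

def extract_toxic_subset(in_path: str, out_path: str) -> int:
    """Copy all samples whose toxicity score exceeds the threshold to a separate file."""
    kept = 0
    with open(in_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            record = json.loads(line)
            # "toxicity_score" is an assumed field name.
            if record.get("toxicity_score", 0.0) > TOXICITY_THRESHOLD:
                fout.write(line)
                kept += 1
    return kept

if __name__ == "__main__":
    # File names are placeholders.
    count = extract_toxic_subset("data.jsonl", "toxic_subset.jsonl")
    print(f"extracted {count} toxic samples")
```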