We introduce a new toolchain, MDFG-tool (see Figure 1). We begin with the coarse-grained filtering module, which applies rule-based methods to clean the data, focusing on criteria such as text length and sensitive words to ensure data quality. After cleaning, we evaluate text quality using a BERT-based model, which assigns each text a quality score; by selecting an appropriate threshold, we can extract high-quality text data that meets our needs. Next, we use FastText for both single-label and multi-label classification of the cleaned data. In parallel, we conduct a toxicity assessment: a FastText model filters out toxic content and assigns a toxicity score to each text. This scoring system allows researchers to set thresholds for identifying and selecting harmful texts for further training.

<div align="center">
<img src="./picture/structure.png" width="67%" />
<br>
<em>Figure 1: The pipeline of MDFG-tool.</em>
</div>
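
The threshold-based selection step described above can be sketched as follows. This is a minimal illustration, not the actual MDFG-tool code: the scores are mock values standing in for the output of the BERT-based quality-evaluation model, and the threshold is hypothetical.

```python
# Illustrative sketch of threshold-based quality selection (hypothetical
# scores and threshold; the real pipeline reads scores from a BERT-based
# quality-evaluation model).

def select_high_quality(scored_texts, threshold):
    """Keep (text, score) pairs whose quality score meets the threshold."""
    return [(text, score) for text, score in scored_texts if score >= threshold]

scored = [
    ("a well-formed news article", 0.93),
    ("boilerplate navigation text", 0.12),
    ("a borderline forum post", 0.55),
]

high_quality = select_high_quality(scored, threshold=0.5)
print([text for text, _ in high_quality])
# ['a well-formed news article', 'a borderline forum post']
```

Raising the threshold trades dataset size for quality; the released data ships with the raw scores so each user can pick their own cut-off.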
After collecting raw data from various sources, we initially obtain an original Chinese dataset totaling 6.6 TB. However, because some sources contain a significant amount of irrelevant and noisy content, a manual sampling analysis is performed in the preparation stage: if irrelevant text accounts for more than 50% of a source, all data from that source is discarded entirely. As a result, a substantial portion of the data is removed during the preparation stage, retaining only 67.68% of the original dataset. In the preprocessing stage, four rule-based steps are applied to filter the remaining data. First, the Data Length step removes overly short texts to ensure that each text contains sufficient informational content. Next, the Character Proportion step eliminates texts with a high percentage of noisy characters, such as English, Traditional Chinese characters, or other irrelevant symbols. Finally, the Sensitive Words step and the Deduplication step remove toxic content and duplicate texts from the dataset. After the preprocessing stage, we obtain a high-quality Chinese text dataset totaling 3.8 TB. In the next stage, each text in this dataset is enriched with fine-grained annotations, including a quality score, domain labels, a toxicity score, and a toxicity label.

<div align="center">
<img src="./picture/data_statistics.png" width="67%" />
<br>
<em>Figure 2: The proportion of data removed from the originally collected data in each processing step. The gray bars represent the proportion of data removed in each step relative to the data remaining before that step, while the other colored bars represent the retained data and its proportion relative to the originally collected data.</em>
</div>
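
The four rule-based steps above can be sketched as simple predicates chained into one pass. This is a hedged illustration only: the minimum length, noise ratio, and blocklist below are hypothetical placeholders, not the thresholds used in MDFG-tool, and the real pipeline streams TB-scale data rather than holding lists in memory.

```python
# Illustrative sketch of the four preprocessing steps: Data Length,
# Character Proportion, Sensitive Words, Deduplication. All thresholds
# and the blocklist are hypothetical.

def long_enough(text, min_chars=50):
    # Data Length: drop overly short texts.
    return len(text) >= min_chars

def mostly_chinese(text, max_noise_ratio=0.3):
    # Character Proportion: drop texts dominated by non-Chinese characters
    # (a simple check against the CJK Unified Ideographs block).
    if not text:
        return False
    noise = sum(1 for ch in text if not "\u4e00" <= ch <= "\u9fff")
    return noise / len(text) <= max_noise_ratio

def no_sensitive_words(text, blocklist):
    # Sensitive Words: drop texts containing blocklisted terms.
    return not any(word in text for word in blocklist)

def deduplicate(texts):
    # Deduplication: keep only the first occurrence of each exact duplicate.
    seen, unique = set(), []
    for t in texts:
        if t not in seen:
            seen.add(t)
            unique.append(t)
    return unique

def preprocess(texts, blocklist):
    kept = [
        t for t in texts
        if long_enough(t) and mostly_chinese(t) and no_sensitive_words(t, blocklist)
    ]
    return deduplicate(kept)
```

Each predicate mirrors one step of Figure 2, so the fraction of texts rejected by each function can be logged to reproduce that per-step removal breakdown.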
#### Data Quality Distribution

<div align="center">
<img src="./picture/quality-evaluation.png" width="67%" />
<br>
<em>Figure 3: The Data Analysis on Quality Evaluation.</em>
</div>
<div align="center">
<img src="./picture/domain-distribution.png" width="67%" />
<br>
<em>Figure 4: Data Distribution Across Different Domains.</em>
</div>
<div align="center">
<img src="./picture/domain-distribution-per-quality.png" width="67%" />
<br>
<em>Figure 5: Table of Domain Distribution Across Quality Levels.</em>
</div>
#### Data Toxicity Analysis

<div align="center">
<img src="./picture/toxicity_distribution.pdf" width="67%" />
<br>
<em>Figure 6: The Distribution of Toxicity. A threshold of 0.99 was established, and samples with scores exceeding 0.99 were categorised as toxic.</em>
</div>
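
The categorisation rule in Figure 6 amounts to a single thresholding step over the FastText toxicity scores. A minimal sketch, with mock scores in place of real model output:

```python
# Minimal sketch of the labelling rule from Figure 6: samples with toxicity
# scores exceeding 0.99 are categorised as toxic. Scores here are mock
# values; the pipeline takes them from a FastText toxicity model.

TOXICITY_THRESHOLD = 0.99

def toxicity_label(score, threshold=TOXICITY_THRESHOLD):
    return "toxic" if score > threshold else "non-toxic"

print([toxicity_label(s) for s in (0.3, 0.991, 0.99)])
# ['non-toxic', 'toxic', 'non-toxic']
```

Because both the score and the resulting label are released with each text, users who want a stricter or looser definition of toxicity can re-label the data with their own threshold.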