Modalities: Text · Formats: json · Libraries: Datasets, Dask · License: apache-2.0

CASIA-LM committed · Commit 56eab3a · verified · 1 Parent(s): 5aad68f

Update README.md

Files changed (1): README.md (+46 −60)
---
license: apache-2.0
size_categories:
- n>1T
---

  # ChineseWebText 2.0: Large-Scale High-quality Chinese Web Text with Multi-dimensional and fine-grained information

This directory contains the ChineseWebText 2.0 dataset and a new tool-chain, MDFG-tool, for constructing large-scale, high-quality Chinese datasets with multi-dimensional and fine-grained information. Our ChineseWebText 2.0 code is publicly available on GitHub [(here)](https://github.com/CASIA-LM/ChineseWebText2.0).

## ChineseWebText2.0

We introduce a new toolchain, MDFG-tool (see Figure 1). We begin with the coarse-grained filtering module, which applies rule-based methods to clean the data, focusing on criteria such as text length and sensitive words to ensure data quality. After cleaning, we evaluate the text quality using a BERT-based model. This process generates a quality score, and by selecting an appropriate threshold, we can extract high-quality text data that meets our needs. Next, we use FastText for both single-label and multi-label classification of the cleaned data. Meanwhile, we conduct a toxicity assessment: a FastText model is used to filter out toxic content and assign a toxicity score to each text. This scoring system allows researchers to set thresholds for identifying and selecting harmful texts for further training.

<div align="center">
<img src="/picture/structure.png" width="67%" />
<br>
<em>Figure 1: The pipeline of MDFG-tool.</em>
</div>

### Data Analysis

After collecting raw data from various sources, we initially obtain an original Chinese dataset totaling 6.6 TB. However, because some sources contain a significant amount of irrelevant and noisy content, a manual sampling analysis is performed in the preparation stage. If irrelevant text accounts for more than 50% of a source, the data from that source is discarded entirely. As a result, a substantial portion of the data is removed during the preparation stage, retaining only 67.68% of the original dataset. In the preprocessing stage, four rule-based steps are implemented to filter the remaining data. First, the Data Length step removes overly short texts to ensure that each text contains sufficient informational content. Next, the Character Proportion step eliminates texts with a high percentage of noisy characters, such as English, Traditional Chinese characters, or other irrelevant symbols. Finally, the Sensitive Words step and the Deduplication step are employed to remove toxic content and duplicate texts from the dataset. After the preprocessing stage, we produce a high-quality Chinese text dataset totaling 3.8 TB. In the next stage, each text in this high-quality dataset is enriched with fine-grained annotations, including a quality score, domain labels, a toxicity score, and a toxicity label.
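
The four rule-based steps above can be sketched in a few lines. This is an illustrative re-implementation, not the MDFG-tool code: the length threshold, noise-ratio threshold, and sensitive-word list are placeholder values.

```python
import hashlib

# Placeholder blacklist; the real pipeline uses a curated sensitive-word list.
SENSITIVE_WORDS = {"敏感词A", "敏感词B"}

def is_chinese(ch: str) -> bool:
    """True for characters in the CJK Unified Ideographs block."""
    return "\u4e00" <= ch <= "\u9fff"

def keep_text(text: str, seen_hashes: set,
              min_length: int = 100, max_noise_ratio: float = 0.3) -> bool:
    # 1. Data Length: drop overly short texts.
    if len(text) < min_length:
        return False
    # 2. Character Proportion: drop texts dominated by non-Chinese characters.
    noise = sum(1 for ch in text if not is_chinese(ch))
    if noise / len(text) > max_noise_ratio:
        return False
    # 3. Sensitive Words: drop texts containing blacklisted terms.
    if any(word in text for word in SENSITIVE_WORDS):
        return False
    # 4. Deduplication: drop exact duplicates via a content hash.
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    return True
```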

<div align="center">
<img src="/picture/data_statistics.png" width="67%" />
<br>
<em>Figure 2: The proportion of data removed from the originally collected data in each processing step. The gray bars represent the proportion of data removed in each step relative to the data remaining before that step, while the other colored bars represent the retained data and its proportion relative to the originally collected data.</em>
</div>

#### Data Quality Distribution

<div align="center">
<img src="/picture/quality-evaluation.png" width="67%" />
<br>
<em>Figure 3: The Data Analysis on Quality Evaluation.</em>
</div>

**Quality Distribution** To investigate the quality distribution, we calculate the data proportions across different quality score ranges in our ChineseWebText 2.0 dataset. Figure 3(a) shows the proportion of data across the quality score intervals. The data is primarily concentrated in the mid-range intervals within [0.2, 0.4), each contributing approximately 18%. Additionally, a significant proportion lies within the high-quality interval [0.9, 1.0), reflecting the presence of high-quality content in the dataset. In contrast, the lowest interval [0.1, 0.2) contains only a minimal fraction, indicating a limited amount of poor-quality data. Note that no quality scores fall in the range [0, 0.1), so this interval has been omitted. This quality distribution provides a valuable reference for LLM researchers, enabling them to select data based on desired quality thresholds.
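
As a minimal illustration, such a distribution can be recomputed from the released per-text quality scores. The scores below are toy values, and the 0.1-wide bins mirror the intervals described above.

```python
from collections import Counter

def quality_histogram(scores):
    """Fraction of texts falling in each 0.1-wide quality interval [k/10, (k+1)/10)."""
    bins = Counter(min(int(s * 10), 9) for s in scores)  # clamp 1.0 into the top bin
    total = len(scores)
    # Map each interval's lower bound to its share of the data.
    return {k / 10: bins[k] / total for k in sorted(bins)}

# Toy scores standing in for the per-text quality annotations.
hist = quality_histogram([0.25, 0.35, 0.31, 0.95])
```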

**Human Acceptance Evaluation**
To validate the consistency between quality evaluation and human judgments, Figure 3(b) displays human acceptance rates across different score intervals, showing a clear positive trend: higher scores correlate with higher acceptance rates.
Specifically, the highest score interval [0.5, 1.0) achieves an acceptance rate exceeding 90%, while the lowest interval [0.1, 0.2) still maintains an acceptance rate of 80%. This trend highlights the overall high quality of the data.

In summary, the dataset is primarily concentrated in the mid-quality range, with higher scores strongly correlating to greater human acceptance. This alignment underscores the dataset's potential for high-quality applications, where consistency in human-like quality is essential.

As illustrated in Figure 4, the sample counts and corresponding proportions across various domains are presented. The Encyclopedia, General, and News domains dominate the dataset, comprising 33.43%, 32.63%, and 28.01% of the data, respectively. In contrast, the Math domain has the smallest share at 0.55%, yet it still includes over 8 million samples. The accompanying bar chart offers a more intuitive visualization of this distribution. This comprehensive domain distribution enables LLM researchers to select suitable datasets, facilitating the enhancement of the model's knowledge and capabilities in specific domains.

<div align="center">
<img src="/picture/domain-distribution.png" width="67%" />
<br>
<em>Figure 4: Data Distribution Across Different Domains.</em>
</div>
#### Quality-Related Domain Distribution

In order to explore the domain distribution across different quality intervals, we perform an analysis of the quality-related domain distribution. Specifically, we calculate the proportions of various domains within each quality interval. Figure 5 provides a detailed breakdown of domain proportions across the quality intervals. From the results, we observe that the distribution of domain data within each quality interval aligns closely with its overall distribution in the dataset. Based on the proportions in Figure 5, researchers can filter domain-specific data within targeted quality intervals, enabling the extraction of higher-quality domain-specific data subsets.

<em>Figure 5: Domain Distribution Across Quality Levels (proportion in each quality interval, %).</em>

| Domain | 0.1-0.2 | 0.2-0.3 | 0.3-0.4 | 0.4-0.5 | 0.5-0.6 | 0.6-0.7 | 0.7-0.8 | 0.8-0.9 | 0.9-1.0 | Total |
|---|---|---|---|---|---|---|---|---|---|---|
| Book | 2.37 | 1.98 | 2.61 | 2.72 | 2.57 | 2.83 | 3.27 | 3.83 | 5.47 | 2.97 |
| Dialogue | 7.99 | 12.57 | 17.48 | 21.57 | 24.99 | 26.32 | 27.79 | 27.94 | 21.26 | 20.54 |
| Education | 8.43 | 11.70 | 15.00 | 15.33 | 15.96 | 16.11 | 15.44 | 14.82 | 13.93 | 14.44 |
| Encyclopedia | 17.98 | 27.67 | 31.51 | 34.92 | 38.66 | 40.03 | 39.52 | 37.29 | 29.48 | 33.43 |
| Finance | 1.57 | 3.16 | 4.82 | 6.67 | 8.81 | 10.48 | 11.86 | 12.23 | 9.65 | 7.26 |
| Law | 0.34 | 0.80 | 1.43 | 1.97 | 2.63 | 3.54 | 4.71 | 5.46 | 5.40 | 2.66 |
| Math | 0.42 | 0.41 | 0.48 | 0.58 | 0.65 | 0.68 | 0.71 | 0.67 | 0.49 | 0.55 |
| Medicine | 3.21 | 6.72 | 7.71 | 7.02 | 7.25 | 7.28 | 7.00 | 6.61 | 5.41 | 6.89 |
| Military | 0.12 | 0.24 | 0.50 | 0.82 | 0.95 | 1.05 | 1.23 | 1.41 | 1.41 | 0.81 |
| News | 18.78 | 19.60 | 24.33 | 28.55 | 32.62 | 34.78 | 34.72 | 33.61 | 31.19 | 28.01 |
| Technology | 16.75 | 18.52 | 18.44 | 20.42 | 22.26 | 22.42 | 21.45 | 19.66 | 15.32 | 19.47 |
| General | 53.44 | 44.00 | 35.62 | 30.36 | 26.52 | 24.57 | 24.12 | 25.29 | 31.20 | 32.63 |
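
A subset for one domain within a target quality interval can be pulled out with a simple filter. This is a sketch over toy records: the field names `domain` and `quality_score` are assumptions about the per-text annotations described above, not the dataset's confirmed schema.

```python
def filter_domain_in_interval(records, domain, lo, hi):
    """Select records of one domain whose quality score lies in [lo, hi)."""
    return [r for r in records
            if r["domain"] == domain and lo <= r["quality_score"] < hi]

# Toy stand-ins for annotated records.
sample = [
    {"text": "新闻A", "domain": "News", "quality_score": 0.85},
    {"text": "新闻B", "domain": "News", "quality_score": 0.42},
    {"text": "百科A", "domain": "Encyclopedia", "quality_score": 0.91},
]
high_quality_news = filter_domain_in_interval(sample, "News", 0.8, 0.9)
```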

#### Data Toxicity Analysis

<div align="center">
<img src="/picture/toxicity_distribution.pdf" width="67%" />
<br>
<em>Figure 6: The Distribution of Toxicity. A threshold of 0.99 was established, and samples with scores exceeding 0.99 were categorised as toxic.</em>
</div>

During the training procedure of LLMs, toxic data introduces harmful knowledge and information, which may lead the model to generate toxic outputs. In this section, we analyze the toxicity distribution within our dataset. Figure 6 depicts the toxicity distribution of the dataset; a higher toxicity score indicates greater toxicity. It is evident that the majority of the data has a toxicity score of 0.0, signifying non-toxic, high-quality data. These non-toxic texts comprise 97.41% of the dataset.

Additionally, through manual analysis of the toxicity scores, we find that data with scores above 0.99 can be classified as toxic. By applying this empirical threshold, we filter our dataset and obtain a 3.16 GB toxic text subset comprising 1,632,620 samples. In Figure 7, we compare this subset with other publicly available toxicity datasets. In this table, OffensEval 2019\cite{offenseval}, AbusEval\cite{abuseval}, HatEval\cite{hateval}, RAL-E\cite{hatebert} and ToxiGen\cite{hartvigsen2022toxigen} are English toxicity datasets, while COLD\cite{deng2022cold}, ToxiCN\cite{toxicn}, SWSR\cite{jiang2022swsr} and CDial-Bias\cite{zhou2022towards} are Chinese toxicity datasets. The OffensEval 2019, AbusEval, and HatEval datasets are derived from Twitter and focus on the analysis of offensive language, abusive language, and hate speech, respectively. The RAL-E dataset, sourced from a banned Reddit community, is a large-scale, unannotated English dataset. In contrast, ToxiGen is a toxicity dataset generated using GPT-3, targeting multiple groups. The COLD, SWSR, CDial-Bias, and ToxiCN datasets are collected from Chinese social media platforms including Zhihu, Weibo, and Tieba, with each dataset focusing on different groups. Compared to these datasets, ours features the largest collection of toxicity data, and each text carries a toxicity score, providing researchers with a valuable resource for optimizing and evaluating the safety of LLMs.

<div align="center">
<img src="toxicity-datasets-comparison.png" width="67%" />
<br>
<em>Figure 7: Table of Comparison of Different Toxicity Datasets.</em>
</div>
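
The 0.99 threshold described above amounts to a simple split of the corpus. This is a sketch over toy records; `toxicity_score` is an assumed field name for the per-text toxicity annotation.

```python
def split_by_toxicity(records, threshold=0.99):
    """Separate texts into a toxic subset (score > threshold) and the remainder."""
    toxic = [r for r in records if r["toxicity_score"] > threshold]
    clean = [r for r in records if r["toxicity_score"] <= threshold]
    return toxic, clean

# Toy stand-ins for annotated records.
corpus = [
    {"text": "正常文本", "toxicity_score": 0.0},
    {"text": "有害文本", "toxicity_score": 0.995},
]
toxic_subset, clean_subset = split_by_toxicity(corpus)
```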
  ## Citation

Please cite the paper if you use the data or code in this repo.