CASIA-LM committed (verified) · Commit db70cca · 1 Parent(s): 7b804d0

Update README.md

Files changed (1): README.md (+109 -1)
@@ -41,9 +41,117 @@ We introduce a new toolchain, MDFG-tool (see Figure 1). We begin with the coarse

<div align="center"><img src="./structure.png" style="zoom:67%;" /></div>

### Data Analysis

#### Removal Rate for Different Stages

In order to provide a high-level overview of the preparation and preprocessing stages, the figure below shows the processing workflow and the removal rate of each step. It details both the ratio of data removed relative to the previous step and the absolute percentage of data remaining from the originally collected dataset, helping readers track the various processing stages from the raw data to the final high-quality dataset.
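As a quick sanity check, the relative and absolute retention figures reported in this section can be reproduced with a few lines of arithmetic. Only the two totals (6.6 TB and 3.8 TB) and the 67.68% preparation-stage retention come from the text; everything else below is derived:

```python
# Sizes reported in the text (in TB); the other values are derived from them.
raw_size = 6.6           # originally collected Chinese data
prepared_ratio = 0.6768  # fraction retained after the preparation stage
final_size = 3.8         # high-quality data after preprocessing

prepared_size = raw_size * prepared_ratio             # data kept after preparation (TB)
overall_remaining = final_size / raw_size             # absolute fraction of the raw data
preprocessing_remaining = final_size / prepared_size  # fraction surviving preprocessing

print(f"after preparation:     {prepared_size:.2f} TB")
print(f"kept overall:          {overall_remaining:.2%}")
print(f"kept in preprocessing: {preprocessing_remaining:.2%}")
```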

After collecting raw data from various sources, we initially obtain an original Chinese dataset totaling 6.6 TB. However, because some sources contain a significant amount of irrelevant and noisy content, a manual sampling analysis is performed during the preparation stage: if irrelevant text accounts for more than 50% of a source, the data from that source is discarded entirely. As a result, a substantial portion of the data is removed during the preparation stage, retaining only 67.68% of the original dataset. In the preprocessing stage, four rule-based steps are applied to filter the remaining data. First, the Data Length step removes overly short texts to ensure that each text contains sufficient informational content. Next, the Character Proportion step eliminates texts with a high percentage of noisy characters, such as English, Traditional Chinese characters, or other irrelevant symbols. Finally, the Sensitive Words step and the Deduplication step remove toxic content and duplicate texts from the dataset. After the preprocessing stage, we obtain a high-quality Chinese text dataset totaling 3.8 TB. In the next stage, each text in this high-quality dataset is enriched with fine-grained annotations, including a quality score, domain labels, a toxicity score, and a toxicity label.

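A minimal sketch of these four rule-based filters might look as follows. The thresholds, the sensitive-word list, and the hash-based exact deduplication are illustrative assumptions, not the exact rules used to build the dataset:

```python
import hashlib
import re

MIN_LENGTH = 200                  # assumed minimum character count per text
MAX_NOISE_RATIO = 0.3             # assumed cap on non-Chinese characters
SENSITIVE_WORDS = {"badword1", "badword2"}  # placeholder sensitive-word list

seen_hashes = set()               # content hashes seen so far, for deduplication

def keep_text(text: str) -> bool:
    # 1. Data Length: drop overly short texts.
    if len(text) < MIN_LENGTH:
        return False
    # 2. Character Proportion: drop texts dominated by non-Chinese characters.
    chinese = len(re.findall(r"[\u4e00-\u9fff]", text))
    if 1 - chinese / len(text) > MAX_NOISE_RATIO:
        return False
    # 3. Sensitive Words: drop texts containing toxic terms.
    if any(word in text for word in SENSITIVE_WORDS):
        return False
    # 4. Deduplication: drop exact duplicates via content hashing.
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    return True
```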
<div align="center">
<img src="/picture/data_statistics.png" width="67%" />
<br>
<em>The processing workflow and the removal rate of each stage.</em>
</div>

#### Data Quality Distribution

<table>
<tr>
<td>
<img src="/picture/quality_distribution.pdf" alt="Quality score distribution" style="width:100%;">
<br>
<em>The proportion of data in each quality score interval.</em>
</td>
<td>
<img src="/picture/human_acceptance.pdf" alt="Human acceptance rates" style="width:100%;">
<br>
<em>Human acceptance rates across quality score intervals.</em>
</td>
</tr>
</table>

**Quality Distribution** To investigate the quality distribution, we calculate the proportion of data in each quality score range of our ChineseWebText 2.0 dataset. The left figure above shows the proportion of data across the different quality score intervals. The data is primarily concentrated in the mid-range intervals [0.2, 0.3) and [0.3, 0.4), each contributing approximately 18%. Additionally, a significant proportion lies within the high-quality interval [0.9, 1.0), reflecting the presence of high-quality content in the dataset. In contrast, the lowest interval [0.1, 0.2) contains only a minimal fraction, indicating a limited amount of poor-quality data. Note that no samples fall in the range [0, 0.1), so this interval has been omitted. This quality distribution provides a valuable reference for LLM researchers, enabling them to select data based on desired quality thresholds.

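The interval proportions above can be computed with a simple bucketing pass over the per-text quality scores. The sample scores below are hypothetical; the real ones come from the dataset's quality annotations:

```python
from collections import Counter

def quality_histogram(scores, n_bins=10):
    """Map each interval [b/n, (b+1)/n) to its share of the scores."""
    counts = Counter(min(int(s * n_bins), n_bins - 1) for s in scores)
    total = len(scores)
    return {
        (b / n_bins, (b + 1) / n_bins): counts.get(b, 0) / total
        for b in range(n_bins)
    }

# Hypothetical quality scores for six texts.
scores = [0.25, 0.31, 0.35, 0.92, 0.95, 0.15]
hist = quality_histogram(scores)

# Selecting data by a desired quality threshold, as suggested in the text.
high_quality = [s for s in scores if s >= 0.9]
```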
**Human Acceptance Evaluation**
To validate the consistency between the quality evaluation and human judgments, the right figure above displays human acceptance rates across different score intervals, showing a clear positive trend: higher scores correlate with higher acceptance rates.
Specifically, the highest score interval [0.5, 1.0) achieves an acceptance rate exceeding 90%, while the lowest interval [0.1, 0.2) still maintains an acceptance rate of 80%. This trend highlights the overall high quality of the data.

In summary, the dataset is primarily concentrated in the mid-quality range, and higher scores correlate strongly with greater human acceptance. This alignment underscores the dataset's potential for high-quality applications, where consistency with human judgments of quality is essential.

#### Domain Distribution

To investigate the composition of our dataset across different domains, we conduct an in-depth analysis of the data distribution over twelve distinct domains: *book*, *dialogue*, *education*, *encyclopedia*, *finance*, *law*, *math*, *medicine*, *military*, *news*, *technology*, and *general*. This analysis considers two perspectives, the overall domain distribution and the quality-related domain distribution, providing comprehensive insights into the dataset's composition across domains.

**Overall Domain Distribution**

As illustrated in Figure 8, the sample counts and corresponding proportions across the various domains are presented. The Encyclopedia, General, and News domains dominate the dataset, comprising 33.43%, 32.63%, and 28.01% of the data, respectively. In contrast, the Math domain has the smallest share at 0.55%, yet it still includes over 8 million samples. Figure 9 complements this with a bar chart that provides a more intuitive visualization of the data distribution. This comprehensive domain distribution enables LLM researchers to select suitable subsets, facilitating the enhancement of a model's knowledge and capabilities in specific domains.

<!-- table and figure -->

**Quality-Related Domain Distribution**

In order to explore the domain distribution across different quality intervals, we perform an analysis of the quality-related domain distribution. Specifically, we calculate the proportion of each domain within every quality interval. The table below provides a detailed breakdown of these proportions. From the results, we observe that the distribution of domain data within each quality interval aligns closely with the overall distribution in the dataset. Based on these proportions, researchers can filter domain-specific data within targeted quality intervals, enabling the extraction of higher-quality domain-specific subsets.

*Domain Distribution Across Quality Levels (proportion in each quality interval, %).*

| Domain | 0.1-0.2 | 0.2-0.3 | 0.3-0.4 | 0.4-0.5 | 0.5-0.6 | 0.6-0.7 | 0.7-0.8 | 0.8-0.9 | 0.9-1.0 | **Total** |
|--------|---------|---------|---------|---------|---------|---------|---------|---------|---------|-----------|
| Book | 2.37 | 1.98 | 2.61 | 2.72 | 2.57 | 2.83 | 3.27 | 3.83 | 5.47 | 2.97 |
| Dialogue | 7.99 | 12.57 | 17.48 | 21.57 | 24.99 | 26.32 | 27.79 | 27.94 | 21.26 | 20.54 |
| Education | 8.43 | 11.70 | 15.00 | 15.33 | 15.96 | 16.11 | 15.44 | 14.82 | 13.93 | 14.44 |
| Encyclopedia | 17.98 | 27.67 | 31.51 | 34.92 | 38.66 | 40.03 | 39.52 | 37.29 | 29.48 | 33.43 |
| Finance | 1.57 | 3.16 | 4.82 | 6.67 | 8.81 | 10.48 | 11.86 | 12.23 | 9.65 | 7.26 |
| Law | 0.34 | 0.80 | 1.43 | 1.97 | 2.63 | 3.54 | 4.71 | 5.46 | 5.40 | 2.66 |
| Math | 0.42 | 0.41 | 0.48 | 0.58 | 0.65 | 0.68 | 0.71 | 0.67 | 0.49 | 0.55 |
| Medicine | 3.21 | 6.72 | 7.71 | 7.02 | 7.25 | 7.28 | 7.00 | 6.61 | 5.41 | 6.89 |
| Military | 0.12 | 0.24 | 0.50 | 0.82 | 0.95 | 1.05 | 1.23 | 1.41 | 1.41 | 0.81 |
| News | 18.78 | 19.60 | 24.33 | 28.55 | 32.62 | 34.78 | 34.72 | 33.61 | 31.19 | 28.01 |
| Technology | 16.75 | 18.52 | 18.44 | 20.42 | 22.26 | 22.42 | 21.45 | 19.66 | 15.32 | 19.47 |
| General | 53.44 | 44.00 | 35.62 | 30.36 | 26.52 | 24.57 | 24.12 | 25.29 | 31.20 | 32.63 |

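Selecting domain-specific data within a targeted quality interval can be sketched as below. The record layout with `domain` and `quality_score` fields is an assumed annotation format for illustration, not the dataset's exact schema:

```python
def select_subset(records, domain, low, high):
    """Return texts of a given domain whose quality score lies in [low, high)."""
    return [
        r["text"]
        for r in records
        if r["domain"] == domain and low <= r["quality_score"] < high
    ]

# Hypothetical annotated records.
records = [
    {"text": "...", "domain": "law", "quality_score": 0.93},
    {"text": "...", "domain": "law", "quality_score": 0.42},
    {"text": "...", "domain": "news", "quality_score": 0.95},
]

# Extract a higher-quality law subset from the interval [0.9, 1.0).
high_quality_law = select_subset(records, "law", 0.9, 1.0)
```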
### Data Toxicity Analysis

<div align="center">
<img src="/picture/toxicity_distribution.pdf" width="67%" />
<br>
<em>The distribution of toxicity: a threshold of 0.99 was established, and samples with scores exceeding 0.99 were categorised as toxic.</em>
</div>

During the training of LLMs, toxic data introduces harmful knowledge and information, which may lead a model to generate toxic outputs. In this section, we analyze the toxicity distribution within our dataset, which is depicted in the figure above; a higher toxicity score indicates greater toxicity. The majority of the data has a toxicity score of 0.0, signifying non-toxic, high-quality text. These non-toxic texts comprise 97.41% of the dataset.

Additionally, through manual analysis of the toxicity scores, we determine that data with scores above 0.99 can be classified as toxic. By applying this empirical threshold, we filter our dataset and obtain a 3.16 GB toxic text subset comprising 1,632,620 samples. In Table \ref{tab:comparison_toxic}, we compare this subset with other publicly available toxicity datasets. Among them, OffensEval 2019 [offenseval], AbusEval [abuseval], HatEval [hateval], RAL-E [hatebert] and ToxiGen [hartvigsen2022toxigen] are English toxicity datasets, while COLD [deng2022cold], ToxiCN [toxicn], SWSR [jiang2022swsr] and CDial-Bias [zhou2022towards] are Chinese toxicity datasets. OffensEval 2019, AbusEval, and HatEval are derived from Twitter and focus on offensive language, abusive language, and hate speech, respectively. The RAL-E dataset, sourced from a banned Reddit community, is a large-scale, unannotated English dataset. In contrast, ToxiGen is a toxicity dataset generated with GPT-3 and targets multiple groups. The COLD, SWSR, CDial-Bias, and ToxiCN datasets are collected from Chinese social media platforms including Zhihu, Weibo, and Tieba, with each dataset focusing on different groups. Compared to these datasets, ours features the largest collection of toxicity data, and each text carries a toxicity score, providing researchers with a valuable resource for optimizing and evaluating the safety of LLMs.

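Extracting such a toxic subset is a one-line filter over the toxicity annotations. The field name `toxicity_score` and the sample records below are illustrative assumptions:

```python
TOXICITY_THRESHOLD = 0.99  # empirical threshold from the manual analysis

def split_by_toxicity(records, threshold=TOXICITY_THRESHOLD):
    """Partition records into (non_toxic, toxic) by their toxicity score."""
    toxic = [r for r in records if r["toxicity_score"] > threshold]
    non_toxic = [r for r in records if r["toxicity_score"] <= threshold]
    return non_toxic, toxic

# Hypothetical annotated records.
records = [
    {"text": "...", "toxicity_score": 0.0},
    {"text": "...", "toxicity_score": 0.995},
    {"text": "...", "toxicity_score": 0.5},
]
non_toxic, toxic = split_by_toxicity(records)
```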
<!-- table -->

## Citation
Please cite the paper if you use the data or code in this repo.

```shell

```