---
license: odc-by
---
# TxT360: A Top-Quality LLM Pre-training Dataset Requires the Perfect Blend
<center><img src="llm360_logo(1).png" alt="k2 eval table" /></center>

## We introduce TxT360 (Trillion eXtracted Text): the first dataset to globally deduplicate 99 CommonCrawl snapshots and 14 commonly used non-web data sources (e.g., FreeLaw, PG-19), providing pretraining teams with a recipe to easily adjust data weighting, obtain the largest high-quality open-source dataset, and train the most performant models.

# TxT360 Compared to Common Pretraining Datasets
| Data Source               | TxT360 | FineWeb | RefinedWeb | RedPajamaV2 | C4 | Dolma | RedPajamaV1 | The Pile           |
|---------------------------|--------|---------|------------|-------------|----|-------|-------------|--------------------|
| CommonCrawl Snapshots      | 99     | 96      | 90         | 84          | 1  | 24    | 5           | 0.6% of 74         |
| Papers                     | 5 Sources | -     | -          | -           | -  | 1 Source | 1 Source  | 4 Sources          |
| Wikipedia                  | 310+ Languages | - | -        | -           | -  | Included | Included  | English Only       |
| FreeLaw                    | Included | -      | -          | -           | -  | -     | -           | Included            |
| DM Math                    | Included | -      | -          | -           | -  | -     | -           | Included            |
| USPTO                      | Included | -      | -          | -           | -  | -     | -           | Included            |
| PG-19                      | Included | -      | -          | -           | -  | Included | Included  | Included            |
| HackerNews                 | Included | -      | -          | -           | -  | -     | -           | Included            |
| Ubuntu IRC                 | Included | -      | -          | -           | -  | -     | -           | Included            |
| EuroParl                   | Included | -      | -          | -           | -  | -     | -           | Included            |
| StackExchange              | Included | -      | -          | -           | -  | -     | -           | Included            |
| Code                       | *     | -      | -          | -           | -  | Included | Included  | Included            |

  * TxT360 does not include code. This decision was made due to the perceived low duplication of code with the other data sources.


Complete details on the dataset can be found in our blog post [here](https://huggingface.co/spaces/LLM360/TxT360).

## TxT360 Performance
To evaluate the training efficiency of our dataset, we sampled 1.5T tokens from both FineWeb and TxT360 (using the aforementioned weighting) and conducted a training ablation on an 8x8B Mixture-of-Experts architecture, similar to Mixtral. We compared the learning curves by tracking training loss, validation scores, and performance across a wide array of diverse evaluation benchmarks. The validation set was sampled independently from SlimPajama. Note that this experiment was conducted on a slightly earlier version of the dataset.
<center><img src="txttofineweb.png" alt="comparison" /></center>


## Initial Data Representation
To produce TxT360, we designed a comprehensive data processing pipeline that accounts for the nuances of both web and curated datasets. The pipeline provides a unified framework for processing both data types, making it convenient for users to adapt and fine-tune it for their own use cases.

Web datasets are inherently noisy and varied. The TxT360 pipeline implements sophisticated filtering and deduplication techniques to clean and remove redundancies while preserving data integrity.

Curated datasets are typically structured and consistently formatted, but each comes with its own formatting quirks. TxT360 filters these sources with source-specific steps to maintain their integrity while providing seamless integration into the larger dataset. Both data source types are globally deduplicated together, resulting in ~5T tokens of high-quality data. The table below shows the source distribution of TxT360 tokens.

We further highlight the importance of mixing the datasets together with the right blend. The raw distribution of the deduplicated dataset is suboptimal on its own; a simple working recipe is provided in the studies section of the blog post. This recipe produces a dataset of 15T+ tokens, the largest high-quality open-source pre-training dataset.

| Data Source     | Raw Data Size | Token Count | Information Cut-Off Date |
|-----------------|---------------|-------------|--------------------------|
| CommonCrawl     | 9.2 TB         | 4.83T       | 2024-30                  |
| Papers          | 712 GB        | 154.96B     | Q4 2023                  |
| Wikipedia       | 199 GB        | 35.975B       | -                        |
| Freelaw         | 71 GB         | 16.7B       | Q1 2024                  |
| DM Math         | 22 GB         | 5.23B       | -                        |
| USPTO           | 45 GB         | 4.95B       | Q3 2024                  |
| PG-19           | 11 GB         | 2.63B       | -                        |
| HackerNews      | 4.2 GB        | 1.05B       | Q4 2023                  |
| Ubuntu IRC      | 6 GB        | 1.89B       | Q3 2024                  |
| Europarl        | 6.1 GB        | 1.96B       | -                        |
| StackExchange   | 81 GB         | 27.76B       | Q4 2023                  |
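
To illustrate one way to realize such a blend, the sketch below uses the Hugging Face `datasets` library to stream two subsets and interleave them with sampling probabilities. The repository paths follow the Dataset Structure section further down, and the probabilities are placeholders rather than the published recipe.

```python
# Illustrative blending sketch: the weights below are placeholders,
# not the published TxT360 recipe (see the blog post for that).
from datasets import interleave_datasets, load_dataset

web = load_dataset(
    "LLM360/TxT360",
    data_files="data/common-crawl/**/*.jsonl.gz",
    split="train",
    streaming=True,
)
papers = load_dataset(
    "LLM360/TxT360",
    data_files="data/arxiv/**/*.jsonl",
    split="train",
    streaming=True,
)

# Up-/down-weight sources by sampling probability (hypothetical values).
blend = interleave_datasets(
    [web, papers],
    probabilities=[0.95, 0.05],
    seed=42,
    stopping_strategy="all_exhausted",
)
```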

The [TxT360](https://huggingface.co/spaces/LLM360/TxT360) blog post provides all the details behind how we approached and implemented the following features:

## CommonCrawl Data Filtering
A complete discussion of how the 99 Common Crawl snapshots were filtered, including a comparison to previous filtering techniques (e.g., Dolma, DataTrove, RedPajamaV2).
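
As a rough illustration of the kind of heuristic filtering applied to web documents, the sketch below thresholds a few of the quality signals that are stored with each released document (see the Data Schema section). The thresholds are hypothetical placeholders, not the values used in the actual pipeline.

```python
# Illustrative heuristic filter, not the exact published pipeline.
# The field names mirror the `quality_signals` stored with each document
# (see Data Schema below); the thresholds are hypothetical placeholders.
def passes_heuristics(quality_signals: dict) -> bool:
    qs = quality_signals
    return (
        50 <= qs["word_count"] <= 100_000
        and 3.0 <= qs["mean_word_length"] <= 10.0
        and qs["fraction_of_duplicate_lines"] < 0.30
        and qs["fraction_of_lines_ending_with_ellipsis"] < 0.30
        and qs["fraction_of_words_with_alpha_character"] > 0.80
        and not qs["has_lorem_ipsum"]
    )
```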

## Curated Source Filtering
Each data source was filtered individually with respect to the underlying data. Full details and discussion on how each source was filtered are covered.

## Global Deduplication
After the web and curated sources were filtered, all sources were globally deduplicated together to create TxT360. The tips and tricks behind the deduplication process are included.
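
As a toy illustration of what global deduplication across sources means, the sketch below keeps only the first occurrence of each exactly matching (whitespace-normalized) text. This is not the TxT360 implementation, which is considerably more involved; it is only meant to convey the idea.

```python
# Toy illustration of exact global deduplication across sources.
# The real TxT360 pipeline is described in the blog post; this only conveys the idea.
import hashlib

def dedup_stream(documents):
    """Yield each document the first time its normalized text is seen."""
    seen = set()
    for doc in documents:
        normalized = " ".join(doc["text"].split()).lower()
        key = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            yield doc
```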

## Dataset Structure
The dataset is organized under the ```data``` directory, with each subdirectory representing a data subset. 
Below is an overview of the structure and organization of these subsets:
```
β”œβ”€β”€ data
    β”œβ”€β”€ common-crawl  # data subset
        β”œβ”€β”€ CC-MAIN-2013-20  # common-crawl dumps
            β”œβ”€β”€ 1-1  # number of duplicates
                β”œβ”€β”€ chunk_000_0000.jsonl.gz
                β”œβ”€β”€ ...
            β”œβ”€β”€ 2-5
                β”œβ”€β”€ chunk_000_0000.jsonl.gz
                β”œβ”€β”€ ...
            β”œβ”€β”€ ...
        β”œβ”€β”€ CC-MAIN-2013-48
            β”œβ”€β”€ 1-1
                β”œβ”€β”€ chunk_000_0000.jsonl.gz
                β”œβ”€β”€ ...
            β”œβ”€β”€ ...
        β”œβ”€β”€ ...
    β”œβ”€β”€ dm_math
        β”œβ”€β”€ full_data_1
            β”œβ”€β”€ 0_11255.jsonl
            β”œβ”€β”€ ...
        β”œβ”€β”€ full_data_2
            β”œβ”€β”€ 10000_11255.jsonl
            β”œβ”€β”€ ...
    β”œβ”€β”€ arxiv
        β”œβ”€β”€ 1-1  # number of duplicates
            β”œβ”€β”€ 0_171.jsonl
            β”œβ”€β”€ ...
        β”œβ”€β”€ 2-5
            β”œβ”€β”€ 0_2.jsonl
            β”œβ”€β”€ ...
        β”œβ”€β”€ ...
    β”œβ”€β”€ europarl
        β”œβ”€β”€ 1-1  # number of duplicates
            β”œβ”€β”€ 0_6.jsonl
            β”œβ”€β”€ ...
        β”œβ”€β”€ 2-5
            β”œβ”€β”€ 0_0.jsonl
            β”œβ”€β”€ ...
        β”œβ”€β”€ ...
    β”œβ”€β”€ ...
```

### Common Crawl (common-crawl)
Each subdirectory under ```common-crawl``` corresponds to a specific dump of the dataset. 
Inside each dump folder, the data is further segmented into buckets based on the number of duplicates identified during deduplication:

- ```1-1```: Contains documents with no duplicates across the dataset.
- ```2-5```, ```6-10```, ```11-100```, ```101-1000```, ```1001-30000000```: Each contains documents that fall within the respective range of duplicates.

Example path: ```data/common-crawl/CC-MAIN-2013-20/1-1/chunk_000_0000.jsonl.gz```

### DM Math (dm_math)
The ```dm_math``` subset is divided into two subfolders to comply with the limit of 10,000 files per folder in a HuggingFace Repository:

Example path: ```data/dm_math/full_data_1/0_11255.jsonl```

### Others
Similar to common-crawl, the other curated data subsets, such as arxiv, europarl, etc., are organized by the number of duplicates:
- ```1-1```, ```2-5```, ```6-10```, ```11-100```, ```101-1000```, ```1001-inf```

Note that some data subsets might not include the folder ```1001-inf``` (```1001-30000000``` in ```common-crawl```), or might contain only a few documents in it, because documents duplicated more than 1,000 times are rare.
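
Given this layout, a single dump and duplicate bucket can be streamed directly with the Hugging Face `datasets` library by pointing `data_files` at the corresponding path. The sketch below assumes this repository layout; adjust the glob for other subsets.

```python
# Sketch: stream the duplicate-free bucket of one Common Crawl dump.
from datasets import load_dataset

ds = load_dataset(
    "LLM360/TxT360",
    data_files="data/common-crawl/CC-MAIN-2013-20/1-1/*.jsonl.gz",
    split="train",
    streaming=True,  # avoids downloading the whole subset up front
)

for doc in ds.take(3):
    print(doc["meta"]["url"], doc["meta"]["quality_signals"]["word_count"])
```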

## Data Schema

### Common Crawl (common-crawl)
The documents in common-crawl follow the schema:
```python
{'text': '...',  # texts in the document
 'meta': 
    {
        'lang': 'en',  # top 1 language detected by fastText model
        'lang_score': 0.912118136882782,  # language score for the detected language
        'url': 'http://www.shopgirljen.com/2017/10/lg-celebrates-5-years-of-lg-oled-tv.html',  # the url that raw webpage is scraped from
        'timestamp': '2024-07-24T00:56:12Z',  # timestamp from Common Crawl raw data
        'cc-path': 'crawl-data/CC-MAIN-2024-30/segments/1720763518130.6/warc/CC-MAIN-20240723224601-20240724014601-00300.warc.gz',  # the path of the document in the raw Common Crawl
        'quality_signals':
            {
                'url_score': 0.0,
                'fraction_of_duplicate_lines': 0.0,
                'fraction_of_characters_in_duplicate_lines': 0.0,
                'fraction_of_duplicate_paragraphs': 0.0,
                'fraction_of_characters_in_duplicate_paragraphs': 0.0,
                'fraction_of_characters_in_most_common_ngram': [[2, 0.03626373626373627],
                    [3, 0.03296703296703297],
                    [4, 0.01868131868131868]],
                'fraction_of_characters_in_duplicate_ngrams': [[5, 0.01868131868131868],
                    [6, 0.01868131868131868],
                    [7, 0.01868131868131868],
                    [8, 0.0],
                    [9, 0.0],
                    [10, 0.0]],
                'fraction_of_words_corrected_in_lines': 0.0,
                'fraction_of_lines_ending_with_ellipsis': 0.0,
                'fraction_of_lines_starting_with_bullet_point': 0.0,
                'fraction_of_lines_with_toxic_words': 0.0,
                'num_of_lines_with_toxic_words': 0,
                'num_of_toxic_words': 0,
                'word_count': 358,
                'mean_word_length': 5.083798882681564,
                'num_of_sentences': 19,
                'symbol_to_word_ratio': 0.0,
                'fraction_of_words_with_alpha_character': 1.0,
                'num_of_stop_words': 82,
                'num_of_paragraphs': 0,
                'has_curly_bracket': False,
                'has_lorem_ipsum': False,
                'orig_text_has_dup_lines': False
                },
        'dup_signals': 
            {
                'dup_doc_count': 166,  # the number of duplicated documents
                'dup_dump_count': 57,  # the number of dumps that the duplicated documents are from
                'dup_details':   # the dump distribution of the duplicated documents
                    {
                        '2024-30': 2,
                        '2024-26': 1,
                        '2024-22': 1,
                        ...
                        }
                }
        },
 'subset': 'commoncrawl'}
```

Please note that documents without duplicates, located in folders `*/1-1/`, have an empty `dup_signals` field. 
Additionally, some documents with duplicates might include an `unknown` entry within the `dup_details`. 
One example could be:
```python
{'text': '...',  # texts in the document
 'meta': 
    {
        ...
        'dup_signals': 
            {
                'dup_doc_count': 7,
                'dup_dump_count': 3,
                'dup_details':
                    {
                        'unknown': 4,
                        '2024-30': 1,
                        '2024-26': 1,
                        '2024-22': 1,
                        }
                }
        },
 'subset': 'commoncrawl'}
```
This occurs because the distribution of duplicates across dumps was not recorded in the early stages of our deduplication process, and only the total count of duplicate documents (`dup_doc_count`) was maintained. 
Due to the high cost of rerunning the deduplication, we have opted to label these distributions as `unknown` when integrating them with other documents for which duplicate distribution data is available.
In these cases, the `dup_dump_count` is calculated excluding the `unknown` entry.
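
A small sketch based on the schema above shows how this convention can be checked per document; `check_dup_signals` is a hypothetical helper, not part of the dataset tooling.

```python
# Hypothetical helper: verify that `dup_dump_count` matches the number of
# dumps listed in `dup_details`, excluding the 'unknown' entry.
def check_dup_signals(meta: dict) -> bool:
    dup = meta.get("dup_signals") or {}
    if not dup:  # documents in */1-1/ folders have no duplicates recorded
        return True
    known_dumps = [d for d in dup.get("dup_details", {}) if d != "unknown"]
    return dup["dup_dump_count"] == len(known_dumps)
```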

# Citation

**BibTeX:**

```bibtex
@misc{txt360data2024,
      title={TxT360: A Top-Quality LLM Pre-training Dataset Requires the Perfect Blend}, 
      author={Liping Tang and Nikhil Ranjan and Omkar Pangarkar and Xuezhi Liang and Zhen Wang and Li An and Bhaskar Rao and Linghao Jin and Huijuan Wang and Zhoujun Cheng and Suqi Sun and Cun Mu and Victor Miller and Xuezhe Ma and Yue Peng and Zhengzhong Liu and Eric P. Xing},
      year={2024}
}
```