guipenedo (HF staff) committed
Commit 875453d · 1 Parent(s): ab3135e

addressed colin's comments

Files changed (3)
  1. bibliography.bib +64 -0
  2. data/plots/score_by_dump.json +0 -0
  3. index.html +33 -31
bibliography.bib CHANGED
@@ -127,3 +127,67 @@
127
  year={1912},
128
  publisher={Wiley Online Library}
129
  }
130
+ @misc{albalak2024survey,
131
+ title={A Survey on Data Selection for Language Models},
132
+ author={Alon Albalak and Yanai Elazar and Sang Michael Xie and Shayne Longpre and Nathan Lambert and Xinyi Wang and Niklas Muennighoff and Bairu Hou and Liangming Pan and Haewon Jeong and Colin Raffel and Shiyu Chang and Tatsunori Hashimoto and William Yang Wang},
133
+ year={2024},
134
+ eprint={2402.16827},
135
+ archivePrefix={arXiv},
136
+ primaryClass={cs.CL}
137
+ }
138
+ @misc{longpre2023pretrainers,
139
+ title={A Pretrainer's Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity},
140
+ author={Shayne Longpre and Gregory Yauney and Emily Reif and Katherine Lee and Adam Roberts and Barret Zoph and Denny Zhou and Jason Wei and Kevin Robinson and David Mimno and Daphne Ippolito},
141
+ year={2023},
142
+ eprint={2305.13169},
143
+ archivePrefix={arXiv},
144
+ primaryClass={cs.CL}
145
+ }
146
+ @misc{wenzek2019ccnet,
147
+ title={CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data},
148
+ author={Guillaume Wenzek and Marie-Anne Lachaux and Alexis Conneau and Vishrav Chaudhary and Francisco Guzmán and Armand Joulin and Edouard Grave},
149
+ year={2019},
150
+ eprint={1911.00359},
151
+ archivePrefix={arXiv},
152
+ primaryClass={cs.CL}
153
+ }
154
+ @misc{soldaini2024dolma,
155
+ title={Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research},
156
+ author={Luca Soldaini and Rodney Kinney and Akshita Bhagia and Dustin Schwenk and David Atkinson and Russell Authur and Ben Bogin and Khyathi Chandu and Jennifer Dumas and Yanai Elazar and Valentin Hofmann and Ananya Harsh Jha and Sachin Kumar and Li Lucy and Xinxi Lyu and Nathan Lambert and Ian Magnusson and Jacob Morrison and Niklas Muennighoff and Aakanksha Naik and Crystal Nam and Matthew E. Peters and Abhilasha Ravichander and Kyle Richardson and Zejiang Shen and Emma Strubell and Nishant Subramani and Oyvind Tafjord and Pete Walsh and Luke Zettlemoyer and Noah A. Smith and Hannaneh Hajishirzi and Iz Beltagy and Dirk Groeneveld and Jesse Dodge and Kyle Lo},
157
+ year={2024},
158
+ eprint={2402.00159},
159
+ archivePrefix={arXiv},
160
+ primaryClass={cs.CL}
161
+ }
162
+ @misc{ouyang2022training,
163
+ title={Training language models to follow instructions with human feedback},
164
+ author={Long Ouyang and Jeff Wu and Xu Jiang and Diogo Almeida and Carroll L. Wainwright and Pamela Mishkin and Chong Zhang and Sandhini Agarwal and Katarina Slama and Alex Ray and John Schulman and Jacob Hilton and Fraser Kelton and Luke Miller and Maddie Simens and Amanda Askell and Peter Welinder and Paul Christiano and Jan Leike and Ryan Lowe},
165
+ year={2022},
166
+ eprint={2203.02155},
167
+ archivePrefix={arXiv},
168
+ primaryClass={cs.CL}
169
+ }
170
+ @misc{hoffmann2022training,
171
+ title={Training Compute-Optimal Large Language Models},
172
+ author={Jordan Hoffmann and Sebastian Borgeaud and Arthur Mensch and Elena Buchatskaya and Trevor Cai and Eliza Rutherford and Diego de Las Casas and Lisa Anne Hendricks and Johannes Welbl and Aidan Clark and Tom Hennigan and Eric Noland and Katie Millican and George van den Driessche and Bogdan Damoc and Aurelia Guy and Simon Osindero and Karen Simonyan and Erich Elsen and Jack W. Rae and Oriol Vinyals and Laurent Sifre},
173
+ year={2022},
174
+ eprint={2203.15556},
175
+ archivePrefix={arXiv},
176
+ primaryClass={cs.CL}
177
+ }
178
+ @misc{muennighoff2023scaling,
179
+ title={Scaling Data-Constrained Language Models},
180
+ author={Niklas Muennighoff and Alexander M. Rush and Boaz Barak and Teven Le Scao and Aleksandra Piktus and Nouamane Tazi and Sampo Pyysalo and Thomas Wolf and Colin Raffel},
181
+ year={2023},
182
+ eprint={2305.16264},
183
+ archivePrefix={arXiv},
184
+ primaryClass={cs.CL}
185
+ }
186
+ @misc{hernandez2022scaling,
187
+ title={Scaling Laws and Interpretability of Learning from Repeated Data},
188
+ author={Danny Hernandez and Tom Brown and Tom Conerly and Nova DasSarma and Dawn Drain and Sheer El-Showk and Nelson Elhage and Zac Hatfield-Dodds and Tom Henighan and Tristan Hume and Scott Johnston and Ben Mann and Chris Olah and Catherine Olsson and Dario Amodei and Nicholas Joseph and Jared Kaplan and Sam McCandlish},
189
+ year={2022},
190
+ eprint={2205.10487},
191
+ archivePrefix={arXiv},
192
+ primaryClass={cs.LG}
193
+ }
data/plots/score_by_dump.json CHANGED
The diff for this file is too large to render. See raw diff
 
index.html CHANGED
@@ -168,11 +168,10 @@
168
  <d-contents>
169
  </d-contents>
170
 
171
- <!-- Your JavaScript file -->
172
-
173
  <p>We have recently released 🍷FineWeb, our new large scale
174
- (15T tokens, 44TB disk space) dataset of clean text sourced from the web for LLM pretraining. You can
175
  download it <a href="https://huggingface.co/datasets/HuggingFaceFW/fineweb">here</a>.</p>
 
176
  <p>As 🍷FineWeb has gathered a lot of interest from the
177
  community, we decided to further explain the steps involved in creating it, our processing decisions and
178
  some lessons learned along the way. Read on for all the juicy details on large text dataset creation!</p>
@@ -199,7 +198,7 @@
199
  They have been crawling the web since 2007 (long before LLMs were a thing) and release a new dump usually
200
  every 1 or 2 months, which can be freely downloaded. </p>
201
  <p>As an example, their latest crawl (2024-10) contains 3.16
202
- billion web pages, totaling 424.7 TiB of uncompressed content (the size changes from dump to dump). There
203
  are 95 dumps since 2013 and 3 dumps from 2008 to 2012, which are in a different (older) format.<d-footnote>We have not processed these 3 older dumps.</d-footnote> </p>
204
  <h3>Processing at scale</h3>
205
  <p>Given the sheer size of the data involved, one of the main
@@ -213,20 +212,19 @@
213
  href="https://github.com/huggingface/datatrove">library</a>.</p>
214
  <h3>What is clean, good data?</h3>
215
  <p>This is probably the main question to keep in mind when
216
- creating a dataset. A good first lesson is that data that would intuitively be considered high quality by a
217
- human may not be necessarily the best data (or at least not all that you need) to train a good model on.</p>
218
  <p>It is still common to train a model on a given corpus
219
  (wikipedia, or some other web dataset considered clean) and use it to check the perplexity on the dataset
220
- that we were trying to curate. Unfortunately this does not always correlate with performance on downstream
221
- tasks, and so another often used approach is to train small models (small because training models is
222
- expensive and time consuming, and we want to be able to quickly iterate) on our dataset and evaluate them on
223
  a set of evaluation tasks. As we are curating a dataset for pretraining a generalist LLM, it is important to
224
  choose a diverse set of tasks and try not to overfit to any one individual benchmark.</p>
225
  <p>Another way to evaluate different datasets would be to
226
  train a model on each one and have humans rate and compare the outputs of each one (like on the <a
227
  href="https://chat.lmsys.org/">LMSYS Chatbot Arena</a>)<d-cite bibtex-key="chiang2024chatbot"></d-cite>. This would arguably provide the most
228
  reliable results in terms of representing real model usage, but getting ablation results this way is too
229
- expensive and slow.</p>
230
  <p>The approach we ultimately went with was to train small
231
  models and evaluate them on a set of benchmark tasks. We believe this is a reasonable proxy for the quality
232
  of the data used to train these models.</p>
@@ -234,14 +232,14 @@
234
  <p>To be able to compare the impact of a given processing
235
  step, we would train 2 models, one where the data included the extra step and another where this step was
236
  ablated (cut/removed). These 2 models would have the same number of parameters, architecture, and be trained
237
- on an equal number of tokens and with the same hyperparameters — the only difference would be in the
238
  training data. We would then evaluate each model on the same set of tasks and compare the average
239
  scores.</p>
240
  <p>Our ablation models were trained using <a
241
  href="https://github.com/huggingface/nanotron"><code>nanotron</code></a> with this config [<strong>TODO:
242
  INSERT SIMPLIFIED NANOTRON CONFIG HERE</strong>]. The models had 1.82B parameters, used the Llama
243
  architecture with a 2048 sequence length, and a global batch size of ~2 million tokens. For filtering
244
- ablations we mostly trained on ~28B tokens (which is roughly the Chinchilla optimal training size for this
245
  model size).</p>
246
  <p>We evaluated the models using <a
247
  href="https://github.com/huggingface/lighteval/"><code>lighteval</code></a>. We tried selecting
@@ -281,15 +279,16 @@
281
  starting point. In our experience the default text extraction (extracting the main text of a webpage from
282
  its HTML) used to create these WET files is suboptimal and there are a variety of open-source libraries that
283
  provide better text extraction (namely by keeping less boilerplate content/navigation menus). We extracted
284
- the text content from the WARC files using the trafilatura library<d-cite bibtex-key="barbaresi-2021-trafilatura"></d-cite>. It is important to note, however, that text extraction is one of the most costly steps of our
285
- processing, so we believe that using the readily available WET data could be a reasonable trade-off for
286
- lower budget teams.</p>
287
  <p>To validate this decision, we processed the 2019-18 dump
288
- directly using the WET files and with text extracted from WARC files using trafilatura. We applied the same
289
  processing to each one (our base filtering+minhash, detailed below) and trained two models. While the
290
- resulting dataset is considerably larger for the WET data (around 254BT), it proves to be of much worse
291
- quality than the one that used trafilatura to extract text from WARC files (which is around 200BT). Many of
292
  these additional tokens on the WET files are unnecessary page boilerplate.</p>
 
 
 
293
  <div class="main-plot-container">
294
  <figure><img src="plots/wet_comparison.png"/></figure>
295
  <div id="plot-wet_comparison"></div>
@@ -330,7 +329,7 @@
330
  <p>Removing these duplicates (deduplicating) has been linked to an improvement in model performance<d-cite bibtex-key="lee2022deduplicating"></d-cite> and a reduction in memorization of pretraining data<d-cite bibtex-key="carlini2023quantifying"></d-cite>, which might
331
  allow for better generalization. Additionally, the performance uplift can also be tied to increased training
332
  efficiency: by removing duplicated content, for the same number of training tokens, a model will have seen
333
- more diverse data.</p>
334
  <p>There are different ways to identify and even define
335
  duplicated data. Common approaches rely on hashing techniques to speed up the process, or on building
336
  efficient data structures to index the data (like suffix arrays). Methods can also be “fuzzy”, by using some
@@ -338,19 +337,20 @@
338
  documents (or lines, paragraphs, or whatever other granularity level is being used).</p>
339
  <h4>Our deduplication parameters</h4>
340
  <p>Similarly to RefinedWeb, we decided to apply MinHash, a
341
- fuzzy hash based deduplication technique. We chose to compute minhashes on each document’s 5-grams, using
342
  112 hash functions in total, split into 14 buckets of 8 hashes each — targeting documents that are at least
343
  75% similar. Documents with the same 8 minhashes in any bucket are considered a duplicate of each other.</p>
344
  <p>This would mean that for two documents with a similarity (<code>s</code>)
345
  of 0.7, 0.75, 0.8 and 0.85, the probability that they would be identified as duplicates would be 56%, 77%,
346
  92% and 98.8% respectively ($$1-(1-s^8)^{14}$$). See the plot below for a match probability
347
  comparison between our setup with 112 hashes and the one from RefinedWeb, with 9000 hashes, divided into 450
348
- buckets of 20 hashes (that requires a substantially larger amount of compute resources):</p>
349
  <figure><img src="plots/minhash_parameters_comparison.png"/>
350
  </figure>
351
  <p>While the high number of hash functions in RefinedWeb
352
- allows for a steeper, more well defined cut off, we believe the compute and storage savings are a reasonable
353
  trade off.</p>
 
354
  <h4>More deduplication is always better, right?</h4>
355
  <p>Our initial approach was to take the entire dataset (all
356
  95 dumps) and deduplicate them as one big dataset using MinHash.</p>
@@ -381,7 +381,7 @@
381
  removed)
382
  </li>
383
  </ul>
384
- <p>As an experiment, we tried training two models on 28BT
385
  sampled from the following data from 2013-48:</p>
386
  <ul>
387
  <li>the fully deduplicated remaining ~31 billion tokens (<em>originally kept
@@ -391,8 +391,10 @@
391
  <ul>
392
  <li>171 billion tokens obtained by individually deduplicating (without
393
  considering the other dumps) the ~460 billion tokens that had been removed from this dump in the
394
- iterative dedup process (<em>originally removed data</em>)
 
395
  </li>
 
396
  </ul>
397
  <div class="main-plot-container">
398
  <figure><img src="plots/removed_data_cross_dedup.png"/></figure>
@@ -400,7 +402,8 @@
400
  </div>
401
  <p>These results show that, for this older dump where we were
402
  removing over 90% of the original data, the data that was kept was actually <em>worse</em> than the data
403
- removed (considered independently of all the other dumps).</p>
 
404
  <h4>Taking a step back: individual dump dedup</h4>
405
  <p>We then tried an alternative approach: we deduplicated
406
  each dump with MinHash individually (without considering the other dumps). This resulted in 20 trillion
@@ -469,9 +472,9 @@
469
  documents duplicated up to 8 times. This simulation illustrates the inherent difficulties associated with
470
  measuring deduplication impact on the training of LLMs, once the biggest document clusters have been
471
  removed.</p>
472
- <h4>Other (failed) approaches</h4>
473
  <p>We attempted to improve the performance of the
474
- independently minhash deduped 20T of data by further deduplicating it with the following methods</p>
475
  <ul>
476
  <li>URL deduplication, where we only kept one document per normalized
477
  (lowercased) URL (71.5% of tokens removed, 5.6T left) — <em>FineWeb URL dedup</em></li>
@@ -479,7 +482,7 @@
479
  <ul>
480
  <li>Line deduplication:
481
  <ul>
482
- <li>remove all but 1 occurrence of each duplicated line (77.8% of
483
  tokens dropped, 4.4T left) — <em>FineWeb line dedup</em></li>
484
  </ul>
485
  <ul>
@@ -489,7 +492,7 @@
489
  </ul>
490
  <ul>
491
  <li>remove all but 1 occurrence of each span of 3 duplicated lines
492
- with all numbers replaced by 0 (80.9% of tokens removed, 3.7T left) — <em>FineWeb 3-line
493
  dedup</em></li>
494
  </ul>
495
  </li>
@@ -518,8 +521,7 @@
518
  benchmark, one of the benchmarks in our “early signal” group with the strongest signal and highest
519
  signal-to-noise ratio. As such, it has stayed a common subset of typical LLM training, for instance in
520
  the relatively recent Llama1 model<d-cite bibtex-key="touvron2023llama"></d-cite>. We experimented with applying
521
- each of the different filters used in C4 to a baseline of the independently deduped FineWeb 2019-18 dump
522
- (plot smoothed with a 3 checkpoints sliding window):</p>
523
  <div class="main-plot-container">
524
  <figure><img src="plots/c4_filters_hellaswag.png"/></figure>
525
  <div id="plot-c4_filters_hellaswag"></div>
 
168
  <d-contents>
169
  </d-contents>
170
 
 
 
171
  <p>We have recently released 🍷FineWeb, our new large scale
172
+ (15T GPT-2 tokens, 44TB of disk space) dataset of clean text sourced from the web for LLM pretraining. You can
173
  download it <a href="https://huggingface.co/datasets/HuggingFaceFW/fineweb">here</a>.</p>
174
+ <p>[TODO: ADD MORE INTRODUCTION]</p>
175
  <p>As 🍷FineWeb has gathered a lot of interest from the
176
  community, we decided to further explain the steps involved in creating it, our processing decisions and
177
  some lessons learned along the way. Read on for all the juicy details on large text dataset creation!</p>
 
198
  They have been crawling the web since 2007 (long before LLMs were a thing) and release a new dump usually
199
  every 1 or 2 months, which can be freely downloaded. </p>
200
  <p>As an example, their latest crawl (2024-10) contains 3.16
201
+ billion web pages, totaling 424.7 TiB of uncompressed HTML text content (the size changes from dump to dump). There
202
  are 95 dumps since 2013 and 3 dumps from 2008 to 2012, which are in a different (older) format.<d-footnote>We have not processed these 3 older dumps.</d-footnote> </p>
203
  <h3>Processing at scale</h3>
204
  <p>Given the sheer size of the data involved, one of the main
 
212
  href="https://github.com/huggingface/datatrove">library</a>.</p>
213
  <h3>What is clean, good data?</h3>
214
  <p>This is probably the main question to keep in mind when
215
+ creating a dataset. In the context of large language model pretraining, "high quality" is not a very well-defined term<d-cite bibtex-key="albalak2024survey"></d-cite>, and often not a property of documents that can easily be perceived through direct observation alone.<d-cite bibtex-key="longpre2023pretrainers"></d-cite></p>
 
216
  <p>It is still common to train a model on a given corpus
217
  (wikipedia, or some other web dataset considered clean) and use it to check the perplexity on the dataset
218
+ that we are trying to curate<d-cite bibtex-key="wenzek2019ccnet"></d-cite>. Unfortunately, this does not always correlate with performance on downstream
219
+ tasks<d-cite bibtex-key="soldaini2024dolma"></d-cite>, and so another commonly used approach is to train small models (small because training models is
220
+ expensive and time-consuming, and we want to be able to quickly iterate) on a representative subset of our dataset and evaluate them on
221
  a set of evaluation tasks. As we are curating a dataset for pretraining a generalist LLM, it is important to
222
  choose a diverse set of tasks and try not to overfit to any one individual benchmark.</p>
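  <p>As a rough illustration of the perplexity-based check mentioned above (a hedged sketch: the model choice here, GPT-2 via the <code>transformers</code> library, is an arbitrary example and not what was used for 🍷FineWeb):</p>
  <pre><code># Score a candidate document with a small pretrained model; lower perplexity is
# often read as "closer to the reference corpus the model was trained on".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
</code></pre>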
223
  <p>Another way to evaluate different datasets would be to
224
  train a model on each one and have humans rate and compare the outputs of each one (like on the <a
225
  href="https://chat.lmsys.org/">LMSYS Chatbot Arena</a>)<d-cite bibtex-key="chiang2024chatbot"></d-cite>. This would arguably provide the most
226
  reliable results in terms of representing real model usage, but getting ablation results this way is too
227
+ expensive and slow. It also often requires that the models have undergone at least an instruction finetuning stage, as pretrained models have difficulty following instructions.<d-cite bibtex-key="ouyang2022training"></d-cite></p>
228
  <p>The approach we ultimately went with was to train small
229
  models and evaluate them on a set of benchmark tasks. We believe this is a reasonable proxy for the quality
230
  of the data used to train these models.</p>
 
232
  <p>To be able to compare the impact of a given processing
233
  step, we would train 2 models, one where the data included the extra step and another where this step was
234
  ablated (cut/removed). These 2 models would have the same number of parameters, architecture, and be trained
235
+ on an equal number of randomly sampled tokens from each step's data, for a single epoch, and with the same hyperparameters — the only difference would be in the
236
  training data. We would then evaluate each model on the same set of tasks and compare the average
237
  scores.</p>
238
  <p>Our ablation models were trained using <a
239
  href="https://github.com/huggingface/nanotron"><code>nanotron</code></a> with this config [<strong>TODO:
240
  INSERT SIMPLIFIED NANOTRON CONFIG HERE</strong>]. The models had 1.82B parameters, used the Llama
241
  architecture with a 2048 sequence length, and a global batch size of ~2 million tokens. For filtering
242
+ ablations we mostly trained on ~28B tokens (which is roughly the Chinchilla<d-cite bibtex-key="hoffmann2022training"></d-cite> optimal training size for this
243
  model size).</p>
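  <p>For reference, the numbers above pin down the rough shape of each ablation run; the sketch below only derives the sequence count per batch and the approximate number of optimizer steps (our own back-of-the-envelope estimates, not quoted figures):</p>
  <pre><code># Derived from the setup described in the text (2048 sequence length,
# ~2M-token global batches, ~28B training tokens per filtering ablation).
seq_len = 2048
global_batch_tokens = 2_000_000
total_tokens = 28_000_000_000

sequences_per_batch = global_batch_tokens // seq_len   # ~976 sequences per step
training_steps = total_tokens // global_batch_tokens   # ~14,000 optimizer steps
print(sequences_per_batch, training_steps)
</code></pre>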
244
  <p>We evaluated the models using <a
245
  href="https://github.com/huggingface/lighteval/"><code>lighteval</code></a>. We tried selecting
 
279
  starting point. In our experience the default text extraction (extracting the main text of a webpage from
280
  its HTML) used to create these WET files is suboptimal and there are a variety of open-source libraries that
281
  provide better text extraction (namely by keeping less boilerplate content/navigation menus). We extracted
282
+ the text content from the WARC files using the trafilatura library<d-cite bibtex-key="barbaresi-2021-trafilatura"></d-cite>, which, from visual inspection of the results, provided good quality extraction compared to other libraries.</p><aside>You can also find a benchmark on text extraction libraries <a href="https://github.com/scrapinghub/article-extraction-benchmark/blob/master/README.rst">here</a>.</aside>
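  <p>For readers who want to reproduce the extraction step on a small scale, here is a minimal sketch (not the actual datatrove pipeline): it iterates over a local WARC file with <code>warcio</code>, a library we picked just for this example, and calls trafilatura on each HTML response. The file name is a placeholder.</p>
  <pre><code># Minimal sketch of WARC -> text extraction with trafilatura.
from warcio.archiveiterator import ArchiveIterator
import trafilatura

with open("CC-MAIN-sample.warc.gz", "rb") as f:  # placeholder path to a WARC file
    for record in ArchiveIterator(f):
        if record.rec_type != "response":
            continue
        html = record.content_stream().read()
        # favoring precision keeps less boilerplate at the cost of some recall
        text = trafilatura.extract(html, favor_precision=True)
        if text:
            print(text[:500])
            break
</code></pre>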
 
 
283
  <p>To validate this decision, we processed the 2019-18 dump
284
+ directly using the WET files and with text extracted from WARC files using trafilatura<d-footnote>We used trafilatura default options with <code>favour_precision=True</code>.</d-footnote>. We applied the same
285
  processing to each one (our base filtering+minhash, detailed below) and trained two models. While the
286
+ resulting dataset is about 25% larger for the WET data (around 254 billion tokens), it proves to be of much worse
287
+ quality than the one that used trafilatura to extract text from WARC files (which is around 200 billion tokens). Visual inspection of some samples confirms that many of
288
  these additional tokens on the WET files are unnecessary page boilerplate.</p>
289
+ <p>It is important to note, however, that text extraction is one of the most costly steps of our
290
+ processing, so we believe that using the readily available WET data could be a reasonable trade-off for
291
+ lower budget teams.</p>
292
  <div class="main-plot-container">
293
  <figure><img src="plots/wet_comparison.png"/></figure>
294
  <div id="plot-wet_comparison"></div>
 
329
  <p>Removing these duplicates (deduplicating) has been linked to an improvement in model performance<d-cite bibtex-key="lee2022deduplicating"></d-cite> and a reduction in memorization of pretraining data<d-cite bibtex-key="carlini2023quantifying"></d-cite>, which might
330
  allow for better generalization. Additionally, the performance uplift can also be tied to increased training
331
  efficiency: by removing duplicated content, for the same number of training tokens, a model will have seen
332
+ more diverse data.<d-cite bibtex-key="muennighoff2023scaling"></d-cite><d-cite bibtex-key="hernandez2022scaling"></d-cite></p>
333
  <p>There are different ways to identify and even define
334
  duplicated data. Common approaches rely on hashing techniques to speed up the process, or on building
335
  efficient data structures to index the data (like suffix arrays). Methods can also be “fuzzy”, by using some
 
337
  documents (or lines, paragraphs, or whatever other granularity level is being used).</p>
338
  <h4>Our deduplication parameters</h4>
339
  <p>Similarly to RefinedWeb, we decided to apply MinHash, a
340
+ fuzzy hash-based deduplication technique that scales well and allows us to tune similarity thresholds (by changing the number and size of buckets) and the granularity of the matches (by changing the n-gram size). We chose to compute minhashes on each document’s 5-grams, using
341
  112 hash functions in total, split into 14 buckets of 8 hashes each — targeting documents that are at least
342
  75% similar. Documents with the same 8 minhashes in any bucket are considered a duplicate of each other.</p>
343
  <p>This would mean that for two documents with a similarity (<code>s</code>)
344
  of 0.7, 0.75, 0.8 and 0.85, the probability that they would be identified as duplicates would be 56%, 77%,
345
  92% and 98.8% respectively ($$1-(1-s^8)^{14}$$). See the plot below for a match probability
346
  comparison between our setup with 112 hashes and the one from RefinedWeb, with 9000 hashes, divided into 450
347
+ buckets of 20 hashes (which requires a substantially larger amount of compute resources, as each individual hash must be computed, stored, and then compared with hashes from other documents):</p>
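  <p>The match probabilities quoted above follow directly from the formula; a few lines of Python are enough to compare the two bucketing schemes:</p>
  <pre><code># P(two documents with true similarity s collide in at least one bucket)
# = 1 - (1 - s**r)**b, for b buckets of r hashes each.
def match_probability(s: float, buckets: int, hashes_per_bucket: int) -> float:
    return 1 - (1 - s ** hashes_per_bucket) ** buckets

for s in (0.70, 0.75, 0.80, 0.85):
    ours = match_probability(s, buckets=14, hashes_per_bucket=8)          # 112 hashes
    refinedweb = match_probability(s, buckets=450, hashes_per_bucket=20)  # 9000 hashes
    print(f"s={s:.2f}  FineWeb: {ours:.1%}  RefinedWeb: {refinedweb:.1%}")
</code></pre>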
348
  <figure><img src="plots/minhash_parameters_comparison.png"/>
349
  </figure>
350
  <p>While the high number of hash functions in RefinedWeb
351
+ allows for a steeper, better defined cut-off (documents with real similarity near the threshold are more likely to be correctly identified), we believe the compute and storage savings are a reasonable
352
  trade off.</p>
353
+ <p>It should also be noted that intra-document deduplication is already handled by our repetition filter, which removes documents with many repeated lines and paragraphs.</p>
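  <p>For intuition, a toy version of such a repetition check is sketched below; the 30% threshold and the character-level accounting are made up for the example and are not the filter's actual configuration:</p>
  <pre><code># Flag a document when a large share of its characters sits in duplicated lines.
from collections import Counter

def duplicated_line_char_fraction(text: str) -> float:
    lines = [line for line in text.splitlines() if line.strip()]
    if not lines:
        return 0.0
    counts = Counter(lines)
    dup_chars = sum(len(line) * n for line, n in counts.items() if n > 1)
    return dup_chars / sum(len(line) for line in lines)

doc = "Buy now!\nSome actual content here.\nBuy now!\nBuy now!\n"
print(duplicated_line_char_fraction(doc) > 0.3)  # True -> likely boilerplate-heavy
</code></pre>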
354
  <h4>More deduplication is always better, right?</h4>
355
  <p>Our initial approach was to take the entire dataset (all
356
  95 dumps) and deduplicate them as one big dataset using MinHash.</p>
 
381
  removed)
382
  </li>
383
  </ul>
384
+ <p>As an experiment, we tried training two models on 28 billion tokens
385
  sampled from the following data from 2013-48:</p>
386
  <ul>
387
  <li>the fully deduplicated remaining ~31 billion tokens (<em>originally kept
 
391
  <ul>
392
  <li>171 billion tokens obtained by individually deduplicating (without
393
  considering the other dumps) the ~460 billion tokens that had been removed from this dump in the
394
+ iterative dedup process (<em>originally removed data</em>)<d-footnote>While there may be documents in <em>originally kept
395
+ data</em> similar to documents in <em>originally removed data</em>, we estimate the overlap to be small (around 4 billion tokens)</d-footnote>
396
  </li>
397
+
398
  </ul>
399
  <div class="main-plot-container">
400
  <figure><img src="plots/removed_data_cross_dedup.png"/></figure>
 
402
  </div>
403
  <p>These results show that, for this older dump where we were
404
  removing over 90% of the original data, the data that was kept was actually <em>worse</em> than the data
405
+ removed (considered independently of all the other dumps). This is also confirmed by visual inspection: <em>originally kept
406
+ data</em> contains far more ads, lists of keywords and generally badly formatted text than <em>originally removed data</em>.</p>
407
  <h4>Taking a step back: individual dump dedup</h4>
408
  <p>We then tried an alternative approach: we deduplicated
409
  each dump with MinHash individually (without considering the other dumps). This resulted in 20 trillion
 
472
  documents duplicated up to 8 times. This simulation illustrates the inherent difficulties associated with
473
  measuring deduplication impact on the training of LLMs, once the biggest document clusters have been
474
  removed.</p>
475
+ <h4>Other (failed) global approaches</h4>
476
  <p>We attempted to improve the performance of the
477
+ independently MinHash-deduped 20 trillion tokens of data by further deduplicating it (globally, over all crawls) with the following methods:</p>
478
  <ul>
479
  <li>URL deduplication, where we only kept one document per normalized
480
  (lowercased) URL (71.5% of tokens removed, 5.6T left) — <em>FineWeb URL dedup</em></li>
 
482
  <ul>
483
  <li>Line deduplication:
484
  <ul>
485
+ <li>remove all but 1 (randomly chosen) occurrence of each duplicated line (77.8% of
486
  tokens dropped, 4.4T left) — <em>FineWeb line dedup</em></li>
487
  </ul>
488
  <ul>
 
492
  </ul>
493
  <ul>
494
  <li>remove all but 1 occurrence of each span of 3 duplicated lines
495
+ with each number treated as 0 when finding duplicates (80.9% of tokens removed, 3.7T left) — <em>FineWeb 3-line
496
  dedup</em> (see the sketch below)</li>
497
  </ul>
498
  </li>
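  <p>To make the last variant concrete, here is an illustrative sketch of span-level deduplication with digits normalized to 0 before hashing; the real implementation and exact normalization may differ:</p>
  <pre><code># Keep only the first occurrence of each span of 3 consecutive lines,
# treating every digit as "0" (so "page 3 of 7" matches "page 1 of 9").
import re
from hashlib import sha1

def dedup_3line_spans(docs):
    seen = set()
    result = []
    for doc in docs:
        lines = doc.splitlines()
        drop = [False] * len(lines)
        for i in range(len(lines) - 2):
            span = "\n".join(lines[i:i + 3])
            key = sha1(re.sub(r"\d", "0", span).encode()).hexdigest()
            if key in seen:
                drop[i] = drop[i + 1] = drop[i + 2] = True
            else:
                seen.add(key)
        result.append("\n".join(l for i, l in enumerate(lines) if not drop[i]))
    return result
</code></pre>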
 
521
  benchmark, one of the benchmarks in our “early signal” group with the strongest signal and highest
522
  signal-to-noise ratio. As such, it has stayed a common subset of typical LLM training, for instance in
523
  the relatively recent Llama1 model<d-cite bibtex-key="touvron2023llama"></d-cite>. We experimented with applying
524
+ each of the different filters used in C4 to a baseline of the independently deduped FineWeb 2019-18 dump:</p>
 
525
  <div class="main-plot-container">
526
  <figure><img src="plots/c4_filters_hellaswag.png"/></figure>
527
  <div id="plot-c4_filters_hellaswag"></div>