hynky (HF staff) committed
Commit ffb95ea · 2 parents: 2a86960 96ee97e

Merge branch 'main' of hf.co:spaces/HuggingFaceFW/blogpost-fineweb-v1

Files changed (3)
  1. bibliography.bib +25 -0
  2. index.html +42 -19
  3. style.css +4 -0
bibliography.bib CHANGED
@@ -209,4 +209,29 @@ url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
   eprint={2401.04088},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
+ }
+ @article{yuan2024self,
+   title={Self-rewarding language models},
+   author={Yuan, Weizhe and Pang, Richard Yuanzhe and Cho, Kyunghyun and Sukhbaatar, Sainbayar and Xu, Jing and Weston, Jason},
+   journal={arXiv preprint arXiv:2401.10020},
+   year={2024}
+ }
+ @article{verga2024replacing,
+   title={Replacing Judges with Juries: Evaluating LLM Generations with a Panel of Diverse Models},
+   author={Verga, Pat and Hofstatter, Sebastian and Althammer, Sophia and Su, Yixuan and Piktus, Aleksandra and Arkhangorodsky, Arkady and Xu, Minjie and White, Naomi and Lewis, Patrick},
+   journal={arXiv preprint arXiv:2404.18796},
+   year={2024}
+ }
+ @article{abdin2024phi,
+   title={Phi-3 technical report: A highly capable language model locally on your phone},
+   author={Abdin, Marah and Jacobs, Sam Ade and Awan, Ammar Ahmad and Aneja, Jyoti and Awadallah, Ahmed and Awadalla, Hany and Bach, Nguyen and Bahree, Amit and Bakhtiari, Arash and Behl, Harkirat and others},
+   journal={arXiv preprint arXiv:2404.14219},
+   year={2024}
+ }
+ @misc{meta2024responsible,
+   title={Our responsible approach to Meta AI and Meta Llama 3},
+   author={Meta},
+   year={2024},
+   url={https://ai.meta.com/blog/meta-llama-3-meta-ai-responsibility/},
+   note={Accessed: 2024-05-31}
  }
index.html CHANGED
@@ -11,6 +11,7 @@
   <link rel="stylesheet" href="style.css">
   <meta name="viewport" content="width=device-width, initial-scale=1">
   <meta charset="utf8">
+  <base target="_blank">
   <title>FineWeb: 15T tokens of high quality web data</title>
   <style>
@@ -325,7 +326,7 @@
   </li>
   </ul>
   <ul>
-  <li>Applied quality and repetition filters from the Gopher<d-cite bibtex-key="rae2022scaling"></d-cite> paper (using the default thresholds)
+  <li>Applied quality and repetition filters from MassiveText<d-cite bibtex-key="rae2022scaling"></d-cite> (using the default thresholds)
   </li>
   </ul>
   <p>After applying this filtering to each of the text
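For readers who have not seen the MassiveText/Gopher filters referenced in the hunk above, here is a minimal sketch of a few of the published rules (document length between 50 and 100,000 words, mean word length between 3 and 10 characters, and at least 80% of words containing an alphabetic character). The function name is ours and this is illustration only; the actual pipeline implements the full published rule set.

```python
# Minimal sketch of a few Gopher/MassiveText-style quality rules
# (illustrative only; not the pipeline's actual implementation).
def passes_gopher_quality(text: str) -> bool:
    words = text.split()
    # Rule: keep documents with between 50 and 100,000 words
    if not 50 <= len(words) <= 100_000:
        return False
    # Rule: mean word length must fall between 3 and 10 characters
    mean_len = sum(len(w) for w in words) / len(words)
    if not 3 <= mean_len <= 10:
        return False
    # Rule: at least 80% of words must contain an alphabetic character
    alpha = sum(1 for w in words if any(c.isalpha() for c in w))
    return alpha / len(words) >= 0.8
```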
@@ -581,7 +582,7 @@
   minhashed version and the result from the (worse quality) full dedup from 2013-48 and 2015-22 crawls (older crawls). We then compared the
   statistics at a macro level, by looking at the distribution of these metrics for each one.</p>
   <p>The collected statistics ranged from common document-level
-  metrics (e.g. number of lines, avg. line/word length, etc) to inter-document repetition metrics (gopher
+  metrics (e.g. number of lines, avg. line/word length, etc) to inter-document repetition metrics (MassiveText
   inspired). Perhaps not too surprisingly given our findings for deduplication, we found significant
   disparities in most of the metrics for the two deduplication methods. For instance, the <code>line-char-duplicates</code>
   metric (nb. of characters in duplicated lines / nb. characters), roughly doubled from the independent dedup
@@ -611,7 +612,7 @@
   </ul>
   <ul>
   <li>Remove documents where the fraction of characters in duplicated lines ≥ 0.1
-  (12.47% of tokens removed) — the original Gopher threshold for this ratio is ≥ 0.2
+  (12.47% of tokens removed) — the original MassiveText threshold for this ratio is ≥ 0.2
   </li>
   </ul>
   <ul>
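To make the <code>line-char-duplicates</code> metric and the tightened threshold concrete, here is a small sketch of the computation. Whether the first occurrence of a repeated line counts toward the duplicated total is a counting convention we assume here; the pipeline's own code may differ.

```python
from collections import Counter

def dup_line_char_fraction(text: str) -> float:
    """Share of characters that sit on duplicated (non-unique) lines."""
    lines = [line for line in text.splitlines() if line.strip()]
    total = sum(len(line) for line in lines)
    if total == 0:
        return 0.0
    counts = Counter(lines)
    # Assumption: every occurrence of a repeated line counts as duplicated
    dup = sum(len(line) for line in lines if counts[line] > 1)
    return dup / total

def keep_document(text: str, threshold: float = 0.1) -> bool:
    # The change above tightens the MassiveText default of 0.2 down to 0.1
    return dup_line_char_fraction(text) < threshold
```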
@@ -684,33 +685,33 @@
   <div id="plot-dataset_ablations"></div>
   </div>
   <h2>📚 FineWeb-Edu</h2>
-  <p>A new approach has recently emerged for filtering LLM training datasets: using synthetic data to develop classifiers for identifying educational content. This technique was used in the trainings of <a href="https://ai.meta.com/blog/meta-llama-3-meta-ai-responsibility/">LLama3</a> and <a href="https://arxiv.org/abs/2404.14219">Phi3</a>, but its large-scale impact on web data filtering hasn't been fully explored or published.</p>
-  <p>The popular Phi3 models were trained on 3.3 and 4.8 trillion tokens, with the <a href="https://arxiv.org/abs/2404.14219">paper</a> stating:</p>
+  <p>A new approach has recently emerged for filtering LLM training datasets: using synthetic data to develop classifiers for identifying educational content. This technique was used in the trainings of Llama 3<d-cite bibtex-key="llama3modelcard"></d-cite> and Phi3<d-cite bibtex-key="abdin2024phi"></d-cite>, but its large-scale impact on web data filtering hasn't been fully explored or published.</p>
+  <p>The popular Phi3 models were trained on 3.3 and 4.8 trillion tokens, with the paper<d-cite bibtex-key="abdin2024phi"></d-cite> stating:</p>
   <blockquote>Our training data consists of heavily filtered publicly available web data (according to the 'educational level') from various open internet sources, as well as synthetic LLM-generated data.</blockquote>
-  <p>Similarly, <a href="https://ai.meta.com/blog/meta-llama-3-meta-ai-responsibility/">LLama3 blog post</a> notes:</p>
+  <p>Similarly, the Llama 3 blog post<d-cite bibtex-key="meta2024responsible"></d-cite> notes:</p>
   <blockquote>We found that previous generations of Llama are good at identifying high-quality data, so we used Llama 2 to help build the text-quality classifiers that are powering Llama 3.</blockquote>
-  <p>However, these classifiers and filtered datasets are not publicly available. To enhance 🍷 FineWeb's quality, we developed an educational quality classifier using annotations generated by <a href="https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct">Llama3-70B-Instruct</a> to create 📚 FineWeb-Edu.</p>
+  <p>However, these classifiers and filtered datasets are not publicly available. To enhance 🍷 FineWeb's quality, we developed an educational quality classifier using annotations generated by <a href="https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct">Llama-3-70B-Instruct</a> to create 📚 FineWeb-Edu.</p>
   <h3>Annotation</h3>
-  <p>We used Llama3-70B-Instruct to annotate 500k samples from the 🍷 FineWeb dataset, scoring each for their educational quality on a scale from 0 to 5.</p>
-  <p>We explored various prompts and found that the additive scale by <a href="https://arxiv.org/pdf/2401.10020">Yuan et al.</a> worked best. This scale allows the LLM to reason about each additional point awarded, unlike the single-rating Likert scale which fits samples into predefined boxes. Then, to avoid the LLM favoring highly technical pages like arXiv abstracts and submissions, we focused on grade-school and middle-school level knowledge. By setting a threshold of 3 (on a scale of 0 to 5) during the filtering process, we were able to also retain some high-level educational pages.</p>
+  <p>We used Llama-3-70B-Instruct to annotate 500k samples from the 🍷 FineWeb dataset, scoring each for its educational quality on a scale from 0 to 5.</p>
+  <p>We explored various prompts and found that the additive scale by Yuan et al.<d-cite bibtex-key="yuan2024self"></d-cite> worked best. This scale allows the LLM to reason about each additional point awarded, unlike the single-rating Likert scale, which fits samples into predefined boxes. Then, to avoid the LLM favoring highly technical pages like arXiv abstracts and submissions, we focused on grade-school and middle-school level knowledge. By setting a threshold of 3 (on a scale of 0 to 5) during the filtering process, we were able to also retain some high-level educational pages.</p>
   <div style="text-align: center; margin: 20px 0;">
   <img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/fjZQ4izIj1rx1xQnBTKKr.png" alt="Prompt for LLM annotation" style="width: 90%; max-width: 800px; height: auto;">
   <figcaption style="font-style: italic; margin-top: 10px;">Prompt used for Llama 3 annotations of the educational score, also available <a href="https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier/blob/main/utils/prompt.txt">here</a>.</figcaption>
   </div>
-  <p>We also experimented with <a href="https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1">Mixtral-8x-7B-Instruct</a> and <a href="https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1">Mixtral-8x22B-Instruct</a> and a jury of all three models following <a href="https://arxiv.org/abs/2404.18796">Verga et al.</a>, but found that Llama3 alone gave the most reliable results.</p>
+  <p>We also experimented with <a href="https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1">Mixtral-8x7B-Instruct</a> and <a href="https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1">Mixtral-8x22B-Instruct</a> and a jury of all three models<d-cite bibtex-key="verga2024replacing"></d-cite>, but found that Llama 3 alone gave the most reliable results.</p>
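As a rough sketch of this annotation loop: each extract is sent to Llama-3-70B-Instruct with the additive prompt and the integer score is parsed from the completion. The prompt text, truncation length, and parsing regex below are illustrative stand-ins rather than the exact ones used (the real prompt is linked in the figure caption above).

```python
import re
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")

# Illustrative stand-in for the additive prompt; the real prompt text is
# linked in the figure caption above.
PROMPT = (
    "Below is an extract from a web page. Evaluate its educational value "
    "using a 5-point additive system: award one point for each criterion met. "
    "[criteria elided] Conclude with: 'Educational score: <total points>'\n\n"
    "Extract: {text}"
)

def annotate(text: str) -> int | None:
    completion = client.text_generation(
        PROMPT.format(text=text[:3000]),  # truncation length is an assumption
        max_new_tokens=256,
    )
    match = re.search(r"Educational score:\s*([0-5])", completion)
    return int(match.group(1)) if match else None
```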
   <h3>Classifier Training</h3>
-  <p>We added a classification head with a single regression output to <a href="https://huggingface.co/Snowflake/snowflake-arctic-embed-m">Snowflake-arctic-embed</a> and trained it on 450,000 Llama3 annotations for 20 epochs with a learning rate of 3e-4, freezing the embedding and encoder layers. We saved the checkpoint with the highest F1 score on our held-out validation set of ~47k samples, treating Llama3 annotations as ground-truth. After training, we rounded the scores to integers from 0 to 5.</p>
+  <p>We added a classification head with a single regression output to <a href="https://huggingface.co/Snowflake/snowflake-arctic-embed-m">Snowflake-arctic-embed</a> and trained it on 450,000 Llama 3 annotations for 20 epochs with a learning rate of 3e-4, freezing the embedding and encoder layers. We saved the checkpoint with the highest F1 score on our held-out validation set of 45k samples, treating Llama 3 annotations as ground truth. After training, we rounded the scores to integers from 0 to 5.</p>
   <p>We then converted the problem to a binary classification task by using a fixed threshold to determine if a file is educational. With a threshold of 3, the model achieved an F1 score of 82% on the validation set, indicating strong performance in distinguishing high-quality educational content.</p>
-  <p>The classifier is available at: <a href="https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier">https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier</a>. The training and inference code is available on <a href="https://github.com/huggingface/cosmopedia/tree/main/classification">GitHub</a>.</p>
+  <p>The classifier is available at <a href="https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier">HuggingFaceFW/fineweb-edu-classifier</a>. The training and inference code is available on <a href="https://github.com/huggingface/cosmopedia/tree/main/classification">GitHub</a>.</p>
   <h3>Filtering and results</h3>
-  <p>We applied the classifier to the 15T tokens of 🍷 FineWeb, a process that required 6,000 H100 GPU hours. We investigated the impact of using different thresholds for the filtering and found that threshold 3 gave the best results. The plot below shows the performance of each threshold compared to FineWeb on six different benchmarks; it uses a 1.82B model trained on 8B tokens.</p>
+  <p>We applied the classifier to the 15T tokens of 🍷 FineWeb, a process that required 6,000 H100 GPU hours. We investigated the impact of using different thresholds for the filtering and found that a threshold of 3 gave the best overall results. Although a threshold higher than 3 improves performance on knowledge- and reasoning-intensive benchmarks, it significantly degrades performance on HellaSwag and PIQA. The plot below shows the performance of each threshold compared to FineWeb on six different benchmarks; it uses a 1.82B model trained on 8B tokens.</p>
   <div class="main-plot-container">
   <figure>
   <img src="plots/edu-8k.png">
   </figure>
   <div id="plot-edu-8k"></div>
   </div>
-  <p>We then built 📚 FineWeb-Edu by filtering out samples with scores lower than 3. This removed 92% of the dataset, leaving us with 1.2T educational tokens. To evaluate the effectiveness of this filtering at a larger scale, we conducted an ablation using a 1.82B model trained on 350 billion tokens, similar to the FineWeb filtering ablation mentioned above:</p>
+  <p>We then built 📚 FineWeb-Edu by filtering out samples with scores lower than 3. This removed 92% of the dataset, leaving us with 1.3 trillion educational tokens. To evaluate the effectiveness of this filtering at a larger scale, we conducted an ablation using a 1.82B model trained on 350 billion tokens, similar to the FineWeb filtering ablation mentioned above:</p>
   <div class="main-plot-container">
   <figure>
   <img src="plots/edu-100k.png">
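A schematic of the classifier setup described in this hunk: a single regression output on top of Snowflake-arctic-embed-m, with everything except the new head frozen, trained with MSE against the Llama 3 scores at a 3e-4 learning rate. The freezing heuristic and the bare-bones training step are simplifying assumptions; the actual training code lives in the linked GitHub repository.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "Snowflake/snowflake-arctic-embed-m"
tokenizer = AutoTokenizer.from_pretrained(name)
# num_labels=1 with problem_type="regression" gives a single-output head
# trained with MSE against the annotation scores
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=1, problem_type="regression"
)

# Freeze the embedding and encoder layers and train only the new head
# (simplified here: anything that is not the classifier head is frozen)
for param_name, param in model.named_parameters():
    param.requires_grad = "classifier" in param_name

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=3e-4
)

def train_step(texts: list[str], scores: list[float]) -> float:
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = model(**batch, labels=torch.tensor(scores))  # MSE loss for regression
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```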
@@ -720,12 +721,11 @@
   <p>Here are the key highlights of the ablation results above:</p>
   <ul>
   <li>📚 FineWeb-Edu surpasses 🍷 FineWeb and all other open web datasets, with remarkable improvements on educational benchmarks such as MMLU, ARC, and OpenBookQA.</li>
-  <li>It achieves the same performance with significantly less data, requiring 10x fewer tokens compared to C4 and Dolma1.7 to match MMLU results.</li>
+  <li>It achieves the same performance with significantly less data, requiring 10x fewer tokens than C4 and Dolma to match MMLU results.</li>
   <li>This demonstrates the effectiveness of using classifiers trained on LLM annotations for large-scale data filtering.</li>
   </ul>
-  <p>Given that a threshold of 2 also demonstrated strong performance while retaining more data, we are releasing an additional dataset filtered with this threshold, containing 5.4 trillion tokens. Additionally, for research purposes, we are providing the dataset filtered with a threshold of 4 with 300 billion tokens.</p>
-  <p>You can find the three datasets along with the classifier used for the filtering in this collection:TODO</p>
-  <p><strong>TODO: add dataset links and a collection</strong></p>
+  <p>Given that a threshold of 2 also demonstrated strong performance while retaining more data, we are releasing an additional dataset filtered with this threshold, containing 5.4 trillion tokens, as <a href="https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2">HuggingFaceFW/fineweb-edu-score-2</a>.</p>
+  <p>You can find the two datasets along with the classifier used for the filtering in this <a href="https://huggingface.co/collections/HuggingFaceFW/fineweb-edu-6659c3f3d399d0e1d648adfd">collection</a>.</p>
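For reference, scoring a document with the released classifier follows the standard transformers sequence-classification API. The rounding and clamping below mirror the description above (integer scores from 0 to 5, keep scores of 3 and up for 📚 FineWeb-Edu, 2 and up for the score-2 variant), though the exact convention is our assumption.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "HuggingFaceFW/fineweb-edu-classifier"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

def edu_score(text: str) -> int:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logit = model(**inputs).logits.squeeze(-1).item()
    return max(0, min(5, round(logit)))  # clamp the regression output to [0, 5]

sample = "The water cycle describes how water evaporates, condenses, and falls as rain."
print(edu_score(sample) >= 3)  # FineWeb-Edu keeps scores of 3 and up
```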
   <h2>Next steps</h2>
   <p>We want to continue improving FineWeb and will also
   release a technical report with more details soon.</p>
@@ -750,7 +750,7 @@
   const isException = el.getAttribute('no-toc');
   if (isInTitle || isException) continue;
   el.setAttribute('id', el.textContent.toLowerCase().replaceAll(" ", "_"))
-  const link = '<a href="' + '#' + el.getAttribute('id') + '">' + el.textContent + '</a>';
+  const link = '<a target="_self" href="' + '#' + el.getAttribute('id') + '">' + el.textContent + '</a>';

   const level = el.tagName === 'H2' ? 0 : (el.tagName === 'H3' ? 1 : 2);
   while (prevLevel < level) {
@@ -774,6 +774,29 @@
   ToC += '</nav>';
   toc.innerHTML = ToC;
   toc.setAttribute('prerendered', 'true');
+  const toc_links = document.querySelectorAll('d-contents > nav a');
+
+  window.addEventListener('scroll', (_event) => {
+    if (typeof (headings) != 'undefined' && headings != null && typeof (toc_links) != 'undefined' && toc_links != null) {
+      // Iterate backwards; highlight the first heading scrolled past the top offset, then stop
+      find_active: {
+        for (let i = headings.length - 1; i >= 0; i--) {
+          if (headings[i].getBoundingClientRect().top - 50 <= 0) {
+            if (!toc_links[i].classList.contains("active")) {
+              toc_links.forEach((link, _index) => {
+                link.classList.remove("active");
+              });
+              toc_links[i].classList.add('active');
+            }
+            break find_active;
+          }
+        }
+        toc_links.forEach((link, _index) => {
+          link.classList.remove("active");
+        });
+      }
+    }
+  });
   }
   </script>
   </body>
 
style.css CHANGED
@@ -137,3 +137,7 @@ d-byline .byline {
   #title-plot {
   margin-top: 0px;
   }
+
+  d-contents > nav a.active {
+    text-decoration: underline;
+  }