---
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- web
- common crawl
size_categories:
- 10B<n<100B
---
# fineweb-pro
<p align="center">
  <img src="prox-teaser.png">
</p>

[ArXiv](http://arxiv.org/abs/2409.17115) | [Models](https://huggingface.co/collections/gair-prox/prox-general-models-65f1674f0607712c4d6eec76) | [Code](https://github.com/GAIR-NLP/ProX)
fineweb-pro is refined from [fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) (350BT sample) using the **ProX** refining framework.
It contains about 100B high-quality tokens, ready for general language model pre-training.
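The data is stored as parquet shards, so it can be loaded with the Hugging Face `datasets` library. Below is a minimal loading sketch; the repository id `gair-prox/FineWeb-pro` and the `text` column name are assumptions inferred from this card's organization and fineweb's schema, so adjust them if your hub path or schema differs.

```python
# Minimal loading sketch using the Hugging Face `datasets` library.
from datasets import load_dataset

# Stream the shards rather than downloading ~100B tokens up front.
# The repo id "gair-prox/FineWeb-pro" is assumed from this card.
ds = load_dataset("gair-prox/FineWeb-pro", split="train", streaming=True)

# Peek at a few documents; a "text" column is assumed, matching fineweb.
for example in ds.take(3):
    print(example["text"][:200])
```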
## License
fineweb-pro is based on fineweb, which is made available under the ODC-By 1.0 license; users should also abide by the CommonCrawl Terms of Use: https://commoncrawl.org/terms-of-use/. We do not alter the license of any of the underlying data.
### Citation
```
@article{zhou2024programming,
  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
  journal={arXiv preprint arXiv:2409.17115},
  year={2024}
}
```