|
--- |
|
task_categories: |
|
- text-generation |
|
language: |
|
- en |
|
tags: |
|
- math |
|
size_categories: |
|
- 10B<n<100B |
|
configs: |
|
- config_name: arxiv |
|
data_files: |
|
- split: train |
|
path: arxiv/train/*.jsonl.zst |
|
- split: validation |
|
path: arxiv/validation/*.jsonl.zst |
|
- split: test |
|
path: arxiv/test/*.jsonl.zst |
|
- config_name: open-web-math |
|
data_files: |
|
- split: train |
|
path: open-web-math/train/*.jsonl.zst |
|
- split: validation |
|
path: open-web-math/validation/*.jsonl.zst |
|
- split: test |
|
path: open-web-math/test/*.jsonl.zst |
|
- config_name: algebraic-stack |
|
data_files: |
|
- split: train |
|
path: algebraic-stack/train/*.jsonl.zst |
|
- split: validation |
|
path: algebraic-stack/validation/*.jsonl.zst |
|
- split: test |
|
path: algebraic-stack/test/*.jsonl.zst |
|
--- |
|
<img src="proofpile_logo.jpg" width="500"> |
|
|
|
[ArXiv](http://arxiv.org/abs/2310.10631) | [Models](https://huggingface.co/EleutherAI/llemma_34b) | [Data](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | [Code](https://github.com/EleutherAI/math-lm) | [Blog](https://blog.eleuther.ai/llemma/) | [Sample Explorer](https://llemma-demo.github.io/) |
|
|
|
[Zhangir Azerbayev](https://zhangir-azerbayev.github.io/), [Hailey Schoelkopf](https://github.com/haileyschoelkopf), [Keiran Paster](https://keirp.com), [Marco Dos Santos](https://github.com/dsantosmarco), [Stephen McAleer](https://www.andrew.cmu.edu/user/smcaleer/), [Albert Q. Jiang](https://albertqjiang.github.io/), [Jia Deng](https://www.cs.princeton.edu/~jiadeng/), [Stella Biderman](https://www.stellabiderman.com/), [Sean Welleck](https://wellecks.com/) |
|
|
|
The **Proof-Pile-2** is a 55-billion-token dataset of mathematical and scientific documents, created to train the [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) and [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b) models. It consists of three subsets:
|
- `arxiv` (29B tokens): the ArXiv subset of [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).

- `open-web-math` (15B tokens): the [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) dataset, which contains much of the high-quality mathematical text from the internet.

- `algebraic-stack` (11B tokens): a new dataset of mathematical code, including numerical computing, computer algebra, and formal mathematics.
|
|
|
You can download the dataset as follows:
|
```python
from datasets import load_dataset

ds = load_dataset("EleutherAI/proof-pile-2")

# To load only a specific subset, pass its config name as an argument, e.g.
ds_arxiv = load_dataset("EleutherAI/proof-pile-2", "arxiv")
```
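
Since the full dataset contains 55B tokens, it may be more convenient to stream it than to download it in full. A minimal sketch using the `datasets` library's streaming mode:

```python
from datasets import load_dataset

# Stream the arxiv subset: rows are fetched lazily as you iterate,
# so nothing needs to be downloaded up front.
ds_stream = load_dataset("EleutherAI/proof-pile-2", "arxiv", split="train", streaming=True)

for row in ds_stream:
    print(row["text"][:200])
    break
```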
|
|
|
### Schema |
|
Each row of the dataset has the following structure:
|
```python
{
    "text": ...,  # document text
    "meta": ...,  # JSON string of metadata; schema is specific to the data source
}
```
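
Note that `meta` is a JSON string rather than a nested object, and its schema differs across the three subsets. A minimal sketch of decoding it (the exact keys you see depend on the data source):

```python
import json

from datasets import load_dataset

ds = load_dataset("EleutherAI/proof-pile-2", "arxiv", split="validation")

# Decode the per-document metadata; the resulting dict's keys are
# specific to the originating data source.
meta = json.loads(ds[0]["meta"])
print(meta.keys())
```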
|
|
|
### Dataset Contents |
|
For detailed documentation of the ArXiv and web subsets, refer to [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) and [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math). The following table enumerates the contents of AlgebraicStack by programming language. AlgebraicStack is filtered to include only documents that contain mathematics, as judged by hand-crafted, language-specific heuristics.
|
|
|
| Language  | AlgebraicStack tokens |
|-----------|-----------------------|
| Agda      | 35.2 M                |
| C         | 25.1 M                |
| C++       | 954.1 M               |
| Coq       | 281.9 M               |
| Fortran   | 724.9 M               |
| GAP       | 3.6 M                 |
| Haskell   | 9.1 M                 |
| Idris     | 10.9 M                |
| Isabelle  | 1,089.7 M             |
| Julia     | 531.0 M               |
| Jupyter   | 199.1 M               |
| Lean      | 285.6 M               |
| Maple     | 2.0 M                 |
| Matlab    | 65.8 M                |
| Python    | 6,098.8 M             |
| R         | 71.3 M                |
| TeX       | 567.7 M               |
| **Total** | **10,955.7 M**        |
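
The shards themselves are stored as zstandard-compressed JSON Lines files (`.jsonl.zst`). If you prefer to read a downloaded shard directly rather than through `datasets`, here is a minimal sketch using the third-party `zstandard` package (the shard path below is illustrative; substitute a real local file):

```python
import io
import json

import zstandard as zstd

# Illustrative path; point this at a shard you have downloaded locally.
path = "arxiv/train/example.jsonl.zst"

with open(path, "rb") as fh:
    # Decompress on the fly and iterate over one JSON document per line.
    reader = zstd.ZstdDecompressor().stream_reader(fh)
    for line in io.TextIOWrapper(reader, encoding="utf-8"):
        doc = json.loads(line)
        print(doc["text"][:200])
        break
```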
|
|
|
### License |
|
We do not alter the license of any of the underlying data. |
|
|
|
### Version History |
|
**v1.1.0**: Contains an updated version of OpenWebMath, namely the one available at [open-web-math/open-web-math](https://huggingface.co/datasets/open-web-math/open-web-math). This version has slightly improved filtering, for example, the removal of very short documents.
|
|
|
**v1.0.0**: The data used to train the [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) and [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b) models; uses a development version of OpenWebMath.
|
|
|
### Citation |
|
For the entire Proof-Pile-2, cite:
|
``` |
|
@misc{azerbayev2023llemma, |
|
title={Llemma: An Open Language Model For Mathematics}, |
|
author={Zhangir Azerbayev and Hailey Schoelkopf and Keiran Paster and Marco Dos Santos and Stephen McAleer and Albert Q. Jiang and Jia Deng and Stella Biderman and Sean Welleck}, |
|
year={2023}, |
|
eprint={2310.10631}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL} |
|
} |
|
``` |
|
For the ArXiv subset, cite:
|
``` |
|
@software{together2023redpajama, |
|
author = {Together Computer}, |
|
title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset}, |
|
  month = apr,
|
year = 2023, |
|
url = {https://github.com/togethercomputer/RedPajama-Data} |
|
} |
|
``` |
|
For OpenWebMath, cite:
|
``` |
|
@misc{paster2023openwebmath, |
|
title={OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text}, |
|
author={Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba}, |
|
year={2023}, |
|
eprint={2310.06786}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.AI} |
|
} |
|
``` |
|
|