zhangir-azerbayev committed
Commit ac8dab6
1 Parent(s): e6cbf7f
Files changed (2)
  1. README.md +92 -1
  2. proofpile_logo.jpg +3 -0
README.md CHANGED
@@ -7,4 +7,95 @@ tags:
  - math
  size_categories:
  - 10B<n<100B
- ---
+ ---
+ <img src="proofpile_logo.jpg" width="500">
+
+ [Zhangir Azerbayev](https://zhangir-azerbayev.github.io/), [Hailey Schoelkopf](https://github.com/haileyschoelkopf), [Keiran Paster](https://keirp.com), [Marco Dos Santos](https://github.com/dsantosmarco), [Stephen McAleer](https://www.andrew.cmu.edu/user/smcaleer/), [Albert Q. Jiang](https://albertqjiang.github.io/), [Jia Deng](https://www.cs.princeton.edu/~jiadeng/), [Stella Biderman](https://www.stellabiderman.com/), [Sean Welleck](https://wellecks.com/)
+
+ [GitHub](https://github.com/EleutherAI/math-lm) | [ArXiv](#)
+
+ The **Proof-Pile-2** is a 55-billion-token dataset of mathematical and scientific documents. It consists of three subsets:
+ - `arxiv` (29B tokens): the ArXiv subset of [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
+ - `open-web-math` (15B tokens): the [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) dataset, which contains much of the high-quality mathematical text on the internet
+ - `algebraic-stack` (11B tokens): a new dataset of mathematical code, including numerical computing, computer algebra, and formal mathematics
+
+ You can download the dataset as follows:
+ ```
+ from datasets import load_dataset
+ ds = load_dataset("EleutherAI/proof-pile-2")
+
+ # To load only a specific subset, pass it as an argument, e.g.
+ ds_arxiv = load_dataset("EleutherAI/proof-pile-2", "arxiv")
+ ```
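+
+ Because the full dataset contains tens of billions of tokens, it may be more practical to stream it than to download it in full. A minimal sketch using the `streaming` option of `datasets.load_dataset` (the `arxiv` subset is used here purely as an example):
+ ```
+ from datasets import load_dataset
+
+ # Stream the arxiv subset instead of materializing it on disk
+ ds_stream = load_dataset("EleutherAI/proof-pile-2", "arxiv", streaming=True)
+
+ # Inspect the first few documents
+ for i, row in enumerate(ds_stream["train"]):
+     print(row["text"][:100])
+     if i >= 2:
+         break
+ ```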
+
+ ### Schema
+ Each dataset row has the following structure:
+ ```
+ {
+   "text": ...,  # document text
+   "meta": ...,  # JSON string of metadata, schema specific to data source
+ }
+ ```
+
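+ Since `meta` is stored as a JSON string, it can be decoded with the standard `json` module. A short sketch (the exact keys vary by data source, so `meta.keys()` is only for inspection):
+ ```
+ import json
+ from datasets import load_dataset
+
+ # Stream a single row and decode its metadata
+ ds = load_dataset("EleutherAI/proof-pile-2", "arxiv", split="train", streaming=True)
+ row = next(iter(ds))
+ meta = json.loads(row["meta"])  # schema is specific to the data source
+ print(meta.keys())
+ ```
+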
+ ### Dataset Contents
+ For detailed documentation of the ArXiv and web subsets, refer to [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) and [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math). The following table enumerates the contents of AlgebraicStack by programming language. AlgebraicStack is filtered to include only documents that contain mathematics, as judged by hand-crafted, language-specific heuristics; an illustrative (hypothetical) sketch of such a filter follows the table.
+
+ | Language  | AlgebraicStack tokens |
+ |-----------|-----------------------|
+ | Agda      | 35.2 M                |
+ | C         | 25.1 M                |
+ | C++       | 954.1 M               |
+ | Coq       | 281.9 M               |
+ | Fortran   | 724.9 M               |
+ | GAP       | 3.6 M                 |
+ | Haskell   | 9.1 M                 |
+ | Idris     | 10.9 M                |
+ | Isabelle  | 1,089.7 M             |
+ | Julia     | 531.0 M               |
+ | Jupyter   | 199.1 M               |
+ | Lean      | 285.6 M               |
+ | Maple     | 2.0 M                 |
+ | Matlab    | 65.8 M                |
+ | Python    | 6,098.8 M             |
+ | R         | 71.3 M                |
+ | TeX       | 567.7 M               |
+ | **Total** | **10,955.7 M**        |
+
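+ For illustration only, a filter of this kind might resemble the sketch below. This is a hypothetical example, not the actual filter used to build AlgebraicStack; the real heuristics are hand-crafted per language:
+ ```
+ MATH_KEYWORDS = ["theorem", "lemma", "eigenvalue", "integral", "matrix"]  # hypothetical keyword list
+
+ def looks_mathematical(document: str, min_hits: int = 2) -> bool:
+     """Toy heuristic: keep a document if it mentions enough math terms."""
+     text = document.lower()
+     hits = sum(text.count(kw) for kw in MATH_KEYWORDS)
+     return hits >= min_hits
+
+ assert looks_mathematical("We prove the theorem by bounding the integral.")
+ assert not looks_mathematical("def main(): print('hello world')")
+ ```
+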
+ ### License
+ We do not alter the license of any of the underlying data.
+
+ ### Version History
+ **v1.0.0**: The data used to train the [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) and [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b) models. Uses a development version of OpenWebMath.
+
+ ### Citation
+ For the entire Proof-Pile-2, cite:
+ ```
+ @article{azerbayev2023llemma,
+   title={Llemma: an open language model for mathematics},
+   author={Zhangir Azerbayev and Hailey Schoelkopf and Keiran Paster and Marco Dos Santos and Stephen McAleer and Albert Q. Jiang and Jia Deng and Stella Biderman and Sean Welleck},
+   eprint={xyz.xyz},
+   archivePrefix={arXiv},
+   year={2023}
+ }
+ ```
+ For the ArXiv subset, cite:
+ ```
+ @software{together2023redpajama,
+   author = {Together Computer},
+   title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
+   month = {April},
+   year = {2023},
+   url = {https://github.com/togethercomputer/RedPajama-Data}
+ }
+ ```
+ For OpenWebMath, cite:
+ ```
+ @misc{paster2023openwebmath,
+   title={OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text},
+   author={Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
+   year={2023},
+   eprint={2310.06786},
+   archivePrefix={arXiv},
+   primaryClass={cs.AI}
+ }
+ ```
proofpile_logo.jpg ADDED

Git LFS Details

  • SHA256: 66851a518fffeebcf59ff1472ab32f59d64e4005ec94056b6a74207285662cbc
  • Pointer size: 131 Bytes
  • Size of remote file: 123 kB