zhangir-azerbayev committed
Commit 1b88a1d • 1 Parent(s): 152fd61
update readme
README.md
CHANGED
@@ -14,7 +14,7 @@ size_categories:
 
 [Github ](https://github.com/EleutherAI/math-lm) | [ArXiv](#)
 
-The **Proof-Pile-2** is a 55 billion token dataset of mathematical and scientific documents. It consists of three subsets:
+The **Proof-Pile-2** is a 55 billion token dataset of mathematical and scientific documents. This dataset was created in order to train the [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) and [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b) models. It consists of three subsets:
 - `arxiv` (29B tokens): the ArXiv subset of [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
 - `open-web-math` (15B tokens): The [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) dataset, which contains much of the high-quality mathematical text from the internet.
 - `algebraic-stack` (11B tokens): A new dataset of mathematical code, including numerical computing, computer algebra, and formal mathematics.
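
The subsets listed in the updated README correspond to dataset configurations on the Hugging Face Hub. Below is a minimal loading sketch using the `datasets` library; the repository id `EleutherAI/proof-pile-2`, the config names matching the subset names above, and the `text` field are assumptions, not confirmed by this commit.

```python
# Minimal sketch: stream one Proof-Pile-2 subset with the Hugging Face `datasets` library.
# Assumes the dataset repo id is "EleutherAI/proof-pile-2" and that the config names
# ("arxiv", "open-web-math", "algebraic-stack") match the subset names in the README.
from datasets import load_dataset

# Streaming avoids downloading the full ~55B-token dataset up front.
algebraic_stack = load_dataset(
    "EleutherAI/proof-pile-2",
    "algebraic-stack",
    split="train",
    streaming=True,
)

# Print the first few records (each record is assumed to carry a "text" field).
for i, example in enumerate(algebraic_stack):
    print(example["text"][:200])
    if i >= 2:
        break
```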