BerenMillidge committed
Update README.md
README.md
CHANGED
@@ -36,7 +36,7 @@ configs:
Zyda-2 is a 5 trillion token language modeling dataset created by collecting open, high-quality datasets, combining them, and applying cross-deduplication and model-based quality filtering. Zyda-2 comprises diverse sources of web data, highly educational content, math, code, and scientific papers.

-To construct Zyda-2, we took the best open-source datasets available: Zyda, FineWeb, DCLM, Dolma. Models trained on Zyda-2 significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Due to our post-processing deduplication, filtering, and weighting pipeline, Zyda-2 outperforms all its constituent datasets in resulting model quality.
+To construct Zyda-2, we took the best open-source datasets available: [Zyda](https://huggingface.co/datasets/Zyphra/Zyda), [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0), and [Dolma](https://huggingface.co/datasets/allenai/dolma). Models trained on Zyda-2 significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Due to our post-processing deduplication, filtering, and weighting pipeline, Zyda-2 outperforms all its constituent datasets in resulting model quality.

An early version of Zyda-2 was used as the primary dataset for phase 1 pretraining of our Zamba2 [series](https://huggingface.co/Zyphra/Zamba2-7B) [of](https://huggingface.co/Zyphra/Zamba2-2.7B) [models](https://huggingface.co/Zyphra/Zamba2-1.2B), which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda-2 as a pretraining dataset.
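
For readers who want to inspect the data this card describes, below is a minimal loading sketch using the Hugging Face `datasets` library. The repository id `Zyphra/Zyda-2` is assumed from this card's organization, and the exact subset names are not shown here; consult the `configs` block referenced in the hunk header above for the actual config names.

```python
# Minimal sketch (not part of the dataset card): stream a few documents from
# Zyda-2 with the Hugging Face `datasets` library. The repo id is assumed; if
# the card defines multiple configs, pass one via `name=...`.
from datasets import load_dataset

dataset = load_dataset(
    "Zyphra/Zyda-2",  # assumed repo id for this dataset card
    split="train",
    streaming=True,   # avoid downloading the full 5-trillion-token corpus
)

# Peek at the first few records to see the available fields.
for i, example in enumerate(dataset):
    print(sorted(example.keys()))
    if i == 2:
        break
```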