yury-zyphra committed
Commit cb4049c
1 Parent(s): 6de3b89
Update README.md
README.md CHANGED
@@ -73,7 +73,7 @@ In this repository we provide raw results of cross deduplication and filtering.
 We found the following optimal weights by number of tokens (in the sense of weights in the resultant dataset): DCLM - 4.0, FWE3 - 4.0, Zyda - 0.16, Dolma-CC - 0.24.

 Below you will find an example of how to get a proper dataset object.
-It demonstrates how to select only `nemo_id` and `text` columns, and then interleave the
+It demonstrates how to select only `nemo_id` and `text` columns, and then interleave the datasets with probabilities computed from the weights above.
 One needs to be careful with weights normalization, as `interleave_datasets()` returns documents, while our weights are token-wise. We provide precomputed document-wise weights in the example below.
 To stream the dataset, add `streaming=True` to the `load_dataset()` commands.

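For context, here is a minimal sketch of what such an interleaving setup could look like with the Hugging Face `datasets` library. It is not the README's actual example: the repository path and config names are assumptions made for illustration, and the token-wise weights are simply normalized into sampling probabilities, whereas the real example uses precomputed document-wise weights.

```python
# Rough sketch only. Repository path and config names below are hypothetical;
# check the dataset card for the real ones.
from datasets import load_dataset, interleave_datasets

# Token-wise weights stated in the README (DCLM, FWE3, Zyda, Dolma-CC),
# keyed by assumed config names.
token_weights = {
    "dclm_crossdeduped": 4.0,
    "fwe3": 4.0,
    "zyda_crossdeduped-filtered": 0.16,
    "dolma-cc_crossdeduped-filtered": 0.24,
}

components = []
for config in token_weights:
    # Add streaming=True here to stream instead of downloading.
    ds = load_dataset("Zyphra/Zyda-2", name=config, split="train")
    # Keep only the `nemo_id` and `text` columns.
    ds = ds.select_columns(["nemo_id", "text"])
    components.append(ds)

# interleave_datasets() samples documents, so token-wise weights should really
# be converted to document-wise weights (e.g. divided by each component's
# average document length). Plain normalization is used here as a placeholder.
total = sum(token_weights.values())
probabilities = [w / total for w in token_weights.values()]

dataset = interleave_datasets(components, probabilities=probabilities, seed=42)
```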