BramVanroy committed
Commit 8e81d3f
Parent(s): 8a9c55d
Update README.md

README.md CHANGED
@@ -288,7 +288,7 @@ This is a combined and filtered version of [CulturaX](https://huggingface.co/dat
 
 Different configs are available based on the number of tokens (see a section below with an overview). This can be useful if you want to know exactly how many tokens you have. Great for using as a streaming dataset, too. Tokenization is done with the large vocabulary of the `google/gemma-2b` tokenizer, so depending on your tokenizer these exact numbers may differ.
 
-Every config also has a test set (for validation) of 1% of the total size of the dataset, at minimum 1 and at most 64k samples (
+Every config also has a test set (for validation) of 1% of the total size of the dataset, at minimum 1 and at most 64k samples (~26M tokens).
 
 Wikipedia and CulturaX were shuffled before merging, and the test set creation was also shuffled. Priority is given to Wikipedia to prioritize knowledge content, so the smaller configs consist exclusively of Wikipedia while the larger configs are augmented with CulturaX. Every config builds on the previous one, so every config contains the same data as the smaller ones and more. HOWEVER, their train/test splits are not the same, so the test set of one config may overlap with the training samples of another. This is usually not a problem, but be aware that you should not train on one config's training set and evaluate with another config's test set.
 
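The paragraph on configs and tokenization in the diff above suggests a quick sanity check: stream the dataset and count tokens with the same `google/gemma-2b` tokenizer used to size the configs. Below is a minimal sketch of that check; the repo ID, config name (`10k`), and `text` column are illustrative assumptions, not taken from this commit.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Note: `google/gemma-2b` is a gated model, so loading its tokenizer may
# require accepting the license and logging in via `huggingface-cli login`.
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

# Repo ID, config name, and column name are illustrative placeholders.
# Streaming avoids downloading the full config up front.
stream = load_dataset(
    "BramVanroy/wikipedia_culturax_dutch", "10k",
    split="train", streaming=True,
)

total_tokens = 0
for sample in stream.take(100):  # peek at the first 100 samples only
    total_tokens += len(tokenizer(sample["text"])["input_ids"])

print(f"Tokens in the first 100 samples: {total_tokens:,}")
```

As the README notes, swapping in a different tokenizer will shift these counts away from the token totals the config names advertise.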
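The changed line pins down the test-set sizing rule: 1% of the dataset, clamped to at least 1 and at most 64k samples. A one-function sketch of that rule (the function name is ours, not the repo's):

```python
def test_set_size(num_samples: int) -> int:
    """1% of the dataset, clamped to the range [1, 64_000] samples."""
    return max(1, min(64_000, num_samples // 100))

assert test_set_size(50) == 1                # tiny config: floor of 1 sample
assert test_set_size(1_000_000) == 10_000    # 1% of 1M samples
assert test_set_size(100_000_000) == 64_000  # hits the 64k cap (~26M tokens)
```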
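Because the splits are drawn separately per config, the final paragraph's caveat matters in practice: always take `train` and `test` from the same config. A sketch, again with placeholder repo and config names:

```python
from datasets import load_dataset

config = "1B"  # placeholder config name
train = load_dataset("BramVanroy/wikipedia_culturax_dutch", config, split="train")
test = load_dataset("BramVanroy/wikipedia_culturax_dutch", config, split="test")

# Do NOT mix configs, e.g. evaluating a model trained on a "10B" config
# against this "1B" test set: a larger config's train split may contain
# samples that appear in a smaller config's test split.
```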