BramVanroy committed (verified)
Commit 0b8891c · 1 Parent(s): 30cb9b7

Update README.md

Files changed (1): README.md (+13 -13)
README.md CHANGED
@@ -293,13 +293,13 @@ This is a combined and filtered version of [CulturaX](https://huggingface.co/dat

 Different configs are available based on the number of tokens (see a section below with an overview). This can be useful if you want to know exactly how many tokens you have. Great for using as a streaming dataset, too. Tokens are counted as white-space tokens, so depending on your tokenizer, you'll likely end up with more tokens than indicated here.

- Every config also has a test set (for validation) of 1% of the total dataset size, with a minimum of 1 and a maximum of 64k samples (~26M tokens).
+ Every config also has a test set (for validation) of 1% of the total dataset size, with a minimum of 1 and a maximum of 64k samples (~16M tokens).

 Wikipedia and CulturaX were shuffled before merging, and the test set creation was also shuffled. Priority is given to Wikipedia to prioritize knowledge content, so the smaller configs consist exclusively of Wikipedia, while the larger configs are augmented with CulturaX. Every config builds on the previous one, so each config contains the same data as the smaller ones and more. HOWEVER, their train/test splits are not the same, so the test set of one config may overlap with the training set of another. This is usually not a problem, but be aware that you should not train on one config's training set and test on another config's test set.

 ## Configs

- ### 10k -- 79 samples -- 10,087 tokens
+ ### `10k` -- 79 samples -- 10,087 tokens
 - ratio_wikipedia: 100.00%
 - total_num_tokens: 10,087
 - train_num_tokens: 9,205
@@ -308,7 +308,7 @@ Wikipedia and CulturaX were shuffled before merging and the test set creation wa
 - train_num_samples: 78
 - test_num_samples: 1

- ### 100k -- 1,057 samples -- 100,075 tokens
+ ### `100k` -- 1,057 samples -- 100,075 tokens
 - ratio_wikipedia: 100.00%
 - total_num_tokens: 100,075
 - train_num_tokens: 98,044
@@ -317,7 +317,7 @@ Wikipedia and CulturaX were shuffled before merging and the test set creation wa
 - train_num_samples: 1,047
 - test_num_samples: 10

- ### 1M -- 10,802 samples -- 1,000,239 tokens
+ ### `1M` -- 10,802 samples -- 1,000,239 tokens
 - ratio_wikipedia: 100.00%
 - total_num_tokens: 1,000,239
 - train_num_tokens: 991,119
@@ -326,7 +326,7 @@ Wikipedia and CulturaX were shuffled before merging and the test set creation wa
 - train_num_samples: 10,694
 - test_num_samples: 108

- ### 10M -- 141,263 samples -- 10,000,022 tokens
+ ### `10M` -- 141,263 samples -- 10,000,022 tokens
 - ratio_wikipedia: 100.00%
 - total_num_tokens: 10,000,022
 - train_num_tokens: 9,874,772
@@ -335,7 +335,7 @@ Wikipedia and CulturaX were shuffled before merging and the test set creation wa
 - train_num_samples: 139,851
 - test_num_samples: 1,412

- ### 100M -- 1,028,484 samples -- 100,000,047 tokens
+ ### `100M` -- 1,028,484 samples -- 100,000,047 tokens
 - ratio_wikipedia: 100.00%
 - total_num_tokens: 100,000,047
 - train_num_tokens: 99,013,372
@@ -344,7 +344,7 @@ Wikipedia and CulturaX were shuffled before merging and the test set creation wa
 - train_num_samples: 1,018,200
 - test_num_samples: 10,284

- ### 1B -- 5,153,898 samples -- 1,000,000,187 tokens
+ ### `1B` -- 5,153,898 samples -- 1,000,000,187 tokens
 - ratio_wikipedia: 61.21%
 - total_num_tokens: 1,000,000,187
 - train_num_tokens: 989,990,190
@@ -353,7 +353,7 @@ Wikipedia and CulturaX were shuffled before merging and the test set creation wa
 - train_num_samples: 5,102,360
 - test_num_samples: 51,538

- ### 5B -- 20,833,009 samples -- 5,000,000,076 tokens
+ ### `5B` -- 20,833,009 samples -- 5,000,000,076 tokens
 - ratio_wikipedia: 25.35%
 - total_num_tokens: 5,000,000,076
 - train_num_tokens: 4,984,493,654
@@ -362,7 +362,7 @@ Wikipedia and CulturaX were shuffled before merging and the test set creation wa
 - train_num_samples: 20,769,009
 - test_num_samples: 64,000

- ### 10B -- 40,240,566 samples -- 10,000,000,115 tokens
+ ### `10B` -- 40,240,566 samples -- 10,000,000,115 tokens
 - ratio_wikipedia: 18.41%
 - total_num_tokens: 10,000,000,115
 - train_num_tokens: 9,984,156,828
@@ -371,7 +371,7 @@ Wikipedia and CulturaX were shuffled before merging and the test set creation wa
 - train_num_samples: 40,176,566
 - test_num_samples: 64,000

- ### 15B -- 59,648,123 samples -- 15,000,000,154 tokens
+ ### `15B` -- 59,648,123 samples -- 15,000,000,154 tokens
 - ratio_wikipedia: 15.98%
 - total_num_tokens: 15,000,000,154
 - train_num_tokens: 14,983,970,518
@@ -380,7 +380,7 @@ Wikipedia and CulturaX were shuffled before merging and the test set creation wa
 - train_num_samples: 59,584,123
 - test_num_samples: 64,000

- ### 20B -- 79,055,679 samples -- 20,000,000,009 tokens
+ ### `20B` -- 79,055,679 samples -- 20,000,000,009 tokens
 - ratio_wikipedia: 14.75%
 - total_num_tokens: 20,000,000,009
 - train_num_tokens: 19,983,799,357
@@ -389,7 +389,7 @@ Wikipedia and CulturaX were shuffled before merging and the test set creation wa
 - train_num_samples: 78,991,679
 - test_num_samples: 64,000

- ### 25B -- 98,463,236 samples -- 25,000,000,048 tokens
+ ### `25B` -- 98,463,236 samples -- 25,000,000,048 tokens
 - ratio_wikipedia: 14.00%
 - total_num_tokens: 25,000,000,048
 - train_num_tokens: 24,983,765,326
@@ -398,7 +398,7 @@ Wikipedia and CulturaX were shuffled before merging and the test set creation wa
 - train_num_samples: 98,399,236
 - test_num_samples: 64,000

- ### 30B -- 117,870,793 samples -- 30,000,000,087 tokens
+ ### `30B` -- 117,870,793 samples -- 30,000,000,087 tokens
 - ratio_wikipedia: 13.50%
 - total_num_tokens: 30,000,000,087
 - train_num_tokens: 29,983,707,932
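
---

For orientation, the config-by-token-count setup described in the updated README can be used roughly as sketched below; this example is not part of the diff. The repo id (`BramVanroy/<dataset-name>`) and the `text` column name are assumptions, not taken from the original.

```python
# Minimal sketch, not part of the original README. The repo id
# "BramVanroy/<dataset-name>" and the "text" column are assumptions.
from datasets import load_dataset

# Pick a config by its token budget (e.g. "10B") and stream it so the
# full split is not downloaded up front.
ds = load_dataset("BramVanroy/<dataset-name>", name="10B", split="train", streaming=True)

total_ws_tokens = 0
for example in ds.take(1_000):
    # Tokens in the config overview are counted as white-space tokens,
    # so a subword tokenizer will usually report a higher count.
    total_ws_tokens += len(example["text"].split())

print(total_ws_tokens)
```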
 
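The test-split sizing rule (1% of the samples, at least 1, at most 64k) can also be written out. The exact rounding is an assumption, but simple integer division reproduces the `test_num_samples` figures listed in the config overview above.

```python
# Sketch of the test-split sizing rule described in the README:
# 1% of the samples, floored to an integer, at least 1, at most 64,000.
def test_split_size(num_samples: int) -> int:
    return min(max(num_samples // 100, 1), 64_000)

# Checks against the figures in the config overview:
assert test_split_size(79) == 1               # 10k config
assert test_split_size(1_057) == 10           # 100k config
assert test_split_size(141_263) == 1_412      # 10M config
assert test_split_size(5_153_898) == 51_538   # 1B config
assert test_split_size(20_833_009) == 64_000  # 5B config (capped)
```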