eminorhan committed on
Commit
15fd916
1 Parent(s): c962c37

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -56,8 +56,8 @@ configs:
 ---
 This repository contains random subsets of the English wikipedia obtained from
 [`"wikimedia/wikipedia"`](https://huggingface.co/datasets/wikimedia/wikipedia) (`"20231101.en"`).
-It includes two subsets of the English wikipedia, one containing roughly 10M words total (23k articles), the other containing roughly 100M words total (228K articles).
-These data are intended to be used for the BabyLM challenge. For convenience, this repository also includes the full English wikipedia containing roughly 2.8B words total
+It includes two random subsets of the English wikipedia, one containing roughly 10M words total (23k articles), the other containing roughly 100M words total (228K articles).
+These data are intended to be used for the BabyLM challenge. For convenience, the repository also includes the full English wikipedia containing roughly 2.8B words total
 (6.4M articles).
 
 You can load these datasets as follows:
@@ -71,4 +71,4 @@ ds_100M = load_dataset("eminorhan/wikipedia", "100M") # 100M word subset
 ds_all = load_dataset("eminorhan/wikipedia", "all") # the full data (2.8B words)
 ```
 Both subsets come with `train`/`validation` splits, whereas the full data only has a `train` split.
-We applied lightweight preprocessing to the article texts, mainly stripping away some sections of the articles like "References" or "See also", using this script.
+We applied lightweight preprocessing to the article texts, mainly stripping away some sections of the articles like "References", "See also", *etc.*, using this script.
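For quick reference, a minimal loading sketch consistent with the updated README: the config names (`10M`, `100M`, `all`) and the split layout come from the text above, while the `text` column name is an assumption carried over from the upstream `wikimedia/wikipedia` dataset.

```python
from datasets import load_dataset

# The two BabyLM-scale subsets; each comes with train/validation splits
ds_10M = load_dataset("eminorhan/wikipedia", "10M")    # ~10M words (~23k articles)
ds_100M = load_dataset("eminorhan/wikipedia", "100M")  # ~100M words (~228K articles)

# The full English Wikipedia; train split only
ds_all = load_dataset("eminorhan/wikipedia", "all")    # ~2.8B words (~6.4M articles)

# Peek at one article (the "text" column name is assumed from the upstream
# wikimedia/wikipedia dataset this repository is derived from)
print(ds_10M["train"][0]["text"][:300])
```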
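The preprocessing script linked in the README is not shown in this diff. The snippet below is only an illustrative sketch of the kind of lightweight section-stripping the README describes; it assumes section headings such as "References" or "See also" appear as standalone lines (possibly wrapped in `==` markers) in the article text, which may not match the actual data format or the author's script.

```python
import re

# Hypothetical helper illustrating the described preprocessing: truncate an
# article at the first heading-like line such as "References" or "See also".
# This is NOT the script referenced in the README, just a sketch; the actual
# list of stripped sections may differ.
_SECTION_PATTERN = re.compile(
    r"^\s*(?:==+\s*)?(References|See also)(?:\s*==+)?\s*$",
    flags=re.MULTILINE,
)

def strip_trailing_sections(text: str) -> str:
    match = _SECTION_PATTERN.search(text)
    return text[:match.start()].rstrip() if match else text

print(strip_trailing_sections("Intro paragraph.\n\nSee also\nSome link list"))
# -> "Intro paragraph."
```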