---
language:
- en
dataset_info:
- config_name: 100M
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 650667681.8337438
    num_examples: 227211
  - name: validation
    num_bytes: 2863715.5852214186
    num_examples: 1000
  download_size: 778394816
  dataset_size: 653531397.4189652
- config_name: 10M
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 62042398.153822035
    num_examples: 21665
  - name: validation
    num_bytes: 2863715.5852214186
    num_examples: 1000
  download_size: 80814159
  dataset_size: 64906113.73904346
configs:
- config_name: 100M
  data_files:
  - split: train
    path: 100M/train-*
  - split: validation
    path: 100M/validation-*
- config_name: 10M
  data_files:
  - split: train
    path: 10M/train-*
  - split: validation
    path: 10M/validation-*
---

This repository contains random subsets of English Wikipedia obtained from [`"wikimedia/wikipedia"`](https://huggingface.co/datasets/wikimedia/wikipedia) (`"20231101.en"`): one containing roughly 10M words in total (23k articles) and the other roughly 100M words in total (228k articles). These data are intended for the BabyLM challenge. For convenience, the repository also includes the full English Wikipedia, containing roughly 2.8B words in total (6.4M articles).

You can load these datasets as follows:

```python
from datasets import load_dataset

ds_10M = load_dataset("eminorhan/wikipedia", "10M")    # 10M-word subset
ds_100M = load_dataset("eminorhan/wikipedia", "100M")  # 100M-word subset
ds_all = load_dataset("eminorhan/wikipedia", "all")    # the full data (2.8B words)
```

Both subsets come with `train`/`validation` splits, whereas the full data only has a `train` split.

We applied lightweight preprocessing to the article texts, mainly stripping away sections of the articles such as "References", "See also", *etc.*, using this script.
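
The preprocessing script itself is not reproduced in this card, but the sketch below illustrates the kind of section stripping described above. The `strip_trailing_sections` helper and the list of section titles are assumptions for illustration only, not the repository's actual code.

```python
import re

# Hypothetical helper illustrating the kind of lightweight preprocessing
# described above: truncate an article at the first trailing section such
# as "References" or "See also". This is NOT the actual script used to
# build the dataset; the section titles below are assumptions.
TRAILING_SECTIONS = ("References", "See also", "External links", "Further reading", "Notes")

def strip_trailing_sections(text: str) -> str:
    # Find a line consisting solely of one of the section titles and
    # drop everything from that line onward.
    pattern = r"^\s*(?:" + "|".join(re.escape(s) for s in TRAILING_SECTIONS) + r")\s*$"
    match = re.search(pattern, text, flags=re.MULTILINE)
    return text[:match.start()].rstrip() if match else text
```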
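
If you want to sanity-check the approximate sizes quoted above, a rough whitespace-based word count over the splits looks like this (a minimal sketch; the counts will differ slightly from the quoted figures depending on how words are tokenized):

```python
from datasets import load_dataset

ds_10M = load_dataset("eminorhan/wikipedia", "10M")

# Rough whitespace word counts per split; expect roughly 10M words in "train"
# and a 1,000-article "validation" split.
for split in ds_10M:
    n_words = sum(len(example["text"].split()) for example in ds_10M[split])
    print(f"{split}: {ds_10M[split].num_rows} articles, ~{n_words:,} words")
```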