---
language:
- en
dataset_info:
- config_name: 100M
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 650667681.8337438
    num_examples: 227211
  - name: validation
    num_bytes: 2863715.5852214186
    num_examples: 1000
  download_size: 778394816
  dataset_size: 653531397.4189652
- config_name: 10M
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 62042398.153822035
    num_examples: 21665
  - name: validation
    num_bytes: 2863715.5852214186
    num_examples: 1000
  download_size: 80814159
  dataset_size: 64906113.73904346
configs:
- config_name: 100M
  data_files:
  - split: train
    path: 100M/train-*
  - split: validation
    path: 100M/validation-*
- config_name: 10M
  data_files:
  - split: train
    path: 10M/train-*
  - split: validation
    path: 10M/validation-*
---

This repository contains a random subset of English Wikipedia (`"wikimedia/wikipedia", "20231101.en"`). It includes two versions of the dataset: one containing roughly 10M words in total (~23K articles), the other containing roughly 100M words in total (~228K articles). These data are intended for use in the BabyLM challenge.

You can load these datasets as follows:

```python
from datasets import load_dataset

ds_10M = load_dataset("eminorhan/random_wikipedia", "10M")
ds_100M = load_dataset("eminorhan/random_wikipedia", "100M")
```

Both datasets come with `train`/`validation` splits.
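
As a quick sanity check, you can inspect the split sizes and look at a sample article through the `id`, `url`, `title`, and `text` fields listed in the config above. A minimal sketch using the standard `datasets` API:

```python
# Print split names and sizes for the 10M-word version
print(ds_10M)

# Peek at the first training article
example = ds_10M["train"][0]
print(example["title"], example["url"])
print(example["text"][:200])  # first 200 characters of the article text
```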