---
language:
- en
dataset_info:
  features:
  - name: text
    dtype: string
  - name: metadata
    struct:
    - name: pile_set_name
      sequence: string
  - name: id
    dtype: int64
  splits:
  - name: train
    num_bytes: 641011558
    num_examples: 403027
  download_size: 397624536
  dataset_size: 641011558
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
## Description
This dataset is a sampled subset of the Pile dataset.
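Each example follows the feature schema declared in the metadata block above: a `text` string, a `metadata` struct carrying the source Pile subset name, and an integer `id`. A minimal sketch for inspecting one row (field names taken from that schema):

```python
from datasets import load_dataset

ds = load_dataset("PatrickHaller/pile-100M-words")

row = ds["train"][0]
print(row["id"])                         # integer example id
print(row["metadata"]["pile_set_name"])  # source Pile subset for this example
print(row["text"][:200])                 # first 200 characters of the document
```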
The number of examples sampled from each Pile subset is:
```python
{
    'Pile-CC': 198245,
    'OpenWebText2': 122382,
    'FreeLaw': 37517,
    'USPTO Backgrounds': 10195,
    'Wikipedia (en)': 8072,
    'PubMed Central': 5849,
    'PubMed Abstracts': 4965,
    'Gutenberg (PG-19)': 2712,
    'BookCorpus2': 2550,
    'Books3': 2432,
    'StackExchange': 1753,
    'PhilPapers': 1560,
    'YoutubeSubtitles': 1187,
    'OpenSubtitles': 1015,
    'ArXiv': 610,
    'NIH ExPorter': 476,
    'Enron Emails': 439,
    'EuroParl': 419,
    'Github': 390,
    'HackerNews': 259
}
```
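The distribution above can be re-derived from the `metadata` field of each example. A minimal sketch, assuming `metadata["pile_set_name"]` holds the source subset name (the schema declares it as a sequence, so it may arrive as a one-element list):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("PatrickHaller/pile-100M-words")

counts = Counter()
for row in ds["train"]:
    name = row["metadata"]["pile_set_name"]
    # Unwrap if the field is stored as a (single-element) sequence.
    if isinstance(name, list):
        name = name[0]
    counts[name] += 1

print(counts.most_common())
```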
The dataset contains ~100M words of text (counting whitespace-separated tokens), which can be verified with:
```python
from datasets import load_dataset

ds = load_dataset("PatrickHaller/pile-100M-words")

count = 0
for row in ds["train"]:
    count += len(row["text"].split(" "))

print(count)
# Out: 99999861
```