# pile-hackernews
---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: meta
      struct:
        - name: pile_set_name
          dtype: string
  splits:
    - name: train
      num_bytes: 500518158
      num_examples: 100000
  download_size: 315468008
  dataset_size: 500518158
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

## Dataset Creation Process

These subsets were created by streaming over the rows of monology/pile-uncopyrighted and filtering on the `meta` column. Each subset keeps at most the first 100,000 qualifying rows encountered.
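The filtering step above can be sketched as follows. This is a minimal illustration, not the exact script used to build the dataset: `mock_stream` is a stand-in for the streamed rows of monology/pile-uncopyrighted, and the names `TARGET_SET` and `LIMIT` are illustrative.

```python
from itertools import islice

# Stand-in for the streamed source rows; in practice this would be
# load_dataset("monology/pile-uncopyrighted", split="train", streaming=True).
def mock_stream():
    pile_sets = ["HackerNews", "Pile-CC", "HackerNews", "Github"]
    for i, name in enumerate(pile_sets * 3):
        yield {"text": f"example {i}", "meta": {"pile_set_name": name}}

TARGET_SET = "HackerNews"  # value of meta.pile_set_name to keep
LIMIT = 100_000            # cap on qualifying rows per subset

# Keep at most LIMIT rows whose meta matches the target subset.
subset = list(
    islice(
        (row for row in mock_stream()
         if row["meta"]["pile_set_name"] == TARGET_SET),
        LIMIT,
    )
)
print(len(subset))  # 6 matching rows in the mock stream
```

Streaming matters here: the full Pile source is hundreds of gigabytes, so iterating lazily and stopping at the cap avoids downloading or materializing the whole dataset.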

## Citations

If you use this dataset, please cite the original Pile papers:

```bibtex
@article{gao2020pile,
  title={The {Pile}: An 800{GB} dataset of diverse text for language modeling},
  author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
  journal={arXiv preprint arXiv:2101.00027},
  year={2020}
}

@article{biderman2022datasheet,
  title={Datasheet for the {Pile}},
  author={Biderman, Stella and Bicheno, Kieran and Gao, Leo},
  journal={arXiv preprint arXiv:2201.07311},
  year={2022}
}
```