---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: contents
    dtype: string
  - name: title
    dtype: string
  - name: wikipedia_id
    dtype: string
  splits:
  - name: train
    num_bytes: 18038881943
    num_examples: 35678076
  download_size: 10150820540
  dataset_size: 18038881943
language:
- en
---
# KILT Corpus
This dataset contains approximately 36 million (35,678,076) Wikipedia passages introduced in the paper [Multi-task retrieval for knowledge-intensive tasks](https://arxiv.org/abs/2101.00117). It is also the retrieval corpus used in [Chain-of-Retrieval Augmented Generation](https://arxiv.org/abs/2501.14342).
## Fields
- `id`: A unique identifier for each passage.
- `title`: The title of the Wikipedia page from which the passage originates.
- `contents`: The textual content of the passage.
- `wikipedia_id`: The unique identifier for the Wikipedia page, used for KILT evaluation.
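
For illustration, each row is a flat record with these four string fields. The values below are hypothetical, shown only to make the shape concrete:

```python
# A hypothetical record matching the schema above;
# these values are made up, not actual rows from the corpus.
example = {
    "id": "12345",
    "title": "Aristotle",
    "contents": "Aristotle was an Ancient Greek philosopher and polymath ...",
    "wikipedia_id": "308",  # page ID shared by all passages from the same Wikipedia page
}
```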
## How to Load the Dataset
You can easily load this dataset using the `datasets` library from Hugging Face. Make sure you have the library installed (`pip install datasets`):
```python
from datasets import load_dataset

ds = load_dataset('corag/kilt-corpus', split='train')

# Inspect the dataset structure and the first example:
print(ds)
print(ds[0])
```
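
The train split is large (35,678,076 rows, roughly a 10 GB download and 18 GB on disk), so if you only need to iterate over passages, streaming avoids materializing the full corpus locally. A minimal sketch using the standard `streaming=True` option of `load_dataset`:

```python
from itertools import islice

from datasets import load_dataset

# Stream the corpus record by record instead of downloading it in full.
ds = load_dataset('corag/kilt-corpus', split='train', streaming=True)

# Peek at the first three passages.
for example in islice(ds, 3):
    print(example['id'], example['title'])
```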
## References
```bibtex
@article{maillard2021multi,
  title={Multi-task retrieval for knowledge-intensive tasks},
  author={Maillard, Jean and Karpukhin, Vladimir and Petroni, Fabio and Yih, Wen-tau and O{\u{g}}uz, Barlas and Stoyanov, Veselin and Ghosh, Gargi},
  journal={arXiv preprint arXiv:2101.00117},
  year={2021}
}

@article{wang2025chain,
  title={Chain-of-Retrieval Augmented Generation},
  author={Wang, Liang and Chen, Haonan and Yang, Nan and Huang, Xiaolong and Dou, Zhicheng and Wei, Furu},
  journal={arXiv preprint arXiv:2501.14342},
  year={2025}
}
```