---
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - train_0.jsonl
    - train_1.jsonl
task_categories:
- question-answering
- feature-extraction
tags:
- InformationRetrieval
- MSMARCO
- IR
size_categories:
- 100K<n<1M
---
The MSMARCO training data used to train the models from *Hypencoder: Hypernetworks for Information Retrieval*.
## Dataset Overview
This dataset is based on the MSMARCO Passage dataset and includes every query that has a positive passage in the original dataset (the additional queries with no positive passage are not used). Each query comes with its known positive passage as well as 200 additional passages, which may be unlabeled positives or negatives. Every associated passage has a score from `cross-encoder/ms-marco-MiniLM-L-12-v2`; these scores were used for distillation training. The passages are ordered by teacher score.
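For example, the two JSONL shards listed in the YAML header can be loaded with the Hugging Face `datasets` library (a minimal sketch; it assumes the files have been downloaded to the working directory):

```python
from datasets import load_dataset

# Load both JSONL shards of the train split; the file names come from
# the YAML header above.
dataset = load_dataset(
    "json",
    data_files=["train_0.jsonl", "train_1.jsonl"],
    split="train",
)

example = dataset[0]
print(example["query"]["content"])  # query text
print(len(example["items"]))        # associated passages, ordered by teacher score
```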
## Dataset Construction
The 200 associated passages for each query were found by retrieving the top 800 passages with an early version of Hypencoder and then (1) keeping the top 100 passages and (2) sampling another 100 uniformly at random from the bottom 700. Additionally, the known relevant passage provided by MSMARCO is added.
All query-passage pairs were then scored with `cross-encoder/ms-marco-MiniLM-L-12-v2` to obtain the teacher scores.
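Put together, the construction looks roughly like the sketch below. `retrieve_top_k` is a hypothetical stand-in for the early Hypencoder retriever, which is not part of this dataset; the teacher scoring uses the `sentence-transformers` cross-encoder API.

```python
import random
from sentence_transformers import CrossEncoder

teacher = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-12-v2")

def build_items(query, given_positive, retrieve_top_k):
    # Retrieve a candidate pool of 800 passages with the early Hypencoder model.
    candidates = retrieve_top_k(query, k=800)
    # (1) Keep the top 100 candidates and (2) sample another 100
    # uniformly at random from the bottom 700.
    passages = candidates[:100] + random.sample(candidates[100:], 100)
    # Add the known relevant passage provided by MSMARCO.
    passages.append(given_positive)
    # Score every query-passage pair with the teacher cross-encoder ...
    scores = teacher.predict([(query, p) for p in passages])
    # ... and order the passages by teacher score, highest first.
    return sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
```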
## Dataset Structure
```
{
  "query": {
    "id": <query ID, the same as the one used in MSMARCO>,
    "content": <query text>,
    "tokenized_content": <token IDs for bert-base-uncased>,
  },
  "items": [
    {
      "id": <passage ID, the same as the one used in MSMARCO>,
      "content": <passage text>,
      "tokened_content": <token IDs for bert-base-uncased>,
      "score": <teacher score from cross-encoder/ms-marco-MiniLM-L-12-v2>,
      "type": <"given" if the passage is the positive provided by MSMARCO, otherwise null>,
    },
    ...
  ]
}
```
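As a usage example, the records can be turned into listwise distillation targets. The sketch below is an illustration, not the exact Hypencoder training loss: it softmaxes the teacher scores and takes the KL divergence to the student's score distribution, and shows how the `type` field identifies the given positive. It reuses `dataset` from the loading sketch above.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_scores, teacher_scores, temperature=1.0):
    """KL divergence between student and teacher score distributions.

    Both tensors have shape (batch_size, num_items), with items in the
    same order as record["items"].
    """
    student_log_probs = F.log_softmax(student_scores / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_scores / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

record = dataset[0]
teacher_scores = torch.tensor([item["score"] for item in record["items"]])
given_positive = next(item for item in record["items"] if item["type"] == "given")
```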
## Cite
If you use this dataset, please cite our paper:
```bibtex
@misc{killingback2025hypencoderhypernetworksinformationretrieval,
  title={Hypencoder: Hypernetworks for Information Retrieval},
  author={Julian Killingback and Hansi Zeng and Hamed Zamani},
  year={2025},
  eprint={2502.05364},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2502.05364},
}
```
and the original MSMARCO paper:
```bibtex
@inproceedings{MSMARCO,
author = {Tri Nguyen and
Mir Rosenberg and
Xia Song and
Jianfeng Gao and
Saurabh Tiwary and
Rangan Majumder and
Li Deng},
editor = {Tarek Richard Besold and
Antoine Bordes and
Artur S. d'Avila Garcez and
Greg Wayne},
title = {{MS} {MARCO:} {A} Human Generated MAchine Reading COmprehension Dataset},
booktitle = {Proceedings of the Workshop on Cognitive Computation: Integrating
neural and symbolic approaches 2016 co-located with the 30th Annual
Conference on Neural Information Processing Systems {(NIPS} 2016),
Barcelona, Spain, December 9, 2016},
series = {{CEUR} Workshop Proceedings},
volume = {1773},
publisher = {CEUR-WS.org},
year = {2016},
url = {https://ceur-ws.org/Vol-1773/CoCoNIPS\_2016\_paper9.pdf},
timestamp = {Thu, 11 Apr 2024 13:33:56 +0200},
biburl = {https://dblp.org/rec/conf/nips/NguyenRSGTMD16.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```