---
license: mit
---
# MS MARCO Distillation Scores for Translate-Distill
This repository contains [MS MARCO](https://microsoft.github.io/msmarco/) training
query-passage scores produced by the MonoT5 rerankers
[`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k) and
[`castorini/monot5-3b-msmarco-10k`](https://huggingface.co/castorini/monot5-3b-msmarco-10k).
Each training query is associated with the top-50 passages retrieved by the [ColBERTv2](https://arxiv.org/abs/2112.01488) model.
Files are gzip-compressed and follow the naming scheme `{teacher}-monot5-{msmarco, mmarco}-{qlang}{plang}.jsonl.gz`,
which indicates the teacher reranker that scored the `qlang` queries and `plang` passages from MS MARCO.
For languages other than English (eng), we use the translated text provided by mMARCO and [neuMARCO](https://ir-datasets.com/neumarco.html).
We additionally provide a Persian translation of the MS MARCO training queries, since it is not included in either neuMARCO or mMARCO.
The TSV file containing these translations is `msmarco.train.query.fas.tsv.gz`.
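For a quick sanity check outside of any training pipeline, the translated queries can be loaded with pandas. This is a minimal sketch that assumes the file is a two-column, tab-separated file of query id and translated query text; the column names below are illustrative, not part of the data.

```python
import pandas as pd

# Read the gzip-compressed TSV of Persian query translations.
# Column names are assumed for illustration: (query_id, query_text).
queries = pd.read_csv(
    "msmarco.train.query.fas.tsv.gz",
    sep="\t",
    compression="gzip",
    header=None,
    names=["query_id", "query_text"],
)
print(queries.head())
```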
## Usage
We recommend downloading the files and using them with the training script in the [PLAID-X](https://github.com/hltcoe/ColBERT-X/tree/plaid-x) codebase.
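If you only want to inspect a score file directly, it can be read as gzip-compressed JSONL with the standard library. This is a minimal sketch: the file name is a placeholder following the naming scheme above, and no particular record schema is assumed, so it simply prints the keys of the first record.

```python
import gzip
import json

# Placeholder name: substitute an actual file from this repository's file
# listing, following the {teacher}-monot5-{msmarco, mmarco}-{qlang}{plang}
# naming scheme described above.
score_file = "{teacher}-monot5-msmarco-engeng.jsonl.gz"

# Each line is one JSON record; inspect the first record's keys to see the
# schema before wiring the file into the PLAID-X training script.
with gzip.open(score_file, "rt", encoding="utf-8") as fh:
    first_record = json.loads(next(fh))

print(sorted(first_record.keys()))
```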
## Citation and BibTeX Info
Please cite the following paper if you use the scores.
```bibtex
@inproceedings{translate-distill,
  author = {Eugene Yang and Dawn Lawrie and James Mayfield and Douglas W. Oard and Scott Miller},
  title = {Translate-Distill: Learning Cross-Language Dense Retrieval by Translation and Distillation},
  booktitle = {Proceedings of the 46th European Conference on Information Retrieval (ECIR)},
  year = {2024},
  url = {https://arxiv.org/abs/2401.04810}
}
```