---
license: mit
---
The dataset used to train and evaluate ReT for multimodal information retrieval. The dataset is almost the same as the original M2KR, with a few modifications:
- we exclude any data from MSMARCO, as it does not contain query images;
- we add passage images to OVEN, InfoSeek, E-VQA, and OKVQA. Refer to the paper for more details.
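
Once the repository has been cloned (see the download instructions below), the annotation files can be loaded as JSON-lines datasets with the Hugging Face `datasets` library. A minimal sketch, assuming a local checkout named `ReT-M2KR` and using the one file spelled out in this card (`jsonl/rag/kb_infoseek50k.jsonl`, described in the RAG section below); any other jsonl annotation file in the repository loads the same way:

```python
from datasets import load_dataset

# Minimal sketch: load one annotation file from a local clone of this
# repository as a JSON-lines dataset and inspect its fields.
ds = load_dataset(
    "json",
    data_files="ReT-M2KR/jsonl/rag/kb_infoseek50k.jsonl",
    split="train",
)
print(ds.column_names)  # 'passage_image_path' should be among the fields
print(ds[0])
```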
## Sources
- Repository: https://github.com/aimagelab/ReT
- Paper: Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval (CVPR 2025)
## Download images
Initialize Git LFS:

```bash
git lfs install
```
Clone the repository (this will take a while, as the image archives are large):

```bash
git clone https://huggingface.co/datasets/aimagelab/ReT-M2KR
```
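
Alternatively, if you only need a specific annotation file, `huggingface_hub` can fetch it without cloning the whole repository. A minimal sketch, using the InfoSeek knowledge base file described below as an example:

```python
from huggingface_hub import hf_hub_download

# Download a single file from the dataset repository into the local cache.
path = hf_hub_download(
    repo_id="aimagelab/ReT-M2KR",
    repo_type="dataset",
    filename="jsonl/rag/kb_infoseek50k.jsonl",
)
print(path)  # local path of the cached file
```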
Decompress the images (this will also take a while):

```bash
cat ret-img-{000..129}.tar.gz | tar xzf -
```
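
For environments without bash brace expansion, a Python equivalent is sketched below. It assumes, as the `cat` pipeline above implies, that the shards form one continuous tar.gz stream once concatenated, and it streams them without writing an intermediate combined archive:

```python
import glob
import io
import tarfile


class ConcatStream(io.RawIOBase):
    """Read a sequence of files as one continuous binary stream."""

    def __init__(self, paths):
        self._paths = iter(paths)
        self._file = open(next(self._paths), "rb")

    def readable(self):
        return True

    def readinto(self, buffer):
        while True:
            n = self._file.readinto(buffer)
            if n:  # got data from the current shard
                return n
            self._file.close()
            try:  # current shard exhausted: move on to the next one
                self._file = open(next(self._paths), "rb")
            except StopIteration:
                return 0  # all shards consumed: signal EOF


shards = sorted(glob.glob("ret-img-*.tar.gz"))
stream = io.BufferedReader(ConcatStream(shards))
with tarfile.open(fileobj=stream, mode="r|gz") as tar:
    tar.extractall(".")
```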
## RAG - InfoSeek
The file `jsonl/rag/kb_infoseek50k.jsonl` is the knowledge base used for the Retrieval-Augmented Generation experiments on the InfoSeek benchmark. The field `passage_image_path` contains a relative path to the Wikipedia image associated with a given passage. The Wikipedia images can be downloaded from the OVEN repository.
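
A minimal sketch of how the knowledge base can be consumed, assuming the Wikipedia images have been downloaded from the OVEN repository into a local directory (`oven_images/` below is a placeholder for your own layout); only the `passage_image_path` field is documented above, so any other fields are left untouched:

```python
import json
from pathlib import Path

from PIL import Image

KB_PATH = Path("jsonl/rag/kb_infoseek50k.jsonl")
OVEN_IMAGE_ROOT = Path("oven_images")  # placeholder: wherever the OVEN images live

with KB_PATH.open() as f:
    for line in f:
        passage = json.loads(line)  # one JSON object per line
        rel_path = passage.get("passage_image_path")
        if rel_path is None:
            continue
        image = Image.open(OVEN_IMAGE_ROOT / rel_path)
        # ... pair `image` with the passage text for retrieval-augmented generation
```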
## Citation

**BibTeX:**
```bibtex
@inproceedings{caffagni2025recurrence,
  title={{Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval}},
  author={Caffagni, Davide and Sarto, Sara and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```