Commit 03b32a4 (verified) · dcaffo committed · 1 Parent(s): 5c38849

Update README.md

Files changed (1): README.md (+43 -3)
---
license: mit
---

The dataset used to train and evaluate [ReT](https://www.arxiv.org/abs/2503.01980) for multimodal information retrieval. The dataset is almost identical to the original M2KR, with a few modifications:
- we exclude any data from MSMARCO, as it does not contain query images;
- we add passage images to OVEN, InfoSeek, E-VQA, and OKVQA. Refer to the paper for more details.
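If you only need the annotation files and not the image archives, a minimal sketch using `huggingface-cli` (assuming a recent `huggingface_hub`; the `--exclude` pattern matches the image tarball names used in the download steps below):
```
# Fetch the dataset repo, skipping the 130 image tarballs
huggingface-cli download aimagelab/ReT-M2KR \
  --repo-type dataset \
  --exclude "ret-img-*.tar.gz" \
  --local-dir ReT-M2KR
```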

## Sources
- **Repository:** https://github.com/aimagelab/ReT
- **Paper:** [Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval](https://www.arxiv.org/abs/2503.01980) (CVPR 2025)

## Download images
1. Initialize git LFS
```
git lfs install
```
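Optionally, confirm that LFS is active before cloning (`git lfs version` is part of the standard git-lfs CLI):
```
git lfs version
```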

2. Clone the repository (this will take a while)
```
git clone https://huggingface.co/datasets/aimagelab/ReT-M2KR
```
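If you prefer to defer the heavy LFS downloads, a minimal sketch using standard git-lfs features (the archive name below is just one of the tarballs from step 3):
```
# Clone pointer files only, without downloading LFS payloads
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/aimagelab/ReT-M2KR
cd ReT-M2KR
# Later, fetch specific image archives on demand
git lfs pull --include "ret-img-000.tar.gz"
```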

3. Decompress images (this will also take a while)
```
cat ret-img-{000..129}.tar.gz | tar xzf -
```
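If the sequential pipe above is too slow, extraction can be parallelized across archives; a sketch assuming GNU `xargs` (tune `-P` to your core count):
```
# Extract each tarball in its own tar process, 8 at a time
printf '%s\n' ret-img-{000..129}.tar.gz | xargs -P 8 -n 1 tar xzf
```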

## Citation

**BibTeX:**
```
@inproceedings{caffagni2025recurrence,
  title={{Recurrence-Enhanced Vision-and-Language Transformers for Robust Multimodal Document Retrieval}},
  author={Caffagni, Davide and Sarto, Sara and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```