MMDocRAG

AI & ML interests

Retrieval, MLLM, VLM, LLM


MMDocRAG Overview

MMDocRAG is built for (i) multimodal document retrieval and (ii) retrieval-augmented multimodal generation:


MMDocIR: Benchmarking Multi-Modal Retrieval for Long Documents

Kuicai Dong* · Yujing Chang* · Derrick Xin Deik Goh* · Dexun Li · Ruiming Tang · Yong Liu

📖Paper | 🏠Homepage | 🤗Huggingface | 👉Github

Multimodal document retrieval aims to identify and retrieve various forms of multimodal content, such as figures, tables, charts, and layout information from extensive documents. Despite its increasing popularity, there is a notable lack of a comprehensive and robust benchmark to effectively evaluate the performance of systems in such tasks. To address this gap, this work introduces a new benchmark, named MMDocIR, that encompasses two distinct tasks: page-level and layout-level retrieval. The former evaluates the performance of identifying the most relevant pages within a long document, while the latter assesses the ability to detect specific layouts, providing a more fine-grained measure than whole-page analysis. A layout refers to a variety of elements, including textual paragraphs, equations, figures, tables, or charts. The MMDocIR benchmark comprises a rich dataset featuring 1,685 questions annotated by experts and 173,843 questions with bootstrapped labels, making it a valuable resource for multimodal document retrieval in both training and evaluation. Through rigorous experiments, we demonstrate that (i) visual retrievers significantly outperform their text counterparts, (ii) the MMDocIR training set effectively enhances the performance of multimodal document retrieval, and (iii) text retrievers leveraging VLM-text significantly outperform retrievers relying on OCR-text.
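
To make the page-level task concrete, below is a minimal sketch of how a retriever could be scored against gold evidence pages with recall@k. This is a generic illustration, not MMDocIR's official evaluation code, and the example page ids are hypothetical.

from typing import List

def recall_at_k(ranked_pages: List[int], gold_pages: List[int], k: int = 5) -> float:
    """Fraction of a question's gold evidence pages found among the top-k retrieved pages."""
    if not gold_pages:
        return 0.0
    top_k = set(ranked_pages[:k])
    return sum(1 for page in gold_pages if page in top_k) / len(gold_pages)

# Hypothetical example: a question whose evidence lives on pages 3 and 7 of a long document.
print(recall_at_k(ranked_pages=[7, 12, 3, 1, 9], gold_pages=[3, 7], k=5))  # -> 1.0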



MMDocRAG: Benchmarking Retrieval-Augmented Multimodal Generation for Document Question Answering

Kuicai Dong* · Yujing Chang* · Shijie Huang · Yasheng Wang · Ruiming Tang · Yong Liu

📖Paper | 🏠Homepage | 🤗Huggingface | 👉Github

Document Visual Question Answering (DocVQA) faces dual challenges in processing lengthy multimodal documents (text, images, tables) and performing cross-modal reasoning. Current document retrieval-augmented generation (DocRAG) methods remain limited by their text-centric approaches, frequently missing critical visual information. The field also lacks robust benchmarks for assessing multimodal evidence integration and selection. We introduce MMDocRAG, a comprehensive benchmark featuring 4,055 expert-annotated QA pairs with multi-page, cross-modal evidence chains. Our framework introduces innovative metrics for evaluating multimodal quote selection and enables answers that combine text with relevant visual elements. Through large-scale experiments with 60 language/vision models and 14 retrieval systems, we identify persistent challenges in multimodal evidence handling. Key findings reveal that proprietary vision-language models show moderate advantages over text-only models, while open-source alternatives trail significantly. Notably, fine-tuned LLMs achieve substantial improvements when using detailed image descriptions. MMDocRAG establishes a rigorous testing ground and provides actionable insights for developing more robust multimodal DocVQA systems.
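
For readers who want to explore the benchmark directly, a minimal sketch for loading the MMDocRAG data from the Hugging Face Hub is shown below. The split name is an assumption; consult the dataset card for the actual splits and field names.

from datasets import load_dataset

# Load the MMDocRAG benchmark from the Hugging Face Hub.
# NOTE: the split name "train" is an assumption; check the dataset card for the splits it actually provides.
mmdocrag = load_dataset("MMDocIR/MMDocRAG", split="train")

print(mmdocrag)      # number of rows and column names
print(mmdocrag[0])   # inspect one QA record to see the actual field names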


Citation

If you use any datasets from this organization in your research, please cite the corresponding papers as follows:

@misc{dong2025mmdocirbenchmarkingmultimodalretrieval,
      title={MMDocIR: Benchmarking Multi-Modal Retrieval for Long Documents}, 
      author={Kuicai Dong and Yujing Chang and Xin Deik Goh and Dexun Li and Ruiming Tang and Yong Liu},
      year={2025},
      eprint={2501.08828},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2501.08828}, 
}

@misc{dong2025benchmarkingretrievalaugmentedmultimodalgeneration,
      title={Benchmarking Retrieval-Augmented Multimodal Generation for Document Question Answering}, 
      author={Kuicai Dong and Yujing Chang and Shijie Huang and Yasheng Wang and Ruiming Tang and Yong Liu},
      year={2025},
      eprint={2505.16470},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2505.16470}, 
}