---
language:
- en
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- question-answering
- visual-question-answering
- multiple-choice
pretty_name: MRAG-Bench
dataset_info:
  features:
  - name: id
    dtype: string
  - name: aspect
    dtype: string
  - name: scenario
    dtype: string
  - name: image
    dtype: image
  - name: gt_images
    sequence: image
  - name: question
    dtype: string
  - name: A
    dtype: string
  - name: B
    dtype: string
  - name: C
    dtype: string
  - name: D
    dtype: string
  - name: answer_choice
    dtype: string
  - name: answer
    dtype: string
  - name: image_type
    dtype: string
  - name: source
    dtype: string
  - name: retrieved_images
    sequence: image
  splits:
  - name: test
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
# MRAG-Bench: Vision-Centric Evaluation for Retrieval-Augmented Multimodal Models
[**Homepage**](https://mragbench.github.io/) | [**Paper**](https://arxiv.org/abs/) | [**Evaluation**](https://github.com/mragbench/MRAG-Bench)
## Intro
MRAG-Bench consists of 16,130 images and 1,353 human-annotated multiple-choice questions across 9 distinct scenarios, providing a robust and systematic evaluation of Large Vision Language Models' (LVLMs') vision-centric multimodal retrieval-augmented generation (RAG) abilities.
<img src="https://gordonhu608.github.io/images/mragbench_teaser.png" width="1000" />
## Results
We evaluate 10 open-source and 4 proprietary LVLMs, and the results show that all LVLMs achieve larger improvements when augmented with images than with textual knowledge. Notably, even the top-performing model, GPT-4o, struggles to leverage retrieved knowledge effectively, improving by only 5.82% with ground-truth information, compared to a 33.16% improvement observed in human participants. These findings highlight the importance of MRAG-Bench in encouraging the community to enhance LVLMs' ability to utilize retrieved visual knowledge more effectively.
<img src="https://gordonhu608.github.io/images/mragbench_qual.png" width="800" />
## Load Dataset
The `data/` directory contains the full dataset annotations and images pre-loaded for processing with HF Datasets. It can be loaded as follows:
```python
from datasets import load_dataset
mrag_bench = load_dataset("uclanlp/MRAG-Bench")
```
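Once loaded, individual examples can be inspected directly. The snippet below is a minimal sketch, assuming the field names listed in the schema above and that image-typed fields are decoded to PIL images by HF Datasets; the index `0` is arbitrary.

```python
from datasets import load_dataset

# Load only the "test" split (the dataset's single split).
mrag_bench = load_dataset("uclanlp/MRAG-Bench", split="test")

example = mrag_bench[0]                   # an arbitrary example
print(example["question"])                # the multiple-choice question
print(example["answer_choice"])           # correct option letter, e.g. "A"
print(example["image"])                   # the query image (PIL image)
print(len(example["gt_images"]))          # ground-truth images
print(len(example["retrieved_images"]))   # CLIP-retrieved images
```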
## Dataset Description
The dataset contains the following fields:
| Field Name | Description |
| :--------- | :---------- |
| `id` | Unique identifier for the example |
| `aspect`| Aspect type for the example |
| `scenario` | The type of scenario associated with the entry |
| `image`| The image the question is asked about, stored in byte format |
| `gt_images`| A list of the top-5 ground-truth images |
| `question` | Question asked about the image |
| `A` | Choice A for the question |
| `B` | Choice B for the question |
| `C` | Choice C for the question |
| `D` | Choice D for the question |
|`answer_choice`| Correct choice identifier |
| `answer` | Correct answer to the question |
| `image_type`| Type of image object |
| `source`| Source of the image |
| `retrieved_images`| A list of the top-5 images retrieved by CLIP |
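
To illustrate how these fields fit together, the sketch below formats one example into a multiple-choice prompt and compares a model prediction against `answer_choice`. The prompt template and the `predict` placeholder are illustrative assumptions, not part of the dataset or the official evaluation code.

```python
from datasets import load_dataset

mrag_bench = load_dataset("uclanlp/MRAG-Bench", split="test")

def build_prompt(example):
    """Format a MRAG-Bench example as a multiple-choice prompt string."""
    return (
        f"{example['question']}\n"
        f"A. {example['A']}\n"
        f"B. {example['B']}\n"
        f"C. {example['C']}\n"
        f"D. {example['D']}\n"
        "Answer with the letter of the correct option."
    )

example = mrag_bench[0]
prompt = build_prompt(example)

# `predict` stands in for any LVLM call that takes the query image,
# the retrieved images, and the text prompt, and returns "A"/"B"/"C"/"D".
# prediction = predict(example["image"], example["retrieved_images"], prompt)
# correct = prediction.strip().upper() == example["answer_choice"]
```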
<br>
## Contact
* Wenbo Hu: [email protected]
## Citation
```
@article{hu2024mragbench,
title={MRAG-Bench: Vision-Centric Evaluation for Retrieval-Augmented Multimodal Models},
author={Hu, Wenbo and Gu, Jia-Chen and Dou, Zi-Yi and Fayyaz, Mohsen and Lu, Pan and Chang, Kai-Wei and Peng, Nanyun},
journal={arXiv preprint arXiv:24},
year={2024}
}
```