---
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
size_categories:
- 1K<n<10K
configs:
- config_name: testmini
data_files:
- split: testmini
path: "testmini.parquet"
- config_name: testmini_version_split
data_files:
- split: text_lite
path: "testmini_text_lite.parquet"
- split: text_dominant
path: "testmini_text_dominant.parquet"
- split: vision_dominant
path: "testmini_vision_dominant.parquet"
- split: vision_intensive
path: "testmini_vision_intensive.parquet"
- split: vision_only
path: "testmini_vision_only.parquet"
- config_name: testmini_text_only
data_files:
- split: text_only
path: "testmini_text_only.parquet"
dataset_info:
- config_name: testmini
features:
- name: sample_index
dtype: string
- name: problem_index
dtype: string
- name: problem_version
dtype: string
- name: question
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: question_type
dtype: string
- name: metadata
struct:
- name: split
dtype: string
- name: source
dtype: string
- name: subject
dtype: string
- name: subfield
dtype: string
- name: query_wo
dtype: string
- name: query_cot
dtype: string
splits:
- name: testmini
num_bytes: 166789963
num_examples: 3940
- config_name: testmini_version_split
features:
- name: sample_index
dtype: string
- name: problem_index
dtype: string
- name: problem_version
dtype: string
- name: question
dtype: string
- name: image
dtype: image
- name: answer
dtype: string
- name: question_type
dtype: string
- name: metadata
struct:
- name: split
dtype: string
- name: source
dtype: string
- name: subject
dtype: string
- name: subfield
dtype: string
- name: query_wo
dtype: string
- name: query_cot
dtype: string
splits:
- name: text_lite
num_examples: 788
- name: text_dominant
num_examples: 788
- name: vision_dominant
num_examples: 788
- name: vision_intensive
num_examples: 788
- name: vision_only
num_examples: 788
- config_name: testmini_text_only
features:
- name: sample_index
dtype: string
- name: problem_index
dtype: string
- name: problem_version
dtype: string
- name: question
dtype: string
- name: image
dtype: string
- name: answer
dtype: string
- name: question_type
dtype: string
- name: metadata
struct:
- name: split
dtype: string
- name: source
dtype: string
- name: subject
dtype: string
- name: subfield
dtype: string
- name: query_wo
dtype: string
- name: query_cot
dtype: string
splits:
- name: text_only
num_bytes: 250959
num_examples: 788
---
# Dataset Card for MathVerse
This is the version of MathVerse used by [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval). It shares the same data as the [official dataset](https://huggingface.co/datasets/AI4Math/MathVerse?row=3).
- [Dataset Description](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#dataset-description)
- [Paper Information](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#paper-information)
- [Dataset Examples](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#dataset-examples)
- [Leaderboard](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#leaderboard)
- [Citation](https://huggingface.co/datasets/AI4Math/MathVerse/blob/main/README.md#citation)
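The card above defines three configs (`testmini`, `testmini_version_split`, and `testmini_text_only`), so the data can be loaded directly with 🤗 `datasets`. A minimal sketch, assuming this repository is hosted under the `lmms-lab/MathVerse` id (adjust the id to wherever you are loading it from):

```python
from datasets import load_dataset

# Full testmini config: 3,940 samples (788 problems x 5 multimodal versions).
testmini = load_dataset("lmms-lab/MathVerse", "testmini", split="testmini")

# One of the five per-version splits, 788 samples each.
vision_only = load_dataset(
    "lmms-lab/MathVerse", "testmini_version_split", split="vision_only"
)

# Text-only version; note its `image` field is a plain string, not an image.
text_only = load_dataset("lmms-lab/MathVerse", "testmini_text_only", split="text_only")

print(testmini[0]["question"], testmini[0]["answer"])
```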
## Dataset Description
The capabilities of **Multi-modal Large Language Models (MLLMs)** in **visual math problem-solving** remain insufficiently evaluated and understood. We find that current benchmarks incorporate excessive visual content within their textual questions, which may assist MLLMs in deducing answers without truly interpreting the input diagrams.
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig1.png" width="90%"> <br>
</p>
To this end, we introduce **MathVerse**, an all-around visual math benchmark designed for an equitable and in-depth evaluation of MLLMs. We meticulously collect 2,612 high-quality, multi-subject math problems with diagrams from publicly available sources. Human annotators then transform each problem into **six distinct versions**, each offering a different degree of information content across modalities, yielding **15K** test samples in total (2,612 problems × 6 versions). This approach allows MathVerse to comprehensively assess ***whether and how much MLLMs can truly understand the visual diagrams for mathematical reasoning.***
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig2.png" width="90%"> <br>
Six different versions of each problem in <b>MathVerse</b> transformed by expert annotators.
</p>
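The version structure is easy to verify from the data itself: grouping the `testmini` config by `problem_version` shows that each of the five multimodal versions covers the same 788 underlying problems (the sixth, text-only version lives in its own config). A minimal sketch, again assuming the `lmms-lab/MathVerse` repository id:

```python
from collections import Counter

from datasets import load_dataset

testmini = load_dataset("lmms-lab/MathVerse", "testmini", split="testmini")

# Each of the five multimodal versions should appear 788 times.
print(Counter(testmini["problem_version"]))

# Every version should cover the same 788 underlying problems.
by_version = {}
for version, problem in zip(testmini["problem_version"], testmini["problem_index"]):
    by_version.setdefault(version, set()).add(problem)
assert len({frozenset(p) for p in by_version.values()}) == 1
```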
In addition, we propose a **Chain-of-Thought (CoT) Evaluation strategy** for a fine-grained assessment of the output answers. Rather than naively judging True or False, we employ GPT-4(V) to adaptively extract the crucial reasoning steps, and then score each step with detailed error analysis, revealing the quality of MLLMs' intermediate CoT reasoning.
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/fig3.png" width="90%"> <br>
The two phases of the CoT evaluation strategy.
</p>
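The exact prompts are defined in the paper and the [code repository](https://github.com/ZrrSkywalker/MathVerse); the following is only a minimal sketch of the two-phase structure (key-step extraction, then per-step scoring), where `judge` is a hypothetical placeholder for a call to whichever judge model (e.g., GPT-4(V)) you use:

```python
def judge(prompt: str) -> str:
    """Placeholder for a call to the judge model. Replace with a real
    API call; this stub only marks where that call goes."""
    raise NotImplementedError

def cot_evaluate(question: str, model_output: str, answer: str) -> float:
    # Phase 1: extract the crucial reasoning steps from the raw output.
    steps_text = judge(
        f"Extract the key reasoning steps, one per line, from:\n{model_output}"
    )
    steps = [s for s in steps_text.splitlines() if s.strip()]

    # Phase 2: score each step (1 = correct, 0 = wrong), then average into
    # a fine-grained score instead of a single True/False verdict.
    scores = []
    for step in steps:
        verdict = judge(
            f"Question: {question}\nGround truth: {answer}\n"
            f"Is this reasoning step correct? Answer 1 or 0.\nStep: {step}"
        )
        scores.append(1.0 if verdict.strip().startswith("1") else 0.0)
    return sum(scores) / len(scores) if scores else 0.0
```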
## Paper Information
- Code: https://github.com/ZrrSkywalker/MathVerse
- Project: https://mathverse-cuhk.github.io/
- Visualization: https://mathverse-cuhk.github.io/#visualization
- Leaderboard: https://mathverse-cuhk.github.io/#leaderboard
- Paper: https://arxiv.org/abs/2403.14624
## Dataset Examples
🖱 Click to expand the examples for the six problem versions within three subjects.
<details>
<summary>🔍 Plane Geometry</summary>
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver1.png" width="50%"> <br>
</p>
</details>
<details>
<summary>🔍 Solid Geometry</summary>
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver2.png" width="50%"> <br>
</p>
</details>
<details>
<summary>🔍 Functions</summary>
<p align="center">
<img src="https://raw.githubusercontent.com/ZrrSkywalker/MathVerse/main/figs/ver3.png" width="50%"> <br>
</p>
</details>
## Leaderboard
### Contributing to the Leaderboard
🚨 The [Leaderboard](https://mathverse-cuhk.github.io/#leaderboard) is continuously being updated.
The evaluation instructions and tools will be released soon. For now, please send your results on the ***testmini*** set to [email protected]. Please refer to the following template to prepare your result JSON file.
- [output_testmini_template.json]()
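Since the template link above is not yet populated, the exact schema cannot be confirmed here. Purely as an illustration, a submission file would pair each `sample_index` from *testmini* with your model's output; the `model_answer` field name below is hypothetical:

```python
import json

# Illustrative only: field names beyond `sample_index` are hypothetical
# until the official template is published.
results = [
    {"sample_index": "1", "model_answer": "C"},
    # ... one entry per testmini sample
]
with open("output_testmini.json", "w") as f:
    json.dump(results, f, indent=2)
```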
## Citation
If you find **MathVerse** useful for your research and applications, please kindly cite using this BibTeX:
```latex
@inproceedings{zhang2024mathverse,
  title={MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?},
  author={Renrui Zhang and Dongzhi Jiang and Yichi Zhang and Haokun Lin and Ziyu Guo and Pengshuo Qiu and Aojun Zhou and Pan Lu and Kai-Wei Chang and Peng Gao and Hongsheng Li},
  booktitle={arXiv},
  year={2024}
}
```